When AI Gets It Wrong!

Hi, this is Wayne again with the topic "When AI Gets It Wrong!"
( bright jingle ) Whoever made the tweet asking it how many LTT backpacks would fit in the trunk of a Tesla, or whatever the question was... Oh, I didn't see that. That's hilarious. Someone made that tweet on the LTT handle. Oh, that's really funny. And it did it. It looked up the dimensions of the LTT backpack. It looked up the dimensions... Shut up.

...of the trunk, and it figured it out. How the ( beep ) did it do that? Ask it. Let's do it live, because I thought the dimensions for the backpack are in picture form. [Luke] Searching. Searching for that.

Now it's searching for backpack dimensions. Shut up. Shut up. Look at this: "...have different shapes and dimensions. Based on some rough estimates, I will try to answer it." That's insane. That's actually nuts. "Based on some videos of the Model Y trunk..." Shut up. "It can fit about five to seven standard carry-on suitcases, which have similar dimensions and capacity."
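For what it's worth, the rough estimate Bing is doing here boils down to a few lines of arithmetic. Below is a hypothetical sketch; the dimensions are made-up placeholders (not real backpack or trunk specs), and a naive grid count ignores rotating items or squishing a soft bag.

```python
def naive_fit(container_cm, item_cm):
    """Grid-pack estimate: items that fit along each axis, multiplied together.

    Ignores rotations and soft-bag squish, so it's a conservative count.
    """
    count = 1
    for c, i in zip(container_cm, item_cm):
        count *= c // i
    return count

# Placeholder dimensions (length x width x height, cm) -- NOT real specs.
BACKPACK = (50, 30, 20)
TRUNK = (100, 90, 40)

print(naive_fit(TRUNK, BACKPACK))  # → 12 with these made-up numbers
```

The chatbot is presumably doing something looser than this (comparing against carry-on suitcases it has seen in videos), which is exactly why the number it gives should be treated as an estimate, not a spec.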

Holy... Which is accurate. That statement is real. ( beep ) That's crazy! Look at this! How did this happen? How did Bing, no offense Microsoft, but how did Bing just beat Google to the punch so dramatically at something that's so important and so core to their business? Well, there's actually a really good reason for it. So AI has been blowing up lately, both in the news and in real-life applications across a ton of industries.

So, you know, years ago it was only in relatively small things, like helping doctors detect cancer early using advanced pattern recognition, and then a little bit more over the years with things like autonomous vehicles. But now AI is everywhere. It's creating whole original pieces of art. It's holding conversations with humans all over the place.

It seems like we've just arrived at the beginning of the AI age. There's this chart that keeps popping up that hits extra hard, which is the time to reach 100 million users. And you can see the faster and faster adoption curves with these increasingly disruptive new technologies. So the telephone took 75 years to hit this 100 million milestone. Then the mobile phone reached the same mark in just 16 years. Netflix took only 10 years, and Twitter took six, and Gmail only took five. Facebook in 48-ish months was absolutely massive. Instagram hit it in just 30 months.

Now TikTok, which we view as this gigantic existential threat: nine months to 100 million users.

ChatGPT: two months. I mean, looking at numbers like that, I buy it. It seems almost obvious that we're clearly on the precipice of something really, really big that's gonna change everything. So seeing Microsoft at the forefront of it with this new Bing shouldn't really be a surprise. I mean, people are already talking to these chatbots and asking them all sorts of questions. So it sort of feels natural having this chatbot act as your co-pilot for the web, alongside search, instead of just a traditional search engine full of links. Like, that sounds pretty sick. But there is one thing that's gonna follow this conversational AI thing everywhere it goes, everywhere you see it, which is that sometimes it's just wrong. Sometimes it just says things that aren't true, because fundamentally the AI doesn't know if it's telling the truth or not.

It doesn't understand that; that's not part of the model. What we're seeing is it taking our inputs and then creating outputs based on related words that are most likely to go together. It's not forming a sentence like humans do; it's generating a new sentence. And so, adding it to a search engine like Bing, it's scraping all these relevant links and information and synthesizing new sentences just based on how it thinks things should be pieced together. It's not sentient. It doesn't understand what it's saying, and so it's definitely not fact-checking itself. So we have to keep that in the back of our mind through all of this, right?
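The "most likely words to go together" idea can be sketched in a few lines. This is a toy illustration only: real chatbots use large neural networks over tokens, and this hand-written word table is a hypothetical stand-in. The key point it shows is that nothing in the generation loop ever checks whether the output is true.

```python
import random

# Hand-made table of "which word tends to follow which" -- purely illustrative.
BIGRAMS = {
    "the": {"cheetah": 0.6, "trunk": 0.4},
    "cheetah": {"runs": 0.7, "lives": 0.3},
    "runs": {"fast": 1.0},
    "lives": {"alone": 1.0},
}

def next_word(word, rng):
    """Pick a likely next word; nothing here checks whether it's *true*."""
    words = list(BIGRAMS[word])
    weights = [BIGRAMS[word][w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, max_len, seed=0):
    """Chain next-word predictions until we run out of continuations."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_len and out[-1] in BIGRAMS:
        out.append(next_word(out[-1], rng))
    return " ".join(out)

print(generate("the", 4))  # e.g. "the cheetah runs fast"
```

Scale that loop up by billions of parameters and you get fluent, confident sentences, still produced by the same truth-blind process.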

Every time you see a headline. So it's really interesting with these search engines, right? Because on one hand you have Bing, who has everything to gain, and on the other hand you have Google, who has everything to lose. I've had access to this new Bing for a little bit. It's a limited preview before they push it live to the rest of the world. I've just been playing around with it.

Basically, it adds this chat experience alongside regular Bing. It's essentially the same experience as talking to ChatGPT, but instead of being limited to a fixed dataset that cuts off at 2021, it'll pull from the entire current web that Bing can scrape from. So, like I said, you can type in a question, flip it over to chat, and it'll give you a sort of nicely written summary that's synthesized based on what it finds for similar queries.

So if I ask it something kind of simple, like what's the average lifespan of a cheetah in the wild, it gives me an answer, right? It gives me a convincing bunch of sentences. It actually gives me more information than I asked for. It tells me about cheetahs in captivity too, which makes it, you know, feel very convincing.

It also gives little footnotes and citations for some of its sources, and it gives links at the end if you want to dig in some more. It's really impressive, actually. It looks good to me. This is like a real product that's gonna ship all over the world, to people everywhere, in the next month or two, I think they said. But this could only come from Bing right now. Like, the more you use it, the natural language is super, super impressive. The fact that it gives me a convincing-sounding couple of sentences in a row and strings it together based on my input: super cool. But the more you use it,

the more you start to see these weird patterns and these habits and these shortcomings, again mostly in the fact that sometimes it's just gonna be wrong. A little game I like to play is to ask it a question you already know the answer to, and then read what it says and spot the error.

So I asked it right now, okay, what are the best smartphone cameras right now? And it gave me S23 Ultra, Pixel 7 Pro, and iPhone 14 Pro Max, with this nice little writeup with some specs for each. That's actually a pretty good list, but it is wrong about some of these numbers here. The S23 Ultra has a 200-megapixel camera and a 12-megapixel front-facing camera. But yeah, okay, it's mostly right.

I then asked it, what are the five best electric vehicles out right now? And it gives me five reasonable options. But I don't know any expert who would put the Jaguar I-Pace on their list right now and leave the Rivian off. Like, basically, the answers that it gives are really convincing to someone who doesn't already know anything about that subject. But if you are already an expert in the subject that you ask it about, then you'll find that the answers are like C-plus, maybe B-plus sometimes, at best.

So now you see what's happening. Now, suddenly, when you're asking ChatGPT or Bing about a factual thing or something you need help with, you also should probably add these layers on top. Like, am I a complete newb in this topic that I'm asking it about? Am I just willing to blindly trust whatever this spits out without any further research? Is a B-plus answer gonna be good enough for me, even if it might have some possible errors in it? You know, that might be good enough for just asking, like, how old a cheetah gets or something like that, but maybe not good enough for planning a trip, or meal planning for someone with an allergy, or something like that.

And then, if you look around the internet, people have gotten it to give increasingly more and more unhinged answers over time as it tries to simulate conversations and stay in the flow with natural language. I've seen anywhere from arguing about simple corrections, to spewing weird stories about how it's spied on its own developers or how it wants to be sentient, to gaslighting people about things and lying about its previous answers, and just saying some straight-up scary stuff.

Just go to the Bing subreddit for, like, an all-you-can-eat helping of all the insane stuff that Bing has said to just the people testing it over the past couple weeks. Like, can you imagine? Can you imagine if Google did this? If Google Search, at the top of the results, was just spewing out random stories and misinformation and all kinds of insane, unhinged things? That would not fly. Now, to be fair, this version of Bing isn't out to the public yet, right? So it is still a small-group testing phase, but even with this, Microsoft knew that some of it was gonna get out there and potentially go viral. It feels like they even basically programmed in lots of friendly emojis to try to soften the blow.

So when it knows it's giving an answer to maybe a more controversial topic, or something that it doesn't have a super clear answer for, you might get a little smiley face at the end, just so you don't, you know, take it too seriously. Also, literally as of today, when I'm testing this, it started completely bailing on a lot of topics that might be just the slightest bit existential or dangerous. It just says, "I prefer not to continue this conversation," and then it just stops. It just refuses to answer any more questions on that topic until you reset it. Which seems like a pretty good failsafe. It's a pretty good idea in hindsight. But we've already seen the other stuff.

It's gotten out there. The damage has been done. Like, the point still stands: this could have only come from Bing. A lot of people might have forgotten about this, or might not have even known about this, but Google has been working on conversational AI stuff for years. We've seen Google Assistant. But they also literally showed an AI chatbot demo on stage at Google I/O in 2021, where you could have this whole conversation with any person or object or anything in the universe that you wanted.

Their demo on stage was asking Pluto about itself. Nice and friendly, right? Oh, what's it like to be you, Pluto? What would it feel like if I visited you? How do you feel so far from the sun? The difference with Google is, this was never shipped as a product. This was an internal research project. But the idea of displacing their massive search and ads business with a chatbot that gets things wrong all the time is insane. It can't happen, right? Literally, search and ads make up more than half of Google's revenue as a company. That's what having everything to lose looks like.

Now, to be fair, Google did hold an event in Paris the day after Microsoft's event, which was them talking a little more about their chat-with-search AI plans, and they did say they're planning on eventually doing a chatbot on top of Google Search. It's called Bard. It was much more subdued, though, and yes, it also literally did have a factual mistake in the promo for it. So look, I actually like the idea. I obviously think it's smart, when you're on the precipice of this huge AI thing, to have AI kind of be this co-pilot for the web to help you around the internet.

The idea of it accurately summarizing a longer piece into some bullet points? That would be great. The fact that it could give you SparkNotes for a longer book you haven't read yet? Cool. It could even help you meal plan, help you plan a trip, help you make a purchase decision. But it's clear that we're still at the beginning of this.

Like, there are so many unanswered questions, from, obviously, the fact-checking, to, like, do schools embrace this or ban this? Or how do search engines keep sending traffic to the publishers who are the sources that the chatbot is scraping from? I mean, you get the links at the bottom, but a lot of people are not gonna click those anymore if you just give 'em the answer above the search results.

So right now, in its current stage, my take is anything we do with any of these AI tools should be a collaboration with the human touch. Like, you wouldn't just put in a query in DALL-E and then just take whatever it generates, put it in a frame, and call that art, right? It's more for inspiration for your own paint and canvas. Like, you wouldn't ask ChatGPT to write an essay and then just copy and paste it and submit it as your own. It's supposed to be the inspiration, the framework, for your own piece, for the human touch.

So, of course, you shouldn't ask the Bing chatbot what TV you should buy and then just mindlessly click and buy the first one that comes up. I mean, it could be fine, but it could also be a C-plus answer. You should use that as a springboard for your own, more informed research, especially on topics that you don't already know much about. Like, maybe don't just buy 19 backpacks immediately after asking if they can fit in the back of a Tesla. Maybe check its work first.

Thanks for watching. Catch you in the next one. Peace. ( playful music )