I experienced the AI Pin and I still have questions!

Hi, this is Wayne again with the topic “I experienced the AI Pin and I still have questions!”.
So we’re here at MWC with Imran and Bethany from Humane AI, and we would like to ask you to introduce the AI Pin to us.

So the AI Pin is a new kind of computer. It’s essentially something that allows you to be more present and gives you a sense of freedom, mainly because it’s got an AI operating system that we’ve built from the ground up on top of an Android core. That allows it to do a lot of the work for you and engage AIs so that you don’t have to be doing a lot of operations manually, which gives you a new way to interact, one that lets you maintain a little more presence and freedom than you have today. Translate to French.

How it works is that when I put two fingers down and speak to it in English, it’ll speak back in French. So maybe I’ll start and then you can respond. So we have Wayne, yes, perfect. So I could say, “It’s so nice to meet you,” and now it’ll speak back in French.
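Humane hasn’t published the interpreter’s internals, but the interaction described here maps onto a simple press-and-hold pipeline: the two-finger hold starts speech capture, and releasing triggers translate-and-speak. Here is a minimal Kotlin sketch of that flow; every type in it (SpeechCapture, Translator, Speaker) is a hypothetical stand-in, not Humane’s API.

```kotlin
// Hypothetical sketch of the two-finger "interpreter" gesture: holding
// starts speech capture, releasing translates and speaks the result.
// None of these types are Humane's real APIs.
class InterpreterSession(
    private val capture: SpeechCapture,   // assumed mic/ASR wrapper
    private val translator: Translator,   // assumed translation client
    private val speaker: Speaker          // assumed TTS output
) {
    fun onTwoFingerHold() {
        capture.start()                   // begin listening while held
    }

    fun onRelease(targetLang: String) {
        val heard = capture.stop()        // e.g. "It's so nice to meet you"
        val translated = translator.translate(heard, targetLang)
        speaker.say(translated, targetLang)  // device speaks the French back
    }
}

interface SpeechCapture { fun start(); fun stop(): String }
interface Translator { fun translate(text: String, to: String): String }
interface Speaker { fun say(text: String, lang: String) }
```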

Now when I put two fingers down, you’ll speak to it, so I’ll come closer so you can speak to it. Yes, it’s also very interesting to meet you and see this pin. Awesome. And it speaks and understands over 50 languages, and it also knows when you’re in a new city: it’ll default to the language of the city you’re in. So when I landed in Barcelona, it defaulted to Catalan or Spanish, which is really powerful. So we have a first-of-its-kind laser projection system that we call Laser Ink, and it essentially allows you to have a display there when you need it, and it disappears when you don’t. We use our time-of-flight camera to understand when your palm is present, and we project only on your palm.
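The palm gating is the interesting part here: a depth frame from the time-of-flight camera decides whether the projector draws at all. Below is a minimal sketch of that loop, assuming hypothetical PalmDetector and Projector components (Humane hasn’t documented the real ones).

```kotlin
// Hypothetical sketch of palm-gated projection: each time-of-flight depth
// frame is checked for a palm, and the Laser Ink projector only draws
// while one is present. PalmDetector and Projector are stand-ins.
class LaserInkController(
    private val detector: PalmDetector,
    private val projector: Projector
) {
    fun onDepthFrame(frame: DepthFrame) {
        val palm = detector.findPalm(frame)
        if (palm != null) {
            projector.projectOnto(palm.region)  // display appears on the palm
        } else {
            projector.blank()                   // hand down: display goes away
        }
    }
}

class DepthFrame                         // raw depth data from the ToF camera
data class Palm(val region: IntArray)    // projection target in projector coords
interface PalmDetector { fun findPalm(frame: DepthFrame): Palm? }
interface Projector { fun projectOnto(region: IntArray); fun blank() }
```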

When you put your hand down, the display goes away. This is meant for really quick interactions, and you navigate the display using touch. I can show you: this is my daughter sending me a message. “Oliver asks if he can call you after school.” Let’s move back. Now you can go here, go through time, temperature and the date, and you can push back to get to the menu.

I can go here to see photos that I’ve taken in the past and scroll through previews and videos. This is to make sure that after I take a photo, I know I got the shot. And every time you ask the device a question, when I hold up my hand I can interrupt the answer and read it on my palm. This is really the heart of a multimodal system, which is really important. We see this as a new kind of computer, something that’s more established in terms of the coexistence of a lot of these services and things like emails and Slack, which will be coming in the future.

For us, we think smartphones are going to be around for quite a while, just like desktops and even servers are still around as computing platforms. But we do see this as a quick go-to for doing a lot of that for you. And I think one of the things that’s really powerful about it is that the moment you want to send a text, or want more information on something, it’s right there, ready for you in a way that other computers aren’t. So there are a couple of ways you can play music, but I prefer to just speak to it very naturally: “Play some Taylor Swift,” right, famous artist. What it’s going to do, because of our streaming partnership with Tidal, is load music from Taylor Swift, get her top songs, make a playlist, and look at that, it’s starting to play. You can see I can control the music and playback with simple gestures; I just paused it by double tapping on it.
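The flow narrated here (resolve the artist, fetch her top songs, build a playlist, start playback) is easy to picture as code. A sketch under assumed names, using a hypothetical TidalClient wrapper rather than the real Tidal SDK:

```kotlin
// Hypothetical sketch of "play some Taylor Swift": resolve the artist,
// fetch top tracks, queue them, start playback. TidalClient and Player
// are stand-ins, not the real Tidal SDK.
fun playArtist(name: String, tidal: TidalClient, player: Player) {
    val artist = tidal.searchArtist(name)       // resolve "Taylor Swift"
    val topTracks = tidal.topTracks(artist.id)  // get her top songs
    player.setQueue(topTracks)                  // "make a playlist"
    player.play()                               // "...it's starting to play"
}

data class Artist(val id: String, val name: String)
data class Track(val id: String, val title: String)
interface TidalClient {
    fun searchArtist(query: String): Artist
    fun topTracks(artistId: String): List<Track>
}
interface Player { fun setQueue(tracks: List<Track>); fun play() }
```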

I can lower the volume just by swiping down. I can skip to the next song. These are all gestures, but I can also control music just by using the laser display.
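Those gestures amount to a small mapping from touch events to playback commands. A hedged sketch, with assumed gesture names and a hypothetical PlaybackControls interface:

```kotlin
// Hypothetical mapping from the demonstrated touch gestures to playback
// commands; the gesture names and PlaybackControls are assumptions.
enum class Gesture { DOUBLE_TAP, SWIPE_DOWN, SWIPE_FORWARD }

interface PlaybackControls { fun togglePause(); fun volumeDown(); fun next() }

fun handleGesture(g: Gesture, controls: PlaybackControls) = when (g) {
    Gesture.DOUBLE_TAP -> controls.togglePause()   // "paused it by double tapping"
    Gesture.SWIPE_DOWN -> controls.volumeDown()    // "lower the volume by swiping down"
    Gesture.SWIPE_FORWARD -> controls.next()       // "skip to the next song"
}
```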

Look at that. I can go next. I can pause it.

I can see more about the album, push back, see what’s next: all sorts of music features you might expect from a music experience, but in a new form factor and a new UI.

Can you tell us a little bit about the ChatGPT and OpenAI models that you’re using? Are you taking into consideration the fact that most AIs right now sometimes hallucinate answers and come up with stuff that isn’t 100% accurate?

We don’t actually use ChatGPT; we use the OpenAI API as one of our LLMs, and our OS supports multiple LLMs as well. The way our architecture works, though, is that we go out and find the right thing you’re looking for. If it’s an application experience that we’re supporting, we get you that one immediately. If it’s information that you want, we try to get you the best and most accurate answer we can; that’s all contingent upon what’s out there. So we will actually go to the right sources: we’ll go to Wolfram Alpha, for example, for mathematics, or to Wikipedia. Hallucination is something that comes when you go directly to the LLM, and we don’t go directly to the LLM.
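What Imran describes is a router that sends each query to a grounded source before ever falling back to a bare LLM. Humane hasn’t published this architecture, so the following Kotlin sketch is only an illustration of the idea; the classifier and the Wolfram Alpha/Wikipedia clients are assumptions, not Humane’s actual stack.

```kotlin
// Hypothetical sketch of the source routing described above: classify the
// query, answer math from Wolfram Alpha and lookups from Wikipedia, and
// fall back to an LLM only when no grounded source fits.
enum class QueryKind { MATH, FACTUAL, OPEN_ENDED }

class AnswerRouter(
    private val classify: (String) -> QueryKind,  // assumed intent classifier
    private val wolfram: (String) -> String,      // assumed Wolfram Alpha client
    private val wikipedia: (String) -> String,    // assumed Wikipedia client
    private val llm: (String) -> String           // one of several pluggable LLMs
) {
    fun answer(query: String): String = when (classify(query)) {
        QueryKind.MATH -> wolfram(query)          // grounded computation
        QueryKind.FACTUAL -> wikipedia(query)     // grounded retrieval
        QueryKind.OPEN_ENDED -> llm(query)        // last resort: hallucination risk
    }
}
```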
