Hi, this is Wayne again, with the topic “Apple’s Voice-Learning AI: Accessibility Enters the Chat.”
Apple this week showed off new generative AI tools that can actually help humanity, but you may not have heard a lot of talk about Apple’s AI, because Apple doesn’t really use the term AI; Apple likes to use the phrase machine learning. There are several new features coming to the iPhone and iPad that give assistance to people living with disabilities, and one feature that is getting a lot of attention is the option to have the iPhone speak in your voice. It learns how to sound like you after about 15 minutes of training. The idea is that this can help people who have trouble speaking, or people who may be losing their voice to a disease like ALS.
Reporters who heard a demo have told me the artificial voice sounded just like the real person. We’ve seen this tech before; generative AI can simulate someone’s voice. In fact, on the same day that Apple announced this feature, there was a Senate hearing on AI oversight, and Connecticut Senator Richard Blumenthal opened the hearing with comments and audio that were entirely generated by an AI to sound like him: “Too often we have seen what happens when technology outpaces regulation.” It was to demonstrate how audio can be faked to sound like someone, and the risks that may come with that. He later described the moment as eerie: “What reverberated in my mind was, what if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or of Vladimir Putin’s leadership?”
But Apple is doing something different in its approach. The company is focusing on how AI can be used as an accessibility tool to improve the quality of life for those with disabilities, and Apple’s tech is doing more than just voice replication. It’s also helping people who are blind or have low vision, which I’ll get into more in a minute. It’s frankly amazing, on the same day, to see this technology discussed on two totally different sides of the spectrum: generative AI could help someone with daily tasks and make their life easier, but yeah, it could also start a world war. Apple this week gave us one more thing to add to the conversation of how AI shapes humanity, because this is an utterly transformative moment in technology, but we are only at the early stages, kind of like having a cell phone for the first time.
Let’s talk about what Apple has been working on with accessibility by using machine learning smarts, and how it has the potential to change people’s lives. I’m Bridget Carey, and this is One More Thing. Apple previewed a number of new features coming later this year to iPhones and iPads that were designed for cognitive, vision, hearing and mobility accessibility. Headlining the features is Personal Voice. This makes a synthesized voice that sounds like you, and to train the machine, you spend about 15 minutes speaking a series of text prompts so it can learn to mimic the way you sound. It works for an in-person conversation as well as over the phone and on FaceTime. Apple says it’s doing all this machine learning on the device.
That means it’s not taking your voice data into the cloud; it stays private on the iPhone or the iPad. Now, if someone is at risk of losing their ability to speak, Personal Voice can help make sure they still sound like themselves when talking to family and friends, and their family can still have that person’s voice in their life when chatting. For those who already are nonspeaking, there is something called Live Speech. It’s similar: you type out what you want to say and the phone says it aloud for you. It also works in phone calls and on FaceTime.
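Apple hasn’t shared how Live Speech is built under the hood, but the core idea, type something and have the device say it out loud, is something iOS developers can already approximate with existing tools. Here’s a minimal Swift sketch using the long-standing AVSpeechSynthesizer API. To be clear, this is not Apple’s feature code: the sample phrase and the generic system voice are placeholders, and Personal Voice would presumably swap in a voice trained on the user.

```swift
import AVFoundation

// Minimal sketch: speak typed text aloud, similar in spirit to Live Speech.
// Uses the long-standing AVSpeechSynthesizer API, not Apple's new feature code.
let synthesizer = AVSpeechSynthesizer()

func speak(_ typedText: String) {
    let utterance = AVSpeechUtterance(string: typedText)
    // A generic system voice; Personal Voice would presumably substitute a voice
    // trained on the user. "en-US" is just a placeholder locale.
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

// Example call: the typed phrase is a made-up placeholder.
speak("Hi, I'd like a tall latte, please.")
```

That kind of synthesis already runs on the device, which lines up with Apple’s point that your voice data doesn’t need to leave the phone.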
Now, for users who are blind or have low vision, there’s going to be a new mode in the iPhone’s Magnifier app. If you’re unfamiliar with the Magnifier app, you can use it to point the camera around the room and have it tell you if a person or a door is nearby, or have it describe your surroundings. But coming soon is something called Point and Speak, which identifies text: you just point the camera at the text and the phone reads it aloud, to help someone interact with objects, like figuring out which button is which on a microwave (power level, add 30 seconds), or imagine pointing it at a carton of milk and having it tell you the expiration date if you can’t read it yourself. It gives someone just more autonomy (there’s a rough code sketch of that read-aloud idea below). There are also going to be new interfaces that help people with cognitive disabilities. The whole design of the iPhone can be set to be super simple, with just some core iPhone features like listening to music, taking calls and texting, and accessing photos and the camera. Everything there has high-contrast buttons and large text labels, and if someone prefers to communicate visually, Messages will have an emoji-only keyboard and the option to record a video message.
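Apple hasn’t detailed how Point and Speak works internally, but the general pattern, recognize text in a camera frame on the device and then speak it, can be approximated today with the Vision framework’s text recognition plus speech synthesis. The sketch below is only a conceptual illustration under that assumption; the function name and image handling are made up for the example, and it leaves out camera capture and the “pointing” interaction entirely.

```swift
import Vision
import AVFoundation
import CoreGraphics

// Conceptual sketch of a "point and speak"-style flow: recognize text in a
// captured camera frame on-device with the Vision framework, then read it
// aloud. An illustration of the idea, not Apple's implementation.
let speech = AVSpeechSynthesizer()

func readTextAloud(in image: CGImage) {
    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Take the top candidate string from each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        guard !lines.isEmpty else { return }
        speech.speak(AVSpeechUtterance(string: lines.joined(separator: ". ")))
    }
    request.recognitionLevel = .accurate // on-device OCR, no cloud round trip

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```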
Apple gave some select press members demos of how it all worked ahead of this week’s Global Accessibility Awareness Day; for multiple years now, on this day, Apple has talked about its work in the accessibility space. One of the people who got an early preview of the new features is tech journalist Steven Aquino. He has covered accessibility and assistive tech for many years. You can find his work now at Forbes, and this beat is personal for him: as a disabled person, he’s had to overcome his own accessibility issues.
The other day, Steven chatted with me to share his take on what Apple presented. What were some of your big reactions, especially with the feature where Apple can help replicate a voice for someone who might be losing theirs? “Really, in everyday life, you know, the software that they showed off this week is really innovative, game-changing stuff, because of how it helps people use their technology. And I think we as a society, as nerds, as people in the media, don’t give Apple enough credit for being super innovative in that sense.” I haven’t heard what this generative voice sounds like. What did it sound like in the demo? “It sounded like a real person. You know them, like you hear them talk, and then you hear them give the demo, and if you closed your eyes, I think you’d be hard-pressed to tell who’s who, right?”
Apple did not provide a recording for us to hear this, but maybe it’s something we’ll see more of as we get closer to launch. Steven said he sees himself using it because of his stutter: “Having all these speech things is really heartening because, you know, I can see myself using that feature, because I get really stressed out when I talk to people on the phone, or at Starbucks, or wherever I am. So having that to help me would be super cool. And, you know, again, this is all not to say that there’s not a lot more that Apple could do. I can think of a hundred things that Apple could do, but for the here and now, it’s really super awesome.” We also chatted about where this tech could go next, and he said maybe the camera-detection smarts could be put into a future Apple headset, so maybe you don’t have to hold up a phone to get a description read to you out loud. In the end, it’s about tech helping with independence. Could this be something Apple brings up at WWDC’s main keynote presentation on June 5, when Apple’s expected to show off its new headset? Maybe. Certainly the developers conference, which goes on for a few days, may dive into this tech more. But it is pretty awesome to see advancements like this, and it is something to keep in mind about what AI is doing for us, besides just writing term papers for students who didn’t do their homework. Thanks again to tech reporter Steven Aquino. And I want to hear from you about what this news has you thinking about for future uses of AI. Drop your thoughts in the comments, and subscribe
so you don’t miss any of the news as we get closer to WWDC. I’m Bridget Carey. Thanks for watching.