Hi, this is Wayne again, with today's topic: "I typed text on a computer with my thoughts" – Top Shelf.
I grew up in a household with two official languages. We spoke French and English interchangeably, often in the same sentence, without ever missing a beat. But even with two languages in your back pocket, some things just don't translate. They need interpretation, whether by humans or machines: turning a word, an idea, or something only one of us understands into something we can both appreciate.
Uni is a tablet that can capture American Sign Language and translate it into spoken English. Ryan Hait-Campbell is the CEO of MotionSavvy, the company that's developing Uni. Where did the idea for Uni come from? The overall idea, changing sign language into voice, I mean, that's pretty common. But we hired deaf people; they know what the community actually needs.
The concept of sign language actually being captured by a camera and translated, in some ways that's tough. For a project like this, we had to prove ourselves both to Leap Motion and to the deaf community, and with all of that, the core of the mission is harder. You know, you saw Leap Motion and you thought, hey, this is something that we can use.
The beautiful thing about Leap Motion is that it actually has two cameras, and the depth data allows for hands that touch: you can put your hands together, you can overlap them. That's the really cool thing about Leap.
Okay, it's such a breakthrough in gesture recognition technology. You have something called Sign Builder that lets you record a specific sign and apply a word to it. So, for example, my sign for pizza may be different from another person's sign for pizza.
So with Sign Builder, I open it up, click record, sign "pizza," and apply the word to it. Then, when I make my sign, it'll come out of the voice as "pizza," which is very cool. Sign Builder will be an integral part of Uni when the tablet is widely available, but the offline demo Ryan has now is still pretty impressive. So, as you see, we have two buttons on screen.
You have a sign button and a listen button. I'm gonna go ahead and press the sign button right now: "Hello, my name is Ryan. What's your name?" Then you have the listen button; I'll tap that and demonstrate: "Hello, my name is Alex. Nice to meet you." This could be a real game-changer, and not just for jobs. After talking to Ryan, I mean, most deaf people I know stay within the deaf community, and there's nothing wrong with that, but it's also a result of the lack of accessible communication options, and that's the gap I'm hoping to fill with this product. Seeing Uni in action was amazing, and you can really see how it might one day make getting around in a world that largely doesn't speak sign language a little bit smoother.
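Under the hood, a record-and-match feature like Sign Builder can be thought of as template matching: store one feature vector per recorded sign, then map a new gesture to its nearest stored template. The sketch below is only an illustration of that idea, not MotionSavvy's actual pipeline; the class, the feature representation, and the numbers are all invented.

```python
import math

# Toy template-matching sketch: each recorded sign is reduced to a
# fixed-length feature vector (e.g. derived from hand-tracker data),
# and recognition is nearest-neighbor matching against the templates.

class SignDictionary:
    def __init__(self):
        self.templates = {}  # word -> recorded feature vector

    def record(self, word, features):
        """Store the feature vector captured while the user performs the sign."""
        self.templates[word] = features

    def recognize(self, features):
        """Return the word whose recorded template is closest to the input."""
        return min(self.templates,
                   key=lambda w: math.dist(self.templates[w], features))

signs = SignDictionary()
signs.record("pizza", [0.9, 0.1, 0.4])   # the user records a personal sign
signs.record("hello", [0.1, 0.8, 0.2])

print(signs.recognize([0.85, 0.15, 0.5]))  # a slightly varied repetition
```

Because each user records their own templates, the same word can have a different sign per person, which is exactly the point of a sign-builder tool.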
But what about people who have lost all mobility? Jonathan Wolpaw is a brain injury researcher at the Wadsworth Center in Albany, New York. He's responsible for developing a brain-computer interface system that allows people to control computers and type text with their minds. So I'm just going to measure the circumference of your head, and then the length of it. This is a brain-computer interface system, a very simple one, that's intended to be used by people who are severely disabled, people who may be totally paralyzed and have really lost all means of communication.
So its purpose, its immediate purpose, is to restore simple communication and control to people who have lost it. See your eye blinks right there? If you blink your eyes, they show up, which is pretty interesting. There's a matrix of stimuli that are presented: you can have a matrix of letters, numbers, function calls, various kinds of things the person might want to select, and they flash.
You see that? So every time it flashes, you just count. When the person wants to select a particular item, the flash of that item produces a response in the brain that's different from the response to all the other items. That's called the "aha" response, or the oddball response. And that's what I'm doing, focusing on one letter. Yeah, you're paying attention: you want the I, so you're paying attention to the I, and you notice how many times the I flashes while really ignoring everything else. So after a series of flashes, after a series of repetitions, the system can tell with considerable accuracy, often very high accuracy, what you want to select, and it can make that selection.
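The selection procedure described above, many flashes, averaging over repetitions, then picking the item with the distinctive response, can be simulated in a few lines. This is a toy model rather than the lab's code: the scalar "response" values, the Gaussian noise, and the parameters are all assumptions made for illustration.

```python
import random
random.seed(0)

# Toy simulation of a matrix-speller selection: each flash of an item
# yields one noisy scalar "response"; only the attended (target) item
# evokes an extra bump (the oddball effect). Averaging over many
# repetitions lets the bump stand out from the noise.

ITEMS = list("ABCDEFGHI")
TARGET = "I"                 # the letter the user is silently counting
REPETITIONS = 20             # each item flashes this many times

def flash_response(item):
    """Simulated single-flash response: noise, plus a bump for the target."""
    noise = random.gauss(0.0, 1.0)
    return noise + (2.0 if item == TARGET else 0.0)

# Average the responses to each item's flashes, then select the largest.
averages = {
    item: sum(flash_response(item) for _ in range(REPETITIONS)) / REPETITIONS
    for item in ITEMS
}
selected = max(averages, key=averages.get)
print(selected)
```

With a single flash the target's bump is buried in noise; after twenty repetitions per item, the averaged target response reliably dominates, which mirrors why the real system needs a series of flashes before it can commit to a selection.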
Okay, so to make it simple, you think of a three-letter word. Typically, how long does it take for somebody to get used to the system and really feel comfortable? With the system that you're using, it's really a matter of a few hours, but it does require the setup: it requires the cap and the gel, and a person helping. Yeah, but someone who's paralyzed that thoroughly is gonna need help in any case. You mentioned this will help people who have limited mobility and who are unable to communicate. What exactly, what conditions are we talking about? The one that is receiving the most attention, and most of the people we work with, are people with amyotrophic lateral sclerosis, or Lou Gehrig's disease. There are a variety of other potential users: people with high-level brain-stem strokes, people with high-level spinal cord injuries. There are a variety of disorders that might make a person a candidate for this. And people are using it right now in their homes? Yes, there are a few people who are using it right now at home. What's the future of the system? What are you trying to improve? Is it always going to use gel? Well, no, hopefully not.
There are a number of companies now that are developing dry electrodes. Hopefully we'll eventually get something that people can put on just like a hat, that looks good, etc., and the electrodes will be there. They'll make contact with the skin and they'll work. We're not quite there yet, but we're moving in that direction.
Do you think the whole area of brain-computer interfaces is getting more attention? It clearly draws a lot of attention, both scientific and popular, and I think that will continue to be sustained as long as things are actually delivered. And what you do in the laboratory, you mentioned a bit earlier, you're actually having people use it? Yes. I mean, the kinds of BCIs that we can look at in the lab are a lot fancier and ultimately perhaps a lot more capable than what you were using here, but the thing about this one is that it's reliable and it can be used in real life without us hanging over you. Clearly, technology is getting us a lot closer to the kinds of systems that allow our favorite aliens, superheroes, and sci-fi villains to talk to each other in fictional worlds.
But when it comes to communication, there are certain nuances that machines might never fully grasp or convey, and that's where human interpreters come in. Sarah Wilson is one of the best. She works as an interpreter at the United Nations, where properly conveying meaning is of the utmost importance.
How does one become a UN interpreter? I think, first and foremost, one has to have a real natural curiosity about what's going on in the world, and also be a bit of a natural performer. What's the performance aspect? It's a stressful job. If you feel nervous, you can't let that be reflected in your voice, because otherwise you would be doing a disservice to the speaker, and it would also call attention to yourself as an interpreter. The idea is for us to convey the intended message of the speaker as accurately as possible. "Our sincere gratitude to all of those who supported our candidate. Thank you very much, Mr. President." What exactly is the difference between translation and interpretation? Translation deals with the written word, and interpretation with the spoken word. There are different modes of interpretation; here at the United Nations, we do simultaneous interpretation. And when you say simultaneous interpretation, you interpret at the same time? We don't work on the basis of each word.
We work in units of meaning. So once you have a unit of meaning, then you render it, in my case, into English. And while you're doing that, you're also listening for the new information that's coming in, right, and that's something we call split attention. You have to be able to divide your attention between taking in the information and processing it, and also monitor your output enough to make sure that you're making sense. What happens when a speaker is speaking in an angry voice? We try to convey that somehow; it may be through the emphasis that they place on a given word.
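The units-of-meaning idea, buffering incoming words until a meaningful chunk is complete, rendering that chunk, and listening all the while, can be sketched as a small streaming pipeline. Everything here is invented for illustration (the glossary, the reordering rule, the punctuation-based chunk boundary); real interpretation is nothing this mechanical.

```python
# Toy sketch of simultaneous interpretation by "units of meaning":
# instead of translating word by word, we buffer input until a unit
# is complete, render the whole unit, and keep consuming the stream.

GLOSSARY = {"je": "I", "vous": "you", "remercie": "thank",
            "sincèrement": "sincerely"}

def reorder(unit):
    """French object-before-verb ("je vous remercie") -> English order."""
    if unit[:3] == ["je", "vous", "remercie"]:
        return ["je", "remercie", "vous"] + unit[3:]
    return unit

def interpret(stream):
    """Yield rendered chunks while still consuming the input stream."""
    buffer = []
    for word in stream:
        buffer.append(word)
        if word in (".", ","):          # a crude end-of-unit signal
            unit = buffer[:-1]
            buffer.clear()
            yield " ".join(GLOSSARY.get(w, w) for w in reorder(unit))

for chunk in interpret(["je", "vous", "remercie", "sincèrement", "."]):
    print(chunk)
```

A strictly word-by-word mapping of the same input would produce "I you thank sincerely"; waiting for the full unit of meaning is what makes the natural English order possible, which is exactly the point the interpreter makes above.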
Tone is something we have to be very attuned to. How does the current technology for translating what people are saying compare to a human interpreter? I think the new technologies definitely have their applications and can be very useful. But in the context of conference interpreting, I do think that a machine would never be able to replace a human being, because of our understanding of the nuances of meaning: as human beings, we have a capacity to detect emotions and emphases that I don't believe a machine can. So, we've seen machines that are working toward doing the very things that make UN interpreters great, but at the end of the day the human brain is still the ultimate interpreter. It's the processor that other machines are trying to catch up to. Body language, sign language, eye movements, speech and breathing rates: we take it all in without even thinking about it. Maybe one day machines will be able to do that too.