Google I/O 2018 keynote in 14 minutes

Hi, this is Wayne again with a topic “Google I/O 2018 keynote in 14 minutes”.
Welcome to Google I/O. Thank you for joining us. We are at an important inflection point in computing, and it’s exciting to be driving technology forward; we’re very excited about how we can approach our mission with renewed vigor thanks to the progress we see in AI. Healthcare is one of the most important fields AI is going to transform. If you analyze over a hundred thousand data points per patient, more than any single doctor could analyze, we can actually quantitatively predict the chance of readmission 24 to 48 hours earlier than traditional methods, which gives doctors more time to act. Another example of one of our core products that we are redesigning with

AI is Gmail. We call it Smart Compose. As the name suggests, we use machine learning to start suggesting phrases for you as you type. All you need to do is hit tab and keep auto-completing. [Applause]

We are rolling out Smart Compose to all our users this month and hope you enjoy using it as well. In Google Photos, we are bringing a new feature called Suggested Actions. Say, for example, you went to a wedding and you’re looking through those pictures. We understand your friend Lisa is in the picture, and we offer to share the three photos with Lisa; with one click, those photos can be sent to her. By the way, AI can also deliver unexpected moments. For example, if you have this cute picture of your kid, we can make it better: we can drop the background, pop the color, and make the kid even cuter. Or if you happen to have a very special memory, something in black and white, maybe of your mother and grandmother, we can recreate that moment in color and make it even more real. All these features are going to be rolling out to Google Photos users in the next couple of months. And today I’m excited to announce our next generation,

TPU 3.0. These chips are so powerful that, for the first time, we’ve had to introduce liquid cooling in our data centers. Building on WaveNet, we are adding, as of today, six new voices to the Google Assistant. Let’s have them say hello: “Good morning, everyone. I’m your Google Assistant. Welcome to Shoreline Amphitheatre. We hope you’ll enjoy Google I/O. Back to you, Sundar.”

Let’s be honest, it gets a little annoying to say “Hey Google” every time I want to get my Assistant’s attention. Now you won’t have to say “Hey Google” every time; we call this Continued Conversation, and it’s been a top feature request. Are kids learning to be bossy and demanding when they can just say “Hey Google” to ask for anything they need? We’ve been consulting with families and child development experts, and we plan to offer Pretty Please as an option for families later this year. Let’s say you want to ask Google to make you a haircut appointment on Tuesday between 10:00 and noon. What happens is the Google Assistant makes the call seamlessly in the background for you. The amazing thing is that the Assistant can actually understand the nuances of conversation.

We’ve been working on this technology for many years. It’s called Google Duplex. It brings together all our investments over the years in natural language understanding, deep learning, and text-to-speech. By the way, when we are done, the Assistant can give you a confirmation notification saying your appointment has been taken care of. We gave you an early look at our new smart displays at CES in January. We’re working with some of the best consumer electronics brands, and today I’m excited to announce that the first smart displays will go on sale in July. From staying in touch with family with broadcasts and video calls, to keeping an eye on your home with all of our smart home partners, to seeing in advance what the morning commute is like with Google Maps, we’re thoughtfully

integrating the best of Google and working with developers and partners all around the world to bring voice and visuals together in a completely new way for the home. With the new Google News, we set out to help you do three things: first, keep up with the news

you care about; second, understand the full story; and finally, enjoy and support the sources you love. We’re rolling out on Android, iOS, and the web in 127 countries, starting today. Next, Android P. AI underpins the first of three themes in this release, which are intelligence, simplicity, and digital wellbeing. With Android P, we partnered with DeepMind to work on a new feature

we call Adaptive Battery. It’s designed to give you a more consistent battery experience: the operating system learns your usage patterns and adapts so that it spends battery only on the apps and services that you care about. Adaptive Brightness learns how you like to set the brightness slider given the ambient lighting, and then does it for you in a power-efficient way. With Android P, we’re going beyond simply predicting the next app you’ll launch to predicting the next action you want to take; we call this feature App Actions. Slices are a new API for developers to define interactive snippets of their app’s UI that can be surfaced in different places in the OS. If I type “Lyft” into the Google Search app, I now see a slice from the Lyft app installed on my phone. Lyft is using the Slice API’s rich array of UI templates to render a slice of their app in the context of search, and Lyft is able to give me the price for my trip to work. The slice is interactive, so I can order the ride directly from it.
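
As a rough illustration of what building one of these looks like, here is a minimal sketch of a Slice provider using the androidx.slice builders; the provider class, BookRideActivity, the drawable, and the price text are hypothetical stand-ins, not the actual Lyft integration shown on stage.

```kotlin
import android.app.PendingIntent
import android.content.Intent
import android.net.Uri
import androidx.core.graphics.drawable.IconCompat
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.SliceAction
import androidx.slice.builders.list
import androidx.slice.builders.row

// Hypothetical provider: exposes a single ride-booking slice at content://…/ride
class RideSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    // Called when a host (for example, search) asks for the slice behind a URI.
    override fun onBindSlice(sliceUri: Uri): Slice? {
        val ctx = context ?: return null
        if (sliceUri.path != "/ride") return null

        // Tapping the row launches the app's booking screen (hypothetical activity).
        val bookAction = SliceAction.create(
            PendingIntent.getActivity(
                ctx, 0, Intent(ctx, BookRideActivity::class.java),
                PendingIntent.FLAG_UPDATE_CURRENT
            ),
            IconCompat.createWithResource(ctx, R.drawable.ic_car),
            ListBuilder.ICON_IMAGE,
            "Book ride"
        )

        // One interactive row rendered by the host using the slice templates.
        return list(ctx, sliceUri, ListBuilder.INFINITY) {
            row {
                title = "Ride to work (est. $10)"   // placeholder price
                primaryAction = bookAction
            }
        }
    }
}
```

A real provider would also be declared in the app manifest so that hosts can discover and request the slice.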

Pretty nice. I’m really excited to announce ML Kit, a new set of APIs available through Firebase. With ML Kit, you get on-device APIs for text recognition, face detection, image labeling, and a lot more, and ML Kit also supports the ability to tap into Google’s cloud-based ML technologies.
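
For a sense of what the on-device APIs look like to a developer, here is a minimal sketch of text recognition, assuming the 2018-era firebase-ml-vision artifact; class and method names shifted across later ML Kit releases, so treat it as illustrative rather than the definitive API.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun recognizeText(bitmap: Bitmap) {
    // Wrap a camera or gallery bitmap for ML Kit.
    val image = FirebaseVisionImage.fromBitmap(bitmap)

    // On-device recognizer: no network round trip, works offline.
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // The result is structured into blocks, lines, and elements;
            // here we just log the full recognized text.
            Log.d("MlKitDemo", "Recognized: ${result.text}")
        }
        .addOnFailureListener { e ->
            Log.w("MlKitDemo", "Text recognition failed", e)
        }
}
```

Swapping the on-device recognizer for the cloud-based one follows the same pattern, trading offline operation for higher accuracy.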

We think we can help users with their digital wellbeing in four ways: we want to help you understand your habits, focus on what matters, switch off when you need to, and, above all, find balance with your family.

In Android, we’re going to give you full visibility into how you’re spending your time: the apps where you’re spending your time, the number of times you unlock your phone on a given day, the number of notifications you got, and we’re going to really help you deal with this better. Android P will show you a dashboard of how you’re spending time on your device. We’re also making improvements to Do Not Disturb mode to silence not just the phone calls and texts, but also the visual interruptions that pop up on your screen.
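
The dashboard itself is built into the OS, but the kind of per-app screen-time data it surfaces can already be queried through the public UsageStatsManager API. Here is a hedged sketch, assuming the app has been granted the special usage-access permission in Settings.

```kotlin
import android.app.usage.UsageStatsManager
import android.content.Context
import java.util.concurrent.TimeUnit

// Requires android.permission.PACKAGE_USAGE_STATS in the manifest and the
// user granting Usage Access in Settings; otherwise the query returns nothing.
fun dailyScreenTimeByApp(context: Context): Map<String, Long> {
    val usm = context.getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager
    val end = System.currentTimeMillis()
    val start = end - TimeUnit.DAYS.toMillis(1)

    // One UsageStats entry per package; totalTimeInForeground is in milliseconds.
    return usm.queryUsageStats(UsageStatsManager.INTERVAL_DAILY, start, end)
        .associate { it.packageName to it.totalTimeInForeground }
}
```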

We’ve created a new gesture that we’ve affectionately codenamed Shush: if you turn your phone over on the table, it automatically enters Do Not Disturb, so you can focus on being present, with no pings, vibrations, or other distractions. Android P will help you set up a list of contacts that can always get through to you with a phone call, even if Do Not Disturb is turned on. And we created Wind Down.
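
The keynote describes the user-facing behaviour; purely as a sketch of the existing platform APIs such a feature builds on (not how Android P implements it internally), here is how an app with Do Not Disturb access could let only starred contacts’ calls break through while the interruption filter is on.

```kotlin
import android.app.NotificationManager
import android.content.Context

// The user must first grant Do Not Disturb (notification policy) access to the
// app in Settings before these calls take effect.
fun allowStarredCallersDuringDnd(context: Context) {
    val nm = context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
    if (!nm.isNotificationPolicyAccessGranted) return  // access not granted yet

    // Only calls, and only from starred contacts, are allowed to interrupt.
    nm.notificationPolicy = NotificationManager.Policy(
        NotificationManager.Policy.PRIORITY_CATEGORY_CALLS,
        NotificationManager.Policy.PRIORITY_SENDERS_STARRED,
        NotificationManager.Policy.PRIORITY_SENDERS_STARRED
    )

    // Switch Do Not Disturb to priority-only mode.
    nm.setInterruptionFilter(NotificationManager.INTERRUPTION_FILTER_PRIORITY)
}
```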

You can tell the Google Assistant what time you aim to go to bed, and when that time arrives, it will switch on Do Not Disturb and fade the screen to grayscale. Today we’re announcing the Android P beta, and thanks to our efforts in Android Oreo to make OS upgrades easier, the Android P beta is available on Google Pixel and seven more manufacturers’ flagship devices today. Maps was built to assist everyone, wherever they are in the world.

We’re now able to automatically add new addresses, businesses, and buildings that we extract from Street View and satellite imagery directly to the map. This is critical in rural areas, in places without formal addresses, and in fast-changing cities like Lagos here, where we’ve literally changed the face of the map in the last few years. We’re adding a new tab to Maps called For You. It’s designed to tell you what you need to know about the neighborhoods you care about: new places that are opening, what’s trending now, and personal recommendations. We’ve created a score called Your Match to help you find more places that you’ll love. Your Match uses machine learning to combine what Google knows about hundreds of millions of places with the information that I’ve added: restaurants I’ve rated, cuisines I’ve liked, and places that I’ve been to. Want to share the list with your friends to get their input? You can easily share with just a couple of taps on any platform that you prefer. Then my friends can add more places if they want to, or just vote with one simple click, so we can quickly choose a group favorite. So now, instead of copying and pasting a bunch of links and sending texts back and forth, decisions can be quick, easy, and fun. What if the camera can help us answer questions, questions like “Where am I going?” or “What’s that in front of me?” Our teams have been working really hard to combine the power of the camera and computer vision with Street View and Maps to reimagine walking navigation.

You can start to see nearby places, so you see what’s around you, and just for fun, our team has been playing with the idea of adding a helpful guide, like that one there, so it can show you the way. VPS, a visual positioning system, can estimate precise positioning and orientation. We think the camera can also help you do more with what you see. That’s why we started working on Google Lens. Oh, that cute dog in the park? That’s a Labradoodle. Lens can now recognize and understand words with smart text selection. You can now connect the words you see with the answers and actions you need, so you can do things like copy and paste from the real world directly into your phone. The next feature I want to talk about is called Style Match, and the idea is this: sometimes your question is not “What’s that exact thing?” Instead, your question is “What are things like it?” You’re at your friend’s place, you check out this trendy-looking lamp, and you want to know things that match that style, and now Lens can help you.

So the last thing I want to tell you about today is how we’re making Lens work in real time. As you saw in the Style Match example, you open the camera and you start to see Lens surface all the information proactively and instantly, and it even anchors that information to the things that you see. We’re very excited that starting next week, Lens will be integrated right inside the camera app on the Pixel, the new LG G7, and a lot more devices. Today, Waymo is the only company in the world with a fleet of fully self-driving cars, with no one in the driver’s seat, on public roads. Phoenix will be the first stop for Waymo’s driverless transportation service, which is launching later this year.

Soon, everyone will be able to call Waymo using our app, and a fully self-driving car will pull up with no one in the driver’s seat to whisk them away to their destination. Within a matter of months, we reduced the error rate for detecting pedestrians by 100x; that’s right, not a hundred percent, but a hundred times. Now at Waymo, AI touches every part of our system, from perception to prediction to decision-making to mapping and so much more. To be a capable and safe driver, our cars need a deep semantic understanding of the world around them. Our vehicles need to understand and classify objects, interpret their movements, reason about intent, and predict what they will do in the future. Today I want to tell you about two areas where AI has made a huge impact: perception and prediction. Traditionally in computer vision, neural networks are used just on camera images and video, but our cars have a lot more than just cameras. We also have lasers to measure the distance and shapes of objects, and radars to measure their speed, and by applying machine learning to this combination of sensor data, we can accurately detect pedestrians in all forms

in real time. Our fleet has self-driven more than 6 million miles on public roads, and at Waymo we use the TensorFlow ecosystem and Google’s data centers, including TPUs, to train our neural networks; with TPUs, we can now train our nets up to 15 times more efficiently. I can’t wait to make our self-driving cars available to more people, moving us closer to a future where roads are safer, easier, and more accessible for everyone.

Thanks, everyone. It’s a great reminder of how AI can play a role in helping people in new ways all the time, and I hope you all find some inspiration in the next few days to keep building good things for everyone. Thank you. [Applause]