Google AI Health Event: Everything Revealed in 13 Minutes

Hi, this is Wayne again with a topic “Google AI Health Event: Everything Revealed in 13 Minutes”.
To bring tailored, personalized insights based on your unique needs and preferences, we’re creating new AI experiences in Fitbit Labs. It’s a concept we introduced last year: Fitbit Labs is a space in the mobile app where Premium users can get early access to experimental AI features, test them out, and give feedback. This offers an opportunity for our users to partner with us as we build and iterate on these experiences. Building on Google’s AI expertise, Fitbit Labs can help you derive meaningful, personalized insights by bringing together your multimodal time-series health and wellness data.

It can even generate charts for the data points you want to visualize. For example, you could discover that your sleep score is best on the days you are more active. On a recent vacation with my family, I was put in charge of all the children one day; I hit my record number of steps, and then my best sleep score that night. With these kinds of tools and features, I’ll be able to dig deeper into connections like these: am I just more active when I’m around children? Later this year, Fitbit Labs features will be available for testing to a limited number of Android users enrolled in the Fitbit Labs program in the mobile app. Looking ahead, we want to deliver even more personalized health experiences with AI, so we’re partnering with Google Research, expert health and wellness doctors, and certified coaches to create a personal health large language model that can reason about your health and fitness data and provide tailored recommendations, similar to how a personal coach would. This personal health LLM will be a fine-tuned version of our Gemini model.
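As a rough illustration of the activity-to-sleep insight described above, here is a minimal sketch that correlates daily step counts with that night’s sleep score using pandas. The table layout, column names, and values are hypothetical, not Fitbit’s actual schema or pipeline.

```python
# Minimal sketch: correlate daily activity with that night's sleep score.
# Hypothetical data layout and values; not Fitbit's actual schema.
import pandas as pd

# Each row: one day of wearable data (illustrative numbers only).
df = pd.DataFrame({
    "date": pd.date_range("2024-03-01", periods=7, freq="D"),
    "steps": [4200, 12100, 6800, 15400, 5100, 9800, 14200],
    "sleep_score": [71, 84, 75, 90, 72, 80, 88],
})

# Pearson correlation between daily steps and that night's sleep score.
r = df["steps"].corr(df["sleep_score"])
print(f"steps vs. sleep score correlation: r = {r:.2f}")

# Surface the pattern the speaker describes: best sleep on the most active day.
best = df.loc[df["sleep_score"].idxmax()]
print(f"Best sleep score ({best['sleep_score']}) followed {best['steps']} steps "
      f"on {best['date'].date()}")
```

A chart-generating feature like the one described would plot this same relationship rather than print it, but the underlying question (do these two signals move together?) is the same.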

Fine-tuned using high-quality research case studies based on de-identified, diverse health signals from Fitbit, this model will help users receive more tailored insights based on patterns in sleep schedule, exercise intensity, changes in heart rate variability, resting heart rate, and many more. This new personal health LLM will power future AI features across our portfolio to bring personalized health experiences to our users. While it’s not meant to diagnose, treat, mitigate, cure, or prevent any disease, injury, or condition, we hope that this more personalized AI coaching model can help you reach your fitness, health, and well-being goals in ways that were not possible before. Just last year at The Check Up, we launched our medically tuned large language model, MedLM.

Fast-forward to today: it’s already being explored in the field. You’ll hear several examples, but I want to highlight three ways our partners around the world are using MedLM and our other AI to innovate. In the US, Ginkgo Bioworks is advancing drug discovery and biosecurity; in the UK,

Huma Therapeutics is supporting clinicians with better insights; and in India, Apollo Hospitals is using AI to ease access to its 24/7 telehealth services. We’re now expanding our MedLM family of models to include multimodality, starting with MedLM for chest X-ray, which is now available in an experimental preview on Google Cloud to allow our customers to build solutions. MedLM for chest X-ray will enable things like findings classification, semantic search, and more. We hope this will provide solutions that can improve the efficiency of radiologists’ workflows, empowering them to deliver high-quality, consistent care.
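To make the “semantic search” capability concrete, here is a minimal, generic sketch of embedding-based retrieval over report text. This is not the MedLM for chest X-ray API; the `embed` function is a hypothetical stand-in for whichever embedding model a customer wires in.

```python
# Generic semantic-search sketch over radiology report snippets.
# NOT the MedLM for chest X-ray API; `embed` is a hypothetical stand-in
# for a real embedding model supplied by the customer's own stack.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash tokens into a fixed-size unit vector."""
    vec = np.zeros(64)
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

reports = [
    "No acute cardiopulmonary findings.",
    "Right lower lobe opacity concerning for pneumonia.",
    "Cardiomegaly with mild pulmonary vascular congestion.",
]
index = np.stack([embed(r) for r in reports])  # one vector per report

query = "possible pneumonia in the right lung"
scores = index @ embed(query)  # cosine similarity (vectors are unit-length)
best = int(np.argmax(scores))
print(f"Top match ({scores[best]:.2f}): {reports[best]}")
```

In a production system the placeholder embedding would be replaced by a model that maps images and text into the same vector space, which is what makes searching X-rays by natural-language description possible.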

When we first showed MedLM to members of our care team, we heard resounding feedback that it could be extremely valuable in supporting a part of the care process that has been an issue since the beginning of healthcare: the nurse-to-nurse handoff, commonly known as the bedside shift report. MedLM’s capabilities can be used to analyze a patient’s health history, clinician notes, and more to create a comprehensive, easy-to-use summary for the handoff. A nurse then reviews the note, confirms its accuracy, modifies it as needed, and hands it over to their counterpart.

This helps nurses quickly get to the information they need. About a year and a half ago, we built the first AI system able to achieve a passing score on a medical licensing exam benchmark called MedQA. Our newest model, fine-tuned for the medical domain, achieves state-of-the-art performance on the MedQA benchmark, standing at over 91%. When doctors see new patients, particularly in complicated cases, they have questions about the patient’s medical history.

Much of this information is held in dense electronic medical records, but we’ve now observed that if we provide a synthetic medical record as part of the context, our models are able to correctly and directly answer a physician’s questions about the patient’s history. Multimodality is the ability to absorb and reason across multiple different data types, like text, images, and audio. Gemini models were built from the ground up to be multimodal, and after all, medicine is inherently multimodal too: to make the best care decisions, healthcare professionals regularly interpret signals across a plethora of sources, including medical images, clinical notes, electronic health records, and lab tests.
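As a sketch of the “record in context” idea, the snippet below assembles a prompt that places a synthetic medical record ahead of a physician’s question, so a long-context model can answer directly from it. The record contents and prompt format are illustrative assumptions, not the actual system.

```python
# Sketch of the "record in context" technique: put a synthetic medical
# record into the prompt so a long-context model can answer history
# questions from it. Record and format are illustrative, not a real API.

SYNTHETIC_RECORD = """\
Patient: Jane Doe (synthetic record, for illustration only)
2019-04: Diagnosed with type 2 diabetes; started metformin 500 mg BID.
2021-09: HbA1c 7.9%; metformin increased to 1000 mg BID.
2023-02: Reported intermittent chest pain; stress test negative.
"""

def build_prompt(record: str, question: str) -> str:
    # Ground the model in the supplied record and ask it to cite entries.
    return (
        "You are assisting a physician. Answer ONLY from the record below, "
        "citing the dated entries you used.\n\n"
        f"--- MEDICAL RECORD ---\n{record}--- END RECORD ---\n\n"
        f"Physician's question: {question}"
    )

prompt = build_prompt(
    SYNTHETIC_RECORD,
    "When was metformin last adjusted, and to what dose?",
)
print(prompt)  # In practice, this string is sent to a long-context model.
```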

We tested how our new model would perform on the significantly more complex task of report generation for 3D brain CTs. With these 3D images, we found that a significant portion of the reports our model generated were judged by independent clinicians to be on par with or better than manually created reports. While these results are very encouraging, they’re not an endorsement that these AI systems are ready to be trusted to generate radiology reports independently. Instead, they emphasize that it’s time to evaluate AI’s ability to assist radiologists in real-world report-generation workflows.

There’s currently no established standard to ensure that the data used to develop AI reflects the diversity of people and experiences around the world, and in healthcare there has historically been a lack of representation of marginalized populations in areas like clinical trial research. We realized that many existing dermatology datasets include primarily skin cancers and other severe conditions, yet lack common concerns like allergic reactions. It’s estimated that less than 20% of dermatology textbook images contain dark skin tones, a statistic that has not changed in about 15 years.

To correct the failures of the past, we need to ensure these biases are not repeated in the way we build AI models. We are using HEAL, short for Health Equity Assessment of machine Learning. Using HEAL, we evaluated an AI model designed to predict dermatology conditions based on photos of skin concerns. The model performed equitably across race, ethnicity, and skin-tone subgroups, but we discovered it had some room for improvement when it came to age: older adults, 70 and over, are at risk of worse health outcomes from skin conditions, and our model recognized that with cancers, but not with more common concerns like allergic reactions. This is a great example of the HEAL framework doing what it was meant to do: highlighting where we need to improve our model. Today, we’re excited to share that Apollo will build upon our TB, lung cancer, and breast cancer models to help diagnose more people sooner.
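The published HEAL framework involves more than a single number, but its core move, comparing model performance across subgroups, can be sketched as below. The evaluation data and age groupings here are hypothetical.

```python
# Sketch of the core HEAL-style check: compare performance by subgroup.
# Toy data; the real HEAL framework goes beyond per-group accuracy.
from collections import defaultdict

# (age_group, model_was_correct) for a hypothetical dermatology eval set.
results = [
    ("18-39", True), ("18-39", True), ("18-39", False),
    ("40-69", True), ("40-69", True), ("40-69", True), ("40-69", False),
    ("70+",   True), ("70+",  False), ("70+",  False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok  # bool adds as 0 or 1

for group in sorted(totals):
    acc = correct[group] / totals[group]
    print(f"{group:>5}: accuracy {acc:.0%} (n={totals[group]})")

# A materially lower score for one group (here, 70+) is exactly the kind
# of gap the framework is meant to surface for targeted improvement.
```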

We believe that AI has an important role here in helping more people receive a diagnosis sooner and get life-saving treatment. Apollo is working toward bringing these models to markets across India. This means that, with the required regulatory approvals, they can incorporate our algorithms into screening programs nationally.

Additionally, over the next 10 years, Apollo will provide AI-powered screenings for TB, lung cancer, and breast cancer in under-resourced communities at no cost. Some health questions are really difficult to describe in words alone. For example, if you see a discoloration or other abnormality on your skin, trying to describe the color, the shape, or the texture is not always easy. So, starting last year, we made it possible to use Google Lens to search what you see on your skin, and it couldn’t be easier: take a picture of your skin with Lens in the Google app, and you’ll find visually similar matches from the web to inform your search. This AI-powered feature, available in more than 150 countries, also works if you’re not sure how to describe something else on your body, like a bump on your lip.

Another part of making health information accessible to everyone is presenting it in formats people can easily understand. With this in mind, we’ve been making the experience more visual on mobile devices: we’ve added images and diagrams from high-quality sources on the web that make it easier to understand symptoms like neck pain, for example. We’re also working to make these more visual results available on mobile for health conditions as well, such as migraines, kidney stones, and pneumonia. Whichever format you understand information in best, whether text, images, or videos, we want to help you find answers to your health questions, and over the next few months we’ll be rolling out this update globally. We’ve expanded the suicide, domestic violence, and sexual assault hotlines shown in Search to dozens of countries and languages; this year alone, we’ll increase coverage across these features by 20 additional countries, including Puerto Rico and Thailand.

This will help people connect with local resources when they need them most, and as searches for mental health crises continue to climb year after year, we’ve made it easier to find clinically validated self-assessments for depression and anxiety in more countries. YouTube’s no-cost, AI-powered tool streamlines the video translation and dubbing process, empowering creators to expand their reach and meet people where they are: “Begin two compressions per second, or 100 to 120 per minute, and press down at least two inches in depth with the heel of your hand.” Imagine a Spanish-speaking mother learning how to perform CPR on her child, a construction worker studying how to control bleeding in case of an accident, or an elderly couple being able to quickly identify the signs of a stroke. These are just a couple of examples of how important it is to break down this issue of language access and give people quality information when they need it most. Starting today, an animation-style course on how to promote racial justice in medical education is available in Spanish, for free, on the Stanford Medicine Continuing Medical Education channel. Thanks to all, and a shout-out to the people from Stanford for their leadership in this as well.