Healthcare is lucrative, but so far, Big Tech has yet to benefit meaningfully from it. While each company has its unique advantages, the field is still so young that any overarching view of the space offers little beyond this: expect plenty of exciting innovation, and more than a few expensive failures.
AI is a focus area for everyone, but its journey from Code to Clinic is an arduous one.
We decided to look at the promising work Big Tech is doing in healthcare and dedicate a separate post to each company. This one covers three of Google’s forays into diagnostics!
Google’s Diagnostics bets
Breast Cancer
Google first published promising results in two studies, in 2016 and 2020. What eventually followed was Google’s first commercial agreement to license its mammography AI research model (and probably its first for any diagnostic AI model): it announced a partnership with iCAD, a global leader in mammogram screening technology, in November 2022, and expanded that partnership in August 2023.
Ultrasound Reading
In many countries, maternal mortality is worsened by a shortage of trained staff. Google is developing AI models to make it easier to interpret important health information from ultrasound images, such as gestational age and fetal presentation, and it recently published two studies on this work.
One does wonder about the commercial potential of this technology, given that it has been developed specifically for low-resource settings.
Diabetic Retinopathy
Diabetics are at high risk of diabetic retinopathy, a diabetes-related condition that can cause vision loss. Google’s Automated Retinal Disease Assessment (ARDA) can help detect diabetic retinopathy within 10 minutes. Google first published a study on ARDA in 2016, and the technology was deployed in Thailand in 2020. However, it received mixed results in the field, since it was not trained to assess retinal scans taken in real-world conditions, including poor lighting.
Musing: Are $$ better spent on fixing poor lighting in clinics than on making AI/ML models accurate enough to process images taken in poor lighting?
As always, an AI model is only as good as the dataset it is trained on (which in this case consisted of high-quality images captured in a controlled setting). Google recently published a study in Nature about its experience deploying ARDA, and while the team hasn’t said what’s next, it has debunked a few AI myths, yielding inputs that should improve the usability of its technology from Code to Clinic.
Conclusion
Based on our secondary research, Google’s MO seems to be:
Enter a partnership with a hospital or organisation that will supply it with rich and (ideally) anonymised data
Train an AI model on the acquired data
If results are promising, publish them in a reputed journal like Nature
Expand partnerships with reputed healthcare institutions on the strength of the published research, working through the nuanced issues that stand between research and eventual commercialisation
Take the model into real-world clinical settings, and eventually sell the technology
Now, it’s a fair assumption that Google has many disease diagnostics at various stages of development. Apart from the ones mentioned above, it is also experimenting with lung cancer detection and with using ultrasound for breast cancer detection. For now, its mammography technology seems closest to the finish line, and while not all of these efforts will eventually be commercialised, we’re excited to see where things land!