Google believes there’s an opportunity for generative AI models to help healthcare workers carry out their responsibilities, or at least to take more healthcare tasks off their plates.
Today, the company unveiled MedLM, a family of models fine-tuned for the healthcare and pharmaceutical industries. MedLM is available to Google Cloud customers in the U.S. (and in preview in select international markets) and is based on Med-PaLM 2, a Google-developed model that performs at an “expert level” on dozens of medical exam questions. Allowlisted users can access MedLM through Vertex AI, Google’s fully managed AI development platform.
Currently, MedLM comes in two flavors: a larger model intended for “complex tasks,” according to Google, and a smaller, fine-tuned model best suited for “scaling across tasks.”
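For readers curious what using these models looks like in practice, here’s a minimal sketch of a call through the Vertex AI Python SDK. The model ID “medlm-large” and the prompt are illustrative assumptions based on Google’s two announced MedLM sizes, not details confirmed in the announcement, and actual access requires an allowlisted Google Cloud project.

```python
# Minimal sketch: querying a MedLM-style text model via the Vertex AI SDK.
# The model ID "medlm-large" is an assumption for illustration; real access
# requires an allowlisted Google Cloud project.
import vertexai
from vertexai.language_models import TextGenerationModel

# Initialize the SDK against your own project and region.
vertexai.init(project="your-gcp-project", location="us-central1")

model = TextGenerationModel.from_pretrained("medlm-large")  # hypothetical ID
response = model.predict(
    "Summarize this clinician-patient conversation: ...",
    temperature=0.2,        # low temperature for more conservative output
    max_output_tokens=512,
)
print(response.text)
```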
“Through piloting our tools with different organizations, we’ve learned that the most effective model for a given task varies depending on the use case,” Yossi Matias, Google’s vice president of engineering and research, said in a blog post shared with TechCrunch ahead of today’s announcement. “For example, summarizing conversations might be best handled by one model, and searching through medications might be better handled by another.”
According to Google, HCA Healthcare, a for-profit operator of healthcare facilities, has been among MedLM’s first users, testing the models with physicians to help draft patient notes at emergency department hospital sites. Another early tester, BenchSci, has built MedLM into its “evidence engine” for identifying, classifying and ranking new biomarkers.
“We’re working in close collaboration with practitioners, researchers, health and life science organizations and the individuals at the forefront of healthcare every day,” Matias said.
Google, alongside its chief rivals Microsoft and Amazon, is racing to capture a healthcare AI market that could be worth tens of billions of dollars by 2032. Amazon recently launched AWS HealthScribe, which uses generative AI to transcribe, summarize and analyze notes from patient-doctor conversations. Microsoft, meanwhile, is piloting a range of AI-driven healthcare products, including “assistant” apps for clinicians underpinned by large language models.
However, there are good reasons to be cautious about this kind of technology. Historically, AI in healthcare has met with mixed success.
Babylon Health, an AI startup backed by the U.K.’s National Health Service, has come under fire on several occasions for claiming its technology can diagnose diseases more accurately than physicians. And IBM was forced to sell its Watson Health division at a loss after technical problems soured customer relationships.
Generative models such as those in Google’s MedLM family are arguably far more sophisticated than their predecessors. But research has shown that they aren’t especially accurate even on relatively simple healthcare questions.
In one study co-authored by a group of ophthalmologists, ChatGPT and Google’s Bard chatbot were asked about eye problems and diseases, and most of the answers from both tools turned out to be wildly off the mark. ChatGPT has also generated cancer treatment regimens containing potentially fatal mistakes. And models like ChatGPT and Bard respond to questions about skin, lung capacity and kidney function with biased and debunked medical misinformation.
The World Health Organization (WHO) warned in October about the risks of applying generative AI in healthcare, noting that models may produce harmful incorrect answers, spread misinformation on health-related topics, and divulge confidential health data. (Models trained on medical records could inadvertently disclose those records, since models can memorize training data and return chunks of it when prompted.)
“While WHO is enthusiastic about the appropriate use of technologies, including [generative AI], to support healthcare professionals, patients, researchers and scientists, there’s concern that caution that would normally be exercised for any new technology is not being exercised consistently with [generative AI],” the WHO said in a statement. “Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI and thereby undermine or delay the potential long-term benefits and uses of such technologies around the world.”
Google has said time and again that it’s releasing its generative AI healthcare tools with extreme caution, and that stance isn’t changing today.
“We’re focused on enabling professionals with a safe and responsible use of this technology,” Matias added. “And we’re committed to not only helping others advance healthcare, but also making sure that these benefits are available to everyone.”