Dr Google? AI could be doctor in the pocket, but company’s health officer urges caution about its limits

The arrival of artificial intelligence in healthcare means everyone could one day have a doctor in their pocket, but Google's chief health officer has urged caution about what AI can do and where its limits should lie.

“There’s going to be an opportunity for people to have even better access to services [and] to great quality services,” Dr Karen DeSalvo told Guardian Australia in an interview last week.

“But we’re a ways to get there. We have a lot of things to work out to make sure the models are constrained appropriately, that they’re factual, consistent, and that they follow these ethical and equity approaches that we want to take – but I’m super excited about the potential even as a doc.”

DeSalvo, a former Obama administration health official, has headed Google's health division since 2021, and last week visited Australia for the first time in the role. She said AI would be a "tool in the toolbox" for doctors that could help ease workforce shortages and improve the quality of care patients receive. It would fill gaps rather than replace doctors, she added.

“I have to say as a doc sometimes: ‘Oh my, there’s this new stethoscope in my toolbox called a large language model, and it’s going to do a lot of amazing things.’ But it’s not going to replace doctors – I believe it’s a tool in the toolbox.”

Last week, a Google research study published in Nature analysed how large language models (LLMs) answer medical questions, including the company's own Med-PaLM model.

The LLMs were fed 3,173 of the most common medical questions searched online. Med-PaLM generated answers rated on par with clinicians' answers 92.9% of the time, while 5.8% of its answers were rated as potentially leading to harmful outcomes. The authors said further evaluation was necessary.

DeSalvo said the technology was still in a "test and learn phase", but that LLMs could be the best intern a doctor ever had, placing every textbook in the world at their fingertips.

“I’m in the camp of, there’s potential here and we should be bold as we’re thinking about what the potential uses could be to help people around the world.”

But it should never replace humans in the diagnosis and treatment of patients, she said, pointing to concerns about the potential for misdiagnosis, with early LLMs prone to so-called "AI hallucinations": fabricating source material to fit the response required.


“One of the things that we’re really focused on at Google is the tuning of the model and the constraining of the model in such a way that it leans factual,” she said. “Whether it’s for a clinician or for the patient, you don’t want a sonnet about your chemotherapy, you want to know what’s the literature say [and] is that right?”

DeSalvo said the ultimate aim was to address the information imbalance between the medical industry and the public, and put as much power into the hands of patients as possible.

“Information is a determinant of health. And it starts with just people understanding and knowing about the potential condition … We want to make sure that people have that knowledge and agency,” she said.

“When I was practising, I loved it when [patients] showed up with the printed sheets or a spiral bound notebook with all their glucose things written in the lines, and we could have a real conversation.”
