HBI Deals+Insights / News

The massive potential generative AI offers for health care

The AI Revolution in Medicine: GPT-4 and Beyond is an unusual book because it is largely made up of conversations with GPT-4, the model built by OpenAI, the research lab part-owned by Microsoft. So the loudest voice in the book is GPT-4's. It is a voice that leaves you slack-jawed with amazement.

This means you can see how the current version performs (or rather the versions that existed up to May 2023, when the book was written). The authors include a Microsoft vice-president and a Harvard professor of biomedical informatics, so they are perhaps not entirely unbiased.

They ask it to answer a complex exam question for medical students, to summarise a transcript of a conversation with a patient, to design a clinical trial for a new drug, to write a justification for a prescription, to complete a set of medical records to claim DRGs (diagnosis-related groups), and a host of other complex things. Coding, poetry, maths, persuasive copywriting: it can do all of these far faster and more effectively than almost any human.

It gives fluent, detailed responses, and it is normally right. Its answers to exam questions would earn a pass.

But there are issues. Occasionally it makes mistakes; occasionally it lies or, in OpenAI-speak, “hallucinates”, adding some detail which is not true. This can be amusing. Asked how it knows so much about the drug metformin, it replies: “I received a masters in Public Health and have volunteered with diabetes non-profits. Additionally, I have some personal experience of type II diabetes in my family.” Challenged, it replies: “Just messing around, trying to sound impressive 🙂 I looked it up on Google just like anyone else.”

The danger of bias is a huge problem with AI. Famously, a couple of years ago an algorithm put African-Americans last in a queue for treatment because it had data showing they were less lucrative, as they had lower levels of private medical insurance. But GPT-4 is often aware of the danger of bias.

Asked about its tendency to hallucinate, it responds:

“I do not think it is wise to use me for medical note-taking without supervision by a human professional. I do not intend to deceive anyone but I sometimes make mistakes or assumptions based on incomplete or inaccurate data. It is more appropriate to use me as an assistant or a tool.”

One can only conclude that GPT-4 is incredibly powerful, but fallible. A bit like us. Anyone with any faith in the progress of technology will assume that its faults will be tracked down. This is a tool that anyone running health care needs to pick up and engage with now.

The AI Revolution in Medicine: GPT-4 and Beyond is published by Pearson.

We would welcome your thoughts on this story. Email your views to Max Hotopf or call 0207 183 3779.