The risks and challenges of regulating AI

Hype, hope and concern, in roughly equal measure, about the use of artificial intelligence in a range of settings including healthcare are reaching unprecedented levels.

It is hard to turn on any form of media without hearing the likes of Tesla boss Elon Musk opine that AI could eventually put everyone out of a job. Indeed, he has warned on multiple occasions that AI poses a greater risk to humankind than nuclear weapons, and has called for a pause in the development of AI more advanced than OpenAI’s GPT-4 software.

And yet, such software can be almost unimaginably useful. Even in its nascency, AI was an incredibly useful tool in healthcare, able to read imaging scans with superhuman accuracy and speed and flag potential abnormalities to its slower but better-informed human colleagues. But the growth of the likes of ChatGPT is taking us into uncharted territory.

Where would one stand, legally, if AI managed to pass a medical licensing exam? And who would you prefer to be “seen” by, if the AI were demonstrably more accurate and achieved better results?

These are not hypothetical questions, because we already live in a world where AI is beating humans in board exams, general diagnostics and specific areas such as breast cancer detection. Perhaps more surprisingly, when it comes to mental health, feedback on chatbots deployed to assist patients suffering from depression found that a high percentage of users actually preferred “speaking” to the non-human adviser.

We may be moving into a world where AI, having first been a tool, then a tool offering advice with the clinician firmly in the driving seat, could be mandated as a regulatory requirement to check the accuracy of human advice rather than the other way around.

In healthcare, privacy is the other big concern. It is already possible to track a person’s every physical and virtual move. A combination of cameras and facial recognition algorithms takes care of the former, while trackers routinely record the websites you visit, how long you stay and what you buy.

But what of your medical records? Medical data is extremely valuable, the question of who owns your data then raises its head, and consumers are starting to realise this. And yet, for life-threatening diseases like cancer, pooling data is an invaluable way to let AI loose on patient data in the hunt for a cure.

Further regulation is currently being debated in Europe, and in October the World Health Organisation recognised the serious challenges of “unethical data collection, cybersecurity threats and amplifying biases or misinformation”.

As we write this, a third day of negotiations has resumed in Brussels on the EU’s AI Act. Given the difficulties encountered trying to agree on something as apparently simple as a common format for electronic medical records, one cannot envy the rowing participants from the 27-member bloc as they look to put the stopper back into a bottle whose genie has already begun to escape.

We would welcome your thoughts on this story. Email your views to David Farbrother or call 0207 183 3779.