AI is in dire need of regulation

From Silicon Valley to national legislatures, debates around the impact artificial intelligence could have on the future of humanity have reached a new fever pitch in recent weeks.

The WHO is warning against the hasty adoption of untested systems. Doctors and public health experts are concerned that AI could further exacerbate health inequalities, compromise personal data, contribute to the growing challenge of misinformation, and jeopardise patient safety and global health.

You may be thinking “but haven’t we been here before?”

Technological progress has always brought with it a sense of unease and, amongst some, nostalgia for the past, yet it has ultimately tended to improve human wellbeing. This time, however, there is a sense that things may be different. The sheer pace of innovation and the lack of regulation governing it are concerning, and policymakers are beginning to wake up.

The dam has broken. Attempts to “halt” AI development, as BMJ Global Health has called for, will most likely prove futile – a global halt would almost certainly be impossible to agree upon, let alone enforce.

Surely, then, the best, and possibly only, viable route for governments is proactive policy aimed at creating a responsible regulatory framework.

But this will likely require some level of global cooperation. Careless or intentional misuse carries a risk of dire repercussions, and we have no reason to believe those repercussions will be confined within borders. It is difficult to overstate how critical greater government oversight will be going forward.

This week UK Prime Minister Rishi Sunak announced the government’s intention for the UK to play a leading role in shaping global AI guidelines. HBI sources, especially those in the digital sector, are as passionate and positive as ever about the role AI has to play in improving health outcomes.

They advise always involving a human overlay and suggest that nothing should ever go directly to a consumer without human evaluation and confirmation. One thing’s for sure: if 350 global AI experts say that AI should be taken as seriously as the threats posed by pandemics, nuclear war, or climate change, then we should be paying attention.

We would welcome your thoughts on this story. Email your views to Michaila Byrne or call 0207 183 3779.