Why the EU won’t become the global standard-setter for AI regulation 

Last week HBI attended a breakfast event hosted by international law firm BonelliErede on the impact of AI in health care. Vincenzo Salvatore, an expert on EU law and the former head of legal at the European Medicines Agency (EMA), gave a presentation in which he discussed the EU’s plans for regulating the use of AI.

Salvatore appeared generally optimistic, both about how AI will impact health care and about the EU’s ability to regulate its use successfully. He seemed to be of the view that the EU is well placed to become the global standard-setter for AI regulation.

There is some precedent for this. In 2018 the European Union implemented the General Data Protection Regulation (GDPR), which quickly became the global model for the regulation of digital personal data. Many countries around the world have since crafted intentionally similar data protection laws of their own. Last year the UK adopted GDPR in its entirety, despite no longer being part of the EU (so much for the bonfire of EU regulation!). US tech companies now abide by GDPR as a matter of course, in what has been termed the “Brussels effect”.

The EU is hoping to repeat this triumph with AI. In April 2021 it became the first major political entity to publish a proposal for regulating AI. The proposal lays out a framework for how different uses of AI should be handled, categorising them by level of risk into four groups: “unacceptable risk” (to be banned), “high risk” (to be subject to strict requirements throughout the technology’s lifecycle), “limited risk” (to be subject to transparency requirements), and “minimal or no risk” (to be left unregulated).

But it is difficult to see how it will succeed in doing so. In part this is because the world today is quite different from the world of 2018. The war in Ukraine and the consequent deterioration of trading relations between major powers have made the world much more fragmented, both economically and politically. Countries are increasingly prioritising the ‘security’ of their supply chains over free trade. Governments are pushing far less to tear down barriers – including regulatory differences – to the free movement of goods and services across borders, so it is unlikely that countries outside the EU will copy the EU’s regulation with a view to facilitating trade.

The only other reason countries outside the EU would copy the EU’s regulation is if they became convinced that the EU has got it ‘right’. And that is even more unlikely.

Regulating AI is not like regulating data protection. Data protection laws need only specify the privacy rights that people have with regard to their data, and the concomitant obligations and responsibilities that companies have when handling that data. But regulating a technology is different – especially a technology that is evolving at breakneck speed and whose limits no one – least of all EU bureaucrats – knows.

The proposal says that anything considered to pose a “clear threat to the safety, livelihoods and rights of people” constitutes an “unacceptable risk” and will be banned, giving as examples social scoring by governments and toys whose voice assistance encourages dangerous behaviour. Taken at face value this would seem to imply that something like driverless cars should be banned outright, since there is a clear risk to people’s safety. The proposal suggests, however, that “critical infrastructures (e.g. transport), that could put the life and health of citizens at risk” will go in the second – “high risk” – category.

Presumably the reason is that, unlike rogue toys, things like driverless vehicles offer a clear benefit that may outweigh the risks involved. However, this is not spelled out. The notion of weighing the benefits and risks of a particular AI application against each other is explicitly mentioned only once in the 100-plus-page document (in the context of whether real-time biometric identification systems should be allowed).

As well as reflecting a very limited understanding of what AI can and (especially) will be able to do, the EU’s proposal seems to suffer from what some psychologists have termed ‘trade-off aversion’ – a cognitive bias whereby people prefer to avoid explicitly weighing the pros and cons of different options when making a decision, because doing so is emotionally and mentally taxing.

This is especially pertinent for health care, where such difficult trade-offs have to be made all the time. We accept the potential side effects of a particular medicine because we have good reason to believe the benefits outweigh the risks. Doctors and nurses continuously have to make uncertain life-or-death decisions in a hospital setting, often under severe time pressure.

When asked what the biggest risk posed by AI use in health care will be, Salvatore said “misdiagnosis”. That doesn’t seem too bad. When HBI asked whether there couldn’t be situations where AI could pose a risk of death in health care, he said “no, because the doctor will always remain responsible for the patient”.

Can this really be so? Perhaps it is true when only considering the current ways in which AI is being used in health care. But as the technology develops and the number of applications proliferates, it is difficult to see how it will remain true.

The promise of AI lies in its potential to carry out an ever greater number of tasks that currently only humans can perform, and to do them better than humans. With driverless vehicles, a certain (small) risk of death may be deemed acceptable because it appears to be lower than the risk of death posed by human drivers. Regulators may quite soon have to accept similar arguments in health care. EU regulators, despite their enthusiasm, do not seem prepared for such inevitable developments.

We would welcome your thoughts on this story. Email your views to Martin De Benito Gellner or call 0207 183 3779.