How will AI in healthcare be regulated?

28 Jun 2023 · 7 min read

2001: A Space Odyssey taught me one key lesson: the development of artificial intelligence (AI) would inevitably lead to the destruction of humans. My fears have not been allayed by Geoffrey Hinton, a “godfather of AI”, recently quitting Google, citing his belief that AI is an existential threat. But all is not lost: by developing regulations to restrict how and when AI can be used, humanity may yet be saved.

AI is an extremely broad term that encompasses anything from a simple computer algorithm using a single reasoning method through to autonomous vehicles and self-aware robots. At present, when discussing AI in healthcare, we are predominantly discussing machine learning, a subset of AI in which computer algorithms apply statistical analysis to past data in order to predict outcomes for new data with the same input parameters. Deep learning is a further subset of machine learning that uses artificial neural networks modelled on the human brain, composed of input modules, processor modules and output modules that work together to solve complex problems.
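As a minimal illustration of the idea (not taken from any particular medical device), the sketch below trains a small feed-forward neural network, the kind of deep-learning model described above, on synthetic "past data" and then predicts an outcome for a new patient. All data, parameters and thresholds here are hypothetical.

# Minimal sketch of the deep-learning idea described above: a small
# feed-forward neural network learns from past (input, outcome) pairs
# and predicts outcomes for new inputs. All data here are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical "past data": 200 patients, 5 input parameters each,
# with a binary outcome loosely tied to the first two parameters.
X_past = rng.normal(size=(200, 5))
y_past = (X_past[:, 0] + 0.5 * X_past[:, 1] > 0).astype(int)

# One hidden layer sits between the input and output "modules".
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_past, y_past)

# Predict the outcome for a new patient with the same input parameters.
X_new = rng.normal(size=(1, 5))
print(model.predict(X_new), model.predict_proba(X_new))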

Doctor AI

The use of AI in healthcare has fantastic potential, from aiding the early diagnosis of patients to supporting extremely delicate surgical procedures. It is already being extensively trialled to identify patients at risk of complications such as sepsis and to determine where healthcare resources need to be focused, something that became increasingly life-critical and time-sensitive during the COVID-19 pandemic. However, there are also a number of inherent risks to AI that need to be considered to ensure a system is both fair and transparent while maintaining user privacy. In January 2021, the FDA released an action plan for the use of AI in medical device software, building on its proposed regulatory framework for modifications to AI/machine learning (AI/ML)-based software as a medical device. Then, in September 2022, the FDA expanded the remit of software considered to be a medical device to include clinical decision support software.

Meanwhile, in April 2021, the European Commission released the AI Act (AIA), a proposal for AI regulations that is not specific to medical devices.

The European Association for Medical devices of Notified Bodies is currently lobbying for MDR/IVDR notified bodies to have their designation scope expanded to incorporate AI, instead of requiring medical devices that utilise AI to be evaluated by both an MDR/IVDR notified body and a separate AI notified body. There will be substantial overlap between the two sets of regulations concerning risk and validation; however, the MDR/IVDR is predominantly concerned with physical safety, while the AIA principally focuses on transparency and equality. Despite there still being no AI-specific regulations in place, hundreds of AI medical devices that use static algorithms (algorithms developed using machine learning but which will not change beyond the point of regulatory submission) have been approved for use in the USA and EU under the medical software regulations already in place. ChatGPT, one of the most talked-about AI models in use today, is an example of a static system. Being ‘static’ avoids the risk of the system being manipulated by users in negative ways, as Microsoft’s AI Twitter chatbot was in 2016, when users trained it to express antisemitic, homophobic and misogynistic views.

New laws and regulations will be essential before “adaptive” algorithms (algorithms that continue to learn and change as they are used) can be incorporated into medical devices. This would provide a pathway for software to improve dramatically post-release, by being exposed to extensive amounts of ideal training data rather than the restricted amounts typically available in open datasets.


1. Measures to control adaptive AI post-release

Change control plan

If the software is expected to continue learning after its release, a predetermined change control plan will become an essential component of authorisation under both the EU’s and the FDA’s proposed regulations. This plan would need to detail any aspects of the software that will evolve during its use, known as pre-specifications. An algorithm change protocol will also be needed to show the methodology by which the pre-specifications will be changed and how any risk to the patient will be mitigated.
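By way of illustration only, such a plan might be captured in a machine-readable form along the following lines. The field names and values below are hypothetical and are not taken from FDA or EU guidance; they simply separate the pre-specifications from the algorithm change protocol.

# Hypothetical sketch of how a predetermined change control plan might be
# recorded alongside the software; field names and values are illustrative.
change_control_plan = {
    "pre_specifications": {
        # Aspects of the software expected to evolve during use.
        "retraining_inputs": "de-identified post-release patient records",
        "model_weights": "may be updated; architecture is fixed",
        "decision_threshold": {"min": 0.30, "max": 0.70},
    },
    "algorithm_change_protocol": {
        # Methodology by which the pre-specifications may change,
        # and how risk to the patient is mitigated.
        "retraining_schedule": "quarterly",
        "validation": "hold-out test set, sensitivity >= 0.90 before rollout",
        "rollback": "revert to last approved model if validation fails",
        "risk_review": "clinical safety officer signs off every update",
    },
}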

Algorithmic impact assessment

Bias in AI will need to be addressed in an algorithmic impact assessment. This document identifies unintended outcomes and sources of risk for an AI system, and details the steps that have been taken, and will be taken, to ameliorate them. For example, a company may have developed its AI using data from an equal proportion of black and white patients to avoid bias. However, if the patients from whom the AI continues to learn after its release are predominantly white, diagnosis or treatment may become skewed towards white patients. The impact assessment must both identify this risk and detail how the algorithm will be written to mitigate it. The Federal Trade Commission is clearly concerned by this risk of bias, having recently published a blog post reiterating some of the key equality laws already in place that must be adhered to.
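A very simplified sketch of the kind of check such an assessment might mandate is shown below: it compares model performance between two patient groups in post-release data and raises a flag when the gap is too large. The group labels, synthetic data and tolerance are all hypothetical.

# Illustrative bias check: compare post-release accuracy across patient
# groups and flag a skew for the impact assessment. Data are synthetic.
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups, group):
    mask = groups == group
    return float((y_true[mask] == y_pred[mask]).mean())

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)
y_pred = y_true.copy()
groups = rng.choice(["group_a", "group_b"], size=1000, p=[0.9, 0.1])
# Simulate poorer performance on the under-represented group.
flip = (groups == "group_b") & (rng.random(1000) < 0.2)
y_pred[flip] = 1 - y_pred[flip]

acc_a = subgroup_accuracy(y_true, y_pred, groups, "group_a")
acc_b = subgroup_accuracy(y_true, y_pred, groups, "group_b")
if abs(acc_a - acc_b) > 0.05:  # hypothetical tolerance
    print(f"Bias flag: accuracy gap {abs(acc_a - acc_b):.2f} exceeds tolerance")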

Continuous evaluation

AI systems will need to be overseen by a human user, and over-reliance on system outputs should be avoided. This will be necessary to counter the risk of the algorithm adapting in a way that had not been accounted for in either the change control plan or the algorithmic impact assessment.
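One hedged illustration of what this oversight could look like in practice, using invented thresholds, is a routine gate that routes low-confidence outputs, or outputs produced after the inputs have drifted from the training data, to a human reviewer rather than acting on them directly.

# Illustrative human-oversight gate: route low-confidence predictions,
# or predictions made after input drift is detected, to a human reviewer.
import numpy as np

CONFIDENCE_FLOOR = 0.80   # hypothetical thresholds
DRIFT_LIMIT = 0.30

def needs_human_review(probability, inputs, training_mean):
    confidence = max(probability, 1 - probability)
    drift = float(np.abs(np.mean(inputs) - training_mean))
    return confidence < CONFIDENCE_FLOOR or drift > DRIFT_LIMIT

# Example: a borderline prediction of 0.55 on inputs that have drifted from
# the training distribution would be escalated to a clinician.
print(needs_human_review(0.55, np.array([2.1, 1.8, 2.4]), training_mean=0.0))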

2. What to expect from AI regulation in medical devices

AI software will have to adhere to regulations already in place regarding privacy and software in medical devices, where data management and transparency to both users and regulators are essential. For example, the EU’s General Data Protection Regulation (GDPR) states that if a decision is made about a user by an algorithm without any human involvement, the user must be informed about the process and has the right to challenge the decision. These requirements may seem strict, but they essentially follow the risk-based approach already outlined in the EU’s medical device regulations.
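Purely as a sketch of how a system might support that GDPR point, the hypothetical record below logs whether a decision was fully automated, records what the user was told, and keeps a route for the user to challenge the judgement; none of the field names come from the regulation itself.

# Hypothetical sketch of logging an automated decision so the user can be
# informed about the process and can challenge it (per the GDPR point above).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    patient_id: str
    outcome: str
    fully_automated: bool            # True if no human was involved
    explanation: str                 # what the user is told about the process
    challenges: list = field(default_factory=list)

    def challenge(self, reason: str) -> None:
        # A challenge triggers a human review of the algorithm's judgement.
        self.challenges.append((datetime.now(timezone.utc), reason))

record = DecisionRecord(
    patient_id="anon-001",
    outcome="referred for further screening",
    fully_automated=True,
    explanation="This decision was made by an algorithm; you may request human review.",
)
record.challenge("Patient requests clinician review")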

With this new, much clearer pathway to FDA and EU approval, it is only a matter of time before the first medical device with adaptive AI software is approved in the USA and/or EU. That said, these devices are likely to be relatively constrained, with restricted pre-specifications to limit the extent of the risk considerations required.

Progress is continuously being made in this space, with new guidance emerging such as the UK Government’s Software and AI as a Medical Device guidance, as well as the Good Machine Learning Practice guidance. As we continue on the path towards integrating more complex AI into our medical devices and healthcare, it will be important for regulations to catch up and keep pace.
