AI is an extremely broad term, encompassing anything from a simple computer algorithm that uses a single reasoning method through to autonomous vehicles and self-aware robots. At present, when discussing AI in healthcare, we are predominantly discussing machine learning: a subset of AI in which computer algorithms apply statistical analysis to past data in order to predict outcomes for new data with the same input parameters.
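To make this concrete, here is a minimal sketch of machine learning in its simplest supervised form, using scikit-learn. The input features and the at-risk labels are invented purely for illustration.

```python
# A minimal sketch of supervised machine learning: fit a statistical model
# to past (labelled) data, then predict outcomes for new data with the same
# input parameters. All values below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Past data: each row is [age, heart_rate, temperature_c]; each label marks
# whether that patient went on to develop a complication (1) or not (0).
X_past = [[65, 95, 38.2], [42, 70, 36.8], [71, 110, 39.0], [35, 65, 36.5]]
y_past = [1, 0, 1, 0]

model = LogisticRegression().fit(X_past, y_past)

# Predict the outcome for a new patient with the same input parameters.
print(model.predict([[58, 100, 38.5]]))  # e.g. [1] -> flagged as at risk
```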

The use of AI in healthcare has fantastic potential, from aiding the early diagnosis of patients to assisting in extremely sensitive surgical procedures. It is already being extensively trialled to identify patients at risk of complications such as sepsis and to determine where healthcare resources need to be focused, something that has become increasingly life-critical and time-sensitive during the COVID-19 pandemic. However, AI also carries a number of inherent risks that need to be considered to ensure a system is both fair and transparent while maintaining user privacy.
Despite there being no specific AI regulations in place, hundreds of AI medical devices that use “locked” algorithms (algorithms that have been developed using machine learning but will not change beyond the point of regulatory submission) have been approved for use in the USA and EU by applying medical software regulations already in place. However, new laws and regulations will be essential before incorporating “adaptive” algorithms (algorithms that continue to learn and change as they are used) into medical devices.
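The distinction is easy to picture in code. The sketch below uses scikit-learn’s SGDClassifier, which supports incremental updates via partial_fit; it is not how any approved device works, just a minimal illustration of “frozen at submission” versus “keeps learning in use”.

```python
# Illustrative contrast between a "locked" and an "adaptive" algorithm.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3))          # synthetic development data
y_train = (X_train[:, 0] > 0).astype(int)    # synthetic labels

# Locked: trained once before regulatory submission, then never updated.
locked_model = SGDClassifier(random_state=0).fit(X_train, y_train)

# Adaptive: starts identically, but keeps updating as new data arrive.
adaptive_model = SGDClassifier(random_state=0).fit(X_train, y_train)
X_new = rng.normal(size=(10, 3))             # data seen after release
y_new = (X_new[:, 0] > 0).astype(int)
adaptive_model.partial_fit(X_new, y_new)     # behaviour changes in use

# The locked model's predictions stay fixed; the adaptive model's can drift.
```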
This year, the FDA released an action plan for the use of AI in medical device software, building on their proposed regulatory framework for modifications to AI/machine learning (AI/ML) based software as a medical device. Meanwhile, the European Commission has also released a proposal for AI regulations. Together, these will provide a pathway for software to improve drastically post-release, by being exposed to extensive amounts of ideal training data rather than the restricted amounts typically available in open datasets.

Measures to control adaptive AI post-release
Change control plan
If the software is expected to continue learning after its release, a predetermined change control plan will become an essential component of the authorisation of AI use in a medical device under both the EU’s and the FDA’s proposed regulations. This plan would need to detail any aspects of the software that will evolve during its use, known as pre-specifications. An algorithm change protocol will also be needed to show the methodology by which the pre-specifications will be changed and how any risk to the patient will be mitigated.
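Neither regulator prescribes a file format for such a plan, but as a rough illustration, the pre-specifications and algorithm change protocol could be captured in a machine-readable structure like the hypothetical one below. Every field name and value is an assumption made for the sake of the sketch.

```python
# Hypothetical, machine-readable shape for a predetermined change control
# plan. Neither the FDA nor the EU mandates a format; all names are invented.
from dataclasses import dataclass

@dataclass
class PreSpecification:
    """An aspect of the software that is permitted to evolve after release."""
    component: str        # e.g. "sepsis risk-score threshold"
    allowed_change: str   # bounds on how far it may drift
    rationale: str

@dataclass
class AlgorithmChangeProtocol:
    """Methodology for applying pre-specified changes and mitigating risk."""
    retraining_procedure: str     # how new data are curated and used
    validation_criteria: str      # performance gates before redeployment
    risk_mitigations: list[str]   # how any risk to the patient is controlled

@dataclass
class ChangeControlPlan:
    pre_specifications: list[PreSpecification]
    change_protocol: AlgorithmChangeProtocol

plan = ChangeControlPlan(
    pre_specifications=[PreSpecification(
        component="sepsis risk threshold",
        allowed_change="may shift within a pre-validated +/-5% band",
        rationale="calibration to local patient populations",
    )],
    change_protocol=AlgorithmChangeProtocol(
        retraining_procedure="monthly retraining on audited site data",
        validation_criteria="sensitivity >= 0.90 on a held-out test set",
        risk_mitigations=["roll back to the last approved model on failure"],
    ),
)
```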
Algorithmic impact assessment
Bias in AI will need to be addressed in an algorithmic impact assessment: a document that identifies unintended outcomes and sources of risk for an AI system, as well as detailing the steps that have been and will be taken to ameliorate them. For example, a company may have developed its AI using data from an equal proportion of black and white patients to avoid bias. However, if the patients the AI continues to learn from after its release are predominantly white, its diagnoses or treatment recommendations may skew towards white patients. The impact assessment must both identify this risk and detail how the algorithm will be written to mitigate it. The Federal Trade Commission is clearly concerned by this risk of bias, having recently published a blog post reiterating some of the key equality laws already in place that must be adhered to.
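One check such an assessment might mandate can be sketched in a few lines: compare the demographic mix of the data the AI learns from post-release with the mix used during development, and pause retraining when they drift apart. The tolerance and group labels below are illustrative assumptions, not regulatory requirements.

```python
# Sketch of a demographic-drift check an impact assessment might call for.
from collections import Counter

def demographic_skew(dev_groups, live_groups, tolerance=0.10):
    """Return groups whose share of the live data drifts more than
    `tolerance` from their share of the original development data."""
    dev_share = {g: n / len(dev_groups) for g, n in Counter(dev_groups).items()}
    live_share = {g: n / len(live_groups) for g, n in Counter(live_groups).items()}
    return {
        g: (dev_share.get(g, 0.0), live_share.get(g, 0.0))
        for g in set(dev_share) | set(live_share)
        if abs(dev_share.get(g, 0.0) - live_share.get(g, 0.0)) > tolerance
    }

# Development data had equal proportions; live data is predominantly white.
dev = ["black"] * 500 + ["white"] * 500
live = ["black"] * 100 + ["white"] * 400

print(demographic_skew(dev, live))
# {'black': (0.5, 0.2), 'white': (0.5, 0.8)} -> pause retraining and review
```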
Continuous evaluation
AI systems will need to be overseen by a human user, and over-reliance on system outputs should be avoided. This will be necessary to counter the risk of the algorithm adapting in a way that has not been accounted for in either the change control plan or the algorithmic impact assessment.
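As a rough sketch of what that oversight could look like in practice, the snippet below tracks a model’s rolling agreement with clinician-confirmed outcomes and flags the need for human review when performance drifts from its approved baseline. All thresholds are invented for illustration.

```python
# Sketch of continuous evaluation with a human in the loop; all numbers
# here are illustrative assumptions, not regulatory values.
from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_accuracy=0.90, window=200, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.results = deque(maxlen=window)  # rolling window of outcomes

    def record(self, prediction, confirmed_outcome):
        """Log whether the model agreed with the clinician-confirmed result."""
        self.results.append(prediction == confirmed_outcome)

    def needs_human_review(self):
        """Flag a reviewer if rolling accuracy falls below the baseline."""
        if len(self.results) < self.results.maxlen:
            return False  # not enough post-release evidence yet
        rolling = sum(self.results) / len(self.results)
        return rolling < self.baseline - self.tolerance

# In production: call monitor.record(model_output, clinician_diagnosis) per
# case, and escalate whenever monitor.needs_human_review() returns True.
monitor = PerformanceMonitor()
```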
What to expect from AI regulation in medical devices
AI software will also have to adhere to regulations already in place regarding privacy and software in medical devices, where data management and transparency to both users and regulators are essential. For example, under the EU’s General Data Protection Regulation (GDPR), if a decision about a user is made by an algorithm without any human involvement, the user must be informed about the process and has the right to challenge its judgement. These requirements may seem strict, but they essentially adhere to the risk-based approach already outlined in the EU’s Medical Devices Regulation.
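As a purely hypothetical sketch of how that GDPR requirement might be supported in software, a device could keep a record of every fully automated decision, expose the explanation given to the user, and let the user contest it. None of the field names below come from the regulation itself.

```python
# Hypothetical decision record for GDPR-style transparency; the schema is
# an assumption for illustration, not anything mandated by the regulation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    patient_id: str
    decision: str
    model_version: str
    human_involved: bool   # False triggers the notification obligation
    explanation: str       # the process description disclosed to the user
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    challenged: bool = False

    def challenge(self):
        """Mark the decision as contested so a human must re-review it."""
        self.challenged = True

record = AutomatedDecisionRecord(
    patient_id="anon-0042",
    decision="flagged for sepsis review",
    model_version="1.3.0",
    human_involved=False,
    explanation="elevated heart rate and temperature trend",
)
record.challenge()  # the user exercises their right to contest the judgement
```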
With this new, much clearer pathway to FDA and EU approval, it is only a matter of time before the first medical device with adaptive AI software is approved in the USA and/or the EU. That said, these devices are likely to be relatively constrained, with restricted pre-specifications to keep the risk considerations manageable. As we continue on the path towards integrating more complex AI into our medical devices and healthcare, it will be important for regulation to catch up and keep pace.