The use of AI in healthcare has enormous potential, from aiding the early diagnosis of patients to assisting in extremely sensitive surgical procedures. It is already being extensively trialled to identify patients at risk of complications such as sepsis and to determine where healthcare resources need to be focused, something that became increasingly life-critical and time-sensitive during the COVID-19 pandemic. However, AI also carries a number of inherent risks that must be considered to ensure a system is both fair and transparent while maintaining user privacy. In January 2021, the FDA released an action plan for the use of AI in medical device software, building on its proposed regulatory framework for modifications to AI/machine learning (AI/ML)-based software as a medical device. Then, in September 2022, the FDA expanded the remit of software considered to be a medical device to include clinical decision support software.
Meanwhile, in April 2021, the European Commission released the AI Act (AIA), a proposal for AI regulation that is not specific to medical devices. The European Association of Medical Devices Notified Bodies is currently lobbying for MDR/IVDR notified bodies to expand their designation scope to incorporate AI, rather than requiring medical devices that use AI to be evaluated by both an MDR/IVDR notified body and a separate AI notified body. There will be substantial overlap between the two sets of regulations concerning risk and validation. However, the MDR/IVDR is predominantly concerned with physical safety, while the AIA focuses principally on transparency and equality. Despite there still being no AI-specific regulations in place, hundreds of AI medical devices that use static algorithms (algorithms that have been developed using machine learning but will not change beyond the point of regulatory submission) have been approved for use in the USA and EU by applying the medical software regulations already in place. ChatGPT, one of the most talked-about AI models in use today, is an example of a static system. Being ‘static’ avoids the risk of users manipulating the system in negative ways, as happened to Microsoft’s AI Twitter chatbot in 2016, when users trained it to express antisemitic, homophobic and misogynistic views.
New laws and regulations will be essential before “adaptive” algorithms (algorithms that continue to learn and change as they are used) can be incorporated into medical devices. Adaptive algorithms would provide a pathway for software to improve dramatically after release, by being exposed to extensive amounts of real-world training data rather than the restricted amounts typically available in open datasets.
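To make the static/adaptive distinction concrete, the sketch below contrasts the two behaviours in Python. It is purely illustrative: the synthetic dataset, the scikit-learn SGDClassifier and the batch sizes are assumptions for the example, not a description of any approved device.

```python
# Illustrative sketch: a "static" model is frozen at the point of regulatory
# submission, while an "adaptive" model keeps updating from data seen in use.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_initial, y_initial = X[:500], y[:500]        # data available pre-submission
X_postmarket, y_postmarket = X[500:], y[500:]  # data encountered after release

# Static algorithm: trained once, then locked. Post-market data is only
# ever used for inference, never for further training.
static_model = SGDClassifier(random_state=0)
static_model.fit(X_initial, y_initial)
static_predictions = static_model.predict(X_postmarket)

# Adaptive algorithm: continues to learn from post-market data, so its
# behaviour can drift away from the version that was originally approved.
adaptive_model = SGDClassifier(random_state=0)
adaptive_model.partial_fit(X_initial, y_initial, classes=np.unique(y))
for X_batch, y_batch in zip(np.array_split(X_postmarket, 10),
                            np.array_split(y_postmarket, 10)):
    adaptive_model.partial_fit(X_batch, y_batch)  # incremental update in use
```

The adaptive model's decision boundary after deployment no longer matches the one submitted for approval, which is exactly the situation current medical device regulations were not written to accommodate.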