Regulation of AI in healthcare: navigating the EU AI Act and FDA

29 Apr 2025 · 14 min read

Artificial intelligence (AI) is transforming industries at a breathtaking pace, but the medical sector remains one of the most cautious adopters. Rather than rushing to deploy AI solutions, the industry has rightly prioritised managing risk to ensure that any AI medical technology is both safe and effective in use. However, with the potential to dramatically improve healthcare, AI-powered medical devices are drawing attention from regulators worldwide, leading to critical discussions about safety, transparency and long-term impact.

With the EU AI Act in force since 1 August 2024, and with the FDA's approach to regulating AI in healthcare continuing to evolve, developers, manufacturers and healthcare providers now have greater clarity when navigating this complex landscape. Drawing on this regulatory guidance, there are several critical areas that developers must address to ensure safety in medical AI applications: data quality and sources of bias, the role of predetermined change control plans, techniques for AI validation and considerations for post-market surveillance.

AI in healthcare regulations: why we need guardrails

AI systems offer immense promise for medicine – think AI diagnosing diseases earlier, personalising treatments or managing patient care in real time. However, the complexity and adaptive nature of AI introduce risks. What if the AI makes the wrong decision? How do you ensure the system doesn't evolve into something unpredictable over time? These are concerns that regulatory bodies like the FDA and the EU are grappling with, leading to a push to establish robust, risk-based frameworks.

In medical settings, the question isn’t just whether AI can do something but whether it should – and whether we can ensure its accuracy and safety when it does. Unlike consumer-facing applications where mistakes might cause inconvenience, errors in medical AI can mean life or death.

Global regulatory expectations around AI

Understanding the regulatory landscape and expectations related to AI adoption in medical settings is crucial. As AI technologies continue to evolve, regulatory bodies worldwide have developed guidelines and frameworks to ensure safe and effective integration. There are several key regulatory expectations for medical device manufacturers to note:

  • Risk assessment: before an AI system can be deployed, developers must identify potential safety risks – such as what could happen if the AI malfunctions or provides incorrect information – and implement strategies to mitigate those risks.
  • Transparency: one of the most common global requirements is transparency – both in terms of how AI makes decisions and how that information is relayed to the user. AI systems are often seen as “black boxes,” where it’s difficult to understand how they arrive at specific conclusions. However, regulators are pushing for documentation of algorithms, data sources, the decision-making processes within AI systems and clear instructions for use. Clinicians need to understand the rationale behind the system’s advice to trust and use it confidently in patient care.
  • Clinical validation: no AI system can be trusted in a medical setting without thorough clinical validation. Before any AI-driven medical device can be used, it must undergo rigorous testing in real-world scenarios. These validation studies are designed to prove that the AI performs reliably and safely across diverse populations and clinical settings. For example, a predictive AI model for heart disease must be validated with data from multiple demographic groups to ensure it performs well across ages, ethnicities and health profiles. Failing to do so could result in biased or inaccurate predictions, which could compromise patient care (a sketch of this kind of subgroup check follows this list).
  • Post-market surveillance: AI systems in healthcare are not static; they can adapt and change, especially those that continuously learn. This is why global regulators emphasise the need for continuous monitoring – often called post-market surveillance. Once the AI is deployed, developers are expected to closely monitor its performance and safety, ensuring that any issues are quickly identified and rectified. This might involve continuously collecting real-world data from hospitals where the AI is in use, analysing it for signs of bias, failure or any unexpected changes in performance.
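
To make the validation expectation above more concrete, here is a minimal sketch of a per-subgroup performance check for a binary diagnostic model. It assumes a hypothetical validation cohort file with prediction scores, confirmed labels and demographic attributes; the file name, column names and sensitivity target are illustrative, not regulatory values.

```python
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

# Hypothetical validation cohort: one row per patient, with the model's predicted
# probability, the confirmed diagnosis and demographic attributes.
df = pd.read_csv("validation_cohort.csv")  # assumed columns: prob, label, age_band, ethnicity, sex

THRESHOLD = 0.5          # decision threshold used by the device
MIN_SENSITIVITY = 0.85   # illustrative acceptance target, not a regulatory value


def subgroup_report(frame: pd.DataFrame, by: str) -> pd.DataFrame:
    """Summarise performance for each level of one demographic attribute."""
    rows = []
    for group, sub in frame.groupby(by):
        if sub["label"].nunique() < 2:
            continue  # skip subgroups too small to evaluate meaningfully
        preds = (sub["prob"] >= THRESHOLD).astype(int)
        rows.append({
            "attribute": by,
            "group": group,
            "n": len(sub),
            "sensitivity": recall_score(sub["label"], preds),
            "specificity": recall_score(sub["label"], preds, pos_label=0),
            "auroc": roc_auc_score(sub["label"], sub["prob"]),
        })
    return pd.DataFrame(rows)


report = pd.concat(subgroup_report(df, attr) for attr in ["age_band", "ethnicity", "sex"])
print(report)

# Flag any subgroup that falls below the predefined sensitivity target so it can be
# investigated before deployment (and re-checked during post-market monitoring).
underperforming = report[report["sensitivity"] < MIN_SENSITIVITY]
if not underperforming.empty:
    print("Subgroups below target sensitivity:")
    print(underperforming)
```

The same report can be re-run on real-world data once the device is deployed, which is one simple way to connect pre-market validation with the continuous monitoring described above.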

The FDA’s action plan: managing AI across the product lifecycle

In the U.S., the FDA has taken proactive steps to regulate AI and machine learning-based medical devices. Because its existing regulations were not designed for adaptive AI systems (AI models that continuously learn from new data to support decision-making), the FDA has released action plans and draft guidance documents over the last few years to account for this.

Released in January 2021, the FDA’s proposed action plan – “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan” – highlights a regulatory strategy based on the Total Product Lifecycle (TPLC) approach. This means regulating AI in medical devices does not stop at initial approval; it covers the full life of the product – from development, through to deployment and continuous post-market monitoring.

For AI systems that are designed to evolve, such as machine learning algorithms that learn from new data and improve over time, this action plan is crucial. Rather than repeatedly submitting for FDA approval whenever an update is made, the agency suggests using a Predetermined Change Control Plan (PCCP). This would allow manufacturers to implement minor changes to their AI models without requiring new regulatory approval, as long as the modifications are within the predefined parameters.

Imagine a diagnostic AI that learns to recognise patterns more accurately over time. Under the FDA’s model, this system could update itself in line with its PCCP, maintaining safety and effectiveness without delaying necessary improvements for months due to lengthy regulatory re-approvals. Here, the focus shifts to real-world performance monitoring, where systems are continuously observed and adjusted based on real-world data.
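
In practice, a PCCP pre-agrees which kinds of modification are allowed and the acceptance criteria every update must meet before release. The sketch below shows a simplified gate of that kind; the metric names, thresholds and non-inferiority margin are hypothetical illustrations, not values drawn from FDA guidance.

```python
from dataclasses import dataclass


@dataclass
class PCCPCriteria:
    """Hypothetical acceptance criteria agreed in advance as part of a PCCP."""
    min_sensitivity: float = 0.90
    min_specificity: float = 0.85
    max_auroc_drop: float = 0.02  # non-inferiority margin vs. the currently deployed model


def within_pccp(candidate: dict, deployed: dict, criteria: PCCPCriteria) -> bool:
    """Return True only if the retrained model meets every predefined criterion.

    `candidate` and `deployed` are metric dictionaries produced by the same locked
    validation protocol, e.g. {"sensitivity": 0.93, "specificity": 0.88, "auroc": 0.95}.
    """
    return all([
        candidate["sensitivity"] >= criteria.min_sensitivity,
        candidate["specificity"] >= criteria.min_specificity,
        candidate["auroc"] >= deployed["auroc"] - criteria.max_auroc_drop,
    ])


# Example: roll out the update only if it stays inside the pre-approved envelope;
# anything outside the envelope would instead trigger a new regulatory submission.
candidate_metrics = {"sensitivity": 0.93, "specificity": 0.88, "auroc": 0.95}
deployed_metrics = {"sensitivity": 0.91, "specificity": 0.87, "auroc": 0.94}

if within_pccp(candidate_metrics, deployed_metrics, PCCPCriteria()):
    print("Update is within the PCCP envelope – proceed with the documented release process.")
else:
    print("Update falls outside the PCCP – a new marketing submission would be required.")
```

The key design choice is that the criteria are fixed and documented before any retraining happens, so the decision to release an update is mechanical rather than discretionary.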

Transparency is also key. The FDA insists that developers must provide clear documentation on how AI algorithms make decisions – information that must be understandable not just to regulatory experts, but also to healthcare professionals using the device in clinical settings.

AI being used in a healthcare setting

The EU AI Act: a stricter, more granular approach

The EU AI Act establishes one of the most comprehensive regulatory frameworks for AI in the world and applies to AI systems across multiple sectors that are placed on the European market. AI systems are classified into risk categories, and AI-based medical devices fall into the high-risk category under the legislation. The goal of the Act is to harmonise AI regulation across Europe and to ensure that developers of medical AI systems meet higher safety standards than those applied to less critical technologies. It also means that developers of AI-based medical devices must meet additional requirements beyond those specified in the EU's existing Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR).

The EU AI Act emphasises several core principles: 

  1. Classification and risk management: AI systems are classified based on their risk to human health and fundamental rights. For medical devices, which directly impact patient care, the classification is naturally high-risk. This means more stringent requirements for transparency, accuracy and human oversight.
  2. Bias, data quality and transparency: a key concern for AI in healthcare is ensuring high-quality, unbiased data. If an AI system is trained on flawed or unrepresentative data, its outputs could disproportionately harm certain patient populations. The EU AI Act mandates that medical device developers take proactive measures to prevent such biases by using high-quality datasets for training, validation and testing of their AI models. Transparency is also required: the system's outputs must be interpretable by users, who should be informed that AI is being used.
  3. Quality Management System: providers of high-risk AI systems, including medical devices, must implement a quality management system (QMS). This system should cover risk management, data governance, technical documentation, data logging, labelling, design accuracy, robustness, safety, cybersecurity and post-market monitoring. The QMS ensures compliance with the AI Act and must be documented in clear policies, procedures and instructions. 
  4. Post-market surveillance: much like the FDA’s approach, the EU AI Act also emphasises post-market surveillance. Manufacturers are expected to collect and analyse data on how the AI system performs after it’s deployed, looking for any potential adverse events or unexpected behaviour. They’re also tasked with correcting issues in real-time, ensuring the AI adapts safely in clinical settings. 
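
As a concrete example of what such monitoring might look like in practice, the sketch below compares recent real-world prediction scores against the scores from pre-market validation using a population stability index (PSI), a common drift measure. The data files, monitoring window and alert threshold are hypothetical.

```python
import numpy as np


def population_stability_index(reference: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a recent score distribution.

    Scores are assumed to be probabilities in [0, 1]. A common rule of thumb treats
    PSI > 0.2 as a shift worth investigating; the threshold is illustrative only.
    """
    edges = np.linspace(0.0, 1.0, bins + 1)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    new_frac = np.histogram(recent, bins=edges)[0] / len(recent)
    ref_frac = np.clip(ref_frac, 1e-6, None)  # avoid log(0) and division by zero
    new_frac = np.clip(new_frac, 1e-6, None)
    return float(np.sum((new_frac - ref_frac) * np.log(new_frac / ref_frac)))


# Hypothetical monitoring job: scores from pre-market validation vs. the last 30 days of
# real-world use exported from deployment logs at participating hospitals.
reference_scores = np.load("validation_scores.npy")  # assumed artefact from pre-market validation
recent_scores = np.load("last_30_days_scores.npy")   # assumed export from post-market data collection

psi = population_stability_index(reference_scores, recent_scores)
if psi > 0.2:
    print(f"PSI = {psi:.3f}: score distribution has drifted – trigger a post-market investigation.")
else:
    print(f"PSI = {psi:.3f}: no material drift detected in this window.")
```

A real surveillance plan would track several signals (subgroup performance, input data quality, adverse event reports) on a defined schedule, but the pattern is the same: compare deployment data against a documented baseline and escalate when a predefined threshold is crossed.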

The EU AI Act timeline

AI in medical devices: navigating a complex landscape

By adopting a risk-based approach to medical AI development, ensuring the use of high-quality and representative training data, maintaining transparency about the system's decisions and capabilities, and implementing rigorous post-market surveillance, manufacturers can deploy safe and effective AI solutions.

How is it going and what’s next?

EU AI Act 

There was a lot of buzz around the EU AI Act when it came into force in August 2024, but since then it has been relatively quiet. Since February 2025, AI systems that fall under the prohibited category (such as social scoring AI) have had to be removed from the EU market.

The EU AI Act also mandates that each Member State establish at least one regulatory sandbox by August 2026 – an environment designed to foster innovation by allowing companies to develop, test and validate AI systems under regulatory supervision. Some countries, such as Spain, have already taken the lead by legislating and piloting sandbox frameworks aligned with the EU AI Act. Other nations, like Denmark and Sweden, are running sector-specific initiatives, while the UK – though outside the EU – is piloting a multi-regulator “AI and Digital Hub” that aligns conceptually with the EU’s goals.

With ongoing advancements in AI, there are concerns that Europe may face challenges in maintaining its innovation and competitiveness due to stringent regulations on AI development.  

As of April 2025, the European Commission is actively seeking feedback to reduce the regulatory burden of the AI Act on startups and smaller innovators. This initiative aims to balance rigorous safety standards with the need to foster innovation, particularly in the medical device sector. 

FDA guidance

The FDA started the year off with a bang, releasing a draft guidance titled “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” This document outlines a Total Product Lifecycle (TPLC) approach, providing comprehensive recommendations for the design, development and maintenance of AI-enabled medical devices. The guidance emphasises the importance of transparency, risk management and post-market surveillance, offering a clear framework for manufacturers to ensure the safety and effectiveness of their AI-driven products. 

Shortly afterwards, however, administrative actions led to significant staff reductions within the FDA, particularly affecting departments responsible for digital health and AI. These layoffs have raised concerns about potential delays in the approval processes for AI-enabled medical devices. Manufacturers should anticipate possible impacts on regulatory timelines and consider proactive engagement with the FDA to navigate these challenges effectively.

Both the FDA and the EU are shaping a future where AI-driven medical devices not only meet high regulatory standards but also continuously evolve in a safe, monitored manner. With the EU’s obligations for high-risk AI systems that are medical devices set to apply 36 months from 1 August 2024 (i.e., from August 2027), companies have a clear timeline to adapt to these new standards.

Successfully meeting these requirements will not only build trust among regulators and clinicians but also pave the way for broader adoption and integration of AI in healthcare, ultimately improving patient outcomes and advancing medical innovation. 
