How is it going and what’s next?
EU AI Act
There was a lot of buzz around the EU AI Act when it came into force in August 2024, but since then it has been relatively quiet. Since February 2025, AI systems that fall under the Act's prohibited practices (such as social-scoring AI) have had to be removed from the EU market.
The EU AI Act also mandates that each Member State establish at least one regulatory sandbox by August 2026 – an environment designed to foster innovation by allowing companies to develop, test and validate AI systems under regulatory supervision. Some countries, such as Spain, have already taken the lead by legislating and piloting sandbox frameworks aligned with the EU AI Act. Other nations, like Denmark and Sweden, are piloting sector-specific initiatives, while the UK – though outside the EU – is modelling a multi-regulator "AI and Digital Hub" that aligns conceptually with the EU's goals.
With ongoing advancements in AI, there are concerns that Europe may face challenges in maintaining its innovation and competitiveness due to stringent regulations on AI development.
As of April 2025, the European Commission is actively seeking feedback to reduce the regulatory burden of the AI Act on startups and smaller innovators. This initiative aims to balance rigorous safety standards with the need to foster innovation, particularly in the medical device sector.
FDA guidance
The FDA started the year off with a bang, releasing a draft guidance titled “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” This document outlines a Total Product Lifecycle (TPLC) approach, providing comprehensive recommendations for the design, development and maintenance of AI-enabled medical devices. The guidance emphasises the importance of transparency, risk management and post-market surveillance, offering a clear framework for manufacturers to ensure the safety and effectiveness of their AI-driven products.
Shortly afterwards, however, administrative actions led to significant staff reductions within the FDA, particularly affecting departments responsible for digital health and AI. These layoffs have raised concerns about potential delays in the approval processes for AI-enabled medical devices. Manufacturers should anticipate possible impacts on regulatory timelines and consider proactive engagement with the FDA to navigate these challenges effectively.
Both the FDA and the EU are shaping a future where AI-driven medical devices not only meet high regulatory standards but continuously evolve in a safe, monitored manner. With the EU's obligations for high-risk AI systems, including medical devices, set to apply 36 months from August 1, 2024, companies have a clear timeline to adapt to these new standards.
Successfully meeting these requirements will not only build trust among regulators and clinicians but also pave the way for broader adoption and integration of AI in healthcare, ultimately improving patient outcomes and advancing medical innovation.