Simplification or Back to Square One? The Future of EU Medical AI Regulation

As part of a recent effort to simplify and harmonize its digital framework, the European Union (EU) is considering two interlinked regulatory proposals that could fundamentally reshape the governance of artificial intelligence (AI) in medical devices. In this blog post, we introduce the two regulatory proposals and outline some of the potential consequences for health tech, clinicians and patients should they be adopted.

The Current EU Approach

Under the EU Medical Devices Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR), medical devices and in vitro diagnostic devices must be assessed for safety and performance before they can be placed on the market. The EU AI Act adds a horizontal set of requirements for certain “high-risk AI systems,” including mandatory risk management, transparency, cybersecurity measures, and human oversight. Under the AI Act’s current structure, AI used as part of medical devices and diagnostics is generally treated as high-risk by default. In practice, this means that many AI medical devices are expected to meet both MDR/IVDR requirements and the AI Act’s high-risk AI requirements — an overlap that industry has increasingly criticized as duplicative and as an unnecessary regulatory burden.

Proposal 1: The Digital Omnibus

The first proposal aiming to address this overlap is a legislative package referred to as the “Digital Omnibus,” which was brought forward by DG CONNECT — the EU Commission’s Directorate-General for Communications Networks, Content and Technology. The Digital Omnibus aims to make it easier to apply the AI Act’s requirements for high-risk AI systems (HRAIS) to AI medical devices and AI in vitro diagnostic devices already regulated under the MDR/IVDR.

It would streamline the interplay between the AI Act and the MDR/IVDR by reducing bottlenecks and duplication. It would make it easier for “notified bodies” — independent organizations that review and certify medical devices in the EU — to assess AI Act requirements alongside MDR/IVDR requirements in a single overall process. Another objective is to prevent delays by postponing the application of certain AI Act obligations until the necessary harmonized standards are in place and to make “dual compliance” more workable in practice.

Proposal 2: MDR/IVDR Simplification

Second, in December 2025 the Commission also proposed a sweeping amendment to the MDR and IVDR as part of a broader effort to simplify medical device regulation. This proposal, led by DG SANTE — the EU Commission’s Directorate-General for Health and Food Safety — seeks to simplify the implementation of the MDR and IVDR, including through amendments affecting their interaction with the EU AI Act.

This proposal would mean that AI medical devices would no longer automatically fall within the scope of the AI Act’s HRAIS requirements and would instead only be subject to the MDR/IVDR. However, the EU Commission would retain the power to adopt specific delegated or implementing acts in the future to reinstate (some of) those requirements for medical AI. As of now, no proposed acts exist. 

Two Diverging Paths to Simplification

Both proposals are framed as delivering what much of the health tech sector has advocated for. Many companies have long argued that overlapping regulatory regimes risk slowing innovation, increasing compliance costs, and delaying patient access to promising tools.

The two proposals, however, take considerably different approaches to achieving this goal. The Digital Omnibus intends to streamline compliance while keeping medical AI within the AI Act’s high-risk framework. The proposed amendments to the MDR/IVDR, by contrast, would exclude HRAIS requirements for medical AI altogether, unless and until reintroduced later via Commission acts.

Because both proposals are still subject to negotiation by the Council of the European Union and the European Parliament, their prospects remain uncertain. What is clear, however, is that they signal a policy debate about whether medical AI should remain subject to systematic AI Act safeguards by default, or whether those safeguards should apply only selectively in the future.

Why the Existing Guidance Matters

Even prior to both proposals, the Medical Device Coordination Group, in collaboration with the Artificial Intelligence Board, had issued guidance on the interplay between the MDR/IVDR and the AI Act that has steered compliance efforts under both frameworks — albeit with persisting interpretative ambiguities. Notably, the guidance acknowledges that while the MDR and IVDR requirements address risks related to medical device software, they do not explicitly address risks specific to AI systems. It states that the AI Act complements the MDR/IVDR by introducing requirements to address hazards and risks to health, safety, and fundamental rights that are specific to AI systems.

Thus, if the MDR/IVDR simplification amendment by DG SANTE were to be adopted, it would not only substantially narrow the impact of the parallel Digital Omnibus proposal from DG CONNECT but also have notable consequences for AI medical device developers, clinicians, and patients.

What Both Proposals Could Mean for Health Tech, Clinicians, and Patients

The additional “relief” envisaged by the two proposals now under discussion comes with strategic uncertainty and tangible consequences for the healthcare sector, as safeguards to address AI-specific characteristics would no longer systematically apply unless reintroduced later through specific Commission acts.

For AI medical device developers, the AI Act’s requirements provided a coherent framework for transparency, monitoring, and human oversight across high-risk AI. Its prospective disapplication to AI medical devices risks depriving developers — many of whom appear to support the proposed changes — of stable, long-term expectations for responsible AI design. Future delegated acts could reintroduce requirements abruptly, forcing costly redesigns. More concerning still, companies that have invested in robust governance structures, bias mitigation, and explainability may find themselves competing against products optimized for minimal compliance.

For clinicians, the consequences would be immediate and practical. Under the AI Act, providers of high-risk AI must ensure appropriate user information and promote AI literacy, helping professionals understand system limitations, confidence levels, and appropriate oversight. Without those obligations, clinicians would revert to MDR-style instructions for use, which typically emphasize intended purpose and performance but say little about algorithmic uncertainty, automation bias, or model drift. Clinicians would still be expected to use AI safely, interpret outputs, and manage edge cases, yet the regulatory system would no longer guarantee that systems are designed to support meaningful human oversight, and the AI literacy incentives the AI Act created for healthcare institutions would fall away.

Patients stand to lose the most. Traditional device conformity focuses on whether a product meets predefined safety specifications. HRAIS requirements go further, requiring structured evaluation of AI-specific risks such as bias across demographic groups, AI-specific cybersecurity measures, and robustness in real-world settings, even after deployment. Removing these obligations would weaken protections precisely where AI introduces new failure modes. Patients may not benefit from systematic checks for population bias. They may be exposed to systems whose performance degrades silently as clinical practice evolves.

A Crossroads for EU Medical AI

This is a crossroads moment for how the EU conceptualizes medical AI. Is it merely another type of device, or is it a new clinical actor — adaptive, probabilistic, and deeply embedded in decision-making — that demands its own specific safeguards? The EU already made this choice when it introduced a horizontal regulation to respond to AI risks across different fields of application. Is the EU now going back to square one with AI and medical devices?

AI disclosure: We used ChatGPT to improve the fluency of certain sentences. AI was not used to generate the content of this blog post.

About the authors

  • Sofia Palmieri

    Sofia Palmieri is the Post-Doctoral Fellow in Medicine, Artificial Intelligence, and the Law at the Petrie-Flom Center at Harvard Law School. She holds a PhD in Health Law from the University of Ghent and graduated with honors in Law from the University of Bologna. Her fellowship is part of a collaboration between the Petrie-Flom Center and the International Collaborative Bioscience Innovation & Law Programme (Inter-CeBIL) at the University of Copenhagen.

  • Henrik Nolte

    Henrik is an LL.M. candidate at Harvard Law School, where his research focuses on the regulation of AI, cybersecurity, privacy, and biotechnology. He is also a PhD candidate at the University of Tübingen. At Harvard, Henrik is a Research Assistant at the Harvard Kennedy School, Communications Director of the Harvard European Law Association, and a Student Leader in AI.