Informed Consent, Redefined: How AI and Big Data Are Changing the Rules

Is informed consent still meaningful in the age of AI? Once the gold standard of medical ethics, it now risks becoming symbolic — a checkbox, rather than a safeguard. With regulation lagging behind, AI-driven medicine is increasingly shaping diagnoses and treatments; yet, under today’s legal landscape, patients are often unaware that AI is influencing their health care. Even when patients consent to their data being used by AI applications, the complexity and opacity of these technologies make it difficult for them to fully understand what that consent entails.

This concern highlights the importance of adapting medical ethics to this evolving paradigm, which could include aligning informed consent with each phase of an AI system’s lifecycle. While AI has the potential to enhance health care, that potential must not come at the cost of patient agency or ethical medical practice.

How and Why Informed Consent Was Born

While medical ethics principles are well established today, that wasn’t always the case. Some evidence traces back to ancient Greece, where the Hippocratic Oath acknowledged such principles, but it wasn’t until the 20th century that medical ethics gained attention as a formal field of study. Many believed medical practice was already ethical as it was, with no need to bureaucratize patient rights. But in the early 20th century, landmark court cases helped establish the principle of patient autonomy, and, decades later, Henry Beecher further shifted public opinion by empirically documenting the failure to inform research subjects of the risks they incurred. Amid this rising awareness, amplified by revelations of the scientific atrocities committed during the Second World War, the process of informed consent developed, evolving from a simple signature requirement to a process centered on informative communication.

The arrival of the Internet and algorithmic processing soon completely changed how data was handled and understood. This paradigm shift was met with comprehensive data protection measures, such as the General Data Protection Regulation (GDPR) in the EU or, at the state level in the U.S., the California Consumer Privacy Act.

Today, we stand at the forefront of a new wave that is reshaping our understanding of data and its influence. AI models are already impacting patient health outcomes, yet in many jurisdictions, including the U.S., the existing legal framework for informed consent does not explicitly require disclosure of their use. Indeed, AI-related laws enacted so far, such as the EU AI Act, which entered into force in the summer of 2024, have done little to address data privacy or reform data protection to mitigate the unique risks posed by AI-powered health care.

Why Explainable AI Is an Unattainable Promise for Medical Consent

AI-driven medical predictions and decisions rely on algorithms that their own developers often struggle to fully comprehend: most modern AI models can be described as black-box systems, meaning their output is humanly comprehensible, but their inner workings are not. This alone complicates transparency; combined with the opacity of the health care industry, true patient comprehension becomes a real challenge. Sensitive personal data raises a particular concern: once such data enters an AI model, it is virtually impossible to remove. Addressing these weaknesses in current frameworks is crucial to introducing these systems into health care safely, and to ensuring the medical validity of decisions that influence patients’ health.
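To make the point about irreversibility concrete, here is a minimal, illustrative Python sketch (the dataset, model, and patient records are synthetic stand-ins, not drawn from any real clinical system): once a record has shaped a model’s learned weights, there is no per-patient component that can simply be deleted, and honoring a withdrawal of consent effectively means retraining the model without that record.

```python
# Illustrative sketch only: why "removing" one patient's data from a trained
# model is not straightforward. Synthetic data and a small scikit-learn
# neural network stand in for a clinical black-box model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical dataset: 500 "patients", 10 features, binary outcome.
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Train a black-box model on everyone, including "patient 0".
model_all = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model_all.fit(X, y)

# The learned weights blend every record; there is no per-patient piece
# that could be deleted after the fact.
print("weight matrix shapes:", [w.shape for w in model_all.coefs_])

# The only reliable way to honor a withdrawal of consent is to retrain
# from scratch without that patient.
model_without_0 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model_without_0.fit(X[1:], y[1:])

# Patient 0's data still shapes the original model's predictions for others.
probe = rng.normal(size=(1, 10))
print("with patient 0:   ", model_all.predict_proba(probe)[0])
print("without patient 0:", model_without_0.predict_proba(probe)[0])
```

Efficient “machine unlearning” remains an open research problem, which is part of why consent given at the moment of data collection is so difficult to meaningfully revoke later.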

The Evolving Nature of AI Models, and What It Means for Patient Rights and Data Protection

The EU AI Act (AIA) is one of the first comprehensive laws to govern the use of AI models. As a product safety law rather than a fundamental rights instrument (unlike the GDPR, which complements it), it introduces quality and safety requirements rather than focusing on individual rights. Article 10, on data governance, focuses almost exclusively on data quality rather than on data subjects’ rights. Other weaknesses of the AIA have already been pointed out, but we will have to wait for its full implementation to assess these risks in practice.

In the meantime, protecting patients’ data seems left to the GDPR, which fails to account for AI models as evolving systems. Even if a patient consents to sharing their data for a specific purpose, these models typically incorporate that data into all future predictions, evolving with it and blurring the limits of the use cases to which the patient agreed.

As a product safety law, the AIA lays out four risk categories for AI systems, each with different requirements. Higher-risk systems are assessed on a cyclical basis, with periodic reviews to ensure they remain safe after deployment. Other countries have adopted a similar risk-tiered approach, such as Australia’s Regulatory changes for software-based medical devices or South Korea’s recent Basic Act on Artificial Intelligence (AI) Development and Trust Building, which drew from the AIA. Handling AI systems this way enables disclosure to stay relevant, requiring practitioners to disclose higher-risk systems without wasting time or resources on low-impact uses.

However, even with this tiered approach, some critical questions remain: What protection exists for individuals whose data is used to train these models and whose health outcomes they may influence — and should protection extend throughout the entire lifecycle of AI systems, ensuring continued oversight and accountability?

Balancing Medical Innovation and Patient Rights in AI Regulation

For medical innovation to enhance patient health without undermining individual rights, robust safeguards are crucial. While the changing nature of AI systems often clashes with the more static frameworks of informed consent, this tension has already spurred meaningful efforts to ensure such protections. Emerging research has also begun to leverage these technologies themselves to address these concerns, revealing promising pathways for interdisciplinary work to support the safe integration of AI into health care.

Some recent regulatory developments suggest a shift toward stronger data protection, like the EU’s Opinion 28/2024 on safe data processing for AI models and the updated OECD AI Principles, which emphasize transparency and individual rights. Other initiatives present themselves as prioritizing innovation, such as the UK Labour Party’s AI Opportunities Action Plan or President Trump’s AI initiatives, which may weaken data protection in favor of accelerating AI development.

Regardless of the advantages and risks of these initiatives, they all highlight how fluid and malleable AI regulation still is. As current efforts continue to define AI’s growth, whether they lead to safe, empowering technologies or to untested, undecipherable ones will shape how these systems are integrated into society.

Acknowledgment:
This article was made possible through the generous support of the Novo Nordisk Foundation (NNF) via a grant for the scientifically independent Collaborative Research Program in Bioscience Innovation Law (Inter-CeBIL Program – Grant No. NNF23SA0087056).


About the author

Emma Kondrup is studying Computer Science at McGill University, focusing on Machine Learning applications with social and health benefits. In her research, she explores the implications of technological advancements for health and personal rights.