Mental Health

Addictive Algorithms and the Digital Fairness Act: A New Chapter in EU Public Health Policy?

Picture a typical teenager waking up. Before even getting out of bed, they’ve already scrolled through TikTok, checked Instagram, and responded to Snapchat notifications. Each swipe delivers content fine-tuned by algorithms designed to maximize attention and engagement. Autoplay keeps the feed going; notifications prompt more interaction. Sleep, focus, and mental health are subtly undermined.

The mental health consequences of these design features — autoplay, infinite scroll, push notifications — are becoming hard to ignore. In December 2023, the European Parliament issued a resolution calling out these harms and proposing something bold: a digital “right not to be disturbed.” Soon after, the European Commission launched a review of EU consumer law: the Digital Fairness Fitness Check. The conclusion? Existing rules are outdated and unfit for today’s attention-hacking digital environments.

In May 2025, the Commission announced plans for a new Digital Fairness Act aimed at updating consumer protections for the algorithmic age. A formal legislative proposal is expected in 2026. The timing couldn’t be more urgent.

Addictive Design as a Public Health Concern

Features like autoplay and personalized content feeds aren’t just convenient; they are engineered to keep users glued to their screens. Drawing on behavioral psychology, these designs create reward loops that are especially hard for adolescents to resist. Artificial intelligence (AI) supercharges this process, adapting in real time to each user’s preferences and vulnerabilities.

A growing body of research links heavy social media use to anxiety, depression, and sleep disruption, especially among young people. A 2020 meta-analysis estimated that around 7 percent of the global population shows signs of internet addiction. While not yet a formal clinical diagnosis, this kind of compulsive use shares traits with gambling disorders.

In response, The Lancet Psychiatry Commission launched a global policy initiative in 2025 focused on so-called “addictive design.” Their message is clear: This is not just a matter of personal choice but a structural issue driven by platform architecture. Importantly, the Commission calls for regulatory frameworks similar to those governing gambling or tobacco. This includes design-level interventions and public health strategies aimed at protecting vulnerable populations from long-term psychological harm. Addictive algorithms are a public health challenge that demands coordinated clinical, policy, and regulatory responses.

Blind Spots in EU Law

EU law has not kept pace with the design techniques driving these harms. While EU law is strong on privacy and data security, it does not (yet) address the psychological effects of technological design.

EU consumer law protects individuals buying goods and services by guaranteeing clear information, fair treatment, and strong rights across all EU countries. It rests on three main directives: the Consumer Rights Directive (2011/83/EU), which guarantees rights to information, safeguards for distance sales, and a 14-day right of withdrawal; the Unfair Commercial Practices Directive (2005/29/EC), which bans misleading or aggressive sales tactics; and the Unfair Contract Terms Directive (93/13/EEC), which protects against unfair contract clauses. However, none of these addresses how digital environments are built to capture attention.

Yes, the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act) contain some protections. For example, the AI Act bans manipulative AI and AI that exploits vulnerabilities. But there’s no legal definition of “manipulation,” and the rules apply only to specific “high-risk” sectors like healthcare and education. The everyday influence of algorithms, such as those curating video feeds, is not covered.

The European Data Protection Board has published guidance on “dark patterns” in consent flows, where social media providers nudge users into consenting to data sharing through the design of their services. But it doesn’t touch on the broader behavioral impacts of digital design. The result is a patchwork of rules that fails to protect the mental health of online users.

The Promise of the Digital Fairness Act

This is where the Digital Fairness Act (DFA) enters the picture. The idea behind the DFA is to modernize current EU consumer law and address manipulative design practices. The DFA aims to extend consumer protection to cover “dark patterns,” addictive interfaces, manipulative personalization, and misleading influencer marketing.

The European Parliament’s 2023 resolution called for proactive safeguards, such as turning off attention-seeking features by default, introducing time-limit warnings, and promoting ethical design “by default.”

This approach is not entirely new. EU consumer law has long protected citizens from harmful product designs. EU law already regulates toys, food packaging, and cosmetics for psychological harms. The DFA simply extends that logic to algorithms, where the “product” is user attention.

Global Implications: Addictive Algorithms as a Public Health Concern

While the Digital Fairness Act is an EU initiative, addictive algorithm design is a global challenge, and other jurisdictions are already exploring related reforms. Some countries focus on children’s individual behavior, such as South Korea’s National Center for Youth Internet Addiction Treatment, which offers “digital detox camps,” but more and more are now targeting tech developers themselves.

The U.S. Federal Trade Commission has launched investigations into dark patterns and manipulative interfaces. The UK has adopted an Age Appropriate Design Code to protect children’s rights online. China requires all smartphones and apps to include a “minor mode” to combat smartphone addiction among children. Australia recently adopted a social media ban for children under 16 because of mental health concerns.

However, none of these efforts yet matches the breadth or ambition of the DFA, which aims to cover all user groups. If successful, the DFA could help shift the dominant framing from personal to platform responsibility. Addictive digital environments are the result of deliberate design decisions that prioritize commercial interest over user well-being.

The DFA represents a new kind of regulatory response. By embedding protections into the architecture of digital services, the DFA reimagines consumer law as a tool for public health. For a generation growing up in algorithmic environments, these protections may prove as essential as tobacco advertising bans or seatbelt laws once were. If it delivers on its promise, the DFA could become a global benchmark, inspiring jurisdictions worldwide to embed mental health safeguards into the design of digital environments.

About the author

  • Hannah van Kolfschooten

    Dr. Hannah van Kolfschooten (Ph.D., LL.M.) is a researcher and lecturer at the University of Amsterdam, the Netherlands. She studies the intersection of law, health, and technology. She obtained a Ph.D. in Law (dr. iur.) on the topic of EU regulation of Artificial Intelligence in Healthcare and its consequences for patients’ rights protection (University of Amsterdam, 2025). She was a visiting researcher in residence at Harvard Law School, the University of Verona, and the Fondation Brocher. She is an independent legal consultant on AI policy and regulation for the non-profit organization Health Action International. She occasionally shares her expertise with governments, non-profits, and health organizations. She is a member of the WHO Technical Advisory Group on Artificial Intelligence for Health (TAG-AI).