Decolonizing Digital Health: Reclaiming Equity, Consent, and Governance in Global Health Innovation

The digital transformation of health care is advancing at unprecedented speed. Artificial intelligence (AI), mobile health apps, and telemedicine promise to deliver more accurate diagnoses, streamline care delivery, and expand access to health services worldwide. But in the Global South, these promises often collide with stark realities: colonial legacies, systemic inequalities, and uneven digital infrastructure. Unless digital health governance (DHG) is fundamentally reimagined, these technologies risk reinforcing — rather than narrowing — global health disparities.

DHG requires a multi-faceted approach to regulating, coordinating, and guiding the ethical and responsible deployment of digital technologies within health care systems. This article explores why DHG must be reimagined to address structural power imbalances — using sexual and reproductive health and rights (SRHR) as a key example — and to promote equity, diversity, and inclusion in digital innovation.

The Digital Divide as a Governance Failure

Globally, nearly one-third of the population lacks internet access, with women and rural communities disproportionately affected. In Africa, only 31 percent of women use the internet compared to 43 percent of men. This “digital divide” is not just a technological issue — it is a governance failure. When health care systems rely on digital tools to expand access but ignore underlying inequalities, they risk excluding the very communities they aim to serve.

Moreover, many digital health tools are designed by private firms or research teams based in the Global North. The resulting technologies often lack contextual relevance and fail to consider local health needs, languages, and cultural practices. Without participatory design processes, these tools can alienate users or produce biased outputs — especially in AI systems trained on homogenous datasets.

SRHR and the Ethics of Consent in the Age of AI

Digital tools can be powerful enablers for SRHR. SRHR encompasses a wide range of services, including contraception, safe abortion, maternal care, sexually transmitted infection prevention, and fertility treatment — all grounded in bodily autonomy. Mobile apps can help people access contraception information, abortion services, or mental health support in privacy and safety. However, these same tools often collect sensitive data — menstrual cycles, sexual activity, geolocation — without adequate safeguards. In some contexts, such data could be weaponized, especially where abortion is criminalized or gender-based violence is prevalent.

As highlighted by the United Nations Population Fund (UNFPA), technology-facilitated gender-based violence (TFGBV) is on the rise. TFGBV refers to any form of violence that uses technology — like phones, social media, GPS trackers, or even recording devices — to harm someone because of their gender. Most often, this affects women and girls. Examples include online harassment, defamation, non-consensual sharing of intimate images, and cyberstalking.

Many of these digital health apps lack robust consent frameworks and proper data anonymization. A notable SRHR data breach occurred in 2021 when hackers accessed the records of approximately 400,000 Planned Parenthood Los Angeles patients. This highlighted serious privacy risks in digital reproductive health services. Femtech apps — digital applications designed to support women’s health, particularly in areas like menstruation, fertility, pregnancy, and menopause — have been criticized for providing inaccurate medical information, sharing sensitive user data with third parties without clear consent, and reinforcing gender stereotypes that contribute to epistemic injustice. Additionally, algorithmic bias can reflect and reinforce disparities related to socioeconomic status, race, ethnicity, religion, gender, disability, or sexual orientation — amplifying existing inequities in health systems and undermining the effectiveness and safety of SRHR interventions.
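As a purely illustrative sketch of what a more robust baseline could look like (the ConsentRecord class, the field names, and the hashing scheme below are my own assumptions, not drawn from any particular app or regulation), a reproductive health app might refuse to share anything unless purpose-specific consent is active, and strip direct identifiers before a record ever leaves the device:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from hashlib import sha256
from typing import Optional

@dataclass
class ConsentRecord:
    """Explicit, purpose-bound consent captured before sensitive data is processed."""
    purpose: str                       # e.g., "cycle-tracking analytics"
    granted_at: datetime
    revoked_at: Optional[datetime] = None

    def is_active(self) -> bool:
        return self.revoked_at is None

def pseudonymize(record: dict, salt: str) -> dict:
    """Drop direct identifiers and precise location; replace the user ID with a salted hash.

    Note: this is pseudonymization, not full anonymization -- re-identification
    risk remains and depends on what else is shared alongside the record.
    """
    cleaned = {k: v for k, v in record.items() if k not in {"name", "phone", "gps", "user_id"}}
    cleaned["user_ref"] = sha256((salt + record["user_id"]).encode()).hexdigest()[:16]
    return cleaned

consent = ConsentRecord("cycle-tracking analytics", granted_at=datetime.now(timezone.utc))
raw = {"user_id": "u-123", "name": "Jane", "phone": "+45 ...", "gps": (55.7, 12.6), "cycle_day": 14}

# Share nothing unless consent for this specific purpose is still active.
if consent.is_active():
    print(pseudonymize(raw, salt="per-deployment-secret"))
```

Even then, salted hashing only pseudonymizes the data; whether such records should be shared at all in a given legal context is a governance question, not a technical one.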

Rethinking Data Sovereignty and Gender Justice

A central challenge in DHG is determining who has control over health data and how it is governed. In many countries across the Global South, personal health data — such as medical records, diagnostic results, and information collected through health apps or wearable devices — is stored on servers owned by multinational corporations and subject to foreign legal jurisdictions. This dynamic undermines national sovereignty and creates significant risks for privacy, security, and evidence-based policy-making.

DHG frameworks must account for how race, gender, disability, and geography shape both access to digital health technologies and the impact those technologies have. DHG must also incorporate algorithmic fairness standards as core principles, aligning with international frameworks such as the OECD’s AI Principles, the WHO’s Ethics & Governance of AI for Health, the EU’s ALTAI requirements for trustworthy AI, and ISO standards like ISO/IEC 42001 (AI Management Systems) and ISO/IEC 42005 (AI System Impact Assessment). Together, these frameworks emphasize key elements such as human agency, technical robustness, transparency, diversity, and accountability. Integrating these standards helps ensure that AI-driven health tools are not only effective but also ethically sound and socially responsible.
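As a loose sketch of how a project team might begin to operationalize these shared principles (the checklist structure and example entries below are assumptions for illustration, not the text of any of the standards cited above), one could track documented evidence per principle and surface what is still missing:

```python
from dataclasses import dataclass, field

PRINCIPLES = ("human agency", "technical robustness", "transparency",
              "diversity and fairness", "accountability")

@dataclass
class GovernanceChecklist:
    """Tracks documented evidence for the shared principles named above."""
    tool_name: str
    evidence: dict = field(default_factory=dict)  # principle -> where it is documented

    def record(self, principle: str, note: str) -> None:
        if principle not in PRINCIPLES:
            raise ValueError(f"Unknown principle: {principle}")
        self.evidence[principle] = note

    def gaps(self) -> list:
        """Principles with no documented evidence yet."""
        return [p for p in PRINCIPLES if p not in self.evidence]

check = GovernanceChecklist("maternal-risk triage model")
check.record("transparency", "model card published with intended-use limits")
check.record("accountability", "named clinical owner and incident-reporting channel")
print("Still undocumented:", check.gaps())
```

A real impact assessment under ISO/IEC 42005 or ALTAI is far richer than this, but even a simple gap list makes accountability visible within a team.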

To ensure health AI tools are equitable, they must be rigorously tested and validated across diverse populations — taking into account differences in gender, race, age, disability, and socioeconomic background. However, equity in design must go beyond surface-level demographic representation. It requires genuine participatory processes, where affected communities are meaningfully engaged throughout development and deployment. This includes mechanisms for feedback, redress, and community accountability, ensuring that technologies not only do no harm but actively contribute to inclusive, safe, and ethical care.
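As a minimal sketch of what such subgroup validation could involve (the groups, records, and 10-point threshold below are hypothetical, chosen only for illustration), one might compare a screening model’s sensitivity across groups in the validation data and flag any group that lags before deployment:

```python
from collections import defaultdict

def sensitivity_by_group(records, group_key="group"):
    """Per-group sensitivity (true-positive rate) of a screening model on validation data."""
    true_pos = defaultdict(int)   # positives the model correctly flagged, per group
    all_pos = defaultdict(int)    # all clinician-confirmed positives, per group
    for r in records:
        if r["label"] == 1:
            all_pos[r[group_key]] += 1
            if r["prediction"] == 1:
                true_pos[r[group_key]] += 1
    return {g: true_pos[g] / n for g, n in all_pos.items() if n > 0}

# Hypothetical validation records: model predictions vs. clinician-confirmed labels.
validation = [
    {"group": "urban", "label": 1, "prediction": 1},
    {"group": "urban", "label": 1, "prediction": 1},
    {"group": "rural", "label": 1, "prediction": 1},
    {"group": "rural", "label": 1, "prediction": 0},
]

rates = sensitivity_by_group(validation)
print(rates)  # e.g. {'urban': 1.0, 'rural': 0.5}

# Flag subgroups whose sensitivity trails the best-performing group by more than 10 points.
best = max(rates.values())
print("Needs review before deployment:", {g: r for g, r in rates.items() if best - r > 0.10})
```

In practice, an audit of this kind would cover more metrics and intersecting characteristics, and its findings would feed into the feedback, redress, and community-accountability mechanisms described above.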

Global Health Governance Must Decolonize

The colonial undercurrents in global health are not a matter of the past — they persist in how research is funded, how technologies are transferred, and how expertise is recognized. Digital health is no exception. The “one-size-fits-all” model of technological intervention often exported from the Global North reproduces dynamics of dependency and disempowerment. Global South development has often resembled a game of Tetris — with rigid, standardized policies imposed on diverse realities — flattening complexity into prefabricated solutions.

The Lancet and Financial Times Commission on Governing Health Futures 2030 argues that digital health should be based on solidarity, sustainability, and sovereignty. This means supporting local tech development, promoting South–South collaboration, and rejecting extractive data models. Instead, data should be governed collectively to ensure fair and inclusive digital health systems that benefit communities.

The transformative potential of digital health can only be realized when equity, rights, and justice are embedded at its core. Advancing equity in digital health governance means moving beyond efficiency to embrace a rights-based approach that prioritizes public health over private profit. This requires embedding inclusion throughout the design and deployment of SRHR technologies, ensuring diverse data, human-centered solutions, and culturally sensitive interfaces that communicate data usage clearly and uphold informed consent.

True accountability demands transparent algorithms, gender-responsive policies, and mechanisms to redress harm. To be meaningful, digital health governance must confront colonial legacies by shifting power to the Global South, investing in local leadership, and fostering global cooperation grounded in ethical standards and shared responsibility.

This post is part of a digital symposium called Innovation, Law, and Ethics in International Bioscience.

Acknowledgment: The research for this blog post received support from the Novo Nordisk Foundation (NNF) through a grant for the scientifically independent Collaborative Research Program in Bioscience Innovation Law (Inter-CeBIL Program – Grant No. NNF23SA0087056).

About the author

  • Marcelo Corrales Compagnucci

    Marcelo Corrales Compagnucci is an Inter-CeBIL Research Affiliate, Associate Professor, and Associate Director at the Center for Advanced Studies in Bioscience Innovation Law, Faculty of Law, University of Copenhagen.