Deep phenotyping research has the potential to improve understanding of the social and structural factors that contribute to psychiatric illness, enabling more effective approaches to addressing inequities that impact mental health.
But to build on the promise of deep phenotyping and minimize the potential for bias and discrimination, it will be important to incorporate the perspectives of diverse communities and stakeholders in the development and implementation of research projects.
Deep phenotyping projects draw together multiple streams of personal and medical data – such as genomic data, biometric and body scan data, clinical records, and moment-to-moment sensor and smartphone data collected in real time – which are analyzed with artificial intelligence (AI) in order to deepen clinical understanding of health.
Psychiatry and behavioral health research have long relied heavily on patient information that is self-reported and intermittent, and have faced questions regarding the validity of diagnostic categories of mental illness. Descriptions of deep phenotyping often emphasize the objectivity of this new approach to behavioral research, pointing to the opportunity to integrate directly measured behavioral data and thereby generate more objective measures and understandings of psychiatric illness.
At the same time, there is growing recognition that datasets and AI-powered data analytics are not value-neutral, but are shaped by social and cultural factors, including those that arise from racial, gender, and socioeconomic inequities. Deep phenotyping projects will need to incorporate mechanisms for identifying and addressing potential sources of bias in the research and implementation of these tools, in order to avoid results that reflect, and even reinforce, existing discrimination and bias in the health system and society.
Researchers must carefully consider how the datasets used to train algorithms raise issues of bias and diversity. Genomic research has overwhelmingly relied upon white European research subjects, an issue that has hindered precision medicine efforts. Some digital phenotyping projects incorporate facial recognition technology and computer vision data, technologies that have been shown to perform less accurately for women and people of color, due to a lack of diversity in the image datasets and research subject pools used to develop them. Unfortunately, efforts to “correct for race” within health care algorithms can backfire or serve to perpetuate racial inequities in health care, due to the pervasiveness of racism within society and the health system.
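The kind of performance gap described above can be surfaced by evaluating a model’s accuracy separately for each demographic group, rather than only in aggregate. The sketch below (Python; the groups, labels, and predictions are invented for illustration and do not come from any real deep phenotyping system) shows a minimal disaggregated audit:

```python
# Hypothetical audit: disaggregate a model's accuracy by demographic group
# to surface the performance gaps described above. All data are invented.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

# Toy predictions from a model that performs worse for one group.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]

for group, acc in sorted(accuracy_by_group(records).items()):
    print(f"{group}: accuracy = {acc:.2f}")
# A large gap between groups (here 1.00 vs. 0.50) signals that the
# training data or model warrants scrutiny before deployment.
```

An aggregate accuracy figure (0.75 in this toy case) would conceal exactly the disparity that a disaggregated audit makes visible.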
On the user side, some types of data collected through smartphones and sensors, such as social media posts, voice data, or movement, may need further study to assess whether factors such as structural inequality or cultural difference affect the validity of health inferences drawn from them. The information placed in electronic health records may itself be influenced by discriminatory factors, such as patient access to resources, or physician attitudes toward patients according to race, gender, or class. Even an algorithm built on a seemingly benign input, such as patient health costs, to identify patients for enhanced treatment can have a discriminatory impact on Black patients, because racial discrimination already shapes spending and treatment within the health system.
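To make the cost-proxy mechanism concrete, the following sketch (Python, with invented numbers; it does not reproduce any specific deployed algorithm) shows how ranking patients by historical spending can under-select a group whose recorded costs are lower at the same level of illness:

```python
# Hypothetical illustration of the cost-proxy problem: if historical
# spending is lower for one group at the same level of illness, ranking
# patients by cost will under-select that group for enhanced care.
# All numbers are invented for illustration only.

patients = [
    # (patient_id, group, illness_severity, historical_cost)
    ("p1", "group_a", 8, 9000),
    ("p2", "group_a", 5, 6000),
    ("p3", "group_b", 8, 5500),  # equally ill, but lower recorded spending
    ("p4", "group_b", 9, 5000),
    ("p5", "group_a", 3, 4000),
    ("p6", "group_b", 4, 3000),
]

top_k = 2

# Proxy target: select the highest-cost patients for enhanced treatment.
by_cost = sorted(patients, key=lambda p: p[3], reverse=True)[:top_k]

# Intended target: select the sickest patients.
by_need = sorted(patients, key=lambda p: p[2], reverse=True)[:top_k]

print("Selected by cost:", [(p[0], p[1]) for p in by_cost])
print("Selected by need:", [(p[0], p[1]) for p in by_need])
# With these numbers, selecting on cost flags only group_a patients,
# while selecting on actual need flags one patient from each group.
```

The choice of prediction target, not any explicit use of race, is what produces the disparate outcome here, which is why such design decisions merit review by diverse stakeholders.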
Diversity and inclusion efforts will also require deep phenotyping projects to identify and address potential downstream applications of the research. The surveillance aspects of deep phenotyping, particularly sensor tracking, may be regarded with suspicion by groups that have been subject to greater discrimination by the state and law enforcement. One goal of deep phenotyping is the identification of at-risk individuals in order to facilitate early interventions for mental health. However, early intervention can look rather different in communities where people of color or people of lower socioeconomic status who are identified as “at risk” are less likely to receive high-quality interventions, and more likely to be tracked for law enforcement or punitive purposes.
A socially conscious approach to deep phenotyping necessitates attention to how diversity and inclusion issues factor into each stage of research and development.
A number of computer and data science organizations have put forth principles and guidelines to mitigate the potential for negative bias in datasets and algorithms.
It is vital to build diversity and inclusion not only into datasets, but also into research participant populations and research teams. Projects will need to consider how to engage and recruit from communities and populations that may have concerns about the risks and benefits of research participation, rooted in historical discrimination and current inequities.
In other words, principles must be paired with practices that serve to identify and address bias and discrimination throughout the development of deep phenotyping research.
Nicole Martinez-Martin is an assistant professor of pediatrics at the Stanford Center for Biomedical Ethics.
This post is part of our Ethical, Legal, and Social Implications of Deep Phenotyping symposium. All contributions to the symposium are available here.