By David Arney, Max Senges, Sara Gerke, Cansu Canca, Laura Haaber Ihle, Nathan Kaiser, Sujay Kakarmath, Annabel Kupke, Ashveena Gajeele, Stephen Lynch, Luis Melendez
A new working paper from participants in the AI-Health Working Group out of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School and the Berkman Klein Center for Internet & Society at Harvard University sets forth a research agenda for stakeholders (researchers, practitioners, entrepreneurs, policy makers, etc.) to proactively collaborate and design AI technologies that work with users to improve their health and wellbeing.
Along with sections on Technology and a Healthy Good Life as well as Data, the authors devote a section to Nudging, a concept that “alters people’s behavior in a predictable way without forbidding any options,” and connect nudging to AI technology in the healthcare context.
The authors take a patient- and user-focused approach, emphasizing the viewpoints and wishes of individual patients. They argue that the “ultimate goal of creating technology for health and wellness is to enable patients to live healthier, more meaningful lives.” For example, their research agenda asks: “How do we design nudging systems that are transparent, protect individual agency and promote achieving one’s goals?”
From the paper:
Traditional nudges often appear in simple environmental design choices such as placing healthy snacks at eye-level in a grocery store or selecting double-sided printing by default. By design, nudges are not explicit notifications, but rather a way of making it easier to engage in the promoted action or behavior. (…)
The term “digital nudging” emerged only recently in engineering and computer systems literature, and is defined as the “use of user-interface design elements to guide people’s behavior in digital choice environments.” (…)
The term “digital nudging” has not, to the best of our knowledge, been explicitly associated with AI technology in the healthcare context (…) [But] digital nudging flows from an AI system to an end recipient. Numerous healthcare stakeholders can leverage this AI-enabled technology to promote health among their users, consumers, citizens or patients. For example, pharmaceutical companies, insurance companies, physicians, and governments all have stakes in the health of individuals and populations. These interests are at times competing, and mediating between multiple competing nudges is likely to be difficult.
Read more here!
Please send comments and proposals for collaboration to petrie-flom@law.harvard.edu, ai-health@cyber.harvard.edu, or contact@aiethicslab.com, or reach out to one of the authors directly.