AI in Medicine

Patients Are Seeking Illicit Drug Advice from AI. What Standard of Care Should Apply?

When AI chatbots are asked about mixing substances, managing withdrawal, or reducing overdose risk, they increasingly provide substantive guidance. Poison control centers and emergency departments report encountering patients who have acted on chatbot advice about drug interactions, while harm reduction organizations observe users turning to AI for information they feel too stigmatized to seek from clinicians.

A 2024 study found that chatbots can provide high-quality responses to queries about substance use and recovery, though they still occasionally generate dangerous misinformation. Meanwhile, medical disclaimers in AI responses are declining, falling from an average of 26 percent in 2022 to under 1 percent in 2025, suggesting that safety messaging is receding as models’ apparent clinical competence improves. The result is that patients seeking guidance on drug interactions or withdrawal management increasingly receive clinical advice from systems operating entirely outside clinician oversight. This raises a question that remains unsettled in U.S. law: What standard of care should govern general-purpose AI systems that provide guidance on illicit drug use?

The Legal Blind Spot

The legal framework governing such drug advice is evolving rapidly, with direct implications for patient safety. In Garcia v. Character Technologies, a court recently declined to treat a chatbot’s output as protected speech, allowing a product liability claim to proceed. The ruling suggests that AI advice may soon be judged not as expression, but as a product subject to strict liability — the same framework applied to defective medical devices and pharmaceuticals. For clinicians accustomed to operating within a well-defined duty of care, the pressing question is what standard should govern these systems when they step into roles that overlap with their own.

California’s recent AI legislation demonstrates that current regulatory frameworks have not caught up with the technology: SB 243 creates protocols for “companion” chatbots regarding suicide risk, while AB 489 prohibits non-physicians from implying licensure. But neither statute addresses the clinical complexity of harm reduction: the substance-specific regulatory frameworks, the federal-state conflicts over drug scheduling, or the distinct professional standards that shape how clinicians counsel patients who use illicit substances. Because AI systems hold no professional license, medical malpractice standards do not apply to them, and no court has yet articulated what a “reasonable AI system” standard would require.

The Professional Standard: Counseling Is Permitted, Prescribing Is Not

The medical standard of care offers a compelling model for AI governance, built on a fundamental distinction: The transaction is criminalized, but the information is not. Physicians who prescribe Schedule I drugs face criminal prosecution and loss of licensure. But nothing in federal law prohibits them from counseling patients on the use of these substances. In fact, major professional organizations endorse such counseling as standard care. American Society of Addiction Medicine guidelines recommend “education and counseling on safer drug use,” and the AMA Code of Medical Ethics emphasizes obligations to overcome the stigma that prevents patients from disclosing use in the first place. 

The duty-to-warn doctrine reinforces this point. Courts have established that physicians must warn patients of dangerous interactions, even when those interactions involve illicit substances the patient is already using. The precedent is clear: Providing overdose prevention education to a patient using illicit opioids is practicing within, and perhaps required by, the standard of care.

This is the model that should inform AI governance. AI chatbots currently operate in a liability gap; they cannot claim the statutory immunities designed for needle exchange workers or poison control centers, nor are they covered by Good Samaritan laws that protect individuals who seek help during an overdose. As Garcia suggests, courts may increasingly evaluate AI advice under product liability theories, asking not whether the system “meant well,” but whether it supplied unsafe guidance for foreseeable reliance. That framing makes clinical standards of harm reduction all the more relevant as a benchmark.

The Patchwork Problem

The clinical and legal standing of AI drug advice is further complicated by the substance involved and the patient’s location. Opioid harm reduction has the strongest legal foundation: Federal agencies endorse naloxone, most states permit fentanyl test strips, and public health consensus treats overdose prevention as a core strategy. A chatbot providing information about naloxone administration operates in well-charted territory.

Other substances present a patchwork of legality that AI systems are poorly equipped to navigate. Consider psilocybin: Oregon and Colorado have established therapeutic frameworks with licensed facilitators, yet the substance remains federally prohibited. An AI chatbot providing dosing guidance might be supporting state-sanctioned therapy in Portland while facilitating a crime under both state and federal law in Boise. Current models generally lack the geofencing capabilities to adjust advice by jurisdiction. For clinicians, this means a patient may arrive having followed AI guidance that was clinically sound and legally permissible in one state but potentially dangerous or unlawful in another.
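To illustrate what jurisdiction-aware framing could look like in principle, here is a minimal sketch in Python; the policy table, state entries, and function name are hypothetical placeholders invented for this example, not a description of any deployed system.

    # Illustrative sketch only: a hypothetical check a chatbot could run before
    # framing psilocybin-related guidance. The table and names are invented.
    STATE_PSILOCYBIN_POLICY = {
        "OR": "state-regulated facilitation",  # Oregon's licensed facilitator model
        "CO": "state-regulated access",        # Colorado's regulated access program
    }

    FEDERAL_STATUS = "Schedule I (federally prohibited)"

    def frame_psilocybin_response(state_code: str) -> str:
        """Return the legal framing a response might carry for a given U.S. state."""
        state_policy = STATE_PSILOCYBIN_POLICY.get(state_code)
        if state_policy:
            return (f"{state_policy} under state law; still {FEDERAL_STATUS}. "
                    "Harm reduction information may reference licensed services.")
        return (f"prohibited under state law and {FEDERAL_STATUS}. "
                "Limit guidance to safety information and referrals.")

    print("Portland, OR:", frame_psilocybin_response("OR"))
    print("Boise, ID:  ", frame_psilocybin_response("ID"))

The point is not the implementation but the requirement it encodes: the same dosing question can demand different framing depending on where the user is.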

Defining an AI Standard of Care

The challenge is designing systems that neither refuse all engagement with drug-related queries — potentially increasing harm by leaving users without guidance — nor provide advice that ignores legal and medical complexities.

  • First, AI systems should be permitted, and perhaps expected, to provide harm reduction information. A blanket refusal to answer questions about drug interactions is not safety; it is an abdication that drives patients toward less reliable sources. If a patient asks about fentanyl test strips or interaction risks, the system should answer accurately, mirroring the medical obligation to warn and educate.
  • Beyond permission, AI advice must be contextually anchored. There is a meaningful distinction between harm reduction and instruction. Explaining how to test a substance for fentanyl is a safety function; explaining how to manufacture methamphetamine is a liability. Developers must train models to distinguish between safety-seeking queries, which align with the standard of care, and facilitation-seeking queries, which invite criminal liability (a minimal sketch of this kind of triage appears after this list).
  • Critically, AI responses should not function as a sole source of authority. Responses to substance use queries should include referrals to addiction medicine specialists and harm reduction services, directing patients back into the clinical relationships where comprehensive care is possible.
  • Any governance framework must also address surveillance. Rather than tracking individual users, which creates privacy risks that discourage help-seeking, developers should focus on aggregate risk signals gathered through periodic internal review and voluntary reporting channels.
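
To make these principles concrete, the following is a minimal Python sketch of how a response policy might triage substance-related queries, assuming a simple keyword heuristic as a stand-in for the trained classifiers a production system would actually use; every name, keyword list, and referral string here is hypothetical.

    # Illustrative triage sketch; keyword lists stand in for trained classifiers.
    REFERRAL = ("Consider contacting an addiction medicine clinician or a local "
                "harm reduction service for follow-up care.")

    SAFETY_SEEKING = ("overdose", "naloxone", "fentanyl test strip",
                      "interaction", "withdrawal")
    FACILITATION_SEEKING = ("synthesize", "manufacture", "cook", "acquire")

    def triage(query: str) -> dict:
        """Decide whether to answer, decline, or ask for clarification."""
        q = query.lower()
        if any(term in q for term in FACILITATION_SEEKING):
            # Facilitation-seeking: decline, but still route toward care.
            return {"action": "decline", "reason": "facilitation-seeking", "referral": REFERRAL}
        if any(term in q for term in SAFETY_SEEKING):
            # Safety-seeking: answer accurately and attach a clinical referral.
            return {"action": "answer", "reason": "safety-seeking", "referral": REFERRAL}
        return {"action": "clarify", "reason": "ambiguous", "referral": REFERRAL}

    print(triage("How do I use a fentanyl test strip?"))
    print(triage("How do I manufacture methamphetamine?"))

Even in this toy form, a referral is attached to every path, reflecting the principle that AI responses should route users back toward clinical care rather than replace it.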

Physicians are not permitted to prescribe illicit substances, but they are ethically bound to counsel patients on reducing risk. AI systems operating at the boundary of health care should be held to a comparable principle: Do not facilitate access to illicit substances, but do not withhold the information necessary to save lives.

About the authors

  • Julia Etkin

    Julia is a Dean’s Scholar at Harvard Medical School, pursuing a Master of Science in the Center for Bioethics. Her research interests include biopsychosocial pharmacovigilance, health policy of novel psychoactive substances (NPS), trauma studies, and FDA regulation. She has published on topics at the intersection of ethics and equity, including work on FDA advisory committee reform with an emphasis on increasing public trust.

  • Vincent Joralemon

    Vincent Joralemon was a Petrie-Flom Student Fellow (J.D. 2024) in the Berkeley-Harvard Exchange Program. He is the Director of the Life Sciences Law & Policy Center at the University of California, Berkeley, School of Law, where he teaches courses on legal writing, health law, and technology.