AI Is Creeping Into Every Aspect of Our Lives—and Health Care Is No Exception

Artificial intelligence (AI) is the use of computers or machine learning to simulate or imitate human intelligence. This technology has advanced rapidly over the past several years, allowing new forms of AI to leverage vast quantities of data to teach themselves and solve problems.
In health care, AI can assist medical professionals in numerous ways. The technology can automate patient documentation in electronic medical records, assist with diagnosis, improve precision medicine, facilitate treatment, and even enhance the use of robotics during surgery. Current evidence suggests that AI can outperform medical professionals in some situations, such as predicting a diagnosis of Alzheimer's disease.
However, the use of AI in health care raises legal and regulatory issues. Federal and state regulators and governments have a role to play in shaping whether and how AI is used in the practice of medicine.
Federal Regulation of AI as a Medical Device
The U.S. Food and Drug Administration (FDA) is responsible for the oversight of medical devices and products. FDA offers various premarket pathways for medical devices depending on how the technology is classified. However, FDA's current framework was designed for traditional medical devices, which are static. AI complicates this paradigm because the technology is constantly evolving and learning.
FDA has responded to this changing landscape. On April 2, 2019, the agency published a discussion paper requesting feedback on a potential approach to premarket review for AI. In January 2021, the agency published the “Artificial Intelligence and Machine Learning Software as a Medical Device Action Plan” (AI/ML SaMD Action Plan). Under this plan, FDA has published various follow-up documents, including guiding principles on “Good Machine Learning Practice,” “Predetermined Change Control Plans,” and “Transparency.” Most recently, on Jan. 6, 2025, the agency published draft guidance on lifecycle considerations and specific marketing recommendations for artificial intelligence-enabled device software functions. Despite FDA's best efforts, it may prove challenging to regulate a “moving target” like AI.
State Regulation of AI
While states typically rely on traditional tort law to regulate medical technologies, they may also invoke the corporate practice of medicine (CPOM) doctrine to regulate AI use in health care. Although the contours of CPOM differ among states, the principal policy behind the doctrine is to ensure that only licensed physicians make medical decisions; this protects patients from harm that could arise from interference with a physician's medical judgment. States may choose to revive CPOM to limit AI use without direct human clinician supervision. While CPOM laws were not written with AI in mind, they may be applied to limit the scope of AI use by non-licensed users, reducing the potential for harm to patients.
States have also enacted various laws to limit AI in health care decision-making, with particular attention to transparency and informed consent. On May 17, 2024, Colorado passed the Consumer Protections in Interactions with Artificial Intelligence Systems Act. The act requires developers of “high-risk AI systems,” a category that includes systems used by health care providers, to take reasonable care to avoid “algorithmic discrimination.” On Sept. 28, 2024, California passed Assembly Bill 3030, requiring providers to disclose to patients when generative AI is used to communicate clinical information. On the same day, California also adopted Senate Bill 1120, requiring a human clinician to review insurance coverage decisions made by AI.
Additionally, states may choose to increase oversight of AI through state licensing boards. If courts considered AI to have “personhood,” perhaps state licensing boards, which are responsible for testing, licensing, and overseeing the practice of physicians, would determine whether an AI technology is sufficiently “educated” and “trained” by its software developers to qualify for a medical license. If licensing boards found AI technologies sufficiently qualified, those technologies would be legally cleared to practice medicine without the supervision of a licensed clinician. However, the use of autonomous AI physicians would raise further questions about liability, such as whether AI technologies are properly classified as products under traditional products liability doctrine, or whether physicians or medical centers are vicariously liable for harms resulting from AI technology.
Medical Community Regulation of Physician Use of AI
In addition to federal and state regulation, the medical community has already begun to self-regulate. The American Medical Association (AMA) refers to AI as “augmented intelligence” to emphasize that the technology should enhance, rather than replace, the human intelligence of physicians. State medical boards, responsible for the oversight of health care workers, have also spoken out on the use of AI in practice. In April 2024, the Federation of State Medical Boards issued a report addressing the responsible and ethical incorporation of AI. While the report supports medical education that includes programming on advanced data analytics and the use of AI in practice, it emphasizes that “the physician is ultimately responsible for the use of AI and should be held accountable for any harm that occurs.” Thus, the physician should provide a rationale for their ultimate decision, just as they would without the use of AI.
Case Study: Mental Health Chatbots
The patchwork of regulation described above is illustrated by a recent technological development: mental health chatbots. One of the most promising new chatbot companies on the market, Woebot, is seeking FDA approval to validate its clinical application. However, it is currently unclear how FDA would evaluate this technology. At the state level, Utah's new office of AI policy is also likely to regulate mental health chatbots used in the licensed practice of medicine, requiring some acceptable degree of reliability. Further, Utah's clinicians are still encouraged to adhere to the AMA guidelines, which require that final treatment decisions rest with the clinician.
As this example illustrates, federal, state, and self-regulatory bodies have various options to protect patients from the potential harms of AI use in medicine. At present, enforcement remains a patchwork, and the shape of any future regulatory structure is uncertain. However, the three levels of regulation have the potential to work together well. The federal government should regulate AI in medical products and software to ensure that minimum safety and efficacy standards are met. States should use CPOM doctrines and licensing laws to cabin the use of AI as an autonomous physician, or at least ensure that autonomous AI physicians meet education and training standards. Lastly, the medical community must continue to uphold its own professional values, which include learning how to incorporate AI responsibly.
About the author
Jessica Samuels is a third-year dual degree law and public health student (J.D./MPH 2025). Her research interests include genetics, environmental health sciences, novel biotechnologies, and the FDA regulatory process. She has previously published work on the accuracy of ultrasound in predicting malignant ovarian masses. At HLS, Jessica is co-president of the Harvard Health Law Society.
