Malfunction, Malpractice? Who Is Liable When AI Injures a Patient?

Summary

Medical errors happen; doctors are only human. And when doctors make mistakes, the law pertaining to who is liable is usually clear-cut. But what happens if the mistake was made by an AI, including one embedded in a device or a robot?

This episode will explore who is liable. Is it the hospital? The developer of the AI? The doctor on the scene? And what legal recourse do patients have? Nic Terry (an expert in the intersection of health, law, and technology), Michael Abramoff (an ophthalmologist, AI pioneer, and entrepreneur), and Ravi Parikh (a practicing oncologist and bioethicist) will attempt to answer these questions and others.

Transcript

Nic Terry: Typically, we have viewed clinicians as being the primary targets of liability, but as decisions are increasingly made by machines, I think we will see a movement away from the clinician as primarily responsible and toward the institutions putting these machines in and the developers.

I. Glenn Cohen: I’m Glenn Cohen. I’m the Faculty Director of the Petrie-Flom Center, the James A. Attwood and Leslie Williams Professor of Law, and the Deputy Dean of Harvard Law School and your host. You’re listening to Petrie Dishes, the podcast of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. 

We’re taking you inside the exploding world of digital health, one idea at a time. Today, it’s liability in the context of artificial intelligence or AI for short. Traditionally, when people talk about liability for medical errors, they’re focused on the physician because physicians are the ones ultimately making final decisions as to patient care. 

Ravi Parikh: Say, for example, there’s a decision point around whether to give a clot-busting drug for someone’s stroke. That is entirely in the clinician’s hands, and so if something clearly deviates from the standard of care, then the physician should be held liable.

I. Glenn Cohen: That was Ravi Parikh, a Professor of Medicine, Medical Ethics and Health Policy, and a practicing physician. He outlined the traditional liability regime in healthcare. But healthcare is changing. Medical decision making increasingly relies, at least in part, on artificial intelligence as a tool in patient care. For example, AI might be used to read radiological images or to make predictions about a patient’s course of illness. And when algorithms and machines make mistakes, for example, miscategorizing a malignant tumor on a scan as benign, liability can become a thorny issue. Who’s to blame? Is the hospital liable? The developer of the artificial intelligence? The doctor on the scene? And what legal recourse do patients have? In other words, how does liability work when artificial intelligence malfunctions?

Nic Terry: I think we’re at an early stage of seeing sort of an evolution in what we view as clinical care and what we view as clinicians. 

I. Glenn Cohen: That’s Nic Terry, a Professor of Law and Health. He’s discussing how the use of AI in healthcare is changing our perceptions of who is providing our care and how they are held responsible.

Nic Terry: And that has implications for ethics, because we’re not typically talking about employees who have gone through the same ethical training as clinicians, particularly physicians. There are implications for liability and insurance. So I think it’s very much an open question at the moment as to how, in the future, we’re going to think of the manufacturers in this case, and whether we’re going to keep that very strong divide between the manufacturers who supply the machines and the clinicians who use them. That’s maybe a distinction that has met its real-world expiry date.

I. Glenn Cohen: Where the manufacturer’s role in developing and producing a product was once distinct from the role of clinicians using these products to provide care, that line has become blurred due to the nature of AI and how algorithms work. 

Ravi Parikh: I think medical malpractice is a really tricky situation, because right now I don’t know of any medical malpractice insurance that has clearly defined stipulations of what occurs in the situation of an AI-based error. We need to be thinking about liability in those cases of algorithm-based practice less in terms of physician negligence and medical malpractice, and more in terms of trying to account for the whole upstream set of stakeholders present in the machine learning and AI ecosystem. That includes algorithm designers, who have responsibility for making their inputs and training data transparent. That includes health systems, who are oftentimes the primary decision makers in whether to deploy an algorithm or not, and oftentimes not to the knowledge of the clinician. And yes, that includes clinicians as well, who oftentimes make the final decision, whether or not it is informed by an algorithm.

I. Glenn Cohen: Because there are more stakeholders in the AI environment, there are more individuals who can be considered at fault when something goes wrong. Our system of liability should account for that, Ravi says, so that liability isn’t placed on just one party when multiple stakeholders could be responsible. But what does it mean for multiple parties to be liable? How do we divvy up the liability pie?

Ravi Parikh: A lot of this ends up being upstream from the point of the decision maker deciding whether or not to use the AI at all, needing to understand: is this AI being used as a decision support tool, and so should I be viewing it as a decision support tool? Or should I be viewing it as an end-all decision maker?

I. Glenn Cohen: There’s a continuum between a physician using an AI as one input for their clinical decision versus relying on it to make the decision for them, the latter being the greater cause for concern.

Ravi Parikh: In terms of reducing the threat of liability from use of these AI technologies, that ends up being more so a question of ‘Are you using the AI in the way it was studied, to generate the outcome it has been sold to you as producing?’ So, for example, an AI that’s designed to predict a heart attack should not be used to judge someone’s risk of having a pulmonary embolism, or another related but not identical outcome, because it hasn’t been studied in that population.

I. Glenn Cohen: AI-based algorithms have narrow purposes and should be used by physicians accordingly. But even if the algorithm is used for a narrowly defined purpose, how do physicians know that the AI will do what it’s supposed to do? How much of the algorithm do physicians need to understand themselves? Do physicians also need to become computer scientists?

Ravi Parikh: When it comes to algorithmic predictions, a lot of the decision making that goes into a prediction isn’t known to the clinician, and isn’t known to anyone, in all honesty, because of the black box nature of these algorithms. And I think that requires a real nuanced understanding of how we apportion liability in those cases.

I. Glenn Cohen: The physician ends up in an unenviable position. Her decision making is influenced by the algorithm, but she doesn’t necessarily know how the algorithm is coming to its conclusion. This is why some AI is often referred to as a ‘black box’: how it went from input to output is not something the user can explain. But Ravi says it might be okay that doctors don’t understand the algorithm itself. Instead, it’s more important that they understand the factors that influenced its creation.

Ravi Parikh: I think from the health system side, really understanding the training population, where the AI was trained and whether it’s likely to replicate in that particular population, is really important. Typically we think about representativeness of the training population along racial, ethnic, or gender lines, and that’s certainly very important. But it also matters for the outcome you’re trying to predict: if you’re trying to predict an outcome around heart attack in a group of 65-year-olds in a nursing home, and you think that’s your trained-up population, and then you’re deploying that in a New York City health system that serves mostly younger, healthier individuals, then it’s probably not going to do that well. You probably need to do some more vetting and more validation of that algorithm on the health system side before deciding to deploy it, because those kinds of things are just ripe for false positives and overdiagnosis.

I. Glenn Cohen: Training data, as Ravi describes it, is what an algorithm uses to learn so that it can make predictions about a wider population. When the algorithm is not trained on data that represents the larger population, it can produce inaccurate results. And even when trained properly, these algorithms can’t be right a hundred percent of the time.

Michael Abramoff: If you have these black box AIs, they typically over-optimize and they can have catastrophic failure. 

I. Glenn Cohen: That’s Michael Abramoff. He’s a professor of ophthalmology and a founder of an AI company called Digital Diagnostics. He’s going to give an example of catastrophic failure, describing a noisy image, one where the patient’s true condition is slightly obscured, which can cause the algorithm to mistake the noise for lesions or abnormalities.

Michael Abramoff: Catastrophic failure, meaning you show it an image of a patient and it diagnoses correctly in 99% of cases, but now there’s a little bit of noise over the image and it can entirely flip the diagnosis. Whereas if you’re a doctor, or if you’re a biomarker-based algorithm, and you take away lesions, you’re ultimately going to say this is normal, there’s no disease, because in fact there are no lesions anymore; that is graceful failure. These black box algorithms in many cases, and we analyzed these, and by the way his group also analyzed this, have this catastrophic failure totally unexpectedly: you flip the diagnosis when no one can see what the difference is, except if you analyze it numerically. And so, as long as we don’t fully understand these systems, over time we will probably evolve and become more and more comfortable. But right now, there are too many unexplored risks, in my view, to go ahead. Again, yes, maybe on average it does well, but then if you examine this vulnerable group, this population, it shows that it really is bad for this specific group. That is, right now, ethically unacceptable. And so, I think you need to be very careful.

I. Glenn Cohen: Does this mean that physicians should not use AI at all? ‘No,’ Abramoff says, because there are steep costs associated with not using AI as well. 

Michael Abramoff: There are so many health disparities, so many poor outcomes in healthcare right now that in my view, we cannot wait. 

I. Glenn Cohen: Abramoff gives the example of the technology he has developed with his company, Digital Diagnostics. 

Michael Abramoff: So, a big problem in diabetes care is that people go blind from what is called diabetic retinopathy, which is a complication of diabetes and the most important cause of blindness. It’s almost entirely preventable if caught early, before there are symptoms. People of different races and ethnicities have very different outcomes for diabetic retinopathy. A major, and probably the major, source of these health disparities is lack of access in rural communities and in inner-city communities. It’s hard to get a diabetic eye exam, and so people don’t get them. And that leads to preventable blindness and vision loss. What autonomous AI can do is change that, because now you can get a diabetic eye exam from the AI, wherever there’s an outlet, rather than from a retina specialist like me. If you make the diabetic eye exam available to the patient where they already go for diabetes management, because compliance with diabetes management for people with diabetes is very high, up to 95%, then within minutes they can get this diabetic eye exam, rather than having a referral and then not going.

I. Glenn Cohen: Abramoff suggests that to unlock the potential of AI in healthcare, including in addressing health disparities, the company behind the technology might choose to take on the relevant liability itself and thus protect the physicians who use the product. 

Michael Abramoff: By saying ‘we assume the liability for the performance,’ we could move past that. I think we were the first to address liability for autonomous AI, meaning we, as a company, have said for our products, ‘we assume liability for the performance of the AI,’ just like, if you’re a physician, you assume liability for the decisions you make as a medical professional. And so it couldn’t be that, as an AI company, you avoid that liability, like I’ve seen some others do. And it’s interesting to see that the American Medical Association actually now has in its policy the requirement that autonomous AI creators should assume such liability.

I. Glenn Cohen: Importantly, Abramoff is talking about autonomous AI, the kind closer to the end of the continuum where there is no human physician in the loop. But Abramoff says there’s more to fostering the adoption of AI than merely allocating liability.

Michael Abramoff: All the stakeholders in healthcare need to be able to feel comfortable about using this: payers, including CMS; regulators like NCQA; physicians; and patient organizations like the American Diabetes Association, which has made this part of its standard of care. All of them need to be comfortable for this to actually reach patients, and that’s happening. And so it is so important to address these things as we go, within a well-considered ethical framework.

I. Glenn Cohen: Because AI has such great potential, it might be worth taking on some level of risk to continue to develop and use it. But each stakeholder’s level of comfort will differ, which leaves room for disagreement in how AI should be used. Knowing this, how might liability play out in practice moving forward? 

Nic Terry: I think you’re going to see quite a lot of action in the products liability area. We know, apart from a few goofy cases from decades ago dealing with maps and things, that software is a product as far as products liability is concerned. But increasingly, we will see members of the distribution chain that perhaps had not always been directly implicated being held more responsible.

I. Glenn Cohen: In addition to products liability, Nic points out institutional or corporate liability as increasingly relevant. 

Nic Terry: Theories that we’ve talked about for 50 years now, such as institutional or corporate liability of healthcare institutions for actual health care, those will be resolved in favor of liability, because the institution is so tied up with the implementation, the training, and indeed the purchase decisions, which involve risk analysis of these machines.

I. Glenn Cohen: What Nic is saying is that because different stakeholders other than clinicians, such as hospitals, have responsibilities in training and vetting tools, we might see liability fall on them as well. Still, we may not be able to figure out who is liable all the time. There are still many unanswered questions, like, how will courts and regulators draw a line between the developers of AI and the clinicians involved in patient care? Let’s hear from Michael again. 

Michael Abramoff: This has not been litigated yet. I see it as: if you say that the AI is autonomous, then a medical decision is made by the computer and there’s no doctor involved in that medical decision. How can you make a medical doctor, or anyone else, liable for the decision when they’re using the AI precisely because they’re not comfortable making the decision by themselves? If it is used according to its labeling and according to its recommendations, then the performance or the accuracy of the AI is on the creator rather than on the user. If now you’re a physician using the AI, and it tells you the patient has the disease, and you decide to override it, ignore that, and not refer the patient or not manage the patient appropriately, that is not on the AI company. That is not where the AI company has control.

I. Glenn Cohen: But some situations might be even less clear-cut. Those situations, Ravi suggests, might underscore the need for a whole new system to address liability in the case of AI.

Ravi Parikh: You can imagine that with, say, a black box algorithm, an algorithm that doesn’t make the sources and inputs of its prediction well known, it can be very difficult to determine liability in the case of an error related to it. Traditional actors that are meant to determine liability aren’t really well equipped to judge who’s liable and who is not in those cases, because it’s not like using a drug or medical device, where a lot of the specifications and a lot of the evidence base are well known. A lot of that information just isn’t available in the case of AI. And so I think that ideas like specialized adjudication systems for AI-based liability, particularly for black box AI-based liability, are ideas that we really ought to be thinking about. I think that ideas like having established regulatory benchmarks that play a role in how liable a physician is or is not for an AI-based error, those are relatively good ideas. And then I think the idea of potentially not subjecting certain types of AI to traditional liability, but having a predetermined, balanced allocation of liability in the case of a specific AI error, that also could be necessary.

I. Glenn Cohen: As in other new and developing areas of the law, regulators might need to move away from the traditional model of liability. Some even think that liability should be socialized, as we do for vaccine manufacturers: they pay into a common fund that compensates patients when a vaccine causes harm, without requiring patients to show fault. Given the different ways a liability regime could be structured, there’s still a lot of uncertainty about how it will develop. Let’s hear Nic’s thoughts.

Nic Terry: I think we won’t know a lot about the correct answers until we see the best practices being published, the protocols being published. And again, I think while they may be out there at the moment, what we need is transparency. And if there’s transparency, then we can maybe trust those answers. And then we’ll know more about where the liability line is. 

I. Glenn Cohen: Our experts, Nic, Ravi, and Michael, all seem to agree: traditional notions of liability are ripe for change when it comes to new AI-based technologies. For now, these technologies are so new that we have more questions than answers about what an updated liability regime might look like. But as Michael explained, at its best, updating the system could spur the widespread adoption of AI-based technology and promote more accessible and equitable healthcare.

I. Glenn Cohen: If you liked what you heard today, check out our blog ‘Bill of Health’ and our upcoming events. You can find more information on both at our website, petrieflom.law.harvard.edu. And if you want to get in touch with us, you can email us at petrie-flom@law.harvard.edu. We’re also on Twitter and Facebook @petrieflom, no dash there. 

Today’s show was written and produced by Chloe Reichel. Nicole Egidio is our audio engineer. Melissa Eigen provided research support. We also want to thank Nic Terry, Ravi Parikh, and Michael Abramoff for talking with us for this episode. 

This podcast is created with support from the Gordon and Betty Moore Foundation and the Cammann Fund at Harvard University. 

I’m Glenn Cohen and this is Petrie Dishes. Thanks for listening.
