Dr. AI Will See You Now: Should We Tell Patients When AI is Being Used in Their Care?
Summary
What happens when robots, AI, and big data enter the hospital? Glenn Cohen (a professor and deputy dean at Harvard Law School) is unpacking that question in this exploration of biotechnology, ethics, medical law, and health care policy. Each week, he’ll interrogate a single technology – such as digital pills, AI-powered decision support algorithms, or digital health apps – through the lens of ethical concerns like informed consent, liability, and privacy.
Episode
Transcript
Cynthia Chauhan: I am not anti-AI, but I am cautious about AI. I think patients need to be engaged and aware and part of the decision-making about whether or not to use it.
I. Glenn Cohen: I’m Glenn Cohen. I’m the faculty director of the Petrie-Flom Center, the James A. Attwood and Leslie Williams Professor of Law, and a deputy dean of Harvard Law School, as well as your host. You’re listening to Petrie Dishes, the podcast of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.
We’re taking you inside the exploding world of digital health, one technology at a time.
Increasingly, artificial intelligence, or AI, is being used in hospitals without patients’ knowledge of its use in their care, let alone their consent. For example, AI modeling is being used to inform decision making around discharging hospitalized patients. Other existing AI-powered decision support tools include an algorithm that predicts the likelihood that a patient will develop sepsis and an algorithm that predicts whether a cancer patient will die in the next six months.
Hospitals and clinicians are, in many cases, deploying this technology without disclosing to patients that it’s being used. This episode will delve into the issues around the technology itself. Does it work? How much latitude do physicians have to disregard AI recommendations? We’ll also cover thorny questions that arise as AI infiltrates the world of patient care.
Are patients unwitting research subjects? Under what circumstances does use of AI erode patients’ trust in doctors? In particular, we’ll explore the bioethical principle of informed consent as it intersects with the use of artificial intelligence. Essentially, informed consent requires that patients and study participants are aware of and consent to any research or clinical care they’re participating in.
And this includes being informed of potential risks and benefits. Ravi Parikh is a professor at the University of Pennsylvania Perelman School of Medicine, and the executive director of the Penn Center for Cancer Innovation. He’s directed research on the use of AI to predict mortality for patients with cancer.
First, let’s understand the algorithm he and his team developed. What is its purpose? How will it augment clinical care?
Ravi Parikh: Our work in developing predictive algorithms to predict mortality came about largely from an operational priority of the health system, especially for patients with cancer. We tend to overutilize resources near the end of life, and that’s not unique to our health system; that’s common across health systems for patients with cancer. And so there’s been a decades-long effort to integrate things like early conversations about goals of care and end-of-life preferences, and palliative care, early in the course of someone’s illness.
But the issue is that those are resource constrained, and oftentimes they’re only done very late, when someone is close to death. And so our hypothesis was that by integrating algorithms at the point of care, and using them as a decision support tool for clinicians to flag patients who may benefit from an earlier conversation or an earlier palliative care consult, we’d be able to shift a lot of resources away from the current standard of aggressive care near the end of life.
I. Glenn Cohen: Has the algorithm actually achieved its aims?
Ravi Parikh: The results of our algorithm-based work have been really promising to date. Through a randomized trial, one of the first randomized trials of a machine learning-based predictive algorithm in routine clinical care, we’ve shown that implementing an algorithm-based intervention in routine oncology care more than quadruples rates of end-of-life and serious illness conversations among patients with cancer. And that’s important because it really starts the conversation with a patient about what they want near the end of life and makes sure that their preferences are integrated throughout the course of their care.
I. Glenn Cohen: You might be wondering, how do predictive algorithms like this work? Ravi’s going to explain.
Ravi Parikh: An algorithm generates predictions about whether someone is likely to die in six months in much the same way that algorithms used in practice generate other types of predictions, including how likely someone is to be readmitted to the hospital or how likely someone is to suffer a heart attack.
It takes a set of data from prior to when someone interacts with the healthcare system. So say someone has utilized the healthcare system in a variety of ways over the past six months. An algorithm can take data over those prior six months, including hospitalizations, comorbidities, laboratory values, and, in our case, some cancer-specific data, and it can ingest that and build it into a model, in our case a machine learning model, to try to predict downstream outcomes. And so to build that algorithm, you need historical data, so you can try to simulate what those retrospective inputs would need to be and what the future-looking outcome is. And so that’s how you can build your model.
But then you have to make sure that you’re only using the level of data that you would have access to in real time when you build that model, so it’s as accurate as it can be when it’s applied in a real-world setting.
And so that was the process we followed: build an algorithm; decide on things like what threshold to flag a patient as high risk by talking to physicians, nurse practitioners, and other advanced practice providers to figure out what would be useful to them; and then pilot that algorithm in clinical practice to see whether it would be acceptable to clinicians.
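For readers who want to see the shape of the pipeline Ravi describes, here is a minimal sketch in Python. It is illustrative only and runs on simulated data; the feature set, the gradient boosting model, and the 0.4 risk threshold are assumptions for the sake of example, not the Penn team’s actual implementation.

```python
# Minimal, illustrative sketch of the pipeline described above:
# train on retrospective data (features from the six months before an
# encounter, with a known six-month mortality outcome), validate, then
# flag high-risk patients with a threshold chosen with clinician input.
# All feature names, the model choice, and the threshold are assumptions.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in retrospective data: one row per encounter, using only
# information that would be available in real time at the point of care.
n = 5000
X = np.column_stack([
    rng.poisson(1.0, n),      # hospitalizations in prior 6 months
    rng.integers(0, 10, n),   # comorbidity count
    rng.normal(3.5, 0.6, n),  # a lab value, e.g., albumin (g/dL)
    rng.integers(0, 2, n),    # cancer-specific flag, e.g., metastatic disease
])
# Simulated outcome: died within 6 months of the encounter (1) or not (0).
logit = -3.0 + 0.5 * X[:, 0] + 0.2 * X[:, 1] - 0.6 * (X[:, 2] - 3.5) + 1.0 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Validate on held-out encounters before any clinical pilot.
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# At the point of care: score the patient and flag if above the threshold.
RISK_THRESHOLD = 0.4  # assumed value; in practice set with clinician input

def flag_high_risk(features):
    risk = model.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]
    return risk, risk >= RISK_THRESHOLD
```

The design choices in this sketch mirror what Ravi outlines: use only features that would be available in real time, validate on held-out encounters before piloting, and set the flagging threshold in consultation with the clinicians who will act on the alerts.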
I. Glenn Cohen: Prediction models built with machine learning can sometimes be astonishingly accurate, but often their results are crude, lacking adequate context, or just plain raw. Fortunately, in many cases, even though AI facilitates medical decision making, the physician is still in command and can catch mistakes.
But we’re now seeing artificial intelligence work its way into more automated medical processes. When machine learning becomes more central in making clinical judgments, what are clinicians’ obligations to their patients? Does informed consent mean they must disclose any use of AI, even in the most inconsequential of automated processes?
Ravi Parikh: I think the decision about whether to disclose that an algorithm is the one making a prediction is a really tricky one. In my practice, and in a lot of clinicians’ practices, we’re never making prognoses by telling patients, well, you have this data point, and so that’s why I think you have X percent chance to live, or, this scan shows this, so that’s why I think you have this percent chance of dying in the next six months.
We’re taking in information all the time to come up with an oftentimes uncertain prognosis, or a range of dates for a prognosis. And so there’s a lot of uncertainty in what even the algorithm spits out that we’re never going to be able to fully capture when we communicate that prediction to the oncologist.
And so it’s for that reason that I think even if we were to entertain disclosing an algorithmic prediction to a patient, it would need to be couched in a lot of terms around uncertainty that are really difficult to communicate. And so that’s why we say treat this like you would an X-ray result or a laboratory value: as a data point that could prompt you to have a conversation, but that is likely not the be-all and end-all determinant of whether you’re going to have that conversation, because there’s a variety of other factors going on.
I. Glenn Cohen: Ravi goes on to suggest that if AI is just one factor of many considered by a doctor, the obligation to secure informed consent is less weighty.
Ravi Parikh: So I think that’s the approach we had to take for these decision support tools: if it’s not the be-all and end-all determinant of a clinical decision, I often don’t think it’s subject to a lot of informed consent-type rules.
Now, as algorithms get more integrated into automated workflows that are actually being used to direct decision-making, like automatic X-rays for anyone flagged above some algorithm-based threshold, you may imagine that that goes from decision support to actual decision making. Then there may be some informed consent provisions that need to be included, but I don’t think we’re there yet for the vast majority of algorithms used in clinical practice, because in nearly all cases they still need a clinician.
I. Glenn Cohen: Now, what do patients make of all these developments? What are their views on informed consent involving medical AI?
Cynthia Chauhan: I think AI is sort of the exciting new kid on the block that some people are rushing into without judgment, and we may have some repercussions from that later. On the other hand, I have heart failure and I have atrial fibrillation, and I know that AI is being developed to help understand those better and to help with intervention.
I. Glenn Cohen: That’s Cynthia Chauhan, patient advocate at the Heart Failure Society of America. She presents a somewhat different perspective on how information and responsibility should be partitioned between the physician and the patient when it comes to medical AI.
Cynthia Chauhan: Regarding my thoughts about AI: I see myself as a partner in my care, not a recipient of care. And as a partner in my care, I have a right to be a part of all of the decision-making. I have a right to say yes, I will have the surgery, or no, I will not.
And I know that there’s an AI stethoscope being developed that will be very helpful. What is wrong with telling me about that? What is wrong with making me, or helping me, to be a very informed participant in my care decisions? I do not want to be done unto. I want to be consulted with and done with.
And I know not all patients share that view, but for those of us who believe that knowledge empowers us, it also empowers our physicians to make better care plans that we are more likely to be compliant with.
So, yeah, I think you can never give me too much knowledge.
I. Glenn Cohen: One tough aspect of this: how do we ensure that patients fully understand the information?
Cynthia Chauhan: I believe informed consent forms should be written in lay language. They should truly be made to inform the patient and not to excuse the providers. I think “informed” should be underlined and capitalized. The patient really needs to know what they’re signing, and I know many patients think, well, my doctor recommended it, so I’ll do it.
That’s not good enough. They need to understand what they’re agreeing to.
I. Glenn Cohen: For Cynthia, the failure to disclose amounts to a failure to respect the patient as a person, as an end in herself.
Cynthia Chauhan: If I found out after the fact that AI was being used in my case, I would question why. What was important about doing this that you did not feel it appropriate to share it with me? I am not here to be taken care of. I am here to be a participant in, and a partner in, my care. And when you make decisions without me, you are disavowing the importance of my role in my own care.
I. Glenn Cohen: How well do Cynthia’s insights square with the reality of informed consent law today? Do patients have a legal right to be informed about the use of AI in clinical encounters? For some insights, let’s hear from Nicolas Terry, professor of law at Indiana University McKinney School of Law, and Executive Director of the Hall Center for Law and Health.
Nic Terry: I think the knee-jerk response to that question, do patients have a right to know their care is being influenced by AI? The knee-jerk response has to be yes, of course. Which is then swiftly followed by, one: hold on, I’m a lawyer, so the answer must be, it depends. Two: well, just how many, to borrow the Monty Python expression, how many machines that go ping are there in the modern health care facility, and every time you use some kind of machine, are we going to have to go through risk identification and notification? Presumably the answer is no. And then third: that knee-jerk response of yes, because we always inform patients about risks, probably has to be qualified with, well, just how robust is our informed consent when it comes to non-machine risks? Do we really have the informed consent mechanisms there that we think we do? Or is it much more a matter of the clinician just going through check boxes, and nothing too deep?
I. Glenn Cohen: As with most novel legal questions, informed consent to AI use in healthcare is far from settled. With that in mind, Nic is going to share his thoughts on the kinds of informed consent cases that might be litigated in the near future.
Nic Terry: The cases we will see earliest are sort of the exceptions and the edge cases. So if you catch clinicians actually being experimental in the true sense, such that the care should have been subject to human research protocols and IRB approval, then I think you’re going to see some activity, but actually you won’t hear about it, because it will be settled very quickly.
The same, I think, would be the case if the technology was not FDA approved. In those kinds of cases, a clear exception to the default of no informed consent applies. And then, beyond the exceptions, there are the edge cases: will there be cases where the clinical staff have actual knowledge of particular risks associated with their machine that they didn’t disclose? Or perhaps where the staff involved have not been trained on this particular piece of equipment? Then, just like the inexperienced-physician informed consent cases, you might get some edge cases like that coming up.
I. Glenn Cohen: How courts will handle these cases remains to be seen. But Nic offers some predictions.
Nic Terry: Informed consent courts have always tried to stick with the core idea of reasonable care and not get into detailed prescriptive activity. Perhaps the best example of that is patient mortality and morbidity rates associated with particular diseases or particular treatments.
When patients have asked for the disclosure of those specific numbers and tables and so on, the courts have tended to pull back and say, no, it has to be just what is reasonable in the circumstances. And so I think courts will be hesitant to prescribe, or to be prescriptive about, detailed information on the types of risks or the risk occurrences, if you like. I think they’ll be hesitant to do that. They’ll look to try and find some kind of less prescriptive, more generally framed reasonable care standard, dependent typically on expert testimony.
I. Glenn Cohen: In other words, Nic is skeptical that the courts will set hard and fast rules as to when to disclose the use of AI in medical decision making, let alone when to share the details of the AI. But there is some tension between that and the ethical aims of informed consent in medicine.
As Ravi told us, these artificial intelligence algorithms have the capacity to improve and facilitate clinical care, but there are risks.
AI is prone to error and bias, and it’s such a new tool that in many respects we don’t know what we don’t know. For Cynthia, a patient advocate, this uncertainty compels transparency. If patients are subjected to the clinical judgements of artificial intelligence, then in her view they must be, at the very least, made aware of their application.
At the same time, with so many artificial intelligence applications cropping up in clinical care, it may be impractical to require the disclosure of each machine learning algorithm to which a patient is subjected. And as Nic said, it is not clear what the informed consent standard compels with this modern technology. It may be some time before we know where the courts are going on these questions.
You can find more information on these topics at our website, petrieflom.law.harvard.edu. And if you want to get in touch with us, you can email us at petrie-flom@law.harvard.edu. We’re also on Twitter and Facebook @PetrieFlom.
Today’s show was written and produced by James Jolin and Chloe Reichel. Nicole Egidio is our audio engineer. We also want to thank Ravi Parikh, Cynthia Chauhan, and Nicolas Terry for talking with us for this episode.
I’m Glenn Cohen, and this is Petrie Dishes. Thanks for listening.
Created with support from the Gordon and Betty Moore Foundation and the Cammann Fund at Harvard University.