Robert David Hart (quoting W. Nicholson Price II, Academic Fellow Alumnus)
Quartz
September 10, 2018

Read the full article

From the article:

Determining the levels of legal responsibility for AIs as a whole is a fairly new area and one that has yet to be seriously tested in court. What’s more, in a health care context, AIs’ current status as “decision aids” makes it difficult for anyone to test their medical liability in the court system. “At the moment, it all looks quite uncertain and up in the air,” says Nicholson Price, an assistant professor of law at the University of Michigan. The first serious challenges related to the legal liability of artificial intelligence are most likely to involve autonomous vehicles, especially as they become more common on the roads.

A side effect of the way machine-learning algorithms work is that many function as black boxes. “There’s an inherent opacity about what exactly can and cannot be known with these systems,” Price says. In other words, it’s impossible to know precisely why an AI has made the decision it has: all we can ascertain is the conclusion, and that the conclusion is based on the information put into the system. Add the fact that many algorithms (and the data used to train them) are proprietary, and it becomes impossible for a health care professional to assess the reliability of the “diagnostic aid” they’re using.
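To make the black-box point concrete, here is a minimal sketch (not from the article) of the situation Price describes: a trained model returns a conclusion and a confidence score but offers no human-readable rationale for any individual decision. The synthetic data, model choice, and variable names below are illustrative assumptions, not anything the article specifies.

```python
# A minimal, hypothetical illustration of a "black box" model: it yields a
# conclusion but no per-decision explanation. All data here are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in "patient" records: 200 samples with 10 unnamed numeric features.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The model reports its conclusion and a probability for a new case...
print(model.predict(X[:1]))        # e.g. [1]
print(model.predict_proba(X[:1]))  # e.g. [[0.07 0.93]]

# ...but nothing in the fitted object says *why* this input produced this
# output. Aggregate measures like feature_importances_ characterize the
# model as a whole, not the reasoning behind any one prediction.
print(model.feature_importances_)
```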


Tags

artificial intelligence, bioethics, biotechnology, health law policy, w. nicholson price ii