By Sara Gerke and Joshua Feldman
Walking her bike across an Arizona road, a woman stares into the headlights of an autonomous vehicle as it mistakenly speeds towards her. In a nearby health center, a computer program analyzes images of a diabetic man’s retina to detect damaged blood vessels and suggests that he be referred to a specialist for further evaluation – his clinician did not need to interpret the images. Meanwhile, an unmanned drone zips through Rwandan forests, delivering life-saving vaccines to an undersupplied hospital in a rural village.
From public safety to diagnostics to the global medical supply chain, artificial intelligence (AI) systems are increasingly making decisions about our health. Legislative action will be required to address these innovations and ensure they improve wellbeing safely and fairly.
In order to draft new national laws and international guidelines, we will first need a definition of what constitutes artificial intelligence. While the examples above underscore the need for such a definition, they also illustrate the difficulty of the task: What do self-driving cars, diagnostic tools, and drones uniquely have in common?
What is AI?
In 1955, four giants of 20th-century computer science – John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon – proposed that they could simulate intelligence in a machine. This proposition led to the Dartmouth Summer Research Project on Artificial Intelligence, a summer-long brainstorming session on the topic, thereby coining the term “artificial intelligence.”
Since then, the term AI has entered the common lexicon. But ask someone to define AI, and you will find there is no universally accepted answer. One of the key challenges in legislating AI will be to define it.
Several bills related to AI have been introduced in Congress over the past 16 months. Most discuss AI without defining it. However, the FUTURE of Artificial Intelligence Act of 2017, the AI JOBS Act of 2018, and the National Security Commission Artificial Intelligence Act of 2018 contain explicit definitions.
Strikingly, the three offer similar explanations.
A Modern Approach
The three bills base their definitions largely on Stuart Russell and Peter Norvig’s textbook Artificial Intelligence: A Modern Approach. Russell and Norvig classify AI into four categories:
- Thinking Humanly,
- Thinking Rationally,
- Acting Humanly, or
- Acting Rationally.
The first and third categories follow a human-centered approach, whereas the second and fourth follow a rationalist approach. In contrast to humans, who (unfortunately for us) make mistakes, systems are rational if they do the “right thing,” given what they know. Each approach has a variant concerned with thought processes and reasoning (the first two categories) and a variant concerned with behavior (the last two categories).
However, these four categories are insufficient.
Thinking humanly or rationally
Russell and Norvig explain that a machine would think like a human if a correct theory of the human mind were implemented in code. Alternatively, a machine would think rationally if it used logical reasoning to determine its behavior.
We argue that neither case can truly be described as thinking.
The major challenge in defining AI in terms of how machines think is that such a claim is unfalsifiable. In his famous essay “Computing Machinery and Intelligence”, Alan Turing concludes that, since we can never actually get inside someone else’s head, the assumption that other humans think is simply a “polite convention”. If we cannot prove that another human is thinking, how could we do so for a machine?
We will not go into the ongoing controversy surrounding Turing’s paper. The point is that, regardless of whether machines can actually think, the burden of proving such activity is too high, since doing so appears to be impossible. In practical terms, self-driving cars, diagnostic tools, and drones cannot be shown to be thinking, yet they should still be classified as AI.
The bills cited above offer neural networks as an example of systems that think like humans, but this is incorrect. While neural networks may have been inspired by human brain cells, they are much closer to fitting a line to data than to anything resembling cognitive activity. Even if one were to perfectly replicate the human brain in code, the machine would be imitating the behavior of the brain rather than thinking like a human. One would still have no way of proving that the neural network thinks.
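To make this concrete, here is a minimal sketch (our own illustration, not code drawn from the bills or the textbook): a “neural network” stripped down to a single linear neuron, trained by gradient descent, does nothing more than fit a line to data.

```python
# A single-neuron "network" learning y = w*x + b by gradient descent:
# in other words, fitting a line to noisy data points.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=100)  # noisy line to recover

w, b = 0.0, 0.0   # the network's only two parameters
lr = 0.1          # learning rate
for _ in range(500):
    pred = w * x + b                        # forward pass
    grad_w = 2 * np.mean((pred - y) * x)    # gradient of mean squared error w.r.t. w
    grad_b = 2 * np.mean(pred - y)          # gradient w.r.t. b
    w -= lr * grad_w                        # parameter updates ("learning")
    b -= lr * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # roughly 3.00 and 0.50: a fitted line, not thought
```

The “learning” here consists of nudging two numbers until the line matches the data; larger networks add more parameters and a more flexible curve, not a mind.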
Acting humanly
While defining AI in terms of acting rather than thinking is more verifiable, it suffers from similar problems. Following Russell and Norvig, if we make the strong requirement that a computer’s actions must be indistinguishable from human actions, we arrive at Turing’s definition of computer intelligence (known as the Turing test). No system has passed this test to date, despite claims to the contrary from several AI developers.
If we lower our standard to say that a machine must act like a human only in one respect, then we arrive at the surprising conclusion that most software constitutes AI. For example, consider the calculator. A calculator acts like a human in that it can perform simple arithmetic tasks. In fact, it is much better than us at doing so. Does this mean that a calculator is intelligent? Maybe – but this would mean that future regulation applying to AI would also include calculators, which might not be the legislator’s intention.
Acting rationally
Russell and Norvig define an agent to be “just something that acts” and a rational agent to be “one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.”
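In expected-utility notation (our own gloss, not language from the bills or the textbook), this roughly says that a rational agent should choose

$a^{*} = \arg\max_{a \in A} \sum_{s} P(s \mid a)\, U(s)$

where $A$ is the set of available actions, $P(s \mid a)$ is the probability that action $a$ produces outcome $s$, and $U(s)$ is the utility of that outcome.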
Once again, this criterion is unverifiable. To assess whether an agent is achieving the actual or expected optimal outcome, one would have to know that outcome (or, in the case of expectation, the distribution of outcomes) beforehand. Confirming that a given result is the best possible is impossible, since another course of action might always have led to a better one. We cannot be sure that autonomous vehicles, diagnostic tools, and drones are acting rationally because we lack an omniscient view of the actual or expected best course of action.
What Should Be Regulated?
The definitions of AI provided in the bills mentioned above are first attempts to nail down an elusive concept. Defining AI in terms of thinking humanly, thinking rationally, or acting rationally is too narrow, while defining it in terms of acting humanly is too broad.
Though autonomous vehicles, diagnostic tools, and drones may all have some semblance of intelligent behavior, this post demonstrates that defining precisely what constitutes this intelligence is challenging. Under the precision and clarity demanded by law, our usual understanding of AI falls apart. Legislators need to consider whether regulation should be anchored to a definition of AI or whether a different framework, such as algorithmic decision-making or machine learning, would be a better approach.
This blog post was inspired by the presentation “AI in Drug Discovery and Clinical Trials,” given by Sara Gerke on October 24, 2018, at the conference “Drug Pricing Policies in the United States and Globally: From Development to Delivery.”
Sara Gerke’s research is supported by a Novo Nordisk Foundation grant for a Collaborative Research Programme (grant agreement number NNF17SA027784).