Artificial Intelligence in Health Care: Are the Legal Algorithms Ready for the Future?

Contributed by Dr. Anastasia Greenberg

Big data has become a buzzword, a catch-all term for a digital reality that is already changing the world and replacing oil as the newest lucrative commodity. But it is often overlooked that big data alone is useless – much like unrefined oil. If big data is the question, artificial intelligence (AI) is the answer. The marriage between big data and AI has been called the fourth industrial revolution. AI has been defined as “the development of computers to engage in human-like thought processes such as learning, reasoning and self-correction”. Machine learning is a field of AI concerned with extracting patterns from complex, multidimensional [big] data. Although we tend to picture robots when we think of AI, machine learning algorithms – software, not hardware – are the real game changer: they can sort through massive amounts of information that would take a human centuries to process.

Machine learning software can harness the useful information hidden in big data for health care applications || (Source: Flickr // Merrill College of Journalism Press Releases)

While machine learning is already used in fields like finance and economics, the health care industry is the next frontier. In both Canada and the United States, about three-quarters of physicians have now switched to Electronic Health Records, which digitize medical records into a form of big data. The ability to use such data for medical purposes comes with a host of legal and policy implications. The Canadian Federal Budget 2017 includes $125 million in funding for a Pan-Canadian AI strategy, part of which is meant to “develop global thought leadership on the economic, ethical, policy and legal implications of advances in artificial intelligence.”

How Does a Machine “Learn”?

Machine learning essentially trains a program to predict unknown and/or future values of variables in a similar way to how humans learn. Imagine that you are visiting a friend’s house and you see a furry creature run past you. Automatically, you identify this creature as a cat, not a small dog, even though both are furry animals, and despite never having come across this specific cat before. Your past learning experiences with other cats allowed you to make an inference and classify a novel creature into the “cat” category. This is exactly the kind of task that used to be impossible for computers to perform but has become a reality with machine learning. A computer program is first fed massive amounts of data – for instance, images of biopsies that are pre-classified into categories such as “tumour” and “no tumour” (this procedure is called “supervised” training). The computer then learns to recognize complex patterns within the data, enabling it to classify novel images as either tumour or no tumour.
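For readers curious what “supervised” training looks like in practice, here is a minimal sketch in Python. It is purely illustrative: it uses the scikit-learn library and a synthetic feature matrix standing in for biopsy images, neither of which comes from the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for pre-extracted features of biopsy images
# (in practice these might be pixel statistics or deep-network embeddings).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))            # 1,000 "images", 20 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # 1 = "tumour", 0 = "no tumour"

# Supervised training: the labels are provided up front.
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The trained model can now classify images it has never seen before.
print("accuracy on novel images:", model.score(X_new, y_new))
```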

A machine can be trained to classify health care data for various purposes such as the detection of tumours in biopsy images. || (Source: Flickr // Markus Spiske)

Artificial Intelligence in Health Care

Machine learning in the health care context holds a lot of promise for diagnosis, disease onset prediction, and prognosis. Since misdiagnoses are the leading cause of malpractice claims in both Canada and the United States, machine learning could greatly diminish health care and legal costs by improving diagnostic accuracy. With these promises in mind, the research literature on AI applications for health care has taken off in the last few years. Machine learning has been shown to successfully diagnose Parkinson’s Disease using Magnetic Resonance Imaging (MRI) data, predict ischemic stroke, predict lung cancer prognosis better than pathologists, and predict colorectal cancer patient prognosis better than colorectal surgeons. Perhaps most evocatively, machine learning algorithms trained using Electronic Health Record data can predict suicide up to two years in advance with 80 percent accuracy.

In the private sector, IBM’s Watson, famous for its performance on Jeopardy!, has been reinvented as Watson Health. Watson Health has “read” millions of medical journal articles and understands natural language, allowing it to answer a host of medical questions. Although the system is still in its early stages, IBM hopes that Watson will soon be able to harness data on patients’ laboratory tests, symptoms, and genetic information. This information, processed together with Watson’s medical knowledge, could then be used to effectively diagnose patients and suggest treatment options.

Legal and Ethical Implications of AI in Health Care

In light of the ethical and legal implications, these advances should be both celebrated and received with caution. To understand the issues at stake, it is important to comprehend how these algorithms work. Machine learning can take in complex data made up of billions of data points; the more complex the input data the machine is trained on, the more information it has available for making accurate predictions – but also the higher the risk of overfitting. Overfitting occurs when an algorithm becomes very good at modelling the current dataset but cannot successfully generalize to a new dataset – the data used to make decisions about new patients.
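A minimal sketch of what overfitting looks like in practice, again using scikit-learn and synthetic “patient” data as illustrative assumptions: an unconstrained model can memorize its training set while performing far worse on data standing in for new patients.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Noisy synthetic data: the labels are only weakly related to the features,
# so a model that memorizes the training set cannot generalize well.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 50))
y = (X[:, 0] + rng.normal(scale=2.0, size=400) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# An unconstrained decision tree can fit the training data almost perfectly...
overfit = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)
print("training accuracy:", overfit.score(X_train, y_train))     # close to 1.0
print("new-patient accuracy:", overfit.score(X_test, y_test))    # much lower
```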

A further complication is that because the machine learns the data’s patterns somewhat autonomously, the process is a black box: it is difficult to understand which features within the data are most important and what kinds of complex nonlinear relationships the machine is modelling to make its predictions. That is why diligent machine learning researchers test their algorithms on new datasets to figure out how reliable and generalizable they are. In any case, a high level of expertise is required and some level of error is inevitable.
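The two safeguards mentioned here – testing on data the model has never seen, and probing which features it relies on – can also be sketched in code. The snippet below is illustrative only (scikit-learn and synthetic data are assumptions); permutation importance is one common, if partial, way to peer into the black box.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic classification data with a handful of genuinely informative features.
X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

model = RandomForestClassifier(random_state=2).fit(X_train, y_train)

# 1) Reliability check: evaluate on data the model never saw during training.
print("held-out accuracy:", model.score(X_test, y_test))

# 2) Partial look inside the black box: how much does shuffling each feature
#    hurt performance on the held-out data?
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=2)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```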

The Current Legal Algorithms

So, what happens when a patient is misdiagnosed using AI? How would the current legal structure deal with such a scenario? One possibility is that such a situation would fall under the default fault-based regime (or “algorithm”) under the Civil Code of Quebec (CCQ, art. 1457), analogous to the tort of negligence in common law Canada. This would imply that the physician using AI technology may be liable for its misdiagnosis. However, AI does not seem well suited for such a regime. As discussed above, the algorithm is going to be more accurate on average than a physician. Thus, once AI in health care becomes commonplace, it would be difficult to find the physician negligent, especially when the physician is not the one writing the code. Given that the accuracy would be an improvement over physician knowledge alone, the physician “expert standard” (where a physician’s actions are compared to her peers) may shift to require the use of such technology.

A physician may soon be able to use AI for diagnosing patients and suggesting treatment options || (Source: Flickr // Hamza Butt)

Another option would be to fit AI within the realm of product liability, which, under the CCQ (art. 1468, 1469), is a form of strict liability, meaning that the evidentiary burden would be placed on the defendant – in this case the manufacturer of the AI software – rather than on the patient. This gives the patient a “leg up” in such a litigation scenario. In line with this idea, in 2017 the European Parliament’s Committee on Legal Affairs submitted a motion to the Parliament calling for law reform on the issue of robotics (which falls under the AI umbrella) to implement a strict liability regime. Under the CCQ, the rule governing the autonomous act of a thing (art. 1465), which establishes a presumption of fault and thus comes close to strict liability, is another possibility, since AI learns semi- (or even fully) autonomously. This would again implicate the physician, as the guardian of the AI.

The issue with a strict liability regime for AI is that, unlike with a traditional defective “product”, a human (the physician) and the AI will work together in the decision-making process, making it difficult to point the finger at the manufacturer, the software developer, or the physician alone. For example, a recent study showed that when presented with images of lymph node biopsies for detecting metastatic cancer, AI was better at finding hard-to-detect cases while the human physician was better at rejecting false positives (correctly ruling out a cancer diagnosis). When the human and AI skills were combined, metastatic cancer detection reached 99.5% accuracy. Another issue comes from the classic economic argument that strict liability can hamper innovation.

AI does not easily fit into any existing private law regime for compensating the patients who will inevitably suffer harm from the small but unavoidable number of errors. Another possibility would be to remove AI from the private law system and create dedicated compensation schemes. Vaccines offer a precedent: like AI, they are the utilitarian choice because of their low complication rates, and they likewise do not fit easily into negligence-based regimes. For this reason, the United States has a National Vaccine Injury Compensation Program and Quebec has a similar provincial program.

AI, along with personalized big data, is projected to be the next disruptive technology that will transform medicine by allowing physicians to see into the future. With great computing power comes great responsibility to develop the right legal and regulatory “scripts” for the implementation of AI in health care.

Anastasia Greenberg is a second-year student in the B.C.L./LL.B. program at McGill University’s Faculty of Law and is the Executive Online Editor of the McGill Journal of Law and Health. Anastasia holds a PhD in Neuroscience from the University of Alberta.
