Artificial Intelligence in Health Care: Are the Legal Algorithms Ready for the Future?

Posted By Dr. Anastasia Greenberg

Big data has become a buzzword, a catch-all term for a digital reality that is already changing the world and replacing oil as the newest lucrative commodity. But it is often overlooked that big data alone is useless – much like unrefined oil. If big data is the question, artificial intelligence (AI) is the answer. The marriage between big data and AI has been called the fourth industrial revolution. AI has been defined as “the development of computers to engage in human-like thought processes such as learning, reasoning and self-correction”. Machine learning is a field of AI concerned with extracting patterns from complex, multidimensional [big] data. Although we tend to picture robots when we think of AI, machine learning algorithms – software, not hardware – are the game changer, as they can sort through massive amounts of information that would take a human centuries to process.

Machine learning software can harness the useful information hidden in big data for health care applications. || (Source: Flickr // Merrill College of Journalism Press Releases)

While machine learning is already used in fields like finance and economics, the health care industry is the next frontier. In both Canada and the United States, about three-quarters of physicians have now switched to Electronic Health Records, which digitize medical records into big data. The ability to use such data for medical purposes comes with a host of legal and policy implications. The Canadian Federal Budget 2017 includes $125 million in funding for a Pan-Canadian AI strategy, part of which is meant to “develop global thought leadership on the economic, ethical, policy and legal implications of advances in artificial intelligence.”

How Does a Machine “Learn”?

Machine learning essentially trains a program to predict unknown and/or future values of variables in a similar way to how humans learn. Imagine that you are visiting a friend’s house and you see a furry creature run past you. Automatically, you identify this creature as a cat, not a small dog, even though both are furry animals, and despite never having come across this specific cat before. Your past learning experiences with other cats allowed you to make an inference and classify a novel creature into the “cat” category. This is exactly the kind of task that used to be impossible for computers to perform but has become a reality with machine learning. A computer program is first fed massive amounts of data – for instance, images of biopsies that are pre-classified into categories such as “tumour” and “no tumour” (this procedure is called “supervised” training). The computer then learns to recognize complex patterns within the data, enabling it to classify novel images as either tumour or no tumour.
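
To make the supervised training procedure more concrete, here is a minimal, hypothetical sketch in Python using the scikit-learn library. The “biopsy features” and labels are randomly generated stand-ins rather than real image data, and the model and parameters are illustrative choices, not those of any particular study.

```python
# Minimal sketch of supervised classification, assuming scikit-learn is installed.
# The "biopsy features" and labels below are synthetic stand-ins, not real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))              # 1,000 biopsies, 20 image-derived features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # pretend label: 1 = "tumour", 0 = "no tumour"

# Supervised training: the algorithm sees both the data and the correct labels.
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The trained model then classifies images it has never seen before.
predictions = model.predict(X_new)
print("Accuracy on previously unseen biopsies:", (predictions == y_new).mean())
```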

A machine can be trained to classify health care data for various purposes such as the detection of tumours in biopsy images. || (Source: Flickr // Markus Spiske)

Artificial Intelligence in Health Care

Machine learning in the health care context holds a lot of promise for diagnosis, disease onset prediction, and prognosis. Since misdiagnoses are the leading cause of malpractice claims in both Canada and the United States, machine learning could greatly diminish health care and legal costs by improving diagnostic accuracy. With these promises in mind, the research literature on AI applications for health care has taken off in the last few years. Machine learning has been shown to successfully diagnose Parkinson’s Disease using Magnetic Resonance Imaging (MRI) data, predict ischemic stroke, predict lung cancer prognosis better than pathologists, and predict colorectal cancer patient prognosis better than colorectal surgeons. Perhaps most strikingly, machine learning algorithms trained using Electronic Health Record data can predict suicide up to two years in advance with 80 percent accuracy.

In the private sector, IBM’s Watson, famous for its performance on Jeopardy!, has been reinvented as Watson Health. Watson Health has “read” millions of medical journal articles and understands natural language, allowing it to answer a host of medical questions. Although the technology is still in its early stages, IBM hopes that Watson will soon be able to harness data on patients’ laboratory tests, symptoms, and genetic information. Processed together with Watson’s medical knowledge, this information could be used to diagnose patients and suggest treatment options.

Legal and Ethical Implications of AI in Health Care

In light of the ethical and legal implications, these advances should be both celebrated and received with caution. To understand the issues at stake, it is important to comprehend how these algorithms work. Machine learning algorithms can take in complex data made up of billions of data points; the more complex the input data a machine is trained on, the more information it has available for making accurate predictions, but the higher the risk of overfitting. Overfitting occurs when an algorithm becomes very good at modelling the current dataset but cannot successfully generalize to a new dataset – the dataset used to make decisions pertaining to new patients.
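
The risk of overfitting can be illustrated with a toy example (synthetic data, not patient records): an unconstrained model fits its training set nearly perfectly, but its performance collapses on a new dataset drawn from the same source, which is exactly the failure mode described above.

```python
# Toy illustration of overfitting, assuming scikit-learn; all data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 50))        # few "patients", many features
y_train = rng.integers(0, 2, size=200)      # labels unrelated to the features
X_new = rng.normal(size=(200, 50))          # a new dataset from the same source
y_new = rng.integers(0, 2, size=200)

model = DecisionTreeClassifier(random_state=0)   # complexity left unconstrained
model.fit(X_train, y_train)

print("Training accuracy:   ", model.score(X_train, y_train))  # ~1.0: memorized
print("New-dataset accuracy:", model.score(X_new, y_new))      # ~0.5: chance level
```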

Perhaps most strikingly, machine learning algorithms trained using Electronic Health Record data can predict suicide up to two years in advance with 80 percent accuracy.

A further complication is that because the machine is learning somewhat autonomously about the data patterns, the process is a black box. It is difficult to understand which features within the data are most important and what kinds of complex nonlinear relationships the machine is modeling to make its predictions. That’s why diligent machine learning researchers test their algorithms on new datasets to figure out how reliable and generalizable they are. In any case, a high level of expertise is required and some level of error is inevitable.
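
One common safeguard, sketched below under the assumption of a scikit-learn-style workflow, is k-fold cross-validation: the model is repeatedly trained on one portion of the data and scored only on records it did not see, giving an honest estimate of how well it generalizes. The dataset here is a public demonstration set, not Electronic Health Record data.

```python
# Sketch of held-out evaluation via 5-fold cross-validation (scikit-learn assumed).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)              # public demo dataset
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Each fold trains on 80% of the records and is scored on the 20% it never saw.
scores = cross_val_score(model, X, y, cv=5)
print("Held-out accuracy per fold:", scores.round(3))
print("Estimated generalization accuracy:", scores.mean().round(3))
```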

The Current Legal Algorithms

So, what happens when a patient is misdiagnosed using AI? How would the current legal structure deal with such a scenario? One possibility is that such a situation would fall under the default fault-based regime (or “algorithm”) under the Civil Code of Quebec (CCQ, art. 1457), analogous to the tort of negligence in common law Canada. This would imply that the physician using AI technology may be liable for its misdiagnosis. However, AI does not seem well suited for such a regime. As discussed above, the algorithm is going to be more accurate on average than a physician. Thus, once AI in health care becomes commonplace, it would be difficult to find the physician negligent, especially when the physician is not the one writing the code. Given that the accuracy would be an improvement over physician knowledge alone, the physician “expert standard” (where a physician’s actions are compared to her peers) may shift to require the use of such technology.

A physician may soon be able to use AI for diagnosing patients and suggesting treatment options. || (Source: Flickr // Hamza Butt)

Another option would be to fit AI within the realm of product liability, which, under the CCQ (art. 1468, 1469), is a form of strict liability: the evidentiary burden would be placed on the defendant, in this case the manufacturer of the AI software, rather than on the patient. This gives the patient a “leg up” in such a litigation scenario. In line with this idea, in 2017 the European Parliament’s Committee on Legal Affairs submitted a motion calling for law reform on the issue of robotics (included under the AI umbrella) to implement a strict liability regime. Under the CCQ, the rule governing the autonomous act of a thing (art. 1465), which comes close to strict liability (a presumption of fault), is another possibility, since AI learns semi-autonomously or even fully autonomously. This regime would again implicate the physician, as the guardian of the AI.

The issue with a strict liability regime for AI is that, unlike with a traditional defective “product”, a human (the physician) and the AI will work together in the decision-making process, making it difficult to point the finger at the manufacturer, the software developer, or the physician alone. For example, a recent study showed that when presented with images of lymph node biopsies for detecting metastatic cancer, AI was better at finding hard-to-detect cases, while the human physician was better at rejecting false positives (correctly ruling out a cancer diagnosis). When the human and AI skills were combined, metastatic cancer was detected with 99.5% accuracy. Another issue comes from the classic economic argument that strict liability can hamper innovation.
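
Purely as an illustration of the human-plus-machine idea (and not the method used in the study cited above), a combined workflow might let the algorithm and the physician each confirm or veto the other, escalating disagreements for further review. The function and thresholds below are hypothetical.

```python
from typing import Optional

# Hypothetical decision rule combining an AI probability with a physician's read.
# This illustrates the idea of combined judgment only, not the cited study's method.
def combined_call(ai_probability: float, physician_says_cancer: bool) -> Optional[bool]:
    """Return True/False when both readers agree, None when the case needs review."""
    if ai_probability >= 0.9 and physician_says_cancer:
        return True            # both point to metastatic cancer
    if ai_probability <= 0.1 and not physician_says_cancer:
        return False           # both point away from cancer
    return None                # disagreement or uncertainty: escalate, don't guess

print(combined_call(0.95, True))    # True  -> report metastatic cancer
print(combined_call(0.05, False))   # False -> report no metastasis
print(combined_call(0.60, False))   # None  -> send for additional review
```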

AI does not easily fit into any existing private law regime for compensating the patients who will inevitably suffer harm from the small but unavoidable number of errors. Another possibility would be to remove AI from the private law system and create dedicated compensation schemes. Vaccines offer a useful analogy: like AI, they are the utilitarian choice given their low complication rates, and they likewise do not fit easily into negligence-based regimes. For this reason, the United States has a National Vaccine Injury Compensation Program and Quebec has a similar provincial program.

AI, along with personalized big data, is projected to be the next disruptive technology that will transform medicine by allowing physicians to see into the future. With great computing power comes great responsibility to develop the right legal and regulatory “scripts” for the implementation of AI in health care.

Anastasia Greenberg is a second-year student in the B.C.L./LL.B. program at McGill University’s Faculty of Law and is the Executive Online Editor of the McGill Journal of Law and Health. Anastasia holds a PhD in Neuroscience from the University of Alberta.

From Lab to Court: Neuroscientific Evidence and Legal Decisions in Disorders of Consciousness and Beyond

Posted By Dr. Anastasia Greenberg

For many, the term “vegetative state” brings to mind the American case of Terri Schiavo and the decade-long legal battle (1998–2005) surrounding her “right to die”. Terri sustained serious brain damage in 1990 following a cardiac arrest, which led to an eventual diagnosis of a persistent vegetative state. Terri’s husband fought for the right to remove her feeding tube while her parents were desperate to keep her alive, believing she was conscious. Ultimately, Terri’s artificial life support was withdrawn in 2005, stirring an ongoing debate on the difficult ethical and legal questions raised by similar cases. Progress in neuroscience offers hope that we can answer key questions about brain and behaviour with direct relevance for the legislature and the courtroom.

Disorders of Consciousness

A vegetative state can be defined as “wakefulness without awareness”, in which patients show normal sleep-wake cycles (unlike a coma, which is analogous to a deep sleep) but without any evidence of purposeful behaviour connected to awareness of the self or the environment. While wakefulness is straightforward to detect based on sustained eye opening and specific electroencephalogram (EEG) activity, the existence of awareness poses a much more complicated question.

In order to measure consciousness or awareness, we rely on behavioural evidence of “command following” as a proxy to make inferences about mental states. For example, locked-in syndrome patients have lost almost all ability to make voluntary movements but retain the ability to respond to “yes” or “no” questions by moving their eyes or eyelids in a consistent manner. This residual ability to form purposeful behaviour leaves no question that the patient is indeed conscious.

Advances in medical science have changed our understanding of consciousness in patients in a “vegetative” state. || (Source: Flickr // Presidencia de la República Mexicana)

Unfortunately, the difficulty with vegetative state patients is that they do not show any such meaningful behaviour or evidence of language comprehension. These patients will stare into space, move their eyes in an inconsistent manner, and may even burst into laughter or tears; however, none of these behaviours are linked to environmental stimuli.

For a long time, it was believed that such patients were completely unconscious. However, in the last decade this orthodox notion has faced serious scrutiny, regarding at least some of these patients, due in large part to the work of Canadian neuroscientist Dr. Adrian Owen from the University of Western Ontario.

Neural Activity as a Proxy for Behaviour

Dr. Owen’s research method allows certain patients who are labelled as vegetative to communicate solely by modulating their brain activity, recorded using functional magnetic resonance imaging (fMRI). fMRI makes inferences about brain activity indirectly by measuring blood flow, which is temporally linked to neural activity: recently active cells require a fresh supply of oxygenated blood. This allows scientists to gauge, with high spatial resolution, which parts of the brain are involved in various cognitive tasks.

In a notable study, Dr. Owen’s team asked “vegetative state” patients in the fMRI scanner to imagine playing tennis or to imagine walking around their house from room to room. When healthy participants are asked to perform this same task, imagining playing tennis activates a part of the brain called the supplementary motor area (SMA), while imagining walking around the house activates the parahippocampal place area (PPA), which is involved in real and imagined spatial navigation.

Remarkably, a portion of vegetative state patients (17%), diagnosed based on internationally recognized behavioural standards, showed consistent SMA activity when instructed to imagine playing tennis and PPA activity when imagining walking around the house. Even more remarkably, they were then able to use these two forms of imagery to respond “yes” or “no” to questions – with 100 percent accuracy. Using their imagination, this select group of vegetative state patients responded correctly to questions about their own name, their parents’ names, the current year, and so forth.
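
As a schematic sketch of the decoding logic only (not Dr. Owen’s actual analysis pipeline, which involves far more elaborate statistics), the idea can be reduced to comparing activity in the two regions during each question period: tennis imagery (SMA) codes for “yes”, navigation imagery (PPA) codes for “no”. The signal values and threshold below are hypothetical.

```python
# Schematic sketch of yes/no decoding from relative fMRI activation.
# Signal values, units, and threshold are hypothetical illustrations.
def decode_answer(sma_signal: float, ppa_signal: float, threshold: float = 0.5) -> str:
    """Map relative activation during a question period to a yes/no answer."""
    difference = sma_signal - ppa_signal
    if difference > threshold:
        return "yes"            # motor (tennis) imagery dominated
    if difference < -threshold:
        return "no"             # spatial-navigation imagery dominated
    return "inconclusive"       # neither pattern clearly present

print(decode_answer(2.1, 0.3))  # "yes"
print(decode_answer(0.2, 1.9))  # "no"
print(decode_answer(0.8, 0.7))  # "inconclusive"
```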

Ethical Issues

These findings make legal characterizations pertaining to the decision to withdraw nutrition and hydration even more complicated. In a personal communication, Dr. Owen mentioned that one such patient was asked whether he wished to continue living. He responded: “yes”. This is exciting news in the context of legal decision-making; perhaps we could simply ask the fMRI-responsive patients to decide their own fate.

fMRI scans allow doctors to prompt and make inferences about neural activity in patients in a “vegetative” state, in some cases enabling a limited channel of communication. || (Source: Flickr // NIH Image Gallery)

But what can be said for the remaining 83% of patients? Can we conclude that they are simply not conscious, and thus truly fit their derogatory label of “vegetative”? The problem with such a conclusion is one of false negatives. When someone consistently “responds” to high-level questions with their brain activity, we can be sure of their consciousness – arguably to the same extent as someone who is saying “yes” and “no” in plain English (or French).

However, when a vegetative patient fails to show any meaningful fMRI responses, we cannot be certain that they are not conscious. Consider, for example, patients who have lost function in their auditory cortex and thus cannot hear the task instructions or the questions – not to mention the many more nuanced neural complications that may prevent successful performance despite consciousness.

Legal Applications for Neuroscience Data

Dr. Owen’s work has received enormous media attention and, most relevant to the legal context, Dr. Owen recently submitted an affidavit that was admitted into evidence by the Supreme Court of British Columbia (BC) in Ng v Ng (2013). Kenny Ng was involved in a motor vehicle collision that left him in a minimally conscious state (higher functioning than a vegetative state) from 2005 onward. Kenny’s wife, who was entitled to give substitute consent for Kenny under BC’s Patients Property Act (PPA) and Health Care (Consent) and Care Facility (Admission) Act (HCCFA), decided to take Kenny off life support in spite of opposition from his siblings.

In a personal communication, Dr. Owen mentioned that one such patient was asked whether he wished to continue living. He responded: “yes”.

Dr. Owen’s affidavit could not speak specifically to Kenny’s case, given that Kenny never participated in any studies by Dr. Owen’s team. However, it suggested that Kenny could potentially fall into the category of patients with awareness and was a good candidate for further study. Ultimately, though, the court ruled in favour of Kenny’s wife, since she held the decision-making authority pursuant to the legislation and since the removal of feeding was found to be reasonable given the available medical evidence supporting Kenny’s poor clinical prognosis. The court had no legal mechanism by which to order that Kenny be tested by Dr. Owen, a neuroscientist, or to set aside the recommendations of Kenny’s team of medical doctors.

An Eye to the Future

The potential applications of neuroscientific evidence in courtrooms and in end-of-life legislation are increasing rapidly. Publications in the developing field of neuroscience and law, coined “neurolaw”, have spiked since 2006. Both neuroscientists and legal scholars express optimism, but they also emphasize erring on the side of caution when admitting flashy neuroscience into court. While the direct legal relevance of Dr. Owen’s work makes it persuasive in a courtroom setting, it also presents many opportunities for abuse, or innocent misinterpretation, of neuroscientific information.

US courts have admitted brain scans (including fMRI) into evidence in criminal cases involving insanity defenses (called defense of mental disorder in Canada), as well as highly controversial fMRI lie-detection evidence. In Canada, fMRI data has not yet seen its day in court and may raise serious Charter issues in relation to brain privacy. Dr. Owen’s affidavit in Ng v Ng is one of only two Canadian cases to ever mention fMRI in more than an incidental way. In a controversial ruling in Italy, a court reduced a sentence for murder after being presented with neuroscientific evidence in the form of brain scans and genetic evidence that suggested links to poor impulse control.

Legislators and the courts will have to grapple with the risks and benefits of allowing the adducement of neuroscientific evidence before a judge. || (Source: Flickr // Jordan Schulz)

A deep understanding of neuroscientific technology and methodology is invaluable in drawing valid conclusions based on the data presented. A jury may interpret a colourful fMRI image as analogous to an X-ray – being able to “see” brain activity – when in fact the image is created through a series of inferential steps involving complicated statistical analyses performed on the data. These steps are peppered with human decisions about which statistical thresholds are to be used, which behavioural conditions should be compared, and so forth. Concerns over “overclaim syndrome” relate to the persuasive “wow” factor neuroscientific evidence evokes. In one study, mock jurors were more likely to give a verdict of “not guilty” if a defense of mental disorder was presented along with MRI images.

Neuroscientific evidence also has the potential to influence end-of-life legislation, such as BC’s PPA and HCCFA, which were used to transfer consent to Kenny’s wife, by requiring neuroscientific assessments before consent is transferred. Currently, however, such a provision can exist only in the parliament of dreams, as neuroscientific tests of consciousness are far from routine procedures.

Neuroscience and law have begun to converge, developing the field of neurolaw with international neurolaw conferences and societies bringing scholars and practitioners from both disciplines together to explore their mutual interests. Professor Henry T. Greely of Stanford Law School predicts that neuroscience will revolutionize the law: while the consequences of this neurolaw revolution carry serious risks, a future that offers a “window into the mind” may prove more conducive to justice. For those conscious patients trapped behind a “vegetative” label, neuroscientific evidence may provide sufficient weight to tip the scales of justice.

Dr. Greenberg holds a PhD in Neuroscience from the University of Alberta, and recently began studies at the McGill Faculty of Law.