The Curious Dichotomy of (in)Sanities

For most people, the inner workings of the brain and the resulting behaviour work seamlessly, and we behave in “socially acceptable” ways. But what happens when the brain is “faulty”? Take, for instance, the famous case of Phineas Gage, who had a tragic accident at work in which a railroad blasting rod pierced through his skull into his brain. Mr. Gage survived the accident, but his once well-behaved personality was altered to a more chaotic one. He began gambling, and became short-tempered, impulsive, and violent. Mr. Gage had suffered damage to the area of the brain known as the orbitofrontal cortex, which is implicated in emotional regulation, decision making, and impulse control.

Phineas Gage suffered brain trauma from a rod that pierced through his skull || (Source: Flickr // Protocol Snow)

If, for instance, Mr. Gage had killed someone in a barfight, without premeditation, would he by today’s legal standards be found guilty (of second-degree murder or manslaughter), or would he instead be found not criminally responsible on account of mental disorder (NCRMD)? Is he responsible for his aberrant, perhaps uncontrollable, behaviour ensuing from a traumatic brain injury? Or, to the crux of the issue, are humans responsible for their behaviour at all, if it is nothing more than the result of brain function? A biological view of behaviour suggests that regardless of whether someone has a rod in their brain (or a tumour, or another form of severe brain trauma), all of their actions are controlled by their brain’s function.

The Guilty Mind

The above example is an introduction to the deterministic view of human behaviour as dictated by our brain (mal)functioning. Mr. Gage’s actions were determined by a brain that “misfired” as a result of physical trauma, leaving very little room for the concepts of free will and voluntary action. This poses a fundamental challenge to a concept of criminal justice that relies on a crucial principle: that only the morally guilty should be punished. Guilt is determined by a defendant’s level of culpability, or in legal terms, their mens rea, which comes from the Latin phrase actus reus non facit reum nisi mens sit rea, translated as “the act is not culpable unless the mind is guilty.” The basic tenets of criminal justice inform our conception of mens rea: guilt is determined by one’s capacity to distinguish right from wrong and subsequent choice to act in the wrong. There is an assumption of free will and voluntary action. What happens to mens rea if the neuroscientific understanding of brain and behaviour removes free will from the agent? This question has led to such defences as sane automatism, which can lead to full acquittal because it removes voluntary action from the individual.

The Not-so-Guilty Mind

To understand the case below (R v Stone), let me first explain the concept of automatism. There are two types of automatism defences. The first is sane automatism, where the involuntary behaviour does not result from a mental disorder; it is a complete defence, giving rise to full acquittal. For instance, sane automatism could be used as a defence if a defendant had killed someone during a sleepwalking episode. Indeed, the Supreme Court of Canada, in R v. Parks, upheld the trial decision allowing sleepwalking to be used as a sane automatism defence. This brings back the notion of mens rea: only voluntary actions may lead to legal culpability. Generally, the factors required for a sane automatism defence are as follows: (1) there must exist an involuntary action arising from an external source (or a reflex action); (2) the action must be completely involuntary; and (3) the automatism must not be self-induced (which is why excessive alcohol/drug consumption is not a viable defence of automatism).

What happens to mens rea if the neuroscientific understanding of brain and behaviour removes free will from the agent?

The second type is insane automatism, where the actions of the accused are held to be the result of a mental disorder; this triggers s. 16 of the Canadian Criminal Code, leading to the defendant being found NCRMD, again reflecting that the defendant could not appreciate the nature and quality of their act. Recall the earlier example of Mr. Gage killing a patron during a bar brawl: which of the two automatism defences could be argued most effectively? Would Mr. Gage’s brain injury justify the use of insane automatism because it would be considered as resulting from an internal cause? Or would Mr. Gage be acquitted under a sane automatism defence because his brain injury had an external causal factor [a rod] and was exacerbated by alcohol consumption? The predicament raised by the automatism defences, and the fine line between what is considered sane or insane by the courts, reveals a disconnect between fictional legal dichotomies and the neuroscientific reality that our brain controls our behaviour in all situations. This renders the concept of voluntariness moot in the eyes of science.

(mis)Communication of Science in the Courtroom

In recent years, there has been a sharp increase in the use of expert witnesses in court, exemplifying a shift in the legal paradigm toward “hard facts” stemming from science and its rigorous methodology. Among these experts are psychologists, psychiatrists, neuroscientists, and other specialists whose mandate is to help the triers-of-fact make more scientifically informed judgements. When neuroscience meets the legal discipline, the result is coined neurolaw.

In 1999, the Supreme Court of Canada was confronted with the challenging case of R v. Stone, in which Mr. Stone appealed his conviction for manslaughter in the killing of his wife, whom he had stabbed 47 times. The trial judge had instructed the jury to consider insane automatism as a defence, but this failed, and Mr. Stone received a seven-year prison sentence for manslaughter. The appeal, asking for Mr. Stone to be found NCRMD, was dismissed by the majority of the Supreme Court justices. However, three of the justices dissented, stating that the assessment of the appellant’s mental status at the time of the crime was not fully presented to the jurors. Specifically, the forensic psychiatrist brought in as an expert witness testified that the appellant was in a dissociative state, considered an unconscious state, when killing his wife. Furthermore, this was attributed not to a mental disorder, but rather to a reaction to severe stress allegedly inflicted on him by his wife. The trial judge, and the concurring appeal judges, stated that although the accused had “periods” of unconsciousness during the killing, the expert witness did not assess the lack of voluntariness required for the defence of automatism. Interestingly, the jurors found that the appellant did not commit the crime voluntarily, thus resulting in the guilty verdict of manslaughter. The problem here is that the trial judge had only informed the jury of insane automatism, which requires the presence of a mental disorder (which the appellant did not have). Had the judge informed the jury about sane automatism, the verdict may have been a full acquittal.

A dissociative state can cause impaired consciousness || (Source: Flickr // Vlad Gilcescu)

The lack of proper instruction to the jurors in R v. Stone brings forward a very important issue arising from the increased use of science in court: the need for comprehensive scientific literacy on the part of jurists. In this case, the SCC dismissed the appeal partly on the grounds that the steps required for the insane automatism defence were not satisfied, stating: “As I have explained above, automatism is more properly defined as impaired consciousness, rather than unconsciousness. Furthermore, lack of voluntariness, rather than consciousness, is the key legal element of automatism. Accordingly, the trial judge should have concerned himself with assessing whether there was evidence that the appellant experienced a state of impaired consciousness in which he had no voluntary control over his actions rather than whether there was evidence that the appellant was unconscious throughout the commission of the crime.”

Although eloquent in its rhetoric, this reasoning does not reflect the state of the scientific evidence. The judge rests the dismissal of the appeal on a false dichotomy between unconsciousness and impaired consciousness, positing that the former does not fall within the latter. If the question of whether unconsciousness falls within the realm of impairments of consciousness were put to a neuroscientist, several questions would follow about what is meant by impairment and in what context. In the context of R v. Stone, the unequivocal answer would be a strong affirmative: indeed, Mr. Stone’s consciousness was impaired. The next question for the neuroscientist would be: “If the appellant suffered an impairment of consciousness resulting from a dissociative state, could his actions be considered of his own volition?” The logical answer would be in the negative. The issue appears to boil down to one of mismatched communication: jurors, attorneys, and judges would benefit from increased scientific literacy to clarify the issues put before them and to assess the true worth of expert testimony, while neuroscientists (and experts in general) could use a crash course in legal standards and a disambiguation of legal jargon.

Loïc Welch is an Online Editor of the McGill Journal of Law and Health and a first-year B.C.L./LL.B. student at McGill University’s Faculty of Law. Loïc holds a M.Sc. in Forensic Psychology from Maastricht University (Netherlands), was a research assistant at the Douglas Mental University Institute in Montreal, and interned at the Professional Clinical and Forensic Services, a part of the Institute of Violence, Abuse, and Trauma in San Diego, California.

 

Recap of Speaker Series 2017: Ethical and Legal Ramifications of Stem Cell Research

Posted by Handi Xu

Our first Speaker Series event of the 2017-2018 academic year consisted of a discussion on the ethical and legal ramifications of stem cell research. This event presented diverse perspectives on research involving the development, use, and destruction of human embryos, as well as its many potential benefits and its complexities and regulations.

Dr. Michel L. Tremblay, a leading researcher from McGill University’s Biochemistry Department, discussed the evolution of stem cell use and its current clinical applications. Notably, stem cells are capable of reproducing themselves and are also able to differentiate into other cell types. Since stem cells are difficult to isolate in humans, experiments involving embryonic stem cells are usually performed using animals. These experiments aim to create stem cell mutations in order to understand normal gene function as well as its association with various human diseases such as cancer and obesity.

Dr. Tremblay spoke about the current clinical applications of stem cells

In 2006, Dr. Shinya Yamanaka, a Japanese stem cell researcher, discovered through the fusion of stem cells and tumour cells that some genes responsible for stem cell properties were dominant over other genes expressed in non-stem cells; the fusion of these stem cells and cancer cells therefore led the majority of the fused cells to be stem cell-like. He then discovered that only four dominant genes in stem cells were necessary to transform a normal cell into a stem cell (induced pluripotent stem cells, or iPS cells). He shared the 2012 Nobel Prize in Physiology or Medicine with Sir John B. Gurdon for showing that mature cells can be reprogrammed into pluripotent stem cells. This line of work proved that it was possible to use cells other than those from the embryo to generate stem cells, hence removing one of the major ethical issues of using human embryos to obtain stem cells. Nowadays, novel genetic engineering technologies, such as CRISPR-Cas9, allow specific manipulations of the genome in any human stem cell and in other cell types.

Dr. William Stanford, an influential stem cell researcher from the Ottawa Research Institute, detailed the history of stem cell discoveries. He further discussed the use of stem cells in clinical trials to treat a great number of diseases such as diabetes, blindness, and heart disease. They are also starting to be used in the development and assessment of new therapeutic drugs. However, the remarkable potential of stem cells to improve all spheres of biomedical research and treatment has spawned great competition due to the lucrative potential of these technologies. Since the cost and ethical regulations of stem cell research and therapies differ among countries, issues such as stem cell therapeutic “tourism”, fake treatments, and unethical research programs in non-clinically certified centres have resulted in harm to patients in many countries lacking regulation. There is a continuous need to maintain a legal framework for these applications, as well as a constant effort to inform the public on the advances and limitations of stem cell activities.

Dr. Stanford spoke about ethical complications with stem cell therapies in countries lacking proper regulation

Finally, Me. William Brock, a partner at Davies and a leukemia survivor who underwent a bone marrow stem cell transplant, expressed his opinion on stem cell research from a patient’s perspective. Not only did his treatment allow him to realize how fragile and important life is, but it also led him to acknowledge the power of science.

Me. Brock spoke about his personal experience receiving stem cell therapy

Indeed, scientific progress has brought the mortality rate of leukemia, nearly 100% fifty years ago, down to 10% for children and 50% for adults today. Me. Brock also explained that ethics is defined differently by everyone; while one person might find stem cell research unethical, another person’s life or death could depend on stem cells. He believes that society cannot decide for a patient whether they should be allowed to receive a stem cell treatment or not.

When Science meets Alternative Sentencing: Young Adult Courts (Part 1)

Posted by Souhila Baba

On the theme of psychology, this two-part blog series will showcase recent developments in alternative sentencing, first in the United States and second in Canada, portraying how findings in science contribute to innovation in the legal field.

“Don’t Treat Young Adults as Teenagers.” “Why Reimagining Prison for Young Adults Matters.” “How Germany Treats Young Criminals.” “Criminals under 25 should not go to adult prison, MPs say.” These are but a few examples of the headlines urging change with regard to young adults in criminal justice around the world. While the law sets a threshold differentiating adolescents from adults (18 years of age), science shows that the young adult (18-24 years of age) brain is still developing.

Young Adults in the Criminal Justice System

In Canada, once a person reaches the age of 18, they are no longer treated by the justice system as a juvenile offender, but as an adult. This results in an overrepresentation of young adults in the prison system. Following a section 7 constitutional challenge in 2008, the Supreme Court of Canada found that juvenile defendants (under 18 years of age) not only have a presumption of reduced moral culpability, they also cannot be sentenced as adults unless the Crown proves beyond a reasonable doubt that doing so is appropriate. Since young adults are tried and sentenced as adults, they do not benefit from a similar presumption, and cannot be tried as juveniles. The Supreme Court of Canada is silent on this issue, and there are currently no alternative programs or sentences specifically catered to this age group.

Similarly, in the United States, young adults, roughly 18 to 29 years of age, are also consistently overrepresented in prisons, making up 21% of inmates while representing only 10% of the population. Neuroscientific evidence has long held that the brain is continuously in development, from conception to death. While there are critical periods of significant brain development during infancy, childhood, and adolescence, the brain continues to undergo changes even into adulthood and beyond. In law, when a person reaches the threshold of 18 years of age they are characterized as an adult, without consideration of the developmental gradients that scientists are aware of.

The Science of Brain Development

Neuroscientists distinguish between an 18-year-old and a 26-year-old, as the development of certain brain regions is still in progress. Already in 1999, an experiment performed using Magnetic Resonance Imaging (MRI) technology to map differences in brain anatomy between adolescents (ages 12-16) and young adults (ages 23-30) found that the maturation at this stage was localized in the frontal lobe. The frontal lobe is responsible for decision making, assessment of risks and consequences, and impulse control. The study found that the maturation seen in the older age group was due to an increase in information transmission speed between brain cells, leading to increased cognitive function.

frontal The frontal cortex continues to develop in the young adult brain || (Source: Flickr // Laura Dahl)

But what does this mean? Simply put, with regards to impulsivity or assessment of risks, the young adult brain is not at the same cognitive level as the adult brain: an adolescent, or even a young adult, does not have the same appreciation of risks as older adults do. In this transition phase, young adults are prone to irresponsible and at times reckless behaviour. In failing to account for neurological and behavioural differences between young adults and adults, the criminal justice system sets standards that may be inadequate to account for the mens rea (i.e. moral blameworthiness) needed for a criminal conviction.

While these findings provide for a general understanding of this age group on average, they do not suggest that any given individual with specific anatomical characteristics has failed to appreciate the consequences of their actions. Sentencing is an individual-driven process, and as such, scientific findings about a particular age group only inform the possibility of reduced moral blameworthiness, but do not impose it. In any case, a lack of understanding of developmental nuances seems to correlate with an overrepresentation of young adults in penitentiaries.

Young Adult Courts

In several US states, specifically Idaho, Illinois, and most recently California, there has been an increase in sentencing diversion programs catered to young adults. In basing their programs on the neuroscientific evidence that young adult brains are still developing, young adult courts were created with preventive and rehabilitative goals in mind. Defendants charged with violent crimes are not admissible to the program, as the court mainly deals with felonies such as robbery and assault. All young adults (ages 18-24) admitted to the program go through mandatory classes on controlling emotions and impulses and on anger management, and receive therapy. Moreover, they meet weekly with the same judge, who assesses their standing in the program: if their performance is adequate, they may continue in the program and eventually “graduate”; if it is not, they are sent to jail. Graduating from the program leads to reduced charges or full exoneration.

Young adult courts in the United States provide creative alternatives to traditional sentencing for young adults || (Source: Flickr // Priya Deonarain)

As these courts are still in their early stages, it is difficult to assess their effectiveness in reducing the overrepresentation of young adults in prison and in preventing recidivism. One thing is certain, however, in establishing alternatives to prison terms: this court is using an approach that is proactive rather than relying on the currently reactive system.

A Balancing Act

It is difficult to balance the autonomy of young adults against the need to protect a particularly vulnerable age group (see MJLH’s Medical Records Episode 1 with Prof. Shauna Van Praagh). In assessing the mental element of an offence (the mens rea), how much sway should the age of the defendant hold? The legal doctrine on the matter is divided, while the scientific evidence will inevitably be nuanced by social and environmental factors. For example, under certain conditions, young adults may exhibit higher-level reasoning than adolescents, performing at a level comparable to adults. A recent study investigating cognitive control found that young adults’ cognitive performance depended on their emotional state. Specifically, when shown images of people experiencing negative emotions, young adults reacted as impulsively as adolescents; when shown images of positive or neutral emotions, they reacted similarly to adults over 21 years of age. In the transitional phase of young adulthood, then, behaviour may be dictated by emotional state, with negative emotional states producing adolescent-like behaviour. This may be relevant to the criminal justice context, as criminal acts may correlate with negative emotional states.

In basing their programs on the neuroscientific evidence that young adult brains are still developing, young adult courts were created with preventive and rehabilitative goals in mind.

Moreover, neuroscience has shown us that there are no clear lines to be drawn between adolescents, young adults, and adults – a nuance reflected in the legal approach of tailoring certain policies to maturity level in different circumstances. As highlighted in this article, the maturity level of an adolescent depends on the capacity being assessed. This survey showed that although adolescents have the cognitive ability to make an informed decision pertaining to abortion, it does not necessarily follow that they should be treated as adults with regard to criminal consequences, because different cognitive abilities are engaged in each situation. In deciding on abortion, young people are assessed on their ability to reflect on moral and social implications; in this respect, young people are just as competent as adults. When determining moral culpability in a criminal matter, by contrast, the relevant cognitive functions are psychosocial abilities such as impulse control and resistance to peer pressure: the very skills that are still developing in young adults. This context-specific understanding of young adult decision making is in line with the law’s reluctance to impose a higher standard in determining criminal culpability when dealing with young defendants, while still respecting the autonomy of young people to make, and be responsible for, their own decisions.

These findings show that there is still a need for research in this area, particularly as the young adult age group has historically been studied as part of the adult group. In this light, advocating for young adults to be treated as juvenile defendants may be an overstatement of the available scientific evidence. Instead, the establishment of young adult courts provides a creative alternative in the wake of evidence pointing to the lowered moral culpability of young adults. The ongoing legal experiment in the US may provide future insights for the Canadian context.

Souhila Baba is a Senior Online Editor with the McGill Journal of Law and Health with a keen interest in mental health, access to health services, and access to justice. She holds a BSc in Psychology from Concordia University. Since she joined the Faculty of Law at McGill University in 2016, she has been able to expand her interests in policy, technology, science, and the law, and the important contributions that women make to these fields and their intersections.

Event: Ethical and Legal Ramifications of Stem Cell Research

We have provided a recording of the event below:


For our first Speaker Series event of the 2017-2018 academic year we are excited to present a stimulating discussion on the legal and ethical ramifications of stem cell research. This event will present diverse perspectives on research involving the development, use and destruction of human embryos, as well as its many potential benefits and its complexities and regulations.

Speakers include:

Dr. William Stanford, Ottawa Research Institute:
Dr. Stanford is a leading stem cell researcher focusing on understanding and manipulating human embryonic stem cells for development of novel therapeutics for many human diseases including cancer.

Me. William Brock:
A partner at Davies who is also a leukemia survivor that underwent stem cell treatment. Me. Brock will give us his legal and personal perspective on the technology.

Dr. Michel L. Tremblay:
Dr. Tremblay is a leading researcher in the Biochemistry Department at McGill University. Dr. Tremblay will discuss techniques for modifying stem cells in the lab using CRISPR technology, as well as the current use of stem cells in the clinic. Dr. Tremblay will also discuss pertinent ethical issues such as who owns stem cells, stem cell “tourism”, and the future of stem cells, including drug development, the use of stem cells for tissue/organ replacement, and stem cells versus robotics and cyborgs.

The event will take place at the Thompson House Restaurant (3650 McTavish, Montreal) on November 1st 2017 from 4:45 PM until 6:00 PM. No tickets required. Please arrive 10 minutes in advance to secure a seat.

Artificial Intelligence in Health Care: Are the Legal Algorithms Ready for the Future?

Posted By Dr. Anastasia Greenberg

Big data has become a buzzword, a catch-all term representing a digital reality that is already changing the world and replacing oil as the newest lucrative commodity. But it is often overlooked that big data alone is useless – much like unrefined oil. If big data is the question, artificial intelligence (AI) is the answer. The marriage between big data and AI has been called the fourth industrial revolution. AI has been defined as “the development of computers to engage in human-like thought processes such as learning, reasoning and self-correction”. Machine learning is a field of AI concerned with pattern extraction from complex, multidimensional [big] data. Although when we think of AI we tend to think of robots, machine learning algorithms – software, not hardware – are the game changer, as they can sort through massive amounts of information that would take a human centuries to process.

Machine learning software can harness the useful information hidden in big data for health care applications || (Source: Flickr // Merrill College of Journalism Press Releases)

While machine learning is already used in fields like finance and economics, the health care industry is the next frontier. In both Canada and the United States, about three-quarters of physicians have now switched to Electronic Health Records, where medical records are digitized as big data. The ability to use such data for medical purposes comes with a host of legal and policy implications. The Canadian Federal Budget 2017 includes $125 million in funding for a Pan-Canadian AI strategy, part of which is meant to “develop global thought leadership on the economic, ethical, policy and legal implications of advances in artificial intelligence.”

How Does a Machine “Learn”?

Machine learning essentially trains a program to predict unknown and/or future values of variables in a similar way to how humans learn. Imagine that you are visiting a friend’s house and you see a furry creature run past you. Automatically, you identify this creature as a cat, not a small dog, even though both are furry animals, and despite never having come across this specific cat before. Your past learning experiences with other cats allowed you to make an inference and classify a novel creature into the “cat” category. This is exactly the kind of task that used to be impossible for computers to perform but has become a reality with machine learning. A computer program is first fed massive amounts of data – for instance, images of biopsies that are pre-classified into categories such as “tumour” and “no tumour” (this procedure is called “supervised” training). The computer then learns to recognize complex patterns within the data, enabling it to classify novel images as either tumour or no tumour.
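The supervised-training procedure described above can be sketched in a few lines of code. The example below is purely illustrative: it uses invented two-number “feature vectors” in place of real biopsy images, and a simple nearest-centroid rule in place of the deep networks used in practice, but the principle is the same: learn from pre-classified examples, then classify a novel case.

```python
# A minimal sketch of supervised classification. The feature vectors and
# labels are invented for illustration; a real system would learn from
# thousands of labelled biopsy images, not four toy samples.

def train_centroids(samples):
    """'Training': average the feature vectors of each labelled class."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign a novel sample to the class with the nearest centroid."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Pre-classified ("supervised") training data: [cell_density, irregularity]
training = [
    ([0.9, 0.8], "tumour"), ([0.8, 0.9], "tumour"),
    ([0.2, 0.1], "no tumour"), ([0.1, 0.3], "no tumour"),
]
centroids = train_centroids(training)
print(classify(centroids, [0.85, 0.75]))  # a novel, never-seen sample
```

Just as in the cat example, the program never saw this exact sample before; it classifies it by its learned resemblance to past examples.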

A machine can be trained to classify health care data for various purposes such as the detection of tumours in biopsy images. || (Source: Flickr // Markus Spiske)

Artificial Intelligence in Health care

Machine learning in the health care context holds a lot of promise for diagnosis, disease onset prediction, and prognosis. Since misdiagnoses are the leading cause of malpractice claims in both Canada and the United States, machine learning could greatly diminish health care and legal costs by improving diagnostic accuracy. With these promises in mind, the research literature on AI applications for health care has taken off in the last few years. Machine learning has been shown to successfully diagnose Parkinson’s Disease using Magnetic Resonance Imaging (MRI) data, predict ischemic stroke, predict lung cancer prognosis better than pathologists, and predict colorectal cancer patient prognosis better than colorectal surgeons. Perhaps most evocative, machine learning algorithms trained using Electronic Health Record data can predict suicide up to two years in advance with 80 percent accuracy.

In the private sector, IBM’s Watson, famous for its performance on Jeopardy, has been reinvented into Watson Health. Watson Health has “read” millions of medical journal articles and understands natural language to answer a host of medical questions. Although it is in its primitive stages, IBM hopes that Watson will soon be able to harness data on patients’ laboratory tests, symptoms, and genetic information. Together, this information processed with Watson’s medical knowledge could be used to effectively diagnose and suggest treatment options.

Legal and Ethical Implications of AI in Health care

In light of the ethical and legal implications, these advances should be both celebrated and received with caution. To understand the presenting issues, it is important to comprehend how these algorithms work. Machine learning is able to take in complex data made up of billions of data points; the more complex the input data that the machine is trained on, the more information it has available for making accurate predictions, but the higher the risk of overfitting. Overfitting occurs when algorithms become very good at modelling the current dataset but cannot successfully generalize to a new dataset – the dataset used to make decisions pertaining to new patients.
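Overfitting can be demonstrated with a toy example (all data and models here are invented for illustration). A model that simply memorizes its training set, copying the label of the single nearest training example, fits the training data perfectly, noise included, yet does worse on a fresh dataset than a simpler rule that captures the underlying pattern:

```python
import random

random.seed(0)

# True underlying rule: label is 1 when the feature exceeds 0.5.
# 10% of labels are randomly flipped to simulate noisy data.
def make_data(n):
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x > 0.5 else 0
        if random.random() < 0.1:  # label noise
            y = 1 - y
        data.append((x, y))
    return data

train, test = make_data(200), make_data(200)

# An overfit model: memorize the training set and copy the label of the
# nearest training example (1-nearest-neighbour).
def one_nn(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

# A simpler model that captures only the underlying pattern.
def threshold(x):
    return 1 if x > 0.5 else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print(accuracy(one_nn, train))    # near-perfect: it memorized the noise
print(accuracy(one_nn, test))     # noticeably lower on unseen data
print(accuracy(threshold, test))  # the simple rule generalizes better
```

The memorizing model scores almost perfectly on the data it was trained on but drops on the new dataset, because the noise it learned does not recur; this gap between training and test performance is the signature of overfitting.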

Perhaps most evocative, machine learning algorithms trained using Electronic Health Record data can predict suicide up to two years in advance with 80 percent accuracy.

A further complication is that because the machine is learning somewhat autonomously about the data patterns, the process is a black box. It is difficult to understand which features within the data are most important and what kinds of complex nonlinear relationships the machine is modeling to make its predictions. That’s why diligent machine learning researchers test their algorithms on new datasets to figure out how reliable and generalizable they are. In any case, a high level of expertise is required and some level of error is inevitable.

The Current Legal Algorithms

So, what happens when a patient is misdiagnosed using AI? How would the current legal structure deal with such a scenario? One possibility is that such a situation would fall under the default fault-based regime (or “algorithm”) under the Civil Code of Quebec (CCQ, art. 1457), analogous to the tort of negligence in common law Canada. This would imply that the physician using AI technology may be liable for its misdiagnosis. However, AI does not seem well suited for such a regime. As discussed above, the algorithm is going to be more accurate on average than a physician. Thus, once AI in health care becomes commonplace, it would be difficult to find the physician negligent, especially when the physician is not the one writing the code. Given that the accuracy would be an improvement over physician knowledge alone, the physician “expert standard” (where a physician’s actions are compared to her peers) may shift to require the use of such technology.

A physician may soon be able to use AI for diagnosing patients and suggesting treatment options || (Source: Flickr // Hamza Butt)

Another option would be to fit AI within the realm of product liability, which, under the CCQ (art. 1468, 1469), is a form of strict liability, meaning that the evidentiary burden would be placed on the defendant, in this case the manufacturer of the AI software, rather than on the patient. This means that the patient has a “leg up” in such a litigation scenario. In line with this idea, in 2017 the Committee on Legal Affairs of the European Union submitted a motion to the European Parliament calling for law reform on the issue of robotics (included under the AI umbrella), to implement a strict liability regime. Under the CCQ, the rule governing the autonomous act of a thing (art. 1465), which is close to strict liability (a presumption), is another possibility, since AI learns semi (or even fully) autonomously. This would again implicate the physician, as the guardian of the AI.

The issue with a strict liability regime for AI is that unlike a traditional defective “product”, a human (the physician) and AI are going to work together in the decision making process, making it difficult to point the finger at the manufacturer, software developer or the physician alone. For example, a recent study showed that when presented with images of lymph node biopsies for detecting metastatic cancer, AI was better at finding those hard-to-detect cases while the human physician was better at rejecting false positives (correctly rejecting a cancer diagnosis). When the human and AI skills were combined, correct metastatic cancer detection was at 99.5% accuracy. Another issue comes from the classic economic argument that strict liability can hamper innovation.

AI does not easily fit into any existing private law regime for compensating patients, who will inevitably suffer harm from the small but unavoidable number of errors. Another possibility would be to remove AI from the private law system and create compensation schemes. Vaccines offer an analogy: like AI, they are the utilitarian choice due to their low complication rates, and they likewise do not fit easily into negligence-based regimes. For this reason, the United States has a National Vaccine Injury Compensation Program, and Quebec has a similar provincial program.

AI, along with personalized big data, is projected to be the next disruptive technology that will transform medicine by allowing physicians to see into the future. With great computing power comes great responsibility to develop the right legal and regulatory “scripts” for the implementation of AI in health care.

Anastasia Greenberg is a second-year student in the B.C.L/LL.B. program at McGill University’s Faculty of Law and is the Executive Online Editor of the McGill Journal of Law and Health. Anastasia holds a PhD in Neuroscience from the University of Alberta.