When Science meets Alternative Sentencing: Young Adult Courts (Part 1)

Contributed by Souhila Baba

On the theme of psychology, this two-part blog series will showcase recent developments in alternative sentencing, first in the United States and then in Canada, portraying how findings in science contribute to innovation in the legal field.

“Don’t Treat Young Adults as Teenagers.” “Why Reimagining Prison for Young Adults Matters.” “How Germany Treats Young Criminals.” “Criminals under 25 should not go to adult prison, MPs say.” These are but a few examples of the headlines urging change with regard to young adults in criminal justice around the world. While the law sets a threshold at 18 years of age to differentiate adolescents from adults, science shows that the young adult (18-24 years of age) brain is still developing.

Young Adults in the Criminal Justice System

In Canada, once a person reaches the age of 18, they are no longer treated by the justice system as a juvenile offender, but as an adult. This results in an overrepresentation of young adults in the prison system. Following a section 7 constitutional challenge in 2008, the Supreme Court of Canada found that juvenile defendants (under 18 years of age) not only benefit from a presumption of reduced moral culpability, but also cannot be sentenced as adults unless the Crown proves beyond a reasonable doubt that an adult sentence is justified. Since young adults are tried and sentenced as adults, they do not benefit from a similar presumption and cannot be tried as juveniles. The Supreme Court of Canada is silent on this issue, and there are currently no alternative programs or sentences specifically catered to this age group.

Similarly, in the United States, young adults, roughly 18 to 29 years of age, are consistently overrepresented in prisons: they make up 21% of inmates while representing only 10% of the population. Neuroscientific evidence has long held that the brain is continuously developing, from conception to death. While there are critical periods of significant brain development during infancy, childhood, and adolescence, the brain continues to undergo changes into adulthood and beyond. In law, however, a person is characterized as an adult the moment they reach the threshold of 18 years of age, without consideration of the developmental gradients that scientists have documented.

The Science of Brain Development

Neuroscientists distinguish between an 18-year-old and a 26-year-old, as the development of certain brain regions is still in progress. As early as 1999, a study using Magnetic Resonance Imaging (MRI) to map differences in brain anatomy between adolescents (ages 12-16) and young adults (ages 23-30) found that maturation at this stage was localized in the frontal lobe. The frontal lobe is responsible for decision making, assessment of risks and consequences, and impulse control. The study found that the maturation seen in the older age group was due to an increase in the speed of information transmission between brain cells, leading to increased cognitive function.

The frontal cortex continues to develop in the young adult brain || (Source: Flickr // Laura Dahl)

But what does this mean? Simply put, with regard to impulsivity and the assessment of risks, the young adult brain is not at the same cognitive level as the adult brain: an adolescent, or even a young adult, does not have the same appreciation of risks as older adults do. In this transition phase, young adults are prone to irresponsible and at times reckless behaviour. By failing to account for these neurological and behavioural differences between young adults and adults, the criminal justice system sets standards that may be inadequate to capture the mens rea (i.e. moral blameworthiness) needed for a criminal conviction.

While these findings provide for a general understanding of this age group on average, they do not suggest that any given individual with specific anatomical characteristics has failed to appreciate the consequences of their actions. Sentencing is an individual-driven process, and as such, scientific findings about a particular age group only inform the possibility of reduced moral blameworthiness, but do not impose it. In any case, a lack of understanding of developmental nuances seems to correlate with an overrepresentation of young adults in penitentiaries.

Young Adult Courts

In several US states, notably Idaho, Illinois and, most recently, California, sentencing diversion programs catered to young adults are on the rise. In basing their programs on the neuroscientific evidence that young adult brains are still developing, young adult courts were created with preventive and rehabilitative goals in mind. Defendants charged with violent crimes are not admitted to the program, as the court mainly deals with felonies such as robbery and assault. All young adults (ages 18-24) admitted to the program go through mandatory classes on controlling emotions and impulses and on anger management, and receive therapy. Moreover, they meet with the same judge on a weekly basis, where their standing in the program is assessed: if their performance is adequate, they may continue in the program and eventually “graduate”; if it is not, they are sent to jail. Graduating from the program leads to reduced charges or full exoneration.

Young adult courts in the United States provide creative alternatives to traditional sentencing for young adults || (Source: Flickr // Priya Deonarain)

As these courts are still in their early stages, it is difficult to assess their effectiveness in reducing the overrepresentation of young adults in prison and in preventing recidivism. One thing is certain, however: in establishing alternatives to prison terms, these courts take a proactive approach rather than relying on the current, reactive system.

A Balancing Act

It is difficult to balance the autonomy of young adults against the need to protect a particularly vulnerable age group (see MJLH’s Medical Records Episode 1 with Prof. Shauna Van Praagh). In assessing the mental element of an offence (the mens rea), how much sway should the age of the defendant hold? The legal doctrine on the matter is divided, and the scientific evidence will inevitably be nuanced by social and environmental factors. For example, under certain conditions, young adults may exhibit higher-level reasoning than adolescents, performing at a level comparable to adults. A recent study investigating cognitive control found that young adults’ cognitive performance depended on their emotional state. Specifically, when shown images of people experiencing negative emotions, young adults reacted as impulsively as adolescents; when shown images of positive or neutral emotions, they reacted similarly to adults over 21 years of age. The authors concluded that in the transitional phase of young adulthood, behaviour may be dictated by emotional state, with a negative emotional state producing adolescent-like behaviour. This is relevant to the criminal justice context, as criminal acts may correlate with negative emotional states.

In basing their programs on the neuroscientific evidence that young adult brains are still developing, young adult courts were created with preventive and rehabilitative goals in mind.

Moreover, neuroscience has shown us that there are no clear lines to be drawn between adolescents, young adults, and adults – which effectively reflects the legal approach of calibrating certain policies to maturity level in different circumstances. As highlighted in this article, the maturity level of an adolescent depends on the capacities being assessed. This survey showed that although adolescents have the cognitive ability to make an informed decision pertaining to abortion, it does not necessarily follow that they should be treated as adults with regard to criminal consequences, because the cognitive abilities assessed in each situation differ. In deciding on abortion, young people are assessed on their ability to reflect on moral and social implications; in this respect, they are just as competent as adults. When determining moral culpability in a criminal matter, by contrast, the relevant cognitive functions are the young person’s psychosocial abilities, such as impulse control and resistance to peer pressure: the very skills that are still developing in young adults. This context-specific understanding of young adult decision making should be in line with the law’s reluctance to impose a higher standard in determining criminal culpability when dealing with young defendants, while still respecting the autonomy of young people to make decisions and be responsible for their actions.

These findings show that there is still a need for research in this area, particularly as the young adult age group has historically been studied as part of the adult group. In this light, advocating for young adults to be treated as juvenile defendants may be an overstatement of the available scientific evidence. Instead, the establishment of young adult courts provides a creative alternative in the wake of evidence pointing to the lowered moral culpability of young adults. The ongoing legal experiment in the US may provide future insights for the Canadian context.

Souhila Baba is a Senior Online Editor with the McGill Journal of Law and Health with a keen interest in mental health, access to health services, and access to justice. She holds a BSc in Psychology from Concordia University. Since she joined the Faculty of Law at McGill University in 2016, she has been able to expand her interests in policy, technology, science, and the law, and the important contributions that women make to these fields and their intersections.

Event: Ethical and Legal Ramifications of Stem Cell Research

We have provided a recording of the event below:

For our first Speaker Series event of the 2017-2018 academic year, we are excited to present a stimulating discussion on the legal and ethical ramifications of stem cell research. This event will present diverse perspectives on research involving the development, use, and destruction of human embryos, as well as the many potential benefits, complexities, and regulations of such research.

Speakers include:

Dr. William Stanford, Ottawa Research Institute:
Dr. Stanford is a leading stem cell researcher focusing on understanding and manipulating human embryonic stem cells for development of novel therapeutics for many human diseases including cancer.

Me. William Brock:
A partner at Davies and a leukemia survivor who underwent stem cell treatment, Me. Brock will give us his legal and personal perspective on the technology.

Dr. Michel L. Tremblay:
Dr. Tremblay is a leading researcher in the Biochemistry Department at McGill University. Dr. Tremblay will discuss techniques for modifying stem cells using CRISPR technology in the lab, as well as the current use of stem cells in the clinic. He will also discuss pertinent ethical issues such as who owns stem cells, stem cell “tourism”, and the future of stem cells, including drug development, the use of stem cells for tissue and organ replacement, and stem cells versus robotics and cyborgs.

The event will take place at the Thompson House Restaurant (3650 McTavish, Montreal) on November 1st, 2017, from 4:45 PM until 6:00 PM. No tickets are required. Please arrive 10 minutes in advance to secure a seat.

Artificial Intelligence in Health Care: Are the Legal Algorithms Ready for the Future?

Contributed by Dr. Anastasia Greenberg

Big data has become a buzzword, a catch-all term for a digital reality that is already changing the world and replacing oil as the newest lucrative commodity. It is often overlooked, however, that big data alone is literally useless – much like unrefined oil. If big data is the question, artificial intelligence (AI) is the answer. The marriage between big data and AI has been called the fourth industrial revolution. AI has been defined as “the development of computers to engage in human-like thought processes such as learning, reasoning and self-correction”. Machine learning is a field of AI concerned with pattern extraction from complex, multidimensional [big] data. Although we tend to think of AI as robots, machine learning algorithms – software, not hardware – are the game changer, as they can sort through massive amounts of information that would take a human centuries to process.

Machine learning software can harness the useful information hidden in big data for health care applications || (Source: Flickr // Merrill College of Journalism Press Releases)

While machine learning is already used in fields like finance and economics, the health care industry is the next frontier. In both Canada and the United States, about three-quarters of physicians have now switched to Electronic Health Records, where medical records are digitized as big data. The ability to use such data for medical purposes comes with a host of legal and policy implications. The Canadian Federal Budget 2017 includes $125 million in funding for a Pan-Canadian AI strategy, part of which is meant to “develop global thought leadership on the economic, ethical, policy and legal implications of advances in artificial intelligence.”

How Does a Machine “Learn”?

Machine learning essentially trains a program to predict unknown and/or future values of variables in a similar way to how humans learn. Imagine that you are visiting a friend’s house and you see a furry creature run past you. Automatically, you identify this creature as a cat, not a small dog, even though both are furry animals, and despite never having come across this specific cat before. Your past learning experiences with other cats allowed you to make an inference and classify a novel creature into the “cat” category. This is exactly the kind of task that used to be impossible for computers to perform but has become a reality with machine learning. A computer program is first fed massive amounts of data – for instance, images of biopsies that are pre-classified into categories such as “tumour” and “no tumour” (this procedure is called “supervised” training). The computer then learns to recognize complex patterns within the data, enabling it to classify novel images as either tumour or no tumour.
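To make the idea of supervised training concrete, below is a minimal sketch in Python using the scikit-learn library: a classifier is fit to pre-labelled examples and then used to classify cases it has never seen. The data are synthetic stand-ins for pre-classified biopsy features, and the model and parameter choices are illustrative assumptions, not the methods of any study mentioned in this post.

```python
# Minimal sketch of "supervised" training: the program is given examples that
# are already labelled (1 = tumour, 0 = no tumour) and learns a rule for
# classifying new, unseen examples. Real biopsy-image pipelines use far richer
# inputs; the synthetic features here are purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each biopsy has been reduced to 20 numeric features
# (e.g., cell-size or texture statistics).
n_samples, n_features = 1000, 20
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

# Hold out cases the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# "Training" = fitting the model to the labelled examples.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# "Classifying a novel image" = predicting the label of an unseen example.
print("accuracy on unseen cases:", round(model.score(X_test, y_test), 3))
```

The division of labour is the key point: humans supply the labelled examples, and the algorithm extracts the patterns that allow it to generalize to new cases.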

A machine can be trained to classify health care data for various purposes, such as the detection of tumours in biopsy images || (Source: Flickr // Markus Spiske)

Artificial Intelligence in Health care

Machine learning in the health care context holds a lot of promise for diagnosis, disease onset prediction, and prognosis. Since misdiagnoses are the leading cause of malpractice claims in both Canada and the United States, machine learning could greatly diminish health care and legal costs by improving diagnostic accuracy. With these promises in mind, the research literature on AI applications for health care has taken off in the last few years. Machine learning has been shown to successfully diagnose Parkinson’s Disease using Magnetic Resonance Imaging (MRI) data, predict ischemic stroke, predict lung cancer prognosis better than pathologists, and predict colorectal cancer patient prognosis better than colorectal surgeons. Perhaps most evocative, machine learning algorithms trained using Electronic Health Record data can predict suicide up to two years in advance with 80 percent accuracy.

In the private sector, IBM’s Watson, famous for its performance on Jeopardy, has been reinvented as Watson Health. Watson Health has “read” millions of medical journal articles and understands natural language, allowing it to answer a host of medical questions. Although it is still in its early stages, IBM hopes that Watson will soon be able to harness data on patients’ laboratory tests, symptoms, and genetic information. Processed together with Watson’s medical knowledge, this information could be used to effectively diagnose and suggest treatment options.

Legal and Ethical Implications of AI in Health care

In light of the ethical and legal implications, these advances should be both celebrated and received with caution. To understand the presenting issues, it is important to comprehend how these algorithms work. Machine learning is able to take in complex data made up of billions of data points; the more complex the input data that the machine is trained on, the more information it has available for making accurate predictions, but the higher the risk of overfitting. Overfitting occurs when algorithms become very good at modelling the current dataset but cannot successfully generalize to a new dataset – the dataset used to make decisions pertaining to new patients.

Perhaps most evocative, machine learning algorithms trained using Electronic Health Record data can predict suicide up to two years in advance with 80 percent accuracy.

A further complication is that because the machine is learning somewhat autonomously about the data patterns, the process is a black box. It is difficult to understand which features within the data are most important and what kinds of complex nonlinear relationships the machine is modeling to make its predictions. That’s why diligent machine learning researchers test their algorithms on new datasets to figure out how reliable and generalizable they are. In any case, a high level of expertise is required and some level of error is inevitable.
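A minimal sketch of why that testing matters, assuming a small, noisy synthetic dataset: a very flexible model can fit its training data almost perfectly yet perform far worse on held-out cases, which is the overfitting problem described above. The model and numbers below are illustrative assumptions only.

```python
# Overfitting in miniature: an unconstrained decision tree memorizes a small,
# noisy training set, so its training accuracy looks excellent while its
# accuracy on held-out data (a stand-in for "new patients") is much lower.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Small dataset with one weak signal buried in many uninformative features --
# a regime where overfitting is easy to provoke.
X = rng.normal(size=(200, 50))
y = (X[:, 0] + rng.normal(scale=2.0, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1)

model = DecisionTreeClassifier(random_state=1)  # no depth limit: free to memorize
model.fit(X_train, y_train)

print("training accuracy:", round(model.score(X_train, y_train), 3))  # typically near 1.0
print("held-out accuracy:", round(model.score(X_test, y_test), 3))    # substantially lower
```

The gap between the two printed accuracies is the warning sign: impressive performance on the data a model was trained on does not guarantee that its predictions will carry over to new patients.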

The Current Legal Algorithms

So, what happens when a patient is misdiagnosed using AI? How would the current legal structure deal with such a scenario? One possibility is that such a situation would fall under the default fault-based regime (or “algorithm”) under the Civil Code of Quebec (CCQ, art. 1457), analogous to the tort of negligence in common law Canada. This would imply that the physician using AI technology may be liable for its misdiagnosis. However, AI does not seem well suited for such a regime. As discussed above, the algorithm is going to be more accurate on average than a physician. Thus, once AI in health care becomes commonplace, it would be difficult to find the physician negligent, especially when the physician is not the one writing the code. Given that the accuracy would be an improvement over physician knowledge alone, the physician “expert standard” (where a physician’s actions are compared to her peers) may shift to require the use of such technology.

A physician may soon be able to use AI for diagnosing patients and suggesting treatment options || (Source: Flickr // Hamza Butt)

Another option would be to fit AI within the realm of product liability, which, under the CCQ (arts. 1468, 1469), is a form of strict liability, meaning that the evidentiary burden would be placed on the defendant – in this case the manufacturer of the AI software – rather than the patient. This gives the patient a “leg up” in such a litigation scenario. In line with this idea, in 2017 the Committee on Legal Affairs of the European Union submitted a motion to the European Parliament calling for law reform on the issue of robotics (included under the AI umbrella) to implement a strict liability regime. Under the CCQ, the rule governing the autonomous act of a thing (art. 1465), which comes close to strict liability (a presumption of fault), is another possibility, since AI learns semi-autonomously or even fully autonomously. This regime would again implicate the physician, as the guardian of the AI.

The issue with a strict liability regime for AI is that unlike a traditional defective “product”, a human (the physician) and AI are going to work together in the decision making process, making it difficult to point the finger at the manufacturer, software developer or the physician alone. For example, a recent study showed that when presented with images of lymph node biopsies for detecting metastatic cancer, AI was better at finding those hard-to-detect cases while the human physician was better at rejecting false positives (correctly rejecting a cancer diagnosis). When the human and AI skills were combined, correct metastatic cancer detection was at 99.5% accuracy. Another issue comes from the classic economic argument that strict liability can hamper innovation.

AI does not easily fit into any existing private law regime for compensating patients, who will inevitably suffer harm from the small but unavoidable number of errors. Another possibility would be to remove AI from the private law system and create a dedicated compensation scheme. Vaccines offer an analogy: like AI, they are the utilitarian choice because of their low complication rates, and they likewise fit poorly into negligence-based regimes. For this reason, the United States has a National Vaccine Injury Compensation Program and Quebec has a similar provincial program.

AI, along with personalized big data, is projected to be the next disruptive technology that will transform medicine by allowing physicians to see into the future. With great computing power comes great responsibility to develop the right legal and regulatory “scripts” for the implementation of AI in health care.

Anastasia Greenberg is a second-year student in the B.C.L./LL.B. program at McGill University’s Faculty of Law and is the Executive Online Editor of the McGill Journal of Law and Health. Anastasia holds a PhD in Neuroscience from the University of Alberta.

Recruitment 2017-2018

We have officially opened recruitment for all boards of the MJLH. We invite students from all years of the McGill Faculty of Law to send their applications to any of our four executive boards: Editorial, Online, Management, and Strategic Planning & Solicitations.

Please note that the Editorial Board is bilingual, and editors work with articles in both English and French. You therefore have the option of applying to the English board as well as the French board.

Applications for all Boards will remain open until Monday, September 11th at 11:59PM (exceptionally for the Editorial Board the deadline is September 15th at 11:59 PM). Executives for each board will hold interviews thereafter. Application instructions can be found below.



Editorial Board:

Please download and complete the following Application Form and Assignment Instructions, along with either an English or French article that you will use to complete the assignment. Please send your completed Application Form, assignment, CV, and cover letter to: editor.mjlh[AT]mail.mcgill.ca (English) or redacteur.rdsm[AT]mail.mcgill.ca (Français). Exceptionally, applications for the Editorial Board will remain open until Friday, September 15th at 11:59 p.m.

Online Board: 

Please send your CV and a cover letter to: web.mjlh[AT]mail.mcgill.ca.

Note: Please keep in mind that you do not need to be on the Online Board to submit online articles. We solicit content for the Online blog from all law students and professors in the legal community at McGill and beyond.

Management Board:

Please download and complete the following Application Form and send the completed form along with your CV and a cover letter to manager.mjlh[AT]mail.mcgill.ca.

SPAS Board:

Please download and complete the following Application Form and send the completed form along with your CV and a cover letter to info.mjlh[AT]mcgill.ca.

Applications for all Boards (except Editorial) will remain open until Monday, September 11th at 11:59PM.

You may submit your application in French.

Estate Planning: Ten Practical Steps to Improve Written Advance Directives in Powers of Attorney for Healthcare

Posted By Dr. Laura Hawryluck


Advance care planning was originally devised as a process to ensure that a person’s wishes and values are respected in some of the most important decisions of his or her life when that voice is silenced by incapacity. Such wishes could be expressed, verbally or in writing, to those who would subsequently ensure that these wishes and values were factored into any decisions regarding healthcare and treatment choices. The importance of having one’s voice heard quickly translated into the creation of powers of attorney for personal or health care: legal documents aimed at expressing one’s directives or appointing someone to make substitute decisions. These documents are now almost always generated at the same time as other, more traditional aspects of estate planning.

The underlying idea holds great promise despite the often flawed execution, which stems from a range of common assumptions made by the parties to a power of attorney document. While people often wish to avoid life-sustaining treatments at the end of life, they rarely understand the realities of the life-sustaining treatments and resuscitation their directives most commonly speak to, the context in which such treatments may be required, or their implications for quality of life. Nor do they seek to deepen their knowledge by systematically consulting their physicians or healthcare teams prior to completing these legal documents, even if their lawyer encourages them to do so.

(Source: Fairview)

Further, most people appointed as substitute decision-makers (SDMs), or falling under the statutorily defined hierarchy of substitute decision-makers, have never discussed such personal wishes and values with those they are designated to represent. As a result, many end up substituting their own values for those of the now incapable person, and often consent to or request treatments that are more aggressive than the person actually wanted to undergo, especially as the end of life nears. Adding to these issues, decisions with respect to life support and resuscitation often need to be made quickly, which only adds to the tension and stress experienced by SDMs and healthcare teams.

These problems, if unaddressed, are only going to cause increasing challenges in medical and legal practice: as people live longer, as families become smaller, as friendships fade or evolve, and as more people choose to stay single and not have children, more people may find themselves alone and may therefore seek to express their wishes and values through written advance directives rather than solely appointing a substitute decision-maker. However, the text and content of advance directives are often ambiguous, do not speak to the situation at hand, and are frequently so standardized in legal practice that they prohibit the expression of unique personal wishes and instead promote conformity. While the actual wording and systematic inclusion of “no heroics” clauses in power of attorney documents have not changed for years, such clauses as currently written are not particularly helpful in clinical practice. For advance directives to speak clearly, truly respect autonomy, and provide meaningful guidance, the text of legal documents must improve – and the time for them to do so is now.

This series therefore sets out an intensivist’s perspective on the top ten ways estate lawyers can improve the content and scope of advance directives, the wording of any “standardized” no heroics clauses, and the process for ensuring these wishes and values are respected – and, as a result, improve the advice they provide to their clients and the quality of end-of-life care those clients ultimately receive.

Go to Step 1

The author wishes to encourage and engage in discussion regarding the directives she proposes here. Please take a moment to express your thoughts and critical commentary in the comments section below.

About the Author


LAURA HAWRYLUCK received her MD in 1992 from the University of Western Ontario where she also served her Internal Medicine residency. She completed a Fellowship in Critical Care at the University of Manitoba in 1997 and received her MSc in Bioethics in 1999 from the Joint Centre for Bioethics and the Institute of Medical Science at the University of Toronto. From 1999-2001 she was Assistant Professor of Critical Care/Internal Medicine, Queen’s University, Kingston, Ontario. In March 2000 she was appointed Physician Leader of the national Ian Anderson Continuing Education Program in End-of-Life Care at the University of Toronto and is currently Associate Professor of Critical Care Medicine at the University of Toronto. In 2002, she was awarded the Queen’s Golden Jubilee Medal for contributions to Canada in recognition of her work in creating the Anderson Program and improving end of life care for Canadians. Dr. Hawryluck is co-author and editor of “Law of Acute Care in Canada” to be published shortly by Carswell, a division of Thomson Reuters.

Dr. Hawryluck is deeply involved in international humanitarian projects. She has worked with critical care and burn units in Indore, India and Côte d’Ivoire on a variety of quality improvement and educational initiatives. She was co-creator and co-Director for RCCI of the first Doctorate in Medicine Program in Critical Care in Nepal. She worked with the Nepal Medical Council as an international consultant to enact a Code of Ethics and Professionalism for all physicians in Nepal.