Exploring the Potential Discriminatory Effects Arising from the Use of AI in Healthcare

Contributed by Annelise Harnanan

Introduction

The Canadian health care sector is increasingly making use of new technologies that encompass Artificial Intelligence (AI). AI is a broad term that refers to computer algorithms that can perform human-like activities such as reasoning and problem solving. In health care, technologies encompassing AI have the potential to improve many areas of patient care. However, authors in the field have observed that sometimes, these algorithms can have discriminatory effects. Therefore, care must be taken when using technology that incorporates AI. Many people have noted the presence of systemic racism in health care in Canada, and we must ensure that these biases are not perpetuated as a result of the algorithms we use. This blog post will discuss the use of AI in health care, explore the potential discriminatory effects that can occur when using AI, and briefly examine the role that the law might play in protecting individuals from these discriminatory effects.

What is AI?

AI-based computer programs analyse large amounts of data to perform certain tasks, such as making predictions. Many of these new technologies utilize machine learning, a subset of AI. Machine learning “refers to the ability of computers to improve their performance on a task without being explicitly programmed to do so”. Machine learning techniques are advantageous because they can process enormous amounts of input data and perform complex analyses. Furthermore, some of these technological advances encompass deep learning, which uses multiple levels of variables to process data and predict outcomes.

AI-based computer programs analyse large amounts of data to perform certain tasks, such as making predictions. || (Source: creativecommons // deepakiqlect)
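To make the idea of “learning without being explicitly programmed” a little more concrete, the sketch below shows, in a few lines of Python, a model inferring a decision rule from examples rather than having the rule written by hand. It uses entirely synthetic data as a stand-in for the kinds of clinical features and outcomes discussed above; it is not drawn from any real health care system or study.

```python
# A minimal, illustrative sketch of "machine learning": instead of hand-coding
# a rule, the program infers one from examples. Entirely synthetic data --
# a toy stand-in for the kinds of clinical predictions discussed above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2_000

# Two made-up "patient" features and a made-up outcome that depends on them.
features = rng.normal(size=(n, 2))
outcome = (features[:, 0] + 0.5 * features[:, 1]
           + rng.normal(scale=0.7, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, outcome, random_state=0)

# The model "learns" the relationship from the training examples...
model = LogisticRegression().fit(X_train, y_train)

# ...and is then evaluated on examples it has never seen.
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```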

How can AI be used in healthcare?

Many have heralded AI for its potential to help improve efficiency in the health care sector. Experts in the field believe that it can “improve the ability of health professionals to establish a prognosis” and “improve diagnostic accuracy”. AI can also be used in managing health complications, assisting with patient care during treatment, and in research “aimed at discovery or treatment of disease”. For example, at Toronto’s Hospital for Sick Children, researchers have developed a computer model to predict “whether a patient will go into cardiac arrest in the next five minutes”.

In addition, some researchers have proposed that AI can address racial disparities in health care. This suggestion may seem to contradict the premise of this article that AI can be discriminatory. However, it is important to note that AI is not inherently good or bad. Rather, it can have desirable or undesirable results depending on the data used to train the algorithm. In a recently published study, Pierson et al used a machine learning algorithm on a dataset of knee radiographs. They claim that because they used a racially and socioeconomically diverse dataset, their algorithm was able to “reduce unexplained [pain] disparities in underserved populations”. Their conclusion was that the algorithm’s predictions “could potentially redress disparities in access to treatments like arthroplasty”. Notably, the study had limitations. Because of the “black box” nature of the algorithm (a common critique of AI is that it can be unclear how exactly the algorithm arrives at its results), the authors admitted that they could not identify which features of the knee the algorithm used to predict pain. As a result, it was not evident which features radiographers were missing, and it is therefore unclear whether altering treatment plans for these individuals would confer any benefits on them. Nevertheless, this study is an example of the possible positive impact that algorithms can have on the provision of health care.

While it is exciting that AI has the ability to improve efficiency in the health care setting, and might even help to reduce disparities, many authors in the field warn of the need to proceed with caution. They cite concerns over the lack of algorithmic transparency (especially when governmental bodies use AI to make important, life-changing decisions), the impact on an individual’s privacy, and the potential for these algorithms to have an indirectly discriminatory effect. Given that discrimination within health care is systemic and often indirect, it is important to be attentive to the use of these new technologies and to ensure they do not perpetuate discrimination.

In a recently published study, Pierson et al used a machine learning algorithm on a dataset of knee radiographs. || (Source: creativecommons // Minnaert)

How can AI in healthcare have a discriminatory effect?

AI can be biased. Barton et al explain that an algorithm might make predictions that disparately impact a particular group of people, causing them to experience certain harms, “where there is no relevant difference between groups that justifies such harms”. This bias can show up in many different phases of a project entailing AI, but Barton et al contend that there are two especially problematic causes of bias: historical human biases and unrepresentative data.

They explain that historical human biases and “deeply embedded prejudices” can become amplified in computer algorithms. For example, an algorithm used by judges to determine whether offenders were likely to reoffend (impacting whether or not they should be released on bail) used multiple variables to determine an offender’s risk score. These variables included defendant demographics and arrest records. Barton et al explain that “African-Americans were more likely to be assigned a higher risk score” when “compared to whites who were equally likely to re-offend”. Given that they were comparing individuals who were equally likely to reoffend, why were members of one group more likely to be assigned a higher risk score by the algorithm? It had to do with the use of arrest records in the training data. The authors highlight the importance of the social context and the role of historical human biases: “if African-Americans are more likely to be arrested and incarcerated […] due to historical racism, disparities in policing practices or other inequalities within the criminal justice system, these realities will be reflected in the training data and used to make suggestions about whether a defendant should be detained”.
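To see the mechanism at work, consider the minimal, hypothetical sketch below (it is not the actual risk-scoring tool described above). In it, the underlying likelihood of reoffending is identical across two groups, but one group is arrested more often, and arrests are what the model is trained to predict:

```python
# A minimal, hypothetical sketch of how historical bias in training labels can
# be reproduced by a model. The "true" likelihood of reoffending is identical
# across groups, but one group is arrested more often, and arrests are what
# the model learns to predict. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 20_000

group = rng.integers(0, 2, size=n)       # 0 or 1; underlying risk is the same
true_risk = rng.uniform(size=n)           # underlying likelihood of reoffending

# Historical bias: group 1 is arrested at a higher rate for the same behaviour.
arrest_rate = np.where(group == 1, true_risk * 0.9, true_risk * 0.5)
arrested = (rng.uniform(size=n) < arrest_rate).astype(int)

# The model is trained to predict *arrests*, with the group as a feature
# (in practice the group would often enter indirectly, through proxies).
X = np.column_stack([true_risk, group])
model = LogisticRegression().fit(X, arrested)
scores = model.predict_proba(X)[:, 1]

for g in (0, 1):
    print(f"group {g}: mean true risk = {true_risk[group == g].mean():.2f}, "
          f"mean predicted 'risk score' = {scores[group == g].mean():.2f}")
# Both groups have the same mean true risk, but the model assigns higher
# scores to the group with more recorded arrests: the bias in the data
# becomes the bias of the model.
```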

Unrepresentative data poses another problem. Barton et al point out that if data used to train the algorithm is under-representative of a certain group, the predictions the model makes might be less accurate for that group. On the other hand, algorithms might include and rely on too much data pertaining to a particular group and therefore “skew the decision toward a particular result”.
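A similarly stylized sketch can illustrate the under-representation problem. It assumes, purely for illustration, that a feature relates to the outcome differently in the two groups; because the smaller group contributes so little training data, the model learns the majority pattern and its predictions are far less accurate for the under-represented group:

```python
# A minimal, hypothetical sketch: a model trained mostly on one group can be
# far less accurate for an under-represented group. Synthetic data only --
# not drawn from any study cited above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Majority group: the outcome is strongly predicted by the feature.
x_major = rng.normal(size=(5000, 1))
y_major = (x_major[:, 0] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

# Under-represented group: the same feature relates to the outcome differently.
x_minor = rng.normal(size=(100, 1))
y_minor = (-x_minor[:, 0] + rng.normal(scale=0.5, size=100) > 0).astype(int)

X = np.vstack([x_major, x_minor])
y = np.concatenate([y_major, y_minor])

model = LogisticRegression().fit(X, y)

print("Majority-group accuracy:", accuracy_score(y_major, model.predict(x_major)))
print("Minority-group accuracy:", accuracy_score(y_minor, model.predict(x_minor)))
# The model fits the majority pattern, so it performs near (or below)
# chance for the under-represented group.
```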

What can be done about this risk of discrimination?

Many have wondered how individuals can be protected against this risk. First, it is important to consider the complexity of the problem. It might be difficult to avoid disparate impacts in practice because (apparently neutral) variables used in algorithms can easily, and without any intention on the part of the developer, function as a proxy for individuals belonging to a group protected from discrimination under the law. However, the solution is not to simply remove all proxies that can result in bias. Cofone explains that “proxies for a protected category could also be a proxy for useful and legitimate information”, so removing proxies might have the effect of removing accuracy as well. Additionally, it might be difficult for developers to foresee the effects of using the variables they choose. Nevertheless, the problems discussed above can be mitigated by a more careful and deliberate selection of the training data being used as input in these algorithms; Cofone therefore recommends regulating the data used as an approach to combat discrimination. He points out that regulation can entail requiring companies to modify their training data, for example by training their models in such a way that they treat groups equally in the decision-making process of the algorithm, as opposed to entrenching real-world biases.
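Cofone’s trade-off can also be sketched in a few lines of hypothetical code. In the toy example below, a seemingly “neutral” neighbourhood variable is correlated with a protected group but also carries genuinely useful information about the outcome (imagine distance to services), so dropping it narrows the gap between groups at the cost of accuracy:

```python
# A minimal, hypothetical sketch of Cofone's point: a "neutral" variable (a
# made-up neighbourhood score) can act as a proxy for a protected attribute
# while also carrying legitimate predictive information, so dropping it trades
# accuracy against disparity. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)                 # protected attribute (0 or 1)
neighbourhood = group * 1.5 + rng.normal(size=n)    # correlated with the group (a proxy)
clinical = rng.normal(size=n)                       # legitimate clinical signal

# The outcome depends on the clinical signal AND the neighbourhood (e.g.,
# distance to services), so the proxy carries real information.
y = (clinical + 0.8 * neighbourhood
     + rng.normal(scale=0.5, size=n) > 0.75).astype(int)

with_proxy = np.column_stack([clinical, neighbourhood])
without_proxy = clinical.reshape(-1, 1)

for name, X in [("with proxy", with_proxy), ("without proxy", without_proxy)]:
    model = LogisticRegression().fit(X, y)
    pred = model.predict(X)
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    print(f"{name}: accuracy = {accuracy_score(y, pred):.2f}, "
          f"positive-rate gap between groups = {gap:.2f}")
# Removing the proxy narrows the gap between groups, but it also removes
# legitimate information and lowers accuracy -- Cofone's trade-off.
```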

Because of the importance of the data selected to train an algorithm, researchers propose an ex-ante approach to regulating AI. For example, in the United States, lawmakers have proposed an “Algorithmic Accountability Act” to regulate AI, which would require companies “to conduct impact assessments for existing and new ‘high-risk automated systems’”. Impact assessments have been suggested as a useful tool in regulating AI. They entail an evaluation of “how an automated system is designed and used, including the training data it relies on, the risks a system poses to privacy or security, and various other factors”. In a similar vein, Canadian legislators may consider legislation that would require companies, service providers (such as hospitals), and decision makers to check for bias that might arise from their use of AI.

Another proposal for addressing algorithmic discrimination is to update Canada’s privacy laws by requiring “public bodies to log and trace their data collection and processing systems” and giving Canadians the right to know exactly which personal information, reasons and factors formed the basis for decisions made about them. Although this suggestion was made to address algorithmic decisions made by federal public bodies, it might also be a useful tool to regulate decisions made by hospitals given the impact of these decisions on an individual’s life. This might entail an update of existing provincial health privacy legislation, such as Ontario’s Personal Health Information Protection Act.

Conclusion

Exciting technological advances that encompass the use of AI in health care have the potential to dramatically improve the lives of patients across Canada. However, these advances are not without their risks. Given the high stakes in health care, Canadian lawmakers should work to ensure that when AI is used, it does not have the unintended effect of perpetuating biases and disadvantaging individuals, especially those who have historically faced barriers to equal access to health care. Many commentators are calling for the government to regulate AI. It is imperative that these calls, and their legal responses, seek to address the risks of discriminatory effects resulting from the use of AI in health care.

Annelise Harnanan is a Senior Online Editor with the McGill Journal of Law and Health. She is in her third year of the B.C.L./ LL.B. program at McGill University’s Faculty of Law. She holds a BA with distinction in Political Science from Dalhousie University and has a keen interest in health policy.

Mental Health and Anti-Discrimination Law at Work

Contributed by Annelise Harnanan

Introduction

In 2015, the government of Canada released its “Report from the Canadian Chronic Disease Surveillance System: Mental Illness in Canada.” The report used data from provincial and territorial administrative health databases to identify cases of mental illness in Canada. The report’s findings indicated that “one in three Canadians will experience a mood disorder, generalized anxiety disorder or substance abuse dependence in their lifetime”. Many people with mental health conditions, however, can and do remain in the Canadian workforce. Based on data from 2011, the Mental Health Commission of Canada stated that “21.4% of the working population in Canada currently experience mental health problems and illnesses.” Considering this, the question arises: what protections does the Canadian legal system offer to persons who face discrimination at work in relation to a mental health condition? This article briefly examines the protections that anti-discrimination law can offer to these individuals.

The Anti-Discrimination Law Framework

Federal and provincial human rights acts endeavor to prevent discrimination in the workplace based on various enumerated grounds such as age, religion, disability and sex. Under the legislation, mental illness typically falls under the ground “disability.” For example, the Nova Scotia Human Rights Act explicitly prohibits discrimination based on “physical and mental disability.” If an individual with a mental illness feels that they have been discriminated against by an employer because of their condition, they can file a complaint with their provincial human rights commission or tribunal.

Recap of the 12th Annual Colloquium (Part 1)

Contributed by Annelise Harnanan

On February 8, the McGill Journal of Law and Health welcomed four fantastic and engaging speakers to our 12th annual colloquium titled “Neurolaw: Combining the Science of the Brain and the Law.” What follows is a brief summary of the first two speakers’ presentations.

Fernanda Pérez-Gay Juarez

Dr Juarez commenced the colloquium with a presentation entitled “Brain Science and the Law: Can Neuroimaging Techniques Tell Us What We Want to Know in Court?”. Her presentation provided an excellent summary of what we can learn from emerging neuroscience techniques. She noted that many people were becoming excited about these techniques because of a hope that they would provide insight into exactly what is going on in the brain at any given point in time. This could be especially useful in many criminal law cases, where some crimes are more severely punished than others depending on the mental state of the accused. She noted that cognitive neuroscience especially has been the source of great excitement. Cognitive neuroscience is a behavioural branch of neuroscience, in which researchers attempt to study how matter, such as molecules, can give rise to the subjective phenomenon that is the mind.

Dr Juarez provided a brief summary of some current functional neuroimaging techniques, which allow us to gain some insight into what is going on in the brain as we perform various tasks. These include EEG (electroencephalography), fMRI (functional magnetic resonance imaging), and PET (positron emission tomography)/CT (computed tomography) scans. Dr Juarez noted that these techniques require pertinent, well-designed research questions. Researchers have to consider what task they will give to the subject and how they will measure what they see. Furthermore, conclusions are based on averaging across multiple participants; studies with few participants will not be very conclusive. She also observed that none of these techniques can actually give access to the content of what is going on in a subject’s head.

Dr Juarez then explored the debate over what neuroscience can tell us. She used the case of free will to illustrate this debate. Specifically, she talked about the Libet Experiment, in which researcher Benjamin Libet measured the EEG activity of subjects who were deciding when to push a button. He found that EEG activity was occurring before the subject had made the conscious decision to act. Some people took this as an indication that there is no such thing as free will because the subject experienced brain activity associated with the selection of the button before having the opportunity to consciously decide what they were doing. Some, however, said that the occurrence of this brain activity prior to a conscious decision did not necessarily indicate the absence of free will. Others critiqued the conclusion that there is no free will by noting that real life involves far more complex decision making than simply whether or not to push a button. Dr Juarez concluded her presentation by stating that neuroimaging techniques cannot tell us what we want to know in court. We are not able to use neuroscience to read the cognitive states and intentions of persons of interest. We cannot access the content of their mind or understand how they feel when they commit crimes. The brain is complex, and interactions between the brain, mind, culture, and society are multi-dimensional phenomena.

Adrian Thorogood

Next, Me Adrian Thorogood gave his presentation titled “Involving Persons with Dementia in Data-Driven Neuroscience”. Me Thorogood’s presentation focused on two neurodegenerative diseases: dementia and Alzheimer’s disease. He noted that there are two levels of societal responses: at the narrow level, society is working to develop treatments; at the broader level, society must consider how to better support caretakers and provide appropriate healthcare. Unfortunately, however, research in the context of these diseases has not been very successful: clinical trials have a 99.6% failure rate.

Me Thorogood proceeded by explaining why we are currently facing these difficulties in moving towards a cure for these neurodegenerative diseases. Firstly, the brain is incredibly complex and notoriously hard to inspect. Secondly, on the science policy side, there are some issues surrounding innovation. Because this area has the potential to have a huge impact and generate large amounts of profit, there is a large amount of secrecy surrounding research. Many researchers and industry players hope to own the patent on whatever molecule turns out to be the solution. This, in turn, creates transactional problems: researchers patent their work and keep it secret, and subsequent researchers who wish to build on that work must acquire a license to do so. Because of this, research in the field is moving at a slow pace.

Me Thorogood observed that much of the scientific world believes that big data can be a solution to this problem. Data sharing, he observed, enables us to rapidly verify and refine results and to conduct larger-scale meta-analyses. Data sharing also creates opportunities for the creative reuse of data. Additionally, many are pushing towards more transparent clinical trials. This involves the registration and reporting of clinical trial results even if these results are not positive. These solutions, however, do collide somewhat with the current push towards more stringent data governance. With the use and sharing of data, one important thing to remember is the need for patient consent. This brings us to the next consideration: what is the best way to get the consent of persons with dementia and Alzheimer’s disease? These diseases are associated with diminished mental capacity, making it hard to obtain legal consent.

Me Thorogood then provided recommendations for data-driven health research. An important pillar of research is consent and the need to protect vulnerable persons. In this regard, researchers should assume that all adults have the capacity to make legal decisions, instead of jumping to the conclusion that someone with dementia can no longer consent. Persons with dementia and Alzheimer’s have varying levels of mental capacity. Persons should only be treated as incapable of making decisions if it has been confirmed that they lack capacity. Furthermore, the law should be clearer on who can substitute consent. Me Thorogood also highlighted the importance of supported decision-making, in which persons with these diseases are given the support to help them make their own decisions, instead of having another person make decisions for them. When persons do lack the legal capacity to make decisions, they should still, at the least, be involved in the decisions being made about them. Lastly, it is important for legally authorized representatives to abide by certain rules when making a decision for an individual with a neurodegenerative disease. They should do everything they can to understand the person, involve them, and rely on the autonomous wishes the person has recorded or expressed in the past, which involves advance health care planning and directives. In the context of neurodegenerative disease, issues of the “self” come up. Is there one “self”, or more? How do you reconcile the desires of an individual with dementia today with what they said they wanted in the past? There is no easy solution, Me Thorogood concluded, but health care personnel and legal representatives must do their best to understand and interpret the wills and preferences of individuals with neurodegenerative disease.

Summaries of the final two presentations are available here.

Challenging the “Six-month Sober” Rule for Liver Transplants in Canada

Contributed by Annelise Harnanan

Introduction: The Six-Month Sober Rule

Many of the transplant programs across Canada require patients to abstain from alcohol for six months before becoming eligible to be placed on a waitlist for a liver transplant. Generally, reasons for the rule (often referred to as the “six-month sober rule”) are based on the scarcity of donated livers available for transplantation. Indeed, in 2017, Ontario’s transplant agency, the Trillium Gift of Life Network (Trillium), stated that “the number of patients needing a transplant continues to exceed organs available for transplant.”

The six-month rule appears to stem from a concern that a person with alcohol-related liver disease will return to a pattern of alcohol consumption, which could cause complications with the new liver or a recurrence of liver disease. It has been suggested that this perception is based on a study published in 1994 on recidivism after liver transplants in patients with end-stage alcohol liver disease. The authors of the study claimed that only one variable was associated with a return to alcohol abuse: sobriety from alcohol for less than six months. The authors concluded that this finding supported sobriety from alcohol for at least six months as a selection criterion for liver transplants. The six-month rule could also be based on a perception that this period of abstinence would provide sufficient time for a patient’s health to improve “to the point that liver transplantation may not be needed.”

A liver tissue extract from an individual with alcohol-related liver disease. || (Source: Centers for Disease Control and Prevention Public Health Image Library // World Health Organization)

Despite mounting criticisms, Trillium stated that it did not intend to abolish the rule given that there was insufficient evidence to support such a change. Trillium did, however, launch a three-year pilot program in 2018 to gather evidence to determine whether or not there is a scientific basis for changing the rule.

Electronic Health Records: a Glimpse Into the Legal Framework

Contributed by Annelise Harnanan 

Introduction

In an attempt to improve and modernize the state of health care in Canada, many provinces have been investing significant amounts of money and time into the digitization and centralization of health records. In 2013, Quebec introduced an electronic database, called the Québec Health Record, to securely share a patient’s information among their healthcare providers. The aggregation of patients’ health data on a single electronic database has many advantages. However, the increasing use of Electronic Health Records (EHRs) has caused some concern that the legislative framework surrounding health information privacy might require some adaptations.

The term “Electronic Health Record” has been defined somewhat inconsistently in the literature. Häyrinen, Saranto & Nykänen, basing their definition on that of the International Organization for Standardization, have described it as a “repository of patient data in digital form, stored and exchanged securely, and accessible by multiple authorized users”. EHRs bring all of a patient’s health data into one digital location. This information can be accessed by any of that patient’s health care providers when authorized by the patient. Notably, there are other electronic forms of medical information, such as electronic patient information files, which are localised at hospitals and health clinics and are not shared amongst health care providers.