Contributed by Annelise Harnanan
Introduction
The Canadian health care sector is increasingly adopting new technologies that incorporate Artificial Intelligence (AI). AI is a broad term referring to computer algorithms that can perform human-like activities such as reasoning and problem solving. In health care, AI-based technologies have the potential to improve many areas of patient care. However, authors in the field have observed that these algorithms can sometimes have discriminatory effects, so care must be taken when deploying them. Many have noted the presence of systemic racism in Canadian health care, and we must ensure that these biases are not perpetuated by the algorithms we use. This blog post will discuss the use of AI in health care, explore the discriminatory effects that can arise from its use, and briefly examine the role the law might play in protecting individuals from those effects.
What is AI?
AI-based computer programs analyse large amounts of data to perform certain tasks, such as making predictions. Many of these new technologies utilize machine learning, a subset of AI. Machine learning “refers to the ability of computers to improve their performance on a task without being explicitly programmed to do so”. Machine learning techniques are advantageous because they can process enormous amounts of input data and perform complex analyses. Furthermore, some of these technological advances encompass deep learning, which uses multiple levels of variables to process data and predict outcomes.
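To make this more concrete, the short sketch below (written in Python with the scikit-learn library, using entirely synthetic data rather than any real patient records) illustrates what “learning without being explicitly programmed” means: the program is given labelled examples and fits its own decision rule, instead of following rules a human wrote out by hand.

```python
# A minimal sketch of supervised machine learning. The "patient record"
# data are synthetic and the task is invented for illustration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic dataset: 1,000 examples, 10 numeric features, each labelled
# 0 (no complication) or 1 (complication).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)              # the model "learns" from labelled examples
print("held-out accuracy:", model.score(X_test, y_test))
```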

AI-based computer programs analyse large amounts of data to perform certain tasks, such as making predictions. (Source: creativecommons // deepakiqlect)
How can AI be used in health care?
Many have heralded AI for its potential to help improve efficiency in the health care sector. Experts in the field believe that it can “improve the ability of health professionals to establish a prognosis” and “improve diagnostic accuracy”. AI can also be used in managing health complications, assisting with patient care during treatment, and in research “aimed at discovery or treatment of disease”. For example, at Toronto’s Hospital for Sick Children, researchers have come up with a computer model to predict “whether a patient will go into cardiac arrest in the next five minutes”.
In addition, some researchers have proposed that AI can address racial disparities in health care. This suggestion may seem to contradict the premise of this article that AI can be discriminatory. However, it is important to note that AI is not inherently good or bad. Rather, it can have desirable or undesirable results depending on the data used to train the algorithm. In a recently published study, Pierson et al used a machine learning algorithm on a dataset of knee radiographs. They claim that because they used a racially and socioeconomically diverse dataset, their algorithm was able to “reduce unexplained [pain] disparities in underserved populations”. Their conclusion was that the algorithm’s predictions “could potentially redress disparities in access to treatments like arthroplasty”. Notably, the study had limitations. Because of the “black box” nature of the algorithm (a common critique of AI is that it can be unclear how exactly the algorithm arrives at its results), the authors admitted that they could not identify which features of the knee the algorithm used to predict pain. As a result, it was not evident which features radiographers were missing, and it therefore remains unclear whether altering treatment plans for these individuals would confer any benefit. Nevertheless, this study is an example of the possible positive impact that algorithms can have on the provision of health care.
While it is exciting that AI has the ability to improve efficiency in the health care setting, and might even help to reduce disparities, many authors in the field warn of the need to proceed with caution. They cite concerns over the lack of algorithmic transparency (especially when governmental bodies use AI to make important, life-changing decisions), the impact on an individual’s privacy, and the potential for these algorithms to have an indirectly discriminatory effect. Given that discrimination within health care is systemic and often indirect, it is important to be attentive to the use of these new technologies and to ensure they do not perpetuate discrimination.

In a recently published study, Pierson et al used a machine learning algorithm on a dataset of knee radiographs. (Source: creativecommons // Minnaert)
How can AI in health care have a discriminatory effect?
AI can be biased. Barton et al explain that an algorithm might make predictions that disparately impact a particular group of people, causing them to experience certain harms, “where there is no relevant difference between groups that justifies such harms”. This bias can show up in many different phases of a project entailing AI, but Barton et al contend that there are two especially problematic causes of bias: historical human biases and unrepresentative data.
They explain that historical human biases and “deeply embedded prejudices” can become amplified in computer algorithms. For example, an algorithm used by judges to determine whether offenders were likely to reoffend (impacting whether or not they should be released on bail) used multiple variables to determine an offender’s risk score. These variables included defendant demographics and arrest records. Barton et al explain that “African-Americans were more likely to be assigned a higher risk score” when “compared to whites who were equally likely to re-offend”. Given that they were comparing individuals that were equally likely to re-offend, why were members of one group more likely to be assigned a higher risk score by the algorithm? It had to do with the use of arrest records in the training data. The authors highlight the importance of the social context and the role of historical human biases: “if African-Americans are more likely to be arrested and incarcerated […] due to historical racism, disparities in policing practices or other inequalities within the criminal justice system, these realities will be reflected in the training data and used to make suggestions about whether a defendant should be detained”.
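The toy simulation below is not the actual risk-scoring tool; it is a hedged illustration, with invented numbers, of the mechanism Barton et al describe. Two groups re-offend at exactly the same rate, but one group accumulates more recorded arrests because of biased policing. A model trained on those records, and never shown group membership at all, nonetheless assigns higher risk scores to members of that group.

```python
# Synthetic illustration of historical bias in training data. All figures
# are invented; this is not the real risk-assessment algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
reoffend = rng.random(n) < 0.30          # both groups re-offend at the same rate

# Biased policing: group B accumulates more *recorded* prior arrests
# for the same underlying behaviour.
prior_arrests = rng.poisson(1 + reoffend + 1.5 * group)

# The model never sees group membership, only the arrest record ...
X = prior_arrests.reshape(-1, 1)
model = LogisticRegression().fit(X, reoffend)
risk = model.predict_proba(X)[:, 1]

# ... yet among people who did NOT re-offend, group B receives higher
# scores, because the arrest record acts as a proxy for group membership.
print("mean risk score, non-reoffenders, group A:", risk[(group == 0) & ~reoffend].mean())
print("mean risk score, non-reoffenders, group B:", risk[(group == 1) & ~reoffend].mean())
```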
Unrepresentative data poses another problem. Barton et al point out that if data used to train the algorithm is under-representative of a certain group, the predictions the model makes might be less accurate for that group. On the other hand, algorithms might include and rely on too much data pertaining to a particular group and therefore “skew the decision toward a particular result”.
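A similar toy sketch (again with invented data) illustrates the under-representation problem: when one group supplies only a small fraction of the training examples, the model fits the majority group’s patterns and its predictions become less accurate for the minority group.

```python
# Synthetic illustration of unrepresentative training data; all data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    """Two groups whose feature/outcome relationship differs slightly."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n)) > 0
    return X, y

# Training set: 5,000 examples from group A, only 100 from group B.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
Xa_test, ya_test = make_group(2000, shift=0.0)
Xb_test, yb_test = make_group(2000, shift=2.0)
print("accuracy, well-represented group A:", model.score(Xa_test, ya_test))
print("accuracy, under-represented group B:", model.score(Xb_test, yb_test))
```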
What can be done about this risk of discrimination?
Many have wondered how individuals can be protected against this risk. First, it is important to consider the complexity of the problem. It might be difficult to avoid disparate impacts in practice because apparently neutral variables used in algorithms can easily, and without any intention on the part of the developer, function as a proxy for membership in a group protected from discrimination under the law; a postal code, for instance, can stand in for race or income where neighbourhoods are segregated. However, the solution is not simply to remove all proxies that can result in bias. Cofone explains that “proxies for a protected category could also be a proxy for useful and legitimate information”, so removing proxies might remove accuracy as well. Additionally, it might be difficult for developers to foresee the effects of the variables they choose. Nevertheless, the problems discussed above can be mitigated by a more careful and deliberate selection of the training data used as input to these algorithms; Cofone therefore recommends regulating that data as an approach to combating discrimination. He points out that such regulation could require companies to modify their training data, for example by training their models to treat groups equally in the decision-making process rather than entrenching real-world biases.
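The sketch below illustrates Cofone’s point with invented data: a “neighbourhood” variable is shaped both by residential segregation (so it correlates with group membership) and by a legitimate factor (distance to a clinic), so it genuinely helps predict the outcome, and simply deleting it would sacrifice that accuracy.

```python
# Toy illustration of a proxy variable; all data and variable names invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10000
group = rng.integers(0, 2, n)

# "Neighbourhood" reflects both group membership (residential segregation)
# and a legitimate factor (distance to a clinic).
distance_to_clinic = rng.normal(size=n)
neighbourhood = distance_to_clinic + 2.0 * group + rng.normal(scale=0.5, size=n)

# The outcome depends only on the legitimate factor.
missed_appointments = (distance_to_clinic + rng.normal(scale=0.5, size=n)) > 0

X = neighbourhood.reshape(-1, 1)
with_proxy = LogisticRegression().fit(X, missed_appointments)

baseline = max(missed_appointments.mean(), 1 - missed_appointments.mean())
print("accuracy without any features (best constant guess):", baseline)
print("accuracy using the proxy variable:", with_proxy.score(X, missed_appointments))
print("correlation between proxy and group:", np.corrcoef(neighbourhood, group)[0, 1])
```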
Because of the importance of the data selected to train an algorithm, researchers propose an ex-ante approach to regulating AI. For example, in the United States, lawmakers have proposed an “Algorithmic Accountability Act” to regulate AI, which would require companies “to conduct impact assessments for existing and new ‘high-risk automated systems’”. Impact assessments have been suggested as a useful tool in regulating AI. They entail an evaluation of “how an automated system is designed and used, including the training data it relies on, the risks a system poses to privacy or security, and various other factors”. In a similar vein, Canadian legislators may consider legislation that would require companies, service providers (such as hospitals), and decision makers to check for bias that might occur from their use of AI.
Another proposal for addressing algorithmic discrimination is to update Canada’s privacy laws by requiring “public bodies to log and trace their data collection and processing systems” and giving Canadians the right to know exactly which personal information, reasons, and factors formed the basis for decisions made about them. Although this suggestion was made to address algorithmic decisions made by federal public bodies, it might also be a useful tool for regulating decisions made by hospitals, given the impact of these decisions on an individual’s life. This might entail an update of existing provincial health privacy legislation, such as Ontario’s Personal Health Information Protection Act.
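What such logging and tracing might look like in practice is sketched below. The function and record fields are hypothetical, but the idea is simply that every automated decision is stored together with the personal information, factors, and model version behind it, so that it can later be explained to the person affected.

```python
# Hypothetical sketch of decision logging; field names and function invented.
import json
from datetime import datetime, timezone

def log_decision(patient_id, inputs, prediction, model_version, logfile="decisions.log"):
    """Append a traceable record of one automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "patient_id": patient_id,
        "inputs_used": inputs,           # which personal information was used
        "prediction": prediction,        # what the system decided or recommended
        "model_version": model_version,  # which algorithm produced it
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("anon-0001", {"age": 67, "prior_admissions": 2}, "high readmission risk", "v1.3")
```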
Conclusion
Exciting technological advances that encompass the use of AI in health care have the potential to dramatically improve the lives of patients across Canada. However, these advances are not without their risks. Given the high stakes in health care, Canadian lawmakers should work to ensure that when AI is used, it does not have the unintended effect of perpetuating biases and disadvantaging individuals, especially those who have historically faced barriers to equal access to health care. Many commentators are calling for the government to regulate AI. It is imperative that these calls, and the legal responses to them, seek to address the risks of discriminatory effects resulting from the use of AI in health care.
Annelise Harnanan is a Senior Online Editor with the McGill Journal of Law and Health. She is in her third year of the B.C.L./ LL.B. program at McGill University’s Faculty of Law. She holds a BA with distinction in Political Science from Dalhousie University and has a keen interest in health policy.