On February 3, 2018, the McGill Journal of Law and Health held its 10th annual colloquium, entitled Changing the Face of Health Care through Artificial Intelligence: Emerging Ethical and Legal Debates. This year’s edition was particularly topical given Montreal’s growing presence on the international artificial intelligence (AI) scene. Lawyers, physicians, computer scientists, and law and medical students attended the event. The program included two expert panel discussions: one offering an overview of the development of artificial intelligence technologies, and one charting the road towards a regulatory framework for artificial intelligence, particularly as it relates to the right to health in Canada.
Panel 1: An Overview of the Development of Artificial Intelligence Technologies
Contributed by Catherine Labasi-Sammartino
The first panel was composed of Dr. Jonathan Kanevsky, a final-year resident in Plastic and Reconstructive Surgery at McGill who has developed several medical devices to improve therapy for skin cancer and scarring; Christelle Papineau, a PhD candidate in the joint international thesis program between Paris University Panthéon-Sorbonne and the University of Montreal, whose research focuses on the interactions between law and artificial intelligence from a comparative perspective between Europe and North America; and Me. Antoine Guilman, a lawyer at Fasken and a member of its national information and privacy protection group, who holds a PhD in information technology law from the University of Montreal and Paris University Panthéon-Sorbonne.
Panel 1 speakers, from left to right: Dr. Jonathan Kanevsky, Christelle Papineau, and Me. Antoine Guilman
Dr. Kanevsky opened the panel with a discussion of AI’s potential in health care, which he illustrated with examples of AI excelling at pattern-recognition tasks, such as tumour detection in human biopsies. To advance health care, it is important to recognize that some skills, such as pattern recognition, are no longer exclusively human. This shift resembles the one that occurred when it became clear that the human mind could not retain all the information required to treat patients, prompting the idea that doctors should keep medical records in a database. Dr. Kanevsky offered the audience several insights into what AI can do in health care, including classification (e.g., identifying cancer types), prediction (e.g., making predictions based on physical appearance), and diagnosis (e.g., detecting cancer cells).
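To make the classification example concrete, here is a minimal sketch of the kind of pattern-recognition pipeline Dr. Kanevsky described, using scikit-learn’s bundled breast-cancer biopsy dataset. This is an illustrative example of the technique, not the actual system discussed at the panel:

```python
# Illustrative sketch only: a simple pattern-recognition classifier trained
# on scikit-learn's bundled breast-cancer biopsy dataset. Not the actual
# tool discussed on the panel.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Each row is a biopsy described by numeric features (cell size, texture, ...)
# labelled malignant or benign.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# The model learns the feature/label associations from examples,
# rather than from hand-written diagnostic rules.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The held-out test set matters here: the panel’s point was that such systems can match human pattern recognition, and that claim can only be checked on biopsies the model has never seen.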
Dr. Kanevsky also addressed the ethical challenges raised by the use of AI. Is AI good or bad? All three speakers weighed in on this question. Christelle Papineau brought up current studies on the potential of algorithms (e.g., COMPAS and LSI-R) to determine the appropriate sentence in criminal cases, illustrating both AI’s potential role in our legal system and its associated risks. She stressed the value of human involvement in legal decision-making and the social responsibility of AI innovators not to delegate an irresponsible share of the cognitive processing required in legal settings to AI. Dr. Kanevsky echoed these concerns and left the audience with a rhetorical question about AI’s role in displacing a scientist’s thought process.
Me. Guilman raised issues surrounding the anonymization of personal information, including doubts about its effectiveness and reliability. He also explained how the current trend of collecting ever more data, without discriminating by data type, has created a series of challenges for lawyers and business owners operating under current Canadian data protection laws. These laws are widely recognized as being out of touch with recent technological changes and have left the legal community with widely varying interpretations.
Overall, the first panel succeeded in prompting the audience to reflect on what AI can bring to the legal and health care fields, while stressing the need for a continued responsible attitude towards its implementation. For all of the speakers, this responsibility translated into always preserving the possibility of human involvement when relying on AI decision-making. Raising awareness and sharing information with the general public about AI’s growing presence, through events such as the MJLH colloquium, was acknowledged as an effective way to promote the responsible use of AI.
Panel 2: Road Towards a Regulatory Framework for Artificial Intelligence
Contributed by Handi Xu
Nicole Mardis is a Project Officer for the CIHR Institute of Health Services and Policy Research and a PhD candidate at McGill University specializing in medical sociology and industrial relations. Mardis began her talk by explaining that artificial intelligence signifies new patterns of human-computer interaction at the programming level that are expected to: expand the scope of activity that can be augmented by technology, accelerate algorithm development, and generate more independent machines. While traditional programming involves step-by-step problem solving based on hard-coded rules, AI consists of machines learning from data and examples, which puts less of a burden on programmers to embed all relevant context and meaning in the instructions they write for computers.
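The contrast Mardis drew can be sketched in a few lines of code. The example below is a hypothetical triage scenario of our own, not one used at the panel: the rule-based version encodes the decision logic explicitly, while the learned version infers it from labelled examples.

```python
# Hypothetical illustration of the contrast described above.

# Traditional programming: the rule is hard-coded by the programmer,
# who must anticipate every threshold and condition.
def flag_patient_rule_based(temperature_c: float, heart_rate: int) -> bool:
    return temperature_c > 38.0 and heart_rate > 100

# Machine learning: the "rule" is inferred from labelled examples,
# so the programmer does not have to spell out the thresholds.
from sklearn.tree import DecisionTreeClassifier

examples = [[36.5, 70], [37.0, 80], [38.5, 110], [39.2, 120]]  # [temp, HR]
labels = [0, 0, 1, 1]  # 0 = no flag, 1 = flag for review

model = DecisionTreeClassifier().fit(examples, labels)
print(model.predict([[38.8, 115]]))  # the learned model makes the call
```

In the first function, the programmer carries the full burden of context; in the second, that burden shifts to the data and examples, which is exactly why Mardis urged care about which data are used.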
While the productive potential of AI is not yet fully understood, comparisons are often made to the Industrial Revolution. It is worth noting that the mechanization and centralization of production during the Industrial Revolution produced major productivity gains, but those gains were distributed in such a way that large segments of the population initially saw their material well-being and quality of life decline.
Panel 2 speakers, from left to right: Christelle Papineau (moderator), Nicole Mardis, Dr. Frank Rudzicz, and Me. Marie Hirtle
The Digital Revolution appears to be following a different pattern: information technology has diffused widely and costs have fallen, but productivity gains are hard to locate. Will the AI Revolution change this? What we do know is that more R&D is needed to make AI mainstream, and we should be particularly mindful of which data and examples are used to drive this activity. Health care stakeholders (e.g., hospitals, clinics, and governments) now house very rich sources of population-based clinical and social data that could be used for AI. In partnership with these entities, research funders such as the Canadian Institutes of Health Research are investing in platforms and services to make these data available to university- and hospital-based researchers. Yet, because AI cuts across many fields of research and is driven in large part by industry, governments and other research funders will have to think more strategically about how public data assets are used to shape the trajectory of AI, as well as how they structure partnerships to maximize social and economic benefits for citizens.
Dr. Frank Rudzicz is a scientist at the Toronto Rehabilitation Institute (University Health Network), an assistant professor of Computer Science at the University of Toronto, co-founder and President of WinterLight Labs Inc., faculty member at the Vector Institute, and President of the international joint ACL/ISCA special interest group on Speech and Language Processing for Assistive Technologies. Dr. Rudzicz talked about the importance of using AI and software tools for medical diagnosis in the health care system in an ethical manner.
Current trends in AI research involve deep neural networks, big (interlinked) data, recurrent neural networks for temporal/dynamic data, reinforcement learning, active learning, telehealth and remote monitoring, as well as causal/explainable models. Reinforcement learning consists of systems learning “online” by taking imperfect observations, inferring the unseen state, and then taking an action; this type of learning requires some exploration, with rewards and costs usually supplied by humans (a minimal sketch of this loop follows below). Active learning, in which doctors use AI to help determine a patient’s disease, is efficient, but it also risks placing doctors in a feedback loop and fostering blind reliance on AI. Neural networks learn to associate input features with output categories, but those associations carry no abstract logic or interpretable reasoning; since correlation is not causation, one usually cannot tell why or how a neural network made a decision, which is problematic when it comes to assigning responsibility.
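Here is a minimal sketch of the reinforcement-learning loop Dr. Rudzicz described: observe imperfectly, infer the unseen state, act (with some exploration), and receive a human-supplied reward. The environment, states, and reward values are invented for illustration:

```python
import random

# Hypothetical sketch of the observe -> infer -> act -> reward loop.
ACTIONS = ["treat", "wait"]
EPSILON = 0.1   # fraction of steps spent exploring rather than exploiting
q = {}          # value estimates, keyed by (inferred state, action)
counts = {}

def observe() -> float:
    """A noisy, imperfect observation of the patient's condition."""
    return random.gauss(0.5, 0.2)

def infer_state(observation: float) -> str:
    """Infer the unseen state from the noisy observation."""
    return "sick" if observation > 0.5 else "healthy"

def human_reward(state: str, action: str) -> float:
    """Rewards and costs are supplied by humans (simulated here)."""
    return 1.0 if (state == "sick") == (action == "treat") else -1.0

for step in range(5000):
    state = infer_state(observe())
    if random.random() < EPSILON:                       # explore
        action = random.choice(ACTIONS)
    else:                                               # exploit
        action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
    reward = human_reward(state, action)
    key = (state, action)
    counts[key] = counts.get(key, 0) + 1
    old = q.get(key, 0.0)
    q[key] = old + (reward - old) / counts[key]         # incremental average

print(q)  # "treat" should look valuable when sick, "wait" when healthy
```

Even in this toy version, the system only learns by occasionally taking actions it does not yet believe are best, which is precisely the exploration that raises ethical questions in a clinical setting.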
Humans are notoriously bad with information: patients misread or miscommunicate their symptoms, while doctors make diagnostic errors. A study by Bennett and Hauser (2013), which compared patient outcomes between doctors and sequential decision-making algorithms, concluded that AI technology was not only less costly but also led to 50% better outcomes. Clinical doctors prescribe medication after informing patients of its benefits and side effects. The AI doctor, however, prescribes medication partly as an experiment, which allows it to learn directly and continuously from the outcomes, making it difficult to determine which set of ethics applies. Current regulatory frameworks will face ethical challenges and will certainly need to adapt to the rise of AI, but most importantly, they must continue to respect individual rights.
Marie Hirtle is a lawyer with a background in ethics and a specialization in health issues ranging from community-based health and social services to tertiary and quaternary care, biomedical research, and public health. She is currently Manager of the Centre for Applied Ethics at the McGill University Health Centre (MUHC), where she leads a team of professional ethicists who provide clinical, organizational, and research ethics services to the MUHC community. Using the examples of the artificial pancreas, Face2Gene, and Big Data, Me. Hirtle discussed regulatory issues raised by different applications of AI in health care settings. The artificial pancreas uses an insulin pump, a continuous glucose sensor, and a control algorithm to help patients with diabetes, but the dosing algorithm is self-learning, which makes it difficult to regulate. Face2Gene is an application that collects personal health information, such as photographs of babies’ faces, to facilitate the detection of facial dysmorphic features and recognizable patterns of human malformations, while referencing comprehensive and up-to-date genetic information. It uses advanced technologies, including computer vision, deep learning, and other AI algorithms, to analyze patient symptoms, features, and genomic data. Face2Gene allows labs to interpret genetic information more accurately, thus helping clinicians diagnose rare diseases earlier.
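To make the artificial pancreas example more concrete, here is a deliberately simplified sketch of a sensor-to-pump control loop. The gain values, sensor model, and self-tuning rule are invented for illustration; real dosing algorithms are clinically validated and far more sophisticated:

```python
import random

# Hypothetical, heavily simplified artificial-pancreas loop:
# sensor reading -> control algorithm -> insulin dose.
TARGET_GLUCOSE = 5.5   # mmol/L
gain = 0.1             # units of insulin per mmol/L above target
glucose = 9.0          # current (simulated) blood glucose

for minute in range(0, 60, 5):                      # a reading every 5 min
    reading = glucose + random.gauss(0, 0.2)        # noisy glucose sensor
    error = max(0.0, reading - TARGET_GLUCOSE)      # only dose above target
    dose = gain * error                             # control algorithm
    glucose -= dose * 2.0                           # simulated insulin effect
    glucose += random.gauss(0.1, 0.05)              # meals/metabolism drift
    # "Self-learning": the controller nudges its own gain based on outcomes.
    gain += 0.01 * (glucose - TARGET_GLUCOSE) / TARGET_GLUCOSE
    gain = max(0.02, min(gain, 0.5))                # keep the gain bounded
    print(f"t={minute:02d}min reading={reading:.1f} dose={dose:.2f}U gain={gain:.3f}")
```

The last two lines of the loop capture the regulatory difficulty Me. Hirtle identified: because the gain adapts to observed outcomes, the device a regulator approves is not quite the same device a patient is using six months later.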
Me. Hirtle also discussed the legal issues associated with Big Data. Currently, Big Data is being collected, stored, used and disclosed, either when individuals consent or when the law explicitly allows it. Although obtaining individual consent is desirable, it can be impracticable. Individuals often click on “I agree” without reading the terms and conditions, therefore not knowing what they are consenting to. Furthermore, even though the law allows the use of non-identifiable data, the re-identification of these data is technically possible, which could potentially infringe on the right to privacy.
Overall, the second panel of the event drew the audience’s attention to the uncertain future of AI and the need to develop appropriate legal and regulatory frameworks to ensure that the benefits of AI can be harnessed while tempering the risks.