Mental Health on the Front-line: Legal Barriers to Psychological Injury Compensation for Public Safety Personnel

Contributed by Souhila Baba and Andréanne Angehrn

Post-Traumatic Stress Disorder

Public Safety Personnel (PSP) such as first responders, firefighters, police, and correctional officers often see their work as a duty, a public service, and a vocation. But what happens when this line of work becomes the source of an illness? This is the case for Natalie Harris, a paramedic from Ontario and one of the many PSP who developed Post-Traumatic Stress Disorder (PTSD) due to the traumatic nature of her work. The likelihood of experiencing at least one traumatic event is extremely high for Canadians (74.2% for women and 81.3% for men), and approximately 15% to 24% of individuals who experience a traumatic event will develop PTSD. Among PSP, however, trauma is a regular occurrence. PTSD is characterized by symptoms of hyperarousal, avoidance, intrusive memories, and numbing; approximately 9% of Canadians, and more than 21% of PSP, will go on to develop the disorder. Living through or witnessing life-threatening events, such as car accidents, physical or sexual assaults, and natural disasters, among others, can cause PTSD, and as traumatic exposures multiply, so does the risk of developing it. Moreover, PTSD is often associated with other mental health problems, such as depression, anxiety disorders, and alcohol or substance abuse and dependence, as well as heightened suicidal ideation and suicide attempts. Notably, more than 50 Canadian PSP took their own lives in 2017, and more than 27% reported having considered suicide in their lifetime. Natalie Harris has not yet returned to work since her PTSD diagnosis, but she is devoted to increasing awareness and legal support for others who, just like her, have to face unimaginable trauma as part of their 9-to-5 job.

The Legislative Issue

A team of experts led by Dr. Nicholas Carleton at the University of Regina conducted a nation-wide study of PSP (defined as dispatchers, correctional workers, firefighters, police officers, paramedics, and RCMP officers) and their experiences of mental disorder symptoms. In a sample of 5,813 first responders, the prevalence of mental disorder symptoms appeared substantially higher than in the general public, although sampling differences precluded direct statistical comparisons. What the study suggests, then, is that this group is potentially at greater risk of suffering from a mental disorder than the rest of the population. In light of these considerations, certain provinces, notably Alberta, Manitoba, New Brunswick, Ontario, and Saskatchewan, have legislated a presumption of causality between the development of a mental disorder, like PTSD, and the workplace for certain professions. Consequently, the process of workers' compensation for a workplace injury is facilitated and simplified. In Québec, however, there is no such presumption for first responders, which forces them to go through the regular channel for workplace compensation. This process can become time-consuming, stressful, and expensive.

Public safety personnel, such as firefighters, have a heightened propensity to develop mental health issues, including post-traumatic stress disorder || (Source: Flickr // Heather Paul)

The Administrative and Legal Framework

In Ontario, a first responder who develops PTSD stemming from witnessing or experiencing a distressing event benefits from the presumption that this diagnosis is a workplace injury (s. 14, Workplace Safety and Insurance Act). As such, if they present to the Workplace Safety and Insurance Board (WSIB) with a diagnosis of PTSD from a psychologist or a psychiatrist, their claim should be presumed valid. This is, however, a rebuttable presumption, meaning that the WSIB or the employer can deny a claim if they believe that the PTSD did not in fact stem from the workplace environment. The first responder can then appeal this decision by filing an “intent to object” form and asking an Appeals Resolution Officer (ARO) to reconsider the decision. Up to this point, all of the decisions are administrative, passing directly through the WSIB framework. If the first responder pursues the claim further by appealing to the Workplace Safety and Insurance Appeals Tribunal (WSIAT), then the WSIB or the employer must show, on a balance of probabilities, that the injury (i.e., the PTSD) did not stem from the workplace. The first responder does not bear the burden of proof.

Up to the stage of appeal to the Tribunal administratif du travail (TAT), the procedure in Québec is quite similar, in that it is an administrative decision made through the Commission des normes, de l’équité, de la santé et de la sécurité du travail (CNESST). However, if a first responder decides to appeal to the TAT, the burden of proof is on them to show, on a balance of probabilities, that the PTSD diagnosis was in fact a workplace injury in order to be compensated through the CNESST. This major difference between the Ontario and Québec frameworks possibly reflects a different understanding of the link between first responders’ daily work and their mental health. Presumption schemes are not unknown to the Québec framework, however. The Act Respecting Industrial Accidents and Occupational Diseases, which governs the CNESST, contains presumptions of causality for various diseases caused by infectious or physical agents (Schedule I). None of these diseases, however, relates to mental health.

The Missing First Responders: The Case of Nurses

An important group of professionals missing from the Ontario legislation (s. 14(2)) and from the nation-wide study mentioned above is nurses. Nurses, especially emergency and psychiatric nurses, are at the highest risk across health sector professionals of experiencing workplace violence. Moreover, this violence is gendered, with women nurses overwhelmingly being its targets. In the United States, workplace violence is the second leading cause of occupational death among women. Instances of violence or trauma, even a single event, can lead to the development of mental health issues, particularly PTSD. To counter this, the Registered Nurses’ Association of Ontario (RNAO) is currently lobbying for their profession to be included in the list of first responders benefiting from a presumption of causality regarding PTSD as a workplace injury. In December 2017, the government of Ontario issued a news release acknowledging the need to include nurses in the presumption legislation.

Front-line workers, including nurses, face legal barriers to receiving workplace compensation for mental health injuries || (Source: Flickr // Daniel Steinberg)

In contrast, the current Québec framework places a heavy burden on a particularly vulnerable population. In one case, a worker, a hospital attendant training to become a nurse, had been the victim of a violent incident involving a patient. Although the CSST (now CNESST) allowed her claim for compensation for her physical injuries, her claim for reimbursement of medication for her diagnosed PTSD was contested on the allegation that she had developed PTSD prior to the violent event. At trial, she presented evidence from ten different doctors, psychiatrists, and psychologists to support her claim. The difficulty in such cases lies in the nature of the illness and its possible comorbidity with other health concerns, such as substance use and depression symptoms, which blurs the line of causality between the workplace trauma and the subsequent diagnosis. Although her claim eventually succeeded, the administrative reviews before the TAT, the gathering of evidence, the hiring of legal representation, and the mental strain of testimony and trial procedure might have been avoided had a presumption existed in the law.

Instances of violence or trauma, even a single event, can lead to the development of mental health issues, particularly PTSD.

While many are calling for a nation-wide, cohesive framework to protect the front-liners who put their security and mental health on the line daily to protect us, conversation around this issue has not yet led to public action. Should the Québec framework allow a presumption for all workers who suffer from PTSD, as is the case in Manitoba? Or should all psychological disorders benefit from such a presumption, as is the case in Saskatchewan? Last year, the Report of the Standing Committee on Public Safety and National Security recommended the creation of a Canadian Institute for Public Safety Officer Health Research and the elaboration of a national strategy regarding operational stress injuries. However, we have yet to see the results and action plans recommended by this report materialize. Meanwhile, workers across Canada, like Natalie Harris, have to face ambiguous and strenuous legal procedures before being able to focus on their ultimate duty: their own recovery, mental health, and eventual journey back to doing the work that they love.

Andréanne Angehrn holds a BA (Honours) with distinction in psychology from Concordia University. She will join Dr. Nicholas Carleton’s team in the fall as a graduate student in clinical psychology at the University of Regina. Andréanne is enthusiastic about providing care and support to those affected by trauma, and about extending the scope of research to minorities. Recently, she presented her undergraduate thesis, which focused on circadian autonomic functioning and stress in children, at the American Psychosomatic Society’s annual conference in Louisville, KY, under the supervision of Dr. Jennifer J. McGrath.

Souhila Baba is a Senior Online Editor with the McGill Journal of Law and Health with a keen interest in mental health, access to health services, and access to justice. She holds a BSc with distinction in Psychology, with a minor in Political Science from Concordia University. Since she joined the Faculty of Law at McGill University in 2016, she has been able to expand her interests in policy, technology, science, and the law, and the important contributions that women make to these fields and their intersections. Souhila is currently interning with the McGill Research Group on Health and Law at the CIUSSS du Centre-Ouest-de-l’Île-de-Montréal under the supervision of Me. Nathalie Lecoq.

The authors would like to thank Me. Cristina Toteda for her guidance and insight on the practical and real-world implications of the occupational safety and health framework in Québec. Souhila would further like to thank Prof. Derek J. Jones for allowing her to explore this topic further through a research paper in the course “Law and Psychiatry”.

Mind Protection: Data Privacy Legislation in the Age of Brain-Machine Interfaces

Contributed by Dr. Anastasia Greenberg

Brain-machine interfaces (BMIs) are a class of devices that allow for direct communication between a human brain and a device such as a computer, a prosthetic limb, or a robot. The technology works by having the user wear an electroencephalography (EEG) cap that records brain activity in the form of brain waves. These waves are then processed and interpreted by advanced software to “decode” the brain’s intended actions, which are translated into commands sent to a computer or a mechanical device – the gadget options are seemingly infinite. With the growth of big data analytics and artificial intelligence (read an MJLH article on this issue), the proliferation of BMIs poses unique legal and ethical risks for personal data privacy and security, given the highly intimate nature of the information that BMIs gather.
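
To make the decoding step concrete, here is a minimal, purely illustrative Python sketch of such a pipeline: filter the raw EEG into the frequency bands that motor intentions are known to modulate, convert each recording epoch into band-power features, and train a classifier to map those features to an intended command. Every name, parameter, and data value below is an assumption for illustration; no vendor’s actual software is being shown.

```python
# Illustrative BMI decoding pipeline: filter raw EEG, extract band-power
# features, and classify the user's intended action. All parameters and
# the placeholder data are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 256  # sampling rate in Hz, typical for consumer EEG caps

def band_power(epoch, low, high, fs=FS):
    """Mean power of one EEG epoch (channels x samples) in a frequency band."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epoch, axis=1)
    return (filtered ** 2).mean(axis=1)  # one feature per channel

def features(epoch):
    # Imagined movements modulate the mu (8-12 Hz) and beta (13-30 Hz) rhythms.
    return np.concatenate([band_power(epoch, 8, 12), band_power(epoch, 13, 30)])

# Labelled training epochs (e.g., "move cursor left" vs "right"); random
# placeholders stand in for real recordings.
train_epochs = [np.random.randn(8, FS) for _ in range(40)]
labels = np.random.randint(0, 2, size=40)
clf = LogisticRegression().fit([features(e) for e in train_epochs], labels)

new_epoch = np.random.randn(8, FS)
command = clf.predict([features(new_epoch)])[0]  # decoded intended action
print("send command to device:", command)
```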

Recent Advances in BMIs

The major factor limiting the widespread application of BMIs is the difficulty of accurately interpreting a person’s thoughts from their recorded brain activity. Major headway has been made in the last decade. A highly publicized example involved a quadriplegic patient with an implanted brain chip (instead of a non-invasive EEG cap) who was able to check emails, turn lights on and off, and play video games using his thoughts alone. A newer version of this chip, developed by a company called BrainGate, is currently undergoing clinical trials. Such developments also have potentially life-changing health care implications for patients with locked-in syndrome, who have lost the ability to communicate due to muscle paralysis. BMIs allow locked-in patients to communicate using their thoughts.

Brain-machine interfaces allow for control of computers and mechanical objects using thoughts || (Source: Flickr // Ars Electronica)

The applications of BMIs extend beyond health care into the consumer context. A company called Emotiv Lifesciences created a sophisticated driving simulator that allows for thought-controlled navigation through a virtual course. Another company, Muse, offers an enhanced meditation experience by providing feedback that allows users to modulate their own brain waves.

BMI technology can also be used for direct brain-to-brain communication. In 2013, researcher Dr. Rajesh Rao sat in his laboratory at the University of Washington wearing an EEG cap, facing a computer screen that displayed a simple video game. The object of the game was to fire a cannon at a target by pressing a key on a keyboard at the right moment. Rao did not touch the keyboard; instead, he used his thoughts to imagine moving his right hand to press the key. On the other end of the university campus, Dr. Andrea Stocco sat in his own laboratory with a transcranial magnetic stimulation (TMS) coil (which is used to activate specific areas of the brain) placed over the part of his motor cortex that controls hand movements. Stocco could not see the video game. Every time Rao imagined firing the cannon, a command was sent via the internet to trigger the TMS coil over Stocco’s head, forcing his finger to press a keyboard key, which would then fire the cannon at the target on Rao’s computer screen. Rao was thus able to control Stocco’s movements over the web with his thoughts.
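
Schematically, the experiment is a relay: a decoded intention on one side becomes a network message that drives a stimulator on the other. The sketch below, which assumes a trivial TCP trigger protocol, shows only that relay; the decoding and stimulation functions are hypothetical stubs, since the real EEG classification and coil control are far more involved.

```python
# Schematic brain-to-brain relay. decode_imagined_movement and fire_tms_pulse
# are hypothetical stand-ins for EEG classification and stimulation hardware.
import socket

def decode_imagined_movement(epoch) -> bool:
    """Hypothetical stub for an EEG classifier (cf. the pipeline sketch above)."""
    return epoch.get("fire", False)

def fire_tms_pulse() -> None:
    """Hypothetical stub for driving the coil over motor cortex."""
    print("pulse delivered: finger presses the key, cannon fires")

def sender(eeg_stream, host="receiver-lab.example.edu", port=9000):
    # Sender's side: relay a trigger whenever a "fire" intention is decoded.
    with socket.create_connection((host, port)) as conn:
        for epoch in eeg_stream:
            if decode_imagined_movement(epoch):
                conn.sendall(b"FIRE\n")

def receiver(port=9000):
    # Receiver's side: each trigger arriving over the internet drives the coil.
    with socket.create_server(("", port)) as server:
        conn, _ = server.accept()
        with conn, conn.makefile() as lines:
            for line in lines:
                if line.strip() == "FIRE":
                    fire_tms_pulse()

# sender() and receiver() would each run on one of the two lab machines.
```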

Data Privacy in Canada

In the age of big data, personal information in the form of search engine entries, online shopping activity, and website visits can, when aggregated, reveal highly accurate details about a person’s life. This reality has raised public concerns over data privacy in Canada. As BMIs increasingly enter the market and join the “internet of things”, organizations will, for the first time, have access to the most personal information yet – information obtained directly from the brain.

In Canada, the protection of personal data, such as brain data, falls under a complex web of privacy legislation. Although the Canadian Charter of Rights and Freedoms does not explicitly mention a right to privacy, privacy is protected to some degree by sections 7 (liberty) and 8 (unreasonable search and seizure). The Privacy Act governs the handling of personal information by the federal government, while the Personal Information Protection and Electronic Documents Act (PIPEDA) is a federal statute that applies to businesses in Canada that collect, use, and disclose personal data for commercial purposes. PIPEDA was enacted in 2000 in an attempt to harmonize data privacy standards across the country and to strike a balance between the economic benefits of private data use and respect for individual privacy. To add extra complexity, provinces and territories can enact their own data privacy legislation, which supersedes PIPEDA where the federal government considers it “substantially similar” to PIPEDA.

Privacy legislation in Canada and abroad aims to protect personal information, such as health-related data || (Source: Flickr // luckey_sun)

PIPEDA has been heavily criticized since coming into force for its feeble enforcement mechanisms. As a result, amendments to PIPEDA in 2015 introduced a requirement to notify the Privacy Commissioner of any data privacy breach creating a real risk of significant harm to an individual, including bodily harm, reputational harm, and identity theft. Failure to notify can result in fines of up to $100,000. Furthermore, the Office of the Privacy Commissioner has provided guidance on section 5(3) of PIPEDA, which prohibits inappropriate collection, use, and disclosure of personal data. The so-called “No-Go Zones” under section 5(3) prohibit activities such as processing data in a way that would lead to unethical or discriminatory treatment, and data uses that are likely to cause significant harm. Significant harm means “bodily harm, humiliation, damage to reputation or relationships, loss of employment, business or professional opportunities, financial loss, identity theft, negative effects on one’s credit record and damage to or loss of property”. These changes could bolster the privacy protection of brain data.

What remains intact following the amendments is an insidious provision that leaves the door ajar for government surveillance. Section 7(3)(c.1) is a blanket provision that allows private entities to disclose personal information at the request of the government, without the individual’s knowledge or consent, in the name of national security and law enforcement. Given the rich information that brain data contain, it is not evident how the government might decide to use such unfettered access in its activities.

Data Privacy Internationally

Europe is known to have the world’s highest data privacy standards. The European Union Data Protection Directive (Directive 95/46) is interpreted in light of the Charter of Fundamental Rights of the European Union, which specifically recognizes personal data protection as a human right. Article 8(1) of the Directive requires that member states prohibit the processing of sensitive data, including health-related data, a category that brain data may well fall under. However, much as in PIPEDA, the desire to balance organizational interests with privacy protection is reflected in exceptions to this prohibition: where the data subject consents, where the processing is in the public interest, and for certain medical and health care purposes.

In May 2018, the General Data Protection Regulation (GDPR) will officially replace Directive 95/46. One of the prominent changes from Directive 95/46 is the widening of jurisdiction: the GDPR will apply to all companies processing the personal data of individuals located within the EU, irrespective of where a company is located. This change will likely force non-EU companies, including Canadian companies, to comply with the GDPR in order to allow cross-border data transfers. The strategy behind this new approach is to ensure that Europe lays the ground rules for the international data privacy game.

As BMIs increasingly enter the market and join the “internet of things”, organizations will, for the first time, have access to the most personal information yet – information obtained directly from the brain.

Other major changes that will be introduced with the GDPR are the “right to access”, under which a data subject can request copies of their personal data, and the “right to be forgotten”, under which the data subject can request that their personal data be permanently erased. Just as BMIs are introducing highly intimate data into the mix, the GDPR may offset some of the increased privacy risks by putting more control in the hands of the data subject and by pushing international privacy standards upward.
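
For a sense of what these two rights demand of an organization holding brain data, here is a minimal Python sketch over an assumed in-memory data store. The record layout and identifiers are invented for illustration; real compliance would also reach backups, third-party processors, identity verification, and audit logs.

```python
# Toy sketch of servicing GDPR data-subject requests. The store and record
# layout are hypothetical placeholders for an organization's real systems.
import json

data_store = {
    "subject-42": {"email": "ada@example.com", "sessions": ["2018-01-03", "2018-02-11"]},
}

def right_to_access(subject_id):
    """Return a portable copy of everything held on the data subject."""
    record = data_store.get(subject_id)
    return json.dumps(record, indent=2) if record else None

def right_to_be_forgotten(subject_id):
    """Permanently erase the data subject's personal data."""
    return data_store.pop(subject_id, None) is not None

print(right_to_access("subject-42"))       # copy handed to the data subject
print(right_to_be_forgotten("subject-42")) # True: record erased
```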

The Future of Privacy

The promise of brain-machine interfaces is hard to overstate. BMIs can already restore lost abilities such as vision, hearing, movement, and communication. Beyond restoration, BMIs allow for super-human enhancement in the form of control over virtual environments, manipulation of robots, and even the transmission of linguistic messages without vocalized speech. The effective implementation of BMIs depends directly on the effectiveness of neural decoding: the technology’s ability to “mind read”, albeit currently in crude form. Organizations that create BMIs and control their software will have access to rich brain data, and governments will desire access to that data. The EEG data in question are as unique as one’s fingerprints, providing biomarkers for the prediction of individual intelligence and of predispositions to conditions such as depression, Alzheimer’s disease, and autism. The ongoing development of data privacy legislation in Canada and abroad will shape future control of the mind’s personal bits.


Anastasia Greenberg is a second-year student in the B.C.L./LL.B. program at McGill University’s Faculty of Law and is the Executive Online Editor of the McGill Journal of Law and Health. Anastasia holds a PhD in Neuroscience from the University of Alberta.

Recap of Changing the Face of Health Care through Artificial Intelligence: Emerging Ethical and Legal Debates

On February 3, 2018, the McGill Journal of Law and Health held its 10th annual colloquium, entitled Changing the Face of Health Care through Artificial Intelligence: Emerging Ethical and Legal Debates. This year’s edition was particularly topical considering Montreal’s growing presence on the international artificial intelligence (AI) scene. Lawyers, physicians, computer scientists, and law and medical students attended the event. The program included two expert panel discussions: one giving an overview of the development of AI technologies, and one charting the road towards a regulatory framework for AI, particularly as it bears on the right to health in Canada.

Panel 1: An Overview of the Development of Artificial Intelligence Technologies

Contributed by Catherine Labasi-Sammartino

The first panel was composed of Dr. Jonathan Kanevsky, a final-year resident in Plastic and Reconstructive Surgery at McGill who has developed several medical devices to improve therapy for skin cancer and scarring; Christelle Papineau, a PhD candidate in the international thesis program established between Paris University Panthéon-Sorbonne and the University of Montreal, whose research focuses on the interactions between law and artificial intelligence from a comparative perspective between Europe and North America; and Me. Antoine Guilman, a lawyer at Fasken and a member of its national Information and Privacy Protection group, who holds a PhD in Information Technology law from the University of Montreal and Paris University Panthéon-Sorbonne.

Panel 1 speakers, from left to right: Dr. Jonathan Kanevsky, Christelle Papineau, and Me. Antoine Guilman

Dr. Kanevsky started the panel off with a discussion of the potential of AI in health care, which he demonstrated by sharing examples of AI excelling in pattern recognition tasks, such as tumour detection in human biopsies. To advance health care, it is important to recognize that some skills, such as pattern recognition, are no longer exclusively human. This shift resembles an earlier one: once it was recognized that the human mind could not possibly retain all the information required to treat patients, doctors began using databases to keep medical records. Dr. Kanevsky offered his audience several insights about what AI can do in health care, including classification (e.g., identifying cancer types), prediction (e.g., making predictions based on physical appearance), and diagnosis (e.g., detecting cancer cells).

Dr. Kanevsky also addressed the ethical challenges raised by the use of AI: is AI good or bad? All three speakers jumped in to answer this question. Christelle Papineau brought up current studies on the potential of algorithms (e.g., COMPAS and LSI-R) to determine appropriate sentences in criminal cases, illustrating AI’s potential role in our legal system and its associated risks. She stressed the value of human involvement in legal decision-making and the social responsibility of AI innovators not to delegate an undue share of the cognitive processing required in legal settings to machines. Dr. Kanevsky echoed these concerns and left the audience with a rhetorical question about AI’s potential to displace the scientist’s own thought process.

Me. Guilman brought up issues surrounding the anonymization of personal information, including doubts about its effectiveness and reliability. He also described the current trend of collecting ever more data, without discriminating according to data type, and how this has created challenges for lawyers and business owners working under current Canadian data protection laws. These laws are widely recognized as out of touch with recent technological changes and have left the legal community with widely divergent interpretations.

Overall, the first panel succeeded in bringing the audience to reflect on what AI can bring to the legal and health care fields, while stressing the need for a continued responsible attitude towards its implementation. For all of the speakers, this responsibility means always preserving the possibility of human involvement when relying on AI decision-making. Raising awareness and sharing information with the general population about AI’s growing presence, at events such as the MJLH colloquium, was acknowledged as an effective way to promote the responsible use of AI.

Panel 2: Road Towards a Regulatory Framework for Artificial Intelligence

Contributed by Handi Xu

Nicole Mardis is a Project Officer for the CIHR Institute of Health Services and Policy Research and a PhD candidate at McGill University specializing in medical sociology and industrial relations. Mardis began her talk by explaining that artificial intelligence signifies new patterns of human-computer interaction at the programming level that are expected to expand the scope of activity that can be augmented by technology, accelerate algorithm development, and generate more independent machines. While traditional programming involves step-by-step problem-solving based on hard-coded rules, AI consists of machines learning from data and examples, which places less of a burden on programmers to embed all relevant context and meaning in the instructions they write for computers.
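
The contrast can be made concrete with a toy Python sketch: in the first function, a human hard-codes the rule; in the second, a model induces its own rule from labelled examples. The data, thresholds, and labels are invented purely for illustration.

```python
# Traditional programming: the programmer supplies the rule explicitly.
def is_high_risk_rule(age, blood_pressure):
    return age > 65 and blood_pressure > 140  # threshold chosen by a human

# Machine learning: the rule is induced from labelled examples instead.
from sklearn.tree import DecisionTreeClassifier

examples = [[70, 150], [45, 120], [80, 160], [30, 110]]  # toy patient data
outcomes = [1, 0, 1, 0]                                  # labels from past cases
model = DecisionTreeClassifier().fit(examples, outcomes)

print(is_high_risk_rule(72, 155))     # True: the human's rule fires
print(model.predict([[72, 155]])[0])  # 1: the machine's induced rule agrees
```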

While the productive potential of AI is still not fully understood, comparisons are often made to the Industrial Revolution. It is important to note that the mechanization and centralization of production during the Industrial Revolution gave rise to major productivity gains, but these gains were distributed in such a fashion that large segments of the population initially saw their material well-being and quality of life decrease.

Panel 2 speakers, from left to right: Christelle Papineau (moderator), Nicole Mardis, Dr. Frank Rudzicz, and Me. Marie Hirtle

The Digital Revolution appears to be following a different pattern: information technology has diffused widely and costs have fallen, but productivity gains are hard to locate. Will the AI Revolution change this? What we do know is that more R&D is needed to make AI mainstream, and we should be particularly mindful of what data and examples are used to drive this activity. Health care providers (e.g., hospitals, clinics, and governments) now house very rich sources of population-based clinical and social data that could be used for AI. In partnership with these entities, research funders such as the Canadian Institutes of Health Research are investing in platforms and services to make these data available to university and hospital-based researchers. Yet, because AI cuts across many different fields of research and is driven in large part by industry, governments and other research funders will have to think more strategically about how public data assets are used to shape the trajectory of AI, as well as how they structure partnerships to maximize social and economic benefits for citizens.

Dr. Frank Rudzicz is a scientist at the Toronto Rehabilitation Institute (University Health Network), an assistant professor of Computer Science at the University of Toronto, co-founder and President of WinterLight Labs Inc., a faculty member at the Vector Institute, and President of the international joint ACL/ISCA special interest group on Speech and Language Processing for Assistive Technologies. Dr. Rudzicz spoke about the importance of using AI and software tools for medical diagnosis in the health care system in an ethical manner.

Current trends in AI research involve deep neural networks, big (interlinked) data, recurrent neural networks for temporal and dynamic data, reinforcement learning, active learning, telehealth and remote monitoring, as well as causal and explainable models. Reinforcement learning consists of systems learning “online” by taking imperfect observations, inferring the unseen state, and then taking an action; this type of learning necessitates some exploration, with rewards and costs usually supplied by humans. Active learning, in which the system asks clinicians to label the cases it is least certain about, is efficient, but also risks putting doctors in a feedback loop and creating a blind reliance on AI. Neural networks learn to associate input features with output categories, but there is no abstract logic or interpretable reasoning behind those associations; correlation is not causation, meaning that one usually cannot tell why or how a neural network made a decision, which is problematic when it comes to assigning responsibility.
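
As a rough illustration of the “online” loop described above, here is a toy Python sketch of a two-action bandit: the system explores occasionally, acts, receives a (here simulated) human-supplied reward, and updates its estimates. All values are invented; nothing here reflects an actual clinical system.

```python
# Toy reinforcement-learning loop: explore, act, update from reward.
import random

q_values = {"treatment_a": 0.0, "treatment_b": 0.0}  # estimated value of each action
counts = {a: 0 for a in q_values}

def choose_action(epsilon=0.1):
    if random.random() < epsilon:            # occasional exploration
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)   # otherwise exploit best estimate

for step in range(100):
    action = choose_action()
    # Simulated stand-in for a human-supplied reward signal.
    reward = random.gauss(0.6 if action == "treatment_a" else 0.4, 0.1)
    counts[action] += 1
    # Incremental mean update of the action's estimated value.
    q_values[action] += (reward - q_values[action]) / counts[action]

print(q_values)  # estimates drift toward each action's expected reward
```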

Humans are notoriously bad with information: patients misread or miscommunicate their symptoms, while doctors make diagnostic errors. A study by Bennett and Hauser (2013), which compared patient outcomes between doctors and sequential decision-making algorithms, concluded that AI technology was not only less costly but also led to 50% better outcomes. Clinical doctors prescribe medication after informing patients of its benefits and side effects. The AI doctor, however, prescribes medication partly as an experiment, allowing it to learn directly and continuously from the outcomes, which makes it difficult to determine which set of ethics applies. Current regulatory frameworks will face ethical challenges and will certainly need to adapt to the rise of AI, but most importantly, they need to continue to respect individual rights.

Marie Hirtle is a lawyer with a background in ethics and a specialization in health issues ranging from community-based health and social services to tertiary and quaternary care, biomedical research, and public health. She is currently Manager of the Centre for Applied Ethics at the McGill University Health Centre (MUHC), where she leads a team of professional ethicists who provide clinical, organizational, and research ethics services to the MUHC community. Using the examples of the artificial pancreas, Face2Gene, and Big Data, Me. Hirtle discussed regulatory issues raised by different applications of AI in health care settings. The artificial pancreas uses an insulin pump, a continuous glucose sensor, and a control algorithm to help patients with diabetes, but the dosing algorithm is self-learning, which makes it difficult to regulate. Face2Gene is an application that collects personal health information, such as photographs of the faces of babies, to facilitate the detection of facial dysmorphic features and recognizable patterns of human malformations, while referencing comprehensive and up-to-date genetic information. It uses advanced technologies, including computer vision, deep learning, and other AI algorithms, to analyze patient symptoms, features, and genomic data. Face2Gene allows labs to interpret genetic information more accurately, thus helping clinicians diagnose rare diseases earlier.
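
To see why a self-learning dosing algorithm unsettles regulators, consider this deliberately toy Python sketch of the closed loop: sensor reading in, dose out, with the controller’s gain nudging itself over time. Every constant is invented and nothing here resembles clinical logic; the point is only that the rule being regulated keeps rewriting itself.

```python
# Toy closed-loop "artificial pancreas" pattern: sensor -> controller -> pump.
# All numbers are illustrative; this is NOT clinical guidance.
TARGET_GLUCOSE = 6.0  # mmol/L, illustrative set point

def dose(glucose, gain):
    """Proportional controller: insulin units scale with excess glucose."""
    return max(0.0, gain * (glucose - TARGET_GLUCOSE))

gain = 0.5
for reading in [9.2, 8.1, 7.4, 6.8]:  # simulated continuous glucose readings
    units = dose(reading, gain)
    # Crude self-tuning: nudge the gain while glucose stays well above target.
    # This is the "self-learning" step that makes the regulated rule a moving target.
    gain += 0.05 if reading > TARGET_GLUCOSE + 1 else 0.0
    print(f"glucose={reading} -> deliver {units:.2f} units (gain now {gain:.2f})")
```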

Me. Hirtle also discussed the legal issues associated with Big Data. Currently, Big Data is collected, stored, used, and disclosed either when individuals consent or when the law explicitly allows it. Although obtaining individual consent is desirable, it can be impracticable: individuals often click “I agree” without reading the terms and conditions, and therefore without knowing what they are consenting to. Furthermore, even though the law allows the use of non-identifiable data, the re-identification of such data is technically possible, which could potentially infringe on the right to privacy.

Overall, the second panel of the event drew the audience’s attention to the uncertain future of AI and the need to develop appropriate legal and regulatory frameworks to ensure that the benefits of AI can be harnessed while tempering the risks.