Mind Protection: Data Privacy Legislation in the Age of Brain-Machine Interfaces

Contributed by Dr. Anastasia Greenberg

Brain-machine interfaces (BMIs) are a class of devices that allow for direct communication between a human brain and a device such as a computer, a prosthetic limb, or a robot. The technology typically works by having the user wear an electroencephalography (EEG) cap that records brain activity in the form of brain waves. These waves are then processed and interpreted by advanced software to “decode” the brain’s intended actions, which are translated into a command sent either to a computer or a mechanical device – the gadget options are seemingly infinite. With the growth of big data analytics and artificial intelligence (read an MJLH article on this issue), the proliferation of BMIs poses a unique legal and ethical risk for personal data privacy and security, given the highly intimate nature of the information that BMIs gather.
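In rough terms, the record-decode-command pipeline can be sketched as follows. This is a minimal, hypothetical illustration only: real decoders use far more sophisticated signal processing and machine learning, and the frequency, threshold, and command names here are invented.

```python
import math

def band_power(samples, freq_hz, sample_rate_hz):
    """Estimate signal power at one frequency via a single discrete Fourier term."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / sample_rate_hz)
             for i, s in enumerate(samples))
    im = sum(s * -math.sin(2 * math.pi * freq_hz * i / sample_rate_hz)
             for i, s in enumerate(samples))
    return (re * re + im * im) / n

def decode_command(eeg_window, sample_rate_hz=256, threshold=5.0):
    """Map one window of EEG samples to a device command. Hypothetical rule:
    strong 10 Hz ('alpha') power suggests the user is relaxing -> lights off."""
    alpha = band_power(eeg_window, freq_hz=10, sample_rate_hz=sample_rate_hz)
    return "LIGHTS_OFF" if alpha > threshold else "NO_ACTION"

# Simulated one-second EEG window dominated by a 10 Hz rhythm
window = [math.sin(2 * math.pi * 10 * t / 256) for t in range(256)]
print(decode_command(window))  # prints LIGHTS_OFF
```

The legally interesting point is visible even in this toy: the raw brain signal never leaves the pipeline unprocessed – what the device (and its vendor) stores and transmits is an interpretation of the user’s mental state.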

Recent Advances in BMIs

The major limiting factor on the widespread application of BMIs is the ability to accurately interpret a person’s thoughts from their recorded brain activity, and major headway has been made in the last decade. A highly publicized example involves a quadriplegic patient with an implanted brain chip (instead of a non-invasive EEG cap) who was able to check emails, turn lights on and off, and play video games using his thoughts alone. A newer version of this chip, developed by a company called BrainGate, is currently undergoing clinical trials. Such developments have potentially life-changing health care implications for locked-in syndrome patients, who have lost the ability to communicate due to muscle paralysis; BMIs allow locked-in patients to communicate using their thoughts.

Brain-machine interfaces allow for control of computers and mechanical objects using thoughts || (Source: Flickr // Ars Electronica)

The applications of BMIs extend beyond health care into the consumer context. A company called Emotiv Lifesciences created a sophisticated driving simulator that allows for thought-controlled navigation through a virtual course. Another device, called Muse, offers an enhanced meditation experience by giving users real-time feedback that helps them modulate their own brain waves.

BMI technology can also be used for direct brain-to-brain communication. In 2013, researcher Dr. Rajesh Rao sat in his laboratory at the University of Washington wearing an EEG cap and faced a computer screen displaying a simple video game. The object of the game was to fire a cannon at a target by pressing a key on a keyboard at the right moment. Rao did not touch the keyboard; instead, he imagined moving his right hand to press the key. On the other end of the university campus, Dr. Andrea Stocco sat in his own laboratory with a transcranial magnetic stimulation (TMS) coil (which is used to activate specific areas of the brain) placed over the part of his motor cortex that controls hand movements. Stocco had no view of the video game. Every time Rao imagined firing the cannon, a command was sent via the internet to trigger the stimulation over Stocco’s head, forcing his finger to press a keyboard key, which in turn fired the cannon at the target on Rao’s computer screen. Rao was thus able to control Stocco’s movements over the web with his thoughts.
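The experiment’s control loop can be sketched in a few lines. This is a loose illustration, not the researchers’ actual code: the power-based classifier is an invented stand-in for the real EEG analysis, and the callback stands in for the stimulation hardware.

```python
def classify_motor_imagery(eeg_window):
    """Invented stand-in for the real EEG classifier: returns True when the
    sender is imagining a right-hand movement (here, a simple power test)."""
    power = sum(s * s for s in eeg_window) / len(eeg_window)
    return power > 0.5  # hypothetical threshold

def brain_to_brain_step(sender_eeg, stimulate):
    """One cycle of the loop: decode the sender's imagined movement and,
    if detected, trigger magnetic stimulation over the receiver's motor
    cortex, which forces the key press that fires the cannon."""
    if classify_motor_imagery(sender_eeg):
        stimulate()  # receiver's finger presses the key
        return "FIRE"
    return "WAIT"

pulses = []
print(brain_to_brain_step([1.0] * 64, lambda: pulses.append("TMS pulse")))  # FIRE
```

Seen this way, the “brain-to-brain” link is a chain of ordinary networked components, each of which handles brain-derived data – precisely the kind of data flow privacy legislation would need to reach.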

Data Privacy in Canada

In the age of big data, personal information in the form of search engine entries, online shopping activity, and website visits can, when aggregated, reveal highly accurate details about a person’s life. This reality has raised public concerns over data privacy in Canada. As BMIs increasingly enter the market and join the “internet of things”, organizations will, for the first time, have access to the most personal information yet – information obtained directly from the brain.

In Canada, the protection of personal data, such as brain data, is captured by a complex web of privacy legislation. Although the Canadian Charter of Rights and Freedoms does not explicitly mention a right to privacy, privacy is protected to some degree by sections 7 (liberty) and 8 (unreasonable search and seizure). The Privacy Act governs the handling of personal information by the federal government, while the Personal Information Protection and Electronic Documents Act (PIPEDA) is a federal statute that applies to businesses in Canada that collect, use, and disclose personal data for commercial purposes. PIPEDA was enacted in 2000 in an attempt to harmonize data privacy standards across the country and to strike a balance between the economic benefits stemming from private data use and respect for individual privacy. To add extra complexity, provinces and territories can enact their own data privacy legislation, which supersedes PIPEDA if the federal government considers it to be “substantially similar” to PIPEDA.

Privacy legislation in Canada and abroad aims to protect personal information, such as health-related data || (Source: Flickr // luckey_sun)

PIPEDA has been criticized heavily since coming into force for its feeble enforcement mechanisms. As a result, in 2015, amendments to PIPEDA introduced a requirement to notify the Privacy Commissioner of any data privacy breach creating a real risk of significant harm to an individual, including bodily harm, reputational harm, and identity theft. Failure to notify can result in fines of up to $100,000. Furthermore, the Office of the Privacy Commissioner has provided guidance on section 5(3) of PIPEDA, which prohibits inappropriate collection, use, and disclosure of personal data. The so-called “No-Go Zones” under section 5(3) prohibit activities such as the processing of data in a way that would lead to unethical or discriminatory treatment, and data uses that are likely to cause significant harm. Significant harm means “bodily harm, humiliation, damage to reputation or relationships, loss of employment, business or professional opportunities, financial loss, identity theft, negative effects on one’s credit record and damage to or loss of property”. These changes can bolster the privacy protection of brain data.

What remains intact following the amendments is an insidious provision that leaves the door ajar for government surveillance. Section 7(3)(c.1) is a blanket provision that permits private entities to disclose personal information, without the individual’s knowledge or consent, at the request of the government in the name of national security and law enforcement. Given the rich information that brain data contains, it is not evident how the government may decide to use such unfettered access in its activities.

Data Privacy Internationally

Europe is known to have the world’s highest data privacy standards. The European Union Data Protection Directive (Directive 95/46) is interpreted in light of the Charter of Fundamental Rights of the European Union, which specifically recognizes personal data protection as a human right. Article 8(1) of the Directive requires that member states prohibit the processing of sensitive data, including health-related data – a category under which brain data may well fall. However, much like PIPEDA, the desire to balance organizational interests with privacy protection is reflected in exceptions to this prohibition where consent is obtained from the data subject, where the data processing is in the public interest, or for certain medical and health care purposes.

In May of 2018, the General Data Protection Regulation (GDPR) will officially replace Directive 95/46. One of the prominent changes from Directive 95/46 relates to the widening of jurisdiction: the GDPR will apply to all companies processing the personal data of individuals located within the EU, irrespective of where a company is located. This change will likely force non-EU companies, including Canadian companies, to comply with the GDPR in order to allow for cross-border data transfers. The strategy behind this new approach is to ensure that Europe lays the ground rules for the international data privacy game.

As BMIs increasingly enter the market and join the “internet of things”, organizations will, for the first time, have access to the most personal information yet – information obtained directly from the brain.

Other major changes introduced with the GDPR are the “right to access”, under which a data subject will be able to request copies of their personal data, and the “right to be forgotten”, under which the data subject can request that their personal data be permanently erased. Just as BMIs are introducing highly intimate data into the mix, the GDPR may offset some of the increased privacy risks by putting more control in the hands of the data subject and by pushing international privacy standards upward.

The Future of Privacy

The promise of brain-machine interfaces is hard to overstate. BMIs can already restore lost abilities such as vision, hearing, movement, and communication. Beyond restoration, BMIs allow for super-human enhancement in the form of control over virtual environments, manipulation of robots, and even the transmission of linguistic messages without vocalizing speech. The effective implementation of BMIs speaks directly to the effectiveness of neural decoding: the technology’s ability to “mind read” – albeit currently in crude form. Organizations that create BMIs and control their software will have access to rich brain data. Governments will desire access to that data. The EEG data in question are as unique as one’s fingerprints, providing biomarkers for the prediction of individual intelligence and of predispositions to conditions such as depression, Alzheimer’s disease, and autism. The ongoing development of data privacy legislation in Canada and abroad will shape future control of the mind’s personal bits.


Anastasia Greenberg is a second-year student in the B.C.L/LL.B. program at McGill University’s Faculty of Law and is the Executive Online Editor of the McGill Journal of Law and Health. Anastasia holds a PhD in Neuroscience from the University of Alberta.

Artificial Intelligence in Health Care: Are the Legal Algorithms Ready for the Future?

Contributed by Dr. Anastasia Greenberg

Big data has become a buzzword, a catch-all term representing a digital reality that is already changing the world and replacing oil as the newest lucrative commodity. But it is often overlooked that big data alone is useless – much like unrefined oil. If big data is the question, artificial intelligence (AI) is the answer. The marriage between big data and AI has been called the fourth industrial revolution. AI has been defined as “the development of computers to engage in human-like thought processes such as learning, reasoning and self-correction”. Machine learning is a field of AI concerned with pattern extraction from complex, multidimensional [big] data. Although when we think of AI we tend to think of robots, machine learning algorithms – software, not hardware – are the game changer, as they can sort through massive amounts of information that would take a human centuries to process.

Machine learning software can harness the useful information hidden in big data for health care applications || (Source: Flickr // Merrill College of Journalism Press Releases)

While machine learning is already used in fields like finance and economics, the health care industry is the next frontier. In both Canada and the United States, about three-quarters of physicians have now switched to Electronic Health Records, where medical records are digitized as big data. The ability to use such data for medical purposes comes with a host of legal and policy implications. The Canadian Federal Budget 2017 includes $125 million in funding for a Pan-Canadian AI strategy, part of which is meant to “develop global thought leadership on the economic, ethical, policy and legal implications of advances in artificial intelligence.”

How Does a Machine “Learn”?

Machine learning essentially trains a program to predict unknown and/or future values of variables in a similar way to how humans learn. Imagine that you are visiting a friend’s house and you see a furry creature run past you. Automatically, you identify this creature as a cat, not a small dog, even though both are furry animals, and despite never having come across this specific cat before. Your past learning experiences with other cats allowed you to make an inference and classify a novel creature into the “cat” category. This is exactly the kind of task that used to be impossible for computers to perform but has become a reality with machine learning. A computer program is first fed massive amounts of data – for instance, images of biopsies that are pre-classified into categories such as “tumour” and “no tumour” (this procedure is called “supervised” training). The computer then learns to recognize complex patterns within the data, enabling it to classify novel images as either tumour or no tumour.
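The “tumour / no tumour” training described above can be shown with a toy nearest-centroid classifier. This is a deliberately simplified sketch: real diagnostic systems learn from raw images with deep neural networks, and the two-number feature vectors here (imagine cell density and shape irregularity, pre-labelled by pathologists) are invented.

```python
# Toy supervised learning: each "biopsy image" is reduced to two invented
# features, and each training example carries a pathologist-assigned label.
training_data = [
    ((0.9, 0.8), "tumour"), ((0.8, 0.9), "tumour"), ((0.85, 0.7), "tumour"),
    ((0.2, 0.1), "no tumour"), ((0.1, 0.3), "no tumour"), ((0.3, 0.2), "no tumour"),
]

def centroid(points):
    """Average feature vector of a list of 2-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

# "Training": compute the average feature vector for each class.
centroids = {
    label: centroid([x for x, y in training_data if y == label])
    for label in {"tumour", "no tumour"}
}

def classify(features):
    """Label a new, never-seen sample by its nearest class centroid."""
    return min(centroids,
               key=lambda label: sum((f - c) ** 2
                                     for f, c in zip(features, centroids[label])))

print(classify((0.75, 0.8)))   # tumour — near the tumour examples
print(classify((0.15, 0.2)))   # no tumour — near the healthy examples
```

The point mirrors the cat example: the program never saw these exact “images” during training, yet it generalizes from the labelled examples to classify them.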

A machine can be trained to classify health care data for various purposes such as the detection of tumours in biopsy images. || (Source: Flickr // Markus Spiske)

Artificial Intelligence in Health care

Machine learning in the health care context holds a lot of promise for diagnosis, disease onset prediction, and prognosis. Since misdiagnoses are the leading cause of malpractice claims in both Canada and the United States, machine learning could greatly diminish health care and legal costs by improving diagnostic accuracy. With these promises in mind, the research literature on AI applications for health care has taken off in the last few years. Machine learning has been shown to successfully diagnose Parkinson’s Disease using Magnetic Resonance Imaging (MRI) data, predict ischemic stroke, predict lung cancer prognosis better than pathologists, and predict colorectal cancer patient prognosis better than colorectal surgeons. Perhaps most evocatively, machine learning algorithms trained using Electronic Health Record data can predict suicide up to two years in advance with 80 percent accuracy.

In the private sector, IBM’s Watson, famous for its performance on Jeopardy, has been reinvented into Watson Health. Watson Health has “read” millions of medical journal articles and understands natural language to answer a host of medical questions. Although it is in its primitive stages, IBM hopes that Watson will soon be able to harness data on patients’ laboratory tests, symptoms, and genetic information. Together, this information processed with Watson’s medical knowledge could be used to effectively diagnose and suggest treatment options.

Legal and Ethical Implications of AI in Health care

These advances should be celebrated, but also received with caution in light of their ethical and legal implications. To understand the issues at stake, it is important to comprehend how these algorithms work. Machine learning can take in complex data made up of billions of data points; the more complex the input data that the machine is trained on, the more information it has available for making accurate predictions, but the higher the risk of overfitting. Overfitting occurs when an algorithm becomes very good at modelling the current dataset but cannot successfully generalize to a new dataset – the dataset used to make decisions pertaining to new patients.

Perhaps most evocatively, machine learning algorithms trained using Electronic Health Record data can predict suicide up to two years in advance with 80 percent accuracy.

A further complication is that because the machine learns somewhat autonomously about the data patterns, the process is a black box. It is difficult to understand which features within the data are most important and what kinds of complex nonlinear relationships the machine is modelling to make its predictions. That is why diligent machine learning researchers test their algorithms on new datasets to figure out how reliable and generalizable they are. In any case, a high level of expertise is required and some level of error is inevitable.
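The train-versus-test distinction can be made concrete with a toy example. Here a 1-nearest-neighbour rule – which by construction memorizes its training set perfectly – is evaluated on both its own training data and held-out cases; the one-feature “patients” and the deliberately mislabelled training point are invented for illustration.

```python
def nn_predict(train, x):
    """1-nearest-neighbour: return the label of the closest training point."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def accuracy(train, dataset):
    """Fraction of examples in `dataset` that the 1-NN rule labels correctly."""
    hits = sum(nn_predict(train, x) == label for x, label in dataset)
    return hits / len(dataset)

# One invented feature per "patient"; the 0.45 case carries a wrong label (noise).
train = [(0.1, "A"), (0.2, "A"), (0.3, "A"), (0.45, "B"), (0.7, "B"), (0.8, "B")]
test  = [(0.4, "A"), (0.75, "B")]

print(accuracy(train, train))  # 1.0 — the model has memorized its own data
print(accuracy(train, test))   # 0.5 — it generalizes poorly to new cases
```

Perfect training accuracy alongside mediocre test accuracy is the signature of overfitting – and exactly why a model’s performance on its own training data says little about how it will treat the next patient.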

The Current Legal Algorithms

So, what happens when a patient is misdiagnosed using AI? How would the current legal structure deal with such a scenario? One possibility is that such a situation would fall under the default fault-based regime (or “algorithm”) under the Civil Code of Quebec (CCQ, art. 1457), analogous to the tort of negligence in common law Canada. This would imply that the physician using AI technology may be liable for its misdiagnosis. However, AI does not seem well suited for such a regime. As discussed above, the algorithm is going to be more accurate on average than a physician. Thus, once AI in health care becomes commonplace, it would be difficult to find the physician negligent, especially when the physician is not the one writing the code. Given that the accuracy would be an improvement over physician knowledge alone, the physician “expert standard” (where a physician’s actions are compared to her peers) may shift to require the use of such technology.

A physician may soon be able to use AI for diagnosing patients and suggesting treatment options || (Source: Flickr // Hamza Butt)

Another option would be to fit AI within the realm of product liability, which, under the CCQ (art. 1468, 1469), is a form of strict liability, meaning that the evidentiary burden would be placed on the defendant – in this case the manufacturer of the AI software – not the patient. This means that the patient has a “leg up” in such a litigation scenario. In line with this idea, in 2017 the Committee on Legal Affairs of the European Union submitted a motion to the European Parliament calling for law reform on the issue of robotics (included under the AI umbrella) to implement a strict liability regime. Under the CCQ, the rule governing the autonomous act of a thing (art. 1465), which is close to strict liability (a presumption of fault), is another possibility, since AI learns semi-autonomously (or even fully autonomously). This would again implicate the physician, as the guardian of the AI.

The issue with a strict liability regime for AI is that, unlike with a traditional defective “product”, a human (the physician) and AI are going to work together in the decision-making process, making it difficult to point the finger at the manufacturer, the software developer, or the physician alone. For example, a recent study showed that when presented with images of lymph node biopsies for detecting metastatic cancer, AI was better at finding hard-to-detect cases, while the human physician was better at rejecting false positives (correctly ruling out a cancer diagnosis). When the human and AI skills were combined, correct metastatic cancer detection reached 99.5% accuracy. Another issue comes from the classic economic argument that strict liability can hamper innovation.
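The division of labour described in that study can be shown schematically. The four-patient cohort below is invented purely for illustration – it is not the study’s data – but it captures the pattern: the AI screens with high sensitivity, and the physician’s review of flagged cases removes false positives.

```python
# Toy cohort: (truly_has_cancer, ai_flags_cancer, physician_confirms_on_review)
cases = [
    (True,  True,  True),   # subtle metastasis: AI catches it, physician agrees
    (True,  True,  True),
    (False, True,  False),  # AI false positive, physician correctly rejects it
    (False, False, False),  # clearly healthy, never flagged
]

def ai_alone(case):
    """Diagnosis if the AI's flag is taken at face value."""
    return case[1]

def combined(case):
    """Team diagnosis: the physician reviews only AI-flagged cases and
    can veto false positives."""
    return case[1] and case[2]

ai_acc = sum(ai_alone(c) == c[0] for c in cases) / len(cases)
team_acc = sum(combined(c) == c[0] for c in cases) / len(cases)
print(ai_acc, team_acc)  # 0.75 1.0
```

The liability puzzle follows directly: in the team arrangement, neither the software’s flag nor the physician’s review alone determines the outcome, so neither party’s conduct alone explains an error.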

AI does not easily fit into any existing private law regime for compensating the patients who will inevitably suffer harm from the small but unavoidable number of errors. Another possibility would be to remove AI from the private law system and create dedicated compensation schemes. Like AI, vaccines are the utilitarian choice due to their low complication rates, and they too do not easily fit into negligence-based regimes. For this reason, the United States has a National Vaccine Injury Compensation Program and Quebec has a similar provincial program.

AI, along with personalized big data, is projected to be the next disruptive technology that will transform medicine by allowing physicians to see into the future. With great computing power comes great responsibility to develop the right legal and regulatory “scripts” for the implementation of AI in health care.

Anastasia Greenberg is a second-year student in the B.C.L/LL.B. program at McGill University’s Faculty of Law and is the Executive Online Editor of the McGill Journal of Law and Health. Anastasia holds a PhD in Neuroscience from the University of Alberta.

From Lab to Court: Neuroscientific Evidence and Legal Decisions in Disorders of Consciousness and Beyond

Contributed by Dr. Anastasia Greenberg

For many, the term “vegetative state” brings to mind the American case of Terri Schiavo and the decade-long legal battle (1998-2005) surrounding her “right to die”. Terri sustained serious brain damage in 1990 following a cardiac arrest, which led to an eventual diagnosis of a persistent vegetative state. Terri’s husband fought for the right to remove her feeding tube, while her parents were desperate to keep her alive, believing she was conscious. Ultimately, Terri’s artificial life support was withdrawn in 2005, stirring an ongoing debate on the difficult ethical and legal implications of similar cases. Progress in neuroscience offers hope of answering key questions about brain and behaviour, with direct relevance for the legislature and the courtroom.

Disorders of Consciousness

The vegetative state can be defined as “wakefulness without awareness”: patients show normal sleep-wake cycles (unlike a coma, which is analogous to a deep sleep) but without any evidence of purposeful behaviour connected to awareness of the self or the environment. While wakefulness is straightforward to detect based on sustained eye opening and specific electroencephalogram (EEG) activity, the existence of awareness poses a much more complicated question.

In order to measure consciousness or awareness, we rely on behavioural evidence of “command following” as a proxy to make inferences about mental states. For example, locked-in syndrome patients have lost almost all ability to make voluntary movements but retain the ability to respond to “yes” or “no” questions by moving their eyes or eyelids in a consistent manner. This residual ability to form purposeful behaviour leaves no question that the patient is indeed conscious.

Advances in medical science have changed our understanding of consciousness in patients in a “vegetative” state. || (Source: Flickr // Presidencia de la República Mexicana)

Unfortunately, the difficulty with vegetative state patients is that they do not show any such meaningful behaviour or evidence of language comprehension. These patients will stare into space, move their eyes in an inconsistent manner, and may even burst into laughter or tears; however, none of these behaviours are linked to environmental stimuli.

For a long time, it was believed that such patients were completely unconscious. However, in the last decade this orthodox notion has faced serious scrutiny, regarding at least some of these patients, due in large part to the work of Canadian neuroscientist Dr. Adrian Owen from the University of Western Ontario.

Neural Activity as A Proxy for Behaviour

Dr. Owen’s research method allows certain patients who are labelled as vegetative to communicate solely by modulating their brain activity, recorded using functional magnetic resonance imaging (fMRI). fMRI makes inferences about brain activity indirectly by measuring blood flow, which is temporally linked to neural activity in that recently active cells require a fresh supply of oxygenated blood. This allows scientists to gauge, with high spatial resolution, which parts of the brain are involved in various cognitive tasks.

In a notable study, Dr. Owen’s team asked “vegetative state” patients in the fMRI scanner to imagine playing tennis or to imagine walking around their house from room to room. When healthy participants are asked to perform this same task, imagining playing tennis produces activation in a part of the brain called the supplementary motor area (SMA), while imagining walking around the house activates the parahippocampal cortices (PPA), which are involved in real and imagined spatial navigation.

Remarkably, a portion of vegetative state patients (17%), diagnosed based on internationally recognized behavioural standards, showed consistent SMA activity when instructed to imagine playing tennis and PPA activity when instructed to imagine walking around the house. Even more remarkably, these patients were then able to use the two kinds of imagery to respond “yes” or “no” to questions – with 100 percent accuracy. Using their imagination, this select group of vegetative state patients responded correctly to questions about their own name, their parents’ names, the current year, and so forth.
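The communication scheme amounts to a simple decision rule layered on top of the fMRI analysis. The sketch below is a hypothetical simplification – real studies compare activation statistically across many scan volumes, and the numeric activation values and margin here are invented:

```python
def decode_answer(sma_activation, ppa_activation, margin=1.0):
    """Hypothetical decoding rule for the imagery protocol: the patient is
    told 'imagine tennis for yes, imagine walking through your house for no'.
    Stronger supplementary motor area (SMA) signal -> 'yes'; stronger
    parahippocampal (PPA) signal -> 'no'; too close to call -> undetermined."""
    if sma_activation - ppa_activation > margin:
        return "yes"
    if ppa_activation - sma_activation > margin:
        return "no"
    return "undetermined"

print(decode_answer(sma_activation=4.2, ppa_activation=1.1))  # yes
print(decode_answer(sma_activation=1.0, ppa_activation=3.5))  # no
```

Note the third outcome: a weak or ambiguous signal yields “undetermined”, not “no” – which is precisely the false-negative problem discussed below for the remaining patients.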

Ethical Issues

These findings make legal characterizations pertaining to the decision to withdraw nutrition and hydration even more complicated. In a personal communication, Dr. Owen mentioned that one such patient was asked whether he wished to continue living. He responded: “yes”. This is exciting news in the context of legal decision-making; perhaps we could simply ask the fMRI-responsive patients to decide their own fate.

fMRI scans allow doctors to prompt and make inferences about neural activity in patients in a “vegetative” state, in some cases enabling a limited channel of communication. || (Source: Flickr // NIH Image Gallery)

But what can be said for the remaining 83% of patients? Can we conclude that they are simply not conscious, and thus truly fit their derogatory label of “vegetative”? The problem with such a conclusion is one of false negatives. When someone consistently “responds” to high-level questions with their brain activity, we can be sure of their consciousness – arguably to the same extent as someone who is saying “yes” and “no” in plain English (or French).

However, when a vegetative patient fails to show any meaningful fMRI responses, we cannot be certain that they are not conscious. Consider, for example, patients who have lost function in their auditory cortex and thus cannot hear the task instructions or the questions – not to mention the many more nuanced neural complications that may prevent successful performance despite consciousness.

Legal Applications for Neuroscience Data

Dr. Owen’s work has received enormous media attention and, most relevant to the legal context, he recently submitted an affidavit that was admitted into evidence by the Supreme Court of British Columbia (BC) in Ng v Ng (2013). Kenny Ng was involved in a motor vehicle collision that left him in a minimally conscious state (higher functioning than vegetative) from 2005 onward. Kenny’s wife, who was entitled to give substitute consent for Kenny under BC’s Patients Property Act (PPA) and Health Care (Consent) and Care Facility (Admission) Act (HCCFA), decided to take Kenny off life support despite opposition from his siblings.

In a personal communication, Dr. Owen mentioned that one such patient was asked whether he wished to continue living. He responded: “yes”.


Dr. Owen’s affidavit could not speak specifically to Kenny’s case, given that Kenny never participated in any studies by Dr. Owen’s team. However, it suggested that Kenny could potentially fall into the category of those with awareness and was a good candidate for further study. Ultimately, though, the court ruled in favour of Kenny’s wife, since she held the decision-making authority pursuant to the legislation and since the removal of feeding was found to be reasonable given the available medical evidence supporting Kenny’s poor clinical prognosis. The court had no legal mechanism by which to order that Kenny be tested by Dr. Owen, a neuroscientist, over the recommendations of Kenny’s team of medical doctors.

An Eye to the Future

The potential applications of neuroscientific evidence in courtrooms and in end-of-life legislation are exponentially increasing. Publications in the developing study of neuroscience and law, coined “neurolaw”, have spiked since 2006. Both neuroscientists and legal scholars express optimism, but they also emphasize erring on the side of caution when admitting flashy neuroscience into court. While the direct legal relevance of Dr. Owen’s work for use in a courtroom setting is persuasive, it also presents many opportunities for abuse, or innocent misinterpretation, of neuroscientific information.

US courts have admitted brain scans (including fMRI) into evidence in criminal cases involving insanity defenses (called defense of mental disorder in Canada), as well as highly controversial fMRI lie-detection evidence. In Canada, fMRI data has not yet seen its day in court and may raise serious Charter issues in relation to brain privacy. Dr. Owen’s affidavit in Ng v Ng is one of only two Canadian cases to ever mention fMRI in more than an incidental way. In a controversial ruling in Italy, a court reduced a sentence for murder after being presented with neuroscientific evidence in the form of brain scans and genetic evidence that suggested links to poor impulse control.

CourtLegislators and the courts will have to grapple with the risks and benefits of allowing the adducement of neuroscientific evidence before a judge. || (Source: Flickr // Jordan Schulz)

A deep understanding of neuroscientific technology and methodology is invaluable in drawing valid conclusions based on the data presented. A jury may interpret a colourful fMRI image as analogous to an X-ray – being able to “see” brain activity – when in fact the image is created through a series of inferential steps involving complicated statistical analyses performed on the data. These steps are peppered with human decisions about which statistical thresholds are to be used, which behavioural conditions should be compared, and so forth. Concerns over “overclaim syndrome” relate to the persuasive “wow” factor neuroscientific evidence evokes. In one study, mock jurors were more likely to give a verdict of “not guilty” if a defense of mental disorder was presented along with MRI images.
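The role of the analyst’s threshold choice can be shown in miniature. The voxel statistics below are invented for illustration – the point is only that the colourful “blobs” a jury sees are whichever voxels survived a humanly chosen cut-off:

```python
# Hypothetical per-voxel t-statistics from an fMRI contrast
voxel_t_stats = [0.5, 1.2, 2.8, 3.4, 1.9, 4.1, 2.2]

def active_voxels(t_stats, threshold):
    """Indices of voxels that survive the chosen statistical threshold —
    these are the only ones that get coloured in on the final image."""
    return [i for i, t in enumerate(t_stats) if t > threshold]

print(active_voxels(voxel_t_stats, threshold=2.0))  # [2, 3, 5, 6]
print(active_voxels(voxel_t_stats, threshold=3.0))  # [3, 5] — same data, smaller "activation"
```

Identical data, two different pictures of the brain “lighting up” – a useful caution against treating an fMRI image as a photograph of thought.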

Neuroscientific evidence also has the potential to influence end-of-life legislation, such as BC’s PPA and HCCFA that were used to transfer consent to Kenny’s wife, by requiring neuroscientific interventions before transferring consent. Currently, however, such a provision can only exist in the parliament of dreams, as neuroscientific tests of consciousness are far from routine procedures.

Neuroscience and law have begun to converge, developing the field of neurolaw, with international neurolaw conferences and societies bringing scholars and practitioners from both disciplines together to explore their mutual interests. Professor Henry T. Greely of Stanford Law School predicts that neuroscience will revolutionize the law: while the consequences of this neurolaw revolution carry serious risks, a future that offers a “window into the mind” may prove more conducive to justice. For those conscious patients trapped behind a “vegetative” label, neuroscientific evidence may provide sufficient weight to tip the scales of justice.

Dr. Greenberg holds a PhD in Neuroscience from the University of Alberta, and recently began studies at the McGill Faculty of Law.