
Patient Rights are Human Rights in the Time of AI: an interview with Andrea Downing from the Light Collective (US)

Adrian de Leon

Updated: Jul 1, 2024

In this month’s blog post, bleepDigital is looking at the impact of AI, med tech and the lack of regulation on patient rights, at a time when rapid technological development threatens doctor-patient relationships. Adrian de Leon, Head of Research at bleepDigital, spoke to Andrea Downing, co-founder of the US-based non-profit the Light Collective, to find out more about their work and the importance of human rights and legislation in ensuring the privacy of patient data.


bleepDigital was founded upon the premise that rapid technological change brings innovation but also threatens to negatively impact both our healthcare systems and the doctor-patient relationship. This is why we are committed to developing and delivering training on biotech syndromes for hospital staff, and to raising awareness among the general public of the societal and individual implications of emerging medical technologies (including, but not limited to, AI). Amid the proliferation of AI-powered assistants and the ground-breaking discoveries in fields such as genetics and neurological disease, one aspect of technology's growing influence in medicine remains under-reported: patients' right to privacy.


Proponents of deeper integration of data, machine learning and algorithmic functions in medicine, whether in diagnostics, results analysis or even patient consultations, point to its clear advantages. We are told that technology will help alleviate the pressures on medical infrastructure, reduce waiting lists and costs for governments concerned about an ageing population, and lighten the workload of medical professionals still recovering from the COVID-19 pandemic.


Technological progress, we are told, will bring increased efficiency, reduce decision-making time and bypass the dangers of human error. The foundation of this progress is data, a lot of data, and this data is our data. After all, machine learning and Large Language Models (LLMs) rely on patient data to power their algorithms and produce the results that health professionals use to inform their decision-making. This patient data is very valuable, as demonstrated by the recent decision by NHS England to award Palantir, a US technology firm whose early backers included In-Q-Tel, the CIA's venture capital arm, a contract worth £480m to manage its patient data.


One may ask: why is our data so important? What are the current dangers linked to our data and potential privacy breaches? And what are our rights amidst all of this rapid change?

 

For this article, I had the pleasure of speaking to Andrea Downing, President and Co-Founder of the Light Collective. The mission of the US-based organisation is to advance the collective rights, interests and voices of patient communities, so that those participating in health technologies are safe from exploitation and harm. Andrea's drive to found and lead the Light Collective stemmed from her own experience of privacy and health data breaches.


In 2018, the Cambridge Analytica scandal broke: the personal information of Facebook users, including likes, group memberships and wider activity on the platform, was harvested by the company and used by political groups for Donald Trump's presidential campaign in the US and for the Leave campaign in the UK. At the heart of the scandal was the third-party sharing, and breach of privacy, of individuals' Facebook data to influence political outcomes. As a patient advocate with a background in technology, Andrea wanted to find out whether a similar breach could occur with her health data.


“After reading some of the more technical blogs about what happened with Cambridge Analytica, I [asked] myself a simple question: ‘if you can scrape profiles at the user-level API, what can you do with [Facebook] groups?’”, said Andrea.

To answer her question, she began ‘red-teaming’ Facebook groups. Red-teaming describes the ethical hacking of platforms and organisations to reveal security and data-protection flaws. Despite possessing little cybersecurity experience, Andrea found something quite concerning.


"I took some of my findings to a couple of folks who were very experienced in healthcare cybersecurity. And that was the start of a very scary period where I had found a flaw in Facebook's group architecture that could programmatically scrape all closed groups: their real names, along with a health fact about them”, said Andrea. 

What Andrea had uncovered was that members of Facebook groups, who had joined a support group on the understanding that their information and activity would be private, were in fact in danger of having their data scraped by outside groups or unauthorised third parties. As Andrea shares, “it meant that you could, from outside of the groups, scrape the user interface with real names, and the group health condition, and then match that up with people's physical locations, phone numbers, emails, all kinds of stuff. In the world of healthcare, that's PHI (Protected Health Information)”. This security flaw revealed the real possibility that individuals' data could be used against them as part of a campaign targeting a person or a group.

Andrea's discovery was only one part of the tale. Spurred on by what she had uncovered, Andrea continued her research and found something else. “The second phase of our research was around how companies could use pixels or cross-site trackers and tracking technologies on what are called HIPAA covered entities.” Put simply, HIPAA covered entities are institutions or organisations involved in the transmission of protected health information. In partnership with the Duke Clinical Research Institute, Andrea carried out a study that highlighted how common marketing tools share sensitive health data with Meta without patient consent. A follow-up investigation by The Markup found that “30 of the top 100 hospitals in the USA were leaking data from their patient portals to Meta and other third parties.”


Essentially, Andrea uncovered that patient data was being sold to third parties without patients' knowledge or consent: a clear breach of the right to privacy. This discovery shines a light on the importance of understanding how patient data is used in the technologies that are proliferating through our healthcare systems and determining patient outcomes. Moreover, patient data, circulating in a poorly regulated environment, has fuelled both the scaling-up of big pharma and the user-targeted advertising that contributes to the spread of misinformation.


This concern for patients' data and its potential misuse by companies or other nefarious actors is at the very centre of the Light Collective's latest report: AI Rights for Patients. It is a living document whose aim is to address what they consider a glaring oversight: the exclusion of patient perspectives from the design and governance of AI solutions. As Andrea said:


“When we are now in this wave of AI, and thinking about how predictive models scoop up data to make predictions about people, we're asking: how can that be used for a patient's benefit or how can it be used to create harmful products or even weapons?”. She continues, “as a patient advocate, being somebody who's trying to protect these communities, I'm just trying to build capacity within grassroots organisations to really look at this and take a human rights approach to finding gaps in policy and making sure that we are well positioned to establish our voices as AI policy gets made”.

Helping communities to build their capacity is front and centre of AI Rights for Patients. Written in a clear and accessible manner, the document sets out seven key tenets for AI and data to be properly regulated and beneficial to all stakeholders, not just shareholders:

  1. Patient-led governance;

  2. Independent duty to patients;

  3. Transparency;

  4. Self-determination;

  5. Identity security and privacy;

  6. Right of action;

  7. Shared benefit.

The aim in disseminating this document, which highlights the dangers of data misuse by big tech and medical companies, is to put power back in the hands of the patient and to reiterate the fiduciary duties of doctors and medical institutions. “The AI Rights Initiative came about due to the lack of patient voice, and the centrepiece of what we're saying is: we require independent duty to patients. And what we mean by that is, when we think about the law, fiduciary duty is a legal responsibility or real legal rights that allow a patient, when harm happens, to take real legal action, and to have somebody with a duty of care and loyalty.”


Unfortunately, as technology has rapidly proliferated in the US medical industry, bringing in billions of dollars, priority has been given to profit, companies and their shareholders above the safety and rights of civil society. As Andrea reminds us, the Cambridge Analytica scandal is a prime example of the dangers posed by unchecked companies and the negative impact they can have on society.


“We continue to hear a lot about how AI is going to replace this function or that function; but at the centre of it, doctors have that duty of care always to patients and if we’re removing that in any way and replacing it with a tech company, we have to realise - and we learned this from Cambridge Analytica - primary duty is to shareholders when you’re dealing with companies. So, if we don’t re-establish that relationship and [adopt] new legal duties independently from duty to shareholders, we are going to continue to see the same things that happened with Cambridge Analytica, only worse”, warns Andrea.

Unregulated markets and human rights

Disruption is a key mantra of big tech, digital companies and AI supporters; the implication is that technology is a force for good and that innovation will deliver a future that solves the problems of today. This philosophy is often accompanied by a distrust of, and at times a distaste for, regulation or interference with the free market. In theory, monitoring and regulation by governments or civil society disrupts the attainment of that better future. In practice, unregulated markets often fall into the trap of monopoly, where a few conglomerates or large companies hold the keys to supply, which often leads to poorer outcomes for individuals with little access to alternatives. As Andrea says, “we have a monopoly here in the US and monopolies are never a good thing”, yet “we're continuing to advocate here in the US that the industry ‘can regulate themselves’”.


The existing regulatory framework in the US isn’t equipped to deal with the resources and ability of large tech companies to evade accountability and responsibility for their actions. “There should be norms around how we use AI, but they're more like guidelines, they're more like voluntary. When we get into that place, we really can't have paths to accountability when patients experience harm. And we won't get into a place where we get ahead of the next threats because these companies, when they face the threat of liability, are going to fight back”, Andrea says.


Though Andrea is speaking from a US context, the increasing privatisation of the NHS and the growing influence of big-tech companies in our healthcare system mean that this lack of regulation, and the predisposition to protect market forces, threaten patient care and patient rights here in the United Kingdom too. The right to privacy, however, is a human right recognised and defended by a number of treaties at the international, regional and local level. Yet individuals know little about these rights, as humans and as patients, and this lack of clarity helps to compound the impunity of the large tech companies who may access and utilise your data.



Patient data falls under the ‘right to privacy’, which is protected by Article 17(1) of the International Covenant on Civil and Political Rights: “No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence”. This right is also enshrined in Article 8 of the European Convention on Human Rights and has been given effect in UK law by the Human Rights Act 1998, introduced under Tony Blair's government.


An explanatory document on the Human Rights Act provides information to the public to promote a better understanding of what is covered, and of the standards to which governments and institutions should be held accountable. It explains that the Human Rights Act is the law requiring staff in NHS services and local councils to respect and protect your human rights. In fact, it explicitly states that staff working for the NHS must do three things: respect, protect and fulfil your human rights. Protecting a person's right to privacy, and their information from intrusion, is therefore not only the professional duty of healthcare professionals but their legal duty.


What is currently omitted from the conversation around AI applications in healthcare, particularly around how our data is used, is what rights we have as patients and citizens, and which laws big-tech companies are permitted to infringe as they continue to exploit our information to feed their algorithms and Large Language Models, the defining technologies of this ‘revolution’. Despite this omission from public conversation, the idea that changes in medicine and technology may have adverse effects on patients' privacy and outcomes is not new.


In 1997, the Council of Europe, an institution that upholds human rights and the democratic process across Europe, introduced the Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine, also known as the Oviedo Convention. At the time, it was the first legally binding international text designed to preserve human dignity, rights and freedoms against the misuse of biological and medical advances. Its explicit aim is to ensure that the “interests of human beings must come before the interests of science or society”. An explanatory report to the convention states that “Science, with its new complexity and extensive ramifications, thus presents a dark side or a bright side according to how it is used”. This perfectly sums up where we find ourselves as we face the rapid, and unabated, proliferation of these new technologies.


For further reading and understanding of the potential impact of AI and new technologies on our healthcare systems and the patient-doctor relationship, a report by the Council of Europe sets out six key themes that civil society must address to ensure continued protection of patients:

  1. Inequality in access to high quality healthcare; 

  2. Transparency to health professionals and patients; 

  3. Risk of social bias in AI systems; 

  4. Dilution of the patient’s account of well-being; 

  5. Risk of automation bias, deskilling, and displaced liability; and

  6. Impact on the right to privacy.

These are the key themes that underpin the work of the Light Collective, and they form part of its aim to demystify the impact of technology on healthcare. As Andrea told us, “the uses or ways to weaponise a certain type of AI are going to depend on the technology, the way it's deployed, and the community that it is deployed upon.” In other words, we must contextualise our understanding of AI and raise awareness that, whilst the technology has great potential, it must be deployed within a system that regulates and upholds the privacy of patients.


Andrea likes to stress that this is about more than just data: it is the whole system that requires structural change to ensure we avoid worst-case scenarios, from commercial corruption to nefarious state (and non-state) actors exploiting our data.


"My challenge within my role at the Light Collective is helping a lot of patient communities who have been harmed, or whose trust has been broken, to understand that isn’t just about the money, it isn’t just about grabbing back your data, and sharing it with who you choose, this is a need for structural change in our legal system or at a global policy level. We’re not going to get ahead of this by just adopting a shiny new group support platform that has terms of services that can change at any time. We will get ourselves in the same repeated mistakes; when this happens the trust will be broken again, to the point where we won’t want to use technology anymore. We are getting to this point where some members of the community do not want to use technology at all… that’s how bad it has got."
