Ask the Experts: The "Tech Back Your Bits" Panel
- Adrian de Leon

- Jan 31
On 13 July, bleepDigital held an event - Tech Back Your Bits - in collaboration with the London Vagina Museum and with support from the Unlimited Foundation and UCL. The day began with open-to-the-public workshops to raise awareness of the risks of emerging technologies for marginalised communities. You can read more about the workshops in this previously published article.
The second half of the event saw an exciting panel of experts from across academia, policy and the private sector discuss some of the most pressing topics surrounding AI and the impact of other emerging technologies on society. The discussion included a Q&A session, with questions from the audience answered by the panel. For this event we were delighted to welcome:
Dr Xiao Liu is a medical doctor with a background in clinical, industry and policy work, and is currently an Associate Professor in Artificial Intelligence and Digital Health at the University of Birmingham.
Dr Shakir Mohamed is a Research Director at Google DeepMind, overseeing several strands of work whose overarching theme is the development and implementation of new technologies with a social purpose at their core.
Dr Nikolaos Koukopoulos is a Research Fellow at UCL Computer Science and part of the university’s Gender and Tech team, investigating the intersection of technology, security and gender.
Dr Maryam Mehrnezhad is an Associate Professor in the Information Security Department at Royal Holloway, University of London, and a systems security researcher. Dr Mehrnezhad and her team run attacks on devices and applications to help companies fix security flaws. A particular focus of the team has been researching and highlighting the impact of these flaws on marginalised communities, including gender and LGBTQ+ issues.
The panel was led by Dr Isabel Straw, Director of bleepDigital, an AI and cybersecurity researcher and an Emergency Doctor in the NHS.

The aim of the panel was to bring together expert voices conducting some of the leading research on the potential harms of emerging technologies across various settings, including healthcare, domestic abuse, and the marginalisation of communities through bias in Artificial Intelligence (AI) algorithms. An overarching theme of the panel was concern about the safety of AI and its potential negative impact on society.
The threat of unregulated AI is a widely documented concern in the media, particularly since the advent of Large Language Model (LLM) powered technologies such as OpenAI’s ChatGPT. According to Dr Mohamed, “the kind of discussion around AI that we’re having today, so much of it is around ChatGPT, and that is what the imagination of what AI is; but I really want to emphasise that AI can be so many more kinds of things”. In this, Dr Mohamed espoused an optimistic sentiment about the potential benefits of AI across a variety of areas, such as education.
According to Dr Mohamed, AI “can be the technology that helps us address a key question in education. One of the key things we know in education, is this famous curve, it’s called Bloom’s curve, which tried to assess that in the 80s, the biggest intervention you could do is one-on-one tutoring for people. And if you could do one-on-one tutoring, you would literally change the educational outcomes of those people because it was more effective than large classes”.
Nonetheless, despite the optimism, Dr Mohamed was keen to emphasise that even in the positive applications of AI, discrimination can occur as a counterpoint. “So much of education or AI is really about this question of representation. Representation is: what is in the data? What is not in the data, what kind of problems do they solve? And what kind of problems do we choose not to solve? And I think really this is where there's so much opportunity for harm; and we have endless numbers of examples today from facial recognition, that does not work for women, for Black people, and Black women in particular.”
In essence, data is representation and what is represented in this data is vital to building a fair and equitable society. A significant concern is the role of bias and its reification in the construction and deployment of algorithms. Dr Mohamed said that “bias has the element of erasing culture, of amplifying misinformation, of amplifying stereotypes; of really, not representing the kind of breadth [of society] and also keeping us stuck in some way to older values when our societies have maybe moved on”.

Existing societal bias being reflected in the algorithms and data that power AI is a theme that was also highlighted by Dr Mehrnezhad, who shared a concern regarding the data sets that inform the performance of everyday devices used by women. As Dr Mehrnezhad pointed out, “despite many decades of research trying to close this gap, we still know that the female body is not really studied as much as it deserves”. This existing under-representation of women in medical research is exacerbated by the fact that “AI based products protect their data sets and models as the intellectual property of the organisation, [which] closes any opportunities that will allow us to do some sort of auditing to see if this data set or model has any sort of bias or discrimination, against a particular group or person”.
Another common theme of these emerging technologies is how rapidly they are evolving: with each novel iteration of a technology, new threats and concerns arise alongside it. The latest technology to raise concerns is generative AI (e.g., ChatGPT, Gemini, Claude), where LLMs can produce content (audio, text, video) from only a few prompts. One example of a novel emerging threat is image-based misuse and abuse, which overwhelmingly targets women. Dr Koukopoulos raised this as his biggest concern when reflecting on the AI optimism generally attached to LLMs and other generative AI technologies. He said that whilst “everyone going into this field is wanting to do something positive”, there doesn’t seem to be “a lot of thought about: ’how are we going to do this in a safe way?’”, and the issue is that because “people can get a software that generates images and then manipulate them so that they generate images of child abuse or images of naked women without their consent and so on - I think it's a major problem”.
What is novel about these emerging technologies is how easy it is for everyday users to manipulate them to create harmful images, and for Dr Koukopoulos the concern is “that things, over time, are getting easier in terms of producing these images; it's getting easier and also it's becoming much easier to affect more people in a way”. The compounding issue is that “what we tend to see a lot is that the effects of any technology problems are much worse for marginalised groups. And also the fact that the more characteristics of marginalisation someone has the greater the impact might be”. This reality highlights the importance of applying an intersectional lens to the impact of novel technologies, and of placing more weight on the voices of people who have been, and continue to be, marginalised.

Despite these valid concerns, Dr Liu was keen to reassure the audience that there is real positive potential in AI, particularly in healthcare; as she shared, "I wouldn't boycott working in this field entirely”. Dr Liu pointed out that “there are really exciting examples of early detection of disease and prediction on deterioration that gives you those hours and days where you can potentially intervene”. For example, “last year, a colleague at Moorfields Eye Hospital published a study [that demonstrated that] eye scans [using AI] can pick up signs of Parkinson's disease seven years before current diagnoses, which in a disease where early treatment is really vital, can have huge potential. The next step is to validate the findings, but the potential is hugely exciting”.
Dr Liu’s optimism about the impact of AI on healthcare was paired with caution, as she said: “in terms of cautions, there are many cautions, right? One is that AI keeps getting used as a political football and taking up a lot of resources and creating a lot of distractions by our policy makers and people who hold funds that can resource tackling real problems today”.
She continued, “within healthcare, we see huge amounts of funding being poured into the latest, most exciting AI technology or startup”. The issue with this focus on funding startups, with investors attempting to get ahead of the curve in the search for the next ‘unicorn’, is that these startups “barely have any evidence behind it”. For Dr Liu, the focus should be elsewhere: “we have real problems that really need resourcing, that's not to say we shouldn't invest in the future; of course we should, but there's a tendency, for some reason, for everyone to just forget what we normally do. You know, forget that we practise evidence-based medicine, forget that we need to test and validate and that we need actual evidence that these things work in improving patient outcomes before being willing to invest heavily in it.”

The questions continued to flow in from the audience, and the next one asked the panel what advice they would give to someone’s daughter to remain safe with the advent of these new technologies. Dr Koukopoulos was first to answer, shifting the focus: “I think my first approach to this would be probably advising my son rather than my daughter. I feel like there are more discussions we need to have with men and boys about these issues rather than women because what we tend to see in research is that women and girls are most often, the victim, or survivors, of these issues rather than the perpetrators.”
Whilst raising awareness and ensuring that women and girls approach online services and digital devices safely is important, according to Dr Koukopoulos, “the onus of changing how we approach these issues should be with the perpetrators”. At the heart of these technological harms is the notion of consent, and more precisely, a lack of understanding surrounding consent. “There are a lot of misconceptions about technology and, essentially, there isn't an understanding of consent, which I feel is really important in the same way that we're talking very openly now about sexual consent. I think it's important to also start talking about technological consent”. In fact, according to Dr Koukopoulos, “there's this perception that whatever you're able to do in a device, you're free to do it essentially”.
In everyday situations, in both personal and healthcare settings, we are seeing an increase in the misuse, or abuse, of everyday technologies to perpetrate harm, especially in situations of domestic abuse. To counter these threats, a movement for change begins in education, as Dr Koukopoulos said: “it all starts from education for me and awareness of those issues and kind of start thinking about, perhaps a digital citizenship”. So, the question that remains is “how do we grow the citizens that we have?”.
Dr Mehrnezhad, as a cybersecurity expert, felt that particularly for younger children, the question could be redirected to the parents: rather than asking what advice daughters should receive, what advice should parents receive? In Dr Mehrnezhad’s case, “what I do for her is that I'll constantly monitor her use of technology, and try to educate her from a very young age on what is harmful and what is not harmful”. This educational piece is so important that, she proposed, “I had this idea that we have to insert cyber security and online privacy in primary school teaching and education. It's that important now, because there are so many creative ways that inappropriate content could reach your child, that you wouldn't even think. For example, a link on YouTube Kids that has already been vetted or moderated, will still have some very subtle violence”.
Technology is evolving at a frantic pace, with new benefits and new harms emerging more rapidly than ever before. As Dr Mehrnezhad said, the online security space is threatening because “I, as a cyber security expert, feel behind already, and sometimes even powerless”. She continued, “I'm like, if I as a cybersecurity person feel powerless; how can all citizens, let alone kids or all, all the adults feel empowered”. The complexity of this reality raises a point that had been touched on already: “I really don't think, us, giving advice has a great value here. It's much more: having proper policies, proper enforcement, proper education”. A real difference will emerge if all stakeholders are brought together to create educational and policy campaigns that keep safety and equity at the heart of any technological revolution.

With the conversation in full flow, the panel then tackled questions surrounding the role of AI and other emerging technologies in healthcare settings, a timely conversation that bleepDigital has highlighted in a number of articles on the increasing use of medical devices, AI-powered diagnostics and their risks. Yet, how many of these technologies are currently active? According to Dr Liu, “there are now over 800 FDA approved AI medical devices in the US. The FDA is doing a really good job of actually updating that number regularly, so that number is from a couple of months ago. In the EU and in the UK, we don't really know because we don't have a good database for it, but it's probably in the hundreds. And in the NHS, the College of Radiologists have recently started curating a database of products that are actually in deployment within the NHS and the number's like 40 something”.
The potential for the wide application of AI technologies in healthcare settings is high, but in which ways is technology being used at the moment? According to Dr Liu, “most applications within healthcare are things like diagnostic tools, triage tools and lots of imaging based: things like picking up lung nodules that look suspicious, picking up signs of breast cancer on mammograms”. The immediate benefit of AI in healthcare is its ability to handle a “high workflow, [with] lots of image analysis, and also lots of data readily available to train AI algorithms. So for example, in breast cancer screening; diabetic retinopathy screening. So those are the ones that I would say are probably most prevalent”.
A common theme of the event returned: the impact of rapidly evolving technologies and their applications in healthcare settings, most recently the proliferation of LLM-powered devices. Dr Liu said, “we've got kind of new categories, which don't really belong in the medical device category yet, and that's the kind of ChatGPT, LLMs type applications”. A significant new challenge of these technologies is the “ambiguity as to how we would classify these general purpose models as medical devices, [relating to whether] they have the ability of doing medical things”. For example, “if you ask it to diagnose, to give a diagnosis, they will do it, but because it is a device for general purpose, it doesn’t fall under the remit of medical device regulation”, she continued. This reality “means we don't have our usual mechanisms for safety”, namely, “reassurance and quality management and post-market surveillance and all these things and having an evidence base behind it. So that's an interesting area that we're all grappling with at the moment”.
Interestingly, there is another element to consider, which Dr Liu describes as the “direct to consumer category”: the “thousands of medical apps on the App Store and Google Play Store” that “will do a whole range of things from mental health chatbots to cycle trackers, period trackers”. The issue is that “many of them are not registered medical devices; so again, no third parties validated the evidence behind them. And that's a very interesting area for me because that's completely outside the realms of medical device regulation and it really is a bit of a wild west in terms of what you can put on the App Store”.
As technology proliferates, it begins to impact our everyday lives, and the more multi-purpose or encompassing these technologies are, the more of our everyday lives become vulnerable to technological harms. This is where the conversation took us next: how everyday apps can expose users to dangers, and how, as is often the case, these harms are more likely to impact marginalised communities. One example is that of fertility or cycle tracking apps, which are easily accessible and widely used.

Dr Mehrnezhad and her team have been researching this topic since 2019, a pertinent task, especially since the overturning of Roe v Wade in the United States of America, where there have been “so many conversations around: What is this data? How does it work? Where does it go? Who has access to this? Do I give meaningful consent?”. As part of their research, the team have been studying one of the largest data sets on IoT (Internet of Things) devices that are advertised as Femtech, mainly used as “sexual and reproductive health fertility trackers [and] sex toys”. Again reverting back to the importance of keeping regulation up to date, an issue we are seeing is that “none of these devices are branded as medical devices”, creating a wild west in which these apps and technologies are left with under-par regulatory frameworks.
This lack of regulatory oversight leaves the door open for abuse, particularly when, as Dr Mehrnezhad discovered, these technologies “collect so much data about every little intimate aspect of your life, including the amount of sex activities that you have on a daily basis”. This includes “medical data, including images of body parts, or medical scans; all of this is uploaded in their datasets”. Threats to personal security are not confined to data sets or software, but extend to the hardware of these devices. As part of their research, the team looked “at the hardware… only to find out that there are no security mechanisms to protect the data that is being sent from your IoT device”. Moreover, “you can break that connection and manipulate that”, which is especially concerning given “the number of trackers that are tracking the user even before the privacy notice is shown”. A state of affairs that is “really shocking for just tracking your cycle”, which makes one wonder “how much technology do I need to track a body cycle?”.
A compounding concern regarding a user’s data is the impact of the ‘private sector vs public good’ debate, in which concerns for equity and privacy are often obfuscated by a concern for profitability. Dr Mehrnezhad raised the point that “the moment [your data] lands in the company data set, we really don't know what happens [with it]”, such as, “if they're selling it to other companies, if the data is leaked, or if it's stolen”. In any case, “the user loses complete agency over their own data”. From this, the discussion turned to the wider, more structural question of the system we, as citizens, live in, and within which technology is deployed. Modern society is marked by grave disparities in income, leading to growing inequalities and an exacerbation of harms experienced by marginalised communities. A question was raised regarding the dangers of a profit-driven society, underpinned by capitalist ideologies.
Dr Mohamed raised the notion of accountability and considered this a “challenging” topic, because “the capitalist system we are in actually works very well at gearing funding together, directing it with purpose, and creating organisations” - outcomes that “are not necessarily things we can get done in the public sector”. Whilst innovation and the directing of funding are benefits of the private sector, Dr Mohamed believed that the role of “the public sector is reasoning about public goods, common goods; how it is to share these kinds of questions of collective privacy, collective risk, collective ownership, collective values”. The question has always been, and will continue to be, “how [to] actually marry, these kinds of two things”, and this is “actually very challenging”; yet Dr Mohamed was also keen to reiterate that “we have seen so many amazing innovations come because we were able to do this kind of partnership”.
Indeed, Dr Mohamed continued, “the foundation of a good model” is one “where we can use the best of what allows the capital markets to create investment, to create teams, to elevate economic growth. But also, to ensure that there is real genuine return ownership of those kinds of public goods that have to come back. I think if we took that seriously, we could do that”. Alongside this optimism, there continued to be an acknowledgement of the impact of the current system on marginalised communities, both here and abroad.
According to Dr Mohamed, “we are entering, in my most critical view, a kind of new era of Empire in some sense because the concept of ‘Empire’ was that we can take resources from places that don't have them to bring them to the Metropole, and then underinvest in those places”. This idea of neo-Empire reflects what we are seeing in the ‘Green Tech’ revolution, where, as Dr Mohamed says, “we were the ones who polluted the whole atmosphere in the Industrial Revolution, but we are the ones who have the most advanced research. We are more ahead. We have all the research in creating wind turbines and so eventually we get to solve that for all the places who don't have them, we take the profit and they have to deal with [the consequences]”. A solution comes, again, from the regulatory frameworks and systems we have in place to uphold equity and the wider good, where, according to Dr Mohamed, “we need to create certain kinds of norms”, and where “you create new institutions [so that] you can create the right kind of norms, [and] the right kind of norms [then] create the right kinds of institutions”.
The idea of creating partnerships between different members of our society, including the public and private sectors, was a theme supported by Dr Koukopoulos: “we need to see more collaborations between the private sector and public sector”, making sure they are “on the same page about things”. In fact, according to Dr Koukopoulos, “as societies become more sensitive to these issues, then the nature of profit might change as well”, as, for example, “people will start thinking about safety when they purchase things”. Moreover, another important notion, one also mentioned earlier in the panel, is the threat that, as Dr Koukopoulos said, as “we have more and more with technology”, we are seeing a greater “loss of agency; a sense that we're not quite using the device, but that it is the technology that is increasingly using us now - and we need to change that somehow”.

We must not allow technology, and the companies that promote it, to use us, or to undo the progress we have made in securing greater equity and representation for marginalised communities. This is a task that must be carried out across society, and particularly in the healthcare sector, where everyday patient rights and representation are threatened by the quest for progress.
With this event, our aim was to elucidate some of the most pressing concerns of the sector and help raise awareness of topics and impacts that are too often under-represented in the general conversations surrounding AI and medical devices. To those who attended, we hope you enjoyed the event as much as we did; and for those who couldn’t make it, we hope to see you at our next event.
If you want to keep up to date with all things bleepDigital, then sign up to our newsletter here.