Women4AI Daring Circle Seminar: Working Towards Inclusive AI

3 March, 2023

Written by Tess Buckley

The Setting

Over 70 women tuned in for the latest seminar with the Women4AI Daring Circle, a working group facilitated by the Women’s Forum for the Economy and Society. The circle focuses on designing Artificial Intelligence (AI) in a way that drives inclusion, both in its development and its application.

Sophie Lambin, CEO and Founder of Kite Insights, introduced the event, followed by two presentations: the first delivered by Caroline Lair, CEO of The Good AI and cofounder of WomeninAI, and the second by Elinor Wahal, a European digital policy professional who advises market-leading companies on their public policy strategies as a consultant at FTI. These were followed by a presentation of a working paper authored by Gina Neff and Clementine Collett, from the Oxford Internet Institute and the Minderoo Centre for Technology and Democracy, titled ‘Three Implementation Gaps to Harnessing Inclusive AI in Organisations’. We concluded with a Q&A session in breakout rooms, allowing for discussion and a closer connection with the speakers; there was a refreshingly genuine interest and shared curiosity amongst everyone.

The Goal

Women4AI is committed to creating an environment where all women help empower the development of beneficial AI, and AI empowers women. The aspiration of this symbiotic relationship was evident in the support and structure of the seminar event. Women are firmly integrating themselves and younger generations within the cutting-edge of AI and other STEM research spaces. Working groups like Women4AI inspire continuous positive transformations of the ecosystem through research, community collaboration and encouraging companies to commit to inclusive AI. The next phase of AI needs to be one where the technology is designed, developed and adopted by teams from diverse backgrounds and is appropriately governed to ensure AI technology alleviates rather than amplifies existing disparities.

Introduction: Sophie Lambin, CEO and Founder of Kite Insights and Editorial Partner of Women’s Forum for the Economy and Society

Lambin delivers a brief but impassioned introduction, with the key message encapsulated in a simple quote: “If we do not prioritize inclusion, the default will be exclusion.” She defines inclusive AI, for the Women4AI Daring Circle, as systems that consider marginalised groups and address the implications of bias in AI.

She reminds us that AI is not a new topic but one that already pervades our present; we can hardly comprehend how much it shapes our daily decisions. Lambin concludes by acknowledging that we sit in an era of great potential for AI, which makes considering it both exciting and daunting. She urges us to place intention and inclusion at the forefront.

Presentation 1: Caroline Lair, CEO and Founder of TheGoodAI and Cofounder of WomeninAI

After leaving the tech industry due to a lack of purpose, Caroline Lair joined Snips.AI, where she discovered the potential of AI for good. She went on to found The Good AI and now spends her time empowering companies and talent using AI to address sustainability challenges across industries.

The key stream of conversation: how can we leverage the power of AI to drive net zero?

Lair lists ways to leverage AI’s capabilities for sustainability:

  • Help identify patterns and predict how we can handle a transition to a more sustainable world.
  • Facilitate creativity, pushing us to think outside the box in ways we wouldn’t otherwise.
  • Act as a catalyst for sustainable change.
  • Implement AI tools and technologies to decrease CO2 emissions (e.g., optimising freight transport, enabling predictive maintenance, reducing food waste, and carbon accounting).
  • Assist in merging sustainability with design and provide different scenarios to design better products (generative design).

Notably, she shared that ICT currently accounts for between 5% and 9% of total electricity consumption and is projected to account for more than 20% by 2030, driven in part by the extensive use of deep learning, which is CO2-intensive. Lair lists ways that tech innovators can decrease the environmental cost of AI:

  • Consider the location of your cloud.
  • Consider your choice of training model (transfer learning to reduce training time).
  • Consider your data requirements.
  • Consider reporting ESG data.
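Two of these considerations, cloud location and choice of training approach, can be made concrete with a back-of-the-envelope emissions estimate. The sketch below is illustrative only: the GPU power draw, PUE (data-centre overhead factor), GPU-hours and grid carbon intensities are assumed figures, not numbers from the seminar.

```python
def training_co2_kg(gpu_hours: float, gpu_power_kw: float,
                    pue: float, grid_gco2_per_kwh: float) -> float:
    """Rough estimate of training emissions in kg of CO2.

    gpu_hours          -- total GPU-hours spent training
    gpu_power_kw       -- average draw per GPU in kW (assumed figure)
    pue                -- Power Usage Effectiveness, data-centre overhead multiplier
    grid_gco2_per_kwh  -- carbon intensity of the local grid in gCO2 per kWh
    """
    energy_kwh = gpu_hours * gpu_power_kw * pue
    return energy_kwh * grid_gco2_per_kwh / 1000.0

# Same training job in two hypothetical cloud regions: a coal-heavy grid
# versus a low-carbon one. Only the grid intensity changes.
coal_grid = training_co2_kg(1000, 0.3, 1.5, 700)   # 315.0 kg CO2
clean_grid = training_co2_kg(1000, 0.3, 1.5, 30)   #  13.5 kg CO2

# Transfer learning: fine-tuning for a tenth of the GPU-hours cuts
# emissions proportionally on the same grid.
fine_tune = training_co2_kg(100, 0.3, 1.5, 700)    #  31.5 kg CO2
```

Even with rough inputs, the point of the list above comes through: where a model is trained and how long it is trained for can change its footprint by an order of magnitude.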

It’s worth noting that these considerations are all measured within the EthicsGrade model, which focuses on Corporate Digital Responsibility (CDR) and digital risk. Reporting such ESG data would significantly improve transparency around how companies’ use of technology affects our environment.

Presentation 2: Elinor Wahal, Tech and Digital Health Policy at FTI Consulting

Wahal advises market-leading companies, primarily in pharma and tech, on their public policy strategies at the EU and national levels. It is through her work and her passion for AI quality that she became well-versed in women’s health data. Her presentation focused on exactly that, with a clear outline: part one on data and women’s health, and part two on the role of regulation.

Part one: Data and women's health

Wahal began by suggesting that AI, when implemented correctly, can help address unmet health needs and fill the demand for medical support. She shared the example of fertility tracking data proving useful in achieving a successful pregnancy; this data can also be repurposed by others for further research and innovation. When fertility data was described as useful, one of the first things that came to my mind was the surge of women in the US deleting period-tracking apps to protect their data privacy in a post-Roe v Wade world. Though tracking fertility data may prove medically useful, I think it’s important to remain aware that technologies storing such sensitive data would need ever more stringent privacy and security measures to ensure the data could never be weaponized.

She continues by evaluating women’s caring roles in society. Sadly, care work is not evenly spread: women often manage not only their own health information but also that of those under their care, such as children or elderly family members. Wahal presents digital solutions that decrease this burden by letting women access the historical health data of those they care for and share it easily with healthcare professionals. One point I considered here was whether the introduction and usefulness of this technology should be gendered at all. Using health data to enable more effective caregiving should be framed as a useful tool for everybody, and we should avoid imbuing new digital technologies with old conceptual frameworks of social roles and stereotypes.

Part two: The role of regulation

“The one thing they (AI systems) all have in common: data” and this is what Wahal believes we should aim to protect. The role of regulation in women’s health data is to protect patients' privacy and fundamental rights while ensuring trust in healthcare applications. She points to two flagship legislative proposals in digital health:

  • The AI Act: the first legislation of its kind, it includes specific regulations that consider AI systems for healthcare purposes.
  • The European Health Data Space (EHDS), which is currently considering:

      o the reuse of aggregated and anonymised data, which has huge potential for research and representation;
      o online EU electronic health data repositories that are easy to access and available to providers (2025).

Working Paper: Gina Neff, Executive Director at the Minderoo Centre for Technology and Democracy, and Clementine Collett, Doctoral Researcher on AI and Gender at the University of Oxford

Gina and Clementine present a working paper titled “Three Implementation Gaps to Harnessing Inclusive AI in Organisations”, a collaboration between the OII, the Minderoo Centre for Technology and Democracy, the Women’s Forum for the Economy and Society, and PwC.

The key questions of the paper – with a focus on SMEs – were:

  • how do companies understand the challenges of AI?
  • how do companies prioritise resources to achieve inclusive AI?

The key argument

  • Companies have the desire to create inclusive AI but are often at a loss on how to convert this desire into action efficiently and effectively.
  • There are three key implementation gaps that arise when companies have difficulty navigating the space: engagement, translation, and dialogue.

The three implementation gaps

  • The engagement gap: if companies do engage with issues such as bias and discrimination in their organisation, they do so at a surface level and ineffectively.
  • The translation gap: an inability to translate principles and ideals into action and practice.
  • The dialogue gap: organisations fail to effectively communicate to and facilitate dialogue between all stakeholders of the technology.

The key findings

  • If organisations are going to create inclusive AI, they will have to be an inclusive organisation more broadly.
  • The person leading the company matters: their goals and values are often reflected in the thoroughness and urgency of the company’s inclusive AI strategy.
  • Inclusive AI is a much more intensely organisational practice than either researcher had assumed prior to the study.
  • An overwhelming number of companies have difficulty framing questions about AI risk and ethics, and this is a large part of the problem of implementing responsible AI frameworks.
  • Many of the companies saw innovation and inclusion as mutually exclusive. They should instead be considered intertwined: a more inclusive company fosters better innovation.


The event reinvigorated me and (hopefully) the seventy other women in attendance. Working consistently to understand the risks, ethics and governance of AI can be overwhelming and emotionally draining. Gatherings like this reaffirmed why I entered the field of AI ethics and showcased how many women and non-binary people in the industry are ready to empower the incoming talent pipeline while we work together to solve one of the most pressing issues of our time: AI.

Sophie Lambin ends the call on a ‘high’, asking all presenters: “What excites you about the future of inclusive AI?”

  • Caroline Lair: The possibility to shape a world with a positive and smart use of AI. Especially through its limitless applications in science, where AI has already enabled otherwise unobtainable discoveries. The more women in this mission for ‘good’ AI the better.
  • Elinor Wahal: The opportunity to leverage AI tools that allow us to do things our bodies or minds otherwise couldn’t, and the impact that ethical innovation, guided by policy, will have on how we engage with care, pharma and health data.
  • Gina Neff: It wasn’t long ago that the FAccT conference proceedings were released, giving us some of the best work on AI systems to date (e.g., identifying harmful racial discrimination in facial recognition systems). We must consistently remind ourselves how fast the ‘good stuff’ is moving.
  • Clementine Collett: This is a pivotal opportunity. One must consider the bad and the good, but what an exciting time we find ourselves in. Let us not see innovation and inclusion as opposing concepts. AI regulation is incoming, and we can put in place the right solutions to harness the potential of AI to address many issues.

For further reading: The Effect of AI on the Working Lives of Women