The Athens Roundtable: The EU AI Act and the Rule of Law
2 December, 2022
Written by Tess Buckley
I’m sitting in the corner of a café at the V&A in London about to log onto a webinar, which will transport me into a room in the company of many of the EU’s leading authorities in AI governance. I’m still at the café sipping on my mocha, but I’m now also sat around The Athens Roundtable, a transnational attempt at steering the conversations surrounding the new AI act and, most importantly, how to enforce it. It’s beautiful and encouraging to see over 600 people in the chat introducing themselves, from Hiroyuki Sanbe of Japan to Gillian Hadfield of Toronto, my childhood home.
The event is an attempt to approach democracy from its bedrock: the rule of law, which, in its simplest form, asserts that no one stands above it, no matter the country or community. It is an approach to governance from a holistic perspective looking at hard and soft law alongside its application and enforcement. The discussion is intended to focus on the lifecycle and ecosystem surrounding the AI legislation and its effect on investors, users, and other stakeholders.
Keynote: Vilas Dhar
Vilas Dhar delivers an impactful keynote that pairs a call to action with anecdotes from his 20 years in this area. In his first few years, conversations on AI regulation were shared only out of personal interest in small lunchrooms. Quite the contrast to today, as he sits delivering a keynote in the European Parliament.
He frames the conversation with the following: let us treat AI governance as an opportunity to reconceptualize our democratic models, let us widen the conversations to include perspectives and people outside of tech and policy, and let us place moral imperatives at the forefront of the dialogues…
An opportunity, that is, to improve upon today's system and reconceptualize new democratic models that leave behind the inequalities and injustices embedded in current social and economic structures. Questions of AI regulation and ethics must no longer be treated as merely academic. They must be elevated to properly examine the nature of our nation-state and place human impact at the fore. Dhar shuns any form of reactive procedure. This reminds me of a quote my grandpa always shares with me: look at everything as an opportunity, not an obligation. That framing always shifts my mindset to a positive and active one, and this keynote is stimulating a similar sentiment.
Dhar is a self-titled technology nerd and policy wonk. He admits that many of the people here are the same. This sameness he declares a failure, not because those in the room are not qualified, but because they hold too similar a set of tools and frameworks for understanding AI. He rightly reminds us that the dangers of AI and digital technologies are not just relevant to tech geeks and data scientists, and that this must be reflected in an effort to expand the conversational space. Conversations with artists, philosophers, and anthropologists that move beyond policy and into social impact are his second request.
Thirdly, Dhar distinguishes this conversation from past revolutions. He states that we have dealt with new forms of advancement and mentions the industrial revolution as an example. However, whereas in the past we have often relied on market forces, this new conversation surrounding AI governance has a moral imperative: ‘the world we look to in the future brings forward economic forces but also starts from a bedrock of inclusivity.’ He asks that we (humans) be the inspirations for the technology rather than the often-reversed priority.
He closes by sharing a (rationally) optimistic reminder that technology can transform people’s lives for the better by creating opportunities for human connection and innovation.
Now we have two key bedrocks to refer to: the rule of law and inclusivity.
Panel: 'The path towards an enforceable EU AI Act.'
Introductions: The panel is chaired by Nicolas Moës, Director of European AI Governance at The Future Society, who is placed in dialogue with six heavyweights:
- Eva Kaili, Vice President, European Parliament
- Ján Hargaš, Deputy Minister, Ministry of Investments, Regional Development, and Informatization of the Slovak Republic
- Eva Maydell, Member of the European Parliament
- Juraj Čorba, Digital Regulation & Governance Expert, Ministry of Investments, Regional Development, and Informatization of the Slovak Republic
- Katherine Forrest, Partner, Cravath, Swaine & Moore LLP and former U.S. District Judge for the Southern District of New York
- Werner Stengg, Digital Expert, Cabinet of Executive Vice President Margrethe Vestager, European Commission
The first stream of conversation: what will the impact of the Act be abroad? Will it be complementary or inspiring?
Stengg sees the EU AI Act as a rule book, stating that if the globe multilaterally adopts it in its inventions, countries will as a result have a larger market in which to apply their technologies. He hopes to start with the issue of trust: we agree we want trustworthy AI, but how do we make that attainable? Wherever the conversation leads, he is hopeful for something practical that applies principles. Stengg believes the EU AI Act has the potential to become a global standard but urges that this should be the end result, not the initial goal.
Forrest has no doubt that the EU AI Act will impact the USA, but its adoption will be complicated, as the GDPR's was, by what is known as the Brussels effect: a unilateral regulatory globalisation caused by the EU imposing its laws beyond its own borders. Forrest believes the Act will result in companies complying through contractual arrangements and enforcement briefs. She is uncertain whether the US will follow with its own set of regulations, due to a few complexities in translation. Firstly, there is the problem of overcoming the bias in the USA towards adapting existing law. Broadly, Americans would much rather have flexible laws than consider additional legislation or acts. The American political system is also more fragmented, with many people focusing on the potential issues of AI in relation to its effect on their specific constituencies. She notes that it is difficult to know what we are regulating because of the pace of technological change: the kind of “AI” that exists today is not that which will exist tomorrow. Finally, there is an expense in implementing regulation, and governing bodies do not want to inhibit potential innovation or profit for America.
Čorba, of Slovakia, shares interesting points on the impact of the EU AI Act on member states and on potential inspiration from the financial sector. He notes that we need to make sure each member state's perspective is heard as we attempt to tackle the challenge of bringing these rules on AI into practice. When discussing existing frameworks on which to base the conversation, he focuses on the financial crash: “there is a lot of inspiration to be drawn from this evaluation that works.” Čorba is keen to point to the financial sector as a space with existing authorities that oversee the use of algorithms.
Maydell stresses the importance of making the EU AI Act realistic, with clear sector-specific guidelines. This comes as no surprise, because Maydell was a leader in the trilogues that asked what happens after legislation, and how companies will make sense of new international standards. She clearly states, “we cannot simply write something into existence”; we need to make sure the asks are enforceable and do not take skills for granted.
Kaili rounds off the first stream of conversation by leaning forward and directing the following at Forrest (the resident American): the USA should follow the EU AI Act. She highlights that it follows the OECD definition and that there are no APIs, so Americans can take it and stay aligned with the EU, which will allow for continuity and collaboration across borders. Finally, she notes what everyone on the panel has iterated: there is a need for competent authorities, agencies with the trained skills to understand how AI can create new challenges.
The second stream of conversation: what are the most crucial elements for enforcement of the Act?
Stengg calls for strong EU-level coordination, expert exchange of ideas and skills in algorithm transparency, and clear categories so that companies can comply with feasible and specific requirements.
Hargaš is focused on not over-legalizing the topic. He pleads for flexibility in the definition of high-risk AI to allow for future responses, as the AI we define today will have changed significantly in three years, when the law is put into practice. Hargaš also asks that we review our capabilities before deciding the form of the AI Act: it is important to confirm the EU has the capacity to check and assess compliance with new guidelines. There is already a shortage of skills, which leads to bottlenecks. He sees sandboxes, that is, opportunities to test innovations with real consumers, as necessary safe spaces for innovation, where people's mistakes can be supported and understood. This is where we can connect with and learn from fintech.
Maydell is clear on the most crucial element for enforcement of the EU AI Act: strong EU-level coordination. Unlike Stengg, Maydell sees sandboxes as key to enhancing innovation and nurturing AI; whilst they may not be necessary for enforcement, they are essential for a thriving digital ecosystem. Kaili agrees, recalling that most conversations she has about AI start with “I am afraid that… I fear that…”. We must keep acts complementary to the people innovating, allowing room for testing with the intention of respecting values.
Forrest says that “none of the above” are a focus for the USA, instead adding the need for public and private “buy-in”, manifesting in an acceptance of what needs to be done and an agreement to do it. She wants enforcement through protection from friction.
The event gathered an impressive number of key stakeholders and thought leaders: The Future Society, ELONtech, UNESCO, the OECD, the EU Parliament, the IEEE, and Amazon Web Services, to name a few. As someone still at the beginning of her career, you can get star-struck hearing people you’ve admired for years engaging with each other on topics you know to be of great importance for your generation.
It often feels like many brilliant people are fighting for an answer rather than collaborating to give one, promising progress without knowing what that looks like themselves. I understand change starts with our mindset, but it must ultimately manifest in action. It is frustrating to keep waiting for action that was necessary years ago as it shuffles its way through the bureaucratic maze of politics and the structure of law.
It was helpful to see a panel focused entirely on how to actually implement laws once they are written into existence. I smiled when I heard the importance of including more types of people in discussions such as this, those who aren’t ‘policy wonks or tech nerds.’ I agreed with the consensus that an Act sounding nice doesn’t mean we have the skillsets to confirm compliance with it. I grinned internally at the consistent nudge by European members for the US to follow suit. All in all, it felt like an opportunity to get everyone on the same page, fill the tanks for 2023, and nod heads at the difficulty of regulating AI, something that is elusive and reactive in nature.
Hopefully, as attendees head into 2023, one thing will have become very clear: the continued growth of AI capabilities and their increasing deployment in societies worldwide sound a call for the protection of democracies and shared fundamental human rights. Organizations are responding with efforts to set rules and standards for development and deployment. We are now in the phase of action-oriented dialogue; next up is the action itself. It is just a matter of time before the policy arrives.