May’s AI News: Key Moments and Why They Matter
2 May, 2023
Big Tech chief executives meet with US Vice President Kamala Harris at the White House
Summary: The chief executives of Alphabet (Google), Microsoft, OpenAI and Anthropic met with VP Kamala Harris and other top administration officials to discuss key AI issues. Joe Biden told the CEOs: “Companies like yours must make sure their products are safe before making them available to the public.” The meeting comes alongside several White House initiatives aimed at governing AI, including public assessments of existing generative AI systems, new policies on the use of AI in public services (OMB), and new investments to power responsible American AI research and development (the National Science Foundation announced $140 million in funding to launch seven National AI Research Institutes).
Why it matters: As we wait for hard law that can warn Big Tech of the impact of their developments and provide some means to hold them accountable, a meeting such as this is reassuring. It is important to remember that the key motive for companies is usually profit. The soft power of politicians is to counteract this drive for profit with ‘warnings’ of how companies can infringe on civil rights and privacy while eroding public faith in democracy. The Biden administration confronting the rapidly expanding use of AI demonstrates the US government’s awareness of, and sensitivity to, the dangers the technology poses to public safety, privacy and democracy. Although there is a limit to what the White House can do without enforceable legislation, the Vice President telling top tech CEOs that they have the responsibility to ensure their products are safe is a great start.
Unions: 150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting
Summary: On May 1st, more than 150 African workers whose labor supports ChatGPT, TikTok and Facebook voted to unionize at a landmark meeting in Nairobi. All of those in the union are current or former workers employed by third-party outsourcing companies to provide content moderation services for AI tools used by OpenAI and Meta. Many have suffered the mental toll of content moderation work, often developing PTSD, even though their roles are some of the lowest-paid in the global tech industry (as little as $1.50 per hour). The effort to establish the ‘Content Moderators Union’ began in 2019, and the union is now working to sue both Facebook and Sama in a Nairobi court.
Why it Matters: These AI workers are invisible, underpaid, and the backbone of the tech in all our pockets. This will be the first of many unions to come; despite the mental toll of the work, the workers are given few resources for therapy or the prevention of mental harm. For reference, their pay (the lowest in the tech industry) of $1.50 per hour is approximately 203 Kenyan Shillings, and with a PPP (purchasing power parity) conversion factor of 109.64 that is equivalent to earning roughly $1.80 per hour in the US. These 150 advocates are set to be pioneers not only for Kenyan content moderation but for the role globally; their advocacy for change will permeate into other regions.
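The purchasing-power comparison above can be reproduced as a quick calculation. This is a minimal sketch: the market exchange rate of ~135.3 KES/USD is an assumption inferred from the article’s 203 KES figure, and the PPP conversion factor of 109.64 KES per international dollar is the one quoted above.

```python
# Sketch of the wage comparison quoted above (assumed inputs, see lead-in).
hourly_usd = 1.50                  # reported hourly pay in USD
market_rate_kes_per_usd = 135.3    # assumed market rate implied by the 203 KES figure
ppp_kes_per_intl_usd = 109.64      # PPP conversion factor quoted in the article

hourly_kes = hourly_usd * market_rate_kes_per_usd        # local-currency wage, ~203 KES
ppp_equivalent_usd = hourly_kes / ppp_kes_per_intl_usd   # wage in US purchasing-power terms

print(round(hourly_kes), round(ppp_equivalent_usd, 2))   # 203 1.85
```

The PPP-adjusted figure comes out at about $1.85, which the article rounds down to roughly $1.80 per hour.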
The “Godfather of AI” leaves Google to speak about his fears of AI…
Summary: Geoffrey Hinton, an award-winning computer scientist known as the godfather of AI, has left Google so he can warn freely about the dangers of the company’s new technological pursuits. His exit from Google caused a massive news storm. He is a rockstar in the world of AI, so stating his regret for his life’s work is a significant statement on the direction of current innovation.
Why It Matters: It drew attention to the responsible AI, AI governance and AI ethics space. But why didn’t Hinton speak up years ago, when his Google colleagues (e.g. Timnit Gebru) were ringing the same bells and risking far more than he is now?
There are narratives of an underbelly to Hinton’s decision to leave Google: Hinton was indeed a ‘godfather’ of Google’s AI development, but of neural networks as the basis of AI rather than of the Transformer model Google published or its further developments (e.g. BERT). Some believe Hinton’s departure could also stem from not being offered a key role in Google’s GenAI future after the merger of Google Brain and DeepMind. The safety concerns he is now publicizing could also serve as a graceful exit plan and a launchpad into his new chapter of safety advocacy, or into a new company.
Whatever the reasons for Hinton’s departure, it matters because it has placed ‘AI risks’ in the spotlight (where they have long belonged but have been swept under the rug for years). Even Gary Marcus, a significant voice in the AI space long known for his intellectual differences with Hinton, chimed in to say he deeply respects the choice to go public with these concerns.
In short: it is great that Hinton went public with his concerns about AI, though one may wish he had supported AI ethicists years earlier, when he had the power as a big voice at Google to really do something about it. It is harder to stand up when you’re in the belly of the beast, but it is more respectable, and more powerful, to do the right thing when you have something on the line.
Norway’s $1.4tn Wealth Fund Calls for AI Governance
Summary: The world’s largest sovereign wealth fund has set a precedent by calling for state regulation of AI. CEO Nicolai Tangen of the $1.4tn oil fund (which owns roughly 1.5% of every listed company globally) will set guidelines for how the 9,000 companies it invests in should use AI ethically.
Why it matters: This is a great example of how investors have the power to shape company action. Tangen is setting guidelines on the ‘ethical’ use of AI for the companies the fund invests in, and by doing so he will not only influence the practices of companies in the fund’s portfolio but set a precedent for other investors, who will hopefully do the same with their funds. This also puts more pressure on authorities and governments to regulate: Tangen noted he is not seeing a “pipeline of regulation coming”, which is disconcerting given that, in his words, “there is not enough regulation” of this fast-growing sector and that he “wanted new rules to govern how AI is used.” For the last three years, we at EthicsGrade have supported investors in viewing their portfolios through an ethical lens, judging companies on their corporate digital responsibility.
As we begin the third year of our annual report on the digital responsibility of the largest listed companies in Switzerland with the Ethos Foundation, we welcome this “prioritization of AI governance.” We partner with the Ethos Foundation to examine how the 48 companies listed on the SMI Expanded manage their digital challenges in seven specific areas: governance, transparency, data protection, use of AI, management of sensitive activities, social impact and environmental impact. The results show that while progress has been made compared to last year, transparency is still sorely lacking. Notably, 43 of the 48 companies we engaged in this project in 2021 strengthened their commitment to responsible digitalisation in 2022 and thereby measurably reduced their digital risks. We are excited to see what happens in 2023 as we begin rescoring companies.
Sunak launches £100m expert Taskforce to accelerate the UK’s AI sector
Summary: The UK Prime Minister and Technology Secretary announced £100 million in start-up funding for a Taskforce responsible for accelerating the UK’s capability in the rapidly emerging AI industry. With AI set to contribute billions to the economy, the PM has made it clear that the Taskforce will work to establish the UK as a world leader in foundation models and act as a global standard bearer for AI safety.
Why it matters: Yet another example of government focus on AI, showcasing the technology’s rise to prominence in recent months after ChatGPT went mainstream. However, unlike the US effort, this ‘task force’ is not geared toward AI ethics but rather toward helping “cement the UK’s position as a science and technology superpower by 2030.” The quote showcases Sunak’s focus on the economic benefit of AI and is echoed by Science, Innovation and Technology Secretary Michelle Donelan’s emphasis on safety and ‘future-proofing our economy’. The government needs to balance this narrative with an emphasis on the importance of ethics, governance and regulation.
Keep Your Ears Out: Audio Misinformation is on the rise
Summary: There has been a rise in audio deepfakes, specifically in voice technology. With this increase in manipulated audio, police forces and governments are finding such fakes challenging to distinguish from genuine recordings, as audio is easily recorded, remixed and recontextualized. Misinformers are layering old and out-of-context audio and recreating voices to share false narratives with the public.
Why it matters: We have seen the impacts of visual disinformation in deepfakes and of written misinformation in the polarization of politics, so what’s next? Audio misinformation. Given the wave of narratives we have been experiencing around visual deepfakes, IP copyright cases and the disruption of the creative industry for visual artists, music is right around the corner. Why is this worrisome? What does it mean to live in a world where we cannot trust whether the content we consume is real or fake? A large issue with visual deepfakes is that they were used to spread false information and manipulate public opinion. Even if a piece of audio is genuine, the distrust born of audio misinformation would prevent individuals from actually believing what they hear. This leads to the epistemic threat that ‘audio deepfakes will interfere with our ability to acquire knowledge about the world by consuming media.’
Criminals have also been using voice technologies to impersonate CEOs and children, synthesizing messages in the hope of prompting a transfer of funds. These fraudulent deepfake voices can now be synthesized into natural-sounding speech identical to the original voice they are trained on. These new AI tools are disrupting the industry and causing widespread confusion for customers while equipping cyber-criminals.
CEO of OpenAI asks Congress for Guardrails for AI…
Summary: The CEO of OpenAI, Sam Altman, went to Congress to discuss rules for AI, and he is certainly not the first CEO of an AI company to ask for guardrails. Many who are deep into AI research and development recognize the profoundly disruptive nature of this technology on society. They are asking for guidance to level the playing field and prevent widespread harm.
Why it matters: That the CEO of one of the leading AI companies was willing to go before Congress to plead for these ‘guardrails’ showcases that companies want AI governance. There are, of course, existing and emerging laws on AI that codify varying degrees of these obligations, alongside standards. There have been risk-based approaches to governing high-impact systems: testing or auditing of high-risk systems to mitigate risks, and appropriate oversight and monitoring of those systems to avoid harm. Yet one of the key contentions of the AI Act is: what constitutes ‘high-risk’? A further question is the extent and nature of oversight. Is the government best placed to regulate AI, or should companies take responsibility and be entrusted to regulate themselves? Who is responsible for AI?