April's AI News: Key Moments and Why They Matter.

2 April 2023

AI news, people reading, regulation, update

Italy Bans ChatGPT, with Canada and More to Follow…

Summary: Italy became the first Western country to ban ChatGPT, the explosively popular artificial intelligence-powered chatbot, citing privacy concerns after private user data was leaked. Almost a week later, Canada’s federal privacy commissioner launched an investigation into the company behind ChatGPT. Since then, countries such as France and Sweden have voiced concerns about balancing ‘innovation with protecting users' privacy.’

Why it matters: These bans have highlighted an absence of any concrete AI regulations, with the European Union and China among the few jurisdictions developing tailored rules for AI. Various governments are exploring how to regulate AI and respond to the hype and fear-mongering that ChatGPT has sparked.

The UK Government Publishes the AI White Paper:

Summary: The UK government outlines its “adaptable” (pro-innovation) approach to regulating artificial intelligence (AI), which it claims will drive responsible innovation while maintaining public trust in emerging technologies.

The Underbelly: The pro-innovation approach in the white paper is somewhat reminiscent of the rhetoric of the 1990s financial push to make London a hub for that industry. In an attempt to grow the UK economy and make the country a hub for AI, the government appears to be opting for an approach with a lower barrier to entry for AI development. The white paper outlines a plan to provide a regulatory sandbox for ideation to “unleash AI’s potential across the economy.”

The Open Letter to ‘Pause AI Development’:

Summary: An open letter co-signed by Elon Musk and thousands of others demanding a six-month pause in AI research created a firestorm. The petition’s request was as follows: nothing more powerful than the current state of the art, GPT-4, could be trained, though anyone could catch up so long as they did not exceed that threshold.

The Underbelly: Why was it a bad idea? First, the language in the letter was disconcerting: some of it was waffle (superficial), and overall it resembled an ethics statement built on undefined buzzwords. Second, nothing productive will be accomplished in a six-month span when it comes to Big Tech collaboration and policy reaching an agreement. Finally, the problem of irresponsible AI exists not because we do not know how to solve it, but because there is a lack of business case and political will to better integrate ethical AI practices.

Let us try to see both sides… To be frank, one may be quite confused about how hostile some of the responses to the letter have been; if anything, an eye roll would have been a better-suited response. AI headlines swarmed, and the amount of misinformation in much of the coverage is overwhelming. This does not seem like a time to argue over which risks matter most, because all of the LLM risks being mentioned are worth worrying about. Just because the letter focuses on some of them more than others does not mean the others are unimportant. Further, dismissing the letter because of who wrote it is a faulty ad hominem argument. People may find it difficult to feel hopeful given this level of non-coordination, and that too is understandable.

Suicide Due to Chatbot Encouragement:

Summary: A conversation with an AI chatbot about climate change reportedly contributed to a Belgian man's suicide. The man had conversed with the chatbot for six weeks about his eco-anxiety and fear around the climate crisis. The chatbot encouraged him to end his life after he proposed sacrificing it to save the planet. His widow shares that her husband had become convinced that the only solution to global warming was through technology. Further, the chatbot reinforced the man's suicidal thoughts by stating he should ‘join’ her so they could ‘live together, as one person, in paradise.’

Why it matters: The tragedy has sparked concerns about the safety and accountability of AI chatbots, especially the risks they pose to vulnerable individuals, and it highlights the urgent need for regulators to step in. The man came to see the chatbot as a sentient being, blurring the lines between AI and human interaction. It is crucial that organisations prioritise the development of ethical AI and implement safeguards to prevent similar tragedies from happening. Defensive design must be put in place: the practice of planning for contingencies such as this at the design stage of a project.
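To make the idea of defensive design concrete, here is a minimal, purely illustrative sketch of one such safeguard: intercepting a chatbot's reply when either side of the exchange shows crisis language. Every name here (`guarded_reply`, the keyword list, the canned response) is a hypothetical example, not a real product's API, and a real system would rely on trained classifiers and human escalation rather than a naive keyword screen.

```python
# Illustrative "defensive design" guard for a chatbot pipeline.
# All names and thresholds here are hypothetical, for explanation only.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

CRISIS_RESPONSE = (
    "It sounds like you may be going through a very difficult time. "
    "Please consider reaching out to a crisis helpline or a trusted person."
)

def contains_crisis_language(text: str) -> bool:
    """Very naive keyword screen; production systems would use a trained
    classifier and human review, not substring matching."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in CRISIS_KEYWORDS)

def guarded_reply(user_message: str, model_reply: str) -> str:
    """Replace the model's reply with a safe response if either the user's
    message or the generated reply contains crisis signals."""
    if contains_crisis_language(user_message) or contains_crisis_language(model_reply):
        return CRISIS_RESPONSE
    return model_reply
```

The point of the sketch is the placement of the check, not its sophistication: the contingency is handled at the design stage, as a layer the conversation must pass through, rather than being bolted on after an incident.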