OPINION

Real-world Impacts of ChatGPT on Google, Employment, and Schooling

4 January, 2023

Written by Tess Buckley

ChatGPT is the diamond in this season’s promenade of AI systems. Starting 2023 strong in the AI space, it seems to be permeating every conversation: at work, online, at home and at dinner parties. There is an overwhelming amount of news coverage and dialogue surrounding ChatGPT. The stories are often interesting, but the coverage is oversaturated and entangled with AI hype. When something ‘new’ arrives on the AI scene, it is important to figure out its real-world impacts. Three particularly significant impacts of ChatGPT are emerging: on Google, on employment and on schooling. This article explores all three.

Impact on Google

Google is by now well established as the dominant search engine for internet users, who have spent years habituated to turning automatically to Google.com. It is therefore hard to believe that Bing’s attempt to integrate ChatGPT will shift the tides in the long term. This is only comically confirmed by the fact that the #1 search on Bing is ‘Google’… ouch.

In 2019, Microsoft placed a heavy bet on OpenAI’s success when it invested $1 billion in OpenAI’s development of beneficial AGI. The partnership focused on developing a hardware and software platform within Microsoft Azure that would scale to AGI. Through this partnership, Microsoft became the exclusive cloud provider for OpenAI. The gamble paid off: Microsoft’s investment has so far culminated in the development of DALL-E and GPT-3, the latter of which Microsoft exclusively licenses.

Microsoft is incorporating ChatGPT into Bing, but this new integration is not a magic bullet. In fact, I suspect it won’t be threatening in the short term, because Google’s ad-driven model depends on people clicking links in search results rather than being handed a summarised answer. The current search interface allows for a human sense-check, whereas the ChatGPT interface does not facilitate such critical engagement. I would therefore be curious to see how this new integration works in practice. Firstly, how does it generate revenue for Bing? Secondly, how can ChatGPT be integrated in a way that upholds users’ ability to critically engage with search results?

One of the key problems with generative AI as it currently stands is its propensity to churn out misinformation and make nonsense sound convincing. Humans often mistake confidence for accuracy, or fall prey to automation bias. A model adept at sounding confident but less adept at screening for truth therefore promises to create problems. In this sense, ChatGPT cannot replace Google search, because it removes the stage of the process where a human user sense-checks the results page. We should understand ChatGPT as a complex algorithm capable of generating meaningful, yet often inaccurate, sentences. ChatGPT should be treated like a toy, not a tool.

In short, Microsoft’s AI integration is not a threat to Google. The integration may have sparked headlines and (only superficially) ‘challenged’ Google’s dominance, but Google has stated it won’t launch anything similar soon, due to the potential reputational risk posed by bias and factuality issues in current AI chatbots. Protection from reputational damage creates value and deserves weight in such a decision; it is exactly what we grade!

If you are a fan of Microsoft, don’t be too quick to conclude that this marks the beginning of its dominance as a search engine provider. If you’re team Google, don’t worry: Microsoft doesn’t have the best track record with acquisitions. The ChatGPT bandwagon will stall in 2023, and Microsoft jumping the gun might give companies like Google an edge in the end.

Impact on Employment

Automation of tasks that were previously accomplished by humans has been a topic of significant interest in economics due to the impact it may have on labor demand.

Pop culture (I, Robot, The Matrix, Ex Machina) has done a great job of perpetuating social fears surrounding AI, most of which are mystical. However, tangible risks for humanity can be seen in automation-spurred job loss. To decrease spending after its multi-billion-dollar investment in OpenAI, Microsoft is set to cut 10,000 jobs. That is an overwhelming number of individuals who will be put out of work at Microsoft for the sake of an ‘AI revolution’. Alongside the race to develop the flashiest AI models, a new and related race has emerged: the race to displace the most human employees. A few days after Microsoft announced its job cuts, Google (Alphabet) one-upped them by announcing a cut of 12,000. Amazon soon followed, announcing plans to cut more than 18,000 jobs, the largest number in the firm’s history. This is not a financially prosperous time for tech companies. That, paired with an increased capability to automate inexpensively, is not good news for job security in tech.

Organisations such as the WEF have stressed that while jobs will be lost to AI in the short term, it will lead to job growth in the long term. I can’t help but follow up with the question: job growth for whom? Upskilling is expensive and time-intensive, something already-marginalised groups often cannot afford, thus widening the gap between rich and poor. I applaud organisations such as Google, Microsoft and Salesforce for pledging funds to create new programmes to upskill their workforces. However, many people are not currently part of the workforce, work for an organisation that is unwilling to upskill its employees, or work in an industry that is yet to digitalise. While other groups continue to increase their digital skills, already low-skilled workers may remain untapped and trapped in low-paid work.

Undoubtedly, AI will impact workflows and serve as a useful tool. Ultimately, however, humans will need to remain in the loop. Clickbait stories warning that “journalists could be out of a job in just a few years” follow a trend in reporting that has been around for over 100 years; automation has historically targeted routine tasks that are easy to replicate. Even President Kennedy complained that too many workers were being thrown out of work because of automation. ChatGPT will not wipe out entire industries, but rather change the roles that exist within them.

Impact on Schooling

Plagiarism is the act of presenting someone else’s work as your own without their consent, by incorporating it into your work without credit. Educators are raising questions about how they will be able to detect whether a student has used ChatGPT to write their assignments. Now that ChatGPT is out of the box, we must consider how to distinguish its output from a human’s. GPTZero was created by Princeton computer science student Edward Tian to detect text generated by ChatGPT and combat its potential misuse. Using AI both to write for us and to detect when content has been written by an AI seems to complicate, rather than simplify, many of the human processes that preceded digital technology.
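GPTZero is publicly described as scoring text on measures such as perplexity (how predictable a passage is to a language model) and burstiness (how much that predictability varies between sentences). As a minimal, hypothetical sketch of the perplexity half of that idea only, assuming the open-source Hugging Face transformers library and the public GPT-2 model rather than anything GPTZero actually uses:

# Illustrative perplexity scoring; not GPTZero's actual method.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Lower perplexity means the model finds the text more predictable,
    # which detectors treat as (weak) evidence of machine authorship.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return float(torch.exp(outputs.loss))

print(round(perplexity("The effort heuristic suggests people judge quality by the effort invested."), 1))

A single perplexity number is, of course, a very weak signal; that fragility is part of why detection complicates, rather than simplifies, the processes described above.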

Although AI does not guarantee students straight As, it represents the next step in our evolution from manual computation to technology-aided information recall, much as we moved from mental arithmetic to calculators and from remembering facts to Googling them. At the core of this debate is the relationship between effort and quality, or value. The effort heuristic, described by Justin Kruger and colleagues, suggests that people tend to judge quality by the amount of effort dedicated to a piece of work. Consider performance-enhancing drugs in sport: if an athlete wins a medal but is discovered to have been doping, the achievement is devalued, and they are considered to have cheated and are disqualified. From Kruger’s point of view, using ChatGPT to enhance academic performance is a comparable offence.

If the use of ChatGPT or any other natural language processing model for academic performance enhancement cannot be properly tracked, it could become impossible to distinguish those algorithmising their creative output from those who are not, and therefore impossible to assess students fairly. How does one level the playing field when potentially untraceable natural language processing is used to generate work? Is it best to penalise the students who are caught using this new tool, or to encourage everyone to use it? It is crucial that educators find a way to ensure fairness and transparency around the use of ChatGPT.

Someone opting to integrate ChatGPT into education is Ethan Mollick, Associate Professor at The Wharton School. He has included a new section in his syllabus on AI policy, encouraging his students to engage with and build skills using AI. Mollick places an emphasis on penalising ‘minimum effort prompts’, which brings us back to, and subverts, Kruger’s effort heuristic and the relationship between effort and value covered earlier in this article. Perhaps teachers and institutions will accept the use of AI ‘tools’ as a new normal. It could present an opportunity for education to rethink how students are assessed. From this perspective, chatbots aren’t destroying the college essay, but putting pressure on educational institutions to innovate and rethink the way we structure and assess learning.

Despite being conceived circa 370 BC, Socrates’ take on the invention of writing and its relation to memory, as recounted in Plato’s Phaedrus, remains pertinent when recontextualised in the digital age:

"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem (275b) to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

EthicsGrade Benchmarks

EthicsGrade provides ESG benchmarks for investors to better understand the digital risk and corporate digital responsibility of their portfolios. Click the links below to access the benchmark scores of some of the companies relevant to this article:

Google (consumer devices)

Microsoft (consumer devices)

Microsoft (Azure)