Early Lessons from Elon Musk’s New Twitter
Written by Luke Patterson
As with the 2016 introduction of the GDPR for privacy and data protection, the EU’s Digital Services Act and the incoming UK Online Safety Bill show that legislators are prioritising content moderation and transparency as the next frontiers of digital regulation. This would have been good news for Twitter two weeks ago, as arguably the industry forerunner in taking necessary steps toward substantive transparency and a promising approach to content moderation.
Where once an optimist might have claimed that Twitter was helping to capture our imagination for what best practice for social media companies might look like in the years ahead, the tone of Elon Musk’s takeover (so far) promises to capture our imagination very differently: by showing how a company seemingly woven into the cultural fabric of the digital age can crumble almost overnight.
Of course, it isn’t inevitable that Musk’s takeover spells the end for Twitter. He has a proven track record of incredible success with Tesla and SpaceX, punctuated by the occasional controversy. Perhaps early controversy is just part and parcel of what’s required to strip the company to its barebones and rebuild a good governance structure from the ground up.
This would be the optimist framing the conversation again. Completely stripping away the structures at Twitter which underpinned their promise in developing a well-governed social media space would be an awfully inefficient way of revamping their governance for the better. I wouldn’t demolish my house in order to build its extension (there’s a witty idea for a newspaper cartoonist somewhere in there).
Yet this is exactly what Musk has set about doing. Not only has he dissolved the entire Twitter board, he has also disbanded their ML Ethics, Transparency and Accountability team, firing all but one of its members. This comes at a time when Corporate Digital Responsibility is climbing the ESG agenda. This looks less like a move to build a bigger and better ethics framework for Twitter’s future than an overtly hostile declaration of an ideological overhaul of Twitter’s old guiding values and principles, which had just started to show their promise.
On the surface the Twitter-Musk saga looks to be a net negative for the shape of our digital future. Yet it has the potential to prove that the Big Tech behemoths are not as invincible or immovable as has seemed the case since the beginning of the social media explosion in the mid-2000s. If Twitter crumbles as a result of Musk’s obstinate refusal to recognise the value of an online space which is moderated for hateful discourse, or of truly transparent reporting, it shows at least two things: (1) that there is a social appetite for online platforms to embed principles of Corporate Digital Responsibility and Corporate Social Responsibility into their governance frameworks, and (2) that if they don’t, then there are alternative platforms users are willing and able to turn to.
Already former Twitter users have taken to platforms such as Discord and Mastodon in response to Musk’s takeover. If he’s started as he means to go on, then it shouldn’t be much longer before one or several new market leaders emerge, particularly to fill the significant gap left by Twitter as the largest social media platform for journalistic and political discourse.
Perhaps, then, in the spirit of old-fashioned free market capitalism, an ideal digital future isn’t just about the same old big names learning from controversy and gradually building towards socially and environmentally responsible governance, but more about the companies failing to comply with best practice losing their market position, and emerging competitors learning from their mistakes.
So, what was Twitter doing well before Musk took over, what has he ruined already, and where are the opportunities for Musk to contradict us all and prove that his Twitter is a beacon of best practice for the digital age? This might make useful reading for him, but also for the Chief Executives of the platforms looking to take Twitter’s place…
Moderating Content and Transparency Reporting
The role of Twitter in the events leading up to the Capitol Insurrection in January 2021 was the clearest expression of how insufficiently moderated online platforms can produce explosive consequences in the real world. It pushed Twitter to rework their digital strategy in order to properly moderate and transparently report on harmful online content.
Since then, Twitter had:
- updated its Transparency Centre to introduce new types of data which offer improved insights into the impact of its content moderation decisions.
- produced a standalone transparency blog detailing whether and how their ML recommender algorithms amplify political content.
- in May 2022 established a crisis misinformation policy seeking to elevate content which offers credible information and ensure that their algorithmic systems don’t help to amplify viral misinformation.
- committed to further improving their processes for labelling all automated accounts, and removing harmful bot accounts which contribute to the spread of spam and misinformation.
- introduced an automated Safety Mode for users to filter out hate speech and spam before it reaches their account.
Twitter’s content moderation and transparency projects certainly weren’t finished products. Our EthicsGrade research shows that there was still an overall insufficient level of transparency regarding Twitter’s corporate governance efforts. This was a problem compounded by the design of their website, which made separating their internal reporting from the content they host difficult and time-consuming. There was also a heated debate over whether their content moderation policies and algorithms were striking the right balance between protecting against hate speech, misinformation, and conspiracy on the one hand, and freedom of expression on the other.
A self-described ‘free speech absolutist’, Musk explained his takeover as motivated in large part by a commitment to uphold freedom of expression on the platform. Yet since then, he has farcically weaponised the big red content moderation button to ban actors’ accounts after they impersonated his profile in protest of his newly implemented pay-to-verify policy. This doesn’t seem very free-speechy to us.
Wherever you position yourself on the moderation vs. free speech debate, Musk getting rid of the ML Ethics, Transparency and Accountability (META) team was a mistake. Just because he may have felt that Twitter’s content moderation algorithms were imperfect doesn’t justify abandoning the project altogether (refer to my extension analogy). META were responsible for some of Twitter’s most noteworthy transparency developments, including a publication which announced that a formerly racially biased image-cropping feature had since been fixed. It’s both highly unusual and highly refreshing for a social media company to report so openly on their mistakes.
In the hands of their new owner, Twitter’s progress in refining their transparency and content moderation mechanisms looks short-lived. Rather than heavy-handedly sweeping away the structures already in place at Twitter, we’d have suggested working from the inside out: reviewing what the existing teams were doing well, and identifying the areas in need of further refinement and streamlining.
Responsible Machine Learning
The progress of the META team in transparency reporting was part of a broader Responsible Machine Learning Initiative launched in 2021. Twitter had committed to reviewing the harms of its algorithms and offering public access to the research, such as a fairness assessment of their home timeline recommendations across racial subgroups, and an analysis of content recommendations for different political ideologies across several continents. In short, Twitter was recognising the propensity of the platform to cause real social damage, and taking industry-leading steps to address this whilst holding themselves publicly accountable.
To reiterate, it isn’t that Twitter had already reached the gold standard on the responsible use of algorithms. It was still the case that inflammatory and hateful content was overly amplified on the platform. However, the work of the META team indicated that they were at least taking progressive steps in the right direction. Their disbandment leaves us guessing about the direction Musk plans to take responsible ML in future, assuming he plans to pursue it at all.
One recommendation from EthicsGrade would be for Twitter to prioritise improving their service interoperability with other market actors in the social media industry. Twitter worked against this when they prioritised ad monetisation and barred interoperability with LinkedIn in 2012. Fencing off a platform in this way monopolises the data of its users. This generates massive profits for the firm, but it also limits the competitiveness of emerging market actors and significantly reduces the quality and diversity of services available to users. Musk has the opportunity here to set a new tone for the social media industry and promote an atmosphere of collaboration, as opposed to a fierce competitiveness which ultimately limits room for innovation in the market.
Sustainable Technology Reporting
The sustainability of tech companies is yet to receive sufficient airtime as an issue central to the climate emergency. At EthicsGrade, we expect that this will mark the next frontier of digital regulation.
There is an overall lack of awareness of the huge amounts of energy required to train and sustain the ML algorithms that are behind many of the functions of tech companies like Twitter. Social media companies have been pressed into transparent reporting on social issues such as racial and gender bias, but their environmental reporting is comparatively substandard. Twitter’s 2020 Global Impact Report (the latest report we could find) does a poor job of basic sustainability reporting, and they fail to say much at all about the energy consumed specifically via ML training.
Proper environmental reporting should take a similar shape to how Twitter’s META team had set about reporting on the fairness of their algorithms: blog posts and short research reports providing granular but widely consumable detail on the steps the company are taking to ensure the environmental efficiency of their operations.
Though he may already have unravelled the structures that were helping to shape Twitter into a socially responsible company, there is a window here for Musk to emphasise the importance of tech companies’ role in the move toward global environmental sustainability. As yet, though, there’s no indication that this will be the case.
We expect that Twitter’s EthicsGrade score is going to undergo significant changes in the coming quarter. It’ll be useful to take note of their score pre-Musk here, alongside the scores of their social media peers.