Tuesday, June 11, 2024

The X Factor of Misinformation

Many posts have discussed myths and misinformation.

Marc Burrows at The Independent:

Back when I worked at Twitter, in the days before Elon Musk’s takeover and self-consciously edgy and embarrassing “X” rebrand, we took the threat of misinformation incredibly seriously. I worked on Twitter’s curation team during multiple elections, including in the UK and the US, and through the first two years of the pandemic. I saw how false information spreads quickly and is believed easily, and how difficult it is to stop it travelling once it starts.

We worked with Reuters and the Associated Press to debunk rapidly growing and unreliable stories. We coined the term “pre-bunk” for identifying likely misinformation before it spread. Misleading posts were labelled once they reached a certain influence threshold.

We all knew this was mission critical, because Twitter punches above its weight in terms of influence on the news agenda and public conversation – that’s why Musk became so invested. We all wanted to make it safer. Better. A force for good.

The curation team – my team – was among the first to be cut in Musk’s new regime. In his first weeks, and, in my view, with a lack of subtlety, grace or much sense, he undid years of work: wiping out or reducing those areas of the company that dealt with misinformation and community moderation, disbanding the Trust and Safety team, and unbanning accounts previously sanctioned for spreading harmful lies.

Since then, the EU has found that X is the social media platform with the highest disinformation rate. As Miah Hammond-Errey, the director of the Emerging Technology Program at the United States Studies Centre at the University of Sydney, said last year: “Few recent actions have done more to make a social media platform safe for disinformation, extremism, and authoritarian regime propaganda than the changes to Twitter since its purchase by Elon Musk.”