Opinion: Here’s a Simple Way Elon Musk Could Make Twitter Better

Last month, Twitter user Qasim Rashid wrote the following:

[Embedded tweet showing a chart of West Texas Intermediate crude oil prices]

Despite this, Rashid’s tweet got 18,000 retweets. As of this writing, it is still up.

Tweets like this come to mind when I think of Twitter’s announcement on Monday that it has agreed to sell itself to Elon Musk for $44 billion.

“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” Musk said in a press release announcing the acquisition.

In recent years, Twitter has developed an increasingly sophisticated system for removing various types of malicious and low-quality content, such as hate speech, vaccine misinformation, and former President Donald Trump’s tweets tacitly supporting the January 6 attack on the U.S. Capitol.

Rashid’s tweet does not appear to break any of Twitter’s rules. But rampant disinformation clearly does nothing for a functioning democracy.

Conversations about the issue tend to fall along familiar party lines: people on the left demand that social media platforms do more to combat disinformation and hate speech, while people on the right denounce such efforts as censorship. Musk has sided with the free-speech camp, so Twitter is unlikely to moderate content more aggressively under his direction.

But there are options other than simply removing disinformation or leaving it up. A good starting point for Twitter would be to stop actively promoting it. That oil tweet got 18,000 retweets because Twitter is designed to maximize the spread of highly “engaging” tweets. And engaging tweets are often bad tweets.

The Problem with Algorithmic News Feeds

When I joined Twitter in 2008, the site showed users every tweet in strict chronological order. In 2016, Twitter introduced a new algorithmic feed that prioritized tweets it thought would interest each user. The change met with significant resistance, and Twitter initially made it optional. But over time, Twitter increasingly pushed users to switch, and today the algorithmic feed is the default view.

It’s easy to see this shift as a harmless improvement in user experience. If Twitter knows which tweets I might find most interesting, why not show them first? But this transition had major implications for the platform that Twitter would become.

In 2015, I had enough followers on Twitter that I could count on every tweet to get at least a few responses from followers. Some tweets generated more responses than others, and I usually hoped my tweets would go viral. But my main motivation was to share what I found interesting with my immediate followers.

But a few years later, I noticed a growing variability in the level of response to my tweets. If I wrote about an interesting topic (say, US politics), I often got a bunch of likes and a few retweets. But if I tweeted about a less interesting topic, engagement would be very low. Sometimes I would tweet and get no response at all.

The first few times this happened, I wondered if I had written a particularly boring tweet. But now I think the more likely explanation is that almost no one saw those tweets. Once Twitter’s algorithm decides that a tweet is not engaging enough, it stops surfacing it in people’s feeds.
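The dynamic described above can be sketched in a few lines of toy code. To be clear, Twitter’s actual ranking system is proprietary and vastly more complex; the scoring field, the cutoff value, and the function names below are all invented for illustration. The point is simply the structural difference: a chronological feed shows everything, while an engagement-ranked feed silently drops tweets below some threshold.

```python
# Toy sketch of a chronological feed versus an engagement-ranked feed.
# All names, scores, and thresholds here are hypothetical illustrations,
# not Twitter's real ranking logic.

def chronological_feed(tweets):
    """Newest first, every tweet shown -- roughly the pre-2016 model."""
    return sorted(tweets, key=lambda t: t["timestamp"], reverse=True)

def algorithmic_feed(tweets, min_score=0.1):
    """Rank by a predicted-engagement score and drop anything below a cutoff."""
    visible = [t for t in tweets if t["predicted_engagement"] >= min_score]
    return sorted(visible, key=lambda t: t["predicted_engagement"], reverse=True)

tweets = [
    {"text": "inflammatory hot take", "timestamp": 1, "predicted_engagement": 0.9},
    {"text": "nuanced policy thread", "timestamp": 3, "predicted_engagement": 0.4},
    {"text": "niche, boring topic",   "timestamp": 2, "predicted_engagement": 0.05},
]

# The chronological feed shows all three tweets; the algorithmic feed
# silently drops the low-scoring one -- "screaming into the void."
print([t["text"] for t in chronological_feed(tweets)])
# -> ['nuanced policy thread', 'niche, boring topic', 'inflammatory hot take']
print([t["text"] for t in algorithmic_feed(tweets)])
# -> ['inflammatory hot take', 'nuanced policy thread']
```

Note that in the second feed the inflammatory tweet jumps to the top and the niche tweet disappears entirely, which is exactly the incentive structure the rest of this piece describes.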

The bottom line is that Twitter’s software “educates” us all about what to tweet. Nobody stops us from tweeting about uninteresting topics, but when we do, it’s like screaming into the void. Over time, we learn to write more “engagingly,” which often means writing tweets that are partisan, inflammatory, or pander to the biases of our existing followers.

And since so much of our public discussion takes place on Twitter, I think this shift has had a non-trivial impact on our political culture. Twitter feeds people tweets that confirm their existing biases and make them angry or fearful. When we see tweets from the “other side,” it’s often someone saying something outrageous, followed by dunks from our side. We’re less likely to see tweets that challenge our preconceptions or introduce us to topics we didn’t know we were interested in.

This basic insight is, of course, not new. It has been a common criticism of social media since at least 2010, when writer and activist Eli Pariser coined the term “filter bubble” to describe the phenomenon. But the proliferation of algorithmic feeds over the past decade has exacerbated the problem. A common prescription for escaping filter bubbles is to consciously follow people with ideological views different from your own. But that won’t help if Twitter’s algorithm notices that you’re not particularly engaged by their tweets and stops showing them to you.
