Twitter’s Algorithm Amplifies Right-Leaning Political Content, Research Finds

Twitter's algorithm favors right-leaning political content over content from the left

Research by Twitter suggests that its algorithm favors right-leaning political content and gives comparatively less amplification to content associated with left-leaning political parties.

In its research, Twitter found that when its algorithm recommends political content to users, it may be amplifying that content unevenly across the political spectrum.

However, the study did not establish why this happens; Twitter described that as a difficult question to answer at this stage. The company has been accused of anti-conservative bias in the past.

Twitter's analysis covered tweets from political parties and from users sharing news stories in seven countries: Canada, France, Germany, Japan, Spain, the United Kingdom, and the United States.

Twitter analyzed many millions of tweets sent between 1 April and 15 August 2020.

The researchers compared how much tweets were amplified in the algorithmically ranked feed versus the reverse-chronological feed, both of which are available to users.
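As a rough illustration of that kind of comparison, the sketch below computes a simplified amplification ratio for a group of tweets, assumed here to be impressions in the algorithmic feed divided by impressions in the chronological feed. The data structure, group names, and figures are illustrative assumptions, not Twitter's actual methodology or results.

```python
# Minimal sketch of an amplification-ratio comparison, assuming a simplified
# definition: impressions a group's tweets receive in the algorithmic feed
# divided by impressions in the reverse-chronological feed. The field names
# and sample figures are illustrative, not taken from Twitter's study.

from dataclasses import dataclass


@dataclass
class GroupImpressions:
    group: str          # e.g. a party or news-outlet grouping (hypothetical)
    algorithmic: int    # impressions in the algorithmically ranked feed
    chronological: int  # impressions in the reverse-chronological feed


def amplification_ratio(g: GroupImpressions) -> float:
    """A ratio above 1.0 means the algorithmic feed amplified the group's tweets."""
    return g.algorithmic / g.chronological


if __name__ == "__main__":
    samples = [
        GroupImpressions("group A", algorithmic=1_320_000, chronological=1_000_000),
        GroupImpressions("group B", algorithmic=1_150_000, chronological=1_000_000),
    ]
    for s in samples:
        print(f"{s.group}: amplification ratio = {amplification_ratio(s):.2f}")
```

Under this simplified definition, a group whose tweets earn more impressions in the ranked feed than in the chronological feed would show a ratio above 1.0, which is the sense in which the study speaks of "algorithmic amplification."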

They found that right-wing parties and media outlets enjoyed greater levels of "algorithmic amplification" than their counterparts on the political left.

Rumman Chowdhury, director of Twitter's META (Machine Learning, Ethics, Transparency, and Accountability) team, said the company's next step was to figure out why it was happening.

“In six of the seven countries, political-right elected officials’ tweets are algorithmically boosted more than those of the political left. Right-leaning news outlets… receive greater amplification than left-leaning publications,” she added.

“It’s a far more difficult problem to answer why these apparent patterns appear, and that’s what Meta will look into.”

According to the researchers, the variations in amplification might be attributed to “the many methods” employed by political parties to communicate with people on the platform.

The researchers added that the findings "do not appear to support" claims from critics that the company's algorithms promote "extremist viewpoints more than conventional political voices."

This isn’t the first time Twitter has examined its own algorithms for bias.

In April, the company announced that it was conducting a study to see whether its algorithms caused “unintended damages.”

In May, the firm said that its automated image-cropping algorithm showed racial and gender bias, favoring white people over Black people and women over men.