In April, Twitter announced a Responsible Machine Learning Initiative, which aimed to find biases in its platform’s recommendation systems. In a blog post earlier this week, the company said this research had found that its machine learning (ML) algorithms disproportionately amplify political content. The company also published a paper detailing its findings in full.
Twitter offers users two ways to view their feed. The first is an algorithmic recommendation-based feed, while the other displays tweets in reverse chronological order. The research found that the algorithmic feed amplifies political content, and noted that content from “right-leaning news outlets” is amplified more than content from left-leaning ones. News outlets were characterized as left- or right-leaning by independent organizations such as AllSides and Ad Fontes Media.
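In broad terms, a study like this compares how far a tweet travels on the algorithmic timeline versus the reverse-chronological control group. The sketch below illustrates that comparison as a simple reach ratio; the function name, the ratio definition, and the example numbers are illustrative assumptions, not Twitter’s actual metric or data.

```python
# Illustrative sketch only: compares a tweet's reach among users on the
# algorithmic timeline against its reach in a reverse-chronological control
# group. This is a simplification for intuition, not Twitter's exact method.

def amplification_ratio(algo_reach: int, chrono_reach: int) -> float:
    """Return reach on the algorithmic timeline relative to the
    chronological baseline. A value above 1.0 indicates amplification."""
    if chrono_reach <= 0:
        raise ValueError("chronological reach must be positive")
    return algo_reach / chrono_reach

# Hypothetical example: a tweet seen by 1,500 algorithmic-timeline users
# and 1,000 chronological-timeline users would be amplified 1.5x.
print(amplification_ratio(1500, 1000))
```

Under this framing, “all algorithms amplify” simply means most content scores above 1.0; the study’s question is whether some political content scores systematically higher than others.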
“Tweets about political content from elected officials, regardless of party or whether the party is in power, do see algorithmic amplification when compared to political content on the reverse chronological timeline,” the company said in a blog post. The researchers examined tweets from elected officials in seven countries: Canada, France, Germany, Japan, Spain, the United Kingdom, and the United States. The company didn’t say which news outlets were examined.
Further, Twitter said that “group effect did not translate to individual effects,” meaning that party affiliation or a person’s ideology is not something the company’s algorithms factor into recommendations. “Two individuals in the same political party would not necessarily see the same amplification,” the company said, though that does not clarify whether a person’s political leanings dictate the content they see on the platform, something experts have often claimed is the case with all social media.
“In six out of seven countries — all but Germany — tweets posted by accounts from the political right receive more algorithmic amplification than the political left when studied as a group,” the company said.
Twitter said that the study only establishes that political content is amplified on the platform; determining why these patterns occur is a separate, harder question. “The ML Ethics, Transparency and Accountability (META) team’s mission, as researchers and practitioners embedded within a social media company, is to identify both, and mitigate any inequity that may occur,” the blog post said.
The company added that a “root cause analysis” will be conducted to determine what changes are needed, if any. “Algorithmic amplification is not problematic by default – all algorithms amplify. Algorithmic amplification is problematic if there is preferential treatment as a function of how the algorithm is constructed versus the interactions people have with it,” Twitter said.