Facebook Is Secretly Rating Users' Trustworthiness
Facebook is using a new, secret rating to judge how trustworthy each user is, as the social network continues to battle accusations that it's an incubator for fake news. The site's role in the manipulation surrounding the 2016 US presidential election has prompted numerous changes to Facebook policy, but most users have been unaware that their truthfulness is quietly being assessed.
Facebook has rolled out a number of transparency tools, particularly in the aftermath of the Cambridge Analytica controversy, in which a consulting firm used a cache of data harvested from the social network to target election messaging. Many of the changes are intended to give greater insight into who is pushing the messages that appear in your timeline. For instance, Pages must now disclose any adverts they're running.
What it hasn't revealed until now, though, is a roughly year-long reputation assessment that classifies its users. Facebook assigns each user a score – between zero and one – that reflects their credibility, the company confirmed to The Washington Post. That score is based on a whole host of factors.
The goal, the company says, is to better predict who might be attempting to game the system, with the new trustworthiness score used alongside other behavioral clues to that end. However, what it won't do is tell users how they're being rated – or what their score is – because it doesn't want to give bad actors an opportunity to skew their ratings.
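To give a sense of how a zero-to-one score could be used in practice, here is a minimal sketch. It is purely illustrative (Facebook hasn't disclosed which signals it combines or how, so every input name, weight, and threshold below is an assumption), but it shows how a trust score might sit alongside other hypothetical behavioral clues when deciding which accounts deserve closer review.

```python
# Purely illustrative: Facebook has not disclosed its signals or how they are
# combined. This only shows how a 0-to-1 trust score could be blended with
# other hypothetical behavioral clues into a single review-priority estimate.

def review_priority(trust_score: float,
                    reports_overturned_rate: float,
                    mass_reporting_rate: float) -> float:
    """Higher value = more likely the account is trying to game the system.
    All inputs are assumed to be normalized to the 0..1 range."""
    distrust = 1.0 - trust_score
    # Hypothetical weights; a real system would learn these from data.
    return 0.5 * distrust + 0.3 * reports_overturned_rate + 0.2 * mass_reporting_rate

# A low-trust account whose reports are routinely overturned by fact-checkers
# ends up near the top of the review queue; a high-trust account does not.
print(review_priority(trust_score=0.2, reports_overturned_rate=0.9, mass_reporting_rate=0.8))   # 0.83
print(review_priority(trust_score=0.9, reports_overturned_rate=0.1, mass_reporting_rate=0.05))  # 0.09
```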
One of the metrics Facebook will admit to using is post-reporting behavior. Since a 2015 update to the site, users have been able to flag posts they believe to be false. That can include text, photos, and videos at odds with Facebook's acceptable content policy – such as violence or other extreme content – or material believed to be fake news.
The problem, Facebook unsurprisingly discovered, was that users were reporting posts as false when in fact they simply disagreed with them. Now, by checking those reports against the verdicts of independent fact-checkers, the site uses the resulting patterns to help shape each user's trustworthiness score.
"One of the signals we use is how people interact with articles," Tessa Lyons, product manager responsible for fighting misinformation, told the paper. "For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person's future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true."
It's a decision that's likely to lead to further controversy. Facebook has already found itself under attack from users frustrated that their Pages have been deactivated or delisted, with accusations of both left- and right-leaning bias in the way it moderates content. News that there's another, secret metric that could affect how visible their shared content is will only fan those flames.
For Facebook, of course, transparency around its algorithms would do as much technological harm as it might do reputational good. Pulling back the curtain on how it calculates reliability would only pave the way for attempts to manipulate it.