
Every major social networking platform seems to be struggling to fight fake news these days. Twitter is mowing through millions of account deletions, and YouTube recently pledged $25 million, in part to give users tools to identify bad information. But you can argue that Facebook has fought the most visible battles, right down to its CEO Mark Zuckerberg getting grilled by Congress on live TV.

SEE: How to update your security and privacy settings on Facebook

With over 2.2 billion monthly active users, Facebook has an enormous attack surface with wide-ranging influence. And in the wake of the Cambridge Analytica scandal, it has yet to fully escape intense media coverage of its war on fake news.

One way to deal with that harsh spotlight is to pull back the curtain a little, and now the Washington Post has revealed that Facebook has been hard at work on a trustworthiness measurement for its users themselves, rather than just for the content those users share.

Facebook product manager Tessa Lyons tells the Post that the company previously introduced a system that let users flag content as possibly fake, but that this system was frequently abused to target accurate information that users simply didn't want to see. The company also collaborates with professional fact-checkers to evaluate what Facebook users are posting.

Because malicious flagging can be coordinated among a group of Facebook users, it can stifle the effect of this kind of crowdsourced evaluation. So it appears that Facebook has decided to up the ante by skipping past the content and evaluating the users themselves.

But because an algorithm can be tricked if you know how it operates, Facebook isn't detailing what data it plugs in to arrive at its number, which ranges from 0 to 1.

FOLLOW Download.com on Twitter to keep up with the latest app news.

However, we can still reasonably assume the number works like a probability, wherein a score of 0.60 would translate to a 60 percent chance that a Facebook user is trustworthy rather than acting with intent to distribute or directly participate in fake news. If that reading is right, certain actions may be taken against a user whose score comes in too low.
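To make that scale concrete, here is a minimal, purely speculative sketch in Python of one way a 0-to-1 trust score could be used in practice: weighting how much each user's "fake news" flag counts toward sending a post to fact-checkers. The function name, the weighting scheme, and the threshold are our own assumptions for illustration; Facebook has not disclosed how it actually uses the score.

```python
# Purely illustrative: the names, weighting scheme, and threshold below are
# assumptions for demonstration, not Facebook's actual (undisclosed) method.

def should_send_to_fact_checkers(reporter_trust_scores, threshold=2.5):
    """Escalate a flagged post only when enough trusted users have reported it.

    reporter_trust_scores: one 0.0-1.0 trust score per user who flagged the post.
    Each flag is weighted by the reporter's score, so a brigade of low-scored
    accounts counts for less than a handful of highly trusted ones.
    """
    weighted_flags = sum(reporter_trust_scores)
    return weighted_flags >= threshold

# Three flags from well-trusted users clear the bar...
print(should_send_to_fact_checkers([0.9, 0.95, 1.0]))  # True (weight 2.85)
# ...while twenty flags from low-scored accounts do not.
print(should_send_to_fact_checkers([0.1] * 20))        # False (weight 2.0)
```

Whatever Facebook's real pipeline looks like, the appeal of a per-user score in a setup like this is that it blunts coordinated flagging without requiring the company to reveal how any individual report is judged.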

Meanwhile, users won't see any indication of this trustworthiness score on their end, which could lead to an anxiety-inducing Panopticon effect: you never know whether you're under Facebook's microscope, whether anything you're doing will drag your score down, or how often your behavior is being reassessed.

We reached out to Facebook to ask whether it's concerned about users reacting negatively to this new measurement system, but we did not immediately receive a response.

But with more than 2.2 billion users logging in every month, Facebook's new user evaluation tool is presumably scanning the network 24/7 to keep up with all that activity, and you may never know when, or whether, you've come under examination.

The takeaways

  1. Facebook's war on fake news has expanded to evaluate the trustworthiness of actual users, rather than just the trustworthiness of the content they share, reports the Washington Post.
  2. Facebook isn't detailing exactly what data it uses to evaluate a user, nor can users see what their score is.

Tom is a senior editor at Download.com.