Like other Facebook users, I saw the ad at the top of my feed this morning: “Amber,” it said, “It’s possible to spot false news.” (Wait, what? Really?) “As we work to limit the spread, check out a few ways to identify whether a story is genuine.” Click it, and you're led to a page with handy tips like “Be skeptical of headlines,” “Watch for unusual formatting,” and “Is the story a joke?” Oh, Facebook—always looking out for our best interests!
Or perhaps it’s because the company drew heavy criticism for its role in helping disseminate false information during the 2016 election. Surely also lighting a flame under its feet: a bill passed last week in Germany that can fine social media companies up to $53 million if they don’t immediately remove hate speech and fake news from their sites.
And just today, British newspaper the Times accused Facebook of “refusing to remove potentially illegal terrorist and child pornography” (yikes!) after a reporter went undercover on the site and found offensive material being shared, and even promoted, through Facebook algorithms.
Blame it on those pesky algorithms—they’ve got a mind of their own! Zuckerberg did just that in a recent interview with Fast Company:
Go back a few years, for example, and we were getting a lot of complaints about click bait. No one wants click bait. But our algorithms at that time were not specifically trained to be able to detect what click bait was.
Facebook’s call-and-response system, Zuckerberg said, is a “constant work in progress,” and he nearly started waxing philosophical about the whys and hows of (eventually managing) the human impulse to tell stories (and spread lies).
It’s not like they are problems that exist because there’s some kind of underlying, nefarious motivation. I mean, certainly giving people a voice leads to more diversity of opinions, which if you don’t manage that can lead to more fragmentation, but I think this is kind of the right order of operations. You know, you give people a voice and then you figure out what the implications of that are, and then you work on those things.
In December, Facebook partnered with fact-checking sites Snopes and PolitiFact to launch focus groups that let users mark news stories as fake. And just today, the company orchestrated a massive crackdown in France, deleting over 30,000 fake accounts.
Those fake news tips circulating right now are part of the company’s latest attempts to curb the spread of viral misinformation. Here’s a good point-by-point breakdown of those tips, which explains why Facebook’s fake news advice doesn’t necessarily help. For instance—when trying to investigate the source of a news story, confirmation bias is a thing that exists:
Sure, this might eliminate the “Denver Guardians” and other baldly false news sources of the world, but on the other hand: Enough people trust Breitbart for it to be a distressingly powerful news organization. And if you check a shady news outfit’s About section … “Well, it says here that this organization is ‘dedicated to unraveling the criminal conspiracy that is the Clinton Foundation.’ Sounds legit!”