Facebook's fake war on fake news

Zuck just gave your racist uncle a bigger platform to stand on.

It's hard watching Facebook struggle. For the past two years, the company has alternated between looking like it's doing something about fake news and actually doing something about it.

The company's latest stab at the problem is announcing that it will change what people see in their News Feeds. The goal is to show users fewer posts from companies and brands, and more posts and shares from friends; in particular, the ones its algorithm thinks will get you excited.

It's not specifically saying this has anything to do with stopping the spread of fake news from virulent racists, politically active conspiracy theorists or propaganda farms successfully goading our country into tearing itself apart.

No, because that would indicate it's identified the problem. Instead, Facebook says this notable change to the News Feed -- its cash cow, fed by your attention -- is to make Facebook feel more positive for users. To bring people closer together.

Wink.

At this stage, Facebook could lead a masterclass on how not to solve the fake-news problem. De-prioritizing actual news organizations while highlighting that InfoWars story about eating Tide Pods that your racist uncle shared and commented on five times is just the latest hasty misstep in what seems like Facebook's determination to amplify the issue.

While some are stroking their chins thoughtfully and musing unquestioningly that Facebook just wants its users to be happy, it's worth examining the contempt for its users that got us into this situation in the first place.

Right after the November 2016 election, despite warnings from the US government and months of widespread reporting that fake news and propaganda were influencing American voters, Mark Zuckerberg flatly denied it to his users and the world. "Of all the content on Facebook, more than 99% of what people see is authentic," he wrote. He also cautioned that the company should not rush into fact-checking.

America began its spin into chaos. Countries around the world, including the US, were seeing racial violence in the streets that we now know was directly correlated with racist rhetoric on Facebook.

Facebook had all the data. All the answers.

Facebook treated it like a reputational crisis.

In response, the company rolled out a "Disputed" flagging system, announced in a December 2016 post. The weight of policing the fake-news disaster was placed on users, who would hopefully flag items they believed were fake. Flagged items were then handed to external fact-checking organizations at a glacial pace. When two of those organizations signed off on an item as fake, the post got a very attractive "Disputed" icon and the kind of stern warning that any marketer could tell you would be great for getting people to click and share.

The "Disputed" flag system was predicted to fail from the start, but Facebook didn't seem to care.

In April 2017, Facebook characterized its efforts as effective, stating that "overall that false news has decreased on Facebook," but it did not provide any proof. It said that was because "it's hard for us to measure because we can't read everything that gets posted."

In July, Oxford researchers found that "computational propaganda is now one of the most powerful tools against democracy," and that Facebook plays a critical role in it.

In August, Facebook announced it would ban pages that post hoax stories from advertising on the social network. This preceded a bombshell. In September, everyone who'd been trying to ring the alarm about fake news, despite Facebook's denials and downplaying, found out just how right they were.

This was the month Facebook finally admitted -- under congressional questioning -- that a Russian propaganda mill had used the social-media giant's ad service for a political operation around the 2016 campaign. This came out when sources revealed to The Washington Post that Facebook had been grilled behind closed doors by congressional investigators probing Russian interference in the 2016 election.

Meanwhile, Facebook's flag system shambled along like a zombie abandoned in the desert.

In September, Facebook's fact-checking organizations told the press they were ready to throw in the towel, citing Facebook's own refusal to work with them. Facebook seemed to be actively undermining the fact-checking efforts.

Politico wrote:

(...) because the company has declined to share any internal data from the project, the fact-checkers say they have no way of determining whether the "disputed" tags they're affixing to "fake news" articles slow -- or perhaps even accelerate -- the stories' spread. They also say they're lacking information that would allow them to prioritize the most important stories out of the hundreds possible to fact-check at any given moment.

By November 2017, fake news as a service was booming.

The following month, December 2017, Facebook publicly declared that the disputed-flag system wasn't working after all. One full year after its launch.

Its replacement, "Related Articles," was explained in a Medium post that came across as more experimentation on users, paired with a deep aversion to talking about what's really going on behind the scenes.

There's more to this story, but you get the idea. It's a juggernaut of a rolling disaster capped off by this month's hand-on-heart pledge to connect people better and cut actual news organizations out of the picture.

"Facebook is a living, breathing crime scene for what happened in the 2016 election -- and only they have full access to what happened," Tristan Harris, a former design ethicist at Google told NBC News this week.

Facebook's response, in a statement, included the following:

In the past year, we've worked to destroy the business model for false news and reduce its spread, stop bad actors from meddling in elections, and bring a new level of transparency to advertising. Last week, we started prioritizing meaningful posts from friends and family in News Feed to help bring people closer together. We have more work to do and we're heads down on getting it done.

As my colleague Swapna Krishna put it, "The company is making it harder for legitimate news organizations to share their stories (and thus counter any false narratives), and by doing so, is creating a breeding ground for the fake news it's trying to stamp out in the first place."

If only Facebook put as much effort into policing fake news as it does into stomping on free expression around human sexuality, enforcing extreme, antiquated notions of puritanism with its exacting sex censorship.

Indeed, this week a former Facebook content monitor told NBC News that the company's content reviewers focused mainly on violence and pornography, making it "incredibly easy" for Russian trolls to fly under the radar with their fake news. "To sum it up, what counts as spam is anything that involves full nudity," the former monitor added.

Thank goodness. I mean, who knows how the world would spiral uncontrollably into chaos and violence if someone saw a boob.
