The Meta Oversight Board Has Some Genuinely Smart Suggestions

In its new decision, the board offers significant and insightful recommendations for how to improve speech online.


A little more than 14 months after The Wall Street Journal first published the “Facebook Files”—the reporting series that exposed the inner workings of the site’s content-moderation practices—the Meta Oversight Board has finally released its opinion on the controversial and opaque cross-check program that gave preferential treatment to certain users of the site, even when they openly flouted the site’s community standards.

For months, online-speech experts worried that the board’s decision would fall short, either by recommending the oversimplified solution of terminating the program entirely or by suggesting only vague methods of reform. But the opinion, which lays out what the board describes as more than two dozen concrete recommendations, including which types of entities can qualify for such protections and who will select them, exceeded expectations, offering serious guidance on one of the toughest questions in online speech.

The problem that Facebook is trying to solve has been central to the internet since its beginning. From very early on, anyone with a keyboard and a modem could suddenly disseminate their ideas to dozens or thousands of people without having to get a pitch approved by a New York Times opinion editor or rent out a town hall.

But one of the uncomfortable truths about social media and user-generated-content platforms is that, though they have democratized individuals’ access to publishing, they have also re-created existing power structures, importing them from the offline world to the online one. This is especially true on the major social-media platforms—including Facebook, Twitter, YouTube, Instagram, and TikTok—where someone who has existing fame can easily attain a massive following and reach many more people with their speech than the average blogger or YouTuber could. When this is just about celebrities hawking their brands or posting uncensored videos of their makeup routines, there’s little harm. But as the world has recently seen with celebrities such as Alex Jones or Ye, bad actors with large audiences can use these platforms to easily spread cruel, hateful messages to millions, causing extensive, real harm with their online speech.

Facebook grappled with this reality early on—as far back as 2013—and over time developed a system of content moderation and speech governance that gave special review and treatment to its most high-profile users and public figures. This was the cross-check program: Facebook’s off-book review process that prioritized certain users for a second look at decisions coming out of the general content-moderation process, which performed something like 100 million content-moderation assessments a day.

The cross-check system was well known in the industry but invisible to the general public until the “Facebook Files.” That reporting detailed for the first time how the system worked and just how pay-to-play—or perhaps more accurately “pay-to-stay”—it really was. Whereas a topless photo posted for breast-cancer awareness by a user in Brazil got that user locked out of Instagram almost immediately, her appeal languishing for weeks, nonconsensual pornography of a woman who had accused a celebrity athlete of rape, posted by that athlete to his roughly 100 million Instagram followers, stayed up for more than a day, enough time for some 56 million people to see it. (The athlete in question, the Brazilian soccer star Neymar, has denied the rape allegation.)

Neymar was on a “white list” within Facebook’s cross-check system, which allowed his posts to stay up. Compounding concerns about special treatment, Meta announced in 2021—after the “Facebook Files” had been published and Facebook had admitted that criticism of the cross-check program was fair—that it was putting an economic deal in place with Neymar “to stream games exclusively on Facebook Gaming and share video content to his more than 166 million Instagram fans.”

Following the Neymar exposé and general reporting on the cross-check program, the Meta Oversight Board began an investigation. The board, which is often called the Supreme Court of Facebook, was established in 2020 to review exactly how—and how well—the speech mega-platform adheres to basic principles of human rights around free expression in both substance and procedure. In the two years it has been running, the board has taken on a small but essential set of cases, most notably Facebook’s decision to ban President Donald Trump from the platform because of his posts during the January 6 insurrection, a decision it upheld but critiqued.

In the 15 months since the board began its review, the group has slowly and meticulously held fact-finding meetings with Meta, interviewed former employees who worked on the program, and held meetings with outside groups to ask about their biggest concerns regarding cross-check. (I attended some of these meetings as a researcher who studies online-speech governance, not as a participant.)

As the board deliberated, many with knowledge about the program expressed skepticism that it would be able to develop real recommendations to address the situation. Essentially, although the cross-check program had deep flaws, it was seen as a necessary part of effective content moderation. “If one of the board’s goals is to protect freedom of expression, there’s a real chance that scrapping the program could cause more speech to mistakenly come down, which could very much impact discussion and debate around important issues,” Katie Harbath, the former director of global-elections public policy at Facebook, told me.

But the board did not recommend abandoning the cross-check model. Rather, the decision, announced earlier this week, offers one of the most complete and exhaustive reviews to date (it runs to 57 pages) of how the black box of content-moderation appeals works for elite users of the site—and then discusses how to improve it. “I have some quibbles, but overall this is incredibly comprehensive,” Harbath told me.

Central to the board’s recommendations are two planks that many current and former employees at Facebook have pushed for years: transparency and separation of powers. But unlike some past decisions that have gestured at general and somewhat subjective ideas of proportionality or equity, the board’s recommendations are very specific, going so far as to spell out which types of public figures Meta should, may, and should not include in the cross-check program.

High on the list of users who should be protected are journalists, public officials, and candidates for office. But the board is clear that the era of preferential treatment for celebrities and political leaders who are valuable to Meta for financial reasons must end. Instead, the board recommends that any public officials and candidates placed in this group of protected entities should be selected by a team that “does not report to public policy or government relations” teams at Facebook. This splitting of Facebook’s speech oversight from its political interests is a type of church-and-state separation that threatens to end what many at the company call the “Joel Kaplan effect,” after the notoriously politically powerful Facebook adviser who oversees the company’s lobbying efforts in Washington, D.C. “List creation, and particularly this engagement, should be run by specialized teams, independent from teams whose mandates may pose conflicts of interest, such as Meta’s public policy teams,” the board decision reads. “To ensure criteria are being met, specialized staff, with the benefit of local input, should ensure objective application of inclusion criteria.”

Another way to see the board’s recommendations is as a call to establish norms like those that have long existed in journalism, creating a figurative (and sometimes literal) wall between the business and content sides of a newsroom. Although the decision leaves the platform the discretion to maintain a second group of “users with commercial importance and business partners,” it is adamant that such a system should never mean that the enforcement of content rules on the site is suspended or delayed for this type of user.

The board makes many other recommendations—tracking core metrics and increasing transparency about who is on the cross-check list and when it’s being used—but even more than the high-profile Trump decision, this latest opinion from the Oversight Board showcases its potential to bring real reform and accountability to the platform. In an age when speech platforms are behaving lawlessly, this couldn’t be more welcome.

Kate Klonick is a professor at the St. John’s University School of Law and a fellow at the Information Society Project at Yale Law School.