What Even Is ‘Coordinated Inauthentic Behavior’ on Platforms?

No one knows, not even the policy writers or enforcers. And the ambiguity is exacerbating threats to our electoral process. 

Earlier this week, Twitter and Facebook suspended and removed accounts associated with Turning Point Action, an affiliate of the prominent conservative youth organization Turning Point USA. The takedowns came in response to a Washington Post report revealing that posts from these accounts were part of a broad coordinated effort led by TPA. According to the report, most of the messages were comments and replies to news posts across Twitter, Facebook, and Instagram that sought to cast doubt on the electoral process and downplay the threat of Covid-19.

Content aside, this is not the first time supporters, or even social-media-savvy teens, have worked the platforms to drive up a hashtag or advance a cause. But each time, the platforms seem to respond differently, and sometimes not at all. That’s because the line between “coordinated behavior” and campaign activity, as defined by the platforms, is blurred. This ambiguity and inconsistent enforcement, as well as the haphazard manner in which political speech is moderated, exacerbate threats to the electoral process—not to mention platforms’ own ability to defend themselves to critics on both sides of the aisle.

According to The Washington Post, TPA enlisted—and paid—young supporters to create thousands of posts. Some criticized the highly coordinated effort, likening it to a troll farm. But offline, campaign volunteers use scripts for everything from phone banking and text messaging to canvassing. My recently published research on the 2016 presidential campaign reveals that enlisting supporters in coordinated social media efforts is actually a routine campaign practice. Multiple presidential campaigns described to me practices aiming to, as Twitter described TPA’s effort to the Post, “amplify or disrupt conversations.”

For example, in 2016, the Sanders campaign had a strong if informal working relationship with social media allies, including a large subreddit of Sanders supporters. The campaign would reach out directly to the influential and active supporters in the community and ask them to do things like get a particular hashtag trending. Similarly, the Trump campaign identified supporters who were influential on social media—the campaign dubbed them “The Big-League Trump Team”—and during important events, such as debates, would text them with specific content to share.

“Trump had a big footprint, but then we were behind the scenes kind of putting gasoline on all of that,” Gary Coby, the director of digital advertising and fundraising for Trump’s 2016 general election campaign, told me of the strategy.

Of course, TPA’s practices differ in a few key ways from the ones I reveal in my research. First, the participants were paid for their posts. And second, at least some of them were minors. But neither of these two elements—regardless of how disturbing they may be—seems to have factored into Facebook and Twitter’s decision to label these efforts coordinated or inauthentic, according to the statements they’ve given to the media.

Not only are TPA’s practices akin to routine campaign practices, as described to me by the professionals who ran 2016 presidential campaigns, but here again we see platforms drawing a rather arbitrary line around “coordination” that will be nearly impossible to defend and enforce with any consistency.

Some of the posts and comments shared as part of TPA’s effort contained misinformation about the voting process, a clear violation of both platforms’ policies designed to protect the integrity of the election. Platforms should have removed those posts—coordinated or not. But virtually all of the accounts remained active on the platforms until the Post contacted the companies as part of its reporting.

In response to the takedowns by Twitter and Facebook, conservatives have again cried foul, alleging anti-conservative bias (despite considerable evidence that conservative views outperform others on social media). But as I’ve argued before, these charges persist in part because companies like Facebook and Twitter do not make clear and consistent decisions based on their own policies.

Like so many of the revelations about content that violates platform policies, the TPA posts were revealed not through the platforms’ own moderators, but through the intrepid reporting of journalists. Platforms’ reliance on the press to police their own policies amounts to whack-a-mole enforcement, with little transparency and even less consistency. And that’s not to mention how much easier this dependence makes it for conservatives to cry censorship, given that many on the right already see the mainstream press as biased toward liberals.

We are now less than 50 days away from what is likely to be a contested election—and one in which social media companies are and will continue to be arbiters of political speech. Platforms’ policies around “coordinated” and/or “inauthentic” behavior are, like most of their policies, vaguely written, flexibly interpreted, and inconsistently applied. In light of real fears about coordinated foreign efforts to interfere with our elections, the lack of clarity in these policies leaves platforms open to charges of bias in moderating speech, further undermining faith in the democratic process. A focus on coordination and authenticity means moderation is predicated on knowing something about the intent of the users posting the content. But, as you might imagine, that’s incredibly difficult to glean from a feed alone. As my colleagues have argued, our democratic practices would be better protected if platforms focused squarely on electoral misinformation—no matter the source.

This incident with TPA is just the latest example of platforms indiscriminately applying their own imprecise policies. If clear lines around coordinated behavior—and, more importantly, around electoral misinformation—are not drawn and enforced, our election is bound to suffer.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Submit an op-ed at opinion@wired.com.

