What Even Is ‘Coordinated Inauthentic Behavior’ on Platforms?


Earlier this week, Twitter and Facebook suspended and removed accounts associated with Turning Point Action, an affiliate of the prominent conservative youth organization Turning Point USA. The takedowns came in response to a report from The Washington Post revealing that posts from these accounts were part of a broad coordinated effort led by TPA. According to the report, the majority of the messages were comments and replies to news posts across Twitter, Facebook, and Instagram that generally sought to cast doubt on the electoral process and downplay the threat of Covid-19.

Shannon C. McGregor (PhD, University of Texas) is an assistant professor at the University of North Carolina and a senior researcher with the Center for Information, Technology, and Public Life.

Content aside, this is not the first time supporters, or even social-media-savvy teens, have worked the platforms to drive up a hashtag or advance a cause. But each time, the platforms seem to respond differently, and sometimes not at all. That’s because the line between “coordinated behavior” and campaign activity, as defined by the platforms, is blurred. This ambiguity and inconsistent enforcement, as well as the haphazard manner in which political speech is moderated, exacerbate threats to the electoral process—not to mention platforms’ own ability to defend themselves to critics on both sides of the aisle.

According to The Washington Post, TPA enlisted—and paid—young supporters to create thousands of posts. Some criticized the highly coordinated effort, likening it to a troll farm. But offline, campaign volunteers use scripts for everything from phone banking and text messaging to canvassing. My recently published research on the 2016 presidential campaign reveals that enlisting supporters in coordinated social media efforts is actually a routine campaign practice. Multiple presidential campaigns described to me practices aiming to, as Twitter described TPA’s effort to the Post, “amplify or disrupt conversations.”

For example, in 2016, the Sanders campaign had a strong if informal working relationship with social media allies, including a large subreddit of Sanders supporters. The campaign would reach out directly to the influential and active supporters in the community and ask them to do things like get a particular hashtag trending. Similarly, the Trump campaign identified supporters who were influential on social media—the campaign dubbed them “The Big-League Trump Team”—and during important events, such as debates, would text them with specific content to share.

“Trump had a big footprint, but then we were behind the scenes kind of putting gasoline on all of that,” Gary Coby, the director of digital advertising and fundraising for Trump’s 2016 general election campaign, told me of the strategy.

Of course, TPA’s practices differ in a few key ways from the ones I reveal in my research. First, the participants were paid for their posts. And second, at least some of them were minors. But neither of these elements, however disturbing they may be, seems to have factored into Facebook and Twitter’s decision to label these efforts coordinated or inauthentic, according to the statements they’ve given to the media.

Not only are TPA’s practices akin to routine campaign practices, as described to me by the professionals who ran 2016 presidential campaigns, but here again we see platforms drawing a rather arbitrary line around “coordination” that will be nearly impossible to defend and enforce with any consistency.

Some of the posts and comments shared as part of TPA’s effort contained misinformation about the voting process, a clear violation of both platforms’ policies designed to protect the integrity of the election. Platforms should have removed those posts—coordinated or not. But virtually all of the accounts remained active on the platforms until the Post contacted the companies as part of its reporting.

In response to the takedowns by Twitter and Facebook, conservatives have again cried foul, alleging anti-conservative bias (despite considerable evidence that conservative views outperform others on social media). But as I’ve argued before, these charges persist in part because companies like Facebook and Twitter do not make clear and consistent decisions based on their own policies.

Like so many instances of content that violates platform policies, the TPA posts came to light not through the platforms’ own moderators, but through the intrepid reporting of journalists. Platforms’ reliance on the press to police their own policies amounts to whack-a-mole enforcement, with little transparency and even less consistency. And that’s not to mention how much easier this dependence makes it for conservatives to cry censorship, given that many on the right already see the mainstream press as biased toward liberals.


