
Why did Twitter let horrible people advertise hateful messages?

Self-service ad platforms with automated review are vulnerable to trolls

Twitter has been working to reduce the amount of harassment and abuse that takes place in its community, but there are still some glaring cracks in its armor. The latest tactic from hateful trolls exploits a product you might think Twitter would keep under strict control: its sponsored tweets.

Earlier this month, Andrew Auernheimer, one of the internet's most malicious trolls, was able to use Twitter's sponsored tweet service to target "women and minorities" with white supremacist messages. Naturally, there was confusion and outrage over the promoted content.

And just today, Twitter let more hateful sponsored tweets slip through to users. Caitlin Roper, a feminist activist who has been the subject of targeted harassment, was impersonated on Twitter. As The Guardian first reported, the impersonator sponsored a tweet, using Roper's face and name, that urged trans people to commit suicide.

There's a good debate to have about free expression, but this isn't part of it

Part of the problem with any massive online community — whether we're talking about Reddit, or Facebook, or Twitter — is that at some point the platform simply seems too large to police. For a long time, Reddit cast itself as a highly democratic platform where imposing any speech controls beyond "don't post child porn" would be almost unpatriotic. After a number of unsightly breakdowns in civility, Reddit has since changed its tune under new management and is now pursuing anti-harassment measures. There's certainly a worthy debate about how tight speech controls on big internet platforms should be, but Twitter's latest slip-ups aren't part of it.

Twitter doesn't screen all ads in advance

Sponsored tweets telling people to kill themselves cross an obvious line. There's a good reason you don't see that on a billboard or the side of a bus — there's just nothing defensible about that kind of violent speech. And Twitter's policies about that kind of speech aren't ambiguous; the company's rules on hate content are just as robust as those governing huge systems like Google's AdWords. Twitter explicitly prohibits "hate speech, glorification of self-harm," and even messages involving organizations associated with promoting hate. So how did these tweets get through in the first place? Because Twitter doesn't have human beings screening them all in advance.

When asked about the abusive sponsored tweets, Twitter referred us to its existing policy and said that the company quickly moved to kill the advertisement. "Twitter does not allow the promotion of hate content, including hate speech against a group based on sexual orientation or gender identity," a Twitter spokesperson told The Verge. "Once this instance was flagged, we immediately suspended the account and stopped the campaign."

That kind of delayed response could pose problems in the future if abusing sponsored tweets becomes a more popular tactic, especially considering how easy it is to target certain groups of people using Twitter's automated ad system. An industry source familiar with the matter tells The Verge that Twitter does attempt to filter out inappropriate content in self-service ads, and that anything flagged by the system is sent on for manual review. That weakness isn't specific to Twitter, and automated ad delivery creates other kinds of problems as well. For instance, Google searches for names more commonly given to black Americans have been found to trigger discriminatory ads from unscrupulous advertisers.
But there are clearly some troubling holes in Twitter's review process if tweets promoting suicide are able to get through. As it stands, anybody with a Twitter account, a credit card, and some luck bypassing Twitter's automated filters can send a violent or hateful message to the people it will harm the most.
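If you're wondering how an ad can run without a person ever seeing it, here is a minimal, purely hypothetical sketch of the kind of pipeline the industry source describes. Nothing below is Twitter's actual code; the names (Ad, screen_ad, FLAGGED_TERMS, manual_review_queue) and the keyword filter are all invented for illustration. The point is structural: an automated filter approves most self-service ads outright, only flagged ads reach a human, and anything the filter's vocabulary doesn't anticipate ships with no human review at all.

```python
# Hypothetical sketch only -- not Twitter's actual system. It models the
# pipeline described above: an automated filter screens each self-service
# ad, and only ads the filter flags are routed to human reviewers.
from dataclasses import dataclass
from queue import Queue

# Invented blocklist; a production filter would use trained classifiers
# and account-reputation signals, not bare keywords.
FLAGGED_TERMS = {"kill yourself", "white power"}

@dataclass
class Ad:
    advertiser: str
    text: str

# Ads held for human review land here.
manual_review_queue: "Queue[Ad]" = Queue()

def screen_ad(ad: Ad) -> bool:
    """Return True if the ad may run immediately, with no human screening.

    Ads that trip the automated filter are held and escalated to a human
    reviewer; everything else is approved automatically, which is exactly
    the gap trolls exploit."""
    lowered = ad.text.lower()
    if any(term in lowered for term in FLAGGED_TERMS):
        manual_review_queue.put(ad)  # held until a person looks at it
        return False
    return True  # runs with no human ever seeing it

# A hateful ad phrased in terms the filter never anticipated sails through.
print(screen_ad(Ad("troll", "a message outside the filter's vocabulary")))  # True
```

However sophisticated the real filter is, the structure fails the same way: approval is the default, so an ad only gets human attention after the automated system has already missed it.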