Researcher uses bots to tackle racists

Writing in The Washington Post, researcher Kevin Munger used Twitter bots with "white" and "black" identities to tackle racism, and appears to have worked out a strategy that reduces the use of racist slurs.

Munger used the Twitter accounts to send messages designed to remind harassers of the humanity of their victims and to prompt them to reconsider the norms of online behaviour.

He sent every harasser the same message:

@[subject] Hey man, just remember that there are real people who are hurt when you harass them with that kind of language

He used a racial slur as the search term because its presence was the strongest evidence that a tweet might contain racist harassment. He restricted the sample to users with a history of using offensive language, and included only subjects who appeared to be white or who were anonymous.
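As a rough illustration of that selection step, here is a minimal Python sketch. The Tweet record, the get_timeline lookup and the offensive-language threshold stand in for real Twitter API calls and for Munger's actual criteria, which are not described in detail here.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Tweet:
    tweet_id: int
    author_handle: str
    text: str

# The standardised message quoted above, with the handle filled in per subject.
SANCTION_MESSAGE = (
    "@{handle} Hey man, just remember that there are real people who are "
    "hurt when you harass them with that kind of language"
)

def has_offensive_history(timeline: Iterable[Tweet],
                          offensive_terms: List[str],
                          min_hits: int = 3) -> bool:
    """Crude proxy for a 'history of using offensive language' (threshold is illustrative)."""
    hits = sum(
        any(term in tweet.text.lower() for term in offensive_terms)
        for tweet in timeline
    )
    return hits >= min_hits

def select_subjects(candidates: Iterable[Tweet],
                    get_timeline: Callable[[str], List[Tweet]],
                    offensive_terms: List[str]) -> List[Tweet]:
    """Keep tweets whose authors show a history of offensive language.

    The study additionally restricted subjects to white or anonymous
    accounts; that judgement is not modelled here.
    """
    return [
        t for t in candidates
        if has_offensive_history(get_timeline(t.author_handle), offensive_terms)
    ]
```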

He bought 500 followers for half of the bots and gave the remaining bots only two followers each. This represents a large status difference: a Twitter user with two followers is unlikely to be taken seriously, while 500 followers is a substantial number. Crossed with the bots' white and black identities, this gave four bot types in all, as sketched below.
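The 2×2 design can be made concrete with a short sketch: the two identities crossed with the two follower counts yield the four bot types, and each subject is randomly assigned to one. The assignment function and seed are illustrative, not taken from the study.

```python
import random
from itertools import product

IDENTITIES = ("white", "black")
FOLLOWER_COUNTS = (2, 500)          # low status vs. purchased high status
CONDITIONS = list(product(IDENTITIES, FOLLOWER_COUNTS))  # four bot types

def assign_conditions(subject_handles, seed=42):
    """Randomly assign each subject to one of the four bot conditions."""
    rng = random.Random(seed)
    return {handle: rng.choice(CONDITIONS) for handle in subject_handles}

# Example: assign_conditions(["user_a", "user_b"]) might yield
# {"user_a": ("white", 500), "user_b": ("black", 2)}
```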

Only one of the four types of bots caused a significant reduction in the subjects’ rate of tweeting slurs – the white bots with 500 followers.
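To make the outcome measure concrete, here is a simplified sketch that compares each subject's rate of slur-containing tweets before and after being messaged, averaged within condition. The study's actual statistical analysis was more involved; this difference-of-means version is only for illustration.

```python
from statistics import mean

def slur_rate(tweets, slur_terms):
    """Fraction of a subject's tweets containing any slur term."""
    if not tweets:
        return 0.0
    hits = sum(any(s in t.lower() for s in slur_terms) for t in tweets)
    return hits / len(tweets)

def mean_change_by_condition(subjects, slur_terms):
    """subjects: dicts with 'condition', 'pre_tweets' and 'post_tweets' keys."""
    by_condition = {}
    for s in subjects:
        change = (slur_rate(s["post_tweets"], slur_terms)
                  - slur_rate(s["pre_tweets"], slur_terms))
        by_condition.setdefault(s["condition"], []).append(change)
    return {cond: mean(changes) for cond, changes in by_condition.items()}
```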

Generally, though, he found it is possible to cause people to use less harassing language, and this is most likely when both individuals share a social identity. Unsurprisingly, high-status people are also more likely to cause a change.

Munger thinks that many people are already sanctioning bad behaviour online, but in ways that can backfire. If someone calls out bad behaviour in a way that emphasises the social distance between themselves and the person they're calling out, the rebuke is less likely to be effective.