Kevin Munger, writing in The Washington Post, described an experiment that used Twitter bots, one presenting as white and one as black, to tackle racist harassment, and he appears to have worked out a strategy that reduces the use of racist slurs.
Munger used Twitter accounts to send messages designed to remind harassers of the humanity of their victims and to prompt them to reconsider the norms of online behaviour.
He sent every harasser the same message:
@[subject] Hey man, just remember that there are real people who are hurt when you harass them with that kind of language
He then used a racial slur as the search term because it was the strongest evidence that a tweet might contain racist harassment. He restricted the sample to users who had a history of using offensive language, and only included white subjects or anonymous people.
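A sample restriction like this can be sketched in a few lines. The field names and the history threshold below are invented for illustration; this is not Munger's actual pipeline, just the shape of the filter he describes:

```python
# Hypothetical sketch of the study's sample restriction: keep accounts
# with a history of offensive language whose owners appear to be white
# or anonymous. Field names and the threshold are invented.
candidates = [
    {"user": "a", "offensive_tweet_count": 12, "identity": "white"},
    {"user": "b", "offensive_tweet_count": 1,  "identity": "white"},
    {"user": "c", "offensive_tweet_count": 9,  "identity": "anonymous"},
    {"user": "d", "offensive_tweet_count": 15, "identity": "black"},
]

def eligible(account, min_history=5):
    """True if the account has a history of offensive language and
    matches the study's identity restriction (white or anonymous)."""
    return (account["offensive_tweet_count"] >= min_history
            and account["identity"] in ("white", "anonymous"))

subjects = [a["user"] for a in candidates if eligible(a)]
print(subjects)  # "b" lacks the history; "d" falls outside the restriction
```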
He bought 500 followers for half of the bots and gave the remaining bots only two followers each. This represents a large status difference: a Twitter user with two followers is unlikely to be taken seriously, while 500 followers is a substantial number.
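Combined with the two bot identities, this gives a 2×2 design: bot race crossed with follower count. A minimal sketch of randomly assigning subjects to the four arms (the arm labels and subject names are illustrative, not from the study):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# The 2x2 design described above: bot race x follower count.
ARMS = [("white", 500), ("white", 2), ("black", 500), ("black", 2)]

def assign(subjects):
    """Randomly assign each subject to one of the four bot conditions."""
    return {s: random.choice(ARMS) for s in subjects}

assignment = assign([f"user{i}" for i in range(8)])
for subject, (race, followers) in assignment.items():
    print(subject, race, followers)
```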
Only one of the four types of bots caused a significant reduction in the subjects’ rate of tweeting slurs – the white bots with 500 followers.
Generally, though, he found that it is possible to get people to use less harassing language, and that this is most likely when both individuals share a social identity. Unsurprisingly, high-status people are also more likely to cause a change.
Munger thinks that many people already sanction bad behaviour online, but that they often do it in a way that can backfire: if people call out bad behaviour in a way that emphasises the social distance between themselves and the person they’re calling out, the telling-off is less likely to be effective.
Researchers have shown that the leading candidates in the US election are relying on Twitter bots to get out their message.
In an analysis of Twitter traffic during the first presidential debate between Hillary Clinton and Donald Trump, researchers found huge numbers of active bots being used to amplify support on Twitter.
Samuel Woolley, director of research at Political Bots, said automated accounts were tweeting messages with hashtags associated with the candidates. For example, #makeamericagreatagain or #draintheswamp for Trump; #imwithher for Clinton. The numbers were huge – one third of all tweets using pro-Trump hashtags were created by bots and one fifth of all Clinton hashtags were generated by automated accounts.
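The arithmetic behind those figures is straightforward to reconstruct. A toy sketch with invented tweet records (the real analysis, of course, also had to classify which accounts were automated):

```python
# Toy reconstruction of the statistic quoted above: the share of tweets
# using each side's hashtags that came from automated accounts. The
# tweet records are invented; only the arithmetic mirrors the report.
tweets = [
    {"hashtag": "#makeamericagreatagain", "from_bot": True},
    {"hashtag": "#makeamericagreatagain", "from_bot": False},
    {"hashtag": "#draintheswamp",         "from_bot": True},
    {"hashtag": "#imwithher",             "from_bot": True},
    {"hashtag": "#imwithher",             "from_bot": False},
    {"hashtag": "#imwithher",             "from_bot": False},
]

def bot_share(tweets, hashtags):
    """Fraction of tweets on the given hashtags that came from bots."""
    subset = [t for t in tweets if t["hashtag"] in hashtags]
    return sum(t["from_bot"] for t in subset) / len(subset)

trump_share = bot_share(tweets, {"#makeamericagreatagain", "#draintheswamp"})
clinton_share = bot_share(tweets, {"#imwithher"})
print(trump_share, clinton_share)
```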
Woolley said that the bots were acting as a prosthesis for small groups of people to affect conversation on social media. This bot activity was often picked up by the “real media” to show who had a lot of support online.
“But what we found was that a lot of traffic surrounding Donald Trump and Hillary Clinton is actually manufactured,” he said.
This has been seen in the past: such bots were used in the 2010 special election to fill Ted Kennedy’s Massachusetts Senate seat.
A conservative group in Iowa, the American Future Fund, set up nine Twitter accounts that sent 929 tweets and reached more than 60,000 people with messages accusing the Democratic candidate in the race, Martha Coakley, of being anti-Catholic, the researchers found.
“Political actors and governments worldwide have begun using bots to manipulate public opinion, choke off debate, and muddy political issues. Political bots tend to be developed and deployed in sensitive political moments when public opinion is polarized,” Woolley and his colleagues wrote in their report.
“The problem is that a lot of people don’t know bots exist, and that trends on social media or even online polls can be gamed by bots very easily.”
It has been estimated that the ratio of bots to humans on the internet is roughly 50-50.
A 19-year-old British programmer, Joshua Browder, has built himself a law bot that handles questions about parking-ticket appeals in the UK. Since launching in late 2015, it has successfully appealed more than $3 million worth of tickets.
Once you sign in, a chat screen pops up. To learn about your case, the bot asks questions like, “Were you the one driving?” and “Was it hard to understand the parking signs?” It then spits out an appeal letter, which you mail to the court. If the robot is completely confused, it tells you how to contact Browder directly.
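That question-and-answer flow can be sketched as a simple rule-based script. The questions are the ones quoted above, but the grounds and letter text are invented; this is not Browder's actual code:

```python
# Hypothetical sketch of the flow described above: a few yes/no answers
# feed a template appeal letter, and if no grounds are found the bot
# "gives up" (the point where the real site refers you to Browder).
QUESTIONS = {
    "driver": "Were you the one driving?",
    "unclear_signs": "Was it hard to understand the parking signs?",
}

def build_appeal(answers):
    """Turn the user's answers into an appeal letter, or return None
    when the bot has no grounds and a human should take over."""
    grounds = []
    if not answers.get("driver"):
        grounds.append("I was not the driver of the vehicle at the time.")
    if answers.get("unclear_signs"):
        grounds.append("The parking signage was unclear.")
    if not grounds:
        return None  # bot is confused -> contact a human
    return ("Dear Sir or Madam,\n" + "\n".join(grounds)
            + "\nI therefore request that this ticket be cancelled.")

print(build_appeal({"driver": True, "unclear_signs": True}))
```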
The site is still in beta, and the full version will launch this spring.
Since laws are publicly available, bots can automate some of the simple tasks that human lawyers have had to do “manually”.
Beyond parking tickets, Browder’s bot can also help with delayed or cancelled flights and payment-protection insurance (PPI) claims.
This takes most of the expensive legwork out of a court case, though of course the bot can’t argue a case in front of a judge.
Browder programmed his robot based on a conversation algorithm. It uses keywords, pronouns, and word order to understand the user’s issue. He says that the more people use the robot, the more intelligent it becomes. Its algorithm can quickly analyze large amounts of data while improving itself in the process.
“As a 19-year-old, I have coded the entirety of the robot on my own, and I think it does a reasonable job of replacing parking lawyers. I know there are thousands of programmers with decades more experience than me working on similar issues,” he said.
At the moment the bot cannot give subjective advice, because that would amount to practising law, which only humans can legally do.