We started looking into this topic around late March 2020, when anxiety about COVID-19 was running high and news reports began to surface of hate speech, physical attacks, and harassment against people of Asian descent. Having spent a lot of time working in the domain of online social media, we turned to this problem of hate speech as it plays out on online platforms. While hate speech was spreading on social media, there were also people countering it in support of people of Asian descent: two competing narratives were spreading simultaneously across social media platforms. We then began collecting Twitter data related to this phenomenon, starting from January 2020: we essentially crawled millions of tweets from hundreds of thousands of users on these topics, and we conducted one of the first analyses of anti-Asian hate speech and counterspeech on social media.
First, we created a hand-labeled dataset of around 3,200 tweets to train a classifier. Next, we used the classifier to identify hate speech and counterspeech in the rest of the data. In total, we identified 1.3 million tweets containing anti-Asian hate speech and 1.1 million tweets containing counterspeech. With this large-scale data, we ran different types of analysis to understand how hateful comments were spreading, how users were spreading both hate speech and counterspeech, and how these two narratives were influencing each other. One of our most important findings was that the more hate speech you are exposed to, the more likely you are to make hateful comments yourself: if a lot of your friends, meaning a lot of the people in your social platform neighborhood, are spreading hate, you are more likely to spread hate as well. In other words, hate speech is contagious! However, there is some hope, as we found initial evidence that counterspeech can slightly reduce the uptake of hate speech by others: there is a small inhibitory effect, with counterspeech discouraging users from making hateful comments in the first place.
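The text above does not specify what kind of classifier was used, so here is a minimal, illustrative sketch of the "label a small set, then classify the rest" workflow, using a from-scratch multinomial Naive Bayes text classifier. All training examples, labels, and class names below are invented placeholders for illustration, not the actual dataset or model:

```python
# Illustrative sketch only: a tiny bag-of-words Naive Bayes classifier
# standing in for the real (unspecified) hate/counterspeech classifier.
import math
from collections import Counter, defaultdict

def tokenize(text):
    """Lowercase whitespace tokenization; real pipelines would do more."""
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.class_counts = Counter()               # label -> number of examples
        self.word_counts = defaultdict(Counter)     # label -> token frequencies
        self.vocab = set()

    def fit(self, examples):
        """examples: iterable of (text, label) pairs, e.g. hand-labeled tweets."""
        for text, label in examples:
            self.class_counts[label] += 1
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)

    def predict(self, text):
        """Return the label with the highest (Laplace-smoothed) log-probability."""
        total = sum(self.class_counts.values())
        best_label, best_lp = None, float("-inf")
        for label in self.class_counts:
            lp = math.log(self.class_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in tokenize(text):
                lp += math.log((self.word_counts[label][tok] + 1) / denom)
            if lp > best_lp:
                best_label, best_lp = label, lp
        return best_label

# Hypothetical hand-labeled examples (sanitized stand-ins, not real tweets).
train = [
    ("the virus is your fault go away", "hate"),
    ("blame them for the virus they should go away", "hate"),
    ("stop the hate asian people are not to blame", "counter"),
    ("racism is not acceptable stop blaming people", "counter"),
]

nb = NaiveBayes()
nb.fit(train)
# Once trained on the labeled set, the classifier labels the remaining tweets:
label = nb.predict("stop blaming asian communities")  # → "counter"
```

The same two-stage pattern scales to the study's setting: a few thousand hand-labeled tweets train the model, which then assigns labels to the millions of unlabeled tweets.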
Again, we are seeing the same theme here: regular users speaking up is one of the most effective defenses against malicious actors and activities. Essentially, we not only need computational tools to help us identify these malicious activities, but we also need community-driven efforts to effectively counteract them. We need regular users to be more aware of these issues, and we need them to be more proactive and to speak up when they see bad behavior – for instance, by simply flagging inappropriate content.