Internet trolling can have devastating psychological impacts on its victims, so what leads someone to engage in cyber violence? We speak to Dr Gareth Tyson, a leading researcher on the dark side of the Internet, to find out.

What is a troll?

Dr Tyson defines trolls as “people who try to disrupt natural online interactions”. He elaborates, “Many people think of trolls as being particularly hateful characters who try to spread malicious behaviour, but for most, this isn’t actually what trolling is about.”

Some researchers suggest that trolling behaviour is limited to a vocal and sociopathic minority, emerging most often among individuals with “Dark Tetrad” personality traits: psychopathy, Machiavellianism, narcissism, and sadism.

However, researchers from Cornell University suggest that trolls are often ordinary people and that trolling is a situational, rather than an intrinsic, characteristic. In other words, anyone can become a troll. Their study outlines two key factors that make someone more likely to become a troll: their emotional state and the context of the discussion.

An individual’s emotional state is in constant flux. The research shows that someone is more likely to engage in trolling late at night or at the beginning of the working week. Additionally, seeing “troll comments” in a discussion has a knock-on effect: it makes readers twice as likely to join in and become trolls themselves.

What this suggests is that anyone who “wakes up on the wrong side of the bed” or is simply having a bad day can slip into trolling behaviour. Individuals then feed off each other’s negativity, and discussions or entire online communities can become overrun with troll posts.

The psychology behind trolling: causes and motivations

So where does this behaviour stem from?

Dr Tyson suggests that the factors behind trolling can be separated into two key groups: barriers and incentives.

“There are very few barriers to engaging in troll-like behaviour online. Things like content moderation and filtering exist, but often moderators are quite slow to react. You can be anonymous and start behaving in a way that you would never do in a physical space. It’s called the online disinhibition effect.

“The incentives for trolling are probably very much aligned with the incentives that exist in the physical world. I’m sure the incentives to bully online are similar to those that lead people to bully in the physical world, like insecurity or problems in other aspects of life. The way to deal with that is to lash out at others.”

Another important factor is that there isn’t the same pushback in the online world as there is in the physical world, meaning that trolls can escape the consequences of their behaviour under the guise of anonymity. Additionally, the ability to “upvote” or “downvote” can exacerbate this behaviour.

Dr Tyson comments, “It can encourage more and more extreme behaviour because someone may find themselves falling down this rabbit hole where they say some pithy remark about somebody and gain a lot of likes. They then want to re-experience that endorsement. So they may say something even nastier the next time in the hope that it will get even more likes.” In the same way, being “downvoted” can propagate more negativity in a community. 

Another important motivating factor is the sheer scale of the online world. Dr Tyson provides some context: “If you bully another child in the physical world, you might get support and endorsement from three other children, perhaps in your immediate social group. In the online world, you’ve got millions of people who will potentially support you. And that scale can rapidly turn into an echo chamber.”

According to Dr Tyson, “Trolling behaviour exists in all of us.” The fact that anyone could become a troll is something we all need to be aware of. If we engage in these online communities, it’s important to recognise when we are having a bad day or feeling vulnerable or angry and to reflect on those feelings before engaging in trolling. 

Although anyone is capable of trolling, it’s important to note how systematic dedicated trolls are, since this helps pinpoint methods of prevention.

“I think what separates us from a troll is how systematic they are in that process. Trolls do this almost as an occupation, they do it because they enjoy it, and often they’re very much aware that they enjoy it.”

What’s being done to prevent trolling?

Trolling seriously impacts its victims, leading to psychological effects such as depression and anxiety, physical symptoms, and sometimes suicide. And it is widespread. An EU survey of 8,000 young people found that two-thirds of 16-19-year-olds engage in some form of cybercrime or cyber deviance, with one in four having trolled or tracked someone online.

In 2022, new measures were added to the United Kingdom’s Online Safety Bill to fight anonymous trolls online. Under the proposed legislation, social media companies will have to give users more control over who can interact with them, including the ability to block people and to choose what they see on their feeds.

Although this is a step in the right direction, Dr Tyson adds, “In practice, it’s very time-consuming and it’s after the event. So potentially the harm has already been done and it only applies to ourselves. So if I block somebody, it doesn’t stop them from posting abuse to another person. Because of that, the most effective strategies are embedded within the social media companies.”

The main strategy is employing a team of human moderators to determine what is and isn’t OK to say. Once trolling is detected, that information is fed to a team of machine learning specialists, who build models that automatically learn patterns of misbehaviour and help automate the moderation process.

Dr Tyson outlines a major problem. “These models are often effective in certain situations, but very ineffective in others. For instance, where you have new styles of behaviour emerging, it slips through the automated moderation and then has to go to the human moderators, and inevitably takes more time.”
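To make the idea of models that “learn patterns of misbehaviour” concrete, here is a minimal sketch in Python. It assumes the scikit-learn library and a handful of invented, moderator-labelled comments; it is not the system any particular platform uses, only an illustration of the general approach in which comments flagged by human moderators become training data for a classifier that scores new comments automatically.

```python
# Minimal, illustrative sketch of an automated moderation model.
# The comments and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical comments labelled by human moderators:
# 1 = flagged as trolling/abuse, 0 = acceptable.
comments = [
    "great point, thanks for sharing",
    "you are an idiot and nobody wants you here",
    "interesting article, I learned a lot",
    "go away, everyone hates your posts",
]
labels = [0, 1, 0, 1]

# TF-IDF features plus logistic regression: a simple baseline that learns
# which word patterns co-occur with comments flagged by moderators.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# Score a new comment; in a real pipeline, high-scoring comments might be
# auto-actioned or routed back to the human moderation queue.
new_comment = ["nobody wants to read this, you idiot"]
print(model.predict_proba(new_comment)[0][1])  # probability of "flagged"
```

A sketch this simple also illustrates the weakness Dr Tyson describes: the model can only recognise patterns resembling those it was trained on, so genuinely new styles of abusive behaviour tend to slip past it until human moderators flag fresh examples.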

He also notes the importance of social media companies finding a nuanced balance when controlling behaviour on public platforms, since one person’s hate speech is another’s free speech. 

Education from a young age also plays a big role in preventing trolling. People need to be taught about the tools available to them and how to maximise control over what they see and who they interact with. It’s also important to remind people of the value of kindness online.

Dr Tyson adds, “Most of us are hardwired to experience empathy in physical interactions. You can see the person in front of you and immediately recognise the impact you’re having on them, but empathy and context can disappear in the online world.”

With technology constantly evolving and new online spaces and communities developing, the future of trolling is uncertain. Dr Tyson remains hopeful:

“I certainly feel optimistic. I think many people forget just how nascent the online worlds are. The social media world that we operate in has only existed for maybe 15 years, and human beings are particularly good at reapplying prior experiences in completely different contexts. I hope that as we move forward, the tooling and the design of these websites will improve to discourage that type of behaviour in the same way that society discourages it in face-to-face discourse.”

Dr Gareth Tyson is an assistant professor at Hong Kong University of Science & Technology. He is also a Fellow at the Alan Turing Institute, co-leads the Social Science Lab, and is Deputy Director of the Institute of Applied Data Science, among other roles. Dr Tyson’s research has been awarded numerous accolades including Facebook’s Shared Task on Hateful Memes Prize 2021.

Photo by Mark König on Unsplash