Can we put a stop to cyber harassment?

Cyber harassment via social networks, media, and other online channels is an everyday reality for too many people, and the problem is getting worse.

It might seem inevitable, as people are spending more and more time online, but Matthieu Boutard, Managing Director at Bodyguard, a French technology start-up that protects users against cyber-bullying, hate speech and toxic content online, believes that to understand what is fueling the rise of cyber harassment, we should look at the current social and economic context.

“The Covid-19 pandemic’s social restrictions, repeated lockdowns, and travel and movement limitations have led to isolation, job loss, and increased stress and anxiety. These have, in turn, led to high levels of frustration, as well as a stronger tendency to react in the wrong way (e.g., to be more aggressive or hurtful towards others),” he told Help Net Security.

“The second point worth considering is the evolution of internet culture. Users did not act like this when the internet first came around, but gradually toxic content started to appear: first a hurtful comment here and there, then more and more hateful comments as people saw that posting toxic content went ‘unpunished’ by platforms. From there, things kind of spiraled out of control.”

In this interview, Boutard talks about the issue of cyber harassment and the challenges of blocking/preventing it.

[Answers have been edited for clarity.]

Cyber harassment does not happen only through social networks. What other online/electronic mediums are abused by harassers?

There is a very thin line between harassment and cyber harassment. Back in the day, you could physically “walk away” from your harassers. Now, however, harassers can reach you via a multitude of online channels, and they are getting very creative in finding new ways to reach you.

In some cases, harassers who could not reach their victims through the usual channels found ways to get through to their victims’ networks – for example, by posting the victim’s personal information on geo-localized platforms based on their location.

Nowadays we see cyber harassment affecting even sites such as Vinted (an online marketplace for secondhand clothing) and LinkedIn. Basically, wherever there is a possibility for communication, harassers will use it to reach their targets, even on platforms that are not primarily designed for socializing.

What are the various challenges when it comes to detecting cyber harassment on a large scale on social media?

To have the desired effect (i.e., protect people), cyber harassment detection on social media needs to be preventive and happen in real time. It needs to work even for live streaming and for large volumes of content posted simultaneously or continuously.

This is difficult for social media, as it was traditionally built around human moderation. The danger is that if a comment takes too long to remove, the damage is already done: it has reached the person or people it was intended to harm.

Another challenge is the fact that people have different sensitivities and moderation needs: what will hurt me could leave you completely unaffected, and vice versa. Because of this, a one-size-fits-all approach does not work. People need the ability to determine what bothers them personally and what their community is sensitive to.
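To make this concrete, here is a minimal sketch of what per-user moderation sensitivity could look like in code. The category names, severity scores, and thresholds are hypothetical illustrations, not Bodyguard’s actual implementation.

```python
# Hypothetical sketch of per-user moderation sensitivity.
# Category names, severity scores, and thresholds are illustrative only.
from dataclasses import dataclass, field


@dataclass
class ModerationPrefs:
    # Per-category thresholds: hide a comment once its detected
    # severity (0.0-1.0) meets or exceeds the user's threshold.
    thresholds: dict = field(default_factory=dict)


def should_hide(detected: dict, prefs: ModerationPrefs) -> bool:
    """Hide the comment if any detected category crosses the user's threshold."""
    return any(
        severity >= prefs.thresholds.get(category, 1.0)
        for category, severity in detected.items()
    )


# The same comment produces different outcomes for users with
# different sensitivities:
comment_scores = {"insult": 0.85}  # pretend a classifier produced this
thick_skinned = ModerationPrefs(thresholds={"insult": 0.9, "racism": 0.1})
sensitive = ModerationPrefs(thresholds={"insult": 0.5, "racism": 0.1})
print(should_hide(comment_scores, thick_skinned))  # False: within tolerance
print(should_hide(comment_scores, sensitive))      # True: hidden for this user
```

The same comment is hidden for one user and shown to another, which is the essence of moving away from one-size-fits-all moderation.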

Some people’s livelihood is very dependent on using social networks and media, so they can’t stop using them even if the cyber abuse they are experiencing takes a toll. Your apps come between the target and their harasser and prevent the harassing messages from being delivered. What’s the feedback from your customers?

You are completely right – we’ve seen so many cases where people need to use the internet as part of their daily work, but know they must brace themselves for the wave of toxic comments coming their way. This is the main reason Bodyguard exists: to allow people to focus on their work without having to dread opening their notifications.

Many of our users feel a great sense of relief as they can now focus on doing what they set out to do in the first place: create content and have meaningful exchanges with their community. We’ve noticed that some users create more content after starting to use Bodyguard.

As for the harassers, some just give up as there is no reaction from their victims (who don’t see the toxic comments). Others stop because they realize that their comments are monitored and worry that legal action could be taken against them.

If you’ve succeeded in creating a solution for blocking harassing messages, it seems logical that social networks, media, streaming and gaming platforms should have succeeded, too. In your opinion or to your knowledge, why haven’t they created or implemented similar tools themselves?

Many social networks and Big Tech companies use tools that focus on the technological aspect (i.e., machine learning) rather than the social aspect. They keep trying to enhance their ML models, but improvement is slow and the models still cannot offer fine-tuned moderation. They do a good job moderating things like racism and homophobia and detecting widely used toxic keywords, but they often fail on toxic content that is more nuanced.

Creating a technology capable of detecting context, nuance, and the relationships between people has been an interesting challenge. We also needed to make sure we can incorporate the latest toxic content trends as soon as they appear: any delay in adding new toxic terms to a moderation solution is a window during which people are left unprotected against them.

Unlike machine learning, where the algorithm needs lots of data and time to “learn” what to do, our approach relies on a team of Natural Language Processing (NLP) specialists who monitor social media and the latest toxic content trends to make sure we detect everything as soon as it appears.
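As a deliberately simplified illustration of that difference, a specialist-curated rule list can start protecting against a newly spotted toxic term the moment it is added, with no retraining step. The sketch below is a toy assumption, not Bodyguard’s engine; real context- and nuance-aware detection is far more involved.

```python
# Hypothetical sketch: a specialist-curated term list takes effect
# immediately, with no model retraining. This is illustration only.
import re

# Specialists maintain this list; a newly spotted toxic term or coded
# insult can be appended and start matching the same day.
TOXIC_PATTERNS = [
    re.compile(r"\byou suck\b", re.IGNORECASE),
    re.compile(r"\bnewly-spotted-slur\b", re.IGNORECASE),  # placeholder entry
]


def is_toxic(comment: str) -> bool:
    """Return True if any curated pattern matches the comment."""
    return any(pattern.search(comment) for pattern in TOXIC_PATTERNS)


print(is_toxic("honestly, you suck at this"))  # True
print(is_toxic("great stream today!"))         # False
```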

Which platforms and languages do you aim to support in the next 1-3 years?

We released our business moderation solution recently and have seen high interest in it, with many platforms aware that they need moderation and willing to invest in solving this problem. Businesses can use our technology with any platform, network, community, or app via an API.
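For illustration, this is roughly what integrating a moderation API into a platform’s backend could look like. The endpoint, fields, and response shape below are assumptions, not Bodyguard’s documented API.

```python
# Hypothetical sketch of integrating a moderation API into a platform
# backend. The URL, headers, and JSON fields are assumptions for
# illustration, not Bodyguard's documented API.
import requests

API_URL = "https://api.moderation.example/v1/analyze"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"


def is_publishable(comment: str) -> bool:
    """Ask the (hypothetical) moderation service whether a comment is safe."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": comment, "language": "en"},
        timeout=5,
    )
    response.raise_for_status()
    # Assume a response shape like {"toxic": true, "categories": ["insult"]}.
    return not response.json().get("toxic", False)


# A platform would call this before displaying user-generated content:
if is_publishable("example comment"):
    print("publish comment")
else:
    print("hide comment and flag for review")
```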

In terms of our solution for individuals, it is currently available for Twitter, YouTube, Instagram, and Twitch. We would love to extend it to other platforms such as TikTok in the future, but we are limited by these platforms’ APIs, which don’t give us the data access we would need to protect their users.

At the moment our technology can protect people in English, French and Italian. We’re working on adding Spanish and Portuguese in the short term, and then we’d love to focus on expanding into the Asian market.

What’s your opinion on the effectiveness of current laws against cyber harassment (in France and the U.S.)?

In France there are actually no laws specifically targeting cyber harassment. We’re dealing with pre-internet laws that miss much of the current context. For a victim who goes to court, it’s a long and difficult process to win. For example, the first time a cyber harasser was sentenced to prison was two months ago. There was a proposal for a new law in France, the “Avia Law,” but it didn’t go very far due to political tensions surrounding freedom of expression.

In Europe, there are new discussions around the Digital Services Act (DSA), which aims to ensure a safe and accountable online environment. I personally think this is a move in the right direction, but we’ll have to wait and see how it evolves.

As far as the U.S. is concerned, I would say that the majority of Americans can see that cyberbullying, cyber harassment, and cyberstalking are real problems that are difficult to solve. This is evidenced by the growing number of states either introducing new cyber harassment legislation or altering existing statutes to include online activity for traditional crimes (usually stalking and harassment).

More and more US states are starting to criminalize cyber harassment, but defining consistent and fair consequences remains tricky. The intent is clear: the legislation introduced has a secondary purpose of penalizing the perpetrators of these cyber crimes, but its primary objective is to stop cyber harassment. Punitive codes with sections informed and defined by research into the varying instances and severities of cyber cases will go a long way toward helping this legislation achieve those goals.