When does malice on social media become a crime?

Over the past 20 years, the internet has become an integral part of our lives, and it’s not going anywhere. The Covid-19 pandemic has only further strengthened its power.

With the internet has come heavy use of social media. Social media serves as an amplifier of the human condition: the good, the bad, and the ugly – you’ve got it all online, with receipts.

A topic that needs direct dialogue in this regard is: what happens when you face malice in the virtual world? Is it as in-your-face as it would be in the physical world? Is it more? Is it less? And at what point does malice online become a crime? The Sunday Morning Brunch spoke to several people to find out.

 

“We can appreciate the tough balancing act that social media companies have to maintain, given their public commitment to freedom of expression. Maintaining this cardinal value is critical to guarding against online discourse being forcibly sanitised”  Media analyst Nalaka Gunawardene

At what point does information shared on social media become malicious?

 

Part of what breeds malice online is the ease of sharing information with no consequences, be it true, false, or simply unverified. We spoke to media analyst Nalaka Gunawardene, who explained the different kinds of malicious information that are often spread online.

Gunawardene explained that there are three main forms of inappropriate information in the online space: (i) disinformation, which is manipulated information (or total fabrications) created and disseminated to mislead or cause harm; (ii) misinformation, which is when such disinformation is received and shared by persons who don’t realise it is false; and finally (iii) malinformation, which is when true information is shared with a clear intent to inflict harm on a person, organisation, or country. This is done either by moving private information into the public arena or using an individual’s affiliations, like religion, against them.

Gunawardene explained that global non-profit organisation First Draft, which works on issues related to what is now called “information disorder”, has noted that disinformation, misinformation, and malinformation are all part of the same contaminated information ecosystem.

On a personal level, malinformation, cyberviolence, and cyberharassment often overlap, with common examples being a woman whose ex-boyfriend threatens to release compromising photos or videos following the end of a relationship, or a person’s sexual orientation being made public to humiliate or harm them. Such disclosure is especially damaging when there is no public interest in it, and particularly so in a country where same-sex relations are still illegal.

Sharing malicious information on social media to the point of causing harm is something that is very prevalent because of the anonymous and impersonal nature of the internet. “People don’t realise that social media is a place where you do need to take responsibility for what you say and share,” explained researcher and activist Vraie Balthazaar, adding that being behind a screen makes people say things they would normally never say to another person.

 

“People don’t realise that social media is a place where you do need to take responsibility for what you say and share”  Researcher and activist Vraie Balthazaar

When does malinformation cross the line and become a crime?

 

“That all depends on the harm it can cause and the specific laws in the country concerned, as well as circumstances like existing societal tensions and timing,” Gunawardene said. He also shared that some types of hate speech and harassment fall under the malinformation category, as people are often targeted because of their personal history or affiliations.

“Hate speech has various definitions internationally, but what is important to note is that hate speech, as it is understood in general usage, is different from what is prohibited as hate speech in legal terms. The latter requires an element of incitement (and not just the advocacy of hatred). The timing and context of disinformation or malinformation can also make its spread a greater criminal offence. For example, the release of either disinformation or malinformation can cause particular harm to a candidate during an election. Sometimes disinformation can be combined with hate speech, trying to demonise a candidate, a political party, or even the election authority,” he further explained.

Gender rights consultant Sharanya Sekaram was also of the view that intent is the determining factor in malicious information, noting that the minute you attack the person rather than the idea, or drag in a personal issue, it crosses the line.

“There is always a way to argue and debate respectfully without descending into malice. With women, malice becomes very gendered, including rape threats and insults like b*tch, wh*re, and sl*t, editing pictures of them, making memes insulting their character, and so on. The intent of sharing to cause harm is what makes it a crime,” Sekaram said.

 

How do you respond when you are the victim?

 

“With women, malice becomes very gendered, including rape threats and insults like b*tch, wh*re, and sl*t, editing pictures of them, making memes insulting their character, and so on. The intent of sharing to cause harm is what makes it a crime”   Gender rights consultant Sharanya Sekaram

Gunawardene shared that Article 20 (2) of the International Covenant on Civil and Political Rights (ICCPR) states that “any advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence shall be prohibited by law”, adding that Sri Lanka’s ICCPR Act No. 56 of 2007, which reproduces Article 20 of the global ICCPR, says in Section 3 (1) that “no person shall propagate war or advocate national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence”.

However, in practice, and with social media in particular, Gunawardene explained that all the main social media platforms have their own rules that are mandatory for all users, such as Facebook’s Community Standards and YouTube’s Community Guidelines, through which the platforms attempt self-regulation. These efforts are, however, not always successful, because policing platforms is a huge challenge given the volume of content generated each day.

Sekaram commented on responding as a victim, highlighting that reporting to social media platforms is not always effective. “Most of the time, they don’t understand the context, or it’s an algorithm that you report to. The line is blurred because it’s online, but it is still every bit as harmful as offline harm. Our online and offline lives don’t exist separately. We don’t always understand that what happens online is a crime. An idea that we have is that if we get offline, this harm won’t happen, but that is very victim-blaming behaviour. Why shouldn’t we exist online?”

 

Where does the law stand?

 

Yeheliya Foundation Director Attorney-at-Law Tarangee Mutucumarana explained that there are significant gaps in Sri Lanka’s laws that limit how cyberviolence and harassment on social media can be handled, noting that cyberviolence is most often fought by citing offences in Sri Lanka’s Penal Code, largely treating these acts as physical crimes rather than crimes committed online.

“The existing law is, unfortunately, not good enough to counter the new form of attacks. That’s the problem and that’s why we need to amend existing laws to include these new areas or have a completely separate new act that deals with these specific crimes”  Yeheliya Foundation Director Attorney-at-Law Tarangee Mutucumarana

For example, extortion online is pursued under Section 372 of the Penal Code, which deals with extortion; online sexual harassment under Section 345, which deals with harassment; and the sharing of nude images on social media under Section 284, which deals with obscene publications.

“The existing law is, unfortunately, not good enough to counter the new form of attacks. That’s the problem and that’s why we need to amend existing laws to include these new areas or have a completely separate new act that deals with these specific crimes. We have the Computer Crimes Act, but it’s not directed at these types of crimes – it is directed mostly towards using computers without authorisation. Other than Section 7 of the Computer Crimes Act, nothing can be used to solve crimes like cyberviolence, cyberbullying, and similar,” Mutucumarana explained.

She also addressed forms of recourse, sharing that while several options exist, they are not always ones victims want to pursue, owing to security concerns, the possibility of being blamed by law enforcement, or people finding out. Mutucumarana highlighted that even at the court level, proceedings take place in front of hundreds of people, which makes victims very uncomfortable, especially if their families do not know about the crime they are reporting. There are also issues with evidence being improperly handled and made visible to parties who are not directly involved with the case.

 

How can we move forward?

 

To address crime and violence on social media, Mutucumarana stressed that there needs to be a centralised act that addresses cyberviolence and cybercrime with adequate punishments outlined, as opposed to laws and penalties spread throughout the Penal Code, along with systems that protect the privacy of victims during and after proceedings. She also highlighted that organisations like Women In Need (WIN) and The Grassrooted Trust are able to advise victims of cybercrime on their options and help them find the best recourse possible.

While acknowledging the need for a quicker and more effective mechanism of redress, Balthazaar explained that education also plays a huge role. “Now that we’re in the digital age, it helps to educate young people about it. We need to talk about online values, morals, and ethics and include these in school curricula. It also helps for people to have knowledge of what happens online and how they can push back,” Balthazaar said.

Sekaram too stressed the need for education, noting: “We must teach children how to talk to others and how to debate as much as we teach values like saying ‘please’ and ‘thank you’. The online world needs to be integrated into how we teach children values.”

 

Being careful what we wish for

 

One of the most powerful things about social media is the power it gives individuals to have their voices heard, and while this power is often misused, its potential and its role in freedom of expression should not be forgotten.

“We can appreciate the tough balancing act that social media companies have to maintain, given their public commitment to freedom of expression. Maintaining this cardinal value is critical to guarding against online discourse being forcibly sanitised. Never forget that free speech includes the right to say things that may shock, offend, or disturb some people,” Gunawardene commented, adding: “I want to echo the caution sounded by the Council of Europe in 2017: ‘The topics of mis, mal and disinformation are too important to start legislating and regulating around until we have a shared understanding of what we mean by these terms’.”

 

Main pic credit: Photo © Priscilla Du Preez on Unsplash