Currently, the internet hosts most public communication, social discourse, politics, entertainment, and many other social activities, and it even shapes people's identities. The continuing expansion of digital networks inevitably increases the amount of content that must be moderated online. At the heart of this worldwide debate lies a contentious question: when platforms intervene to protect users from abuse, is this ultimately for the benefit of society or at its expense?

The main argument here is that controlling and managing information on the internet is conducted primarily not to suppress speech, but to protect users of digital services from harm. This contrasts with another, more contentious view, which holds that such control can undermine the right to free expression, regarded as one of the internet's essential functions. The most appropriate answer depends on which category of justice is more significant, which injustice is greater, and which party has the authority to decide, along with the extent of that authority.

Platform Moderation

Content moderation did not originate in the digital space as a political move. Initially, the main issues online platforms had to handle were spam, terrorist material, and the distribution of sexually explicit content. However, as these services grew in popularity and use, the scope and scale of interventions expanded.

At present, moderation's scope is not limited to removing unauthorized material. Content moderation now encompasses a wide spectrum of activities, from monitoring exchanges between users and public figures to countering abusive practices in cyberspace, as well as policing bots and other machine-generated activity and imposing restrictions on accounts, up to and including suspension or removal.

What Counts as Harm?

Quite often, the problems in these dichotomous arguments come down to the fundamental question of what constitutes harm. Some harms are explicit and easily measured, such as physical violence or criminal threats. Many others are inherently difficult to quantify, such as the broader consequences of exploitation or of disinformation during a pandemic. Another complex category is societal harm: harm that emerges simply from the existence of certain social structures and norms. Even the seemingly objective harms that justify moderation, such as the prevention of perceived or potential harm, are often open to interpretation.

Suppression of Speech

Censorship is the suppression of speech or other communication considered objectionable, harmful, sensitive, inconvenient, or inappropriate, or the unnecessary suppression or control of communication for personal reasons. Historically, it has meant government intervention and direct state control over most published works. When the analysis of censorship focuses on the producers of propaganda and of recreational or commercial content rather than on the audience, those producers are seen to operate within a framework of control.

It is tempting to insist that multifunctional IT services are obliged to carry whatever information users post, but these are private companies, and each can decide what kind of content it allows and what it does not. However, the boundaries have blurred because of how large platforms such as Meta, YouTube, and X (the rebranded Twitter) have become. When a few companies own media at a global scale, their moderation policies can facilitate, if not actually serve as cover for, discreditable state practices.

The Case for Safety

At the same time, one should not overlook the harms that moderation is meant to address. Organised harassment campaigns, online radicalisation, disinformation around elections and public health, and algorithmically amplified extremism are now some of the most urgent problems. Many examples show how heavily social media has been implicated, from spreading hate speech that fueled genocide in Myanmar to interference in elections in the US and Europe, underlining the risks when powerful actors' online hate campaigns are left unchecked. Those advocating stricter limits on speech argue that 'space', especially 'cyberspace', is not value-neutral. A lack of regulation often benefits those most skilled at exploiting the system. Harmful content, abuse, and discriminatory behaviour online mainly target marginalised groups: women, LGBTQI+ persons, people of colour, and media practitioners. For them, moderation is certainly not censorship; it is critically necessary.

Who Decides?

A central dispute concerns who has the authority to make these decisions. Who can declare that something is harmful? Who establishes the guidelines? Who enforces compliance? Currently, these key questions are answered by a combination of platform policy teams, government regulators, the judiciary, and, where applicable, public lobbying.

Some platforms have introduced review boards or independent auditors to evaluate controversial decisions, but their involvement has been criticised as too limited in scope and lacking real decision-making power. Practical proposals include enacting tougher laws to clarify the boundaries of online speech. For example, the EU's Digital Services Act increases the accountability of large platforms and requires more transparent operation, while other states have proposed fines for platforms that fail to remove dangerous online content. Nonetheless, legal remedies have drawbacks of their own, especially when such interventions invoke 'public safety' to justify excessive surveillance or political repression.

Summary

Liberty or security? The digital age does not present a clear choice. What exist are increasingly complex questions that must be addressed sooner rather than later: questions of authority, of good and harm, and of freedom of speech in society. These questions bear directly on how we regulate speech on the internet, and the decisions we make now will shape the online community of the future.

Speech also shapes how people think and where they feel encouraged to speak. A person may feel silenced or intimidated by measures that do not formally prevent them from speaking. The issue therefore extends beyond intellectual debates and the cleverness of lawyers to the relationship between the state and society, especially where the protection of morals and religion is concerned.
