
Michael Russ

August 5th, 2021

Transparency and freedom of reach: maximising the protection afforded to expression online



As countries around the world consider their approach to tackling online harms, it is worth considering the nuances of how freedom of expression can be protected on online platforms. Here, Michael Russ of the University of Edinburgh discusses the vital importance of transparency in online content curation. 

Over the last few years, there has been much discussion (and various lawsuits) concerning the capacity of internet platforms like Facebook, Twitter and Google to take away users’ ability to communicate on their services. Such discussion is often framed in terms of platforms’ power to “no-platform” their users, i.e. to remove their ability to communicate effectively. Yet this discussion, while addressing significant concerns about the power of private platforms to curtail expression (or not), tends to miss an important aspect of how platforms interfere with expression.

In addition to the potential to no-platform their users, platforms have a host of “content curation” measures designed to limit user engagement with content, such as “legal but harmful” mis/disinformation. They can do this directly, by limiting the scale of distribution (de-prioritisation) of content or preventing users from commenting on/sharing content. They can also do this indirectly by applying labels to content as well as preventing users from generating revenue from advertisements. As these curation measures do not lead to removal of content, they are generally not considered to be “real” interferences with the right to expression. This is because users are still able to communicate their views and to find information. Such interference is viewed as an invasion of “freedom of reach” and not freedom of speech: “soft” interferences that make it more difficult to impart or receive information.

But these interferences with reach via content curation do limit the ability to impart and receive information. Any effort made to reduce the distribution of particular content is a prima facie interference with a user’s right to impart information (imagine being told you can only whisper in a noisy pub). Any effort made to hide content from users, or to limit their engagement with particular content, is a prima facie interference with the right to receive information (imagine finding out that your supermarket lies about having your preferred newspaper in stock). The fact that these forms of curation allow users to impart, or receive, some information does not necessarily mean users are free from interference.

The European right to expression protected by Article 10 of the European Convention on Human Rights and Article 11 of the Charter of Fundamental Rights is not typically sensitive to such subtle interference. The question typically asked by the European Court of Human Rights is whether an individual subject to interference has effective access to alternative channels of communication. As the Court made clear in Mouvement Raëlien Suisse v. Switzerland, this alternative channel does not have to be equally effective for communicating information; it merely has to be effective.

This is in essence what platforms do when they use curation measures on content: they deny users the full use of their service, the easiest and most effective means of reaching an audience. Platforms can routinely deny access to the most effective form of a user’s expression in a way that does not ‘prevent’ users from imparting or receiving information. The question is thus how to ensure that the right to expression is sensitive to this interference. How can we identify and evaluate these curation measures as an interference with expression?

I suggest that the best way to do this is through the concern for transparency. Transparency used in this sense roughly describes: (1) the ability of users to identify an interference with expression and (2) the ability of users to identify why that interference is there. Those curation measures that necessarily give the user greater control over what they see and how they can interact with it (e.g. labelling) are less invasive than those that deny the user such options (e.g. de-prioritisation).

This is significant because it gives us some way of understanding not only curation as interference with the right to expression, but also a sense of the proportionality of that interference. Those forms of curation that are distinctly non-transparent, such as de-prioritisation, are greater invasions of expression than more transparent measures like labelling. Transparent measures give users the ability to identify the means and ends of the platform, while still being nudged not to engage. In this way, the question that should be asked when a user is subject to curation is not whether it ‘prevents’ the user from exercising their ability to impart or receive information. Instead, the question is how much the user knew about the interference in the first place.

These concerns have recently been raised by the European Commission with regard to content prioritisation and fact-checking labels. The EU has also been given scope to address such concerns through the Commission’s proposal for a Digital Services Act (DSA). The DSA includes measures to provide users with effective oversight of, and safeguards against, platform moderation techniques that ‘negatively affect’ them. Yet, at present, it is not entirely clear whether such safeguards extend to curation as well as to more traditional content removals. Such safeguards could mean giving users greater control over what content they see prioritised, as well as appeal mechanisms when their expression is curated (e.g. through labelling or demonetisation).

Furthermore, sensitivity to transparency might suggest a tiered approach to the implementation of curation measures. Those forms of curation that are distinctly non-transparent, such as de-prioritisation, could be restricted to content that has been specifically identified by the EU or a Member State (which in turn raises related problems of collateral censorship). This would encourage consistency with the least-restrictive-means element of the proportionality test inherent in EU human and fundamental rights law.

Such an emphasis on transparency would go some way to updating the scope of the right to expression to ensure that it is sensitive to newer forms of interference, as well as maximising the protections afforded to this right.

This article gives the views of the author and does not represent the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

About the author

Michael Russ

Michael Russ is a 4th year PhD candidate at the University of Edinburgh. Michael’s research explores content curation as a solution to the problem of mis/disinformation and its implications for freedom of expression in Europe.

Posted In: Internet Governance
