Blog Administrator

June 20th, 2017

On fake news, alternative facts and the era of Minority Report

Since November’s US presidential election, the issue of fake news has been debated probably more than ever before: where it’s coming from, how it spreads and whether or how attempts to stop it should proceed. Joanna Kulesza, professor of international law and Internet governance at the University of Lodz, Poland, argues that we need careful consideration of the laws around disinformation, and to rethink the role of ISPs in tackling fake news. For more on public policy responses to fake news, please see our policy brief.

Bad “fake news” and good “alternative facts”?

Germany is working on a Network Enforcement Law (Netzwerkdurchsetzungsgesetz), dubbed the law on “fake news”. It is aimed at service providers who fail to disable access to content they know, or should with due diligence have known, is “fake” or simply harmful: content that incites hate, prejudice or violence, or that falls into the well-known yet still ambiguous category of hate speech. The law itself introduces no new legal category of “fake” information, nor does it oblige service providers to decide what is true or false, or what the original author (the “content provider”, in legal “newspeak”) knew or should have known. It refers to well-established laws on hate speech, discrimination and incitement to violence, and takes the e-commerce Directive’s notice-and-act mechanism a step further: it requires transparent and effective procedures, sets a seven-day deadline for inspecting complaints, and introduces fines for providers who fail to block such illegal content quickly enough.
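To make the mechanics concrete, here is a minimal, purely illustrative sketch (in Python; the law prescribes no data model, and every name below is invented) of the deadline bookkeeping a provider would need under such a notice-and-act regime:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical sketch only: illustrates the seven-day complaint deadline
# a provider would have to track under the notice-and-act mechanism.
INSPECTION_DEADLINE = timedelta(days=7)

@dataclass
class Complaint:
    content_id: str
    reason: str  # e.g. "hate speech" or "incitement to violence"
    received: datetime = field(default_factory=datetime.utcnow)
    resolved: bool = False

    def deadline(self) -> datetime:
        # Last moment at which the provider can still inspect the complaint.
        return self.received + INSPECTION_DEADLINE

    def is_overdue(self, now: Optional[datetime] = None) -> bool:
        # An unresolved complaint past its deadline exposes the provider
        # to the fines the draft law introduces.
        now = now or datetime.utcnow()
        return not self.resolved and now > self.deadline()

# A complaint filed eight days ago and never inspected is overdue:
stale = Complaint("post-123", "hate speech",
                  received=datetime.utcnow() - timedelta(days=8))
print(stale.is_overdue())  # True
```

Note the asymmetry such a deadline creates: the provider is fined for blocking too slowly, but faces no comparable penalty for blocking too much, which is precisely the incentive problem discussed below.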

“Fake news” has become a buzzword in 2017, but the distinction between bad “fake news” and arguably permissible “alternative facts” is thin, if it exists at all. Even the first international attempt at “fake news” soft law seems confused about it. The Joint Declaration on Freedom of Expression and “Fake News”, Disinformation and Propaganda, headlined by the United Nations Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information, and authored by Article 19 and the Centre for Law and Democracy (CLD), is one of the very first attempts to turn the headlines, and the debate behind them, into law, however soft it may be at this time. The authors attempted to limit this ambiguous category to “state-sponsored” propaganda only.

While the authors of the joint declaration emphasize that “the human right to impart information and ideas is not limited to “correct” statements”, that it also “protects information and ideas that may shock, offend and disturb”, and that “prohibitions on disinformation may violate international human rights standards”, they imply that the right to share ideas and opinions “does not justify the dissemination of knowingly or recklessly false statements by official or State actors.” They go further, implying that “State actors should not make, sponsor, encourage or further disseminate statements which they know or reasonably should know to be false (disinformation) or which demonstrate a reckless disregard for verifiable information (propaganda)”.

The definition, and the attempted regulation behind it, fail on two levels. First, they fail to explain who the “official or State actors” are: is it only the authorities currently in office? The opposition? Does it include state-funded or state-sponsored media? The broader the definition, the harder the assumed duty to tell the truth is to justify and implement. Secondly, propaganda is not a legal category and as such cannot be considered illegal, as recently elaborated by the OSCE. Simply put, it is unreasonable to assume people will stop lying, and any law implying such a duty is bound to be irrational and ineffective.

A human right to lie?

One could even go as far as to argue that there is a human right to lie, originating in freedom of expression. Only in some circumstances are lies as such punishable by law (for example, in the banking sector and under unfair competition laws). Most often it is the consequences of the lies, such as the harm they cause to another’s reputation, that generate civil liability or criminal responsibility. A legal obligation to tell the truth rests only on particular categories of professionals, and it comes down to them being legally required to show due diligence in doing their job, rather than to establish objective, factual truth. Rightly so, as the truth lies in the mind of the beholder, as centuries of philosophical debate have shown. This also holds for journalists, as broad as that term may be in the era of “new media”, stretching from the BBC to YouTubers and bloggers. We require journalists to be diligent in their search for facts and in the way they report them.

This is, however, not the case for politicians, who at times get to act as “state authorities”. Politics and diplomacy have always allowed for half-truths, and at times simply relied on lies. Propaganda is not illegal. Neither is state funding of the media, whether official or unofficial. Bismarck’s secret “Reptile fund” was set up to buy positive media coverage of the Iron Chancellor’s politics, and the tradition has lived on among world leaders ever since.

What happened to the “undesired chilling effect” of the e-commerce Directive?

When the e-commerce Directive was being implemented, the obligation it placed on service providers to act as guardians of legal online content dominated media headlines. Human rights advocates warned of the undesired “chilling effect” that ISPs’ duty to block access to whatever they considered illegal content would have on free expression in the information society, in Europe and beyond. That criticism remains valid today, as it is service providers who act as the first and only instance in deciding which speech is allowed, and which is not, on the networks they manage.

But not only has this threat not been mitigated; the German proposal to make the fake news law a European standard takes it a step further. We never managed to ensure human rights guarantees against the unilateral decisions of ISPs acting as arbiters of free expression, and the current policy trend goes in the exact opposite direction. Rather than mitigating the undesired chilling effect, the “fake news” tide sweeps all those due process concerns aside, enforcing an ISP obligation to act expeditiously and effectively, i.e. to block first rather than ask further questions. Questions about the limits of free speech, artistic expression and hate speech have been difficult for national and international courts to answer; why should we assume that ISPs would have an easier time facing this challenge, or would do a good job tackling it? Yet here we are. We are even allowing the challenge to be taken to another, automated level, with ISPs introducing algorithms to help them fight hate speech, “fake news” and other “undesired” content.

Fake news algorithms – is it “The Minority Report” yet?

In their fight against online crime, but also against populism and discrimination, IT companies are reaching for the tools they know best: automated formulas for detecting and blocking particular content. Facebook and Google have already announced that they plan to use algorithms to detect fake news and make it less prominent. It seems common knowledge that algorithms cannot be trusted to interpret and enforce laws, even when supported by a single, snap human decision, as is the case with the virtual sweatshops enforcing “community standards”; yet this is precisely the ongoing trend. Not only are algorithms unreliable, they also lack transparency, making the decision to block content both opaque and harder to contest. Various research and policy groups have already looked into the human rights challenges posed by algorithms, and not one of them found automated human rights protection to be trustworthy. Why, then, would we want to go that way?
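To see why such automation is brittle, consider a deliberately naive sketch of keyword-based filtering; the blocklist and function below are invented for illustration and resemble no platform’s actual system. The same trigger words appear in abusive posts and in the fact-checks that debunk them, so the filter blocks both:

```python
# Deliberately naive, hypothetical sketch of keyword-based content filtering.
# No real platform works exactly like this; it only illustrates the problem:
# the filter cannot tell hate speech from reporting about hate speech.
TRIGGER_WORDS = {"hoax", "traitor", "invasion"}  # invented blocklist

def should_block(text: str, threshold: int = 1) -> bool:
    # Normalise crudely: lowercase and strip surrounding punctuation.
    words = {w.strip(".,!?:;'\"").lower() for w in text.split()}
    return len(words & TRIGGER_WORDS) >= threshold

print(should_block("Every traitor spreads the invasion hoax!"))         # True
print(should_block("Fact-check: the 'invasion hoax' claim is false."))  # True: a false positive
```

Production systems are statistical rather than keyword-based, but they inherit the same failure mode: they score surface features of text, not its meaning or context.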

It’s not too late!

Each policy and law must reflect society’s needs. This is a time of both a growing terrorism threat and rising populism. People want to feel safe in their daily lives, whether in Parisian music clubs, in Manchester concert halls or on the Brussels metro. The ease of access to information has also made them susceptible to a sweeping tide of populism, be it in the UK, the US or Eastern Europe. There is, however, no simple, mathematical answer to either of these challenges. The information society was built on human rights guarantees, and those already foresee sanctions for hate speech, fraud and lack of journalistic due diligence. We might need to tweak the way they are read and enforced, but we do have legal tools to fight online propaganda. Leaving law enforcement to machines is simply a dangerous idea: in populist times, those of big data and Cambridge Analytica, code becomes law more than ever.

The answer to the current challenge of protecting free speech from populist agendas is as old as human rights themselves: education. Education about the law, but also about the technologies that underpin the global network. We need educated, critical readers rather than machines that babysit unaware media consumers. We need educated users who know about algorithms and can help develop better ones. The answer to fake news is not less news through automated blocking; it is more news, educating and informing users in how to read the new media.

This post gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.
