
Blog Administrator

April 22nd, 2016

Algorithmic Transparency and Platform Loyalty or Fairness in the French Digital Republic Bill

Estimated reading time: 5 minutes

Mélanie Dulong de Rosnay, Researcher at the Institute of Communication Sciences, French National Centre for Scientific Research (CNRS) and Paris Sorbonne, and Visiting Fellow at the LSE, examines the Digital Republic Bill, a new French Bill on digital rights, and the impact it might have on citizens and their data.

On 26 January 2016, the French National Assembly voted for a new Bill on digital rights. The Bill includes provisions relating to algorithmic transparency and the duty of ‘loyalty’, or fairness, of online platforms and algorithmic decision-making, which were inspired by French laws on access to and reuse of public sector information, and by laws on net neutrality and consumer protection.

Automated decision-making has an impact on people’s everyday lives, for instance by assigning students to French universities, matching teachers to posts in public schools, pricing a good or a service, and reviewing access to social benefits or bank credit – decisions that are all based on a predictive assessment of an individual’s behaviour in one way or another. Could this Bill help to mitigate the effect of automated decision-making on citizens’ individual rights and agency?

Are the Bill’s provisions merely declarations of good intentions and not legally binding for the corporations, non-profit or administrative services processing our data? Which lessons from previous experiences of artificial intelligence and the law, most recently the regulation of Digital Rights Management (DRM) schemes, could be usefully adapted to the specificities of algorithmic regulation? DRMs are a precedent for algorithmic regulation, where the automated regulation of the access to (and the reuse of) copyrighted works in digital formats has been criticised by computer scientists and users as being unfair as well as ineffective.

Governmental platforms: decisions based on algorithms

The provisions in the Bill target both governmental and corporate platforms. A new article (L. 311-3-1) may be incorporated into the code regulating relations between the public and the administration (Article 2 of the Bill):

When a decision affecting an individual person is taken on the basis of algorithmic processing, rules defining this processing, as well as the principal characteristics of its implementation, are communicated by the administration to the interested party on her demand. (Translation by the author)

An important caveat to this provision is that the interested party must themselves be the subject of the algorithmic processing. Computer scientists or consumer rights associations are therefore not able to speculatively request access to the algorithm, despite having the technical competence to decipher and understand its impact, the power to raise awareness about the potentially damaging consequences it might have, or the resources to contest an unfair or arbitrary decision-making algorithm in the courts. Amendments aimed at expanding the scope of this provision, so that communication could be requested beyond the affected individual, were rejected. It is a regrettable paradox that those who are arguably best qualified to interpret the code and to bring a class action are unable to do so unless they themselves are the subject of the algorithmic processing. Neither developers nor enforcers of automated decision-making processes are compelled to explain algorithmic decisions, or to justify possibly unfair ones. This situation concentrates all the power in the hands of the owner of the closed code, and denies civil society the possibility to examine and assess it. This is technically unsound and politically unbalanced.
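To make concrete what such a communication might look like, here is a minimal, purely illustrative sketch in Python: a toy administrative decision rule paired with a plain-language explanation that could be sent to the affected individual on demand. The eligibility criteria, field names and thresholds are all invented for the purpose of the example, not taken from any actual French administrative process.

    # Hypothetical sketch of Article L. 311-3-1 in practice: a decision
    # function paired with an explanation communicable on demand.
    # All rules, names and thresholds are invented for illustration.

    def decide_benefit(applicant: dict) -> bool:
        """Toy eligibility rule: income threshold plus residency condition."""
        return (applicant["monthly_income"] < 1_200
                and applicant["resident_months"] >= 6)

    def explain_decision(applicant: dict) -> str:
        """Plain-language account of the rules and how they applied here."""
        checks = [
            ("monthly income below EUR 1,200",
             applicant["monthly_income"] < 1_200),
            ("at least 6 months of residency",
             applicant["resident_months"] >= 6),
        ]
        lines = [f"- {rule}: {'met' if ok else 'not met'}" for rule, ok in checks]
        outcome = "granted" if all(ok for _, ok in checks) else "refused"
        return f"Decision: {outcome}\n" + "\n".join(lines)

    print(explain_decision({"monthly_income": 1_450, "resident_months": 12}))

The point of the sketch is that the explanation discloses the rules and their application to one case, without requiring publication of the code itself, which is exactly the compromise the provision strikes.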

This new Bill can be compared to various laws on access to Public Sector Information. The most basic versions of such laws only grant individuals the right to request access to documents held or produced by their national governments, but not additional rights to reproduce and process these documents. If users are only granted the right of access, copyright law will restrict their ability to effectively use any of the data obtained, for example to perform data mining or to quote the data in an article. More complete versions of Public Sector Information laws, such as the 2013 European Directive, provide users with the right to re-use information. The most advanced open data policies proactively distribute this information in machine-processable formats, which facilitates further analysis.

Caveats for algorithmic transparency are similar to those for open data: are algorithms really communicable? Can the black box be opened? There is a risk of limited or false transparency if non-specialists cannot understand the real impact of the algorithm, and if decisions cannot be appealed.
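The worry can be illustrated with a deliberately simple sketch. For a learned model, the honest answer to "communicate the rules" may be a vector of numeric weights, which tells a non-specialist nothing they can contest or appeal. The weights and features below are invented for the example.

    # Why disclosure may not open the black box: for a learned model, the
    # literal "rules" are coefficients. Values here are invented.

    weights = [0.73, -1.42, 0.05, 2.18, -0.66]   # learned coefficients
    bias = -0.31

    def score(features: list) -> float:
        """Opaque decision score: a weighted sum plus a bias term."""
        return sum(w * x for w, x in zip(weights, features)) + bias

    applicant = [0.2, 1.0, 3.5, 0.0, 1.0]
    print("refused" if score(applicant) < 0 else "accepted")

    # Publishing `weights` satisfies a literal reading of transparency,
    # yet explains nothing a lay reader can meaningfully appeal.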

Article L. 312-1-3 of the French Bill obliges public sector bodies (excluding those with fewer than 50 employees):

To make publicly available, in an open and easily re-usable format, the rules defining the main algorithmic processing used in the accomplishment of their missions, when such processing forms the basis of individual decisions.

This provision is designed to allow expert citizens and third parties to conduct due diligence on the process, but does not go so far as to oblige public sector bodies to make the algorithm itself publicly available. Nor has any specific process been mapped out to allow individuals to contest a decision based on the algorithm. In the absence of ad hoc provisions, it might be impossible for civil society organisations – or even for citizens – to contest the legitimacy of an automated decision. It may not be possible to opt out and ask instead, for instance, for a human re-evaluation of one’s right to social benefits, or to opt out of profiling predictions that can lead to discrimination (as was also suggested by Recommendation 66 of the advisory commission).
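As a purely hypothetical illustration of what “an open and easily re-usable format” could mean in practice, the rules behind a decision process might be serialised as structured open data rather than prose. The identifiers and schema below are invented for the sketch.

    # Hypothetical reading of Article L. 312-1-3: publish the rules behind
    # the main algorithmic processing as machine-readable open data.

    import json

    rules = {
        "processing": "benefit_eligibility",   # invented identifier
        "version": "2016-01",
        "criteria": [
            {"field": "monthly_income", "operator": "<", "value": 1200},
            {"field": "resident_months", "operator": ">=", "value": 6},
        ],
        "decision": "grant if all criteria are met",
    }

    with open("benefit_eligibility_rules.json", "w", encoding="utf-8") as f:
        json.dump(rules, f, indent=2, ensure_ascii=False)

A file of this kind would let third parties re-implement and test the stated rules against observed decisions, which is precisely the due diligence the provision is meant to enable.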

Corporate platforms: ‘duty of loyalty’ and consumer information

Article 22 of the Bill foresees that:

Any operator of an online platform is obliged to provide the consumer with information that is truthful, clear and transparent on the terms of use of the intermediation service it proposes, and on the modalities of referencing (indexing in search results), ranking and delisting (dereferencing) the content, goods or services to which the service provides access.

The operator must clearly and explicitly display the existence of:

  • a contractual relationship in cases where the contract contains stipulations relating to the ranking of content, goods or services
  • capital links which may influence ranking [i.e. if a bigger corporation holds shares in the capital and therefore controls the platform]
  • direct remuneration by named persons or entities which has an impact on ranking [such as advertisers]

This measure applies to platforms based in Europe and to platforms whose services may be used on French territory, therefore including Google, Apple, Facebook and Amazon, collectively known as GAFA. An administrative body will assess platforms’ compliance and publish a list of those that are non-compliant, but without attaching any sanctions (an amendment in August 2015 to consumer law article L. 111-5-1 contains a similar provision, though one backed by fines of up to €375,000). An online comparison site is supposed to allow consumers to compare how well platforms respect their obligation of ‘loyalty’, and indicators and best practices will be published. However, it appears unlikely that many users will check this website, and powerful intermediation platforms are unlikely to change their practices if pressure comes only from informed citizens reporting them as unethical or discriminatory, especially while information about how an algorithm ranks users remains unclear. Corporate stakeholders, who may wish to protect innovation and trade secrets, have traditionally opposed more powerful and direct regulation, claiming that current competition law is sufficient to protect consumers from dominant positions (e.g. in cases of access to essential facilities, such as when the price of transportation is set by an algorithm).
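As a hedged illustration of the kind of disclosure Article 22 points towards, consider a toy ranking function that flags, next to each result, whether paid placement influenced its position. The services, scores and sponsorship boost below are all invented for the example.

    # Invented sketch of a ranking where direct remuneration affects
    # position, with the influence disclosed alongside each result.

    results = [
        {"name": "Service A", "organic_score": 0.61, "sponsored": True},
        {"name": "Service B", "organic_score": 0.87, "sponsored": False},
        {"name": "Service C", "organic_score": 0.55, "sponsored": False},
    ]

    BOOST = 0.3  # hypothetical uplift bought by advertisers

    def rank(items):
        return sorted(
            items,
            key=lambda r: r["organic_score"] + (BOOST if r["sponsored"] else 0),
            reverse=True,
        )

    for r in rank(results):
        label = " [sponsored placement]" if r["sponsored"] else ""
        print(f"{r['name']}{label}")

In this toy run the sponsored Service A outranks the organically better Service B; without the label, a consumer has no way of knowing that remuneration, not relevance, produced the ordering.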

The accountability of ‘digital golems’

The obligation for platforms to alert consumers to ranking and data processing modalities can be compared to rights information measures (such as Creative Commons metadata), which inform consumers about how they are contractually allowed to use copyrighted digital works (for example to copy, remix, play on a number of devices, to lend, to use for commercial purposes, and so on).
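By analogy, a rights-information record in miniature might look like the following sketch, loosely modelled on the idea of Creative Commons metadata; the field names and vocabulary are invented for illustration and are not drawn from any actual CC schema.

    # Invented miniature of a machine-readable rights record telling a
    # consumer what a licence contractually permits.

    licence = {
        "work": "example-essay",
        "licence_url": "https://creativecommons.org/licenses/by-nc/4.0/",
        "permits": ["copy", "remix", "play_on_any_device", "lend"],
        "prohibits": ["commercial_use"],
    }

    def may(use: str) -> bool:
        """Check a proposed use against the declared permissions."""
        return use in licence["permits"] and use not in licence["prohibits"]

    print(may("remix"))           # True
    print(may("commercial_use"))  # False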

Precedents for automated decision-making by algorithms should be closely examined to extend the discussion. Over the past three decades, the field of law and Artificial Intelligence has theorised the impact of machine-assisted legal decisions on the rule of law, and techno-legal scholars and activists have highlighted the dangers of copyright DRM schemes for the right to read. In my new book, I call these lines of code ‘digital golems’. The golems, artificial creatures of mythology, here represent pieces of computer code that prevent the copying and reuse of works online. Golems are developed by copyright holders or platform owners to protect their interests, and they implement decisions automatically and indiscriminately, without control by society, the state or the law, and without identifying legitimate uses. But the golems can turn against their masters and create resistance. Beyond access to information and copyrighted culture, the metaphor is powerful in denouncing the logic of encoding binary rules into the digital devices and algorithms that govern our lives and that make decisions based on big data and the traces we leave on networks, platforms and connected objects.

In this post, I have relied on previous laws regulating access to copyrighted works and to public sector information in order to analyse the new regulation of access to information about algorithmic regulation. These fields share interesting legal and technical similarities, since digital documents can be processed and decisions can be subject to appeal. Instead of blind algorithmic regulation, regulation by law should be better integrated with regulation by technology, so that the law can address technology without overly protecting it from tinkering and hacking, which can have legitimate purposes, and so that, reciprocally, technology can embed and represent citizens’ legal rights.

This blog gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.

This post was published to coincide with a workshop held in January 2016 by the Media Policy Project, ‘Algorithmic Power and Accountability in Black Box Platforms’. This was the second of a series of workshops organised throughout 2015 and 2016 by the Media Policy Project as part of a grant from the LSE’s Higher Education Innovation Fund (HEIF5). To read a summary of the workshop, please click here.
