
Ioanna Noula

January 15th, 2021

This is a coup


In this post, Dr Ioanna Noula, Co-founder and Head of Research and Development of The Internet Commission, discusses Shoshana Zuboff’s recent arguments about the regulation of platforms and her call for legislative action against tech giants’ practices of data extraction, interpretation and repurposing. The author responds to Zuboff’s point that the policy debate’s current focus on content moderation is a distraction. Building on the concept of “procedural accountability”, she explains why diving deep into the practices and governance driving platforms’ content moderation cannot be deprioritised, and how the regulation of content moderation can be a milestone for increasing the accountability of Big Tech.

In her recent keynote to the European Parliament’s Panel for the Future of Science and Technology, Harvard Business School Professor Emerita Shoshana Zuboff shared insights from years of research on the profit-driven business models of online platforms that capitalise on behavioural data. She encapsulated the reality of rising inequalities, civic disempowerment and erosion of democracy perpetuated by knowledge asymmetries between platforms and society in the notion of an “epistemic (from Greek epistēmē, ‘knowledge’) coup” engineered by the “institution of surveillance capitalism” and its economic imperative. Zuboff’s idea of an epistemic coup refers to the unprecedented accumulation of information (knowledge) through sophisticated processes of data harvesting and interpretation. This growing knowledge capital drives a vicious circle of widening asymmetries between big tech and society: knowledge confers ever more power on tech companies, which in turn further subjugates citizens to the profit-driven tech giants. The epistemic coup comprises three stages:

  1. the sharp rise in epistemic inequality, or inequality of knowledge, described as a “licence to steal” at scale and scope, and the growing gap between what citizens know and what can be known about citizens
  2. the spread of epistemic chaos, which we are in the midst of and which is fuelled by radical indifference: companies do not care what users do as long as they are able to extract their data
  3. the institutionalisation of epistemic dominance, already in sight in the form of lucrative prediction models that work towards providing certainties about user behaviours (or, I would add, forms of control), a way of knowledge begetting the power for platforms to achieve epistemic dominance.

A moral imperative

Zuboff strongly criticised the narrative that sustains the power outlook of the digital ecosystem: ‘tech bros’’ pledges to innovation and development, their commitment to the power of competition that drives capitalist economies, and the promise of better lives and better economies. She explains that this argument is premised on the pretext (which I would call the moral alibi) of technologies advancing connectivity and generating global human communities.

Google’s recent controversy over the termination of Dr Timnit Gebru, the company’s Staff Research Scientist and Co-Lead of its Ethical Artificial Intelligence (AI) team, is an example of how morally reprehensible business cultures offer a plausible explanation for widely criticised business practices that undermine human dignity. Zuboff argues that the EU’s Digital Services Act and Digital Markets Act present the world with an opportunity to deliver a democratic digital future where:

  1. the rights and interests of citizens and institutions are safeguarded,
  2. the vulnerabilities of citizens and democratic institutions are minimised,
  3. justice for future generations is achieved, and
  4. the public sector is recognised as the driver of innovation, for taking on the financial risks of research and development and the costs of sustaining the conditions and institutions that produce thought leadership and intellectual and cultural breakthroughs.

In The Age of Surveillance Capitalism, Zuboff proposes new types of human rights emerging from the threats posed by the development of novel technologies and black-boxed business practices that generate profit for tech companies. Her conceptualisation of people’s “right to the future tense” (human autonomy) crystallises the bleak, civilisational-scale implications generated by little-understood business models and the emergence of novel harms, which culminate in the three-stage disaster of epistemic inequality, epistemic chaos and epistemic dominance.

Zuboff opened a window of hope by locating the guiding principles that should drive legislative action in a moral compass: children’s perception and expression of the political, and their understanding of fairness. Although many childhood experts would object to the essentialism underpinning her proposal, or to its old-school romanticisation of childhood, we should not lose sight of her call for a moral argument to drive business practice and regulation. She asks:

“Are the suggestions unrealistic? Probably not. It is that our sense of the possible is constrained by the acid bath of habituation and normalisation. We need to resurrect our virgin responses to the realisation of our tracking and monetisation of our data; our current conditions of existence. How? Discussing with children who have not resigned to the normalisation of the abnormal and they are not tolerant of the intolerable.”

A call to action: does content moderation matter?

Zuboff’s lecture was a call to action: to increase the accountability of online platforms, to develop meaningful, pertinent and agile legislation, and to ensure the conditions of democracy and safeguard citizens’ rights. She forecast a bleak future for human societies and democracies unless legislators bring to a halt the devouring business models that trade in “human futures”. Drawing on the worst of humanity’s history, she argues that, like human or organ trafficking, this form of trade, based on prediction models that threaten humanity’s civilisational achievements and undermine society’s democratic future, should be outlawed.

According to Zuboff, the axiom that should underpin the relationship of governments with intermediaries is: “trust, but verify”. Governments should not take tech giants at their word; they should audit them and ensure that platforms deliver on their commitments. I argue that what Zuboff proposes amounts to delivering “procedural accountability”: a deep dive into the processes and rationale of data extraction, interpretation and repurposing. What is needed is not just an understanding of whether companies comply with the legislative frameworks of safety and privacy, which are inevitably informed by traditional approaches to human rights (the right to private life, the right to dignity, and so on), but a radical process with the power to draw back the curtain that conceals the devouring mechanisms and business models of data extraction and processing.

Zuboff highlights that practices of data extraction and predictive analytics should be the key target of any legislation that aims to regulate digital services. To support her argument, and to emphasise the importance of lawmakers putting humanity in charge of its own future, she draws attention to the regulation of content moderation practices, perhaps one of the most highly prioritised topics in the policy debate. For Zuboff, although content moderation is at the heart of public and policy debate, it is merely a distraction from the root causes of the problem; it is, as she puts it, a “last resort and source of frustration to the public and law makers”. Importantly, she argues that focusing on tech giants’ accountability for curating the quality of human interaction and public debate on their platforms only habituates us to a reality where big tech continues to own the terms of the debate. Following her line of argument, user-generated content is merely “data points waiting for connection”.

I could not agree more about the primacy of the issue of data extraction. I do, however, think that recent history has shown that content moderation is not something society and lawmakers can afford to de-prioritise. Content moderation is a very good place to start for delivering platform accountability, for several reasons:

  • Content moderation offers a clear, actionable area that is well-researched, well-debated in policy and civil society circles, and related to well-defined harms.
  • The regulation of content moderation can provide a very good example of procedural accountability and demonstrate that this can be delivered.
  • The evaluation of content moderation practices can offer a proof of concept and a first opportunity to “look under the hood” at the procedures and corporate cultures that drive algorithmic decision-making.
  • An evaluation exercise on content moderation can be the departure point for the development of transferable, standardised ways of exploring adjacent areas of corporate digital responsibility.

We need surgical tools and methods that allow regulators and policymakers to assess not just practices and legal compliance but the cultures and ethos driving those practices. While we think and act on data crimes against humanity, intermediaries should be held to account for practices that have undermined democracies around the world and fuelled extremism and genocide.

Who holds the key to the digital heaven?

Which institution will ensure that the epistemic coup in waiting is averted? Who is to decide what ‘good’ looks like? In times of global crises, when childhood and its “right to the future tense” are eviscerated by educational technology companies that harvest and monetise children’s data, metadata, creativity and thought processes, whom do we trust to evaluate the principles and values that should shape the ethos of business cultures, and inform and inspire the lawmaker?

Any accountability exercise requires empowered institutions that can provide auditing mechanisms to respond to societal challenges, equip lawmakers with evidence, and restore citizens’ trust in the digital and confidence in business and government. As stated in Article 28 of the draft Digital Services Act, we need independent auditors to assess platforms’ commitment to a “transparent and safe online environment”, drawing on established standards of due diligence. We need practical solutions driven by evidence based on thoughtfully developed research. We need institutions that will act as custodians of universal values and ethical business practices. We need independent organisations that will promote democratic, multistakeholder, international and intergenerational debate and drive a sustainable democratic future. We need experts who will lead not just by know-how but also by example. We need an ethos of justice, transparency, democracy and inclusivity. Anything less is bound to fail, to disappoint, and to erode hope that humanity knows better and can do better.


This article represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

Featured image: Photo by dole777 on Unsplash

About the author

Ioanna Noula

Dr Ioanna Noula is a childhood and education expert holding a PhD in Citizenship Education (University of Thessaly) and an MA in Sociology of Education (UCL Institute of Education). Her research interests include citizenship education, critical pedagogy and digitalisation. She is Head of Research and Development and co-founder of the Internet Commission, a non-profit organisation focused on advancing digital responsibility. She has worked as a teaching and research fellow at the UCL Institute of Education, the University of Leeds and the University of Thessaly. Ioanna has conducted research for award-winning projects on global citizenship education and active citizenship at the UCL Institute of Education and LSE’s Department of Media and Communications. Her current research focuses on citizenship, critical literacy and digital responsibility.

Posted In: Internet Governance
