
Accountable machines: bureaucratic cybernetics?

Blog Administrator

March 24th, 2016

Alison Powell, Assistant Professor at LSE, argues that the accountability of algorithms is intrinsically linked to governance structures and citizenship in society. Algorithms should be used to support decision-making for the benefit of society rather than to target individual consumers.

Algorithms are everywhere, or so we are told, and the black boxes of algorithmic decision-making make oversight of processes that regulators and activists argue ought to be transparent more difficult than in the past. But when, and where, and which machines do we wish to make accountable, and for what purpose? In this post I discuss how the algorithms most commonly discussed by scholars are those at work on media platforms whose main products are the social networks and attention of individuals. Algorithms, in this case, construct individual identities through patterns of behaviour, and provide the opportunity for finely targeted products and services. While there are serious concerns about, for instance, price discrimination, algorithmic systems for communicating and consuming are, in my view, less inherently problematic than processes that impact on our collective participation and belonging as citizens. In this second sphere, algorithmic processes – especially machine learning – combine with processes of governance that focus on individual identity performance to profoundly transform how citizenship is understood and undertaken.

Communicating and consuming

In the communications sphere, algorithms are what make it possible to make money from the web, for example through advertising brokerage platforms that help companies bid for ads on major newspaper websites. IP address monitoring, which tracks clicks and web activity, creates detailed consumer profiles and transforms the everyday experience of communication into a constantly updated production of consumer information. This process of personal profiling is at the heart of many of the concerns about algorithmic accountability. The perpetual production of data by individuals, and the increasing capacity to analyse it even when its relevance is not apparent, has certainly revolutionised advertising by allowing more precise targeting, but what has it done for areas of public interest?
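To make the mechanism concrete, here is a minimal sketch of the kind of profiling pipeline described above, assuming a simple click log keyed by IP address; the events, addresses, and segment names are invented for illustration and do not come from any real ad platform.

```python
# Minimal sketch of click-stream profiling (illustrative only; not any
# real platform's pipeline). Each click updates a per-IP interest
# profile that an advertising broker could then use to target bids.
from collections import Counter, defaultdict

# Hypothetical event log: (ip_address, page_category) pairs.
events = [
    ("203.0.113.7", "sports"),
    ("203.0.113.7", "finance"),
    ("203.0.113.7", "sports"),
    ("198.51.100.2", "gardening"),
]

profiles = defaultdict(Counter)
for ip, category in events:
    profiles[ip][category] += 1  # communication activity becomes consumer data

for ip, counts in profiles.items():
    top_interest, _ = counts.most_common(1)[0]
    print(f"{ip}: target segment = {top_interest}")
```

The point of the sketch is how little is needed: ordinary browsing, logged and counted, already yields a saleable profile.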

John Cheney-Lippold identifies how the categories of identity are now developed algorithmically, since a category like gender is not based on self-disclosure, but instead on patterns of behaviour that fit with expectations set by previous alignment to a norm. In assessing ‘algorithmic identities’, he notes that these produce identity profiles which are narrower and more behaviour-based than the identities that we perform. This is a result of the fact that many of the systems that inspired the design of algorithmic systems were based on using behaviour and other markers to optimise consumption. Algorithmic identity construction has spread from the world of marketing to the broader world of citizenship – as evidenced by the Citizen Ex experiment shown at the Web We Want Festival in 2015.
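A toy computation can illustrate the mechanism Cheney-Lippold describes; the categories, ‘norm’ vectors, and numbers below are entirely invented, and the sketch shows only the logic he criticises: a label assigned by behavioural fit to a statistical norm rather than by self-disclosure.

```python
# Toy illustration of 'algorithmic identity' (all numbers invented):
# a category is assigned by measuring how closely observed behaviour
# matches a statistical norm for that category, never by asking.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical behavioural norms: visit rates across three site categories.
norms = {
    "category A": [0.8, 0.1, 0.1],
    "category B": [0.2, 0.5, 0.3],
}

observed = [0.7, 0.2, 0.1]  # one user's measured behaviour

# The 'algorithmic identity' is whichever norm the behaviour best fits;
# the label narrows further as behavioural data accumulates.
label = max(norms, key=lambda g: cosine(observed, norms[g]))
print(f"assigned identity: {label}")
```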

Individual consumer-citizens

What’s really at stake is that the expansion of algorithmic assessment of commercially derived big data has extended the frame of the individual consumer into all kinds of other areas of experience. In a supposed ‘age of austerity’, when governments believe it’s important to cut costs, this connects with the view of citizens as primarily consumers of services, and furthermore, with the idea that a citizen is an individual subject whose relation to a state can be disintermediated given enough technology. So, with sensors on your garbage bins, you don’t even need to remember to take them out. With pothole-reporting platforms like FixMyStreet, a city government can be responsive to an aggregate of individual reports. But what aspects of our citizenship are collective? When, in the algorithmic state, can we expect to be together?

Put another way, is there any algorithmic process that values the long-term education, inclusion, and sustenance of a whole community, for example through library services?

This neoliberal, consumerist citizenship, propped up by algorithmic systems of sorting and service delivery, further alienates us from the shared virtues of citizenship and from the construction and nourishment of shared experience.

Machine learning versus bureaucracy

Accountability, then, must be considered in relation to the whole system and its entire undertaking. Some of the proposals discussed at our workshop included having machine learning processes verify the outcomes of algorithmic decisions and provide transparency, and designing systems both to permit auditing and to audit other related systems. To me this appeared as an especially accountable version of bureaucracy, where results from each system’s accounting dynamically report up through an iterative (but still accountable) chain of command. This is not bureaucratic in the sense of inventing process for its own sake, but it is bureaucratic in the sense that it establishes many processes of accountability that are the responsibility of entities who report to one another through a structure where trust is related to the capacity to validate decisions.
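As a rough sketch of what such an iterative chain might look like – all names and the validation check are hypothetical, not a design from the workshop – each system audits the decisions passed to it and reports upward, so every level keeps an inspectable record:

```python
# Hypothetical sketch of an 'accountable chain of command': each auditor
# validates a decision and reports it to its superior, so the record of
# who checked what is preserved at every level of the hierarchy.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Decision:
    subject: str
    outcome: str
    evidence: dict  # whatever the deciding system can show for itself

@dataclass
class Auditor:
    name: str
    superior: Optional["Auditor"] = None
    log: list = field(default_factory=list)

    def audit(self, decision: Decision) -> bool:
        # Placeholder validation: a decision must cite some evidence.
        ok = bool(decision.evidence)
        self.log.append((decision, ok))
        if self.superior is not None:
            self.superior.audit(decision)  # report up the chain
        return ok

board = Auditor("oversight-board")
service = Auditor("service-level-auditor", superior=board)
service.audit(Decision("applicant-42", "denied", {"model": "v3", "score": 0.41}))
print(len(board.log))  # the top of the chain holds a record too -> 1
```

The interesting design question is what replaces the placeholder check; that is where a century of experience with bureaucratic accountability would come in.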

This is not 19th-century bureaucracy, but a cybernetic counterpart that is still emerging. Although cybernetic systems have always tried to redistribute responsibility and operate in real time, their expansion has also created some of the issues of accountability we are now trying to solve. In many ways, bureaucracies are useful. They contain processes of decision-making and structured divisions of function. Much like algorithms. They are criticised for becoming voluminous, disappearing under process, and becoming opaque. Much like algorithms. Bureaucracies, in some cases, can efficiently undertake processes that have undesirable or devastating outcomes (including war and discrimination). Much like algorithms.

I am not sure that we should be so afraid of considering the bureaucratic aspects of using machine learning technologies for accountability, not when we have over a hundred years of social science knowledge on the problems to avoid. I also wonder whether bureaucracies might help to solve the other problem I raised earlier – whether thinking specifically about how human and computational decisions combine might move us away from a citizenry of alienated individuals and towards a reconception of the shared virtues of belonging together in a place.

Seeing algorithms – machine learning in particular – as supporting decision-making for broad collective benefit rather than as part of ever more specific individual targeting and segmentation might make them more accountable. But more importantly, this would help algorithms support society – not just individual consumers.

This blog gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science. 

This post was published to coincide with a workshop held in January 2016 by the Media Policy Project, ‘Algorithmic Power and Accountability in Black Box Platforms’. This was the second of a series of workshops organised throughout 2015 and 2016 by the Media Policy Project as part of a grant from the LSE’s Higher Education Innovation Fund (HEIF5). To read a summary of the workshop, please click here.
