Robin Mansell, Professor of New Media and the Internet at the LSE, warns that the Government's goals for cybersecurity go too far.

In the US it is often said that, in cyberspace, the war has already begun. Tackling perpetrators of serious crime and terrorist activity who use the Internet is seen as crucial in a war that is being lost. More money and more technology are seen as the way to win the cyberwar.

In the UK, the Government wants to introduce legislation to make us safe. Technology is changing and the Government wants to fill gaps in the present legislation. In a cyberworld, the dream is that records of every move citizens make online can be analysed to reveal intentions that threaten our safety. The hope is that intelligence gleaned from sifting communications data will lead to preventative action, or to a robust response when a threat is real. This is a seductive dream. Those who believe it are too ready to put data analysts' claims first and citizens' rights second.

If the Government moves to supplement its powers under the Regulation of Investigatory Powers Act (RIPA) 2000, it will have to require Internet Service Providers (ISPs) to collect communications data not just on demand and with warranted authorization, but routinely. Such data would include traces of people's social networking activity on Facebook, Twitter or online games. The result would be ubiquitous monitoring of everyone, the retention of huge amounts of data (even if on a decentralized basis), and the analysis of data on an unprecedented scale.

Safeguards are in place …

In response to a flurry of media attention in April 2012, the Government says that civil liberties will be safeguarded. Security Minister James Brokenshire says, “We absolutely get the need for appropriate safeguards”. The Government’s Communications Capabilities Development Programme was set up to look at “how we can preserve communications capabilities to protect the public in the future, as internet-based communications technology becomes increasingly popular”. The website says the Government is not looking to develop new intrusive powers. There are other programmes too – GCHQ has a “Mastering the Internet” investment programme, developing a network of remotely configured Deep Packet Inspection (DPI) probes of online data communications that is claimed to operate within the law.

On 4 April 2012, the Home Office and Justice Ministers said:

“The police and other agencies will have no new powers or capabilities to intercept and read emails or telephone calls and existing arrangements for interception will not be changed. We envisage no increase in the amount of interception as a result of this legislation”.

Data from Instant Messaging services and Voice-over-Internet-Protocol services such as Skype, or from Facebook and Twitter, are not available for interception under current law. The use of social media is growing very rapidly. The authorities claim they can now access only some 75% of the communications data generated in the UK, compared with 90% in 2006. This is characterized as losing the cyberwar – hence the need to expand the reach of communications data collection and access.

Citizens often say they favour Government action to tackle serious crime and terrorism, so those in Government may think there will be popular consent to their plan. The Interception of Communications Commissioner says in his 2010 Annual Report that "it is inevitable that some mistakes will be made, especially considering the fact that the Security Service is dealing with large volumes of communication data requests in complex investigations and that there is a degree of automation in the process". But the report says that although authorizations for access to communications data reached 552,550 in 2010, growing at about 5% per year, there was a reduction of nearly 50% in the number of interception errors recorded over the past three years. Translated into press release language, this will allow the Government to assure the public that safeguards to respect their civil liberties are in place.

Overreliance on technology

Why should citizens question this? There are many reasons to be concerned if the range and quantity of communications data that are routinely collected and made available to the Government are expanded.

Many reported errors are due to spreadsheet glitches, software bugs and human error, including failures of authorities to communicate with each other and data entry mistakes (for example, people who find they are barred from flights because their name has been erroneously entered on a 'no fly' list).

Putting in place the expanded capacity for accessing and analysing communications data will be very costly in terms of hardware, software and training at a time of austerity – costs that will be borne directly by taxpayers or passed on in the price of ISP access or of advertising on social media.

The LSE Information Systems and Innovation Group Briefing on the Labour Government's earlier effort to use technology and routine data analysis of all UK online activity shows that there are many reasons for concern. The focus then was on building huge centralized databases of information. The recently rumoured proposals were discussed at the Scrambling for Safety 2012 conference hosted at LSE on 19 April.

How far is the UK government following in US footsteps? Why is recourse to more technology and more data to counter serious crime and terrorist threats so seductive?

Following the US lead …

The Government's 2010 Strategic Defence and Security Review signalled the Coalition Government's intentions.

“We will legislate to put in place the necessary regulations and safeguards to ensure that our response to this technology challenge is compatible with the Government’s approach to information storage and civil liberties”.

This has been quoted widely in the press. Less quoted in discussions about citizen monitoring is the Review's comment on strategic counter-terrorism relationships. These are to be strengthened through a comprehensive Cyber Operations Memorandum of Understanding that aims to "allow us better to share information, intelligence and capabilities to enable joint planning …".

The UK Government could be preparing to share the results of its analysis of large quantities of communications data with other governments such as the US, a country which leads in computer-based modelling to detect potentially bad behaviour.

The UK is heavily dependent on US intelligence gathered by satellite and other remote-sensing assets because the US has far greater capabilities of this type. This puts the UK in a 'deficit' position when it comes to contributing intelligence data. Could it be that the US has grumbled about this asymmetry and asked for a plan to remedy it? It is only through more principled resistance to routine and ubiquitous communications data collection that civil liberties in the UK are likely to be preserved.

In the US, the Cyber Intelligence Sharing and Protection Act (CISPA) is set to go to a vote in the US House of Representatives (http://intelligence.house.gov/bill/cyber-intelligence-sharing-and-protection-act-2011). Even if it fails, there is strong lobbying for action that may succeed eventually despite opposition from defenders of civil liberties. CISPA is aimed at getting companies to turn private information over to the government, including the US National Security Agency (NSA). Obama's spokesperson calls for safeguards to preserve privacy and civil liberties, but companies like AT&T, Boeing, Microsoft and Time Warner Cable are lining up to support CISPA. As the US academic analyst Milton Mueller puts it, the question is whether to optimize technologies and their governance so that military capabilities are best served, or whether the consent of networked citizens should be paramount.

With cyberwar as the leading theme, the idea is that governments can never really know enough and always need to know more. They need more data so that security authorities can be more effective. In technology 'arms race' jargon, there is a permanent need for more investment in communications data analysis. Wired magazine gives us a sense of this. The Utah Data Center is being built for the NSA to intercept, decipher, analyse and store vast amounts of global communications data. The NSA already has listening posts throughout the US collecting billions of emails and phone calls and other data traces left by our use of electronic networks. James Bamford says that the Stellar Wind programme has 'tapping' rooms at telecom operator switch locations that use DPI. Public money is going into the development of superfast computers that can attack strongly encrypted electronic messages which cannot currently be cracked. In the US there has been secret monitoring and non-transparent, government-sponsored data collection and analysis.

The seduction of computer-augmented intelligence

That criminals and terrorists will use digital technology to evade the authorities is not a new observation. If DPI, which is now used by ISPs for traffic management, is extended for use in government surveillance of all citizens, those who are motivated can use encryption to hide their identities and what they do and say. This could weaken government attempts to snoop rather than make its ability to protect citizens more effective.
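
To make the point concrete, here is a minimal sketch in Python of why content-based inspection fails once a payload is encrypted. The keyword rule, the sample message and the use of the third-party cryptography package are illustrative assumptions, not a description of how any real DPI probe works.

```python
# A minimal, hypothetical illustration: content matching works on plaintext,
# but sees nothing useful once the sender encrypts the payload.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

KEYWORD = b"meeting"                    # hypothetical watch-word a content rule might scan for
message = b"the meeting is at noon"     # hypothetical plaintext payload

# A DPI-style content rule can match the keyword in an unencrypted payload.
print(KEYWORD in message)               # True

# After encryption, the same rule sees only ciphertext.
key = Fernet.generate_key()             # key shared between the communicating parties, not the ISP
ciphertext = Fernet(key).encrypt(message)
print(KEYWORD in ciphertext)            # False: the keyword is vanishingly unlikely to appear
```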

When people know that more data are being collected and stored, even on a decentralized basis, the 'underground web' is likely to grow. Even if authorities in the UK are not to be permitted to monitor real-time data, there is a greater risk that vast stores of data – including content – will be mined. It is not possible to get communications data from social media websites like Facebook or Twitter without using techniques that require interception of content, not just information about which web pages have been visited or who texted whom. Government claims that authorities will not be permitted to access content are therefore not persuasive.

Computerized software modelling is supporting augmented intelligence gathering. Governments are becoming captivated by the seductive idea that analysis of communications data will give them more accurate information with which to protect us. In the past, the Tactical Numerical Deterministic Model (Dupuy Institute, Washington DC) was developed to predict who would win a conventional war. Now insurrection and 'irregular' warfare are being modelled using vastly greater quantities of digital information.

SCARE (Spatio-Cultural Abductive Reasoning Engine) includes automatic tracking of open source information related to terror groups, indigenous tribes and socio-cultural-political entities; it uses telephone data and indexes and queries data from social networks. Mason RiftLand is a spatial software agent model for analysing conflict, disasters and humanitarian crises in East Africa. Condor sifts through data from Twitter, Facebook and other social media to predict how public protests will evolve, using sentiment analysis. E-MEME (Epidemiological Modelling of the Evolution of Messages) uses sentiment analysis or abduction to follow opinions across populations. The Worldwide Integrated Crisis Early Warning System (W-ICEWS) project led by Lockheed Martin crunches through data from digital news media, blogs and other websites, and through intelligence and diplomatic reports, in an attempt to spot disturbances before they happen.
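
To give a rough sense of what 'sentiment analysis' means in this context, the toy sketch below, in Python, scores short messages against small lists of positive and negative words. The word lists and messages are invented for illustration; none of the systems named above works in so simple a way.

```python
# A toy illustration of keyword-based sentiment scoring over short messages.
# The word lists and messages are invented for this sketch; the systems named
# above rely on far richer proprietary models and much larger data sets.
POSITIVE = {"calm", "peaceful", "support", "agree"}
NEGATIVE = {"angry", "protest", "riot", "outrage"}

def sentiment_score(text: str) -> int:
    """Return (# positive words) minus (# negative words) for one message."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

messages = [
    "crowds remain calm and peaceful downtown",
    "outrage grows as the protest turns angry",
]
for m in messages:
    # A positive score suggests a calmer tone; a negative score an angrier one.
    print(sentiment_score(m), m)
```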

Some of these models are for military purposes and some are for civilian purposes. They have in common the apparent offer of a technical fix that will protect everyone from serious crime and other threats. But the results of data analysis on this scale can be skewed by anyone (authorities or others) who forges information – this is cyber propaganda in the Internet Age.

Governments seem to believe that the developers of new augmented intelligence models are promising new certainties about the intentions and behaviours of citizens. Academics are usually cautious – they say their models are only as good as the interpretations of the results.  Military and security analysts often ignore this. They hope that more money, more data, and technological progress will reduce uncertainty. They dream that, ultimately, augmented intelligence models will make us safe.

Chris Demchak writes:

“The power of the modern state to reduce the harm of obscure unknown attackers lies in its ability to recognize emerging sources of surprise and to disrupt or accommodate them”.

She says, in the case of the US, that only pseudonymous behaviour online should be available to security services for identifying emerging bad actor patterns. This is because tipping whole populations in democratic societies into distrust of their governments risks irreparably destroying social tolerance and community cohesion. Better models and more data analysed by more powerful computers do not make breaches of trust any less likely than when RIPA 2000 was introduced in the UK. New legislation will simply lead to more breaches of trust.

Greater sophistication in analysing anonymous data communications (and content) is also being driven by commercial interests in mining data for targeted advertising. For instance, the New York Times reports that Google Ventures, the venture capital arm of Google, is beefing up its internal data science team. Commercial efforts to model consumers' online records raise equally important issues for civil liberties.

New paradigms

My work on the UK Foresight Cyber Trust and Crime Prevention Project showed that the idea of a technological 'arms race' in the cyberworld is well entrenched. A bolder way to think about the problem of civil liberties and security in the Internet age is to look for new paradigms for understanding an uncertain world and the perceived and actual risks of cyberspace. Pervasive computing, augmented intelligence and sentiment analysis cannot eliminate the complexity of cyberspace or its consequences. We need creative responses that put civil liberties at the forefront of, or at least not behind, the technology-led dreams.

Every extension of Government power in this domain requires justification in detail, not bland assurances of safeguards. It also needs the consent of the public. Any change needs to respect individual and collective values, as Rebecca MacKinnon argues in Consent of the Networked.

In the 1980s and 90s, transaction-generated information was a concern. It was understood that these data could be more useful in identifying individuals than census data. Now the technology has changed and the focus is on communications data, but little seems to have been learned. Now as then, the Government in the UK is putting technology, not people, first.

Civil liberties can be threatened by digital data mining and data analysis that identify statistically defined groups, because the results can be used to discriminate against certain groups. This goes beyond the protection of individual privacy, as the US scholar Oscar Gandy has shown. This is because citizens/consumers,

“are largely incapable of exercising meaningful choice with regard to the rapidly expanding array of points within the matrix of networked interactions in which their interests will be placed at risk”.

Survey results in the UK show that people have varying amounts of trust in the Internet depending on many factors. The electorate is unlikely to be able to give an informed view of the consequences for democracy if the Government introduces legislation to permit wide-scale monitoring of everything citizens do online. Governments worry about a cyber-security skills deficit in the UK and in the US. Even though people skills are seen as essential in this area, it is much easier for governments to dream of computerized solutions to cyber-enabled threats to citizens.

A move to routinely collect and analyse communications data of the whole population will drive a proportion of illegal or threatening online activity underground. The dream of a cyberwar that can be won by ‘good actors’ based on technology is emblematic of the hope for a society that can be governed by software and hardware behind the mobile or computer screen. But there may be limits to the extent to which intelligent machines can or should be tasked with making non-transparent choices for human beings.

What pathway should Government follow?

Governments do need to protect citizens from unwanted intrusions into their private online spaces. They do need to ensure that citizens' interests in security are met. But any moves to extend the monitoring of citizens online need to be subject to ethical scrutiny. There must be debate about how far it is desirable for cyberworld automation to make choices on the basis of authority delegated to software agents, whose choices are only loosely connected to the intentions of their developers. Any new legislation proposed by the Coalition Government in the UK will come with safeguards. But even the safeguards are starting to be seen by military and security analysts as shielding people engaged in activities that are disapproved of or condemned.

The automation of communications data analysis is cumulative. Surveillance, privacy intrusion, software agent misbehaviour, and a lack of transparency are becoming more and more prominent features of society. What is possible today may become excessive in the future and the right to be free from surveillance should not be an unaffordable luxury. Means of limiting the excesses of surveillance are unlikely to be found as long as the Government aligns itself with war metaphors and gives priority to technological solutions. Cees Hamelink and Julia Hoffman point out that people will not speak up and talk about their ambitions if they feel insecure. They will not feel secure if they know their interactions are constantly being monitored.

More Government spending on the collection and analysis of communications data, without deeper consideration of how civil liberties such as the right to freedom of private correspondence can be protected, is not a way forward. Some progress might be made if there is a public debate about the limits to targeted access to communications data and about the law governing by whom and when these data can be accessed.

The deeper challenge is to start imagining how Government can be responsive to its citizens’ demands for improvements in their lives and livelihoods instead of answering the siren call of accumulating huge quantities of data, investing in superfast computing and creating sentient software on the premise that doing so will make citizens safer.

Some of these arguments are set out in my book – Imagining the Internet – published by Oxford University Press in July.