
The Ethics of Cyber-War: Are we Prepared for Web War One?

Dr Matt Sleat

University of Sheffield

 

If we take history as our guide, war and violent conflict seem to be permanent albeit regrettable features of human experience. But warfare changes; and the recent introduction of cyber-weapons represents one of the most radical and far-reaching changes in how conflicts between states take place at least since the advent of nuclear weapons some 60 years ago (arguably in the history of warfare).

Cyber-space has recently been recognised as the fifth domain of warfare by the United States (alongside land, sea, air, and space), and the UK government has only in the past few months announced that its cyber-strategy now goes beyond simply securing us against cyber-attacks to actually developing our own cyber-weapons for future employment. Though the definition of cyber-war remains a matter of intense dispute, at the most basic level it can be described as an action carried out by a nation-state, usually via the internet, to penetrate another nation’s computers or information networks for the purpose of causing damage and disruption to its critical infrastructure.

Though it might sound like science fiction or the preserve of Hollywood blockbusters (it is the focus of the fourth instalment of the excellent Die Hard franchise, for instance), cyber-war has been a part of contemporary conflicts for at least a decade. Cyber-war is here and it is happening. Yet many are beginning to question whether our traditional ethical frameworks for thinking about the morality of warfare, frameworks that were developed after World War Two in response to conventional kinetic weapons, apply to weapons employed in the cyber realm.

I have just returned from a conference at the Centre for High Defence Studies (Rome) on ‘The Ethics of Cyber Conflict’ dedicated to exactly this issue. The conference was organised by NATO’s Cooperative Cyber Defence Centre of Excellence (CCDCOE) and attempted to take a first step in filling the ethical and policy vacuum that currently exists at the international level in relation to cyber-war, exploring from different practical and theoretical perspectives the numerous ways in which the novelties of cyber-weapons fundamentally challenge or problematise our traditional understanding of the morality of warfare. What motivated this conference was a genuine worry (or maybe fear, if that is not putting it too strongly) that cyber-weapons will play a significant role in any future conflict, yet we currently lack an appropriate moral compass that can help us think through the novel ethical issues that this new domain of warfare gives rise to. Are we prepared for ‘Web War One’?

So what are these distinct issues that we urgently need to think through? Let me just mention a few. One of the biggest issues surrounding cyber-attacks is what is called the ‘attribution problem’. It is well-established in both ethics and law (e.g. Article 51 of the UN Charter) that a state is permitted to defend itself when it is the victim of aggressive actions on the part of another state. In traditional warfare it is more often than not very clear exactly whose tanks are crossing your borders, whose planes are dropping bombs on your munitions factories, or whose soldiers are shooting at you (indeed such public disclosure of the identity of the aggressive actor is required by international law).

The case is very different in cyber-space. There are two main reasons for this. The first is that the internet was purposefully designed to be security-light in order to enable the greatest degree of access and communication. A consequence of this is that the vast majority of actions in cyber-space are completely anonymous and take place without requiring the declaration of one’s identity. The second is that it does not take a computer expert to re-route a connection through servers located anywhere else in the world, making it seem as though the connection is coming from South Korea when the ‘cyber-warrior’ is sitting in Skegness. The consequence of these two factors is that when an attack is taking place it is incredibly difficult to establish with any high degree of certainty exactly who the aggressor state is.

This is not a theoretical problem; it has happened. For example, in 1998, amid escalating tensions between the United States and Iraq over weapons inspections (so cyber-attacks have been a feature of modern warfare for well over a decade), the United States’ Department of Defence (DoD) realised that its network security was being systematically breached. Initially the DoD were unable to establish who was behind such breaches, though the fact that tensions between the US and Iraq were high, and that the attacks seemed to be coming from the Middle East, meant that most analysts and military chiefs strongly suspected that Saddam Hussein’s regime was responsible. It turned out that the real culprits were two Californian teenagers, aided by an Israeli man, who had managed to reroute their connections through the Middle East to mask their actions (this event has been code-named SOLAR SUNRISE). All signs reasonably pointed to Iraq. Yet Iraq was completely innocent. And in more recent attacks, while again there is good reason to suspect that Russia was behind attacks on Estonia and Georgia given the geopolitical situations at the time (in the case of the cyber-attacks on Georgia, the two countries were actually at war), its responsibility cannot be proven to the degree of certainty that many think would justify a retaliatory response. For all we know Russia could indeed be totally blameless, and any military response would hence be unwarranted and potentially immoral. This raises the crucial question: to what degree of certainty must it be possible to attribute responsibility to a state for perpetrating a cyber-attack in order for a military retaliation to be morally, let alone legally, justified? At the moment, we simply have no answer to this question.

The ‘attribution problem’ is compounded by a second issue. Unlike traditional kinetic weapons, bombs and bullets, cyber-weapons are packets of code that cannot in and of themselves harm human beings. Indeed, it is not clear that such weapons are even physical in the normal sense of the word. Furthermore, most cyber-attacks are very unlikely to cause major harm or injury to human beings. They are much more liable to be disruptive rather than violent in nature. To be sure, cyber-attacks might be seriously disruptive: they have the potential to take down entire power-grids, to significantly damage economic activity, to destroy oil or gas pipelines or uranium enrichment plants (as the Stuxnet worm, discovered in 2010, did in Iran), to cut off a government’s pathways of communication to its people and the outside world (as happened in the Georgia attacks), and to seriously undermine a military’s command and control facilities and processes. All of these represent serious potential damage to a state’s vital interests, but none of them are acts of aggression in the traditional Clausewitzian sense of violence: in most cases of cyber-attacks no physical damage need be done to any physical object, nor are persons likely to be harmed (at least not directly). So alongside the question of who the aggressor is in any cyber-attack is the equally significant question of whether, in the absence of any physical intrusion into a territory or actual violence against human beings, a cyber-attack can rightly be considered an act of aggression at all. And if it is not an act of aggression, then how can any military response be justified?

So cyber-war throws into doubt the most basic questions of warfare: are we being attacked and who is attacking us? In academic and legal terms, this requires us to ask whether a cyber-attack provides a casus belli, i.e. a justification for going to war (though there are further interesting questions as to what forms of military response are appropriate to a cyber-attack: only a retaliatory cyber-attack, or could a traditional kinetic attack also be justified?). But cyber-war also poses questions for the morality of actions undertaken during a conflict, what in just war theory is called jus in bello. Most theorists believe that there are moral rules that limit what states can do to one another even when they are at war. One of the most important of these moral laws is the principle of discrimination. In both ethics and international law there is a strong prohibition against killing non-combatants: all acts of war should be directed towards enemy combatants and not towards those who are not involved in the conflict. The killing of civilians, often classified as non-combatants or innocents, who are playing no role in the war effort is therefore prohibited.

Yet cyber-attacks can be highly indiscriminate. An attack which took down a power-grid or communication services, or significantly disrupted economic activity, is likely to have a major impact on both combatants and non-combatants. The deeply integrated nature of much of our critical infrastructure ensures that such discrimination would be very difficult to achieve in even the most sophisticated of cyber-attacks. Taking a more global view, once many cyber-weapons have been employed, it is often the case that even those who developed them will be unable to adequately track whom they will target. This is especially true for computer viruses and worms that are specifically designed to replicate themselves so as to infect numerous computers and networks. As of September 2010, for instance, the Stuxnet worm which was originally employed to infect Iranian uranium enrichment plants had been found to have infected over 100,000 computers in 155 countries across the world! That Stuxnet was designed to damage a specific target, and that the vast majority (if not all) of these 100,000-plus computers were not involved in the enrichment of uranium, meant that such widespread and indiscriminate infection caused little harm. Yet a cyber-weapon that was less specific in its target and objective and which spread in a comparable manner would clearly be much more of a problem.

It is one of the ironies of cyber-war that those nations whose populace and infrastructures are most dependent upon and integrated into cyberspace, like ourselves and the US, are most vulnerable to cyber-attacks. Our dependency makes us vulnerable. And hence it is probably right that we prepare ourselves for conflict in the cyber-domain, which must include developing our own cyber-weapons. But we have a moral responsibility to properly consider the ethical ramifications of such conflict before we find ourselves faced with some of the issues I’ve briefly mentioned. Yet this discussion has only just begun.


Nelson Mandela

Professor Graham Harrison
University of Sheffield

 

At Nelson Mandela’s national memorial, ANC Deputy President Cyril Ramaphosa and Barack Obama hailed Nelson Mandela as a ‘founding father’. This is a powerful tribute to a great man. Ramaphosa was referencing Mandela’s determination to negotiate only when the apartheid National Party agreed to non-racial elections for a unitary government; his ability to embody a historic moment of racial reconciliation; and the dignified way in which he conducted himself during his presidency. This is why Mandela presided over a largely non-violent democratic transition embedded in a new constitution and elections in 1994. All of these traits lend to Mandela an image not simply as a head of state or political leader, but also as an ideal of what political leadership should be.

Obama, by contrast, did what many politicians have done since Mandela’s death, which might be described as virtue by analogy: his struggle was our struggle, he encapsulated a spirit that we all aspire to, Mandela speaks to us all. In particular Obama associated the United States’ constitutional history with that of South Africa’s non-racial constitution: ‘Like America’s founding fathers, he would erect a constitutional order to preserve freedom for future generations’. Of course, America’s Founding Fathers would hardly have agreed with Mandela’s vision of a ‘rainbow nation’; in fact most would not have listened to him as a citizen or countryman. But the point of having founding fathers is not respect for historical accuracy. What matters is effective myth building. Myths of origin are part of the DNA of so many modern state and nation building projects, whether expressed through the imagery of founding fathers, constitutional assemblies, monarchies, cultural-ethnic ancestors and heroes, or religious texts and places of worship. In this sense, Obama and especially Ramaphosa’s evocation of Mandela as South Africa’s founding father repays some reflection. The political work of his memorialisation offers a rough metric of the current progress of South Africa’s non-racial national project.

Calling Mandela a founding father in South Africa’s troubled times highlights how badly this country needs strong and stable historical narratives and political ideals. Currently, South Africa suffers from income inequality that might be greater than it was at the end of apartheid. This is evident in its political economy: a lack of housing and basic services, perhaps a quarter of people living on less than $1.25 a day, perhaps 40% of the working population either unemployed or insecurely employed. Historically-entrenched and still significantly racialised inequalities in property and incomes remain. Crime is a massive problem, as are sexual violence and HIV/AIDS. Issues concerning corruption, xenophobia, the creation of an ANC party-state, and the extent to which a ‘black bourgeoisie’ can drive a more inclusive development project have generated many political tensions and battles in South Africa’s highly politicised society. And, above and within all of this, the South African economy remains heavily dominated by international capital and reliant on a small group of mineral and agricultural exports.

Little surprise, then, that South African nation-building – an aspiration that seemed so possible when Mandela was sworn in – now seems to take the form of a fairly weak mantra: ‘we have made some progress, but patience is required…’ or even worse, ‘there is no alternative’. This is not the stuff of strong national purpose, nor is it evocative of the myriad and vigorous political streams of socialism and social democracy that characterised the struggle for South Africa in the 1970s and 80s. And, before we are entirely socialised into a liberal airbrushing of history, it is worth remembering that, for all of the personal integrity of Nelson Mandela and for all of the concerted struggles of anti-apartheid movements around the world, it was the civic associations, labour unions, ANC branches, and other popular organisations that made apartheid unsustainable both politically and economically.

Mandela’s passing and the State Memorial taking place today seem to express the anxieties of South Africa’s progress, as each speaker endeavours to connect Mandela’s remarkable image of political virtue with something grander and aspirational for the country. One aspiration shared by all of the South African speakers was to memorialise Mandela, make myths from his history, and revitalise the inclusive national project.

Amongst the speakers, those who participated in the struggle against apartheid shout amandla! (power) and viva! (long live), the core vocabulary not only of resistance against apartheid but also of political radicalism: of socialism, national redistribution, the overthrow of apartheid as a step towards ambitious social justice. As the current President’s image was shown on the big screens, and as he stepped up to the podium, there was a considerable amount of booing. But there were also cheers, and Zuma was the only person to give a balanced narrative of the extent to which apartheid’s devastation was far greater than Mandela’s incarceration. Indeed, anyone born in the 1980s or after, listening to the international mass media narratives, would be forgiven for thinking that apartheid was a brief phase in South Africa’s history, defined by one man’s imprisonment and seemingly benefitting no-one in particular. For all of his laboured delivery, Zuma did remind those watching of the difficulties of forgiving without retrospectively trivialising apartheid’s history and legacy. But, as the crowd became quieter and the rain continued, one could almost imagine this as a national lament rather than a revitalisation.

The fact is that South Africa’s transition to a non-racial democracy has not addressed the powerful historical legacies of centuries of racism and dispossession. To speak of ‘the nation’ in the South African context is to speak aspirationally rather than of a robust working fiction. After nearly twenty years, poverty and inequality remain severe social blights, racism remains even if implicit or coded, and the South African economy’s periods of growth have not powered a process of socially beneficial development. So the bigger question – and the reason why Mandela’s memorialisation has been both so prolific and so tense – is: founding father of what?

 


Election 2015: ‘Don’t Vote, It Just Encourages the B**tards’

Professor Matthew Flinders

Director of the Sir Bernard Crick Centre for the Public Understanding of Politics

Without a whistle or a bang from a starter’s gun, the 2015 general election campaign is now well under way. Labour’s proposed freeze on energy prices marks a first tentative attempt to seize the pre-election agenda, while the Chancellor’s autumn statement next month looks set to respond by including measures aimed at cutting the cost of living.

Although Russell Brand’s recent interview with Jeremy Paxman is unlikely to be remembered as a ‘classic’ political interview that redefined a debate, or shaped a career, it did draw attention to the fact that the British public is becoming increasingly disengaged from politics. The 2013 ‘British Social Attitudes Survey’ revealed that only a small majority of the public now turns out to vote, and fewer than ever before identify with a political party. The United Kingdom is by no means unique in experiencing a strained relationship between the governors and the governed. A quick glance at the titles of recent books on this topic – Why We Hate Politics (Colin Hay), The Life and Death of Democracy (John Keane), Don’t Vote, It Just Encourages the B**tards (P.J. O’Rourke) – reflects the fact that ‘disaffected democrats’ appear to exist in every part of the world. But what is to be done?

There are, as Bernard Crick emphasised in his classic In Defence of Politics (1962), no simple answers to complex questions. Yet, in the intervening half-century since Crick’s classic book was published, public attitudes to political institutions, political processes, and politicians have become increasingly negative. Today three-quarters of the public feel the political system is not working for them; younger people are less likely to identify with a political party, less likely to believe voting is a civic duty, and less likely to have engaged in any conventional political activities. Recent research suggests that only 12% of those aged 18-25 will definitely vote in 2015. The other 88% are seemingly unsure whether it is worth voting at all.

In this context Russell Brand’s arguments appear slightly more sophisticated than some of his critics might have appreciated. The comedian’s position was not that people should not vote but that it was rational for people not to exercise their democratic right when there was no real choice between the main parties. To do otherwise was simply to participate in a sham that actually gnawed away at the health of democratic politics. The problem with this argument is that it polarises the debate around a set of rather crude options: ‘Vote!’ versus ‘Don’t Vote’, the ‘engaged’ versus ‘the disengaged’, ‘politicians’ versus ‘comedians’. Ok, so I made the last bit up, but you know what I mean.

Surely the real question is how to make voting matter? For some people this might involve the reform of the electoral system, or compulsory voting, and these ideas merit consideration (although in Australia, where voting is compulsory, levels of public trust in politics make the United Kingdom look positively healthy). However, a simpler and more effective reform would be the addition of a ‘None of the Above’ option on all ballot papers. In this way citizens could make a formal and recognised contribution to the electoral results without having to demonstrate their frustration through spoiling their ballot paper or simply not bothering to vote. The danger is that the ‘None of the Above’ option might actually win quite a few elections!

A more radical option involves the introduction of term limits for MPs. Let us say – for the sake of argument – a maximum of two or three terms, after which the individual would have to leave Parliament and serve the same period ‘in the real world’ before being eligible to stand for re-election. Introducing a term limit would at least end the current system – the political equivalent of ‘bed blocking’ – by shaking up the notion of ‘safe seats’. It would give more chances to more people, and would offer a balance between stability and fresh thinking. It would also make voting a far more significant political act. Although I am no ‘Mystic Meg’, my guess is that our current MPs will hate this idea. Protestations and all manner of reasons not to open up a debate on this matter will inevitably pour forth, but my interest lies not with the current generation of politicians but with the future.

An even more radical option involves engaging with the public to solve the pressing political issues. Take reform of the House of Lords, for example. How many public consultations, Royal Commissions, and parliamentary inquiries can one issue consume before someone admits that a more radical approach is needed? This might involve the creation of a Citizens’ Assembly on the Second Chamber modelled on British Columbia’s Citizens’ Assembly on Electoral Reform. A broad selection of members of the public drawn from all across the United Kingdom, in a way that broadly reflects society at large (let us say 200 people), would become members of the Assembly. Once selected, the Assembly would work through three main phases. In the first, educational phase, all the members would be brought together for a number of lectures and seminars about the role of second chambers and their composition. In the second stage, the focus would be on community engagement as the Assembly Members went back to their localities to organise and lead a number of community discussion groups, with the aim being to gauge the views of the public at large. The final stage would focus on decisions. After debating the various options on the basis of an informed understanding of the topic, and presenting a cross-section of views drawn from their community engagement, the Assembly would come to a final recommendation to offer the public.

The campaign could start right here, right now with this blog: the Campaign for a Citizens’ Assembly on Lords Reform. For the canny politician with fixed-term elections in mind, this idea has the twin attraction of making them appear a dynamic democrat while also kicking the issue of Lords Reform into the long grass for the length of the 2015-2020 Parliament. Establish the Citizens’ Assembly in 2015 (or 2016 by the time the post-election dust has settled), with the requirement to announce its final recommendation by the end of 2018, add a year to ensure a thorough public information campaign, and then a public referendum on the Assembly’s recommendation can be held in conjunction with the May 2020 General Election. Perfect.

I can already hear the naysayers denying the public’s capacity to deal with such complex issues, or denying that an Assembly could ever come to a clear single recommendation, but international experience leads me to a different conclusion.

'Do you have confidence in national government?' survey

Confidence in Democracy Crumbles

Matthew Wood, Deputy Director, Crick Centre and Visiting Fellow, ANZSOG Institute for Governance

The latest data from the OECD show that trust and confidence in liberal democratic governments have fallen to depressing new lows. The solutions it offers, though, are more of the same old ‘good governance’ accountability agenda that has been pursued since the 1990s. Could it be that we need more innovative ways of engaging the public to really invigorate our ailing democracies?

This week The Economist published a short but prescient article lamenting ‘crumbling confidence’ in liberal democratic governments across the OECD. The article presents some fascinating data from a report that gives us yet more cause to worry about ‘democracy in crisis’. The headlines are as follows:

  1. Only 40% of citizens in OECD countries trust their governments, down 5 points from 2007.
  2. Trust was hit hard by the financial crisis. In Ireland, Greece and Portugal trust is down by more than 20 points, and in Greece now rests at an astonishingly low 12%.
  3. Trust in ‘emerging’ countries is comparatively healthy, sitting at 54%. Interestingly from a democratic perspective, the Chinese and Indonesian governments (certainly not considered paragons of liberal democracy) come out in the top 3.

Accountability Again

The Economist is certainly right that this data is deeply concerning, even ‘dispiriting’. The questions inevitably raised are ‘why is this happening?’ and ‘what is to be done?’, and here the OECD Secretary General Angel Gurría has been particularly vocal. In a recent speech to the OECD Network of Senior Officials from Centres of Government, he set out a ‘strategy on trust’ that contained some familiar elements: integrity, transparency and engagement:

  • Integrity: ‘Too many citizens believe that corruption in government is widespread. At the OECD we help countries to strengthen trust in the policy making process by developing good practice principles in high-risk areas and monitoring their implementation’.
  • Transparency: ‘Citizens want to know how their money is being spent. Governments must be accountable, and this means publishing and communicating easily digestible budget data’.
  • Engagement: ‘We need to get serious about Open Government as an interactive process that promotes inclusive and responsive policy making through real engagement with citizens. Trust is not only about tackling corruption and putting government data on websites, it is also about giving citizens a voice in the process’.

As much as it’s difficult to find fault in any of these largely agreeable themes, there is a distinct sense of déjà vu here. Since the 1990s, international organisations have promoted ‘good governance’ based on exactly the principles of transparency and openness Gurría advocates. Western states have made reams of government data, reports, accounts, etc. available like never before via Freedom of Information acts and other forms of ‘open government’. We’ve also seen a significant growth in consultations, ‘big society’ programmes, and the like. In other words, we live in an era of what John Keane (2009) calls ‘Monitory Democracy’:

Monitory democracy is a new historical form of democracy, a variety of ‘post-parliamentary’ politics defined by the rapid growth of many different kinds of extra-parliamentary, power-scrutinising mechanisms … Within and outside states, independent monitors of power begin to have tangible effects. By putting politicians, parties and elected governments permanently on their toes, they complicate their lives, question their authority and force them to change their agendas – and sometimes smother them in disgrace (pp.688-689)

Keane goes on to document a stunning array of ‘over one hundred new types of scrutinising institutions’ examining all aspects of political systems, including their inputs (elections), outputs (policy evaluations) and throughputs (processes of decision making within government) (p.690). So, if we are to take the word of Keane’s extensive study, as well as the variety of academic literature on accountability, the world of integrity, transparency and engagement Gurría yearns for is already with us. And yet, as the Gallup survey data shows, it has not, and is not, making us trust our governments more.

Arguably, and controversially, this endless quest for accountability may even have paradoxically fuelled disenchantment with democracy. Matthew Flinders, Director of the Crick Centre, warns that too much transparency, openness and accountability can be as damaging as too little, as it fuels public expectations to unsustainable levels. In a similar vein, political philosopher Stephen Bilakovics even suggests that the very principle of openness may be the source of democracy’s problems, because it exists as an unsustainable ideal of absolute citizen power and control against which the reality of messy political compromises can never measure up.

A Way Forward?

If it is the case, then, that yet more accountability and yet more openness has not, and will not, solve our democratic deficit (and may even exacerbate rather than ameliorate the problem), then what is to be done? Besides the already considerable body of academic recommendations on this topic, if we dig a little deeper into the findings we see a very interesting trend that may be overlooked in the headlines, as the OECD itself notes:

Despite diminishing trust in national government, citizens are generally pleased with the many public services they receive locally in their daily lives. For instance, on average 72% of citizens reported having confidence in their local police force. Almost the same percentage considered themselves satisfied with health care, and 66% were satisfied with the education system.

This point could be crucial for understanding the problem we face. Put simply, citizens seem to trust institutions more when they know more about them, understand how they work, and interact with them more. It would hence be unsurprising that trust in traditionally ‘big government’ Nordic countries is generally higher (as shown in the Gallup data), or that governments regularly holding citizen referenda (such as Switzerland) also come high up the list. By contrast, despite numerous accountability initiatives, large and complex democracies like Britain, Germany, France, the US and Australia are almost inevitably more distant than the relatively smaller countries (or the ones with relatively emergent democratic cultures) higher up the table. In these latter countries, since public interaction with government will often be infrequent and complex, building and maintaining trust may require far more than merely posting impenetrable accounts and reports on anonymous government websites and pretending this is ‘democratic engagement’.

A different approach, then, might be called for. Could it be that finding ways to help citizens understand government beyond ceaseless ‘accountability’ initiatives might help? We could start by enabling people to understand the political process through citizenship education programmes and public engagement initiatives in public arenas – churches, libraries, theatres, cafes, pubs, shopping centres – thinking outside the box a bit. These events could bring together politicians, public figures and other experts (academics, even?) to build a genuine public sphere and help a cynical public understand why politics really matters. The Crick Centre’s recent event on Britain’s House of Lords was an attempt to begin a wider debate about the future of our democratic institutions. Perhaps this approach is optimistic. What the latest data shows, though, is that the old international trust agenda is not working, and we need ways of improving trust that go further and deeper.