Digital Technology and Government

In the modern era, the work of government came to be seen less as the plaything of unseen forces and more as a field of earthly endeavour. Rational analysis, public reason, and methodical administration gradually replaced religion as the basis of political legitimacy. Revolutions in America, France, and elsewhere were followed by intensive periods of rationalisation and reorganisation. Languages were standardised. Unified weights and measures were rolled out. Codes and constitutions—numbered, systematic, lexically consistent—were introduced in an effort to bring precision and structure to the work of government. By the early twentieth century, the sociologist Max Weber could hail the “precision instrument” of bureaucracy—“speed, unambiguity, knowledge of the files, continuity, discretion . . . unity”—as the most advanced method of social organization then known.

Nearly a century later, we are witnessing another transformation in the way humans live together. Digital technology has begun to alter, irrevocably, the nature of our collective life, and new political forms are emerging that have not been seen before. The purpose of this article is to outline three ways in which politics might change. The first concerns the democratic process. The second concerns public administration. The third concerns the enforcement of laws. This is not a comprehensive overview, even of these topics. It is intended simply to illustrate the nature of the issues now facing political scientists and technologists. For a detailed consideration of developments in digital technology and their philosophical implications, see Future Politics: Living Together in a World Transformed by Tech (Jamie Susskind, Oxford University Press: 2018).

THE DIGITAL LIFEWORLD

Three important developments are happening at once:

1. Increasingly capable systems: a growing number of digital systems are able to perform tasks which were previously thought to require conscious, creative human intelligence. Many of these systems are referred to as being, or possessing, ‘Artificial Intelligence’ (AI).

2. Increasingly integrated technology: technology is no longer confined to desktop ‘computers’ or the glass tablets we keep by our sides. Increasingly it is dispersed into the physical world around us. Twenty-first century cities will be dense with sensors, interfaces, and processing power, with billions of (previously inanimate) objects connected to each other and to their human users. The distinction between cyberspace and ‘real’ space will grow less meaningful.

3. Increasingly quantified society: humans now generate roughly as much data every two hours as they did from the dawn of civilisation until 2003—and that rate is increasing. In the past, most human activity was forgotten and lost to time as soon as it took place. In the future, more and more of the human experience—what we say, where we go, what we do, what we buy, how we feel, who we know—will be caught and stored as data.

In the first instance, it is not necessary to dwell on the possible challenges posed by artificial general intelligence—those AI systems that have consciousness and creativity, or which can ‘think’ or act laterally across multiple domains like human beings.

Perhaps one day we will elect robots to parliament or submit the general will of the people to a singular superintelligence. But we are not there yet—or even close. It is prudent, instead, to proceed on the conservative basis that AI will continue to develop in a range of narrower domains—with distinct systems increasingly capable of performing discrete tasks. The task for political scientists and policymakers is to discern how such systems might affect the institutions of government and politics which we have inherited from the past—and to develop the concepts and arguments needed to analyse and critique any such effects.

Digital Technology and Politics

Why should we expect political consequences from technological change?

It is tempting to consider that politics might be different from other fields of endeavour being transformed by technology: commerce, entertainment, transport, social life, education, and the like. In reality, however, politics may well be more sensitive to technological change. This is because of the close connection between (i) the way we gather, store, analyse, and communicate our information, and (ii) the way we structure our collective life.

In the past, revolutions in information and communication technologies were usually accompanied by revolutions in politics. In fact, politics as we understand it was impossible before the invention of language. And the first empires rose to prominence shortly after the invention of writing—in its time, the most advanced information and communication technology.

In Empire and Communications (1950), Harold Innis notes that the empires of Egypt, Persia, and Rome were all “essentially products of writing.” Nearly 5,000 years later, the introduction of the printing press was followed by seismic political upheaval in Europe, as new and subversive ideas were disseminated with unprecedented speed and accuracy (see Elizabeth Eisenstein, The Printing Press as an Agent of Change, Cambridge University Press: 1979).

In the twentieth century, bureaucracy and technology developed hand-in-hand, and the apparatus of government grew increasingly reliant on the effective gathering, storage, and communication of information. It is no coincidence that the punch-cards and tabulating machines used to process the 1890 U.S. census provided the technological foundation for what later became the International Business Machines Corporation—IBM. Technological progress is often followed by changes in the political sphere.

What changes can we expect in the future?

THE DEMOCRATIC PROCESS

The Internet has already caused the democratic process to evolve in various ways: how parties mobilize activists, the way analysts aggregate public sentiment, citizens’ means of interacting with politicians and lobbying government, the tools used to monitor political developments, and so forth. But if we examine the two fundamental elements of the democratic process as we currently understand it—deliberating and deciding—it is possible to glimpse more substantial changes in the future.

Deliberation is the process by which members of a community discuss political issues in order to find solutions that can be accepted by all (or most) reasonable people. The Internet has already revolutionised the nature of the forums we use for deliberation. For ordinary citizens, a growing proportion of political news-gathering and debate takes place on digital platforms owned and controlled by private entities. This has its benefits, but the risks are also becoming clear: algorithmic polarisation, social fragmentation, and the proliferation of ‘fake news’. Another source of growing unease is the privately-made determinations about who may participate in the deliberation process (and who is blocked or banned) and what may be said (and what is prohibited). Every time a controversial public figure is exiled from a social network, the prohibition is met with equal choruses of derision and approval, usually along predictably partisan lines.

Looking ahead, it is entirely foreseeable that humans may cease to be the only participants in their own deliberative processes. AI systems—sometimes called chatbots—are increasingly able to converse with human beings using natural language. Most of their political interventions are crude, limited to slogans like “#LockHerUp” or “#MAGA.” And they do not “think” in the way that humans do. But they already have an appreciable impact on political discourse. Around a fifth of all tweets discussing the 2016 U.S. presidential election, for instance, and a third of Twitter traffic relating to the 2016 Brexit referendum, are thought to have been generated by digital systems. In the buildup to the 2018 U.S. midterms, around 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots. In the days after the disappearance of journalist Jamal Khashoggi in October 2018, Arabic-language social media erupted in support for the Saudi Crown Prince Mohammed bin Salman, widely rumored to have ordered his murder. In one day, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets, and “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” The majority of these messages were generated by chatbots.

It is important to recognize that bots in the future will be able to deliberate in ways that rival—and even exceed—human levels of sophistication. In 2018, a bot reportedly achieved a score of 81 percent in the clinical examination for admission to the Royal College of General Practitioners in the United Kingdom; the average score for human doctors was 72 percent. It is not difficult to imagine that AI systems in the future might come to surpass us in our ability to debate, and not just because of the dismal state of political discourse. And tomorrow’s bots will have faces and voices, names and personalities—all engineered for maximum persuasion. “Deepfake” videos—showing celebrities and politicians speaking words which they never said—hint at the potential for the synthesis of persuasive human speech by nonhuman actors.

The obvious risk is that citizens are crowded out of their own public discourse by lightning-fast systems ready to swat aside the feeble contributions of their human creators. Realistically, the most capable of such bots would be owned and controlled by the wealthiest actors, whose interests would inevitably be rewarded with a greater share of the public discourse.

A more positive prospect is that bots could be deployed in a public-spirited fashion, prompting and nudging us toward dialogue that is more constructive, well-informed, and balanced. What ultimately matters, therefore, is (i) how these systems are engineered, (ii) who owns and controls them, and (iii) the uses to which they may permissibly be put. (For instance, the Bot Disclosure and Accountability Act, introduced by Senator Dianne Feinstein in the U.S. Senate, seeks to prohibit candidates and parties from using any bots intended to impersonate or replicate human activity for public communication. It would also stop political action committees (PACs), corporations, and labour organizations from using bots to disseminate messages advocating candidates.)

Aside from deliberation, digital technology could change how we decide, i.e., the voting process itself.

The notion of direct democracy—disregarded for centuries because of the size and complexity of modern polities—is no longer a fantasy. It is possible, if not necessarily desirable, that future citizens might be able to vote on several policies each day, using smartphones or whatever replaces them, in an unending process of plebiscitary engagement. It will also be possible for people to delegate their vote on certain issues to others whom they trust—for instance, allowing a consortium of architects and urban planners to cast their vote on matters of city design. This is so-called ‘liquid democracy.’ Looking to the longer term, as Pedro Domingos has suggested (The Master Algorithm, 2015), it is possible to conceive of ‘democracies’ in which AI systems ‘vote’ hundreds or thousands of times a day on citizens’ behalf. How better to represent the people, the argument might run, than by deploying systems which (i) analyse data that offers an accurate portrait of citizens’ actual lives, interests, and circumstances, and (ii) have been told their values and mandated to vote consistently with them? Such a process could make a convincing claim to being more ‘democratic’ than one which merely permits citizens to scratch a tick in a box every few years as a means of choosing between a handful of candidates.
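The delegation mechanism at the heart of ‘liquid democracy’ can be sketched in a few lines of code. The sketch below is a minimal illustration under stated assumptions (a hypothetical single-issue ballot, named citizens, and a rule that a delegation chain which loops or dead-ends simply casts no vote), not a design for a real voting system.

```python
# Minimal sketch of vote delegation in a 'liquid democracy'.
# Each citizen either votes directly or delegates to someone they trust;
# delegations are followed transitively until a direct vote is found.

def resolve_vote(citizen, delegations, direct_votes):
    """Follow a citizen's chain of delegations to a direct vote, if any."""
    seen = set()
    while citizen not in direct_votes:
        if citizen in seen or citizen not in delegations:
            return None  # cycle or dead end: this vote is never cast
        seen.add(citizen)
        citizen = delegations[citizen]
    return direct_votes[citizen]

def tally(citizens, delegations, direct_votes):
    """Count one resolved vote per citizen."""
    counts = {}
    for c in citizens:
        choice = resolve_vote(c, delegations, direct_votes)
        if choice is not None:
            counts[choice] = counts.get(choice, 0) + 1
    return counts

# Carol trusts Bob, Bob trusts Alice; Alice and Dan vote directly.
citizens = ["alice", "bob", "carol", "dan"]
delegations = {"bob": "alice", "carol": "bob"}
direct_votes = {"alice": "yes", "dan": "no"}
result = tally(citizens, delegations, direct_votes)  # {"yes": 3, "no": 1}
```

Real proposals for liquid democracy differ on exactly these design choices: whether delegation is transitive, whether cycles invalidate votes, and whether delegates know how many votes they carry. The code above makes one arbitrary choice on each point.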

More democracy, of course, is not always better. We would not, for instance, want our choice of cancer treatment to be determined by the crowd rather than by a single trained oncologist. Theorists in the modern liberal tradition have long sought to identify the proper limits to what the people should decide—with human rights and the rule of law carefully curtailing the untrammelled will of the demos. But the issue is not closed. On the contrary, as technology enables more aspects of public life to be democratised, certain voices are likely to claim that more democracy is always better. That claim, in turn, is likely to form the faultline for several new political divides.

PUBLIC ADMINISTRATION

Much of the business of government involves officials making and implementing decisions without immediate democratic oversight. In their totality, such decisions are of great social significance. They shape our interaction with the state and our experience of being a citizen. Sound public administration—efficient and informed decision-making, good record-keeping, appropriate accountability, the absence of corruption, fair allocation of resources, proper exercise of discretion—is integral to the political health of a nation. What might the role of technology be?

In truth, we already trust digital systems with important decisions. Algorithms trade stocks and shares on our behalf. Machine learning systems diagnose our lung cancers and skin cancers. It should not be controversial as a matter of principle that digital systems might play a part in the work of government: if such systems are better able (for instance) to manage a city’s water supplies, regulate its traffic flows, monitor tax compliance, record its property ownership, administer social security benefits and the like, then why would they not be put to use? It would make a welcome change from the application of such technologies solely for the pursuit of profit. It may reasonably be predicted that the ‘precision instrument’ of bureaucracy will, in time, be superseded by the superior system of digital technology.

What about decisions that involve moral or political judgments? Is it desirable for algorithms to be making choices about the distribution of vital social goods or the ambit of individual liberty (such decisions not always being put to the people)? One response is that they already do. For instance, in most modern economies, algorithms play a significant and growing role in determining: (i) whether and on what terms individuals receive insurance, (ii) whether and on what terms people and businesses can access mortgages and credit, (iii) the distribution of employment opportunities (it is said that 72 percent of résumés are never read by human eyes), and (iv) the appropriate length of prison sentences for offenders. Because some (though by no means all) of these algorithmic usages originate in the private sector rather than the state, they are sometimes mischaracterised as ‘merely’ commercial and therefore apolitical. But the way these algorithms are engineered, the data on which they are trained, and the values they embody, are not, and in any event should not be treated as, matters of mere corporate policy. They determine citizens’ rights and their access to social goods. They are unquestionably of political and moral significance. It is a quintessentially political question whether the operation of such algorithms should be left to the free market (often operating in ‘black box’ obscurity), or wholly adopted by the state, or perhaps merely overseen by the state or public agencies acting in a regulatory capacity.

There is, of course, a legitimate concern that systems should not independently be making moral decisions—in ways which we might not agree with or even understand. Behind every digital system, however, is a human designer, owner, or controller who ultimately decides (or fails to decide) the moral direction which that system must follow, either by the way it is engineered or the data on which it is trained. The substance of such decisions, and the processes by which we make them, will require the closest political scrutiny. We are not yet in a world of morally autonomous AI systems, albeit that is the direction of travel. The need for transparency and accountability will grow in line with the number and importance of the functions assumed by technology. Tech firms and government agencies will need to report, voluntarily or otherwise, on the operation of their algorithms and their use of data, so that citizens are better able to understand their relationship with the forces that govern them. It is sometimes said of some machine learning systems that the decisions they reach are genuinely beyond the control or understanding of their human creators: even the best engineers cannot explain why such systems do what they do. If that is so, then there is a strong principled argument that such systems should not be used in the work of public administration at all. The same may be said of systems whose decision-making processes cannot be adequately explained or described.

One persistent concern is that replacing bureaucracy with technocracy might deprive citizens of the ‘human touch’ in their interactions with the state. This fear is not new. But it is also not necessarily determinative of the issue. First, one may doubt whether bureaucracy (as Weber understood it) is itself particularly humane. Many organs of government, not to mention individual bureaucrats, are unhelpful, inaccessible, and obdurate. Secondly, many citizens would prize efficiency over the human touch in any event: I would rather my social security payments were distributed on time through a faceless blockchain system than late by a friendly but incompetent official. Finally—and more radically—the so-called “human touch” may not be the exclusive preserve of humans for long. AI systems are increasingly able to read our emotions and respond to them in sophisticated ways. “Artificial emotional intelligence” and “affective computing” are developing at an impressive speed.

ENFORCEMENT OF THE LAW

A third domain in which digital technology might be expected to transform the work of human self-government (and the final domain considered in this article) is the enforcement of the law.

Much commentary has focused on the problems of constant surveillance and data-gathering—and the problems are no doubt significant—but it often misses a deeper issue for the long term. As we come to rely on digital technology for more and more of our basic daily needs and functions, we will increasingly be subject to the rules and laws that are coded into such technologies. The best early example is digital rights management technology, which has already made it almost impossible to commit certain copyright breaches. Looking further ahead, a self-driving car which refuses to drive over the legal speed limit (or a limit determined by its manufacturer) is a quite different socio-legal construct from a human-controlled vehicle which may be driven over the limit subject to the risk of likely (but not definite) penalty if caught. To use an analogy employed by Lawrence Lessig in a different context, it is the difference between a locked door and a door which says “do not enter.”

Digital technology not only introduces the prospect of self-enforcing laws, but also laws that are adaptive. A self-driving vehicle may well be subject to changeable speed limits depending on the time of day, the weather conditions, traffic, and the identity of the passenger.
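To make the contrast concrete, the difference between a rule enforced by penalty and a rule enforced in code can be sketched as a simple speed governor. Everything here is hypothetical: the base limit, the adjustment rules, and the function names are illustrative assumptions, not any manufacturer's actual logic.

```python
# Hypothetical speed governor: the 'locked door' version of a speed limit.
# The limit in force is adaptive, recomputed from conditions at runtime,
# and the vehicle is simply incapable of exceeding it.

BASE_LIMIT_KPH = 110  # assumed default limit, for illustration only

def current_limit(weather, hour, congestion):
    """Compute the limit in force under the given conditions (illustrative rules)."""
    limit = BASE_LIMIT_KPH
    if weather == "rain":
        limit -= 20               # lower limit in bad weather
    if hour >= 22 or hour < 6:
        limit -= 10               # lower limit at night
    if congestion > 0.7:
        limit = min(limit, 50)    # heavy traffic caps speed outright
    return limit

def governed_speed(requested_kph, weather, hour, congestion):
    """Grant at most the limit in force, whatever the driver requests."""
    return min(requested_kph, current_limit(weather, hour, congestion))

governed_speed(130, "clear", 14, 0.1)  # 110: the cap cannot be exceeded
governed_speed(130, "rain", 23, 0.1)   # 80: rain and night lower the limit
```

A human-driven car corresponds to removing the final min(): the request is always granted, and enforcement, if any, happens afterwards through detection and penalty.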

It has long been recognised by legal scholars that in “cyberspace”, code is law. The rules contained within the code that constitutes a program or platform stand (usually) as unbreakable constraints on action. A document cannot be accessed without the correct password; a tweet may be no longer than 280 characters. But the precept that code is law must now be updated and expanded to encompass (i) the fact that code is no longer confined to ‘cyberspace’ (see the example of the self-driving car), and (ii) the fact that code is increasingly dynamic and “intelligent”, rather than the immutable architecture of the past.

Code, therefore, constitutes a new and strange form of power which will benefit the state—if laws are embodied in code—but also the private entities who write that code, and who can choose which additional rules they wish to see enforced.

Humankind’s journey into the future ironically marks a reversion as well as progress: a return to a time when we entrusted our political affairs to powerful unseen forces whose workings we could not always claim to understand. The consequences cannot confidently be expected to be either wholly benign or wholly malign: what matters is how the technologies in question are engineered, who owns and controls them, and the uses to which they are put. To what extent should our lives be governed by powerful digital systems—and on what terms? That is the central political question of this century.

This article was originally published in the Berggruen Institute’s Renewing Democracy in the Digital Age Report

About The Berggruen Institute

The Berggruen Institute’s mission is to develop foundational ideas and shape political, economic, and social institutions for the 21st century. Providing critical analysis using an outwardly expansive and purposeful network, we bring together some of the best minds and most authoritative voices from across cultural and political boundaries to explore fundamental questions of our time. Our objective is enduring impact on the progress and direction of societies around the world.