To What Extent Should Our Lives Be Governed by Digital Systems?

Jamie Susskind is a barrister and a past fellow of Harvard University’s Berkman Klein Center for Internet and Society. This piece is adapted in part from his award-winning bestseller “Future Politics: Living Together in a World Transformed by Tech,” published by Oxford University Press.

In the past, politics and theology were closely intertwined. Political leaders claimed divine authority. Success in office was considered the product of godly inspiration. Failure was believed to be the cost of divine displeasure. Progress was treated as the gift of deities, spirits and stars.

In the modern era, the work of government came to be seen less as the plaything of unseen forces and more as a field of earthly endeavor. Rational analysis, public reason and methodical administration gradually replaced religion as the basis of political legitimacy. Revolutions in America, France and elsewhere were followed by intensive periods of rationalization and reorganization.

Languages were standardized. Unified weights and measures were rolled out. Codes and constitutions were introduced in an effort to bring precision and structure to the work of government. By 1922, the German sociologist Max Weber could hail the “precision instrument” of bureaucracy — “speed, unambiguity, knowledge of the files, continuity, discretion, unity” — as the most advanced method of social organization known at the time.

Nearly a century later, we are witnessing another transformation in the way humans live together. Digital technology has begun to alter irrevocably the nature of our collective life.

In the past, revolutions in information and communication technologies were usually accompanied by revolutions in politics. In fact, politics as we understand it was inconceivable before the invention of language. And the first empires rose to prominence shortly after the invention of writing — in its time, the most advanced information and communication technology. The Canadian political economist Harold Innis notes in “Empire and Communications” that the empires of Egypt, Persia and Rome were all “essentially products of writing.” Thousands of years after the collapse of these empires, the introduction of the printing press was followed by seismic political upheaval in Europe, as new and subversive ideas were disseminated with unprecedented speed and precision.

In the 20th century, bureaucracy and technology developed hand-in-hand, and the apparatus of government grew increasingly reliant on the effective gathering, storage and communication of information. It is no coincidence that the punch cards and tabulating machines used to process the 1890 U.S. census provided the technological foundation for what later became International Business Machines Corporation — IBM.

More recently, the internet has caused the democratic process to evolve in various ways: how parties mobilize activists, how analysts aggregate public sentiment, how citizens interact with politicians and lobby government, the tools used to monitor political developments and so forth. But if we examine the two fundamental elements of the democratic process as we currently understand it — deliberating and deciding — it is possible to glimpse more substantial changes on the horizon.

**

Deliberation is the process by which members of a community discuss political issues in order to find solutions that can be accepted by all (or most) reasonable people. The internet has already revolutionized the nature of the forums we use for deliberation. For ordinary citizens, a growing proportion of political news-gathering and debate takes place on digital platforms owned and controlled by private entities. This has its benefits, but the risks are also becoming clear: algorithmic polarization resulting in social fragmentation and the proliferation of “fake news.” Another source of unease is the growing (and normally privately held) power to decide who may participate in the deliberation process — who is blocked or banned — and what may be said or prohibited. Every time a controversial public figure is exiled from a social network, the prohibition is met with equal choruses of derision and approval, usually along predictably partisan lines.

Looking ahead, it is entirely possible that humans may cease to be the only participants in their own deliberative processes. Artificial intelligence systems — chatbots, for example — are able to have conversations with human beings using natural language. When they engage in politics, their input is mostly limited to slogans like “#LockHerUp” or “#MAGA.” And they do not “think” like humans. But they already have an appreciable impact on political discourse. For example, about 20 percent of the tweets discussing the 2016 U.S. presidential election and 30 percent of Twitter traffic relating to Brexit in the run-up to the vote were generated by nonhuman entities. In the months before the 2018 U.S. midterms, around two-thirds of the chatter on platforms such as Twitter relating to the “caravan” of migrants moving north through Central America was initiated by chatbots.

It is important to recognize that bots in the future will be able to deliberate in ways that rival — and even exceed — human levels of sophistication. Last summer, a bot reportedly achieved a score of 81 percent in the clinical examination for admission to the Royal College of General Practitioners, the largest medical association in the United Kingdom. The average score for human doctors was 72 percent.

It is not difficult to imagine that A.I. systems in the future might come to surpass us in our ability to debate — and not just because of the dismal state of political discourse. Tomorrow’s bots will have faces, voices, names and personalities all engineered for maximum persuasion. Early “deepfake” videos depicting celebrities and politicians “speaking” words they never said, though not yet entirely persuasive, show the potential for the synthesis of human speech by nonhuman actors.

The obvious risk is that citizens are crowded out of their own public discourse by lightning-fast systems ready to swat aside the feeble contributions of their human creators. Realistically, the most capable of such bots would most likely be owned and controlled by wealthy actors whose interests would inevitably be rewarded with a greater share of the public discourse.

A more positive prospect is that bots could be deployed in a public-spirited fashion, prompting us toward dialogue that is more constructive, well-informed and balanced. What ultimately matters, therefore, is three things: how these systems are engineered, who owns and controls them and the uses to which they may permissibly be put. Senator Dianne Feinstein’s Bot Disclosure and Accountability Act, for example, seeks to prohibit candidates and parties from using any bots intended to impersonate or replicate human activity for public communication. The bill would also stop political action committees, corporations and labor organizations from using bots to advocate for candidates.

Aside from deliberation, digital technology could change how we make decisions — the voting process itself. The notion of direct democracy — disregarded for centuries because of the size and complexity of modern polities — is no longer a matter for hypothetical debate. It is possible, if not necessarily desirable, that future citizens might be able to vote on several policies each day, using smartphones or whatever replaces them, in an unending process of plebiscitary engagement. It will also be possible for people to delegate their vote on certain issues to others whom they trust — for instance, allowing a consortium of architects and urban planners to cast their vote on matters of city design. This has been called “liquid democracy.”
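To make the mechanics concrete, here is a minimal sketch in Python of how a liquid-democracy tally might resolve delegated votes. The function, the data structures and the toy voters are illustrative assumptions, not a description of any deployed system.

```python
# A minimal sketch of liquid-democracy vote resolution (illustrative only).
# Each voter either casts a direct vote or delegates to a voter they trust.
# A delegated vote follows the chain until it reaches a direct vote;
# cycles and dangling delegations are treated as abstentions.

from collections import Counter

def tally(direct_votes: dict, delegations: dict) -> Counter:
    """Count votes after resolving delegation chains.

    direct_votes: voter -> choice (e.g., "yes"/"no")
    delegations:  voter -> trusted voter they delegate to
    """
    results = Counter()
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the delegation chain until a direct vote is found.
        while current not in direct_votes:
            if current in seen or current not in delegations:
                current = None  # cycle or dead end: count as abstention
                break
            seen.add(current)
            current = delegations[current]
        if current is not None:
            results[direct_votes[current]] += 1
    return results

# Example: dana delegates to bo (say, an urban planner), who votes directly,
# and eli delegates to dana, so eli's vote also flows to bo.
votes = {"alice": "yes", "bo": "no"}
proxies = {"dana": "bo", "eli": "dana"}
print(tally(votes, proxies))  # Counter({'no': 3, 'yes': 1})
```

The design choice worth noticing is the treatment of failure: a delegation chain that loops or dangles is counted as an abstention rather than an error, since any real system would need an explicit rule for exactly these cases.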

Looking to the longer term, as the machine-learning researcher Pedro Domingos suggests in his 2015 book “The Master Algorithm,” it is possible to conceive of “democracies” in which A.I. systems “vote” hundreds or thousands of times a day on citizens’ behalf. How better to represent the people, the argument might run, than by deploying systems that analyze accurate data on citizens’ actual lives and values and vote consistently with them? Such a process could make a more convincing claim to being “democratic” than one which merely permits the fraction of the whole population that actually turns up to vote to scratch a tick in a box every few years as a means of choosing between a handful of options.

More democracy, of course, is not always better. We would not, for instance, want our choice of cancer treatment to be determined by the crowd rather than by a trained oncologist. Theorists in the modern liberal tradition have long sought to identify the proper limits to what people should decide — with human rights and the rule of law carefully curtailing the untrammeled will of the demos.

But the issue is not closed. On the contrary, as technology enables more aspects of public life to be democratized, certain voices are likely to claim that more democracy is always better. That claim, in turn, is likely to form the fault line for several new political divides.

**

Aside from deliberating and deciding, the work of government also involves administration and enforcement.

Turning first to administration, officials regularly make and implement decisions without immediate democratic oversight: record-keeping, rule-making, the processing of welfare payments and the allocation of resources, for example. In their totality, such decisions have great social significance. They shape our interaction with the state and our experience of being a citizen. Sound public administration — efficient and informed decision-making, appropriate accountability, the absence of corruption, fair allocation of resources, proper exercise of discretion — is integral to the political health of a nation. What role might technology play in these activities in the future?

In truth, we already trust digital systems with important decisions. Algorithms trade stocks on our behalf. Machine-learning systems diagnose illnesses. It should not be controversial as a matter of principle that digital systems might play a part in the work of government. If such systems are better able to manage a city’s electricity supplies, monitor tax compliance, record property ownership, administer social security benefits and the like, then why would they not be put to use? It would make a welcome change from the application of such technologies solely for the pursuit of profit. It may reasonably be predicted that the “precision instrument” of bureaucracy will, in time, be superseded by the superior system of digital technology.

What about decisions that involve moral or political judgments? Is it desirable for algorithms to be making choices — which are not always put to the people — about the distribution of vital social goods or the ambit of individual liberty? Again, they already do. For instance, in most modern economies, algorithms play a significant and growing role in determining whether and on what terms individuals receive insurance, whether and on what terms people and businesses can access mortgages and credit, the appropriate length of prison sentences for offenders and the distribution of employment opportunities.

Because some of these uses of technology originate in the private sector rather than the state, they are sometimes mischaracterized as merely commercial and therefore apolitical. But the way these algorithms are engineered, the data on which they are trained and the values they embody are not, and in any event should not be treated as, matters of mere corporate policy. They determine citizens’ rights and their access to social goods.

There is, of course, a legitimate concern that systems should not independently be making moral decisions in ways we might not agree with or even understand. Behind every digital system, however, is a human designer, owner or controller who ultimately decides (or fails to decide) the moral direction the system must follow, either by the way it is engineered or the data on which it is trained. The substance of such decisions, and the processes by which we make them, will require the closest political scrutiny. We are not yet in a world of morally autonomous A.I. systems, but the need for transparency and accountability will grow in line with the number and importance of the functions assumed by technology. Tech firms and government agencies will need to report, voluntarily or otherwise, on the operation of their algorithms and their use of data so that citizens are better able to understand their relationship with the forces that govern them.

It is sometimes said of certain machine-learning systems that the decisions they reach are genuinely beyond the control or understanding of their human creators. Even the best engineers cannot explain why such systems do what they do. If that is so, then there is a strong and principled argument that such systems should not be used in the work of public administration at all.

One persistent concern is that replacing bureaucracy with technocracy might deprive citizens of the “human touch” in their interactions with the state. This fear is not new. But it is also not necessarily determinative of the issue. First, one may doubt whether bureaucracy (as Weber understood it) is itself particularly humane. Many organs of government, not to mention individual bureaucrats, are unhelpful, inaccessible and obdurate. Second, many citizens would prize efficiency over human touch. I would rather my social security payments were distributed on time through a faceless blockchain system than late by a friendly but incompetent official. Finally — and more radically — the so-called “human touch” may not be the exclusive preserve of humans for long. A.I. systems are increasingly able to read our emotions and respond to them in sophisticated ways. “Artificial emotional intelligence” and “affective computing” are developing at an impressive speed.

**

A further domain in which digital technology might be expected to transform the work of human self-government is in the enforcement of the law. Much commentary has focused on the problems of constant surveillance and data-gathering — and the problems are no doubt significant. But a deeper issue for the long term is that as we come to rely on digital technology for more and more of our basic daily needs and functions, we will increasingly be subject to the rules and laws that are coded into such technologies.

The best early example is digital rights management technology, which has already made it almost impossible to commit certain copyright breaches. Looking further ahead, a self-driving car that refuses to drive over the legal speed limit (or a limit determined by its manufacturer) is a quite different socio-legal construct from a human-controlled vehicle that may be driven over the limit subject to the risk of penalty if caught. To use an analogy employed by Lawrence Lessig in a different context, it is the difference between a door which says “do not enter” and a door which is simply locked.

Digital technology not only introduces the prospect of self-enforcing laws but also laws that are adaptive. A self-driving vehicle may well be subject to changeable speed limits depending on the time of day, the weather conditions, traffic and the identity of the passenger.
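The contrast between a penalty-backed rule and a self-enforcing, adaptive one is easy to state in code. Below is a hedged sketch in Python; the names, conditions and thresholds are invented for exposition and do not describe any real vehicle’s software.

```python
# Illustrative sketch of a self-enforcing, adaptive speed rule.
# All names and thresholds here are invented for exposition;
# no real vehicle API or regulation is being described.

BASE_LIMIT_KPH = 100

def current_limit(raining: bool, night: bool, heavy_traffic: bool) -> float:
    """Compute an adaptive limit from current conditions."""
    limit = float(BASE_LIMIT_KPH)
    if raining:
        limit *= 0.8   # slower in the wet
    if night:
        limit *= 0.9   # slower after dark
    if heavy_traffic:
        limit = min(limit, 60.0)
    return limit

def apply_throttle(requested_kph: float, raining: bool, night: bool,
                   heavy_traffic: bool) -> float:
    """The requested speed is clamped to the computed limit, so
    exceeding the limit is not an option the driver has."""
    return min(requested_kph, current_limit(raining, night, heavy_traffic))

# The driver requests 130 km/h on a rainy night; the car grants 72.0.
print(apply_throttle(130, raining=True, night=True, heavy_traffic=False))
```

In Lessig’s terms, the apply_throttle function is the locked door: a request above the computed limit is not punished after the fact but simply never executed.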

It has long been recognized by legal scholars that in “cyberspace,” code is law. The rules contained within the code that constitutes a program or platform usually stand as unbreakable constraints on action. A document cannot be accessed without the correct password; a tweet may be no longer than 280 characters. But the precept that code is law must now be updated and expanded: code is no longer confined to “cyberspace” (as self-driving cars show), and it is increasingly dynamic and “intelligent” rather than the immutable architecture it once was.

Code, therefore, constitutes a new and strange form of power that will benefit the state — if laws are embodied in code — but also the private entities that write that code and can choose which additional rules they wish to see enforced.

**

Looking ahead, humankind’s journey into the future will mark a reversion as well as progress. We are simultaneously moving forward and returning to a time when we entrusted our political affairs to mysterious forces whose workings we cannot always honestly claim to understand. The consequences will be neither wholly benign nor wholly malign. What matters is how the technologies in question are engineered, who owns and controls them and the uses to which they are put.

To what extent should our lives be governed by digital systems? On what terms? That is the central political question of this century.

About The Berggruen Institute

The Berggruen Institute’s mission is to develop foundational ideas and shape political, economic, and social institutions for the 21st century. Providing critical analysis using an outwardly expansive and purposeful network, we bring together some of the best minds and most authoritative voices from across cultural and political boundaries to explore fundamental questions of our time. Our objective is enduring impact on the progress and direction of societies around the world.