Report

Berggruen Seminar Series: AI Ethics – Designing for Responsibility

The Berggruen Seminar Series recently held a public event at Peking University titled “AI Ethics: Designing for Responsibility.” The talk was delivered by Dr. Jeroen van den Hoven, a Dutch ethicist and professor of philosophy at Delft University of Technology, Editor-in-Chief of Ethics and Information Technology (Springer Nature), and Scientific Director of the Delft Design for Values Institute.

Against the backdrop of persistent ambivalence about how humans should engage with artificial intelligence, the event placed thematic emphasis on how responsible design should be an indispensable part of AI innovation.

Dr. van den Hoven began by highlighting a worldwide problem: regulations, principles, and standards for technology tend to lag behind its actual development and use. The conventional approach of “do first, ask for forgiveness later,” however, is too dangerous to apply to AI. Dr. van den Hoven believes that clear lines of responsibility must be drawn, and that the roles of ethicists and engineers are analogous in this regard.

Dr. van den Hoven’s argument is unambiguous: responsibility for any unintended or negative consequences of AI should fall not on the AI itself, but on the humans who design it. He offered two illustrations. First, Dr. Frankenstein: blame belongs not to Frankenstein’s monster, but to Dr. Frankenstein himself. Second, he likened the argument that “AI should be responsible for its crimes” to the 11th-century European Church convening courts to declare artifacts and objects guilty, and to 18th-century Europe doing the same for animals. He stressed that modern humans should not remain mired in such archaic thinking.

How, then, should society think about designing AI responsibly? Dr. van den Hoven believes we should move from repetitive speculation about “what if AI displays human qualities” toward controlling the design phase as much as possible. The famous trolley problem illustrates the difference in thinking between engineers and philosophers: an engineer would propose developing a braking system that prevents the trolley from ever reaching such a dangerous extreme, while a philosopher would doubtless be exasperated at the engineer for failing to understand the trolley problem as a thought experiment.

With the advent of smart cities, AI is being deployed en masse, raising the question of what Dr. van den Hoven calls “second-order responsibility”: holding people responsible for the responsibility of others. He demonstrated its importance by comparing AI development with nuclear power. Ensuring the safety of nuclear power in society is the responsibility not only of the chief scientists, but also of the logistics staff who ship the construction materials, the manufacturer of the Geiger counter, and even the security guards. He encouraged the audience to think about AI similarly, arguing that the algorithm is but a small part of the equation. Like nuclear power, the aviation, food, pharmaceutical, and water industries are heavily regulated; such standardization is contingent on understanding second-order responsibility.

Dr. van den Hoven then discussed the importance of normative context in properly defining AI responsibility. Because the uses of AI are so radically disparate (from medicine to weaponry), regulations must be both flexible and robust across the legal, ethical, and governance arenas.

He identified four primary “defenses” people invoke to escape accountability for their actions, before delving into how society should design AI so that responsibility cannot be eschewed. The first defense is knowledge: one cannot accept responsibility because one did not know. The second is control: one cannot accept responsibility because one had no control over the circumstances. The third is freedom and choice: one cannot accept responsibility because one had no other choice. The fourth is privacy and moral capacity: one cannot accept responsibility because one was not in a private setting and merely acted in the public interest. Dr. van den Hoven believes that AI should be designed such that human agency in these four domains is magnified.

First, on “knowledge,” Dr. van den Hoven pointed to the black box problem, in which how an AI system arrives at its outputs is not clearly understood, as a core reason why responsible AI design is essential for society. The epistemic insecurity that stems from the black box, however, should not end the discussion of AI accountability. Algorithms should not be treated as alchemy: engineers and ethicists should strive to keep human biases from causing undue impact on society (de-biasing), in domains such as image recognition and surveillance. Dr. van den Hoven also strongly believes that algorithmic recourse, designing AI so that the harmful decisions of existing models can be contested and reversed, should be a central consideration for reducing plausible deniability.

Second, in discussing “control,” Dr. van den Hoven cited accidents caused by self-driving cars and the crashes of Boeing 737 MAX models as a primary impetus for responsible AI design. The same logic used for shopping algorithms is being applied to lethal autonomous weaponry; meaningful human control must be established before such impactful technology is deployed. Regarding “freedom and choice,” he described the behavioral-psychology phenomenon of nudging, in which indirect suggestions and reinforcement mechanisms are used to influence the behavior and decision making of individuals and groups. AI can forcibly create echo chambers that appear innocuous at face value yet can influence entire economic cycles and political processes, as in the Cambridge Analytica case. Dr. van den Hoven believes that without responsible design, big data and machine learning will inevitably mean big nudging.

Last, on “privacy,” Dr. van den Hoven asserted that social media is now the strongest predictor of individual behavior and identity, even when compared with rigorous scientific approaches. With the rise of “surveillance capitalism,” privacy is being treated as an archaic relic in the new digital age, which makes responsible AI design all the more critical. Dr. van den Hoven believes that engineers and ethicists must work closely together to extract the massive social benefits of AI while protecting human agency in all four aforementioned areas of responsibility. He concluded by underscoring the importance of a multidisciplinary approach to responsible AI design, one that enhances human knowledge, control, freedom, and privacy.

About The Berggruen Institute

The Berggruen Institute’s mission is to develop foundational ideas and shape political, economic, and social institutions for the 21st century. Providing critical analysis using an outwardly expansive and purposeful network, we bring together some of the best minds and most authoritative voices from across cultural and political boundaries to explore fundamental questions of our time. Our objective is enduring impact on the progress and direction of societies around the world.