Ethics in Digital Governance: Forum on the Ethics of Artificial Intelligence from a Global Perspective

Co-hosted by the AI Ethics Committee of Chinese Association for Artificial Intelligence and the Berggruen Research Center at Peking University | June 6 | Hangzhou City and Virtual

Since 2017, governments and think tanks in many countries—as well as multinational tech firms such as Google—have put forward AI governance principles, among which “privacy,” “safety,” and “transparency” have come to be widely accepted. Meanwhile, the application of AI technologies is spreading at an unprecedented pace in society, affecting every corner of human life and the natural world.

How can we apply universally accepted governance principles at both the technical and cultural level? As technology continues to develop and shape human society, what are the new ethical issues that we will face? Digital governance has played a key role in COVID-19 response efforts. However, there are risks that personal information will be over-collected and misused. Where are the legal and cultural boundaries when it comes to privacy and safety?

To address these and other questions, on June 6, the AI Ethics Committee of the Chinese Association for Artificial Intelligence and the Berggruen Research Center at Peking University hosted the third annual “Ethics in Digital Governance: Forum on the Ethics of Artificial Intelligence from a Global Perspective” in Hangzhou as part of the “Global Artificial Intelligence Technology Conference.”

Panel I: It is Imperative to Implement Ethics and Digital Governance: Intelligent Technologies and Digital Governance

Drawing on concrete scenarios in the application of AI technologies, this panel addressed issues in the implementation of AI ethics; the use and misuse of AI to tackle major social problems; privacy protection in the big-data and post-COVID era; deeper applications of AI in future societies; and explorations in AI product development.

Panel II: Beyond Morality—AI Ethics and Human Society

This panel approached the ethical, social, and cultural connotations of AI technology from first principles to address the long-term challenges AI technology poses to humankind; technology ethics in a foundationally digital era; sociocultural implications of AI technologies; and the outlook for global digital governance.


Panel I: It is Imperative to Implement Ethics and Digital Governance: Intelligent Technologies and Digital Governance

1. How Can We Make Machine Learning Fairer?
Yao Xin, IEEE Fellow and professor at the Southern University of Science and Technology, has long been concerned with the ethical and social issues raised by technology. He approached the topic of AI ethics first from the perspective of technological research and development.

Ethics can be broadly divided into three categories: metaethics, normative ethics, and applied ethics. Technology ethics is a branch of applied ethics that discusses ethical issues in the process or consequences of applied technologies; major fields include nanoethics and information ethics. AI ethics is part of this field.

In the decade from 2010 to 2020, governments, academic institutions, and corporations across the world issued 101 guiding documents on AI. "Transparency" appears as a keyword in 86 of these documents, and "justice and fairness" in 82.

How can we ensure compliance with AI ethics? Professor Yao believes this can be achieved through technology, governance, and law. We have already seen examples of unfair AI practices: Google Photos mistakenly labelled humans as gorillas in 2015, and Amazon's recruitment algorithm caused an uproar in 2018 over its evident bias against women and other groups. In recent years, the fairness of machine learning has drawn widespread attention; over 100 peer-reviewed papers on the subject were published in 2020 alone, which suggests that fairness can be enhanced through technological means. Indicators for measuring fairness fall into five groups, based on: (1) predicted outcomes; (2) predicted and actual outcomes; (3) predicted probabilities and actual outcomes; (4) similarity; and (5) causal reasoning.

Consider the data a company uses in recruitment, which distinguishes between sensitive and non-sensitive attributes: years of work experience is non-sensitive, while gender is sensitive. Classifying data on the basis of sensitive attributes may produce unfair outcomes. Meanwhile, judgments of fairness face two challenges: the trade-off between a model's accuracy and its fairness, and discrepancies among different indicators of fairness. Lastly, Professor Yao argued that fairer machine learning can be achieved through multi-objective learning that stresses balance, reduces loss, diversifies understandings, and advances parallelism.
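The discrepancy among fairness indicators can be made concrete with a toy example. The sketch below is not from the talk; all data and function names are invented for illustration. It computes two common group-fairness metrics on the same hiring predictions and shows that a model can satisfy one while violating the other.

```python
# Toy illustration of two group-fairness indicators disagreeing.
# All data below is invented for demonstration purposes.

def demographic_parity_gap(y_pred, group):
    """Gap in positive-prediction (selection) rates between groups A and B."""
    rate = lambda g: sum(p for p, gr in zip(y_pred, group) if gr == g) / group.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(y_true, y_pred, group):
    """Gap in true-positive rates (recall) between groups A and B."""
    def tpr(g):
        # Predictions for the genuinely qualified members of group g.
        pos = [p for t, p, gr in zip(y_true, y_pred, group) if gr == g and t == 1]
        return sum(pos) / len(pos)
    return abs(tpr("A") - tpr("B"))

# Eight applicants: 1 = qualified / hired, 0 = not.
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [1, 1, 0, 0, 1, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0]

print(demographic_parity_gap(y_pred, group))        # 0.0  (equal selection rates)
print(equal_opportunity_gap(y_true, y_pred, group)) # ~0.33 (unequal recall)
```

Both groups are selected at the same rate, so demographic parity holds exactly, yet qualified applicants in group B are hired less often than those in group A. Which gap to minimize is itself a value judgment, which is one reason the accuracy-fairness trade-off has no purely technical resolution.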

2. AI Ethics in North America and Corporate Introspections
Liu Xiao, Berggruen fellow and assistant professor at McGill University, gave an overview of AI ethics in North America and discussed her thoughts on the practice of AI ethics.

At present, the major North American actors in AI ethics include governments, corporations, academic institutions, industry associations, and non-profit organizations. They cooperate closely on government procurement of AI technologies, applications of facial recognition, and the operations of corporate-sponsored non-profits. Most familiar to those in the AI industry is the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. To ensure that AI developers receive adequate training in ethics, the Initiative established the IEEE P7000 series and emphasizes geographical diversity in conceptions of ethics. Non-profit organizations include well-known international bodies such as the Centre for the Fourth Industrial Revolution, a San Francisco-based branch of the World Economic Forum that works with governments and corporations around the world to develop AI governance frameworks; Data & Society, a New York-based organization that studies issues in AI applications through anthropological fieldwork; and the Montreal AI Ethics Institute, which summarizes and analyzes publications on AI ethics. Partnership on AI, another San Francisco-based organization, has gained the support of several major tech corporations, while DataEthics4All focuses on youth outreach and education. There are also many research institutes affiliated with institutions of higher learning.

Today, corporations like Google, Microsoft, and IBM have published "responsible AI principles" and hired specialists to research and speak on the subject. However, much of the public remains skeptical: are these big companies truly motivated to put AI ethics into practice? Two recent incidents speak to these concerns. The first is Google's dismissal of Timnit Gebru, an AI ethics researcher at the company, over disagreements with management about a paper she wrote on natural-language processing, which did not pass Google's internal review process. The other is Karen Hao's article "How Facebook got addicted to spreading misinformation," published in MIT Technology Review, which argued that Facebook lacks sincere motivation to prevent its AI algorithms from recommending false content or extremist views, as doing so would contradict its business model of increasing and sustaining user engagement.

Corporate AI researchers therefore play a dual role. On one hand, they are advocates of AI ethics; on the other, they are employees who must consider their companies' culture and incentive structures. In September 2020, Data & Society published a report on corporate ethics that used the term "ethics owners" to describe corporate employees who research AI ethics and are responsible for ensuring that their companies' AI applications are reasonable and lawful. The report discusses their roles within corporations and the problems they face in their work, and offers recommendations such as establishing case databases and sharing experiences within the company or with other firms. It also calls on ethics owners to cooperate with civil-rights organizations and to consider performance-assessment methods that go beyond quantifiable standards. Recently, Partnership on AI published an article pointing out that most enterprises neither clearly define the functions of AI ethics nor have effective performance-assessment methods, and arguing that ethical principles should apply, and be enforced, throughout the entire product lifecycle. Clearly, many complex issues in AI ethics still lack established guidelines; for AI ethics to work, it must be embedded in actual corporate organizational culture.

Lastly, Professor Liu discussed some of her thoughts on the practice of AI ethics. She believes AI ethics should be paired closely with governance, and that we should explore diverse models of data governance to address problems in AI data and economic models. This would break the monopolization of data by big corporations, incubate more vibrant and innovative AI companies, and give users more control over their data. Regarding non-Western perspectives on AI ethics and governance, we should take into account geopolitics, the realities of economic development in non-Western countries, and the state of digitization and internet-technology adoption in emerging markets.

3. Personal Information Protection Law: Seeing the Human in the Algorithm
Xu Ke, associate professor at the University of International Business and Economics, discussed how algorithms can control human behavior and the legal implications of the Personal Information Protection Law.

Marx pointed out in The German Ideology that the human essence is not an abstraction inherent to each individual. Rather, it is a definite form of activity of these individuals, a definite form of expressing their lives, a definite mode of life on their part. As individuals express their lives, so they are alive.

Algorithms exercise control over people by affecting that expression of life.

We can divide AI control mechanisms into three steps:

The first step is profiling. By gathering and analyzing personal information, an algorithm analyzes and predicts the occupation, finances, health, education, personal preferences, credit, and behavior of an individual. This practice seeks to form a cache of personal information that can answer the question: “who are you?”

The second step is personalized recommendations; namely, algorithms determine how to best recommend content and present users with products and services (and their prices) based on personal information such as web browsing histories, hobbies, spending records, and habits. The algorithm caters to your tastes and imperceptibly further influences your preferences.

The third step is automated decision-making, where algorithms make decisions without human intervention. This can be divided into individual automated decision-making (such as photographing and fining drivers who run red lights) and automated decision-making based on personal profiles (such as health QR codes). Its impact is substantial: it can confer benefits, or it can deprive people of rights or curtail them.

Embedded in the Personal Information Protection Law are three types of legal responses to these attempts by AI to control human behavior.

Firstly, the legal response to personal profiling includes: ensuring personal informed consent; ensuring "minimization" (i.e., profiling must not be more detailed than necessary); prohibiting descriptions of personal characteristics that are obscene, pornographic, or related to gambling, superstition, terror, or violence, or that discriminate on the basis of nationality, race, religion, disability, or disease; and enforcing special protections for minors (for example, a complete ban on profiling minors).

Secondly, the legal response to personalized recommendations includes: ensuring algorithmic transparency, with clear labels distinguishing personalized from non-personalized recommendations; giving individuals the right to choose, by letting them opt into non-personalized recommendations or opt out of personalized ones; safeguarding against monopolies and unfair competition, by preventing companies with market dominance from abusing their power and from treating customers in similar circumstances differently without proper reason; and enforcing special protections for minors, including a prohibition on personalized recommendations aimed at them.

Finally, the legal response to automated decision-making includes risk assessments with respect to personal information to ascertain if the purposes and approaches in the processing of personal information and the degree of risk imposed on the individual are legal, appropriate, and necessary, and whether the safety measures are legal, effective, and responsibly consider all possible outcomes. In addition, people should be given the right to know, the right to an explanation (of algorithm outputs), the right to refuse, and the right to human intervention.

4. Practicing Governance with AI Ethics
Cao Jianfeng, senior research fellow at Tencent Research Institute, discussed approaches to AI ethics governance in the industry context.

Firstly, large corporations like Google, Facebook, IBM, and DeepMind have set up ethics committees to advance ethics research. Google has established a Responsible Innovation Team to carry out ethical reviews of its AI-related products and transactions and to implement its AI principles. Microsoft has formed seven teams under its AETHER Committee; issues are broken down into distinct components, which different working groups address through ethical reviews. Since the 2018 Cambridge Analytica incident, Facebook has also set up departments to conduct research on its algorithms. These institutions focus primarily on ethical reviews of technological applications within the company and of business collaborations with outside parties. The operations of these ethics committees can be viewed from four angles: first, establishing a community with diverse participants; second, establishing timely mechanisms for delivering results; third, establishing internal protocols, including for sensitive cases and tool development; and fourth, acting as a governance body and becoming a repository of institutional knowledge. These processes accumulate expertise relevant to potential legislation and governance approaches for ethical issues in AI and other technologies.

The second trend is the development of ethical tools to address problems of explainability, fairness, security, and privacy. For example, federated learning can address data-privacy issues by letting organizations develop algorithms together without sharing data. As for algorithmic transparency, current policies do not require disclosure of source code or training datasets, since that could create major risks such as cyberattacks and breaches of user privacy; nor would such disclosure, by itself, help the public understand an algorithm. The industry is instead exploring mechanisms like Model Cards and AI FactSheets, which focus on making AI models transparent and accessible. How can firms better achieve algorithmic transparency and provide comprehensible explanations of their models? Tools can be designed for this goal, ultimately attaining "ethics by design," just as with privacy by design.
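The privacy property of federated learning can be sketched in a few lines. The toy example below is hypothetical, not tied to any specific framework, and all data and names are invented: two parties jointly fit a shared one-parameter model, each computing a gradient step on its private data, with only the resulting weights sent to a server for averaging.

```python
# Minimal federated-averaging sketch (illustrative only): raw data never
# leaves its owner; only model weights are exchanged and averaged.

def local_step(weights, data, lr=0.1):
    """One gradient step of 1-D linear regression y ~ w*x on local data."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(updates):
    """Server-side aggregation: simple average of client weights."""
    return sum(updates) / len(updates)

# Two parties hold private datasets drawn from y = 2x (never shared).
party_a = [(1.0, 2.0), (2.0, 4.0)]
party_b = [(3.0, 6.0), (4.0, 8.0)]

w = 0.0
for _ in range(50):  # communication rounds
    w = federated_average([local_step(w, party_a), local_step(w, party_b)])

print(round(w, 2))  # prints 2.0
```

The raw (x, y) pairs never cross the organizational boundary; only the scalar weight does, which is the privacy property at issue. Production systems layer secure aggregation and differential privacy on top of this basic loop.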

The third trend is building ethics software-as-a-service (SaaS): providing ethics services through the cloud, algorithm platforms, and other channels. "Trusted AI" and "ethics as a service" are among the latest trends in the AI industry, with AI-ethics startups constantly emerging to build such service systems and supply technological solutions to problems in AI ethics, contributing to the realization of trusted AI.

The fourth is the formulation of AI ethics standards and promoting ethics certification, such as the AI Ethics Certification Program launched by IEEE.

The fifth is developing AI ethics curricula to enhance ethics training. For example, Google has developed ethics classes and training programs for their employees, including AI ethics education materials and courses, high-level ethics courses, deep training to solve issues in AI principles, technology ethics training, and training in the practice of AI principles. Businesses should strengthen collaborations with academia to develop stronger foundations in AI ethics for their employees.

Lastly, AI governance cannot be achieved without collaboration between technology and the humanities. The intersection of technology and ethics, attempted by initiatives such as ethics by design and ethics SaaS, is only one area of cooperation; legal enforcement is another. A balance must be struck among privacy, data use, transparency, explainability, efficiency, and safety.

5. Can AI Deliver the Changes Humans Need?
Professor Chen Xiaoping, director of the Robotics Laboratory at the University of Science and Technology of China, opened his remarks by discussing issues in AI ethics and governance. He argued that the changes AI brings to humanity are not determined by AI applications alone but by multiple factors in human society; in particular, those applications depend on the support of innovation systems.

Ethics, broadly speaking, refers to standards for personal conduct, namely the obligations that a person has toward society and others. The role of ethics is twofold: it tells us what should be done and what should not be done. Similarly, the role of AI ethics should also be twofold: it should specify what AI should and should not do.

The basic problem in AI ethics and governance concerns the fundamental ethical demands human beings make of AI. According to Chen Xiaoping, the better question is: can AI bring about the changes that humanity needs?

For this discussion, we first need to distinguish between "research" and "application." Technological application is the practical application of knowledge, while technological research is the scientific study of the mechanical arts and applied sciences. For example, the study of deep learning algorithms is a form of technological research, since the algorithms themselves are not end products usable by ordinary users. Neural networks, by contrast, which are trained by deep learning algorithms on data from applied fields, are products that can provide services to end users; product R&D is thus a form of technological application.

Based on this classification, we can make a basic judgment that AI technological research and its direct outcomes are usually value-neutral, while AI technological applications (R&D and products) are not. Therefore, we should advocate for value-neutrality to be the basic ethical principle of AI research. Accordingly, we should carry out category-based regulation of AI research and applications.

The changes brought about by AI are not determined entirely by its applications; these applications rely on support from innovation systems. The Schumpeterian innovation system, currently implemented around the world, is a commercialized combination of market factors. It is efficient where commercial benefits are sufficient, generating tax revenue and satisfying user needs, and it imposes no other ethical constraints. This gives it certain limitations: it is ill-suited to social needs where commercial benefits are inadequate, such as elderly care amid population aging and flexible, long-term employment. It may also produce negative effects like digital divides, class solidification, and a "low-desire society." The Schumpeterian innovation system therefore raises more numerous and more severe ethical issues than the technology itself does.

Adding ethical constraints to Schumpeter’s system can solve simple ethical problems like algorithm fairness and data security, but doing so has no effect on deep ethical problems related to inherent conflicts between commerce and ethics. The real solution is to achieve an “upgrade” from Schumpeterian innovation to innovation for the sake of public justice. That form of public justice means combining market and non-market factors in a way that is just. Here, “just” refers to a refinement, integration, and upgrading of effective elements in the market and human nature—creating a larger innovation space to realize the changes needed by humankind.


Panel II: Beyond Morality—AI Ethics and Human Society

1. Artificial Intelligence and Human Nature
According to Professor Zhu Jing, dean of the College of Humanities of Xiamen University, reflections on human nature have long been part of philosophical inquiry. For example, Mencius believed that human nature is inherently good and humans are born with innate moral inclinations toward the four virtues of benevolence, righteousness, propriety, and wisdom. Conscience is universal to human beings. In contrast, Gaozi, one of his contemporaries, held that “human nature has neither good nor bad allotted to it, just as water has neither east nor west allotted to it.” Xunzi argued that human nature was inherently “bad,” but that man can “transform his nature and initiate conscious activity” and thus strive for goodness.

There are also many debates on human nature in the Western philosophical tradition. For example, Hobbes believed humans are selfish and combative by nature, which is why powerful external restraints are needed. Rousseau believed that humans are by nature pure and unsophisticated, and that all conflicts and competition are caused by society. Hume thought sociability was part of human nature, which encompassed elements of goodness and sympathy—as well as selfishness and conflict.

Debates on human nature are still ongoing. Philosophers like Foucault, Chomsky, Habermas, and Isaiah Berlin have all written on the topic. Modern sciences, including biology, psychology, the social sciences, genetics, neuroscience, and bioengineering, have moved inquiries into human nature beyond pure philosophical speculation and into a transdisciplinary field of study.

So, what is human nature? According to Professor Zhu, human nature is a unique and universal biological and psychological mechanism possessed by humanity, which has corresponding modes of behavior. Human nature is stable but also evolvable and malleable. The molding of human nature is affected by both natural evolution and culture. The sublimation of human nature means enhancing the good and discarding the evil in pursuit of civilizational progress.

Professor Zhu also argued that the development, application, and governance of AI should rest on a deeper, well-rounded understanding of human nature, as AI is, after all, a projection and extension of it. A good-oriented AI should help overcome the flaws of human nature. He explained that public perceptions of AI tend to over-anthropomorphize it, a practice that may share cognitive and sociopsychological roots with religion and with the human tendency to embrace conspiracy theories, itself a manifestation of human nature. Furthermore, social-psychology research finds that people tend to associate with those who share their views and attack those who do not, naturally applying different standards and attitudes to those they perceive as different.

So, can AI help to overcome the defects of human nature and reduce the segregation and conflicts caused by innate biases, prejudice, and distrust, to build a global community with a shared future? This question is worth asking in this era of globalization.

2. AI and The Human: On the Insufficiency of Ethics, Or on the Difference Between Ethical and Philosophical/Conceptual Problems
How does AI threaten our humanity? Why can AI not be understood within the framework of the current human-nature-technology differentiation? By carefully analyzing the challenges machine learning poses to human exceptionalism, Tobias Rees, director of the Berggruen Institute's Transformations of the Human program, addresses the insufficiency of ethical inquiry into AI and the human in his talk.

To begin with, Tobias defines an ethical problem as one that emerges when our sense of right and wrong is violated. The goal is to politically reinforce our norms and ensure they are no longer violated; in that sense, ethical problems are always also political or legal challenges. Philosophical or conceptual problems, by contrast, occur when the way we have been thinking no longer works: we cannot continue to think as we did before new realities emerged, because our old conceptual assumptions fail or new insights render them inadequate.

Tobias's thesis is that AI is first of all a philosophical-conceptual problem. AI, in the form of machine learning (ML), undermines the conceptual presuppositions that have organized the space of possibility within which we have thus far been thinking, especially when it comes to the human. The consequence is that ethical norms meant to provide a regulatory framework for AI may miss their target.

From his perspective as a historian of thought, the human is not a given but a concept of recent origin. Rees quotes Lévi-Strauss: "we know that the concept of humanity as covering all forms of the human species, irrespective of race or civilization, came into being very late in history and is by no means widespread." For tens of thousands of years, no one attempted to articulate a general, all-inclusive concept of the human covering all people who ever lived.

The human therefore had to be invented, and this happened in Europe in the early 17th century. Scholars of the time held that humans were more than nature and irreducibly different from mere mechanisms. Their criterion was reason (intelligence): humans have it, while nature (animals, plants) may have instinct or sentience but not intelligence or reason, and machines are just mechanisms. This can be called a great event of differentiation, separating humans from nature and from machines. The birth of human exceptionalism gave rise to a world composed of three ontological realms: human things, natural things, and technical things. The human is thus a conceptualization of what humans are (more than nature and other than machines), of what nature is (the realm of origins, the non-technical), and of what technology is (the secondary and artificial).

Tobias argues that the emergence of ML has greatly disrupted the human-nature-technology differentiation. Traditionally, only humans were believed to possess the intelligence to manipulate symbols. With ML, however, scientists have discovered that neural architectures can exist in many forms: intelligence can occur without a cerebral cortex, a central nervous system, or any other feature unique to humans. ML thus undoes the exclusive link between the human and intelligence and transforms intelligence into intelligences, a family with many members: humans, animals, plants, machines. This series cuts across, and thereby undermines, the ontology of the human. Because the human is also a conceptualization of what nature and technology are, ML is a philosophical event of enormous proportions. It signals one possible end of human exceptionalism and offers a massive opportunity for East and West to think together.

As for the ethical and political stakes of ML, traditional critiques of AI seem to assume that only humans have ethics and politics. Almost every ethical and political critique of ML that Tobias knows of relies on the classical modern concept of politics, which is deeply indebted to the human. Indeed, it seems as if ethical problems occur precisely where the old ontology differentiating humans from nature and technology is called into question.

Tobias thinks this genuine philosophical research question is one of the gifts brought to us by machines that think and learn. He calls on philosophers and ethicists to think further about whether there are ways to address the ethical challenges posed by AI that neither rely on nor re-inscribe the old format of human exceptionalism, that do not go against the new realities produced by AI but are actually conducive to them, and that can help us to build a better world, after the Western modern epoch of the human.

3. Ethics and the Risks of Intelligent Technology: The Algorithmic Threat to Freedom of Attention
Presentation given by Peter D. Hershock, East-West Center

Big data and intelligent technology are transforming the human-technology-world relationship in ways that are complex, recursive, and minimally governed. Considerable attention has been directed toward superintelligence and the risks of human obsolescence or extinction. But long before such a technological singularity becomes scientifically possible, Peter notes, intelligent technology that is fueled by big data is likely to precipitate our ill-prepared arrival at an ethical singularity: the point at which the opportunity space for further human course correction collapses.

Before analyzing how to avoid such a collapse, Peter presents some conceptual preliminaries for understanding risk, technology, and their relations.

Risks are not naturally occurring phenomena that can be located precisely in time and space. Unlike actual dangers, risks are virtual dangers that may or may not congeal into events that prevent us from realizing the futures we want. As such, there is an unavoidably subjective dimension to any calculation of risk, and risk assessments are thus fundamentally value-laden and perspective-dependent.

Technologies are often identified with tools. But tools are localizable things that have been designed and manufactured to extend or augment human capacities. With tools, individuals enjoy “exit rights.” We can simply choose not to use them. Technologies are relational systems of material and conceptual practices that embody and deploy both strategic and normative values. In actuality, we neither build nor use technologies. They are “environments” in which we participate. This distinction enables the realization that accidents of design and misuse by design, which are the focus of most AI ethics guidelines, are, in fact, tool risks. Technological risks are structural and relational.

To further illustrate his point, Peter shows that claiming “guns don’t kill; people do” is a purposeful misdirection of critical attention toward tools and their users and away from weapons technology, which is a system of relational dynamics and practices aimed at scaling and structuring human intentions to inflict harm with minimal vulnerability and maximal power. While we can choose not to use guns personally, none of us has “exit rights” from the ways that weapons technology affects social and political dynamics. The overwhelming emphasis in AI ethics on establishing technically-viable design goals and legally-viable use standards for “smart tools” has similarly had the effect of fostering inattention to the structural and relational risks that arise when technologies emerge in alignment with uncoordinated or conflicting values.

The tool/technology distinction also allows for clearly distinguishing between the risks associated with what data is used for and the risks associated systemically with how data is acquired. For Peter, the latter is more concerning. The evolutionary successes of machine learning have been proportional to the expansion of a digital infrastructure that maximizes connectivity throughput, continuously deriving data from digital search, social life, and commerce—data that affords insight into individual human desires, fears, values, and intentions. The systematic attraction and exploitation of attention is simultaneously increasing the precision with which algorithmic intelligences predict what we can be induced to acquire, do, seek, and avoid. According to Peter, the attention-nourished epistemological power of intelligent technology is also a kind of ontological power. They are powers to transform the human experience from the inside out.

For our human minds, the basic work is not discovering what things are, but rather discerning how things are changing. One dimension of this is forecasting: accurately predicting how things are changing. A second is preparedness: attentively imagining relational likelihoods to enhance adaptability. Finally, anticipation can consist of affective concern about the interplay of what could and what should occur: determining what matters most, and seeking and securing it. Algorithmically engineered attention capture is conducive to the atrophy of all three dimensions of the anticipatory mind. As morally freighted decisions are outsourced to AI tools and as the digital tailoring of the human experience makes it increasingly hard to make and learn from our own mistakes, we are at risk of losing our most basic freedom: freedom of attention.

Robust codes of professional conduct and legal institutions may suffice to mitigate the now-anticipated risks of poorly designed or misused intelligent tools. But mitigating the structural and relational risks of intelligent technology will require both ethical intelligence and ethical diversity. In his closing remarks, Peter argued that we need a global ethical ecosystem sustained by intercultural and intergenerational commitments to equity-enhancing deliberation and improvisation. Nothing less than the future of ethical human agency, he believes, depends upon it.

4. Apprehension under the gaze of data: Sociotechnical imaginaries and ethical strategies for artificial intelligence
Professor Duan Weiwen, Berggruen Fellow and researcher at the Institute of Philosophy, CASS, led a discussion titled “People Under the Gaze of Data.” The phrase “gaze of data” implies that, in some sense, we are data. Records and digital traces transform our behaviors into data, such that we become measurable new objects of knowledge in the Foucauldian sense. However, not everyone has the right to gaze at and measure others. Most people have little idea how data-based power operations take place or where the data comes from.

Firstly, he discussed how surveillance capitalism generates and exploits user behavior data. Shoshana Zuboff, author of The Age of Surveillance Capitalism, argues that surveillance capitalists such as Google generate behavioral data by studying and gathering the behavior and characteristics of users. These data are then transformed into “behavioral surplus” to gain insights about, and even to influence, users in order to generate profit. As a result, Google gives little thought to the content it provides, as it is primarily focused on mining and calculating user data.

Intentions behind this calculation notwithstanding, it appears to be related to the philosophical question of how we can understand “the other.” However, when guessing what goes on in another person’s mind, we often have no need for such philosophical argumentation. Entities that wield data, such as data platforms, are actually gauging users’ minds. Their approaches are more akin to Hans Vaihinger’s “as if” philosophy, which emphasizes that we form views of the world not to create a facsimile of reality, but to make it easier to navigate with the general conceptions we acquire. When a specific subject believes that they may encounter unknown risks, they will take preemptive measures within their power to gain control over future uncertainties.

Professor Duan Weiwen then turned to the “soft biopolitics under the gaze of data” and put forward the concept of the “data scar.” Data is our new “skin”: every individual leaves traces of data behind after contact with places or people, and unfavorable or suspicious data is noticed once those traces are left. Such traces cannot be removed, making them scars on our “data skin.”

Duan Weiwen believes that we are now in what Gilles Deleuze has described as the “society of control” rather than Foucault’s “disciplinary society.” Modes of social administration and governance are transitioning from hard biopolitics to soft biopolitics. For example, from a data perspective, if there are more males than females who like watching Argentinian soccer games, data systems will probably believe a user is male if his or her data indicates that he or she likes to watch Argentinian soccer. Regardless of the user’s actual gender, the system will label their data signature as male.

While operating within soft biopolitics, we should pay attention to the fundamental structural changes arising from the role that AI plays in data insights and algorithmic decision-making. For example, some people worry that their data—for instance, data from their phone calls or chat histories—will be exposed. In reality, data insights mostly use metadata like the time, length, and geographical location of phone calls, which AI can deal with easily. Similarly, the real challenge presented by social media platforms like Facebook is not their influence on specific individuals, but rather that these new platforms may discover mechanisms that allow them to effectively control human emotion and public opinion.

The disruption that arises from technological development is in fact the “cultural lag” proposed by sociologist William Ogburn. Cultural lag occurs when material culture develops too quickly, resulting in a relative lag in the non-material culture that regulates material culture. These elements of non-material culture are also known as adaptive culture, which includes the technologies, religions, ethics, laws, beliefs, and other social components that regulate material culture. When elements of material culture—such as the “intelligent data surveillance culture” driven by data intelligence—develop too rapidly because of advances in technology, there will be conflicts between values, ethical decision-making, and laws. These conflicts are especially likely if the prevailing ethics and laws cannot keep up and the gap between the two kinds of culture continues to widen.

Therefore, humanistic philosophy and the ethics of science and technology should focus on whether harm to humans can be mitigated before technology, laws, and ethics reach a new normal. For example, could we have avoided the tragedy caused by the lack of trust between a ride-hailing driver and the female passenger who leapt out of his vehicle? We should seek a deeper understanding of the changes in material culture caused by disruptive technologies such as artificial intelligence.

Professor Duan concluded by arguing that, given the prospect of a future society that will become ever more permeated by intelligent technology, the time has come for discussions on certain fundamental problems. These include structural topics that may lead to fierce debates. For example, can we comprehensively apply “possibility management”—wherein we decide people’s current and future opportunities based on prior data indicators? These problems tend to be complicated, with no answers available from current legal, ethical, or social frameworks. Taking the data security disputes surrounding Tesla as an example, the development of AI-aided driving and autonomous driving technology in the future will be based on information surveillance. This will require real-time collection and surveillance of highly specific data on road environments and conditions inside vehicles. But current laws and ethics are far from equipped to handle the challenges to data security and privacy raised by these practices. If it is certain that the society of the future will be one of data and AI surveillance, we will need to find a new social contract to strike a new balance between technological progress and risk management.

Professor Duan advocates for confronting the anxieties in ethics, laws, and social culture that exist under the gaze of data. We should use the resulting apprehension to influence sociotechnical imaginaries of the future and encourage every person to think about what sort of future he or she wants. In this process, we need to fully utilize the empirical knowledge and wisdom acquired through the social application of technology.

Professor Duan argued that the most fundamental wisdom lies in management and governance empowered by technological progress, which may be able to address many human and social issues more efficiently. Pure reliance on technology and appeals to technological solutionism, however, will not work. While embracing the higher living standards brought about by progress in advanced technologies like AI, we must stay vigilant against the “machine bureaucracies” and techno-solutionism that may arise.