On the afternoon of May 7, 2021, the Berggruen Research Center at Peking University held the first workshop in its series, “Living with Machines: Future Perspectives and Analysis.” The event featured three guests, Ge Jianqiao, Lu Qiaoying, and Zeng Yi, who discussed possible means for evaluating human and non-human cognitive and affective capacities. Topics included both relevant theoretical models and practical strategies for the lab and daily life. Ge Jianqiao explored the differences among the cognitive neural strategies used to understand human and artificial intelligence. Lu Qiaoying discussed the relationship between individuality and minimal cognition from an evolutionary perspective. Zeng Yi spoke about brain-inspired, self-conscious AI models, their development blueprints, and current research progress. Finally, special guests Duan Weiwen and Li Wenxin concluded the event in a conversation with the three speakers about these issues.
Ge Jianqiao pointed out that humans use different cognitive neural strategies to understand human intelligence and artificial intelligence. As social animals, humans are highly sensitive to the behaviors of other life forms. The mere presence of another person will lead to a change in a person’s behavior, but the presence of machines or robots does not affect behavior in the same way. For instance, an experiment by Kilner et al. in 2003 found that subjects’ movements were disrupted only when they observed another person making incongruent movements. In the control conditions, behavior was largely unaffected when the other “person” was a robot, or when the subject’s movement was congruent with the other person’s.
fMRI brain scans have shown that the medial prefrontal cortex (MPFC), the “mind-reading” part of the brain, is activated only when subjects observe other human beings (rather than machines, cartoons, or animals). Inspired by this, an experiment conducted by Ge and her colleagues in 2008 examined four functions associated with the MPFC: mental inference (MI), deductive reasoning (DR), perspective taking (PT), and perception (PC). The experiment compared subjects’ brain activity when attempting to understand the reasoning of a human being versus that of a machine. The researchers found that human brains inhibit self-related processing in order to correctly understand another person’s beliefs or perspectives, but no similar activity occurs when we process or interpret the activity of an AI. In terms of individual differences, people with higher PT and self-inhibition capacities understand other people more effectively; these capacities, however, were not important in understanding AI.
It seems that human brains have evolved special mechanisms for understanding other people, but these mechanisms are not activated when we try to understand artificial intelligence. Thus we might infer that Ke Jie, a professional Go player, uses different strategies when playing against AI than when playing against human players. In addition, this phenomenon may explain why some people with autism spectrum disorders, which are characterized by inhibited PT, interact easily with machines while struggling to communicate with other people: a high capacity for PT is an essential component of understanding other humans, but it is not required for interacting with computers.
This raises two interesting questions: when, and to what extent, do people regard a machine as a machine? We would regard the humanoid robot Sophia as a machine, but would the same apply to robots of the future that bear a greater resemblance to human beings? Experiments to date have shown that the brains of 10-year-old children do not yet distinguish between machines and humans in this regard. When does the human brain become aware of the differences between human and other forms of intelligence?
More interesting questions arise when we think about the future. At present, we have the fields of social psychology and social neuroscience. What fields of research will arise in a future hybrid society comprising humans, robots and artificial intelligence, and other animals? Obviously, our children are already growing up in this emergent hybrid society. According to the social brain hypothesis, the human brain evolved to process social information, and its last “genetic update” took place about 5,800 years ago. How will the future evolution of the human brain be shaped and driven by a hybrid society?
Lu Qiaoying analyzed the relationship between individuality and minimal cognition from an evolutionary perspective.
In terms of methodology, inquiries into cognition can usually be divided into two approaches: anthropogenic and biogenic. The anthropogenic approach starts with human cognition and extends toward general cognitive concepts. This traditional approach is still predominant in today’s AI research. The biogenic approach starts from the basic activities of life and gradually extends toward human cognition. Peter Godfrey-Smith’s work is representative of recent advances in this approach. Traditional philosophy pays particular attention to the “explanatory gap,” which refers to the difficulty of explaining how a particular physical system can produce a first-person point of view or subjective experience. The biogenic approach treats all cognitive phenomena as biological phenomena, which allows it to bypass the explanatory gap and attempt to explain the evolution of subjectivity from a third-person perspective.
Subjectivity in a system implies that it has a true “internal-to-external” point of view, which requires the system to have a boundary that delineates internal elements from external elements. It is generally believed that biological individuality, the ability of living entities to distinguish between self and other, is a characteristic of life. Lu Qiaoying defined an individual as “any connected collection of matter [that] has a location and an input-output profile,” and argued that minimal cognition arises with the emergence of individual organisms.
Take the chemotaxis of the unicellular organism E. coli as an example. An E. coli bacterium can control its movements to adapt to its environment, and Lu Qiaoying argued that this is an example of minimal cognition. In achieving chemotaxis, sensing is the input and motion is the output, which reflects the individuality of the unicellular organism. When multiple cells come together to form a multi-cellular aggregate, the first-person point of view or subjectivity needs to be realized through different means. Furthermore, it is not easy to identify the boundaries between individuals in some multi-cellular aggregates; the most typical examples are bee colonies and symbiotic gut bacteria. Lu explained further that biological individuality is not delineated by exact physical boundaries; rather, the individual constantly reinforces the system’s boundary through dynamic cognitive functions. In other words, biological individuals should be regarded as functional individuals.
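To make this concrete, the following is a minimal sketch, not drawn from the talk itself, of how an individual in Lu’s sense, a connected system with a location and an input-output profile, might be modeled computationally. It implements run-and-tumble chemotaxis on a hypothetical one-dimensional attractant gradient; the class, the attractant function, and all parameter values are illustrative assumptions.

```python
import random

def attractant(x: float) -> float:
    """Hypothetical attractant concentration, peaking at x = 0."""
    return -abs(x)

class MinimalIndividual:
    """A located system with an input-output (sense -> act) profile."""

    def __init__(self, x: float):
        self.x = x                                   # a location
        self.direction = random.choice([-1.0, 1.0])  # current heading
        self.last_reading = attractant(x)            # memory of the last input

    def sense(self) -> float:
        """Input: the local attractant concentration."""
        return attractant(self.x)

    def act(self, reading: float) -> None:
        """Output: keep running while conditions improve; tumble otherwise."""
        if reading < self.last_reading:              # the gradient got worse
            self.direction = random.choice([-1.0, 1.0])  # tumble: new heading
        self.x += 0.1 * self.direction               # run
        self.last_reading = reading

    def step(self) -> None:
        self.act(self.sense())                       # one sense -> act cycle

agent = MinimalIndividual(x=5.0)
for _ in range(500):
    agent.step()
print(f"final position: {agent.x:+.2f}")             # tends to hover near 0
```

The point of the sketch is that nothing beyond a location and a sense-to-act loop is needed for gradient-climbing behavior to emerge, which is why such a system can plausibly be treated as a minimal cognizer.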
Lu Qiaoying’s ideas on biological individuality and minimal cognition also have consequences for machine cognition. In biology, for any biological trait, we can ask two types of questions: “why” and “how.” “Why” relates to functional explanations (grounded in evolutionary history), and “how” relates to mechanistic explanations (how the trait works). Because the same function can in principle be achieved by different structures, and because individuality is a functional definition, Lu believes that we cannot rule out the possibility that a machine has true subjectivity or a first-person point of view, even if that subjectivity may be very different from the forms familiar to us.
Subsequently, Zeng Yi discussed brain-inspired artificial general intelligence and the concept of human-machine coexistence behind it.
In reality, there are risks in the current approach of developing artificial intelligence through behavioral simulation. The outward behavior of biological organisms and the mechanisms behind that behavior may be very different, so remaining at the stage of behavioral simulation is not an effective means of achieving true AI. Machine learning is not equivalent to AI: intelligence needs to evolve in an interactive environment, and attending only to the machine “learning” process is far from adequate. Humans gained their present intelligence through adaptation to changing environments over eons of evolution, but machines lack this self-adaptive capacity. AI achieved through machine learning may exceed human capabilities in certain respects, but it would create huge risks in human-machine interaction if developed into artificial general intelligence, because such AI does not understand the true intentions of human beings. In China, AI is currently often regarded as a tool for humans; in Japan, many people regard AI as a possible partner; and in many science-fiction novels, AI is cast as a competitor or enemy. One problem here is that AI as tool and AI as partner should rest on different theories and algorithms. In reality, the algorithms used in Japan are similar to those in the West insofar as they treat machines as tools, whereas an AI designed to be a partner would need to be based on a capacity to “understand” the “self.”
The generality of AI is useful, but true artificial general intelligence is not necessary and may even be dangerous; human-like negative emotions, for example, would pose a major risk to human society. Furthermore, the standard currently used to distinguish between different AIs is often “generality,” but this should not be the case. Instead, distinctions should be based on the relationship that an AI has with other AIs or with human beings: whether or not an AI has “selfhood” and “empathy” should be an important marker in distinguishing between different levels of artificial intelligence. Zeng Yi also discussed AI reflexivity, a concept proposed by Zhao Tingyang. Zeng expressed the worry that reflexive AIs have already begun to emerge in fields such as the autonomous evolution of network structures and the automatic generation of machine languages, even though the AI tech community is barely prepared for the risks involved.
In addition, Zeng Yi argued that AI with self-awareness and empathy will not create more risks for human beings. Instead, the real threat lies in AIs that lack self-awareness and empathy and therefore cannot generate “true understanding.” The development of brain-inspired self-conscious AI should progress through a sequence of stages: the possession of physical perception, the acquisition of self-experience, the acquisition of motivation and values, the generation of personal goals and wishes, the distinction between self and others, and finally the ability to regard other subjects as similar to oneself and the emergence of empathy toward others. This is a process whereby a “self” progresses from an objectified “me” to a subjective “I”: from having a set of ideas about oneself to having the experience of being a self.
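Read schematically, this blueprint is an ordered ladder of capabilities in which each stage presupposes the ones before it. The toy sketch below is our illustration of that ordering, not code from Zeng’s lab; the stage names and the gating rule are assumptions made for clarity.

```python
from enum import IntEnum

class Stage(IntEnum):
    """The developmental ladder, as we read Zeng's blueprint (hypothetical)."""
    PHYSICAL_PERCEPTION    = 1  # possessing physical perception
    SELF_EXPERIENCE        = 2  # acquiring self-experience
    MOTIVATION_AND_VALUES  = 3  # acquiring motivation and values
    GOALS_AND_WISHES       = 4  # generating personal goals and wishes
    SELF_OTHER_DISTINCTION = 5  # distinguishing self from others
    OTHERS_AS_SELVES       = 6  # regarding other subjects as similar to oneself
    EMPATHY                = 7  # empathy toward others emerges

def can_reach(achieved: set, target: Stage) -> bool:
    """A stage is reachable only if every earlier stage is already achieved."""
    return all(s in achieved for s in Stage if s < target)

achieved = {Stage.PHYSICAL_PERCEPTION, Stage.SELF_EXPERIENCE}
print(can_reach(achieved, Stage.MOTIVATION_AND_VALUES))  # True: next rung
print(can_reach(achieved, Stage.EMPATHY))                # False: rungs missing
```

The gating rule encodes the claim that empathy cannot be bolted on directly: it presupposes a self that has already been built up through the earlier stages.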
In a study by Zeng Yi’s research team, robots passed the mirror test, exhibiting seemingly self-aware behavior. However, Zeng does not believe this implies that the robots are self-aware. Admittedly, this creates a challenge for cognitive science: passing the mirror test is no longer the gold standard for determining self-awareness. Still, the result can be regarded as useful feedback for neuroscience.
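One common way to operationalize a robot mirror test is sensorimotor contingency detection: the robot checks whether the motion it sees tracks its own motor commands. The sketch below illustrates that idea only; the noise model, the threshold, and all names are hypothetical and do not describe the actual protocol used by Zeng’s team.

```python
import random

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Self-generated motor commands, and two candidate observations:
commands = [random.uniform(-1, 1) for _ in range(200)]
mirror   = [c + random.gauss(0, 0.05) for c in commands]  # tracks own commands
other    = [random.uniform(-1, 1) for _ in range(200)]    # another robot moving

THRESHOLD = 0.9  # hypothetical decision threshold
print("mirror image is me:", correlation(commands, mirror) > THRESHOLD)  # True
print("other robot is me: ", correlation(commands, other) > THRESHOLD)   # False
```

A robot can pass such a test by exploiting a simple statistical contingency, which is exactly why Zeng cautions that passing it need not imply self-awareness.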
Zeng Yi said that his lab hopes to explore the origin of intelligence through research on brain-inspired artificial intelligence, discover the principles behind the evolution of intelligence, and predict and realize the future evolution of intelligence. Altruism has become a vital element of this development process. Zeng reported that his lab has preliminarily achieved altruistic behavior in AI entities through brain-inspired models of self-experience and cognitive empathy.
Finally, Zeng Yi proposed that future superintelligence will likely be achieved through the humanization of machines or the mechanization of humans, moving from the questions of “who they are” and “who we are” to jointly answering the question of “who the new us will become.” How we treat future robots with selfhood and empathy will become highly important. Will scientists treat the robots they develop like human infants? Perhaps symbiosis between humans and AI can be achieved only through harmony between human and machine. We look forward to a future where humans, animals, and AI can live together in a sustainable, symbiotic society.