Living With Machines: Future Perspectives and Analysis

On October 27th, 2021, the Berggruen Research Center at Peking University held the second workshop in its series, “Living with Machines: Future Perspectives and Analysis.” The event featured three guests: Sebastian Sunday Grève, Assistant Professor at the Department of Philosophy at Peking University; Dr. Hao Jingfang, novelist and researcher; and Wu Tianyue, Tenured Associate Professor at the Department of Philosophy at Peking University. Each speaker discussed questions related to the future of machine learning, consciousness, and morality. Professor Sunday Grève explored whether machines can ever be conscious; Dr. Hao discussed the possibility of digital personalities and other imaginings of the future; and Professor Wu delved into the connection between machines and personhood.

Sunday Grève began his opening remarks by drawing the audience’s attention to the event poster: a gigantic “crying” robot, which he found fitting for both the theme of the event and the title of his presentation, “Can Machines Be Conscious?” He then quoted Alan Turing, the father of modern computer science:

“Can machines think?… I believe that at the end of the century, the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

AI technology has now developed to the point that we are already discussing the notion of thinking machines. Sunday Grève invited the audience first to “understand the question” by trying to define “machine” and “consciousness.” There are many definitions of a machine, depending on whether one approaches the term from science, philosophy, or another discipline. Turing gave his in terms of what was, in the end, a new technology: the electronic digital computer that underlies all the computing technology we use today. Consciousness, by contrast, Sunday Grève asked us to approach simply through our familiarity with the phenomenon. Consciousness seems entirely familiar, and yet we struggle to explain it: we can touch, feel, and experience things, an exceptional subjective quality that perhaps even the best possible machine or robot lacks.

Such a pre-theoretical understanding is enough to proceed to the second step: looking at science and engineering. Sunday Grève presented a promising result, “prosthetic limbs that are bidirectionally connected, thus enabling enhanced motor control and perception,” and derived three assumptions from this concrete example of an “agonist-antagonist myoneural interface.” The interface succeeds in restoring the connection between two complementary muscles that work in tandem to help a person feel and move their body, and thus provides a direct link between motor control and perception. The three assumptions are:

  1. The nervous system is constitutive of consciousness. If you lose your nervous system, you lose your consciousness.
  2. Conscious states include limb-based sensory experiences; for example, one knows when one is moving one’s hands or legs, or when one is touching a computer.
  3. As the agonist-antagonist myoneural interface demonstrates, conscious states can sometimes include prosthetic limb-based sensory experiences.

From these assumptions, Sunday Grève reached the preliminary conclusion that “a machine will be partly constitutive of an individual’s consciousness if their conscious states include prosthetic limb-based sensory experience.” On the basis of this general principle, he proposed a thought experiment:

A hundred years from now, there has been steady technological progress. At some point in this period, Booboo, while still young, began to suffer from a new disease of the nervous system. Booboo has been lucky, though, insofar as new implant surgery has been available to her whenever she needed it. She has been lucky too in that the intervals between surgeries have been long enough that new parts of her nervous system could always be properly integrated – thanks to various kinds of therapy as well as her system’s continued neuroplasticity – before another part needed to be replaced.

Suppose we can continuously restore and replace parts of the nervous system, and suppose the disease moves from Booboo’s periphery, inching ever closer to her central nervous system. The approach here is not to engineer consciousness or to build a conscious robot, but to take something already conscious and repeatedly apply a method of restoring part of what constitutes its consciousness. It is a method under which we know consciousness is retained, yet more and more machine parts are added to the body until the ratio of artificial to biological components becomes thoroughly ambiguous. The conclusion: the nervous system can be replaced, part by part, with alternatives made of different material, so that a human may slowly be turned into a machine while retaining consciousness.

Hao was fascinated by Sunday Grève’s thought experiment but questioned whether it is really feasible to substitute neurons with something non-biological, and whether it would be humane to carry out such experiments. She offered a thought experiment of her own: if we write code that shapes the choices of an AI such as AlphaGo, and if its options, information, and decision processes become dense and complex enough for the system to determine its choices on its own, then perhaps at some point it will be able to analyze situations and think like a human.

Hao then gave her presentation, “The Possibility of Realizing Digital Personality and the Imagination of the Future,” discussing what human life might look like 50 years from now. The first possibility is the coexistence of AI and humans. Artificial intelligence will answer, remember, and take care of everything for us, to the point of knowing when and how to prepare a cup of hot water the moment we need it. Our thoughts and mental signals would be constantly transmitted to machines, and some people might worry about privacy. However, Hao argued that humans are more adaptive than previously thought. We have already adapted to mobile phones and the internet, which are run by powerful AI algorithms that constantly collect information and analyze our habits and behaviors. She suggested that, over time, we will most likely adapt to this interaction between mind and machine as well.

The second and third possibilities are living in a metaverse and increasing longevity. Eventually, we might each have a digital personality capable of acting on our behalf, so convincingly that it becomes indistinguishable from our human form or mind. Hao’s friend Zeng Yi had recently scanned the neurons of mice and monkeys and used the data to develop a new kind of AI program that shows many similarities to a biological brain. If, in the future, brain-machine interactions become so well designed that we can scan all our neurons and record the data in a machine, then perhaps we can develop cloned programs of ourselves: AI that can answer simple questions, attend meetings, talk to people, and buy things for us, because we trust them to make the right decisions. “Perhaps when we die, our digital personalities can continue to live forever,” she said. With the advancement of medical science, average life expectancy is also expected to continue to increase.

Hao concluded her presentation with questions we ought to ask in business as well as in political and social life: Who has the right to define our future digital world? Should digital information be shared freely, or is it an asset to be priced and purchased? If we have digital personalities in the metaverse, will this new world be utopian or dystopian? And who will regulate a world in which the boundaries between the digital realm, machines, and humans have become so blurred?

Those interested in futuristic thinking and the worldview of Hao Jingfang can read her newly published book, “China’s Frontier,” which documents her interviews with ten leading scientists across China working on cutting-edge science and technology in fields such as artificial intelligence, space travel, and medical care.

Our final presenter, Wu Tianyue, discussed the possibility of recognizing the personhood of machines. He suggested that we may have automatically taken for granted the definition of a machine as “a piece of equipment with several moving parts that uses power to do a particular type of work.” In the past, people and machines belonged to two entirely different categories: the former decided what work to do, while the latter could only function as a human determined, no matter how complex the system. Today the situation has changed; AI appears able to plan, make its own decisions, and think like a human. This raises important questions: Can machines think? What are the practical implications? How should we treat them? And most importantly, can and should machines become like us? Wu approached these questions from two perspectives:

  1. Are machines endowed with basic rights?
  2. Are machines responsible for their decisions? For example, should a self-driving car be punished or jailed if it crashes and kills its passengers?

Legal frameworks are already under consideration in response to these questions, along with some controversial policy actions. On February 16th, 2017, the European Parliament adopted the Civil Law Rules on Robotics, which recommended the following action:

“Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently” (P8_TA(2017)0051, §59(f))

Saudi Arabia, on the other hand, recently took the bold step of granting citizenship to the robot Sophia. Not everyone likes the idea of granting personhood to AI, as the following quote shows, taken from an open letter to the European Commission signed by 285 experts in AI and robotics, industry leaders, and specialists in law, medicine, and ethics from across the EU:

“[W]e believe that creating a legal status of electronic ‘person’ would be ideological and non-sensical and non-pragmatic.”

One of the letter’s central ethical concerns was that an electronic personhood entitling robots to human rights such as dignity, integrity, remuneration, or citizenship would likely impinge on the rights of human beings. Against this background, Wu presented J.S. Gordon’s three theories of moral personhood in order to assess whether machines qualify as persons.

The first theory is “the human dignity approach,” which holds that only a being with dignity has full moral status and therefore qualifies for moral personhood, and that only human beings have such dignity. Wu argued that this definition leads to a narrow-minded speciesism that places self-centered human beings at the top of the ecosystem.

The second theory is “the rationality/autonomy approach”: a moral person can be held responsible for her actions only when she decides to act rationally and autonomously according to ethical principles she accepts. Wu’s criticism of this approach is that it illegitimately deprives animals and people with intellectual disabilities of moral personhood from the very beginning. By placing unjustified emphasis on human cognitive capacities, it pays insufficient attention to other aspects of human beings, such as our emotions. Finally, it wrongly identifies moral personhood with moral agency.

The last theory is the relational approach, which emphasizes the social context in which a moral person emerges. Wu cited a passage from Mark Coeckelbergh: “We may wonder if robots will remain ‘machines’ or if they can become companions. … Is not moral quality already implied in the very relation that has emerged here? For example, if an elderly person is already very attached to her Paro robot and regards it as a pet or baby, then what needs to be discussed is that relation, rather than the ‘moral standing’ of the robot.” Wu put forward two potential challenges to this approach: unwelcome social exclusion and the danger of falling into cultural relativism.

Wu suggested that we can learn from the medieval conception of personhood by studying the work of Boethius (c. 477-524) and William of Auxerre (c. 1150-1231). From these sources he derives the definition of a person as the sole owner of an existence extending through time, and on this basis he revises Gordon’s dignity approach to moral personhood. Machines powered by AI, however, are currently controlled by computer programs designed to function identically in every machine of a given type. Moreover, they do not have a personal life unique to themselves; a thinking machine’s life or history can be digitized, saved, and replicated on another machine. Wu therefore concluded that a machine cannot be a person unless it can come to have a unique, unrepeatable life.

Original article in English by Jin Young Lim
Edited by Christopher Eldred and Sarah Gilman


composed by Arswain
machine learning consultation by Anna Tskhovrebov
commissioned by the Berggruen Institute
premiered at the Bradbury Building
downtown Los Angeles
april 22, 2022

Human perception of what sounds “beautiful” is necessarily biased and exclusive. If we are to truly expand our hearing apparatus, and thus our notion of beauty, we must not only shed preconceived sonic associations but also invite creative participation from non-human and non-living beings. We must also begin to cede creative control to such beings, encouraging them to exercise their own standards of beauty and to collaborate with one another.

Movement I: Alarm Call
‘Alarm Call’ is a long-form composition and sound collage that juxtaposes, combines, and manipulates alarm calls from various human, non-human, and non-living beings. Evolutionary biologists understand the alarm call to be an altruistic behavior between species: by warning others of danger, callers instinctively place themselves within a broader system of belonging. The piece poses the question: how might we hear better, so as to broaden and enhance our sense of belonging in the universe? Might we behave more altruistically if we better heeded the calls of, and called out to, non-human beings?

Using granular synthesis, biofeedback, and algorithmic modulation, I fold the human alarm call – the siren – into non-human alarm calls, generating novel “inter-being” sonic collaborations with increasing sophistication and complexity. 
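For readers unfamiliar with the first of those techniques, the following is a minimal sketch of granular synthesis: a source recording is chopped into short, windowed “grains” that are scattered and layered into a new texture. It illustrates the general technique only, not the composer’s actual patch; the file name siren.wav and every parameter value are hypothetical.

```python
# Minimal granular-synthesis sketch. Illustrative only: "siren.wav" and all
# parameter values are hypothetical, not the composer's actual material.
import numpy as np
import soundfile as sf

def granulate(signal, sr, grain_ms=80.0, grains_per_sec=200, out_sec=10.0, seed=0):
    """Scatter short, Hann-windowed grains of `signal` across a new buffer."""
    rng = np.random.default_rng(seed)
    n = int(sr * grain_ms / 1000.0)              # grain length in samples
    window = np.hanning(n)                       # fade each grain in and out
    out = np.zeros(int(sr * out_sec))
    for _ in range(int(grains_per_sec * out_sec)):
        src = rng.integers(0, len(signal) - n)   # random read position in source
        dst = rng.integers(0, len(out) - n)      # random write position in output
        out[dst:dst + n] += signal[src:src + n] * window
    return out / (np.max(np.abs(out)) + 1e-9)    # normalize to avoid clipping

audio, sr = sf.read("siren.wav")                 # hypothetical source recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)                   # mix to mono
sf.write("siren_granulated.wav", granulate(audio, sr), sr)
```

Layering a few hundred grains per second in this way produces the dense, shifting clouds of sound that granular synthesis is known for; biofeedback and algorithmic modulation would then drive parameters such as grain length and density over time.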

Movement II: A.I.-Truism
A synthesizer piece co-written with an AI in the style of Vangelis’s Blade Runner score, to pay homage to the space of the Bradbury Building.

Movement III: Alarmism
A machine learning model “learns” A.I.-Truism and recreates Alarm Call, generating an original fusion of the two.

Movement IV: A.I. Call
A machine learning model “learns” Alarm Call and recreates A.I.-Truism, generating an original fusion of the two.


RAVE (IRCAM 2021): https://github.com/acids-ircam/RAVE
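The “learning” and “recreating” in Movements III and IV match the timbre-transfer workflow that RAVE (linked above) supports: a model trained on one body of audio resynthesizes another through its learned latent space. The sketch below shows the standard pattern for running audio through an exported RAVE model; the file names are hypothetical stand-ins, and how the piece’s own models were built is an assumption here, not something the program notes state.

```python
# Sketch of RAVE-based timbre transfer (file names are hypothetical).
# RAVE models are exported as TorchScript, so only torch is needed at
# inference time; see https://github.com/acids-ircam/RAVE for training.
import torch
import soundfile as sf

model = torch.jit.load("rave_ai_truism.ts")   # model trained on one movement
audio, sr = sf.read("alarm_call.wav")         # source audio to transform
if audio.ndim > 1:
    audio = audio.mean(axis=1)                # RAVE expects mono input
x = torch.from_numpy(audio).float().reshape(1, 1, -1)

with torch.no_grad():
    z = model.encode(x)                       # compress audio into the latent space
    y = model.decode(z)                       # resynthesize in the model's "voice"
                                              # (output length may differ slightly)
sf.write("fusion.wav", y.reshape(-1).numpy(), sr)  # sr should match the model's rate
```

Encoding audio from one movement with a model trained on the other, then decoding, is one plausible way to realize the “original fusion” the program notes describe.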