COVID-19 and Care Robots: What Should Care Robots Care About?

Written by Lan Tianmeng, intern and undergraduate at Renmin University of China

In a two-hour Berggruen Seminar on August 6, Berggruen Fellow Wu Tianyue delivered a talk on the philosophy of the body and the questions of life ethics triggered by advances in cutting-edge technology. Wu, a tenured Associate Professor in the Philosophy Department at Peking University, also offered reflections on the ethical consequences of care robots from the theoretical perspectives of physical existence, social resources, data carriers, and artificial agents. Using the examples of intelligent prostheses and robot pets, Wu explored the future of human-computer symbiosis. The seminar was hosted by Duan Weiwen, Professor at the Institute of Philosophy and Director of the Science Technology and Social Research Center at the Chinese Academy of Social Sciences, and a current Berggruen Fellow.

“A holistic understanding of health and care reflects the basic value orientation of contemporary ethics of medicine and caregiving, and should be the starting point of our ethical reflection on care robots.”

Wu opened by unpacking the differences between the seminar’s Chinese and English titles. The word “care” is complex in English: it can refer both to specific acts of looking after someone and to a more general emotional attitude of fondness and love, which makes it difficult to translate faithfully into Chinese. The word used in the Chinese version of the event’s title is similarly broad but is understood to include social and ethical relationships in addition to specific acts of care. Wu believes these discrepancies in translation affect not only our theoretical orientation with respect to robot classification, but also our ethical practices. With this in mind, the aim of Wu’s talk was to uncover the complexity of care and caretaking activities and to explore their ethics.

Wu played a video showing how care robots have been widely used throughout the COVID-19 pandemic. Care robots in hospitals can reduce the risk of viral infection and ease the impact of medical personnel shortages. Many believe that the use of care robots in medical facilities will become one of the most widespread trends in technology adoption. The use of care robots during COVID-19 has provoked a wide range of responses.

Care robots of all shapes and sizes, from robots for diagnosis and treatment to robotic animals that keep patients company, have been used during the pandemic. Wu noted that current reporting pays little attention to the psychological trauma caused by COVID-19: some studies show that recovered patients are more prone to depression, and the social isolation caused by the pandemic is widespread. He suggested that care robots could improve the present situation.

Source: R. Murphy, V. Gandudi, Texas A&M; J. Adams, Center for Robot-Assisted Search and Rescue, CC BY-ND

At present, the International Organization for Standardization (ISO) broadly classifies robots as either industrial robots or service robots, and further divides service robots between personal and professional use. The use of care robots, however, straddles the boundary between personal and professional use, as they are utilized both by families and by professional medical and health institutions. Van Wynsberghe proposes that “a care robot can be defined as a robot used for satisfying any care need in a care practice. It can be used by the care-provider, the person being cared for, or both in hospitals, nursing homes, hospice centers, or at home” (van Wynsberghe 2015, 62).

The classification of care robots involves two core concepts: health and caregiving practice. The concept of health does not refer only to normal physical functions; the broad definition of health advocated by the World Health Organization (WHO) covers both mental and social health, including psychological state and social ability. This definition transcends the traditional division between medical and non-medical domains; health is not only governed by medical institutions, but must also be appropriately maintained in one’s personal life. Accordingly, caregiving includes attending to all vulnerable groups, covering all practices aimed at meeting the needs of the elderly, frail, sick, or disabled. This comprehensive perspective on health and care extends across myriad specific life scenarios.

Wu believes this holistic understanding of health and care reflects the basic value orientation of contemporary medical and caregiving ethics, and should be the starting point of our ethical reflection on care robots.

Care ethics, also known as the “ethics of care,” is an ethical theory that emerged from feminist theory in the 1980s. It emphasizes that ethics does not spring from the independent moral reasoning of an isolated agent, but from understandings gleaned from practical experiences that illustrate our interdependence. Before a person is capable of her own rational planning, she is a child in need of care; only through receiving care does she undergo the growth and education needed to possess her own reasoning capacity. It follows naturally that a person living ethically cannot ignore those who need care, such as people with physical and mental disabilities or patients with temporarily impaired functions.

The importance of care ethics is in how it prompts us to explore the foundations of an ethical life. Both utilitarian and deontological ethics assume that a rational actor is one who makes decisions based on their resources and set of potential actions. Care ethics does not deny one’s autonomy as a rational subject; rather, it encourages us to reflect on whether such autonomy is unconditional and independent of any relationship. It argues that the diverse and complicated relationships between those who give and receive care constitute the starting point of an ethical life. This clarification helps us discern flaws and prejudices in past ethical reflections; for example, Aristotle argued that women and children were not actors capable of planning, and were therefore excluded from the possibility of living ethically.

Pointing out that ethicists still disagree about the definition of “caring,” Wu offered two points of his own. First, caring must be realized through concrete physical existence. This is a characteristic many systems of feminist ethics share: rather than discussing general or abstract ethical rules, they attend to ethical practices in specific environments. According to care ethics, caring is always realized through specific external activities and work, a practice with intrinsic value. Second, caring is oriented toward the other: caring with intrinsic value must be realized in relation to other people, and caregivers must rely on others in the act of caring.

Based on recent thinking and his proposed definitions, Wu provided a theoretical framework for the ethical discussion of care robots that clarifies problems at different levels. First, the care robot is a physical existence; the problems it may encounter as a machine can be handled by traditional ethical principles regarding machines. Second, care robots are a social resource, an element of the medical and public health system. Because they are costly and must be continually maintained, decision makers should carefully consider the allocation of public resources and the burden of expenses. Care robots can have adverse effects on employment, but their adoption can also create new jobs, such as positions tasked with oversight. Third, care robots can be considered data carriers. The data they carry is closely linked to personal identity and privacy, so it should be protected accordingly. Finally, Wu placed particular emphasis on the fact that, as an artificial agent, the care robot has a certain degree of autonomy. Although care robots are not independent moral agents, they should still be regarded as independent moral elements; integrating them into ethical life brings great challenges, and they must be appropriately constrained.

Mode of existence 1: Physical existence
Basic characteristics: Humans consciously make tools with specific functions that can assist or replace humans in completing tasks.
Ethical considerations: Operational safety and environmental sustainability.

Mode of existence 2: Social resource
Basic characteristics: An element of the medical and health system, and an integral part of future public medical resources.
Ethical considerations: The allocation of care robots as scarce medical resources; the impact of care robots on employment in related industries.

Mode of existence 3: Data carrier
Basic characteristics: Care robots must acquire, store, transmit, and utilize personal physical, mental, and social data.
Ethical considerations: Confidentiality, privacy, transparency, algorithmic bias, informed consent.

Mode of existence 4: Artificial agent
Basic characteristics: “Machines that can feel, think and act in the world,” with basic characteristics such as interactivity, autonomy, and adaptability. In the ethical network composed of robots and people, robots should be regarded as independent moral elements.
Ethical considerations: Can we preemptively embed care robots with ethical codes? Should care robots be allowed to form independent decision-making abilities through deep learning? How do we ascribe responsibility to care robots? Should we encourage people to establish emotional ties with robots?


During his talk, Wu offered two examples to illustrate ethical concerns about care robots. Intelligent prostheses can use sensors to collect signals conveying the wearer’s intention to walk, convert those signals into instructions the machine can recognize, and even transmit bidirectional feedback to the nervous system. Unlike traditional industrial robots, the prosthesis, as a wearable device, interacts deeply with both the wearer and the external environment. Stable performance and safety in complicated environments are, of course, necessary requirements in designing intelligent prostheses, as any failure is very likely to seriously injure the wearer.
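The sense-decode-actuate-feedback loop described above can be sketched in a few lines of Python. This is an illustrative sketch only, not Wu’s design or any real device’s control code: the function names, the activation threshold, and the joint angles are all hypothetical stand-ins for the signal-processing and control models such prostheses actually use.

```python
# Hypothetical sketch of an intelligent prosthesis control cycle:
# sense -> decode intent -> actuate -> feed back to the wearer.

def decode_intent(emg_signal):
    """Convert raw muscle-sensor readings into a discrete gait command.

    A simple mean-activation threshold stands in for a real
    machine-learning intent-recognition model."""
    mean_activation = sum(emg_signal) / len(emg_signal)
    return "swing" if mean_activation > 0.5 else "stance"

def control_step(emg_signal):
    """One control cycle: decode the wearer's intent, pick an
    actuator setting, and return feedback for the nervous system."""
    command = decode_intent(emg_signal)
    # Hypothetical knee-joint targets, in degrees, per gait phase.
    angle_deg = {"swing": 35.0, "stance": 5.0}[command]
    return {"command": command, "angle_deg": angle_deg}

# A burst of muscle activity is decoded as the intent to swing the leg.
print(control_step([0.8, 0.7, 0.9]))
```

The ethically salient point is visible even in this toy loop: the decoded command, not the wearer’s intention itself, drives the actuator, so a misclassification at `decode_intent` directly moves the wearer’s body.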

It is particularly important that intelligent prostheses are closely linked to the wearer’s body. Whatever one’s view of the relationship between body and mind, it cannot be denied that intelligent prostheses affect the mind. Some wearers have said that prostheses capable of natural limb movement have become an integral part of their bodies, with the physical feeling of the prosthesis integrated into their minds. Our bodies are “ours” in a completely different way than our property is, and this inherently intimate relationship gives the body special value in ethical practice. By way of illustration, injuries to the body are legally categorized differently than damage to other objects.

Wu also elaborated on another use for care robots: as pets. Various ethical concerns arise in the interaction between patients and robot pets, the first of which is emotional dependence. The behavior pattern of a robot pet (such as the Paro seal robot) is based on imagination, imbuing it with an anthropomorphic mode of thinking that real seals do not have. The robot pet’s user may invest her emotions in something that is neither conscious nor alive. Just as fetishism is permitted as a personal choice, the elderly should be allowed to invest their emotions in robot pets. But robot pets also present problems of deception: the way they interact with people differs from their actual mode of existence, which may mislead users. Furthermore, human-machine interaction may have negative effects on interpersonal communication. When interacting with robot pets, for example, autistic people may engage in stereotyped movements and repeatedly mimic the robot’s actions, which may affect their relationships with other people.

Wu concluded the seminar by stating that the complicated circumstances we encounter in practice require us all to consider the following questions: Can unconscious, emotionless robots contribute to the field of care? Does the care provided by robot pets spring from moral values? These open questions demand our continued reflection and exploration.

