Dr. Osamu Sakura, Principal Investigator of the Science, Technology and Society Team at the RIKEN Center for Advanced Intelligence Project and the University of Tokyo, opened the evening session of the Forum on the Ethics of Artificial Intelligence from a Global Perspective, hosted by the Berggruen Research Center at Peking University and the Chinese Association for Artificial Intelligence, with perspectives from academia.
Sakura posed the question of why culture matters in studying the interactions between emerging technology and human society. He argued that the Ethical, Legal, and Social Implications (ELSI) framework is insufficient because society can now influence the direction of technological development. In envisioning the impact of technology on society, Sakura believes cultural questions are key to predicting the trajectory of future technology, including mankind’s ethical and social position concerning robotics and AI.
He discussed the overall cultural implications of human-AI relations through an analysis of artistic compositions across societies. Sakura believes AI and robots may serve as a bridge between humans and pure machines. He explained that Japanese perceptions of AI are deeply rooted in the East Asian View to Nature (EAVN), in which East Asian cultures treat nature within a holistic, harmony-oriented framework and ultimately regard humans as part of nature. In the West, by contrast, robots are portrayed alongside humans in a closed microcosm in which their relations cannot be ascertained as either equal or ‘dominant-subordinate.’
Next, Counsellor Pierre Lemonde, the Scientific and Technological Counsellor for the Embassy of the French Republic in the People’s Republic of China, spoke from the policy angle. He began his remarks by introducing two broad foundations of the French approach to AI: first, he indicated that French and EU policies towards AI are nearly indistinguishable; second, he stated that the EU and France believe AI cannot be developed without ethics as a fundamental part of all AI research and application.
Counsellor Lemonde then spoke about France’s international cooperation concerning AI, including a bilateral event with China in February 2019 in which AI was identified as one of the priority areas, framed by seven core values: explicability and transparency, non-discrimination and fairness, security, safety and awareness, privacy and data governance, and accountability. Counsellor Lemonde also advocated for global adoption of the EU’s general posture towards AI. Such a stance emphasizes technical expertise and academic research to reflect on a future human-AI civil society and environment.
He also discussed the need to establish international, intergovernmental panels of experts that would readily tackle the challenge of developing ethical AI, including the broad cultivation of AI education across all stages of its development. Counsellor Lemonde spoke about the importance of involving political scientists in the AI conversation, so that AI is also studied within an electoral and democratic context. He ended his remarks by emphasizing the need for an ethical framework that stimulates, not stifles, innovation, while insisting on strict supervision of AI policy tools that touch on race, gender, and other sensitive characteristics.
The final speaker was Dr. Xiaopeng Chen, Professor and Director of the Robotics Lab, University of Science and Technology of China, who spoke on behalf of the Special Committee on Ethics of the Chinese Association for Artificial Intelligence. He revealed that an official booklet containing the special committee’s work on AI would soon be published. The booklet articulates three main risk areas concerning the future development of AI. First, the risk of technological ‘discontrol’: the idea that humanity will not be able to rein in technological development before it causes incalculable social consequences. Second, the abuse of AI, which already occurs through synthetic media, algorithmic bias, and similar applications. The final risk concerns the broader impacts of AI on human society, such as technological unemployment.
Chen discussed the status quo in the context of these three main challenges. Regarding technological ‘discontrol,’ he spoke to the philosophical concerns that would arise should AI develop consciousness: who will take responsibility for AI discontrol, and who will manage it? Concerning the abuse of technology, he averred that throughout human history, such abuse has been rampant whenever a new industry of innovation emerged. Lastly, concerning human society, Chen focused on the implications of AI for China, where there are still vast amounts of factory work; he therefore advocated for a long-term approach to AI as opposed to a parochial focus on efficiency gains.
He also believes AI will be the solution to other global and societal issues, such as climate change, the demographic ageing crisis in East Asia, rural-urban migration, and medical care and resource distribution. Chen structured his approach to AI around promoting human wellbeing and societal harmony. He laid out two principles in this framework: first, AI must be regulated such that it is pushed to do societal good, and second, AI must be restricted from causing harm. To this end, Chen stated that the Special Committee has constructed the “Dynamics Framework of the AI Ethics System,” which fulfills these requirements.
Chen also believes that while there is general international consensus over AI ethics principles, there need to be enforcement measures and compulsory regulations. He raised the Chinese National Standards Committee’s technical standards as one such model. Chen challenged the audience to conceptualize the costs and benefits of AI differently than those of other technologies. Autonomous driving, for example, would be far more effective and efficient if human drivers were removed from the road outright. According to Dr. Chen, this demonstrates that the biggest bottleneck to the innovation and development of AI is humanity itself.
He concluded the panel by discussing the role of specialized research in laying the groundwork for stable AI ethics principles. For example, should AI develop consciousness, there would need to be input from scientists and academics across disciplines. Accordingly, Chen proposed two distinct strains of research concerning AI ethics: first, focused ethics research within the context of AI, and second, technological research that concentrates on maximizing AI’s utility for human benefit.