Dr. Jun Huan, Head of the Big Data Lab at Baidu Research, opened the morning session of the Forum on the Ethics of Artificial Intelligence from a Global Perspective, hosted by the Berggruen Research Center at Peking University and the Chinese Association for Artificial Intelligence. He discussed Baidu’s multi-layer approach to developing AI, stressing the importance of the Baidu platform and of working toward AI with cognition.
Huan also discussed Baidu’s ‘open ecology strategy,’ which provides open-source code for many of Baidu’s R&D projects. This approach, he affirmed, would give frontier industries, smaller groups, and start-ups the ability to close the widening gap with larger corporations.
Huan also stated that Baidu believes in automating the development of AI, and that the company is working on an ‘Automated Neural Network Architecture Design’ that would automatically incorporate Baidu’s AI service platforms and products. He also highlighted the importance of open-access AI and deep neural learning as protection against adversarial attacks in both white-box and black-box settings. Looking ahead, he said Baidu will strive to bring a more philosophical ‘constructivist’ learning model, which includes a two-step assimilation and accommodation process, to neural learning. He ended his discussion by identifying the challenges of reconciling IT with inclusive AI principles, and of solving privacy concerns should AI models become mass-produced, as the defining problems of the ‘fourth industrial revolution.’
Jake Lucchi, Head of Content on Artificial Intelligence and Public Policy and Government Relations at Google Asia Pacific, spoke next as a representative of industry. Lucchi said that Google believes AI will have a very positive impact on society, and that the company is working to address problems such as security risks, synthetic media, bias, and fairness concerns. Google, according to Lucchi, is working to develop AI that prioritizes equality of opportunity through programs such as the ‘People + AI Research Initiative’ (PAIR), which is also an open-source visualization project.
Lucchi also introduced Google’s firm principles concerning AI: the belief that AI should be socially beneficial, avoid creating or reinforcing bias, be built and tested for safety, be accountable to people, incorporate responsible privacy design, and uphold scientific excellence. With regard to accountability, he spoke about Google’s commitment to designing AI systems that provide appropriate opportunities for feedback, relevant explanations, and an appeals process. In this way, he expressed hope that the development of AI elsewhere would also integrate appropriate levels of human direction and control. Mr. Lucchi then expressed solidarity with Dr. Huan and Baidu’s focus on an open-source culture around AI, especially given the dangers of dual-use. Lucchi concluded his remarks with the role of international governance: he believes sector-specific measures will likely fill current gaps in completing a list of AI ethics principles, with emphasis on sharing good practices and accountability while preserving space to innovate.
The panel then transitioned to the perspective of policymakers. Wu Tong, a researcher at the China Robot Industry Alliance, introduced his thoughts on AI ethics principles as someone involved in the factory and robotics manufacturing industry. He isolated the key challenge posed by the impending construction of ‘smart robots’: how does one reconcile functional security with ethics? How does one evaluate AI through big data if the intentions behind big data are impossible to know? Mr. Wu emphasized the difficulty of keeping mathematical models true to reality, given the speed of development.
Wu also raised several key questions concerning ethical AI: first, should robots be held to the same ethical standards as humans? Second, how do we contextualize human accountability for the decisions made by AI? And third, how do we supervise ethical AI decision-making? He offered a hypothetical to demonstrate the philosophical scope of these challenges: if a robot becomes cognizant of its existence, how should a human manager react to a robot that refuses to work? Wu believes in establishing an AI ethics framework that is more personified, with moral rules and processes that can evaluate ethics in the context of security, and with effective monitoring at all stages of an AI’s development and work. In other words, he envisions a stricter, far more adaptable version of Isaac Asimov’s Three Laws of Robotics. Mr. Wu concluded his discussion by accentuating the need to develop anthropogenic ethics standards from a technological perspective, as “the freer the robot, the more pressing the need for ethical standards.”
Next, Dr. Nathalie Nevejans spoke on the regulation of AI ethics from the perspective of the European Union. Citing the European Parliament’s “European Civil Law Rules in Robotics” as a non-legally binding framework, she argued that universal AI ethical principles need to be grounded in human freedoms and values. She underscored the importance of using AI to promote human quality of life; to this end, Nevejans believes the “Ethics Guidelines for Trustworthy AI” developed by the European Commission point in the right direction for fostering innovation and economic growth in parallel. She then identified four key areas of tension that ethics principles must reconcile: human autonomy, prevention of harm, fairness, and explicability.
To supplement the EU’s “human-centric” approach, Nevejans discussed the importance of ‘trustworthy’ AI: the European Commission established a high-level expert group that invited public opinion on what trustworthy AI and its ethics principles should look like. According to Nevejans, the EU believes humans should always maintain sovereignty over AI, and that trustworthy AI must comply with international charters in a lawful, ethical, and robust manner. She stressed that trustworthy AI would be a manifestation of the principles of democracy and the rule of law, and would allow for public intervention and appraisal of the AI at any point in its use or development.
Nevejans concluded by emphasizing that any AI ethics guidelines must be developed in close collaboration with stakeholders across the public and private sectors, and said she believes the European Commission framework represents a feasible global model.
Following the forum’s first set of panels, an academic round-table discussion was held, in which brief remarks were made by Dr. Zeng Yi, Dr. Wendell Wallach, and Dr. Jeroen van den Hoven. Dr. Zeng, addressing the main theme of his panel, asserted that AI would most certainly speed up human development. He warned against overemphasizing AI R&D in the absence of ethical principles, stating that it is fine to slow down AI development so long as moral questions are exhaustively answered; he highlighted the importance of forums such as this event in keeping the ethical considerations of AI in the academic spotlight.
Wallach believes consideration of AI ethics could slow innovation but doesn’t necessarily have to. To minimize the negative impacts of unethical AI, he said society should strive to establish a globally uniform platform for AI ethics, in both vision and action. Because developing AI ethics is not necessarily the obligation of governments or corporations, mechanisms need to be put in place that are international in scope, where consensus over values already exists.
Wallach identified the key problem as reconciling concepts that carry markedly different interpretations in different regions, such as US and Chinese discussions of human rights. Lastly, van den Hoven argued that innovation would come in parallel with the development of AI ethics: citing the EU’s human-centric model as discussed by Nevejans, he expressed optimism that solutions to societal problems, such as privacy, would be found without compromising on values.