Dr. Jeroen van den Hoven, Professor of Ethics and Technology at Delft University of Technology, opened the afternoon session of the Forum on the Ethics of Artificial Intelligence from a Global Perspective, hosted by the Berggruen Research Center at Peking University and the Chinese Association for Artificial Intelligence, by identifying the term “norm entrepreneur” as central to any discussion of the long-term pursuit of AI. He agreed with the EU Commission’s assessment that the development of AI ethics principles will shape the long-term trajectory of the global order; according to Hoven, such a framework would moderate the battle for digital supremacy among China, Russia, and the US.
However, citing Emmanuel Macron’s policies and posture toward AI, Hoven expressed a preference for the EU’s “AI for humanity” model. He expressed concern that figures such as Mark Zuckerberg, Bill Gates, and Jeff Bezos today exert outsized influence over what constitutes a ‘good society’ because of the large concentration of power in techno-capitalist organizations. To this end, he believes AI ethics and development should revolve around human rights, the rule of law, and democracy ‘by design.’ He argued that AI by Design must prioritize intuitive engineering of AI that naturally evokes ethical responses and, in this way, resolves bias in search engines and newsfeed algorithms. He also underscored the need for value sensitive design, in which ethical values and norms are intrinsically tied to the engineering of AI through many iterations over time; in this way, innovation itself would become a moral concept.
Hoven also discussed four main ethical domains facing the future of AI: knowledge and transparency, control, freedom of choice, and privacy. With regard to knowledge, he described the challenge of making AI ‘explainable,’ given the human tendency to make things complex. He believes transparency must become an instrumental component of ethical AI, with recursive features (e.g., AI itself becoming the solution to search engine bias).
Concerning control, Hoven believes there needs to be a discussion over whether AI has the right to live in the traditional sense and, depending on the answer, over the philosophical justification for meaningful human control over AI.
Next, Hoven’s warning about freedom of choice centered on the ‘nudging’ misapplications of AI and their consequent ability to shape the behavior and attitudes of large swathes of society. Lastly, on privacy, he held up the EU’s General Data Protection Regulation as the de facto global standard for protecting persons on the internet while maintaining their autonomy and sovereignty as AI develops in tandem. Hoven concluded his remarks by encouraging governments to cooperate actively to advance human-centered AI principles in a way that incentivizes multiple stakeholders and sectors to participate in and implement a trustworthy framework.
Dr. Zeng Yi, a Berggruen Fellow and AI scientist at the Chinese Academy of Sciences, spoke next on behalf of academia. He first sought to dispel the myth that China was allowing the uncontrolled, accelerated development of AI without regard to principles. Zeng believes there is a need for ‘friendly’ AI that straddles the line between the human-centric and absolutely synthetic extremes. Citing both a set of AI principles released on May 25, 2019 by the Beijing Academy of Artificial Intelligence and the Chinese Ministry of Science and Technology’s public outreach to develop a list of ‘Next Generation AI Principles,’ Zeng asserted that there is great topical overlap worldwide between existing AI ethics frameworks (65% of topics across 45 lists).
To this end, he argued for the adoption of a framework built around the theme of ‘optimizing symbiosis’: amalgamating the collective intelligence of AI and mankind. To Zeng, the fact that governments are not discussing either Artificial General Intelligence or Artificial Super Intelligence is a problem. He also delineated the difficulty of building ethical AI from a scientific perspective: citing the mirror test with Rhesus monkeys, he showed how an AI with self-experience could display moral behavior and signs of cognitive development without genuinely understanding them.
Zeng concluded his remarks by emphasizing the importance of developing ethical AI with the human brain as the starting point. According to him, altruistic behavior in humans is tied to reward, theory of mind, and emotional salience. He implored the audience to consider a future in which AI develops a certain level of consciousness and therefore highlighted the need for ethical principles that humanize AI and allow for a common destiny of mutual interaction; such a framework requires careful deliberation from multiple cultures and points of view.
Finally, Dr. Tim Pan, Senior Director of Microsoft Research Asia, rounded out the panel representing industry. He stated that Microsoft strongly believes AI will benefit society. However, he acknowledged the need for concern: citing a 2017 McKinsey report, he noted there would indeed be technological unemployment in the long-term future. Pan qualified, though, that these changes would happen gradually over a 20-year time frame, providing plenty of time for adaptation.
Pan also discussed the role of human agency in developing AI, reciting the Chinese proverb: “When the water level rises, so too does the boat.” He believes a strong human emphasis at all points of AI development is key to averting the dystopian AI futures envisioned by Yuval Noah Harari, Stephen Hawking, and Elon Musk. Pan said Microsoft has delineated a multi-pronged path to developing AI effectively: through ‘Computer Science for All,’ government regulation, and principle-centered R&D.
Microsoft is working closely with China to promote universal education in computational thinking for children so that they may understand and become beneficiaries of future AI. In the same vein, Pan said Microsoft aims to cooperate with other corporations to improve education across vertical domains. Pan also mentioned that Microsoft has always believed government should play a strong role in guiding the development of AI and, to this end, called for better direction and policies from the respective international bodies. Lastly, he discussed Microsoft’s commitment to ethical AI, describing the role of Microsoft’s Aether (AI, Ethics, and Effects in Engineering and Research) Committee in ensuring all AI platforms and experiences are embedded in Microsoft’s core mission of “empowering every person and every organization on the planet to achieve more.”