On November 17 and 18, more than 25 experts from Peking University, Cambridge University, the Australian National University, the National University of Singapore, and other institutions joined the conference to share their perspectives on how to build trust between humans and AI and how to ensure safeguards for humanity. A cross-disciplinary network of experts will be established to make effective use of research resources worldwide.
Professor Huw Price from Cambridge University pointed out, “People are coming to the realization that we are dealing with a technology which is going to be huge in the long run in its impacts on human society. The challenge of how we ensure a beneficial change requires lots of different perspectives.”
Key Takeaways:
• Scientists provided an up-to-date overview of AI technology and sought suggestions from philosophers on ethical and trust-related issues arising in its development. One question raised was how to define a human being as cyborg technologies continue to advance.
• Experts suggested that government should act as a user of AI in public services, a regulator, a promoter of economic growth, an enabler of technology infrastructure, a leveler of uneven impacts, and a protector against harmful effects.
• Philosophers agreed that we should aim to build human-centric and responsible AI. However, it may not be feasible to program non-biological AI to follow ethical principles derived from human experience and human knowledge.