Yi Zeng

Professor of Brain-inspired Intelligence; 2018-2020 Berggruen China Center Fellow


Yi Zeng is a Professor and Deputy Director of the Research Center for Brain-inspired Intelligence at the Institute of Automation, Chinese Academy of Sciences; a board member of the National Governance Committee for the New Generation Artificial Intelligence under the Ministry of Science and Technology of China; and Director of the Research Center on AI Ethics and Governance at the Beijing Academy of Artificial Intelligence. He is also a member of the World Economic Forum Global Future Council on Values, Ethics and Innovation. His research focuses on technical models for brain-inspired AI and on AI ethics and governance. He leads the Brain-inspired Cognitive Engine (http://bii.ia.ac.cn/braincog/) and the Linking AI Principles (http://www.linking-ai-principles.org/) platform. During the fellowship year, he will map and analyze the global landscape of AI principles and examine how these considerations can be incorporated across the whole lifecycle of AI models and services.



composed by Arswain
machine learning consultation by Anna Tskhovrebov
commissioned by the Berggruen Institute
premiered at the Bradbury Building
downtown Los Angeles
april 22, 2022

Human perception of what sounds “beautiful” is necessarily biased and exclusive. If we are to truly expand our hearing apparatus, and thus our notion of beauty, we must not only shed preconceived sonic associations but also invite creative participation from beings non-human and non-living. We must also begin to cede creative control to such beings, encouraging them to exercise their own standards of beauty and to collaborate with each other.

Movement I: Alarm Call
‘Alarm Call’ is a long-form composition and sound collage that juxtaposes, combines, and manipulates alarm calls from various human, non-human, and non-living beings. Evolutionary biologists understand the alarm call as an altruistic behavior: by warning others of danger, the caller instinctively places itself within a broader system of belonging. The piece poses the question: how might we hear better to broaden and enhance our sense of belonging in the universe? Might we behave more altruistically if we better heeded the calls of – and called out to – non-human beings?

Using granular synthesis, biofeedback, and algorithmic modulation, I fold the human alarm call – the siren – into non-human alarm calls, generating novel “inter-being” sonic collaborations of increasing sophistication and complexity.
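The granular approach described above – chopping a source sound into short windowed grains and scattering them across a new timeline – can be sketched minimally in Python with NumPy. The function and parameter names here are illustrative, not taken from the piece's actual tooling; the siren is stood in for by a slowly frequency-modulated sine tone.

```python
import numpy as np

def granulate(signal, sr, grain_ms=50, density=200, duration_s=2.0, seed=0):
    """Naive granular synthesis: draw short Hann-windowed grains from
    random positions in `signal` and overlap-add them at random
    positions in a longer output buffer."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)
    out = np.zeros(int(sr * duration_s))
    n_grains = int(density * duration_s)
    for _ in range(n_grains):
        src = rng.integers(0, len(signal) - grain_len)  # where to read a grain
        dst = rng.integers(0, len(out) - grain_len)     # where to place it
        out[dst:dst + grain_len] += signal[src:src + grain_len] * window
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

# Example: granulate a 1 s siren-like tone (440 Hz carrier, slow FM sweep)
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
siren = np.sin(2 * np.pi * (440 + 220 * np.sin(2 * np.pi * 0.5 * t)) * t)
cloud = granulate(siren, sr)  # a 2 s "grain cloud" derived from the siren
```

Layering several such clouds, each granulating a different source recording, is one simple way to realize the kind of “inter-being” collage the note describes.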

Movement II: A.I.-Truism
A synthesizer piece co-written with an AI in the style of Vangelis’s Blade Runner score, to pay homage to the space of the Bradbury Building.

Movement III: Alarmism
A machine learning model “learns” A.I.-Truism and recreates Alarm Call, generating an original fusion of the two.

Movement IV: A.I. Call
A machine learning model “learns” Alarm Call and recreates A.I.-Truism, generating an original fusion of the two.

RAVE (IRCAM, 2021): https://github.com/acids-ircam/RAVE