Imagining Futures

In July 2021, the Berggruen Research Center at Peking University launched the project “Imagining Futures” with its first workshop, “AI, Robots, and Human Society in Thirty Years.” Experts in AI, biotech, and international relations, as well as forward-thinking philosophers, sci-fi writers, and artists, were invited to share their thoughts on how their fields would evolve over the next thirty years and how they predicted developing technologies would alter society.

This report is based on the content of the workshop. It attempts to establish three future scenarios to help us formulate a basic consensus and forecast the future of technology and society. In this report, AI researchers offer different appraisals of the current and future development of artificial intelligence, covering topics such as the emergence of consciousness and general intelligence in machines, as well as whether or not artificial general intelligence (AGI) will be realized within the next thirty years. Biotech researchers highlight some of the ethical considerations of gene editing and offer innovative interpretations of what life is and what it actually means to be alive. Philosophers incorporate knowledge of science and neuroscience to explore the possible existence of logical limits to artificial intelligence, whether or not general intelligence can be programmed, and ethical challenges posed by technology, such as the “data gaze” and gene enhancement. International relations experts incorporate reflections on contemporary nation-building as they look to the future. Sci-fi writers and artists build upon the implications of philosophical concepts and hard science, depicting imaginative spaces where our hopes and fears about the future are laid out in the open.

Thirty years is just the starting point for this project. Life has existed on this planet for more than 3 billion years, and humans have been evolving for hundreds of thousands of years—our eventual aim is to imagine the future on even larger time scales. We look forward to engaging in deeper discussions and strengthening our understanding of humanity, technology, life, and existence from a multidisciplinary, multicultural perspective. Doing so can help us deal with the challenges of our rapidly changing world.


composed by Arswain
machine learning consultation by Anna Tskhovrebov
commissioned by the Berggruen Institute
premiered at the Bradbury Building
downtown Los Angeles
April 22, 2022

Human perception of what sounds “beautiful” is necessarily biased and exclusive. If we are to truly expand our hearing apparatus, and thus our notion of beauty, we must not only shed preconceived sonic associations but also invite creative participation from beings non-human and non-living. We must also begin to cede creative control away from ourselves and toward such beings by encouraging them to exercise their own standards of beauty and collaborate with each other.

Movement I: Alarm Call
‘Alarm Call’ is a long-form composition and sound collage that juxtaposes, combines, and manipulates alarm calls from various human, non-human, and non-living beings. Evolutionary biologists understand the alarm call as an altruistic behavior: by warning others of danger, callers instinctively place themselves within a broader system of belonging that crosses species lines. The piece poses the question: how might we hear better to broaden and enhance our sense of belonging in the universe? Might we behave more altruistically if we better heed the calls of – and call out to – non-human beings?

Using granular synthesis, biofeedback, and algorithmic modulation, I fold the human alarm call – the siren – into non-human alarm calls, generating novel “inter-being” sonic collaborations with increasing sophistication and complexity. 
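For readers unfamiliar with granular synthesis: the technique slices a recording into very short, windowed “grains” and re-scatters them in time to build a new texture. The sketch below is a minimal illustration in Python/NumPy, with illustrative parameter values and hypothetical source material; it is not the composer’s actual patch.

```python
# Minimal, hypothetical granular synthesis sketch: short windowed "grains"
# of a source recording are scattered across an output buffer.
import numpy as np

def granulate(source, out_len, grain_len=2048, grains_per_sec=200, sr=48000, seed=0):
    """Scatter windowed grains of `source` into a buffer of `out_len` samples."""
    rng = np.random.default_rng(seed)
    out = np.zeros(out_len)
    window = np.hanning(grain_len)                 # smooth each grain's edges
    n_grains = int(grains_per_sec * out_len / sr)
    for _ in range(n_grains):
        src = rng.integers(0, len(source) - grain_len)   # where to read a grain
        dst = rng.integers(0, out_len - grain_len)       # where to place it
        out[dst:dst + grain_len] += source[src:src + grain_len] * window
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out               # normalize to avoid clipping

# "Folding" one alarm call into another could then be as simple as
# granulating both sources and mixing the resulting textures:
# texture = 0.5 * granulate(siren, n) + 0.5 * granulate(birdsong, n)
```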

Movement II: A.I.-Truism
A synthesizer piece co-written with an AI in the style of Vangelis’s Blade Runner score, to pay homage to the space of the Bradbury Building.

Movement III: Alarmism
A machine learning model “learns” A.I.-Truism and recreates Alarm Call, generating an original fusion of the two.

Movement IV: A.I. Call
A machine learning model “learns” Alarm Call and recreates A.I.-Truism, generating an original fusion of the two.


RAVE (IRCAM 2021) https://github.com/acids-ircam/RAVE
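The cross-recreation described in Movements III and IV resembles a common RAVE use case: audio from one piece is passed through the latent space of a model trained on the other. The sketch below illustrates that idea with an exported TorchScript RAVE model; file names and the 48 kHz sample rate are assumptions, and this is not the artist’s actual pipeline.

```python
# Hypothetical sketch of the Movement III/IV idea with RAVE
# (https://github.com/acids-ircam/RAVE): a model trained on one piece
# resynthesizes audio from the other through its learned latent space.
# File names and the 48 kHz sample rate are placeholders/assumptions.
import torch
import torchaudio

torch.set_grad_enabled(False)

# RAVE model exported to TorchScript after training on "Alarm Call"
model = torch.jit.load("rave_alarm_call.ts")

# Load the other piece ("A.I.-Truism"), mix to mono, match the model's rate
audio, sr = torchaudio.load("ai_truism.wav")
audio = torchaudio.functional.resample(audio.mean(0, keepdim=True), sr, 48000)

# Encode into the latent space learned from "Alarm Call", then decode:
# the output renders "A.I.-Truism" through the sonic world of "Alarm Call".
x = audio.unsqueeze(0)          # shape: (batch, channels, samples)
z = model.encode(x)
y = model.decode(z).squeeze(0)

torchaudio.save("fusion.wav", y, 48000)
```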