Templeton World Charity Foundation Awards Berggruen Institute More Than $225,000 to Support the Transformations of the Human Program's "AI and the Human" Fellowship

Project Summary

The vocabulary we have available to think about ourselves as humans—what it means to be human and to live a human life, what sets us apart, and what defines us as a group—increasingly fails us. The novel understanding of the human that emerges from recent advances in engineering and the biosciences troubles the comprehension of the human silently carried by our terms, words, and concepts. Fields such as AI research, microbiome studies, and gene editing, for example, undermine the distinctions between the human and nature and between humans and machines.

Directed by Tobias Rees, the Berggruen Institute's Transformations of the Human program emerges from a recognition of these issues. More specifically, it seeks to make the human questions at stake in AI part of AI design itself. One avenue of investigation is a research fellowship that gives junior researchers, philosophers, and social scientists an opportunity to work in AI laboratories on a daily basis. Their task is to discover philosophical questions—instances of the transformation of the human—that silently reverberate in the concrete everyday labor of the engineering researchers. This partnership with the Berggruen Institute places junior scholars in key AI labs (e.g., MIT) to observe and study AI researchers at work.

Jacob Browning’s research is particularly aligned with the project’s goal, addressing the following question: What would it mean to program a computer that cares? His research will engage engineers at MIT and NYU in the programming phase by observing them in the laboratory and capturing how issues of caring—in this case, self-expression—are discussed and formulated, even if ultimately rejected as technologically unfeasible. The aim is to collect answers over time, tracking how the work of AI labs transforms concepts that are often definitive of our humanity—creativity, cognition, personality, caring.

For more project details, please click here


composed by Arswain
machine learning consultation by Anna Tskhovrebov
commissioned by the Berggruen Institute
premiered at the Bradbury Building
downtown Los Angeles
april 22, 2022

Human perception of what sounds “beautiful” is necessarily biased and exclusive. If we are to truly expand our hearing apparatus, and thus our notion of beauty, we must not only shed preconceived sonic associations but also invite creative participation from beings non-human and non-living. We must also begin to cede creative control away from ourselves and toward such beings by encouraging them to exercise their own standards of beauty and collaborate with each other.

Movement I: Alarm Call
‘Alarm Call’ is a long-form composition and sound collage that juxtaposes, combines, and manipulates alarm calls from various human, non-human, and non-living beings. Evolutionary biologists understand the alarm call as an altruistic behavior across species: by warning others of danger, callers instinctively place themselves within a broader system of belonging. The piece poses the question: how might we hear better, to broaden and enhance our sense of belonging in the universe? Might we behave more altruistically if we better heeded the calls of – and called out to – non-human beings?

Using granular synthesis, biofeedback, and algorithmic modulation, I fold the human alarm call – the siren – into non-human alarm calls, generating novel “inter-being” sonic collaborations with increasing sophistication and complexity. 
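The granular technique described above can be illustrated in code. The sketch below is not the composer's actual patch; it is a minimal, self-contained NumPy illustration of granular synthesis, assuming a synthetic siren-like tone as the source material: short Hann-windowed grains are read from random positions in the source and overlap-added at random positions in the output, producing a scattered "grain cloud."

```python
import numpy as np

def granulate(signal, sr=44100, grain_ms=50, density=200, duration_s=2.0, seed=0):
    """Scatter short windowed grains from `signal` across a new output buffer.

    grain_ms: length of each grain in milliseconds
    density:  number of grains per second of output
    """
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)                 # smooth each grain's edges
    out = np.zeros(int(sr * duration_s) + grain_len)
    for _ in range(int(density * duration_s)):
        src = rng.integers(0, len(signal) - grain_len)  # random read position
        dst = rng.integers(0, len(out) - grain_len)     # random write position
        out[dst:dst + grain_len] += signal[src:src + grain_len] * window
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out              # normalize to +/-1

# Source material: one second of a hypothetical siren-like warbling tone
sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
siren = np.sin(2 * np.pi * (440 + 220 * np.sin(2 * np.pi * 2 * t)) * t)
cloud = granulate(siren, sr=sr)
```

Varying `grain_ms` and `density` over time is one simple way to realize the "increasing sophistication and complexity" the movement describes; real-time biofeedback or algorithmic modulation would then drive those parameters.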

Movement II: A.I.-Truism
A synthesizer piece co-written with an AI in the style of Vangelis’s Blade Runner score, to pay homage to the space of the Bradbury Building.

Movement III: Alarmism
A machine learning model “learns” A.I.-Truism and recreates Alarm Call, generating an original fusion of the two.

Movement IV: A.I. Call
A machine learning model “learns” Alarm Call and recreates A.I.-Truism, generating an original fusion of the two.

RAVE (IRCAM 2021) https://github.com/acids-ircam/RAVE