Xenobots and Futures of Life

July 13, 2023

3 PM, Norton’s Woods Conference Center at the American Academy of Arts and Sciences

“Embodied intelligence pushes against the world and observes how the world pushes back.”
—Josh Bongard

“Xenobots and Futures of Life” is a public talk that will unfold as a series of stories told by the four inventors of xenobots, programmable lifeforms: Michael Levin (Tufts University), Josh Bongard (University of Vermont), Doug Blackiston (Tufts University), and Sam Kriegman (Northwestern University).

The group will narrate the theory behind the xenobots’ creation, describe the computational and craft practices that molded them, and consider the philosophical and cultural significance of these entities in shaping the futures of AI, simulation, and embodiment — all pursuits that, the inventors contend, should aim to promote human flourishing.

Future Humans Associate Director Claire Isabel Webb will moderate a public discussion following the talk by the four inventors.

Time: 3 PM – 4:30 PM
Date: Thursday, July 13th, 2023
Location: Norton’s Woods Conference Center at the American Academy of Arts and Sciences
200 Beacon Street, Somerville, MA 02143

Although COVID-19 policies have been relaxed, Future Humans politely requests that you consider wearing a mask at this event.



Douglas Blackiston is a Senior Scientist in the Allen Discovery Center at Tufts University and a visiting scholar at the Wyss Institute at Harvard, where his research program examines the relationship between developmental events and organism-level behaviors. His work encompasses many diverse questions and models, from the ability of memory to survive metamorphosis in moths and butterflies, to the capacity of transplanted eyes to restore vision in blind vertebrates. As part of the team that created computer-designed organisms, he envisioned and developed the biological components of the work, including the techniques, protocols, and methods to bring the simulated designs to life.

Josh Bongard is a Professor of Computer Science and graduate student advisor at the University of Vermont. His research centers on evolutionary robotics, evolutionary computation, and physical simulation. He runs the Morphology, Evolution & Cognition Laboratory, whose work focuses on the role that morphology and evolution play in cognition. In 2007, he was awarded a Microsoft Research New Faculty Fellowship and was named one of MIT Technology Review’s 35 Innovators Under 35. In 2010, he received a Presidential Early Career Award for Scientists and Engineers (PECASE) from Barack Obama at a White House ceremony.

Sam Kriegman is an assistant professor of computer science, chemical and biological engineering, and mechanical engineering at Northwestern University. His research seeks general theories of life, in which the details of carbon-based organisms would represent a special case. As we have yet to invent a time machine or the means of interstellar travel, Sam and his students design, build, and breed robotic lifeforms to catch a glimpse of life as it may have arisen here on Earth or as it might exist elsewhere in the universe.

Michael Levin, a Distinguished Professor in the Department of Biology at Tufts, holds the Vannevar Bush Endowed Chair and serves as director of the Allen Discovery Center at Tufts and the Tufts Center for Regenerative and Developmental Biology. Recent honors include the Scientist of Vision award and the Distinguished Scholar Award. His group focuses on using computer science, developmental biophysics, and cognitive science to understand, engineer, and ethically relate to a wide range of embodied minds, whether natural, artificial, or hybrid. Applications of their work range across regenerative medicine, evolution, synthetic bioengineering, and AI.

Claire Isabel Webb directs the Berggruen Institute’s Future Humans program. She earned her Ph.D. from MIT’s History, Anthropology, and Science, Technology, and Society (HASTS) program in 2020. An internship at the Search for Extraterrestrial Intelligence (SETI) Institute in 2008 sparked the topic of Webb’s dissertation: Technologies of Perception: Searches for Life Beyond Earth. Informed by her ongoing work with the SETI group Breakthrough Listen at U.C. Berkeley, Webb’s book project historically and ethnographically tracks how scientists have investigated extraterrestrial life forms—both microbes and beings—since the Space Age.

composed by Arswain
machine learning consultation by Anna Tskhovrebov
commissioned by the Berggruen Institute
premiered at the Bradbury Building
downtown Los Angeles
April 22, 2022

Human perception of what sounds “beautiful” is necessarily biased and exclusive. If we are to truly expand our hearing apparatus, and thus our notion of beauty, we must not only shed preconceived sonic associations but also invite creative participation from non-human and non-living beings. We must begin to cede creative control to such beings, encouraging them to exercise their own standards of beauty and to collaborate with one another.

Movement I: Alarm Call
‘Alarm Call’ is a long-form composition and sound collage that juxtaposes, combines, and manipulates alarm calls from various human, non-human, and non-living beings. Evolutionary biologists understand the alarm call as an altruistic behavior: by warning others of danger, callers instinctively place themselves within a broader system of belonging. The piece poses the question: how might we hear better, so as to broaden and enhance our sense of belonging in the universe? Might we behave more altruistically if we better heeded the calls of – and called out to – non-human beings?

Using granular synthesis, biofeedback, and algorithmic modulation, I fold the human alarm call – the siren – into non-human alarm calls, generating novel “inter-being” sonic collaborations of increasing sophistication and complexity.

Movement II: A.I.-Truism
A synthesizer piece co-written with an AI in the style of Vangelis’s Blade Runner score, to pay homage to the space of the Bradbury Building.

Movement III: Alarmism
A machine learning model “learns” ‘A.I.-Truism’ and recreates ‘Alarm Call’, generating an original fusion of the two.

Movement IV: A.I. Call
A machine learning model “learns” ‘Alarm Call’ and recreates ‘A.I.-Truism’, generating an original fusion of the two.

RAVE (IRCAM, 2021): https://github.com/acids-ircam/RAVE