Seminars
January 14, 2025: (4PM)- Ned Wingreen, Princeton University
Capillary Attraction Underlies Bacterial Collective Dynamics. “Water Is The Driving Force Of All Nature.” — Leonardo da Vinci.
Host: Eric Siggia
Collective motion of active matter occurs in many living systems, such as bacterial communities, epithelial cell populations, bird flocks, and fish schools. A remarkable example can be found in the soil-dwelling bacterium Myxococcus xanthus. Key to the life cycle of M. xanthus cells is the formation of collective groups: they feed on prey in swarms and aggregate upon starvation. However, the physical mechanisms that keep M. xanthus cells together remain unclear. I’ll present a computational model to explore the role that capillary forces play in bacterial collective dynamics. The modeling results, combined with experiments, show that water menisci forming around bacteria mediate strong capillary attraction between cells. The model accounts for a variety of previously observed phases of collective dynamics as the result of a competition between cell-cell capillary attraction and cell motility. Finally, I’ll discuss the large-scale self-organization of bacterial populations and highlight the importance of capillary forces in this process. Together, these results suggest that cell-cell capillary attraction provides a generic mechanism underpinning bacterial collective dynamics.
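As a rough intuition aid (not the speaker's model), the competition between attraction and motility described above can be caricatured by a self-propelled-particle simulation in which a short-range pairwise attraction stands in for the capillary force; all parameter values below are invented for illustration.

```python
# Minimal sketch: self-propelled "cells" in 2D with short-range attraction
# standing in for capillary forces. The key control knob is the ratio of
# attraction strength to motility, echoing the competition in the abstract.
import numpy as np

rng = np.random.default_rng(0)
N, L, steps, dt = 100, 20.0, 2000, 0.05
v0 = 1.0      # self-propulsion speed (motility)
eps = 2.0     # attraction strength (capillary-like); illustrative value
r_cut = 2.0   # interaction range of the "meniscus"; illustrative value

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(0, 2 * np.pi, N)   # heading of each cell

for _ in range(steps):
    # pairwise displacements with periodic boundaries (minimal image)
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    r = np.linalg.norm(d, axis=-1) + np.eye(N)  # avoid divide-by-zero on diagonal
    # short-range attraction: unit-vector pull toward neighbors within r_cut
    mask = (r < r_cut) & ~np.eye(N, dtype=bool)
    f = -eps * np.where(mask[..., None], d / r[..., None], 0.0).sum(axis=1)
    # overdamped update: self-propulsion + attraction + rotational noise
    vel = v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1) + f
    pos = (pos + vel * dt) % L
    theta += 0.5 * np.sqrt(dt) * rng.standard_normal(N)

d = pos[:, None, :] - pos[None, :, :]
d -= L * np.round(d / L)
r = np.linalg.norm(d, axis=-1) + L * np.eye(N)  # push self-distance out of range
print("mean nearest-neighbor distance:", np.sort(r, axis=1)[:, 0].mean())
```

Raising eps relative to v0 in this toy drives the system from a dispersed gas of cells toward dense clusters, the kind of phase competition the abstract refers to.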
January 28, 2025: (4PM)- Jason Kim, Cornell University
Generating Interpretable, Reliable, And Quantitative Models Of Emergent Behavior From High-Dimensional Data.
Host: Eric Siggia
Natural systems with emergent behaviors often organize along nonlinear low-dimensional subsets of high-dimensional spaces. For example, despite the tens of thousands of genes in the human genome, the principled study of genomics is fruitful because biological processes rely on coordinated organization along lower-dimensional subspaces of phenotypes. To uncover this organization, many dimensionality reduction techniques embed high-dimensional data in low-dimensional spaces by modeling local relationships between data points. However, these methods fail to directly model the subspaces in which the data reside, thereby limiting their ability to infer the biological processes that globally organize the data and to generalize out of distribution. Here, we address this limitation by directly learning a nonlinear subspace that is well-behaved not only in regions where there are data but also in regions where there are none, by regularizing the curvature of manifolds generated by autoencoders, a method we call the “Γ-autoencoder.” We demonstrate its utility in a wide range of datasets, including bulk RNA-seq from healthy and cancer tissues, single-cell RNA-seq from cell differentiation, and neural activity from the mouse hippocampus. We discover the global biological programs that emerge as relevant variables, demonstrate superior predictions on data from completely unseen out-of-distribution classes, and consistently learn the same nonlinear subspaces across different random initializations. Broadly, we anticipate that direct modeling of the low-dimensional subspaces that generate and organize data through regularizing the curvature of generative models will enable more interpretable, generalizable, and consistent models in any high-dimensional system with emergent low-dimensional behavior.
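Since the abstract describes the method only at a high level, a minimal sketch of the general idea may help: an autoencoder trained with an added penalty on the curvature of its decoded manifold. This is not the published Γ-autoencoder; the architecture, the finite-difference curvature proxy, and the weight gamma are all illustrative assumptions.

```python
# Sketch of a curvature-regularized autoencoder. Curvature is approximated by
# a finite-difference second derivative of the decoder along random latent
# directions; this proxy vanishes exactly when the decoded manifold is flat.
import torch
import torch.nn as nn

d_in, d_lat, gamma, delta = 50, 2, 1.0, 0.1   # all values illustrative

enc = nn.Sequential(nn.Linear(d_in, 64), nn.Tanh(), nn.Linear(64, d_lat))
dec = nn.Sequential(nn.Linear(d_lat, 64), nn.Tanh(), nn.Linear(64, d_in))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.randn(512, d_in)  # placeholder data; real use: an expression matrix

for step in range(200):
    z = enc(x)
    recon = ((dec(z) - x) ** 2).mean()
    # curvature proxy: ||D(z+δu) - 2 D(z) + D(z-δu)||² for a random direction u
    u = torch.randn_like(z)
    u = delta * u / u.norm(dim=1, keepdim=True)
    curv = ((dec(z + u) - 2 * dec(z) + dec(z - u)) ** 2).mean()
    loss = recon + gamma * curv
    opt.zero_grad(); loss.backward(); opt.step()

print(f"reconstruction {recon.item():.4f}, curvature penalty {curv.item():.6f}")
```

The design intent, as the abstract explains, is that penalizing curvature keeps the learned manifold well-behaved even in regions with no data, which is what standard autoencoders lack.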
February 4, 2025: (4PM)- Yoav Soen, Weizmann Institute of Science
Reorganizations In Complex Systems – Adaptation By Natural Improvisation.
Host: Orli Snir
The traditional view of adaptation focuses on the selection of adaptive variations without regard to how these variations come about in the first place (no consideration of emergence). And yet, every single animal constantly undergoes newly forming variations in its epigenome, microbiome, and even its somatic genome. Many of these variations appear in novel combinations that are unique to the individual. Since every new variation is potentially harmful, it is not clear how an individual can tolerate the large numbers of novel variations that form during its lifetime. We have previously hypothesized that every individual acquires new adaptations by undergoing stochastic variations under (existing) mechanistic constraints that suppress the likelihood of undergoing non-viable changes (i.e., reaching non-viable states). Our group is testing this hypothesis using experimental models of coping with severe stress conditions that mimic unforeseen challenges. Experimental work in progress provides substantial evidence in support of emergent adaptation by constrained exploration (“improvisation”) during the lifetime of individual flies, as well as during the generation time of individual cells in culture. The feasibility of emergent adaptation of this kind is further supported by theoretical models of coping with “unforeseen challenges” presented to specific classes of complex systems. I will describe the conceptual problem of emergent adaptation and its hypothesized solution, present the experimental findings, and discuss the implications for our view of evolution. If time permits, I will also present and discuss theoretical work in progress on emergent adaptation.
February 18, 2025: (4PM)- Gautam Reddy, Princeton University (Location: Smith Hall Annex, A-Level Physics Seminar Room)
Learning Spatial And Temporal Structure In Novel Environments.
Host: Eric Siggia
Learning involves forming associations between events that are separated in space and time. Classical theories of reinforcement learning (RL) explain many aspects of animal learning, but certain important puzzles remain unresolved. I will present two stories involving learning phenomena that are in apparent contradiction with established RL theory: (1) ‘a-ha’ moments while rodents learn to navigate maze-like environments, and (2) how animals measure the passage of time during classical conditioning.
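As context for the contrast the abstract draws, the classical RL picture predicts gradual, incremental improvement. A minimal temporal-difference (TD(0)) learner on a toy linear track, with invented parameters, shows the smooth learning curve against which sudden ‘a-ha’ moments stand out.

```python
# Classical RL baseline: TD(0) value learning on a linear track. Included only
# to show the smooth, incremental learning that established RL theory predicts;
# the environment and all parameters are invented for illustration.
import numpy as np

n_states, alpha, gamma_disc, episodes = 10, 0.1, 0.9, 200
V = np.zeros(n_states)
errors = []

for ep in range(episodes):
    s, total_err = 0, 0.0
    while s < n_states - 1:
        s_next = s + 1                              # deterministic walk to the goal
        r = 1.0 if s_next == n_states - 1 else 0.0  # reward only at the goal
        td_err = r + gamma_disc * V[s_next] - V[s]  # temporal-difference error
        V[s] += alpha * td_err
        total_err += abs(td_err)
        s = s_next
    errors.append(total_err)

# The TD error decays gradually, episode by episode: no sudden "a-ha" step.
print("episode 1 vs 200 total |TD error|:", round(errors[0], 3), round(errors[-1], 3))
```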
February 25, 2025: (4PM)- Marcella Noorman, Howard Hughes Medical Institute (Janelia)
Maintaining And Updating Accurate Internal Representations Of Continuous Variables With A Handful Of Neurons.
Host: Nikolas Schonsheck
Many animals rely on persistent internal representations of continuous angular variables for working memory, motor control, and navigation. Theories have proposed that such representations are maintained by a class of recurrently connected networks called ring attractor networks. These networks rely on large numbers of neurons to maintain continuous and stable representations and to accurately integrate incoming signals. The head direction system of the fruit fly, however, seems to achieve these properties with a remarkably small network. These findings challenge our understanding of ring attractors and their putative implementation in neural circuits. In this talk, I will show analytically how small networks can overcome the constraints of their size to generate a ring attractor and are hence capable of stably maintaining an internal representation of a continuous, periodic variable. Further, I will show how ring attractors emerge in small threshold linear networks through the coordination of a discrete set of line attractors. More broadly, this work informs our understanding of the functional capabilities of small, discrete systems.
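A minimal numerical sketch of the standard setting may be useful orientation (this is not the speaker's analysis; the connectivity and parameters are invented): a small threshold-linear ring network whose activity bump persists after the input cue is removed.

```python
# Small threshold-linear ring network: N = 8 neurons with cosine connectivity
# (broad inhibition J0, local excitation J1) and a tonic drive b. A bump of
# activity forms at the cued heading and persists after the cue is removed,
# the signature of a ring attractor maintaining a stored angular variable.
import numpy as np

N, tau, dt = 8, 1.0, 0.01
th = 2 * np.pi * np.arange(N) / N
J0, J1, b = -0.5, 3.0, 0.1                 # illustrative values
W = (J0 + J1 * np.cos(th[:, None] - th[None, :])) / N
relu = lambda v: np.maximum(v, 0.0)

x = np.zeros(N)
cue = 0.5 * relu(np.cos(th - np.pi / 3))   # transient input centered at 60 degrees

for t in range(5000):
    inp = cue if t < 1000 else 0.0         # cue removed after 10 time constants
    x += (dt / tau) * (-x + relu(W @ x + inp + b))

# Population-vector readout of the remembered heading. With so few neurons the
# bump may settle at the nearest network-preferred direction, a finite-size
# effect of exactly the kind the talk analyzes.
decoded = np.degrees(np.angle(np.sum(x * np.exp(1j * th))))
print(f"bump persists after cue removal; decoded heading ~ {decoded:.1f} deg (cue at 60 deg)")
```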
March 4, 2025: (4PM)- Hava Siegelmann, University of Massachusetts, Amherst
AI For Autonomous Agents: Sequence AI And Peer Cooperative Lifelong Learning.
Host: Marcelo Magnasco
Why are drones still mainly human-controlled, with such limited autonomy? First, drones operate under significant constraints, including limited computational power, energy capacity, and communication bandwidth. Reinforcement learning fails to maintain optimal performance under such constraints. We propose sequence AI algorithms that significantly improve compute and energy efficiency. Key features include rapid onboard responses, adaptability to dynamic environmental changes, robustness to missing inputs, minimization of sensor usage and the ability to use cheaper sensors to greater effect, as well as the ability to use cheaper hardware while maintaining peak effectiveness. The second issue is the need for communication and cooperation among drones. Distributed AI is known to suffer from exploding communication requirements, which realistic swarms of drones cannot support. We propose a cooperative AI in which the agents are lifelong learners. On the go, they are able to update, learn from failures, and become more expert with experience. This paradigm enables collaborative AI without explosive communication as well as a great reduction in the required labeled data (the teacher), since the agents peer-teach one another. We suggest that these two directions of research will advance us toward true, safe autonomy.
March 11, 2025: (4PM)- Xaq Pitkow, Carnegie Mellon University
Principles For Control When Computation Is Costly.
Host: Nikolas Schonsheck
Thinking is hard. Sometimes it seems better just to hack a solution than to plan it carefully. Here we develop this idea quantitatively, defining a version of stochastic control that accounts for computational costs of inference. We apply this to Linear Quadratic Gaussian (LQG) control with an added internal cost on information. This creates a trade-off: an agent can obtain more utility overall by sacrificing some task performance, if doing so saves enough mental effort during inference. We discover that the rational strategy that solves the joint inference and control problem goes through phase transitions depending on the task demands, switching from a costly but optimal inference to a family of suboptimal inferences, each interpretable as misestimating the structure of the world. In all cases, the agent moves more to think less. This work provides a foundation for a new type of rational computation that could be used by both brains and machines under strong energy constraints.
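For orientation, the classical baseline the talk builds on is standard LQG control, in which inference (Kalman filtering) is free. The sketch below implements only that baseline, with illustrative system matrices; the talk's added internal information cost and the resulting phase transitions are not reproduced here.

```python
# Standard discrete-time LQG baseline: LQR gains from a Riccati iteration plus
# a Kalman filter, with no charge for inference. All matrices are illustrative.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])       # position-velocity dynamics
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])                   # observe position only
Q, R = np.eye(2), np.array([[0.1]])          # state and control costs
W, V = 0.01 * np.eye(2), np.array([[0.05]])  # process and observation noise

# LQR gain via discrete Riccati iteration
P = Q.copy()
for _ in range(500):
    P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed loop: act on the filtered estimate (certainty equivalence)
rng = np.random.default_rng(0)
x, xhat, S = np.array([1.0, 0.0]), np.zeros(2), np.eye(2)
cost = 0.0
for _ in range(200):
    u = -K @ xhat
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), W)
    y = C @ x + rng.normal(0, np.sqrt(V[0, 0]), 1)
    # Kalman filter: predict, then update
    xhat, S = A @ xhat + B @ u, A @ S @ A.T + W
    G = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)
    xhat, S = xhat + G @ (y - C @ xhat), (np.eye(2) - G @ C) @ S
    cost += x @ Q @ x + u @ R @ u

print(f"average LQG cost per step: {cost / 200:.3f}")
```

The talk's framing, as described above, adds an internal cost on the information processed by this filter, so that degrading the inference can become rational when it saves enough mental effort.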
March 18, 2025: (4PM)
To Be Announced.
Host: TBD
To come.
March 25, 2025: (4PM)- Nathan Lord, University of Pittsburgh (Location: Smith Hall Annex, A-Level Physics Seminar Room)
To Be Announced.
Host: Amy Shyer/Alan Rodrigues
To come.
April 1, 2025: (4PM)- Carina Curto, Brown University
To Be Announced.
Host: Nikolas Schonsheck
To come.
April 8, 2025: (4PM)- Edouard Hannezo, Institute of Science and Technology Austria
To Be Announced.
Host: Amy Shyer/Alan Rodrigues
To come.
April 15, 2025: (4PM)- Brent Doiron, University of Chicago
To Be Announced.
Host: Nikolas Schonsheck
To come.
April 22, 2025: (4PM)- Max Wilson, University of California, Santa Barbara
To Be Announced.
Host: Amy Shyer/Alan Rodrigues
To come.
April 29, 2025: (4PM)- Elias Barriga, Technical University Dresden
To Be Announced.
Host: Amy Shyer/Alan Rodrigues
To come.
May 6, 2025: (4PM)
To Be Announced.
Host: TBD
To come.
September 16, 2025: (4PM)
To Be Announced.
Host: TBD
To come.
September 23, 2025: (4PM)
To Be Announced.
Host: TBD
To come.
September 30, 2025: (4PM)- Suckjoon Jun, University of California, San Diego
To Be Announced.
Host: Avi Flamholz
To come.
October 14, 2025: (4PM)
To Be Announced.
Host: TBD
To come.
October 28, 2025: (4PM)- Frederick A. Matsen, Fred Hutchinson Cancer Research Center
To Be Announced.
Host: Gabriel Victora
To come.
November 4, 2025: (4PM)
To Be Announced.
Host: TBD
To come.
November 11, 2025: (4PM)
To Be Announced.
Host: TBD
To come.
November 18, 2025: (4PM)
To Be Announced.
Host: TBD
To come.
December 2, 2025: (4PM)
To Be Announced.
Host: TBD
To come.
December 9, 2025: (4PM)
To Be Announced.
Host: TBD
To come.
December 16, 2025: (4PM)
To Be Announced.
Host: TBD
To come.