Hava Siegelmann

Defense Advanced Research Projects Agency (DARPA) | Microsystems Technology Office (MTO)

Lecture Information:
  • December 15, 2017
  • 1:00 PM
  • ECS 241

Speaker Bio

Dr. Hava Siegelmann joined DARPA in July 2016 with the goal of developing programs that advance intelligence in computerized devices, focusing on lifelong learning, context-aware adaptivity, and user-centered applications. Prior to joining DARPA, Dr. Siegelmann directed the Biologically Inspired Neural and Dynamical Systems (BINDS) Laboratory at the University of Massachusetts Amherst, from which she is on leave. While at the university, she also served as a core member of the Neuroscience and Behavior Program.

Dr. Siegelmann’s mathematical and computational studies of the brain, somatic cells, cognition, and intelligence depend on a multidisciplinary approach that combines complexity science, information and learning theories, computational simulations, biology, and neural networks. A unifying theme of her work has been the study of time-dependent, adaptive, complex dynamical systems. One of her research goals is to investigate how an underlying architecture gives rise to the dynamics that evolve into intelligent behavior, and how behavioral feedback from those dynamics in turn drives adaptation of the architecture.

Her research accomplishments include advancing the understanding of biologically inspired computational systems, among them neural systems and the genetic networks of organisms. Dr. Siegelmann co-originated Support Vector Clustering, which has become one of the most widely used clustering algorithms in industry. She also created a subfield of computation with her discovery of Super-Turing computation theory, which continues to spawn innovations in both computational methods and the interpretation of cognitive, biological, and physical processes.

Dr. Siegelmann has over 150 publications, including over 60 peer-reviewed articles, over 20 book chapters, and over 40 proceedings papers. She is the author of Neural Networks and Analog Computation: Beyond the Turing Limit (Birkhäuser, 1998). She has also given nearly 200 invited lectures and served on various editorial boards, including those of Frontiers in Computational Neuroscience, Neural Networks, Chaos, and Scholarpedia.

All of Dr. Siegelmann’s degrees are in Computer Science: a Ph.D. from Rutgers University in New Jersey, an M.S. from The Hebrew University in Israel, and a B.A. from the Technion in Israel. Her academic distinctions include fellowships from the Center for Complexity Systems, the Alon Fellowship of Excellence of the Israeli National Committee for Higher Education, and the Rutgers Doctoral Fellowship of Excellence. She is the 2016 recipient of the Hebb Award of the International Neural Network Society.


Abstract

Lifelong Learning encompasses computational methods that allow systems to learn at runtime and to apply what they learn in new, unanticipated situations. Because this sort of computation is found almost exclusively in nature, Lifelong Learning looks to nature for its underlying principles and mechanisms. This talk will discuss several computational concepts found in nature, including Super-Turing computation, stochastic and asynchronous communication, interactive computation, and Lifelong Learning computation. While seemingly different, these varied computational attributes are in fact computationally equivalent, implying an underlying basis for computational learning. Lifelong Learning computation is the most practical way to reach the superior computational capabilities set out in Super-Turing theory, and it is the basis of DARPA’s new Lifelong Learning Machines (L2M) program. The program will combine studies of natural systems with the creation of Lifelong Learning computational structures, networks, and programs that adapt during operation. L2M systems are anticipated to be more context-aware, less surprised by unexpected changes, and far more resilient and robust in real environments.