Navid NaderiAlizadeh

Assistant Research Professor of Biostatistics & Bioinformatics, Duke University

Lecture Information:
  • December 15, 2023
  • 2:00 PM
  • Zoom

Speaker Bio

Navid NaderiAlizadeh is an Assistant Research Professor in the Department of Biostatistics & Bioinformatics at Duke University. Prior to that, he was a Postdoctoral Researcher in the Department of Electrical and Systems Engineering at the University of Pennsylvania. Navid’s current research interests span foundations of machine learning, artificial intelligence, and signal processing, and their applications in developing novel methods for analyzing biological data. Navid received the B.S. degree in electrical engineering from Sharif University of Technology, Tehran, Iran, in 2011, the M.S. degree in electrical and computer engineering from Cornell University, Ithaca, NY, USA, in 2014, and the Ph.D. degree in electrical engineering from the University of Southern California, Los Angeles, CA, USA, in 2016. Upon graduating with his Ph.D., Navid spent four years as a Research Scientist at Intel Labs and HRL Laboratories.

Abstract

For learning-based solutions to be deployed in real-world systems, they generally need to guarantee the satisfaction of specific requirements or constraints. In this presentation, I discuss the fundamentals and implications of constrained learning in three settings: optimizing the performance of wireless networks, driving sample selection in active learning scenarios, and enabling algorithm unrolling in federated learning. For wireless networking, I present methods at the intersection of constrained optimization and graph representation learning, which enable autonomous wireless network management solutions that provide performance guarantees and handle the irregular data structure of large-scale wireless networks. For active learning, I demonstrate how a constrained learning formulation enables the selection of a diverse and informative set of unlabeled samples as the query set via a Lagrangian duality approach. In addition, for federated learning, I discuss how imposing descent constraints can enable unrolling of the distributed gradient descent algorithm, leading to communication-efficient decentralized training of agents’ model parameters.
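The common thread across the three settings above is constrained learning solved via Lagrangian duality: alternate gradient descent on the primal variables with projected gradient ascent on the Lagrange multipliers. The following is a minimal sketch of that primal-dual loop on a toy convex problem; the quadratic objective, linear constraint, and step sizes are illustrative assumptions, not details from the talk.

```python
import numpy as np

# Toy problem (illustrative only): minimize f(x) = ||x - c||^2
# subject to g(x) = x_1 + x_2 - 1 <= 0.
c = np.array([1.0, 1.0])

def f_grad(x):
    return 2.0 * (x - c)

def g(x):
    return np.sum(x) - 1.0

def g_grad(x):
    return np.ones_like(x)

x = np.zeros(2)        # primal variable (stand-in for model parameters)
lam = 0.0              # Lagrange multiplier for the constraint
eta_x, eta_lam = 0.05, 0.05

for _ in range(2000):
    # Primal step: descend the Lagrangian L(x, lam) = f(x) + lam * g(x) in x.
    x = x - eta_x * (f_grad(x) + lam * g_grad(x))
    # Dual step: ascend L in lam, projecting onto lam >= 0 to stay dual-feasible.
    lam = max(0.0, lam + eta_lam * g(x))

# For this convex problem the iterates approach the KKT point
# x = (0.5, 0.5) with multiplier lam = 1, and g(x) approaches 0.
print(x, lam, g(x))
```

The projection `max(0.0, ...)` keeps the multiplier nonnegative, and at convergence its value indicates how strongly the constraint binds. In the settings the talk describes, the primal variable would instead be the parameters of a learning model and the constraints would encode performance requirements (e.g., per-user rates in a wireless network or descent conditions in unrolled federated optimization).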