Abstract: Nonconvex optimization algorithms play a major role in solving statistical estimation and learning problems. Indeed, simple nonconvex heuristics, such as the stochastic gradient method, often provide satisfactory solutions in practice, despite such problems being NP-hard in the worst case. Key examples include deep neural network training and signal estimation from nonlinear measurements. While practical success stories are common, strong theoretical guarantees are rarer. The purpose of this talk is to overview a few (highly non-exhaustive!) settings where rigorous performance guarantees can be established for nonconvex optimization, focusing on the interplay of algorithm dynamics, problem conditioning, and nonsmoothness.
Bio: Damek Davis received his Ph.D. in mathematics from the University of California, Los Angeles in 2015. In July 2016 he joined Cornell University's School of Operations Research and Information Engineering as an Assistant Professor. Damek is broadly interested in the mathematics of data science, particularly the interplay of optimization, signal processing, statistics, and machine learning. He is the recipient of several awards, including the INFORMS Optimization Society Young Researchers Prize (2019) and a Sloan Research Fellowship in Mathematics (2020).