Congratulations to this year's winners of the INFORMS Optimization Society Prizes!
Khachiyan Prize: Dorit Hochbaum
Farkas Prize: Daniel Kuhn
Egon Balas Prize: Amir Ali Ahmadi
Young Researchers Paper Prize: Ernest K. Ryu
Young Researchers Paper Prize: Joseph Paat, Ingo Stallknecht, Zach Walsh, and Luze Xu
Student Paper Prize: Eitan Levin
Student Paper Prize 2nd Place: Liwei Jiang
Student Paper Prize Honorable Mention: Zikai Xiong
Student Paper Prize Honorable Mention: Sanyou Mei
Many thanks to the prize committees for their diligent work in selecting winners among the excellent nominations and submissions from our vibrant optimization community. The prizes will be presented at the Optimization Society's business meeting on October 20th, 6:30-7:30 pm, during the 2024 INFORMS Annual Meeting in Seattle, Washington. Please see the citations and prize committees below.
----------------------------------------------------------------------------
INFORMS Optimization Society 2024 Khachiyan Prize
----------------------------------------------------------------------------
Dorit Hochbaum (Department of Industrial Engineering and Operations Research, University of California, Berkeley, USA)
Citation: The 2024 Khachiyan Prize Committee recommends Professor Dorit Hochbaum as the winner of this lifetime research award. Dorit's career has focused on the design and analysis of algorithms for discrete optimization problems. Her work is characterized by a combination of mathematical elegance and intellectual incisiveness. In addition, Dorit has been an early contributor to the use of combinatorial algorithms in data mining and image segmentation. Dorit has also contributed to groundbreaking research in neuroscience, in particular, investigating patterns of in vivo neuronal activity from digital imaging. Her research spans the spectrum from foundational algorithms to practical applications in many relevant domains. She has also graduated many Ph.D. students who have gone on to make names for themselves. For this range of accomplishments, the committee is pleased to recommend that Professor Dorit Hochbaum be awarded the Khachiyan Prize for 2024.
Prize committee: Ignacio E. Grossmann, Suvrajeet Sen (chair), Gerard Cornuejols, Renato Monteiro.
----------------------------------------------------------------------
INFORMS Optimization Society 2024 Farkas Prize
----------------------------------------------------------------------
Daniel Kuhn (Chair of Risk Analytics and Optimization, EPFL, Switzerland)
Citation: The INFORMS Optimization Society 2024 Farkas Prize is awarded to Daniel Kuhn, who is a pioneer and thought leader in the highly active areas of distributionally robust and multi-stage robust optimization.
Although it has a long history, the "modern era" of distributionally robust optimization began in the early 2000s, and it has been shaped to a considerable extent by Kuhn's seminal works on moment-based as well as data-driven ambiguity sets. In particular, his paper "Distributionally Robust Convex Optimization" provides a general framework that unifies much of the prior work in the field and characterizes precisely when distributionally robust optimization problems are (not) tractable. The work has become a standard reference in the discipline, and many researchers have extended it to incorporate new types of risk measures, novel characterizations of uncertainty, and different constraint architectures. In the paper "Data-Driven Distributionally Robust Optimization Using the Wasserstein Metric: Performance Guarantees and Tractable Reformulations" (for which he received the 2020 INFORMS Frederick W. Lanchester Prize), Kuhn showed that data-driven optimization problems over Wasserstein balls are efficiently solvable.
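In schematic form (our notation, not necessarily that of the paper), the data-driven problem studied there is
\[
\min_{x \in \mathcal{X}} \ \sup_{\mathbb{Q}:\, W(\mathbb{Q}, \widehat{\mathbb{P}}_N) \le \varepsilon} \ \mathbb{E}_{\mathbb{Q}}\big[h(x, \xi)\big],
\]
where $\widehat{\mathbb{P}}_N$ is the empirical distribution of $N$ observed samples of the uncertain parameter $\xi$, $W$ denotes the Wasserstein distance, and the radius $\varepsilon$ controls the size of the ambiguity set.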
Multi-stage robust optimization seeks optimal decisions when information about the uncertain problem parameters is acquired sequentially, and today's decisions need to account for the fact that future decisions benefit from information that is unavailable today. In his seminal paper "Primal and Dual Linear Decision Rules in Stochastic and Robust Optimization", Kuhn offered one of the first theoretical analyses of the performance of affine decision rules, a popular approximation scheme dating back to the 1960s, thus answering a question first raised by S. Garstka and R. Wets in 1974 and subsequently reiterated by A. Ben-Tal, L. El Ghaoui and A. Nemirovski in their 2009 book Robust Optimization. In particular, Kuhn demonstrated that the (typically suboptimal) affine decision rule approximation can be applied not only to the multi-stage robust optimization problem at hand but also to its dual, thus offering instance-specific optimality bounds that are often remarkably tight. These bounds have subsequently been used by researchers in domains such as finance, energy, and operations management. Later, Kuhn pioneered similar performance guarantees for robust optimization problems with discrete decisions (where duality-based schemes fail due to non-convexity) as well as for Markov decision processes with uncertain transition kernels.
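Schematically (again in our notation), an affine decision rule restricts the stage-$t$ recourse decision to depend affinely on the uncertainty observed so far:
\[
y_t(\xi) = y_t^0 + Y_t\, \xi_{[t]},
\]
where $\xi_{[t]}$ collects the uncertain parameters revealed up to stage $t$ and the coefficients $y_t^0, Y_t$ become the decision variables. Imposing the same restriction on the dual problem yields a bound from the other side, sandwiching the true optimal value and producing the instance-specific optimality gaps mentioned above.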
Kuhn is the current Editor-in-Chief of Mathematical Programming. He was named an IFORS Distinguished Lecturer in 2023 and has delivered many plenary talks at major conferences. Kuhn was elected an INFORMS Fellow in 2022, and he won the Friedrich Wilhelm Bessel Research Award in 2021.
Prize committee: Katya Scheinberg, Jon Lee (chair), Garud Iyengar, Volker Kaibel
-----------------------------------------------------------------------------
INFORMS Optimization Society 2024 Egon Balas Prize
-----------------------------------------------------------------------------
Amir Ali Ahmadi (Department of Operations Research & Financial Engineering, Princeton University, USA)
Citation: The INFORMS Optimization Society 2024 Egon Balas Prize is awarded to Amir Ali Ahmadi for his contributions in the area of optimization. Ahmadi earned his PhD in Electrical Engineering and Computer Science from MIT. After postdoctoral work at CSAIL and LIDS at MIT and as a Herman Goldstine Fellow at the IBM Watson Research Center, Dr. Ahmadi joined Princeton in 2014, where he is now a full professor.
Dr. Ahmadi has made deep and broad contributions to mathematical optimization with many connections to other fields, including mathematics, control theory, and robotics. He has made significant advances in the study of optimization problems defined by polynomial functions, including the complexity of finding local minima and of detecting convexity of polynomials. Other contributions include an algebraic characterization of perfect graphs, linear and second-order cone programming approaches to certify nonnegativity of polynomials, higher-order Newton methods for nonlinear optimization, and the construction of Lyapunov functions for switched systems in control theory. Dr. Ahmadi's contributions include the resolution of several long-standing open problems.
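To illustrate the nonnegativity-certification line of work in standard notation (a rough sketch, not taken from any single paper): a polynomial $p$ admits a sum-of-squares certificate if
\[
p(x) = z(x)^{\top} Q\, z(x), \qquad Q \succeq 0,
\]
for a vector of monomials $z(x)$, which is checkable by semidefinite programming; restricting $Q$ to be diagonally dominant (dsos) or scaled diagonally dominant (sdsos) instead yields weaker but far cheaper certificates checkable by linear and second-order cone programming, respectively.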
In recognition of his achievements, Dr. Ahmadi has received the Presidential Early Career Award for Scientists and Engineers as well as career awards from NSF, AFOSR and DARPA, a Sloan Fellowship in Computer Science, the INFORMS Optimization Society Young Researchers Prize, and the INFORMS Computing Society Prize.
The breadth of Dr. Ahmadi's contributions, their mathematical depth, and their overall impact on the field are impressive and richly deserving of the Egon Balas Prize.
Prize Committee: Amitabh Basu, Jim Luedtke (chair), Ted Ralphs, John Mitchell, Hande Benson
--------------------------------------------------------------------------------------------------
INFORMS Optimization Society 2024 Young Researchers Paper Prize
--------------------------------------------------------------------------------------------------
The committee has decided to give two awards this year.
1.
Ernest K. Ryu (Department of Mathematics, University of California, Los Angeles, USA)
for the paper entitled "Uniqueness of DRS as the 2 operator resolvent-splitting and impossibility of 3 operator resolvent-splitting," Mathematical Programming, 182(1): 233-273, 2020.
Citation: Douglas-Rachford splitting (DRS) is a highly influential operator splitting method; it was first used in numerical PDEs in 1956 and later formalized by Lions and Mercier in 1979 to solve maximal monotone inclusion problems. Two pivotal questions regarding DRS had remained open for forty years: (i) Is DRS the only choice for maximal monotone inclusion problems involving two operators? (ii) Can DRS be extended to maximal monotone inclusion problems involving three operators? The cited paper successfully addressed both questions with highly innovative techniques. First, it rigorously defined a notion of admissibility for splitting schemes, comprising "frugality" and "no lifting". Second, it proved that (i) DRS is the only admissible scheme for maximal monotone inclusion problems with two operators, and (ii) there is no admissible scheme for maximal monotone inclusion problems with three operators. Moreover, the cited paper proposed a new operator splitting algorithm with exciting prospects for problems involving an arbitrary number of operators. In summary, the cited paper resolved longstanding open questions in the field, and its highly skillful proof techniques, as well as the notions of "frugality" and "no lifting", have generated a series of exciting follow-up works by other researchers.
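For readers outside the area, a standard form of the DRS iteration for finding $x$ with $0 \in (A+B)(x)$, stated in common textbook notation rather than the paper's, is
\[
x^{k+1/2} = J_{\alpha A}(z^k), \qquad x^{k+1} = J_{\alpha B}\big(2x^{k+1/2} - z^k\big), \qquad z^{k+1} = z^k + x^{k+1} - x^{k+1/2},
\]
where $J_{\alpha A} = (I + \alpha A)^{-1}$ denotes the resolvent of $\alpha A$. Roughly, frugality requires that each resolvent be evaluated only once per iteration, and no lifting requires that the iteration carry no extra state beyond the single variable $z$.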
2.
Joseph Paat (Sauder School of Business, University of British Columbia, Canada)
Ingo Stallknecht (Isha Foundation, India)
Zach Walsh (Department of Mathematics and Statistics, Auburn University, USA)
Luze Xu (Department of Mathematics, University of California, Davis, USA)
for their joint paper "On the column number and forbidden submatrices for $\Delta$-modular matrices," SIAM Journal on Discrete Mathematics, Vol. 38, No. 1, pp. 1-18, 2024.
Citation: A major open question in integer programming that has attracted a lot of interest recently is whether integer programs defined over $\Delta$-modular matrices can be solved in polynomial time, assuming $\Delta$ is fixed. A matrix is $\Delta$-modular if every full-rank square submatrix has the absolute value of its determinant bounded above by a positive integer $\Delta$. It has long been known that this question has an affirmative answer when $\Delta = 1$, and more recently a polynomial-time algorithm was obtained for $\Delta = 2$. The question remains open for $\Delta \geq 3$. Since any answer to this question implicitly builds upon difficult results such as Seymour's characterization of totally unimodular matrices and needs to combine results from matroid theory and polyhedral combinatorics, the question is expected to be a difficult one. Paat et al. provide elegant new results that advance the study of $\Delta$-modular matrices using techniques from multiple areas of discrete optimization. They not only improve upon earlier results by giving a tight bound on the number of columns a $\Delta$-modular matrix can have, but also exhibit families of matrices that form forbidden substructures in $\Delta$-modular matrices. We believe that their results will shed new light on the above-mentioned open question.
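As a small illustrative example of the definition (ours, not the paper's): the matrix
\[
A = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad \det(A) = 2,
\]
is $\Delta$-modular for $\Delta = 2$ but not for $\Delta = 1$; the case $\Delta = 1$ corresponds to unimodular matrices, for which integer programming has long been known to be polynomially solvable.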
Prize Committee: Merve Bodur, Sanjeeb Dash, Shiqian Ma, Andy Sun (chair)
---------------------------------------------------------------------------------
INFORMS Optimization Society 2024 Student Paper Prize
---------------------------------------------------------------------------------
Eitan Levin (Applied and Computational Mathematics, California Institute of Technology, USA)
for the paper titled "The effect of smooth parametrizations on nonconvex optimization landscapes," jointly authored with Joe Kileel and Nicolas Boumal.
Citation: This paper introduces a novel framework for analyzing relationships between optimization landscapes in nonconvex settings by utilizing smooth parametrizations, or "lifts," that map one optimization problem onto another. The authors systematically investigate how desirable properties, such as local minima and critical points in one problem, are preserved when mapped to another problem through these lifts. This framework is applied to a wide range of optimization problems, including low-rank matrix and tensor optimization, semidefinite programming, and training neural networks, revealing new insights into the structure and behavior of these problems. A key contribution is the identification of conditions under which lifts preserve various optimization properties, such as ensuring that local minima and stationary points in one problem correspond to similar points in the mapped problem. The authors demonstrate that these results generalize and unify many previously known results, providing a more comprehensive understanding of how optimization landscapes behave under parametrization.
Additionally, the paper explores the construction of lifts using fiber products and presents a detailed analysis of specific lifts, including the Burer-Monteiro lift for smooth semidefinite programs and lifts for low-rank matrices and tensors. The work sheds light on critical concepts in nonconvex optimization, such as benign nonconvexity, the strict saddle property, and hidden convexity, offering a reusable proof technique applicable across a broad range of problems. This paper stands out for its conceptual clarity and technical depth, addressing fundamental questions that had not previously been formulated in the optimization community. The framework developed has the potential to inspire significant follow-up research, both theoretical and algorithmic, making it a deserving candidate for this prize.
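In rough terms (our notation, not necessarily the paper's), a lift relates a problem $\min_{x \in \mathcal{X}} f(x)$ to a parametrized problem $\min_{y \in \mathcal{M}} f(\varphi(y))$ via a smooth map $\varphi : \mathcal{M} \to \mathcal{X}$. The Burer-Monteiro lift is the prototypical example:
\[
\min_{X \succeq 0,\ \operatorname{rank}(X) \le r} f(X) \quad \text{becomes} \quad \min_{Y \in \mathbb{R}^{n \times r}} f\big(YY^{\top}\big) \quad \text{via} \quad \varphi(Y) = YY^{\top},
\]
and the framework characterizes when desirable points (e.g., local minima or stationary points) of the parametrized problem map to desirable points of the original one.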
2nd Place:
Liwei Jiang (University of Washington) for the paper "Asymptotic Normality and Optimality in Nonsmooth Stochastic Approximation" jointly authored with Damek Davis and Dmitriy Drusvyatskiy
Honorable Mention:
Zikai Xiong (Massachusetts Institute of Technology) for the paper "Computational Guarantees for Restarted PDHG for LP Based on 'Limiting Error Ratios' and LP Sharpness" jointly authored with Robert M. Freund
Honorable Mention:
Sanyou Mei (University of Minnesota) for the paper "Accelerated First-Order Methods for Convex Optimization with Locally Lipschitz Continuous Gradient" jointly authored with Zhaosong Lu
Prize committee: Necdet Serhat Aybat, Albert Berahas, Georgina Hall, Robert Hildebrand (chair), Aida Khajavirad, Weijun Xie
-------------------------------------
Oktay Gunluk
Georgia Tech, Atlanta
-------------------------------------