INFORMS Open Forum

2021 INFORMS Optimization Society Prize Winners


    Posted 08-18-2021 17:59

    Dear colleagues,

    Please join us in congratulating the winners of the 2021 INFORMS Optimization Society prizes.

    -------------------------------------------------------------------       
    2021 INFORMS Optimization Society Khachiyan Prize
    -------------------------------------------------------------------

    Winner: Boris Polyak (Institute for Control Science, Russian Academy of Sciences)

    Citation: Boris Polyak occupies a unique place in both Optimization and Automatic Control. Over the last 60 years he has made fundamental contributions that have left a lasting impact on both fields.

    In the early 1960s, when the field of Optimization was born, Boris's work on the gradient, gradient projection, heavy ball, and conjugate gradient methods, as well as on Newton's method for unconstrained minimization and for minimization with nonlinear equality constraints, made a decisive impact on the young field and secured his leading role in Optimization. Furthermore, important results in non-smooth optimization and methods for solving ill-posed, large-scale linear and quadratic programming problems were just a few more of Boris's contributions in that decade, an incredible one for Optimization. In the 1960s he not only developed and analyzed several methods for constrained and unconstrained optimization, but also established new standards of exposition in which, alongside the existence and convergence properties of an algorithm, an analysis of its rate of convergence became a must. It should be stressed that the theory of first-order gradient-type methods for large-scale convex optimization, with its numerous extensions and its wide spectrum of current applications to Signal Processing and Machine Learning, originates in the pioneering papers of N. Shor and B. Polyak. To give a single example: in 1962 he discovered and analyzed the heavy ball method, a precursor of the fast gradient algorithms widely used in today's Machine Learning and the first implementation of a "gradient method with momentum," now the workhorse of deep neural network optimization. It is amazing that now, almost 60 years later, Boris continues to make important new contributions to the field of Optimization.
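
    For readers less familiar with the method, the heavy ball iteration augments the plain gradient step with a momentum term (a standard statement of the method in modern notation, not a quotation from the citation):

        x_{k+1} = x_k - a ∇f(x_k) + b (x_k - x_{k-1}),    a > 0,  0 <= b < 1,

    where a is the step size and b the momentum parameter. Setting b = 0 recovers plain gradient descent; the (x_k - x_{k-1}) term is precisely the "momentum" reused by today's deep-learning optimizers.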

    Boris is always looking for new problems with serious mathematical content and practical importance, and at 85 he remains as active and nearly as productive as he was 40 years ago. In the past few years he has worked productively on robust principal component analysis, Monte Carlo techniques in optimization, the conditional gradient algorithm, an adaptive Newton method, sparse optimal control, and mathematical aspects of search engines, to name just a few topics.

    Prize committee: Alex Shapiro (chair), Dimitris Bertsimas, Gérard P. Cornuéjols, Stephen J. Wright

    --------------------------------------------------------------
    2021 INFORMS Optimization Society Farkas Prize
    --------------------------------------------------------------
    Winner: Andrea Lodi (Jacobs Technion-Cornell Institute, Cornell Tech and Technion, USA and CERC, Polytechnique Montréal, Canada)

    Citation: Andrea Lodi received his PhD in 2000 from the University of Bologna and received the 2004 Herman Goldstine Fellowship in Mathematical Sciences from the IBM T.J. Watson Research Center. Prior to joining Cornell Tech in 2021, he was Canada Excellence Research Chair at Polytechnique Montréal (2015-2021) and Professor of Operations Research at the University of Bologna (2007-2015). Professor Lodi is currently an Area Editor of the INFORMS Journal on Computing and a former Editor-in-Chief of Optima. His other editorial appointments include Mathematical Programming, Mathematics of Operations Research, Management Science, and Mathematical Programming Computation.

    Andrea Lodi is a leader in the development of state-of-the-art solvers and in practical applications of optimization such as crew and vehicle scheduling and water network design. His research on mathematical programming has generated important advances in constraint programming, mixed-integer nonlinear programming (MINLP), and heuristics for mixed-integer linear programming (MILP). Professor Lodi was part of the IBM/CMU team that developed the open-source BONMIN solver for MINLP, and he has been a regular contributor to commercial MILP solvers such as CPLEX. In addition to deterministic solvers, Andrea has developed numerous heuristics for MILP, most notably the feasibility pump, which has been adopted as a standard heuristic in many commercial MILP solvers and which he has extended to MINLP, including nonconvex MINLPs. Andrea has also worked at the intersection of local-search techniques and constraint programming. Most recently, he has pioneered the integration of machine learning and optimization, including the application of machine-learning techniques to tune MILP solvers and, more generally, to solve hard combinatorial optimization problems. He is a worthy recipient of the 2021 Farkas Prize for his broad range of seminal contributions to the practice and theory of optimization.
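
    For context, the feasibility pump alternates between points that are feasible for the LP relaxation and points that are integral, repeatedly solving an LP that minimizes the L1 distance to the current rounding until the two coincide. The sketch below illustrates the basic scheme for 0-1 MILPs using SciPy's linprog; the function name, tolerance, iteration limit, and the simple random-flip anti-cycling step are illustrative assumptions, not the published algorithm's exact details.

        import numpy as np
        from scipy.optimize import linprog

        def feasibility_pump(c, A_ub, b_ub, binary_idx, max_iter=100, seed=0):
            """Minimal sketch of the basic feasibility pump for the 0-1 MILP
            min c@x s.t. A_ub@x <= b_ub, 0 <= x <= 1, x[j] integer for j in
            binary_idx (a list/array of indices). Illustrative only."""
            rng = np.random.default_rng(seed)
            n = len(c)
            bounds = [(0, 1)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)  # LP relaxation
            if not res.success:
                return None
            x = res.x
            x_round = np.round(x)
            for _ in range(max_iter):
                if np.all(np.abs(x[binary_idx] - np.round(x[binary_idx])) < 1e-6):
                    return x  # the LP point is integral: feasible for the MILP
                # For binaries, sum_j |x_j - x~_j| equals
                # sum_{x~_j = 0} x_j + sum_{x~_j = 1} (1 - x_j), so minimizing
                # the L1 distance to the rounding over the LP polyhedron is an LP.
                d = np.zeros(n)
                d[binary_idx] = np.where(x_round[binary_idx] == 0, 1.0, -1.0)
                res = linprog(d, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
                if not res.success:
                    return None
                x = res.x
                new_round = np.round(x)
                if np.array_equal(new_round[binary_idx], x_round[binary_idx]):
                    # Anti-cycling: flip a few randomly chosen binaries of the rounding.
                    flip = rng.choice(binary_idx, size=max(1, len(binary_idx) // 10),
                                      replace=False)
                    new_round[flip] = 1 - new_round[flip]
                x_round = new_round
            return None  # no integer-feasible point found within the iteration limit

    The distance LP keeps the iterate inside the polyhedron while the rounding keeps it integral; the pump succeeds when the two sequences meet.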

    Prize committee: Dorit S. Hochbaum (chair), Jesús A. De Loera, Sven Leyffer, Katya Scheinberg

     

    --------------------------------------------------------------------
    2021 INFORMS Optimization Society Egon Balas Prize 
    --------------------------------------------------------------------

    Winner: Wotao Yin (Alibaba US)

    Citation: Wotao Yin earned his Ph.D. in operations research from Columbia University in 2006. Before joining Alibaba US in 2019, he was a Professor of Applied Mathematics in the Department of Mathematics at the University of California, Los Angeles; from 2006 to 2013 he was with the Department of Computational and Applied Mathematics at Rice University. Dr. Yin won an NSF CAREER award in 2008, an Alfred P. Sloan Research Fellowship in 2009, a Morningside Gold Medal in 2016, and a Damo Award in 2021.

    Wotao Yin's work covers topics ranging from hard theoretical analysis to practical algorithms and code development. He is one of the world's most influential researchers in the fields of operator splitting methods, parallel and distributed computing, decentralized optimization, compressed sensing, and variational image processing. These are fast-growing fields that are particularly important in machine learning and data science.
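
    To give a flavor of what operator splitting looks like in practice, the proximal-gradient (forward-backward splitting) iteration for the LASSO alternates a gradient step on the smooth term with a closed-form proximal step on the nonsmooth term. The snippet below is a generic textbook instance of the technique, not code from Dr. Yin's work.

        import numpy as np

        def ista(A, b, lam, step, iters=500):
            """Proximal gradient (forward-backward splitting) for the LASSO:
            min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
            Converges for step <= 1 / ||A^T A|| (largest eigenvalue)."""
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                z = x - step * (A.T @ (A @ x - b))  # forward step on the smooth part
                # Backward step: soft-thresholding, the prox of step*lam*||.||_1.
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
            return x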

    Wotao Yin's contributions to these fields, and in particular to imaging science, have injected much-needed rigor into the development of efficient optimization methods and have had a lasting influence over the last 15 years or so.

    Prize committee: Andrea Lodi (chair), Sam Burer, Santanu S. Dey, Daniel Kuhn


    -------------------------------------------------------------------------------------
    2021 INFORMS Optimization Society Young Researchers Paper Prize 
    -------------------------------------------------------------------------------------

    Winner: Haihao (Sean) Lu (Booth School of Business, University of Chicago) for the paper titled "An O(s^r)-Resolution ODE Framework for Understanding Discrete-Time Algorithms and Applications to the Linear Convergence of Minimax Problems".

    Citation: There has been renewed interest in using ordinary differential equations (ODEs) to understand the dynamics of discrete-time optimization algorithms (DTAs), because ODEs are easier to analyze than DTAs. However, two fundamental open questions have limited the applications of the ODE approach to optimization algorithm analysis: (1) how does one obtain a suitable ODE from an optimization method, and (2) what is the connection between the convergence of the ODE and the convergence of the optimization method? Haihao Lu's paper constructively answers the first question by proposing a hierarchy of O(s^r)-resolution ODEs, parameterized by the order r, that converges to the discrete-time optimization method as r goes to infinity, where s is the step-size of the algorithm. The paper then addresses the second question by showing that, with a properly chosen energy function in the analysis, linear convergence of the ODE automatically guarantees linear convergence of the corresponding optimization method. Finally, the paper answers fundamental questions about three central algorithms for minimax problems and their convergent/divergent behavior. In summary, the novel and unified framework developed in this paper for analyzing a broad class of optimization algorithms makes it highly deserving of the INFORMS Optimization Society Young Researchers Paper Prize.
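
    To give a flavor of the framework (a standard modified-equation example, not reproduced from the paper): for gradient descent x_{k+1} = x_k - s ∇f(x_k), matching a smooth trajectory X(t) with X(ks) ≈ x_k at the lowest order gives the familiar gradient flow

        dX/dt = -∇f(X),

    while keeping one more order of the Taylor expansion yields the higher-resolution ODE

        dX/dt = -∇f(X) - (s/2) ∇²f(X) ∇f(X),

    which tracks the discrete iterates to a finer resolution in the step-size s. Continuing the expansion order by order is the idea behind the O(s^r) hierarchy; the exact form and indexing in the paper may differ from this sketch.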

    Prize committee: Simge Küçükyavuz (chair), Illya V. Hicks, Guanghui (George) Lan, Alberto Del Pia


    ------------------------------------------------------------------------
    2021 INFORMS Optimization Society Student Paper Prizes
    ------------------------------------------------------------------------

    Winner: Alex L. Wang (Computer Science, Carnegie Mellon University) for the paper titled "On the tightness of SDP relaxations of QCQPs" (jointly authored with Fatma Kılınç-Karzan)

    Citation: This paper examines semidefinite programming (SDP) relaxations of general quadratically constrained quadratic programs (QCQPs). Although every QCQP admits an immediate SDP relaxation, and there has been much work on understanding the approximation quality of SDP relaxations for specific QCQP classes (e.g., MaxCut), much less is known about the quality of SDP relaxations in terms of more abstract properties. Furthermore, little is known about when the SDP relaxation of a given QCQP is exact beyond simple settings (e.g., the case where the QCQP has a single constraint). This paper makes significant progress towards identifying the properties of nonconvex QCQPs that ensure that their SDP relaxations are "good." To this end, the paper defines and examines two notions of SDP exactness: (i) objective value exactness, the condition that the optimal value of the QCQP and the optimal value of its SDP relaxation coincide; and (ii) convex hull exactness, the condition that the convex hull of the QCQP epigraph coincides with the (projected) SDP epigraph. The paper shows that these exactness conditions hold whenever the QCQP satisfies two abstract properties: (i) its set of convex Lagrange multipliers is polyhedral; and (ii) a natural symmetry parameter of the QCQP, the quadratic eigenvalue multiplicity, is sufficiently large. Variants of the paper's results apply to a number of interesting practical problems; for example, variants of the main result show that robust least squares with nonconvex uncertainty regions and certain sphere packing problems can be solved exactly using SDPs.
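
    As background for readers outside this area, the standard lifting works as follows (textbook material, not taken from the paper): a QCQP

        min_x   x^T A_0 x + 2 b_0^T x + c_0
        s.t.    x^T A_i x + 2 b_i^T x + c_i <= 0,    i = 1, ..., m,

    is relaxed by introducing a matrix variable X in place of the product x x^T and requiring only that X - x x^T be positive semidefinite (a convex condition) instead of X = x x^T (a nonconvex one):

        min_{x, X}   <A_0, X> + 2 b_0^T x + c_0
        s.t.         <A_i, X> + 2 b_i^T x + c_i <= 0,    i = 1, ..., m,
                     [[1, x^T], [x, X]] positive semidefinite,

    where <A, X> denotes the trace inner product. Objective value exactness, in the sense above, asks when the optimal values of these two problems coincide.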

    This paper is timely and of very high quality. The constructions it develops are novel and highly nontrivial, demonstrating great creativity and technical skill, and the overall framework has been exceptionally instrumental in establishing results on SDP exactness. Given the widespread use of SDP relaxations and their surprisingly good performance in practical applications, the theoretical justification this paper offers for that phenomenon is foundational.

     

    Honorable Mention: Shixuan Zhang (Industrial and Systems Engineering, Georgia Institute of Technology) for the paper "Stochastic Dual Dynamic Programming for Multistage Stochastic Mixed-Integer Nonlinear Optimization" (jointly authored with Andy Sun)

    Prize committee: Ruth Misener (chair), Paul Grigas, Robert Hildebrand, Weijun Xie

    Warmest congratulations!


    ------------------------------
    Andy Sun

    IOS Treasurer/Secretary

    Associate Professor
    Industrial and Systems Engineering
    Georgia Institute of Technology

    Atlanta, GA, USA

    ------------------------------