INFORMS Open Forum

Announcing the 2023 George Nicholson Student Paper Competition Finalists


    Posted 13 days ago

    On behalf of the 2023 Nicholson Prize Committee, we are pleased to announce this year's finalists. Please join us on Sunday, Oct 15th in Phoenix for the finalists' presentation.


    Session 1 (SC79) - CC-West 211B


    Bahar Taşkesen, EPFL 

    Title: Distributionally Robust Linear Quadratic Control


    We consider a generalization of the classic Linear-Quadratic-Gaussian (LQG) control problem in which the noise distributions are unknown and belong to Wasserstein ambiguity sets. The objective is to minimize the worst-case cost over all distributions in the ambiguity set. We prove that a control policy that is linear in the observations is optimal for this problem. We also propose a Frank-Wolfe algorithm that efficiently identifies the least-favorable distributions within the Wasserstein ambiguity sets and then computes the controller's optimal policy using Kalman filter estimation.
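    For readers unfamiliar with the Frank-Wolfe method the abstract mentions, here is a minimal generic sketch (a quadratic objective over the probability simplex; this toy problem is invented for illustration and is not the paper's distributionally robust LQG formulation):

    ```python
    import numpy as np

    def frank_wolfe(grad, x0, n_iters=100):
        """Frank-Wolfe over the probability simplex: each iteration solves a
        linear minimization over the feasible set (here: pick the best simplex
        vertex), then moves toward it with the classic step size 2/(t+2)."""
        x = x0.copy()
        for t in range(n_iters):
            g = grad(x)
            s = np.zeros_like(x)
            s[np.argmin(g)] = 1.0          # linear oracle: best vertex of simplex
            x += 2.0 / (t + 2) * (s - x)   # convex combination keeps x feasible
        return x

    # Toy objective: f(x) = ||x - c||^2 with target c inside the simplex
    c = np.array([0.2, 0.5, 0.3])
    x = frank_wolfe(lambda x: 2 * (x - c), np.array([1.0, 0.0, 0.0]))
    ```

    Each step only needs a linear oracle over the feasible set, which is what makes the method attractive when that oracle is cheap relative to a full projection.
    
    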


    Xingyu Wang, Northwestern University 

    Title: Large Deviations and Metastability Analysis for Heavy-Tailed Dynamical Systems


    We propose a framework integrating large deviations and metastability analysis of heavy-tailed dynamical systems. Applying it to heavy-tailed stochastic difference/differential equations, we provide sample path large deviations and first exit time analysis, offering the heavy-tailed counterparts of the classical Freidlin-Wentzell and Eyring-Kramers theories. Moreover, our results systematically characterize the intricate phase transitions in first exit times and an intriguing global behavior under truncated heavy-tailed dynamics that sheds light on the generalization mystery in deep learning.


    Akshit Kumar, Columbia University 

    Title: Dynamic Resource Allocation: Spectrum of Achievable Performances and Algorithmic Design Principles


    This work explores the impact of distributional assumptions on algorithmic performance in dynamic resource allocation problems. We identify a novel fundamental driver of regret behavior, showing that polynomial regret is unavoidable. We introduce the Conservativeness with respect to Gaps (CwG) principle and propose the Repeatedly Act using Multiple Simulations (RAMS) algorithm, achieving near-optimal performance. Our findings are in sharp contrast to existing guarantees and provide insights into the fundamental drivers of algorithmic performance in resource allocation.

    Session 2 (SD79) - CC-West 211B


    Daan Rutten, Georgia Institute of Technology 

    Title: Mean-field Analysis for Load Balancing on Spatial Graphs


    The analysis of parallel-server load balancing systems has relied heavily on mean-field analysis. A pivotal assumption is that servers are exchangeable. However, modern data centers have data locality constraints, such that tasks of one type can only be routed to a small subset of servers. An emerging line of research considers load balancing algorithms on bipartite graphs whose two vertex sets represent task types and servers, respectively. In this talk, I will present a novel coupling-based approach to establish mean-field approximation for a large class of graphs which includes spatial graphs.
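    As background for the bipartite setting, a toy simulation of least-loaded routing under compatibility constraints (the graph, arrival pattern, and static-load model below are made up for illustration; this is not the paper's coupling argument):

    ```python
    import random

    # Hypothetical compatibility graph: task type -> subset of servers it may use
    COMPAT = {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [3, 0]}

    def simulate(n_tasks=10000, n_servers=4, seed=0):
        """Route each arriving task to the least-loaded *compatible* server,
        i.e., join-the-shortest-queue restricted by the bipartite graph.
        Loads only accumulate here (no departures), to keep the sketch tiny."""
        rng = random.Random(seed)
        load = [0] * n_servers
        for _ in range(n_tasks):
            task_type = rng.randrange(len(COMPAT))
            servers = COMPAT[task_type]
            target = min(servers, key=lambda s: load[s])
            load[target] += 1
        return load

    load = simulate()
    ```

    Even though no task can choose among all servers, the overlapping neighborhoods keep the loads closely balanced, which is the kind of behavior mean-field approximations on graphs aim to capture.
    
    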


    Yunbei Xu, MIT  

    Title: Bayesian Design Principles for Frequentist Sequential Learning


    We develop a general theory to optimize the frequentist regret for sequential learning problems, where efficient bandit and reinforcement learning algorithms can be derived from unified Bayesian principles. We propose a novel optimization approach to create "algorithmic beliefs" at each round, and use Bayesian posteriors to make decisions. This is the first approach to make Bayesian-type algorithms prior-free and applicable to adversarial settings, in a generic, optimal, and efficient manner.
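    As a point of reference, classical Thompson sampling for Bernoulli bandits is the kind of prior-dependent Bayesian algorithm the abstract contrasts with (the arm means and horizon below are hypothetical; the paper's prior-free construction is different):

    ```python
    import random

    def thompson_sampling(means, horizon=5000, seed=1):
        """Classical Thompson sampling for Bernoulli bandits: keep a
        Beta(successes+1, failures+1) posterior per arm, sample one value
        from each posterior, and pull the arm with the largest sample."""
        rng = random.Random(seed)
        wins = [0] * len(means)
        losses = [0] * len(means)
        pulls = [0] * len(means)
        for _ in range(horizon):
            samples = [rng.betavariate(wins[a] + 1, losses[a] + 1)
                       for a in range(len(means))]
            arm = samples.index(max(samples))
            reward = 1 if rng.random() < means[arm] else 0
            wins[arm] += reward
            losses[arm] += 1 - reward
            pulls[arm] += 1
        return pulls

    pulls = thompson_sampling([0.3, 0.5, 0.7])
    ```

    The Beta(1, 1) starting posterior is a uniform prior; removing this dependence on a prior while retaining frequentist guarantees is the direction the abstract describes.
    
    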


    Haofeng Zhang, Columbia University 

    Title: Estimate-Then-Optimize Versus Integrated-Estimation-Optimization: A Stochastic Dominance Perspective


    In data-driven stochastic optimization, model parameters of the underlying distribution need to be estimated from data in addition to the optimization task. The integration of the estimation and optimization processes can be readily shown to outperform "estimate-then-optimize" when the model is misspecified, while we argue that when the model class is rich enough to cover the ground truth, the performance ordering between the two approaches is reversed for nonlinear problems in a strong sense. Analogous results also hold under constrained settings and when contextual features are available.
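    To make the two pipelines concrete, a toy newsvendor comparison under misspecification (the exponential demand, normal model, and cost rates are invented for illustration; the paper's stochastic-dominance analysis is far more general):

    ```python
    import numpy as np
    from statistics import NormalDist

    rng = np.random.default_rng(0)
    scale, b, h = 10.0, 3.0, 1.0                # true exponential demand; cost rates
    demand = rng.exponential(scale, size=2000)  # hypothetical training data
    ratio = b / (b + h)                         # newsvendor critical ratio

    # Estimate-then-optimize: fit a (misspecified) normal demand model,
    # then order its critical-ratio quantile.
    q_eto = NormalDist(float(demand.mean()), float(demand.std())).inv_cdf(ratio)

    # Integrated estimation-optimization: minimize empirical cost directly,
    # which for the newsvendor is the empirical critical-ratio quantile.
    q_ieo = float(np.quantile(demand, ratio))

    def true_cost(q):
        """Exact expected newsvendor cost under the true exponential demand."""
        return b * scale * np.exp(-q / scale) + h * (q - scale * (1 - np.exp(-q / scale)))
    ```

    With the model misspecified, the integrated order quantity sits near the true optimum while the plug-in normal quantile does not; when the model class does contain the truth, the abstract argues the ordering flips for nonlinear problems.
    
    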


    Anton Braverman and He Wang
    (Co-chairs of the 2023 Nicholson Competition)