INFORMS Open Forum

Call for Papers: NeurIPS 2022 Workshop: Order up! The Benefits of Higher-Order Optimization in Machine Learning


    Posted 08-15-2022 11:23
    Dear Friends and Colleagues,

    We hope you are doing well and staying safe and healthy.

    We are excited to invite you to our NeurIPS 2022 workshop, "Order up! The Benefits of Higher-Order Optimization in Machine Learning". We are currently accepting papers for the workshop and would be delighted to have you, your collaborators, and your students submit papers. The call for papers can be found at https://order-up-ml.github.io/CFP/. The submission deadline is September 22, 2022 (AOE).

    Note: NeurIPS 2022 (https://neurips.cc/) will be held in New Orleans (Nov. 28-Dec. 3, 2022). The workshop will be held on December 2, 2022.

    Please let us know if you have any questions. We look forward to receiving your submissions.

    ================================================================================
    Order up! The Benefits of Higher-Order Optimization in Machine Learning
    Optimization is a cornerstone of nearly all modern machine learning (ML) and deep learning (DL). Simple first-order gradient-based methods dominate the field for convincing reasons: low computational cost, simplicity of implementation, and strong empirical results.

    Yet second- or higher-order methods are rarely used in DL, despite also having many strengths: faster per-iteration convergence, frequent explicit regularization on step-size, and better parallelization than SGD. Additionally, many scientific fields use second-order optimization with great success.
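    For readers less familiar with the distinction, here is a minimal, purely illustrative Python sketch (our own example, not part of the official workshop materials) contrasting a single first-order gradient step with a single second-order Newton step on a strongly convex quadratic, the best case for Newton's method, where one Newton step lands exactly on the minimizer:

        # Illustrative sketch: f(x) = 0.5 * x^T A x - b^T x, minimized at x* = A^{-1} b.
        import numpy as np

        A = np.array([[3.0, 0.0], [0.0, 1.0]])   # positive-definite Hessian of f
        b = np.array([1.0, 1.0])
        x = np.zeros(2)                           # starting point

        grad = A @ x - b                          # gradient of f at x
        x_first_order = x - 0.1 * grad            # first-order step with step size 0.1
        x_newton = x - np.linalg.solve(A, grad)   # second-order (Newton) step

        print(x_first_order)  # [0.1 0.1]          -- a small move toward the minimizer
        print(x_newton)       # [0.33333333 1.]    -- exactly x* = A^{-1} b in one step

    Making this kind of curvature information affordable at DL scale is exactly the sort of question the workshop topics below address.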

    A driving factor for this is the large difference in development effort. By the time higher-order methods were tractable for DL, first-order methods such as SGD and its main variants (SGD + Momentum, Adam, …) already had many years of maturity and mass adoption.

    The purpose of this workshop is to address this gap, to create an environment where higher-order methods are fairly considered and compared against one another, and to foster healthy discussion, with the end goal of mainstream acceptance of higher-order methods in ML and DL.

    =========================================
    Plenary Speakers:
    - Amir Gholami (UC Berkeley)
    - Coralia Cartis (University of Oxford)
    - Frank E. Curtis (Lehigh University)
    - Donald Goldfarb (Columbia University)
    - Madeleine Udell (Stanford University)

    =========================================
    CALL FOR PAPERS
    We welcome submissions to the workshop under the general theme of "Order up! The Benefits of Higher-Order Optimization in Machine Learning". Some examples of acceptable topics include:
    - Higher-order methods,
    - Adaptive gradient methods,
    - Novel higher-order-friendly models,
    - Higher-order theory papers,
    - and many more.

    For submission details, please see https://order-up-ml.github.io/CFP/. Please submit via our CMT portal: https://cmt3.research.microsoft.com/HOOML2022.

    Important Dates:
    Submission deadline: September 22, 2022 (AOE)
    Acceptance notification: October 20, 2022 (AOE)
    Final version due: TBD

    =========================================
    Organizers:
    - Albert S. Berahas (University of Michigan)
    - Jelena Diakonikolas (University of Wisconsin-Madison)
    - Jarad Forristal (University of Texas at Austin)
    - Brandon Reese (SAS Institute Inc.)
    - Martin Takáč (MBZUAI)
    - Yan Xu (SAS Institute Inc.)
    ================================================================================


    ------------------------------
    Albert S. Berahas
    Assistant Professor
    University of Michigan
    ------------------------------