INFORMS Open Forum

  • 1.  Capacity of LLMs in Doing OR/MS Research

    Posted 06-24-2025 08:29

    Hi everyone,

    We wrote a blog post summarizing our findings on using ChatGPT-o3 for OR/MS research: The Capacity of LLMs in Doing OR/MS Research 

    A summary of the post is as follows:

    • In this post, we summarize our findings from exploring the capability of OpenAI's ChatGPT-o3 model (accessible with the $20/month Plus plan) in doing OR/MS research. Based on our exploration, all research types within the INFORMS community will be affected by LLM tools: theory, modeling, and empirical. These tools significantly change the way people do research in OR/MS and related fields such as statistics. Beyond that, a rethinking of the values of doing research and of the training of Ph.D. students is also necessary.
    • LLMs can help with every step of OR/MS research: proposing ideas, literature review, writing code, theory analysis and proofs, and handling revisions. We don't try to catalogue the failure cases that current LLMs can't handle, or to predict what LLMs will be able to do in the future. It's easy to find cases where current LLMs fail or hallucinate, but we should keep in mind that the models are currently improving on a monthly or quarterly basis. We, as the authors of this article, wouldn't be surprised if a much stronger model appeared within the next half year to two years.
    • Current LLM tools pose a challenge to the INFORMS publication system in terms of submission volume, the review process, judgment criteria, and so on. We don't have an estimate of how many researchers in the INFORMS community already use these tools for doing research or writing reviews, or of the extent of that usage. We want to take this opportunity to align the community's understanding of the matter.

    Cheers,

    Xiaocheng



    ------------------------------
    Xiaocheng Li
    Assistant Professor
    Imperial College Business School, Imperial College London
    ------------------------------


  • 2.  RE: Capacity of LLMs in Doing OR/MS Research

    Posted 06-25-2025 09:36

    Hi Xiaocheng,

    Thanks for sharing your exploration with LLMs and raising awareness. Their capabilities, and particularly their rate of progress, are indeed mind-boggling. I hope we can have more discussion of the impact on us as researchers, reviewers, and members of the INFORMS community.

    Best,

    Ningyuan



    ------------------------------
    Ningyuan Chen
    Associate Professor
    Department of Management, University of Toronto Mississauga
    Operations Management and Statistics, Rotman School of Management, University of Toronto
    e-mail: ningyuan.chen@utoronto.ca
    Web: individual.utoronto.ca/ningyuanchen
    ------------------------------



  • 3.  RE: Capacity of LLMs in Doing OR/MS Research

    Posted 06-25-2025 14:50

    Xiaocheng,

    Thanks for sharing your findings. It's interesting that o3 outperformed Claude and Gemini (for now) on your tasks, but as you stated, these models are improving rapidly.

    I'd love to hear members' thoughts as well. What are the advantages and critical issues of using LLMs in OR/MS research?

    --Warren



    ------------------------------
    Warren Hearnes, PhD
    Founder, OptiML AI
    INFORMS Board Role: VP Technology Strategy
    ------------------------------



  • 4.  RE: Capacity of LLMs in Doing OR/MS Research

    Posted 06-27-2025 17:18
    Edited by Gerhard-Wilhelm Weber 06-27-2025 17:20
    Dear Xiaocheng,
    Thank you very much for your important and stimulating post! I have learned a lot from it.
    The phrase "Everything is leaf" („Alles ist Blatt") goes back to the famous poet, thinker, and researcher Johann Wolfgang von Goethe. In his native language, German, „Blatt" ("leaf") is used not only in a botanical sense, or even an entomological or ichthyological-zoological one, but also for a sheet of paper, such as in a book. This is fitting, if only because the word "book" goes back to the tree "beech" („Buche"). Indeed, everything is represented in words, even in spiritual words, such as God's Word and the Book of Life; and words, in turn, consist of letters, in German „Buchstaben" (from „Buch" ("book", "Fagus") and „Staben" ("staffs"), probably with etymological roots in a Germanic practice of casting lots).
    Of course, a book is more than the sum of its pages („Blätter"), more than the sum of its words, and more than the sum of its letters; it also consists, quite essentially, of the connections between all of these, the meanings of all of these, and ultimately the spirit underlying, within, and above all of these, and the life, or generalized life, in all of these.
    Ultimately, we also speak of the Holy Spirit and Eternal Life. All of these permeate, and are simultaneously reflected in, "language" in general and "languages" in particular.
    It is therefore only logical that we trace our sciences and their applications back to languages and their mutual intelligibility and interactions. Artificial Intelligence (AI), and especially Large Language Model (LLM) tools, is no exception. The latter are crucial for the development, deployment, and management of LLMs in so many modern applications.
    LLM tools serve tasks such as model fine-tuning and deployment, integration (including with other systems), and monitoring. Furthermore, they increase efficiency, align and streamline workflows, and enable the creation of high-performance, more trustworthy applications of high-quality AI.
    The various translation programs for, between, and among languages serve as a prototype or archetype for AI in general and LLM tools in particular. At the same time, we always keep in mind all the generalizations of all the aforementioned.
    The (Holy) Spirit of truth, of life-friendliness, and of love itself, which can and should be present in all of this, may then also be of decisive help in overcoming existing problems in LLM tools and in AI in general.
    Reference: G.-W. Weber, Times and Lives, in preparation.



    ------------------------------
    Gerhard-Wilhelm Weber
    Professor
    Poznan University of Technology
    Poznan
    ------------------------------



  • 5.  RE: Capacity of LLMs in Doing OR/MS Research

    Posted 06-30-2025 16:40

    Thanks, Xiaocheng!

    I enjoyed your insightful blog article on leveraging LLMs to help with OR/MS research. I particularly liked the prompt you provided for responding to revise-and-resubmits.

    I am more optimistic than you are about the implications of LLMs improving researcher productivity. More excellent publications should result. Because journal capacities are mostly fixed, the papers journals publish will be of higher quality. This is good for the profession. Those who master LLMs (for instance, by following your suggestions) should be able to create the best papers, using human talents supplemented by AI. As journal readers, we should care primarily about an article's quality and insights and not so much about how the paper was created (aside from avoiding plagiarism, of course).



    ------------------------------
    John Milne
    Clarkson University
    Potsdam, NY
    jmilne@clarkson.edu
    ------------------------------



  • 6.  RE: Capacity of LLMs in Doing OR/MS Research

    Posted 07-08-2025 17:20

    Hi Xiaocheng,

    Thank you for sharing this wonderful and insightful post! Indeed, the capabilities of LLMs are truly astonishing, and their rapid progress is even more impressive.

    In line with your research, we recently conducted a case study at Columbia Business School [1] in which we found that all the LLMs we tested passed our qualification exams on stochastic problems by a significant margin. This result was genuinely surprising: even our well-prepared PhD students, who spend a year on these topics, find these problems challenging, and so do I. :)

    Still, I find myself on the optimistic side. As AI continues to advance, I see real opportunities for OR researchers to amplify their impact by embracing these new technologies. If we view the OR pipeline, from "real-world problem" to deployable solution (see Figure 1), as a process that typically takes years, it's clear that many of its parts could be automated or made more efficient by LLMs and AI agents. This, in turn, could allow us to achieve broader and deeper impact.

    I also hope more OR colleagues will join the study and application of LLMs and AI agents, both in academia and in industry. This technology has the potential to fundamentally revolutionize decision-making in the real world, and our community is well positioned to make a central contribution. Opportunities include optimizing LLM systems with OR, building better simulations using LLMs, building better AI agents for process automation, improving AI agents with reinforcement learning, and many, many more.

    So it's great to see posts like this, and I look forward to more research and discussion on LLMs in OR!

    Best,
    Tianyi

    [1] Kumar, Akshit, Tianyi Peng, Yuhang Wu, and Assaf Zeevi. "Performance of LLMs on Stochastic Modeling Operations Research Problems: From Theory to Practice." arXiv preprint arXiv:2506.23924 (2025).



    ------------------------------
    Tianyi Peng
    Assistant Professor
    Columbia University
    New York NY
    ------------------------------