Thanks for the thoughtful response, Alexander. I agree that human analysts remain responsible for understanding the problem, validating assumptions, and making final decisions, especially in technical fields like OR. AI agents shouldn't replace expert judgment. Even now, though, I think there is a role for AI agents to assist analysts with well-defined, structured tasks, not replace them. AI may not understand statistics the way a human does, but it can follow rules and patterns to suggest or automate standard steps. Often these are repeatable processes that benefit greatly from automation, especially at scale.
I personally use AI differently than asking Google- or Wikipedia-type questions. Since LLMs can read and process text and images, I ask follow-up questions and dive into different ideas and perspectives, having a conversation about the topic. I can have it check a few of my assumptions quickly with some Python code that it writes and runs. I can have it read a chart and make a summary or comparison. I have even had it read an academic paper and create (with some success) the code to replicate the experiments. The quality of what it produces keeps getting better, too. If it consistently does well at tasks that I've given it, then I start to give it more autonomy to take action without my approval. That's the type of "AI agent" that I have in mind - something that has a goal and takes action to satisfy that goal.
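To give a concrete flavor of what "check a few of my assumptions" means in practice, here is a minimal sketch of the kind of throwaway Python an agent might write and run. The wait-time data and the assumed mean of 10 minutes are made up purely for illustration:

# Minimal sketch of an agent-written assumption check (illustrative data only).
import numpy as np
from scipy import stats

wait_times = np.array([8.4, 11.2, 9.7, 12.5, 10.1, 9.3, 10.8, 11.6])  # minutes (made-up sample)

# Check 1: is the sample roughly consistent with normality? (Shapiro-Wilk test)
w_stat, p_normal = stats.shapiro(wait_times)
print(f"Shapiro-Wilk p-value: {p_normal:.3f}")

# Check 2: is the observed mean consistent with an assumed mean of 10 minutes?
t_stat, p_mean = stats.ttest_1samp(wait_times, popmean=10.0)
print(f"One-sample t-test p-value: {p_mean:.3f}")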
As for the education point: I share the concern about over-reliance without understanding. I hope we can integrate these tools into education in a way that deepens understanding rather than bypasses it.
------------------------------
Warren Hearnes, PhD
Founder, OptiML AI
INFORMS Board Role: VP Technology Strategy
------------------------------
Original Message:
Sent: 06-30-2025 11:48
From: Alexander Kolker
Subject: Are Agents AI or OR?
Dear Warren Hearnes,
I have much lower expectations for the role and the impact of AI agents. I don't think that AI agents can or should choose which tools to use. Your example, "They do figure out things like 'I need to use the formula for confidence intervals for the t-distribution to quantify uncertainty...'", requires, first, understanding the meaning of a confidence interval and the t-distribution, and whether the t-distribution can even be applied in the particular case. It is the qualified human analyst or OR specialist who must understand the problem and make that decision, not an anonymous AI agent that is not responsible for anything.
One can ask AI something like "What is a confidence interval?" or "What is the t-distribution?" with examples to refresh one's memory. But this is similar to, say, a Wikipedia search. No AI is needed at all.
You write, "We can make sure the right tools are chosen." Who is "we"? Anonymous AI agents or responsible human analysts?
I have serious concerns that the current trend of uncritical reliance on AI in technical fields will inevitably result in a diminished level of education for students.
------------------------------
Alexander Kolker
GE Healthcare
Milwaukee, WI
Original Message:
Sent: 06-30-2025 09:40
From: Warren Hearnes
Subject: Are Agents AI or OR?
Thanks for starting this discussion on AI agents and OR. Here are my thoughts.
I see an agent as a goal-driven entity that uses environmental data to act, though that is a fuzzy definition that fits humans and non-human systems alike. That's not a bad thing, since AI is supposed to mimic human intelligence, right? The AI agent uses its capabilities to read and understand text, images, and structured data (similar to a person) to choose which tools to use, so it isn't all LLM generation. They do figure out things like "I need to use the formula for confidence intervals for the t-distribution to quantify uncertainty in the mean service time" and then solve it, possibly by writing a Python script that does the computation.
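To make that example concrete, here is a rough sketch of the kind of Python script an agent might produce for a t-based confidence interval on the mean service time. The sample data and the 95% confidence level are assumptions for illustration, not output from any particular agent:

# Sketch: t-distribution confidence interval for the mean service time.
# Sample data and the 95% level are illustrative assumptions.
import numpy as np
from scipy import stats

service_times = np.array([4.2, 5.1, 3.8, 6.0, 4.7, 5.5, 4.9, 5.2])  # minutes
n = len(service_times)
mean = service_times.mean()
sem = stats.sem(service_times)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, n - 1, loc=mean, scale=sem)
print(f"Mean: {mean:.2f} min, 95% CI: ({ci_low:.2f}, {ci_high:.2f})")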
I think many of these tools that AI agents choose to use will be OR techniques that the INFORMS community has created, so that is where we can really add value. We can make sure the right tools are chosen, which will lead to smarter decisions. AI agents already enhance productivity of human analysts by automating low-level tasks, guiding model development, and translating outputs into actionable insights. I use AI agents to help me code. I remain in control by defining goals, validating models, and making final decisions while the AI acts as a smart assistant that brings speed, scale, and consistency to my workflow. Once the AI is right almost all the time, I think it is natural to give it the autonomy to make the decision on its own - not unlike how we would give a new employee more responsibility or promote an OR model into production.
How have others in this community utilized AI agents? What thoughts or concerns do you have?
------------------------------
Warren Hearnes, PhD
Founder, OptiML AI
INFORMS Board Role: VP Technology Strategy
Original Message:
Sent: 06-29-2025 23:02
From: Kenneth Fordyce
Subject: Are Agents AI or OR?
The hot AI topic of the month is Agents. The question of the week: are Agents AI or Operations Research (OR)? I recently read claims that AI agents will propose allocation strategies during constrained supply. Is meeting this business need something to be accomplished by AI decision technology or OR decision technology? Of course, we first need to ask which AI technology. I would suggest it is incumbent on the applied side of INFORMS to bring clarity to this question. My personal view is that agents are a software architecture in which to embed solutions. As Peter Norden observed in the mid-1980s, AI is a moving target.
------------------------------
Kenneth Fordyce
Director, Analytics Without Borders
Arkieva
Wilmington, DE
------------------------------