Yijie Peng, Michael C. Fu, Jian-Qiang Hu, and Bernd Heidergott win 2019 Outstanding Simulation Publication Award

*Pierre L’Ecuyer, Michael Fu, Yijie Peng, and Jian-Qiang Hu*

The winners of the INFORMS Simulation Society Outstanding Publication Award for 2019 are Yijie Peng, Michael Fu, Jian-Qiang Hu, and Bernd Heidergott for their article titled "A New Unbiased Stochastic Derivative Estimator for Discontinuous Sample Performances with Structural Parameters".

This paper develops a general type of unbiased estimator for the derivative of a mathematical expectation with respect to a model parameter θ. Stochastic derivative estimation by simulation is an important problem with a relatively long and rich history in the I-SIM community. It is particularly important for sensitivity analysis and for stochastic optimization with respect to continuous decision parameters.

The best known methods for derivative estimation are infinitesimal perturbation analysis, the likelihood ratio method, and the weak derivative approach. Variants and hybrids of these methods have also been proposed over the last 30 years. However, a key challenge has been handling discontinuities in the sample performances for non-distributional parameters, which arise in a wide variety of applications, from financial engineering to production/inventory management.

In their paper, Peng, Fu, Hu, and Heidergott propose a new approach named the *generalized likelihood ratio* (GLR) method, capable of dealing with a broad class of discontinuities in a general framework. GLR expands the applicability of gradient estimation techniques in a systematic way, by generalizing the three methods mentioned above. It provides a single-run unbiased derivative estimator of an expectation with respect to θ in the case where the sample performance measure is discontinuous with respect to θ and its probability distribution as well as its support may also depend on θ. Many existing and new applications with discontinuities can potentially be treated by the new method in a unified manner. The paper opens up further topics for investigation. In follow-up work, the authors apply their method to calibrating parameters in misspecified stochastic models, estimating sensitivities of a distortion risk measure used in behavioral economics, estimating the derivative of a quantile, and training discontinuous artificial neural networks.
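
To give a flavor of why discontinuous sample performances are delicate, and how pushing the parameter into the distribution restores differentiability, here is a toy sketch (our own illustration of the push-out/likelihood-ratio idea that GLR generalizes, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy case: estimate d/dtheta E[1{X <= theta}] with X ~ N(0, 1).
# The sample performance 1{X <= theta} is discontinuous in theta, so its
# pathwise derivative is 0 almost surely (and useless). Pushing theta into
# the distribution (Z = X - theta ~ N(-theta, 1)) and differentiating the
# density instead gives the unbiased single-run estimator -X * 1{X <= theta},
# whose expectation equals d/dtheta Phi(theta), the N(0,1) density at theta.
theta = 0.5
x = rng.normal(0.0, 1.0, size=1_000_000)
estimate = np.mean(-x * (x <= theta))
true_value = np.exp(-theta**2 / 2) / np.sqrt(2 * np.pi)
print(estimate, true_value)   # both close to phi(0.5) ≈ 0.352
```
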

Congratulations on a deep, solid, and remarkable piece of work!

--------------------------------------------------------------------------------------

Boris N. Oreshkin, Nazim Régnard, and Pierre L’Ecuyer win 2018 Outstanding Simulation Publication Award

*Left to right: Raghu Pasupathy, Pierre L’Ecuyer, Björn Johansson*

The selection committee enthusiastically agrees and is pleased to give the INFORMS Simulation Society’s Outstanding Publication Award to Boris N. Oreshkin, Nazim Régnard, and Pierre L’Ecuyer for their paper:

Rate-Based Daily Arrival Process Models with Application to Call Centers, *Operations Research* 64(2): 510–527, 2016

- The authors explore an important problem in the area of input modeling. In particular, the authors propose, develop and compare new stochastic models for the daily arrival rate in a call center.
- Current models often presume certain independence features across time periods, but the authors argue that this may not be sufficiently realistic for assessing the performance or for making system design decisions. The authors propose and analyze models which account for dependence across these time periods, and develop specialized techniques for doing so.
- Along the way, they explore the relationship between the arrival rates, which are not directly observable and therefore make maximum likelihood estimation challenging, and the arrival counts in the periods, which have the advantage of being observable.
- Although the main focus of the article is on the arrival processes for call centers, which comprise a significant sector of the economy, its contributions are also relevant for models of other important applications, such as arrival processes for retail, health services, restaurant services, and service management more generally.
- The authors have done a fine job of linking together theory for input process modeling, probabilistic tools such as normal copulas to explore correlations, and integration with both call center data and data from a major utility company to demonstrate the characteristics of their proposal.

Ilya Ryzhov wins 2017 Outstanding Simulation Publication Award

The selection committee enthusiastically agrees and is pleased to give the INFORMS Simulation Society’s Outstanding Publication Award to Ilya Ryzhov for his paper:

Ryzhov, I. 2016. On the convergence rates of expected improvement methods, *Operations Research*, **64**(6), 1515-1528.

Only in simulation is “complete enumeration” an optimization method; this is the case because we can only *estimate* system performance, simulating feasible solutions actually costs time or money, and the cost can be considerable when the number of feasible solutions is large or the simulation is slow. We tend to turn simulation optimization problems into complete enumeration problems---better known as *ranking & selection problems*---because that is the one type of optimization we know how to do effectively. When I say we know how to do it “effectively,” I mean that there are a bunch of us with different approaches who argue that we know how to do it “more effectively” than the other folks do.

Dr. Ryzhov’s paper does *not* introduce a new ranking and selection method. Instead, it shines an incisive light on the performance of a class of methods that have great intuitive appeal and mountains of good empirical performance, but have defied a rigorous comparison with other approaches: these are the so-called “expected improvement” or EI methods. Typically defined within a Bayesian setting, EI uses the current joint posterior distribution of the objective function values of all feasible solutions to assess the upside of exploring each of them next, relative to what we know now. EI methods are therefore naturally sequential, which is good, but myopic, which might seem to be bad. However, Dr. Ryzhov shows that, under fairly broad conditions, following the EI prescription leads to simulating the systems such that their sampling ratios are asymptotically the same as those of Optimal Computing Budget Allocation, which is known to yield near-optimal performance in ranking and selection. Not only is the analysis clever, but it clarifies the relationships among several EI variants. Quoting the nomination letter, “I believe that this paper represents a fundamental unifying development in R&S.”
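
To make the mechanism concrete, here is a minimal sketch of a myopic EI loop for selecting the best of five simulated systems, under independent normal posteriors with known sampling variance (our own toy setup; the `true_means` and this particular EI formula are illustrative assumptions, not the specific EI variants analyzed in the paper):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)

def norm_pdf(z):
    return np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)

def norm_sf(z):
    # survival function 1 - Phi(z), vectorized via math.erf
    return np.array([0.5 * (1.0 - erf(v / sqrt(2.0))) for v in z])

true_means = np.array([0.0, 0.3, 0.5, 0.7, 1.0])   # system 4 is truly best
noise_sd = 1.0
n = np.full(5, 5.0)                                 # 5 initial replications each
sums = np.array([rng.normal(m, noise_sd, 5).sum() for m in true_means])

for _ in range(2000):
    mu = sums / n
    sd = noise_sd / np.sqrt(n)       # posterior std of each posterior mean
    d = mu.max() - mu                # gap between each system and the leader
    z = d / sd
    ei = sd * norm_pdf(z) - d * norm_sf(z)   # upside of one more look at each system
    i = int(np.argmax(ei))           # myopically sample where EI is largest
    sums[i] += rng.normal(true_means[i], noise_sd)
    n[i] += 1.0

selected = int(np.argmax(sums / n))  # with high probability, the true best (4)
```

Note how the loop is self-correcting: whichever system currently looks best has gap zero and so keeps receiving samples, while competitors’ EI decays rapidly once their gap is large relative to their posterior spread.
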

The INFORMS Simulation Society’s Outstanding Publication Award recognizes exceptional contributions to the simulation literature in the form of articles, books, book chapters and monographs, copyrighted between 2013 and 2015. The award committee, consisting of Jeff Hong, Jeremy Staum, and Barry Nelson, is pleased to present the 2016 Award to Chang-Han Rhee and Peter Glynn for their paper:

“Unbiased Estimation with Square Root Convergence for SDE Models,” which appeared in *Operations Research*, Volume 63 (2015), 1026–1043.

The paper formalizes an idea by which a given sequence of biased estimators that converge to a quantity of interest can be strategically manipulated to construct an unbiased estimator of the quantity. It then applies this novel idea to the specific context of solving stochastic differential equations (SDEs), resulting in a method that is guaranteed to achieve the canonical square root convergence rate for any SDE solution scheme having strong order *p* > 1/2. But, as pointed out by the nominator, the contribution of this paper is “more far-reaching [than the SDE example] in that it opens the door for researchers to pursue similar but specific schemes for use within other areas,” such as derivative estimation, simulation optimization and function approximation. The committee adds that for a paper so technically deep, the exposition is amazingly clear.
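
The debiasing idea itself can be shown on a deterministic toy sequence (our illustration under simplified assumptions; the paper's applications use sequences of SDE discretization estimators rather than partial sums): randomize the truncation level of a telescoping series and reweight each increment by the inverse probability of including it.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_term_estimator(p=0.5, max_n=60):
    """One unbiased sample of e = sum_{n>=0} 1/n!, built from the partial-sum
    sequence S_n by randomized truncation: draw a random level N with
    P(N >= n) = (1-p)^n and reweight each increment
    Delta_n = S_n - S_{n-1} = 1/n! by 1 / P(N >= n).
    (max_n is only a numerical guard; P(N > 60) is negligible.)"""
    n = rng.geometric(p) - 1              # P(N = n) = p (1-p)^n, n = 0, 1, ...
    z, fact = 0.0, 1.0
    for k in range(min(n, max_n) + 1):
        if k > 0:
            fact *= k
        z += (1.0 / fact) / (1 - p) ** k  # Delta_k / P(N >= k)
    return z

samples = [single_term_estimator() for _ in range(100_000)]
print(np.mean(samples))   # close to e ≈ 2.71828, with no truncation bias
```

Each sample costs a random but finite amount of work, yet its expectation is the *limit* of the sequence, which is exactly the mechanism that restores the canonical square root convergence rate in the SDE setting.
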

*Left to right: Russell Barton, Jeremy Staum, Barry Nelson, and Wei Xie*

The award committee, consisting of David Goldsman, L. Jeff Hong, and Jeremy Staum, presented the 2015 Outstanding Simulation Publication Award to Russell Barton, Barry Nelson, and Wei Xie for their papers:

- Russell R. Barton, Barry L. Nelson, and Wei Xie (2014). “Quantifying Input Uncertainty via Simulation Confidence Intervals,” INFORMS Journal on Computing 26(1): 74–87.
- Wei Xie, Barry L. Nelson, and Russell R. Barton (2014). “A Bayesian Framework for Quantifying Uncertainty in Stochastic Simulation,” Operations Research 62(6): 1439–1452.

Accounting for uncertainty in the inputs to a simulation model is an important problem in simulation experiment design and analysis. These papers provide practical and theoretically compelling solutions to the problem, while being extremely well-written and accessible. The papers provide methods to generate a bootstrap confidence interval or a Bayesian credible interval for the true performance of the simulated system given the unknown true values of the inputs. To reduce the computational cost of doing this, the proposed methods harness the power of stochastic kriging metamodeling, which avoids relying on a simpler metamodel, such as a linear metamodel, being accurate. The papers also advance the justification for bootstrapping, exploiting the fact that the metamodel is smoother than the raw simulation output.
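
The general bootstrap recipe behind the first paper can be sketched as follows (a bare-bones illustration with a hypothetical toy `simulate_performance`; the papers' key contribution, the stochastic kriging metamodel that makes the resampling affordable, is omitted here):

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_performance(rate, n=2000):
    """Toy stand-in for a simulation run: mean of n Exp(rate) service times."""
    return rng.exponential(1.0 / rate, size=n).mean()

# Real-world input data: 500 observed service times (true rate 1.5, unknown to us)
data = rng.exponential(1.0 / 1.5, size=500)

boot = []
for _ in range(300):
    resample = rng.choice(data, size=data.size, replace=True)  # bootstrap the data
    rate_hat = 1.0 / resample.mean()                           # refit the input model
    boot.append(simulate_performance(rate_hat))                # rerun the simulation
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo, hi)   # interval for system performance that reflects input uncertainty
```

Each bootstrap replicate requires a fresh simulation run, which is exactly the cost the papers attack by predicting those runs with a stochastic kriging metamodel instead.
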

The award committee, consisting of David Goldsman, Marvin Nakayama, and Jeremy Staum, presented the 2014 Outstanding Simulation Publication Award to Vijay Desai, Vivek Farias, and Ciamac Moallemi for their paper:

V. Desai, V. Farias, and C. Moallemi, “Pathwise Optimization for Optimal Stopping Problems,” *Management Science*, Vol. 58, No. 12, Dec. 2012, pp. 2292‒2308.

Over the last two decades, simulation optimization has emerged as an important and thriving area of research. Besides being interesting from a theoretical point of view, simulation optimization has a wide variety of practical applications, including real option management of commodities, the pricing of financial derivatives, inventory optimization, and supply chain management. But optimization problems, especially in high-dimensional settings, are difficult and time-consuming to solve via naive simulation. Improving the performance of simulation optimization algorithms has been a longstanding goal.

The authors propose a new convex optimization procedure that uses a dual characterization of optimal stopping problems to establish upper and lower bounds on the optimal solution to the underlying optimization problem. In fact, the paper is distinctive in formulating an optimal stopping problem as a convex optimization problem; and it turns out that this insight makes the procedure very fast, despite being simulation-based.

From a theoretical perspective, the paper includes very nice results on martingale duality and bounds based on a Markov chain notion of predictability. At the same time, both the problem and the proposed solution are quite practical. In addition, the paper is extremely well written and accessible, despite its mathematical sophistication.

There has been a great deal of work in recent years on simulation-based methods for optimal stopping problems, but this paper stands out for its combination of theoretical and practical contributions. In fact, this paper makes valuable contributions to several strands of research: simulation-based optimization, approximate dynamic programming, and simulation methods for finance.

The award committee, consisting of Marvin Nakayama, David Goldsman, and Pierre L’Ecuyer, is pleased to present the 2013 Outstanding Simulation Publication Award to Bruce Ankenman, Barry L. Nelson, and Jeremy Staum for their paper:

B. Ankenman, B. L. Nelson, and J. Staum. Stochastic Kriging for Simulation Metamodeling. *Operations Research*, Vol. 58, No. 2, March–April 2010, pp. 371–382.

This article extends kriging for deterministic computer experiments to stochastic kriging for simulation metamodeling; demonstrates the importance of incorporating both the intrinsic uncertainty of stochastic simulation and the extrinsic uncertainty about the unknown response surface; and shows that some conventional wisdom in metamodeling does not hold.
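
The core modeling idea can be sketched in a few lines (a simplified illustration with a fixed Gaussian kernel, zero prior mean, and made-up toy data; the paper also estimates the model parameters and handles trend terms):

```python
import numpy as np

rng = np.random.default_rng(1)

# The prediction covariance combines the extrinsic GP covariance of the
# unknown response surface with a diagonal "nugget" equal to the estimated
# intrinsic variance of the simulation noise at each design point.

def gauss_kernel(a, b, length=0.2, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

x = np.linspace(0.0, 1.0, 12)                  # design points
r = 50                                         # replications per design point
sims = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, size=(r, x.size))
ybar = sims.mean(axis=0)                       # sample mean response
nugget = sims.var(axis=0, ddof=1) / r          # intrinsic variance of each mean

K = gauss_kernel(x, x) + np.diag(nugget)       # extrinsic + intrinsic
xnew = np.linspace(0.0, 1.0, 101)
pred = gauss_kernel(xnew, x) @ np.linalg.solve(K, ybar)   # predicted surface
```

Unlike deterministic kriging, the predictor does not interpolate the noisy sample means exactly; the nugget tells it how much of each observed deviation to attribute to simulation noise.
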

Hong, L. Jeff. 2009. Estimating quantile sensitivities. *Operations Research*

and

Hong, L. Jeff, and Guangwu Liu. 2009. Simulating sensitivities of conditional value-at-risk. *Management Science*

Quantiles of random performance distributions are often used as measures of risk in finance or to measure the quality of service in certain systems. Good quantile estimators have been available for a long time. But there are many situations, for example in optimization contexts, where the performance random variable depends on a parameter θ and we want to estimate the derivative of the quantile with respect to θ. This is much more difficult.

In the first article, Jeff Hong shows how to write this derivative as a conditional expectation, and constructs an asymptotically unbiased stochastic derivative estimator based on that. This estimator turns out not to be consistent, but Hong leverages it to construct a consistent estimator by batching data and averaging independent copies. He studies the large-sample behavior and derives a central limit theorem for the resulting estimator. The method is easy to implement and is applicable to a large class of problems. The construction is based on a new approach to estimate the derivative of the expectation of a discontinuous function, via perturbation analysis. This approach has already found applications in subsequent papers.
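
The batching-and-averaging construction can be illustrated on a toy model where the pathwise derivative is observable (our sketch in the spirit of Hong's estimator, not his exact formula; the function name and parameter values are made up): take Y = θX with X ~ Exp(1), so the α-quantile is −θ ln(1−α) and its θ-derivative is −ln(1−α).

```python
import numpy as np

rng = np.random.default_rng(7)

def quantile_derivative_batched(alpha=0.9, theta=2.0, n=1000, m=200):
    """Batched estimator of d q_alpha(Y(theta)) / d theta for Y = theta * X,
    X ~ Exp(1). Within each batch, take the pathwise derivative dY/dtheta = X
    attached to the sample alpha-quantile; a single batch is asymptotically
    unbiased but noisy, so average m independent batch estimates."""
    estimates = np.empty(m)
    rank = int(np.ceil(alpha * n)) - 1
    for j in range(m):
        x = rng.exponential(1.0, size=n)
        y = theta * x
        idx = np.argsort(y)[rank]     # index of the sample alpha-quantile
        estimates[j] = x[idx]         # its pathwise derivative dY/dtheta
    return estimates.mean()

# True value: q_alpha(Y) = -theta * ln(1 - alpha), so the derivative
# with respect to theta is -ln(0.1) ≈ 2.303 here.
print(quantile_derivative_batched())
```
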

The second article considers estimating the derivative for another popular risk measure, the conditional value at risk (CVaR), which is the expected value conditional on being larger than a certain quantile. Hong and Liu show how to write this derivative as a conditional expectation, propose an estimator, analyze its asymptotic properties, demonstrate its use in optimization settings where the CVaR appears in the objective or in constraints, and perform numerical studies.

As one of the nominators put it: “These papers are the kind of work that, in retrospect, we might wish we had written because of the creativity and clarity of thinking involved.”

When we simulate the sample path of a stochastic process defined by a stochastic differential equation, except in simple special cases, we must discretize time and replace any estimator defined as a function of the sample path by a function of the process values at the discretization points. Too crude a discretization gives too much bias in the estimator, whereas too fine a discretization makes the simulation too expensive to run.

In his paper “Multilevel Monte Carlo Path Simulation,” published in *Operations Research* in 2008, Michael Giles developed a method, based on clever multigrid ideas from numerical analysis, that provides an estimator with the same low bias and almost the same variance as an estimator based on a very fine grid, at a computing cost comparable to that of a crude discretization.

The idea is to select a decreasing sequence of discretization time steps and write the estimator for the finest discretization as the estimator for the coarsest one plus a sum of corrections, where each correction is the difference between the estimators at two successive levels. We first run the simulation at the coarse discretization with a large sample size, then estimate each correction term independently by simulating the difference with carefully synchronized common random numbers, using a sample size that decreases as the time step decreases. Much smaller sample sizes suffice for the corrections at the finer levels because these corrections have much smaller variances. The efficiency improvement is characterized in the paper by a general theorem showing that the computing effort often increases, as a function of one over the mean square error, at a slower rate than with the standard Monte Carlo method. The achieved rate depends on how the variance of the correction and the bias behave as functions of the time step, for the application at hand. A similar technique was proposed earlier by Stefan Heinrich for the different problem of estimating a function of a continuous parameter everywhere in a finite interval, by Monte Carlo, when only noisy observations of the function are available.
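
A minimal sketch of this telescoping construction, for estimating E[S_T] under geometric Brownian motion with Euler steps (our own illustration; the per-level sample sizes here are fixed heuristically rather than chosen by the paper's optimal allocation):

```python
import numpy as np

rng = np.random.default_rng(42)

def mlmc_level(l, n_samples, T=1.0, s0=1.0, mu=0.05, sigma=0.2):
    """Estimate the level-l correction E[P_l - P_{l-1}] for E[S_T] under
    geometric Brownian motion, using Euler steps and a coarse path driven
    by the same Brownian increments as the fine path."""
    nf = 2 ** l                      # fine grid: 2^l steps
    dtf = T / nf
    dW = rng.normal(0.0, np.sqrt(dtf), size=(n_samples, nf))
    sf = np.full(n_samples, s0)
    for k in range(nf):
        sf = sf + mu * sf * dtf + sigma * sf * dW[:, k]
    if l == 0:
        return sf.mean()             # coarsest level: plain estimator
    nc, dtc = nf // 2, 2 * dtf       # coarse grid: half as many steps
    dWc = dW[:, 0::2] + dW[:, 1::2]  # sum pairs of fine Brownian increments
    sc = np.full(n_samples, s0)
    for k in range(nc):
        sc = sc + mu * sc * dtc + sigma * sc * dWc[:, k]
    return (sf - sc).mean()          # correction between successive levels

# Telescoping sum: coarse level with many samples, corrections with fewer.
levels, base_n = 5, 200_000
estimate = sum(mlmc_level(l, base_n // 2 ** l) for l in range(levels + 1))
# True value: E[S_T] = s0 * exp(mu * T) ≈ 1.0513
```

The coupling reuses each pair of fine-path Brownian increments, summed, to drive the coarse path; this synchronization of common random numbers is what makes the correction variances small enough for the decreasing sample sizes.
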

The proposed method applies to various discretization schemes, including Euler and Milstein methods. In the prize-winning paper, its efficiency was illustrated with option pricing problems in finance. Subsequent papers have examined the combination with quasi-Monte Carlo, estimation of sensitivities, jump-diffusion processes, stochastic partial differential equations, and applications to other areas than finance.

This work is a significant breakthrough in Monte Carlo methods for stochastic differential equations, and is already having a large impact on simulation research in finance.

The 2010 Outstanding Simulation Publication Award was awarded to Paul Dupuis and Hui Wang (Brown University) and Kevin Leder (Harvard University) for their two papers: “Subsolutions of an Isaacs Equation and Efficient Schemes for Importance Sampling,” which appeared in *Mathematics of Operations Research* in 2007, and “Importance Sampling for Sums of Random Variables with Regularly Varying Tails,” which appeared in *ACM Transactions on Modeling and Computer Simulation* in 2007. The awards committee consisted of Peter Glynn (chair), Christos Alexopoulos, and Athanassios Avramidis.

2009-- **Avramidis, Athanassios (Thanos) and Pierre L'Ecuyer.** 2006. Efficient Monte Carlo and quasi-Monte Carlo option pricing under the variance-gamma model, *Management Science* 52(12): 1930-1944.

2008 -- **Asmussen, Søren, and Peter Glynn.** 2007. *Stochastic Simulation: Algorithms and Analysis*, New York: Springer.

2007 --

1. **Alexopoulos, Christos and David Goldsman.** 2004. To Batch Or Not To Batch?, *ACM TOMACS*, 14 (1): 76-114.

2. **Duchon, Philippe, Philippe Flajolet, Guy Louchard, and Gilles Schaeffer.** 2004. Boltzmann Samplers for Random Generation of Combinatorial Structures, *Combinatorics, Probability and Computing.*

2006 -- **Boesel, Justin, Barry Nelson, and Seong-Hee Kim.** 2003. Using Ranking and Selection to "Clean Up" After Simulation Optimization. *Operations Research* 51(5): 814-825.

2005 -- **Glasserman, Paul.** 2003. *Monte Carlo Methods in Financial Engineering*, New York: Springer.

2004 -- Not given.

2003 -- **Haas, Peter.** 2002. *Stochastic Petri Nets: Modelling, Stability, Simulation*, New York: Springer.

2002 -- **Asmussen, Søren, Klemens Binswanger, and Bjarne Højgaard.** 2000. Rare events simulation for heavy-tailed distributions. *Bernoulli* 6(2): 303-322.

2001 -- **Law, Averill M., and W. David Kelton.** 1999. *Simulation Modeling and Analysis*, 3rd edition, New York: McGraw-Hill.

2000 -- **Propp, James and David Wilson.** 1996. Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics. *Random Structures and Algorithms* 9: 223-252; and 1998. Coupling from the Past: A User's Guide. *Microsurveys in Discrete Probability*, Volume 41 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 181-192. American Mathematical Society.

1999 -- **L'Ecuyer, Pierre.** 1996. Combined Multiple Recursive Random Number Generators. *Operations Research* 44(5): 816-822; and Maximally Equidistributed Combined Tausworthe Generators. *Mathematics of Computation* 65(213): 203-213.

1998 -- **Fu, Michael, and Jian-Qiang Hu.** 1997. *Conditional Monte Carlo: Gradient Estimation and Optimization Applications*. Boston: Kluwer Academic Press.

1997 -- **Fishman, George.** 1996. *Monte Carlo: Concepts, Algorithms, and Applications*. New York: Springer-Verlag.

1996 -- **Shahabuddin, Perwez.** 1994. Importance sampling for the simulation of highly reliable Markovian systems. *Management Science* 40 (3): 333-352.

1995 -- **Niederreiter, Harald.** 1992. *Random number generation and quasi--Monte Carlo methods*. Philadelphia: Society for Industrial and Applied Mathematics.

1994 -- **Fujimoto, Richard M.** 1990. Parallel discrete event simulation. *Communications of the ACM* 33 (10): 30-53.

1993 -- **Fox, Bennett L., and Peter W. Glynn.** 1990. Discrete-time conversion for simulating finite-horizon Markov processes. *SIAM Journal on Applied Mathematics* 50 (5): 1457-1473.

1992 -- **Glasserman, Paul.** 1991. *Gradient estimation via perturbation analysis*. Boston: Kluwer Academic Publishers.

1991 -- **Whitt, Ward.** 1989. Planning queueing simulations. *Management Science* 35 (11): 1341-1366.

1990 -- **Heidelberger, Philip, Xi-Ren Cao, Michael A. Zazanis, and Rajan Suri.** 1988. Convergence properties of infinitesimal perturbation analysis estimates. *Management Science* 34 (11): 1281-1302.

1989 -- **Devroye, Luc.** 1986. *Non-uniform random variate generation*. New York: Springer-Verlag.

1988 -- **Zeigler, Bernard P.** 1984. *Multifacetted modelling and discrete event simulation*. London: Academic Press.

1987 -- **Schruben, Lee W.** 1983. Confidence interval estimation using standardized time series. *Operations Research* 31 (6): 1090-1108.

1986 -- Not given.

1985 -- **Wilson, James R., and A. Alan B. Pritsker.** 1984. Experimental evaluation of variance reduction techniques for queueing simulation using generalized concomitant variables. *Management Science* 30 (12): 1459-1472.

1984 -- Not given.

1983 (tie) -- **Meketon, Marc S., and Philip Heidelberger.** 1982. A renewal theoretic approach to bias reduction in regenerative simulations. *Management Science* 28 (2): 173-181.

1983 (tie) -- **Law, Averill M., and W. David Kelton.** 1982. Confidence interval procedures for steady-state simulations, II: A survey of sequential procedures. *Management Science* 28 (5): 550-562.

1982 -- **Lavenberg, Stephen S., and Peter D. Welch.** 1981. A perspective on the use of control variables to increase the efficiency of Monte Carlo simulations. *Management Science* 27 (3): 322-335.

1981 -- **Schruben, Lee W.** 1980. A coverage function for interval estimators of simulation response. *Management Science* 26 (1): 18-27.