The Stochastic Models & Reinforcement Learning Area welcomes manuscripts that deal with the computational aspects of models of systems in which uncertainty is a central concern, and with methods that evaluate or optimize the performance of such systems. Computation is a critical dimension in dealing with large-scale stochastic systems that defy exact analytical solution; the challenge is to develop efficient, effective, and reliable algorithms for such systems. Papers of interest develop new methodologies of this kind; conduct novel and insightful analyses of existing methods; and report on timely, important, and innovative applications based on data or realistic parameter values.

The scope of this area includes a variety of topics. Examples include Markovian modeling of stochastic systems; aggregation approaches for Markov chains, such as those that arise in data-driven applications featuring huge data sets; and reinforcement learning, approximation, and bounding techniques for intractable Markov decision processes and stochastic dynamic programs. (Papers that focus on supervised or unsupervised learning should typically be submitted to one of the other areas of the journal.) Appealing manuscripts deal with models from both traditional and emerging contextual domains, such as inventory, supply-chain, and service management; revenue management, pricing, and market analytics; social networks; energy; and financial engineering, computational finance, and risk management. Clear and concise exposition and rigorous execution are defining elements of successful articles.
The Institute for Operations Research and the Management Sciences