INFORMS Open Forum

Call for Papers Annals of Operations Research


    Posted 05-20-2022 08:04

    Call for Papers

    Annals of Operations Research

     

    Special Issue: Interpretable Machine Learning and Explainable Artificial Intelligence

     Closing Date: March 31, 2023

    _______________________________________________________________________

     Annals of Operations Research seeks submissions for a special issue on Interpretable Machine Learning and Explainable Artificial Intelligence. The deadline for submission is March 31, 2023.

     

    Machine learning (ML) has garnered significant interest in recent years due to its applicability to a wide range of complex problems. There is increasing recognition that ML models, in addition to making predictions, can reveal information about relationships among domain data items; this property is commonly referred to as the interpretability of the model. A similar development is occurring in the artificial intelligence (AI) research community, which has concentrated on explainable AI (XAI) along the dimensions of algorithmic interpretability, explainability, transparency, and accountability of algorithmic decisions. ML approaches may be classified as white-box or black-box. White-box techniques, such as rule learners and inductive logic programming, produce explicit models that are intrinsically interpretable, while black-box techniques, such as (deep) neural networks, produce opaque models. With the growing use of ML, there have been significant societal concerns about deploying black-box models for decisions that require an explanation of domain relationships. The ability to express information obtained from ML models in human-comprehensible terms, that is, interpretability, has attracted considerable attention in academia and industry. These interpretations have found applications in healthcare, transportation, finance, education, policymaking, criminal justice, and other domains. As the field evolves, one aim of ML research is the development of interpretable techniques and models that explain themselves and their output.

     

    This special issue invites papers on advancements in interpretable ML from the modeling and learning perspectives. We are looking for high-quality, original articles presenting work on the following topics (the list is not exhaustive):

    • Probabilistic graphical model applications
    • Explainable artificial intelligence
    • Rule learning for interpretable machine learning
    • Interpretation of black-box models
    • Interpretability in reinforcement learning
    • Interpretable supervised and unsupervised models
    • Interpretation of neural networks and ensemble-based methods
    • Interpretation of random forests and other ensemble models
    • Causality of machine learning models
    • Novel applications requiring interpretability
    • Methodologies for measuring interpretability of machine learning models
    • Interpretability-accuracy trade-off and its benchmarks

       

      Instructions for authors can be found at: https://www.springer.com/journal/10479/submission-guidelines

       

  Authors should submit a cover letter and a manuscript by March 31, 2023, via the Journal's online submission site. If you have not previously submitted a paper through Springer's web-based system, Editorial Manager, please see the Author Instructions on the website. When prompted for the article type, please select Original Research. On the Additional Information screen, you will be asked whether the manuscript belongs to a special issue; please select yes and choose the special issue's title, Interpretable Machine Learning and Explainable Artificial Intelligence, to ensure that it is reviewed for this special issue. Manuscripts submitted after the deadline may not be considered for the special issue and, if accepted, may be transferred to a regular issue.

   Papers will be subject to a rigorous review process under the supervision of the Guest Editors, and accepted papers will be published online individually before print publication.

       

      Special Issue Guest Editors

       Kazim Topuz

  Assistant Professor of Operations Management & Business Analytics, The University of Tulsa
  kazim-topuz@utulsa.edu

       

      Akhilesh Bajaj

  Professor of CIS, The University of Tulsa
  akhilesh-bajaj@utulsa.edu

       

      Kristof Coussement

  Professor of Business Analytics, IESEG School of Management
  k.coussement@ieseg.fr

       

      Timothy L. Urban

  Professor Emeritus of Operations Management, The University of Tulsa
  timothy-urban@utulsa.edu

       



      ------------------------------
      Kazim Topuz
      Assistant Professor of Business Analytics
      University of Tulsa
      Tulsa OK
      ------------------------------