ISR Special Issue on Compassionate AI
Special Issue Editors
Rajiv Kohli (William & Mary), rajiv.kohli@mason.wm.edu
Meng Li (University of Houston), mli@bauer.uh.edu
Ting Li (Erasmus University Rotterdam), tli@rsm.nl
Paul A. Pavlou (University of Miami), pavlou@miami.edu
Due Date: January 20, 2026
Submissions will be accepted starting January 10, 2026
Background
Although Artificial Intelligence (AI), including its generative and agentic forms, continues to reshape industries and societies, it remains largely devoid of human qualities; today's AI is often characterized as mechanistic and even "sterile." Current AI design paradigms focus on maximizing efficiency, accuracy, and computational sophistication, often at the expense of the emotional, social, and ethical dimensions that are inherent in human interactions. While AI is effective at processing data and automating tasks, it is often impersonal, detaching it from the human experience and undermining its adoption.
To address these shortcomings, AI development must be rethought toward an approach that emphasizes compassionate design, integrating ethical considerations, cultural sensitivity, and emotional intelligence into the core of AI systems. Embedding these principles ensures that AI is not only technically proficient but also capable of understanding and responding to the diverse needs of its human users, so that AI advances contribute meaningfully to humanity. The need for compassion-centered AI design has never been more pressing.
Compassionate AI refers to systems that not only recognize human emotions and suffering but also proactively seek to alleviate distress, promote well-being, and uphold human dignity. Compassionate AI systems are envisioned as tools that complement human decision-making by providing support that is empathetic, inclusive, and contextually aware. Unlike empathy, which involves understanding others' emotional states, or sympathy, which evokes feelings of sorrow, compassion combines emotional awareness with a purposeful intention to help other human beings (Raman & McClelland, 2019; Chatterjee et al., 2021).
While AI has increased efficiency, personalization, and cost reduction in many settings, it has also raised ethical, legal, and societal concerns, particularly in high-stakes domains such as healthcare, education, crisis response, and social services. These concerns are magnified when AI systems, lacking moral agency, make decisions that affect vulnerable populations. In many applications, ranging from healthcare to finance and customer service, the absence of a humanistic perspective can result in interactions that feel mechanistic and unresponsive to the complexities of individual circumstances, and can propagate (or even amplify) existing societal biases.
Compassionate AI addresses this challenge by embedding empathy, care, and contextual sensitivity into the design, deployment, and governance of AI systems. It envisions AI not as a mechanistic and utilitarian tool, but as a partner that embodies humanity's highest moral aspirations. Ultimately, compassionate AI is both an ethical imperative and a catalyst: one that rehumanizes AI, ensuring that our AI-led future uplifts humanity, nurtures societal well-being, and reflects our collective commitment to a more empathetic and better world.
Emerging applications across several settings underscore the growing relevance of compassionate AI toward the betterment of society (Graves & Compson, 2024):
- In social services, a mobile app, Tarjimly, connects refugees with volunteer interpreters, leveraging AI to improve translation and the delivery of essential assistance.
- In crisis management, the International Rescue Committee uses AI to deliver crucial information to displaced people, helping to combat misinformation during crises.
- In mental health, chatbots such as Woebot and other AI-driven virtual therapists use natural language processing and sentiment analysis to provide empathetic support to users experiencing stress, anxiety, or depression by mimicking compassionate human interactions. They offer a first line of emotional assistance and guide people who suffer from mental health issues to seek professional (human) help.
- In customer service, companies are deploying AI chatbots with emotional recognition, capable of detecting customer emotions through text and voice analysis, enabling
more empathetic interactions (Liu-Thompkins et al., 2022).
- In education, AI can support personalized learning with tutoring systems equipped with emotion recognition algorithms that gauge learners' frustration or confusion in real time, helping students navigate the emotional challenges of learning.
- In human resources, AI tools are increasingly used in workplace settings to monitor employee well-being. By analyzing communication patterns and employee feedback, compassionate AI can identify early signs of distress, burnout, or disengagement to foster a more empathetic workplace culture and enhance overall job satisfaction.
These examples show that compassionate AI is not an abstract ideal but a concrete design and performance criterion that can be measured, tracked, and embedded into today's AI systems. A central tension in the development of AI lies in its origin: AI systems are typically designed as rational, logical, and optimal agents, devoid of human emotion. This has led to concerns that integrating compassion into AI might compromise efficiency. Yet, if AI aims to replicate, and ideally augment, human intelligence, it must incorporate both logic and compassion. Empathy is a desirable trait and a core component of human cognition (Ovsyannikova et al., 2025). As such, there is a pressing need to
(i) identify the challenges posed by compassionate AI and strategies to mitigate them;
(ii) find a balance between AI's efficiency and a human-centered emphasis on empathy;
(iii) create theoretical frameworks that guide the design of AI systems grounded in compassion.
To harness AI's full potential responsibly, its development must be grounded in principles of empathy, kindness, and a deep understanding of human values (Calvo & Peters, 2014).
Compassionate AI goes beyond traditional "ethical AI" notions by actively prioritizing care in its design, implementation, and real-world deployment. For example, empathetic algorithms can promote user-centricity and cultural sensitivity during the design and adoption phases. In implementation, systems should be built to minimize harm while promoting the well-being of human users. At the user level, AI applications should foster trust, alleviate suffering, avoid the proliferation of social biases, and uphold inclusivity (Young et al., 2023).
The theoretical underpinnings of compassionate AI draw from a wide range of disciplines (Tsui, 2013). Affective computing (Picard, 1997) explores how AI systems can recognize and respond to human emotions, while empathy-based models such as the Interpersonal Reactivity Index (Davis, 1983) offer guidelines for simulating emotional care and understanding. Ethical design frameworks, such as the principles of beneficence, non-maleficence, autonomy, fairness, and explicability (Floridi & Cowls, 2019), provide actionable guidance to ensure AI remains responsible and human-centered. In healthcare, Kerasidou (2020) highlights the importance of incorporating compassion into AI systems to enrich patient experiences and improve outcomes. Philosophical traditions such as virtue ethics and care ethics further inform the development of compassionate AI by emphasizing well-being, empathy, and moral responsibility in human-AI interactions.
Recent advancements in generative AI and agentic AI technologies have broadened the potential of compassionate AI, particularly in settings such as healthcare, education, crisis management, and social services (Stade et al., 2024). Generative AI enables emotionally attuned, personalized, and context-sensitive interactions that can meaningfully enhance the quality of life for individuals and communities. Agentic AI brings autonomy and purpose-oriented behavior, allowing AI systems to proactively augment complex human decision-making. In healthcare, for example, compassionate AI is used to enhance mental health care, provide empathetic end-of-life counseling, and offer ongoing patient support, establishing a new, higher standard for technology-enhanced patient-centered care. Agentic AI systems can autonomously plan and execute tasks, such as monitoring patient conditions or adjusting treatment plans based on real-time biometric data. Together, these technologies foster a powerful integration of human empathy with intelligent autonomy, advancing the credibility, trustworthiness, and overall effectiveness of AI systems (Inzlicht et al., 2024).
Special Issue Focus
This special issue invites submissions of rigorous and creative scholarly work related to compassionate AI. Relevant areas for this SI include, but are not limited to:
- Healthcare: Compassionate AI can enhance patient care by predicting adverse events, assisting with end-of-life decision-making, and providing emotional support to
patients and families. For instance, AI-powered chatbots and virtual assistants can offer non-judgmental, empathetic support to individuals facing mental health
challenges, while monitoring systems can aid older adults by alleviating social isolation through companionship and daily activity tracking.
- Crisis Management: During natural disasters or emergencies, compassionate AI systems can analyze real-time data, such as social media posts, to identify distressed
individuals and provide timely assistance. These systems can also coordinate resource allocation and deliver emotionally sensitive communications to those affected.
- Education: AI systems can transform education by personalizing learning experiences, identifying individual strengths and weaknesses, and adapting instruction to meet diverse student needs. Additionally, empathetic AI-driven feedback
mechanisms can support students' emotional well-being, fostering engagement and boosting motivation in learning environments. These AI technologies can also play a pivotal role in improving access to education for underserved populations.
- Social Services: Compassionate AI can support victims of trauma or abuse by offering non-judgmental, understanding virtual assistance tailored to their needs.
These AI systems can also connect individuals with critical resources, providing a safe and accessible channel for seeking help.
- Customer Service: AI-powered chatbots can enhance customer experiences by providing compassionate and empathetic responses to inquiries, reducing frustration, and increasing satisfaction. By recognizing and adapting to customer emotions, these systems can de-escalate tense situations and build stronger customer relationships.
- Human Resources: AI can monitor employee well-being by detecting signals of stress or burnout and proactively offering support resources. Furthermore,
emotionally aware AI systems can improve workplace dynamics by facilitating more compassionate and effective communication between managers and team members.
Examples of topics/themes that fit this special issue include, but are not limited to:
- Redefining Compassion in the Age of AI: How should compassion be defined in the context of AI? How does it differ from other concepts, such as empathy, ethics, and
responsibility, and how can these be integrated into AI systems? What can historical and philosophical frameworks teach us about embedding compassion in AI?
- Embedding Compassion in AI Design: How can compassion be operationalized, embedded, and measured in AI designs? What role do algorithms, data, and societal and cultural considerations play in designing compassionate AI systems?
- Measuring Compassion in AI Systems: What metrics can be developed to evaluate the level of compassion in AI interactions? How can these metrics be standardized across diverse AI applications and settings?
- Governance of Compassionate AI: What governance models are needed to ensure AI systems incorporate compassion while balancing efficiency and technical
sophistication?
- Accountability in Compassionate AI: When AI systems fail to exhibit compassion or cause harm, who should be held accountable: developers, users, or regulatory and governing bodies? How can accountability be embedded in the design of AI systems?
- Compassion in Practical Settings: How can compassionate AI improve patient outcomes in mental health, chronic disease management, and end-of-life care? How can AI systems identify and respond to distressed individuals during emergencies, ensuring that compassionate communication is maintained? How can compassionate AI systems address educational inequities by supporting vulnerable students?
- Human-AI Collaboration: Under what conditions can AI systems enhance human compassion during collaboration, such as in healthcare, education, or social services? How can agentic AI systems that exhibit autonomy and technical sophistication balance compassionate decision-making with achieving task-specific objectives?
- Downsides of Compassionate AI: What are the potential risks of overemphasizing compassion in AI systems, such as reduced technical sophistication, reduced
efficiency, human manipulation, and even cultural insensitivity?
- Cultural Contexts of Compassion: How do cultural variations in the understanding and expression of compassion affect the design and use of AI systems?
- Behavioral Responses to Compassionate AI: How do users perceive and respond to compassionate AI, and how does it affect user trust, satisfaction, and adoption?
- Societal Impact: How can compassionate AI contribute to global challenges, such as poverty, inequality, and climate change? How can AI systems improve healthcare? How can AI systems transform customer service interactions by adapting to user emotions and needs? How can AI systems support employee well-being and foster a more inclusive and empathetic workplace?
- Scalability of Compassionate AI: What strategies can ensure that compassionate AI systems scale effectively while maintaining empathy and care? How can autonomy and efficiency in AI systems be aligned with the principles of compassion to create more responsible and effective AI technologies?
Questions may be sent to any of the Special Issue guest editors (please cc all guest editors):
- Rajiv Kohli (rajiv.kohli@mason.wm.edu) is the John N. Dalton Professor of Business in the Raymond A. Mason School of Business at William & Mary. Dr. Kohli was ranked the #1 scholar in a recent Health Information Technology thought leadership study. His research has been published in MIS Quarterly, Management Science, Information Systems Research, MIS Quarterly Executive, Journal of Management Information Systems, Journal of Operations Management, and Decision Support Systems, among others. Dr. Kohli is a Senior Editor for Information Systems Research. He has also served as a Senior Editor for MIS Quarterly and as a member of the editorial boards of several international journals. He was the Project Leader of Decision Support Services at Trinity Health.
- Meng Li (mli@bauer.uh.edu) is the founding director of the Human-Centered AI Institute and the Bauer Chair of AI at the C.T. Bauer College of Business, University of Houston. His research has appeared in Management Science, Operations Research, Manufacturing and Service Operations Management, Production and Operations Management, Nature Sustainability, Journal of Operations Management, and Strategic Management Journal, among others. His research won the POMS College of Operational Excellence Best Paper Competition and took second place in the JFIG Paper Competition. He is a guest editor for Production and Operations Management and Decision Sciences Journal, a Senior Editor for Production and Operations Management, and a Department Editor for the Journal of Operations Management (Technology Management) and Decision Sciences Journal.
- Ting Li (tli@rsm.nl) is Professor of Digital Business at Rotterdam School of Management, Erasmus University, where she leads the Information Systems group. She is a founding member and the Academic Director of Digital Business Practice of the Erasmus Centre for Data Analytics, and also heads the Immersive Tech & AI Lab. Her research examines the strategic use of information and digital technologies, focusing on their economic impacts on consumers, organizations, and society. Her interdisciplinary research has been supported by major grants from national science foundations and multinational corporations. Ting's work has been published in leading journals, including Management Science, MIS Quarterly, Information Systems Research, Nature Communications, Production and Operations Management, and Harvard Business Review. In 2017, Ting was named by Poets & Quants as one of the Top 40 Professors Under 40. She holds a Ph.D. in Management Science from Erasmus University and an MSc in Computational Science from the University of Amsterdam.
- Paul A. Pavlou (pavlou@miami.edu) is the Dean of the University of Miami Patti and Allan Herbert Business School. He is also the Leonard M. Miller University Chair Professor. His research has been cited more than 100,000 times according to Google Scholar, and Thomson Reuters identified him among the "World's Most Influential Scientific Minds" based on an analysis of Highly Cited Researchers. Paul was ranked No. 1 globally in publications in top Information Systems journals from 2010 to 2016. His research has appeared in Management Science, Information Systems Research, MIS Quarterly, Journal of Marketing, Journal of Marketing Research, Journal of the Academy of Marketing Science, Decision Sciences, Journal of Management Information Systems, and Journal of the Association for Information Systems, among others.
Associate Editors and Editorial Review Board
Associate Editors
Sutirtha Chatterjee (University of Nevada, Las Vegas)
Monica Chiarini Tremblay (William & Mary)
Jennifer Claggett (Wake Forest University)
Yulin Fang (HKU Business School)
Shu He (University of Florida)
Nina Huang (University of Miami)
Tina Blegind Jensen (Copenhagen Business School)
Hyeokkoo Eric Kwon (Nanyang Technological University)
Gwanhoo Lee (American University)
Ilan Oshri (University of Auckland)
Matti Rossi (Aalto University School of Business)
Mochen Yang (University of Minnesota)
Editorial Review Board
Uttara Ananthakrishnan (Carnegie Mellon University)
Ofir Ben-Assuli (Ono Academic College)
Zhi Cao (Sichuan University)
Andreas Fügener (University of Cologne)
Ambica Ghai (IIM Lucknow)
Xitong Guo (Harbin Institute of Technology)
Nakul Gupta (MDI Gurgaon)
Dominik Gutt (Erasmus University Rotterdam)
Brian Han (University of Illinois Urbana-Champaign)
Jove Hou (University of Houston)
Sarah Lebowitz (University of Virginia)
Reza Mousavi (University of Virginia)
Dandan Qiao (National University of Singapore)
Liangfei Qiu (University of Florida)
Kai Riemer (University of Sydney)
Sujeet Kumar Sharma (Indian Institute of Management, Nagpur)
Leiser Silva (University of Houston)
Sriram Somanchi (University of Notre Dame)
Shujing Sun (University of Texas at Dallas)
Wenqi Wei (University of Surrey)
Edgar Whitley (London School of Economics)
Review Process
Authors must submit all manuscripts through the Information Systems Research online submission platform no later than January 20, 2026. The editorial team will screen all submissions to determine their suitability for the special issue. Only manuscripts deemed to have a reasonable chance of acceptance under an accelerated review timeline will proceed to the next stage. Submissions that pass this initial screening will undergo a maximum of two rounds of review.
In consultation with the Associate Editors, the Guest Editors will make the final acceptance decisions. Authors must follow a strict timeline for both submission and revisions. Rejected manuscripts may be submitted as regular submissions to Information Systems Research only if the special issue rejection letter explicitly recommends this course of action. Such recommendations will be made in exceptional circumstances, such as when a manuscript demonstrates strong potential for acceptance but is deemed thematically unsuitable for the special issue or requires extensive revisions that cannot be completed within the accelerated review timeline.
All submissions from authors with a conflict of interest with any of the Guest Editors will be managed by the Editor-in-Chief or other designated editors to ensure impartiality and
fairness.
Projected Timeline
Full Paper Submission: January 20, 2026
First Round of Editorial Decisions: April 15, 2026
Workshop at the University of Miami: May-June 2026
Revisions Due: August 31, 2026
Second Round of Editorial Decisions: December 31, 2026
Final Revisions Due: February 28, 2027
Final Editorial Decisions: May 31, 2027
References
Calvo, R. A., & Peters, D. (2014). Positive computing: Technology for wellbeing and human potential. MIT Press.
Chatterjee, S., Chakraborty, S., Fulk, H. K., & Sarker, S. (2021). Building a compassionate workplace using information technology: Considerations for information systems research. International Journal of Information Management, 56, 102261.
Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of Personality and Social Psychology, 44(1), 113–126.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 1–15.
Graves, M., & Compson, J. (2024). Compassionate AI for moral decision-making, health, and well-being. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7, 520–533.
Inzlicht, M., Cameron, C. D., D'Cruz, J., & Bloom, P. (2024). In praise of empathic AI. Trends in Cognitive Sciences, 28(2), 89–91.
Kerasidou, A. (2020). Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bulletin of the World Health Organization, 98(4), 245.
Liu-Thompkins, Y., Okazaki, S., & Li, H. (2022). Artificial empathy in marketing interactions: Bridging the human-AI gap in affective and social customer experience. Journal of the Academy of Marketing Science, 50(6), 1198–1218.
Morrow, E. M., Ross, F., Zidaru, T., Patel, K. D., Mason, C., Ream, M., & Stockley, R. (2023). Artificial intelligence technologies and compassion in healthcare: A systematic scoping review. Frontiers in Psychology, 13, 971044.
Ovsyannikova, D., de Mello, V. O., & Inzlicht, M. (2025). Third-party evaluators perceive AI as more compassionate than expert humans. Communications Psychology, 3(1), 4.
Picard, R. W. (1997). Affective computing. MIT Press.
Raman, R., & McClelland, L. (2019). Bringing compassion into information systems research: A research agenda and call to action. Journal of Information Technology, 34(1), 2–21.
Stade, E. C., Stirman, S. W., Ungar, L. H., Boland, C. L., Schwartz, H. A., Yaden, D. B., ... & Eichstaedt, J. C. (2024). Large language models could change the future of behavioral healthcare: A proposal for responsible development and evaluation. NPJ Mental Health Research, 3(1), 12.
Tsui, A. S. (2013). 2012 Presidential Address: On compassion in scholarship: Why should we care? Academy of Management Review, 38(2), 167–180.
Young, A. G., Majchrzak, A., Leidner, D. E., Niederman, F., Raman, R., Jarvenpaa, S. L., & Chatterjee, S. (2023). Panel: Cultivating compassionate workplaces: Should IS research claim a seat at the table? [Conference panel]. International Conference on Information Systems (ICIS), Austin, TX, United States.