INFORMS Open Forum

  • 1.  Can Artificial Intelligence Agents Develop Trust With Humans? New Research Says Yes!

    Posted 02-21-2024 10:19

    BALTIMORE, MD, February 21, 2024 – Artificial intelligence (AI) has made great strides in the past few years, even months. New research in the INFORMS journal Management Science finds that AI agents can build trust with humans in much the same way that people build trust with one another.

    Click here to read the full press release.



    ------------------------------
    Ashley Smith
    Public Relations Specialist
    INFORMS Public Affairs Coordinator
    Baltimore MD
    ------------------------------


  • 2.  RE: Can Artificial Intelligence Agents Develop Trust With Humans? New Research Says Yes!

    Posted 02-23-2024 11:25

    I'm mostly responding to the title, but the article is not very detailed, so here goes...

    I believe that trust is a human-human connection, and that it should not be claimed for a human-AI connection. Let me explain what I mean by connection. Connection between humans (human-human) is heart-to-heart, mind-to-mind, and soul-to-soul. Connection between a human and an AI is none of these (mind, body, or spirit) today, nor should it ever be. This is a poor title that should not pass an ethics test, and that just highlights the need for discussion and legislation.

    This will be a very important topic to continue discussing over the next decade, and I've stated how I'll approach the discussion. Human-human connections should NOT EVER be mistaken as something a machine can replace or replicate.



    ------------------------------
    Aaron Hussey
    Founder & Principal
    Integral Analytics, LLC
    Concord NC
    ------------------------------



  • 3.  RE: Can Artificial Intelligence Agents Develop Trust With Humans? New Research Says Yes!

    Posted 02-24-2024 08:00
    Edited by Rahul Saxena 02-24-2024 12:50

    Quite an interesting subject, so I read the paper.

    The experiment itself, the Trust Game (designed by Berg et al.), is currently in use as the basis of an investment scam in India.
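
    For readers unfamiliar with the game, here is a minimal sketch of its mechanics in Python (my own illustration, not code from the paper). The parameters follow the standard Berg et al. design, an endowment of 10 and a multiplier of 3; the two fractions are hypothetical strategy choices:

        # One round of the Berg et al. Trust Game (illustrative sketch).
        MULTIPLIER = 3

        def trust_game(endowment, send_fraction, return_fraction):
            """The investor sends a share of the endowment, the experimenter
            multiplies it, and the trustee returns a share of the proceeds."""
            sent = endowment * send_fraction          # investor's act of trust
            received = sent * MULTIPLIER              # multiplied by the experimenter
            returned = received * return_fraction     # trustee's trustworthiness
            investor_payoff = endowment - sent + returned
            trustee_payoff = received - returned
            return investor_payoff, trustee_payoff

        # A trusting investor meets a moderately trustworthy trustee:
        print(trust_game(10, send_fraction=0.5, return_fraction=0.5))  # (12.5, 7.5)

    The point of the design is that sending anything at all is irrational in the one-shot equilibrium, so the amount sent is read as a measure of trust, and the amount returned as a measure of trustworthiness.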

    That DNN-based AIs act like humans when playing the Trust Game is interesting, though I don't see it as very unexpected.

    As Aaron Hussey puts it, the concept of "trust" in an AI is in itself problematic and needs to be anchored better. An AI can be updated at any time, so that it can switch from taking an infinite-iteration view to any other view (including a one-shot-game viewpoint, or whatever a scamster directs). The human notion of trust is therefore harder to apply to an AI that you don't own.
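
    To make the one-shot versus infinite-iteration point concrete, here is a back-of-the-envelope sketch (again my own illustration, assuming a grim-trigger strategy and stationary stakes): a trustee who returns r out of a tripled stake 3s each round prefers to keep cooperating only while its discount factor delta stays at or above r/(3s). An update that silently shortens the AI's horizon drops it below that threshold, and the rational move flips to keeping everything:

        # When does a trustee keep honoring trust? (Grim trigger: one
        # defection and the investor never invests again.)
        #   Cooperate forever: (3s - r) per round = (3s - r) / (1 - delta)
        #   Defect once:       3s now, then nothing
        #   Cooperation holds iff delta >= r / (3s)
        def cooperation_sustainable(sent, returned, discount):
            return discount >= returned / (3 * sent)

        print(cooperation_sustainable(sent=5, returned=7.5, discount=0.9))  # True
        print(cooperation_sustainable(sent=5, returned=7.5, discount=0.3))  # False: one-shot logic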

    If you rent an AI (as many users of AI-as-an-API, or of AI embedded in SaaS, do), there is some sense of incentive alignment with the owners of the AI. In that sense I can say "trust my AI", because my business goal is to keep you a happy customer. In that sense, too, there is an implicit trust handoff: trust in the humans transfers to trust in the AI.

    For a provider of commercial AI, the trustworthiness of an untrammeled LLM or DNN becomes problematic. Trust in constrained AIs can be quite straightforward, e.g., LP optimizers or natural-language parsers (including an LLM used in that role) that tell you what they understood your question to be and then answer that question. In this sense, I'm quite interested in the subject of trustworthy AI.
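
    A toy sketch of that "constrained AI" pattern, with every name and the tiny rule-based parser invented purely for illustration: the system first states what it understood, then answers only that structured reading, never the raw text.

        def parse_question(text):
            """Map a free-form question to a structured, auditable query."""
            if "maximize" in text.lower():
                return {"task": "optimize", "sense": "max"}
            return {"task": "lookup", "topic": text.strip("? ")}

        def answer(query):
            """Answer only the structured query."""
            if query["task"] == "optimize":
                return "Formulating an LP with a maximization objective."
            return "Looking up: " + query["topic"]

        question = "Maximize throughput on line 2?"
        understood = parse_question(question)
        print("I understood your question as:", understood)  # the trust-building step
        print(answer(understood))

    The echo step is what makes the trust tractable: you can audit what the system thought you asked before you rely on its answer.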



    ------------------------------
    Rahul Saxena
    FrogData.com
    ------------------------------