INFORMS Open Forum

  • 1.  ChatGPT Regulations - share your thoughts

    Posted 29 days ago

    Have you had a chance to read Patricia Neri's article entitled "Chasing Tomorrow: ChatGPT Regulations and Models in a Rapidly Changing Landscape" in the March edition of OR/MS Today? What are your thoughts? How do you feel about ChatGPT and other generative AI models? What kinds of regulations do you think need to be in place? Please share your opinions.



    ------------------------------
    Jill Capello
    Membership Associate
    INFORMS
    Catonsville MD
    jcapello@informs.org
    ------------------------------


  • 2.  RE: ChatGPT Regulations - share your thoughts

    Posted 29 days ago

    Thanks Jill.

    I really enjoyed this quick read in OR/MS Today.
    I'm interested in hearing some good discussion on this topic, so I'll offer the following to help kick-start it.

    Can you really put GenAI back in the bag?
    I immediately thought of the Super Bowl commercial by Discover, the one with the following dialogue:

    ...
    Maya:  Wait, are you a robot?
    Jennifer Coolidge: ......how would I prove that I'm not?
    ...

    Just a few questions of many:

    With a significant lack of regulations on GenAI, how do we:
    - deal with fake news in the looming 2024 elections around the world?
    - detect and/or reduce bias being perpetuated by biased training data?
    - keep a skeptical attitude without eroding trust in new uses of GenAI?
    - accept and use AI responsibly?



    ------------------------------
    Nicholas Ulmer
    CANA
    Pacific Grove CA
    ------------------------------



  • 3.  RE: ChatGPT Regulations - share your thoughts

    Posted 29 days ago

    I'm not sure GenAI is a particular problem in the "fake news" arena (other than the extent to which it facilitates the creation of deepfake images and videos). We've had bots spewing fake news for quite a while now. The core issue with fake news is not an excess of artificial intelligence; it's a shortage of human intelligence.

    Biased training data leading to biased models is indeed a concern, one I recall being addressed in a plenary (I think) talk at a recent annual meeting. The recent flap about Gemini's "woke" images has me wondering if generative AI has some way of introducing biases even when trained on data that we would consider "unbiased". If the image problem was not the result of a programmatic attempt to counter biased training data that somehow ran a bit amok, then is it possible that an AI's "hallucinations" somehow feed back and strengthen the propensity to hallucinate? (Given the current political climate in the US, the last thing we need is more self-reinforcing hallucinations.)



    ------------------------------
    Paul Rubin
    Professor Emeritus
    Michigan State University
    East Lansing MI
    ------------------------------



  • 4.  RE: ChatGPT Regulations - share your thoughts

    Posted 28 days ago

    I take as a given that any regulation will be side-stepped by some nefarious researchers and content creators. Therefore I tend to favor establishing standards for disclosure and certification: disclosure means that the content states what AI-generated material was employed, and in what way; certification means some form of tracking that can be followed to verify what was disclosed. The goal is then to get people to trust only content with full disclosure and certification.



    ------------------------------
    Barry Nelson
    Walter P. Murphy Professor
    Northwestern University
    Evanston IL
    ------------------------------



  • 5.  RE: ChatGPT Regulations - share your thoughts

    Posted 28 days ago

    In addition to addressing the issue of regulating Generative AI (e.g., ChatGPT), Patricia Neri's article provides an insightful summary of the technology's many uses and potential as well as its limitations.

    As for regulations, they should focus on the small set of Generative AI applications that are nefarious in nature. Even then, the regulations should focus on what is produced and distributed rather than on how it is created, as bad outcomes may result from a combination of AI and human actions. The regulations could indicate what is prohibited (unlawful) or merely require disclosure. For the vast majority of Generative AI applications, regulatory bodies should stay out of it.

    As for eliminating bias in training data, that is a problem better left to the free market and technologists rather than to an oversight organization, which is bound to botch it.

    In summary, regulate nefarious activities but do not broadly regulate the technology. And don't frame the problem as "How do we regulate AI?" any more than you would ask "How do we regulate human beings?" Yes, there should be regulations that apply to some human and some AI activity, but for the vast majority of our activities, we do what we want and what our employers want. This is particularly important with a rapidly changing technology.



    ------------------------------
    John Milne
    Clarkson University
    Potsdam, NY
    jmilne@clarkson.edu
    ------------------------------



  • 6.  RE: ChatGPT Regulations - share your thoughts

    Posted 28 days ago

    Interesting article, but I am not keen on regulation. I'm not sure what it is that we should be afraid of as a society. What does come to mind is Pilate: quid est veritas? (What is truth?)



    ------------------------------
    Alberto Aparicio
    Data Analyst
    Charitable Adult Rides & Services, Inc.
    San Diego
    ------------------------------