INFORMS Open Forum

  • 1.  National artificial intelligence R&D strategic plan from the White House

    Posted 05-31-2023 10:57

    The White House released the national AI R&D strategic plan and is soliciting public input on critical AI issues. You can weigh in by June 7.

    FACT SHEET: Biden-Harris Administration Takes New Steps to Advance Responsible Artificial Intelligence Research, Development, and Deployment | OSTP | The White House

    From the linked fact sheet: "Today, the Biden-Harris Administration is announcing new efforts that will advance the research, development, and deployment of responsible artificial intelligence (AI) that protects individuals' rights and safety and delivers results for the American people. AI is one of the most powerful technologies of our time, with broad applications."



    ------------------------------
    Laura Albert
    INFORMS President
    Professor and David H. Gustafson Chair
    University of Wisconsin-Madison
    ------------------------------


  • 2.  RE: National artificial intelligence R&D strategic plan from the White House

    Posted 06-02-2023 10:29

    Since AI is part of a suite of tools for cognitive work, it is critical for INFORMS to have input: the society has the greatest depth and breadth of experience with these tools. A good place to start is Herbert Simon's paper in Interfaces (now the INFORMS Journal on Applied Analytics), "Two Heads Are Better than One: The Collaboration between Artificial Intelligence and Operations Research."



    ------------------------------
    Ken Fordyce
    Director, Analytics Without Borders
    Arkieva
    Wilmington DE
    ------------------------------



  • 3.  RE: National artificial intelligence R&D strategic plan from the White House

    Posted 06-04-2023 04:05
    Edited by Rahul Saxena 06-04-2023 04:40

    The strategic intent to make AI so cheap and easy that everyone can use it runs up against the desire to add safeguards. "If only wealthy hospitals can take advantage of AI systems, the benefits of these technologies will not be equitably distributed" versus the five core protections:

    1. Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
    2. Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
    3. Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
    4. Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
    5. Alternative Options: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

    Safeguards seem to add costs to AI, which goes against making AI cheap and widespread. If we can convert the problem of safeguards into a problem of effectiveness, we can get to the virtuous cycle where AI becomes cheap and effective while honoring the safeguards.

    Each AI owner is already internally incentivized to build an AI-checker to ensure that the AI is effective (i.e., that it does what it is supposed to do). That addresses the requirement for effective AI, with unsafe AI treated as just another form of ineffective AI. Let's call this sort of AI "Effective AI"; a toy sketch of the owner-side check follows.
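    To make the idea concrete, here is a minimal Python sketch of such an owner-side check. Everything in it (the spec format, the function names, the toy model) is a hypothetical illustration of the idea, not an existing tool or standard.

        from typing import Callable

        # The owner encodes "what the AI is supposed to do" as executable
        # test predicates; unsafe behavior is just another failing predicate.
        SPEC: list[tuple[str, Callable[[str], bool]]] = [
            ("answer is non-empty", lambda out: len(out.strip()) > 0),
            ("no flagged unsafe content", lambda out: "UNSAFE" not in out),
        ]

        def effectiveness_check(model: Callable[[str], str],
                                prompts: list[str]) -> list[str]:
            """Run the AI on sample prompts and report every spec violation."""
            failures = []
            for prompt in prompts:
                output = model(prompt)
                for name, passes in SPEC:
                    if not passes(output):
                        failures.append(f"{prompt!r} failed: {name}")
            return failures

        if __name__ == "__main__":
            toy_model = lambda p: p.upper()   # stand-in for the owner's AI
            print(effectiveness_check(toy_model, ["hello", "   "]))
            # -> ["'   ' failed: answer is non-empty"]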

    Algorithmic discrimination protections and data privacy work against the AI owner's effectiveness imperative. A denial of data would typically itself be used by the AI as part of its algorithm (what does it mean for my decision that Tom has requested that datapoint xyz not be used in his case?). Whether done by an AI or not, discrimination between cases is the basis of decision-making. Locating and combating illegal discrimination can and should rest with the agencies responsible for eliminating it, because those social-good agencies can convert an otherwise open-ended problem into a well-defined set of algorithms. This approach again aligns with the natural need for an effective AI, in this case one that furthers the interests of the social-good agencies. Let's call this "Social Alignment AI," AI that its owners will want to be efficient and effective.

    Notice and Explanation stems from the principle of fairness. As currently worded, it can create an expensive arms race in which malicious players use the explanations to understand and game the AIs. A black box that is doubly guarded, by Effectiveness and by Social Alignment, can be sufficient to meet the need for fairness.

    Alternative Options deals with the ethics of not trapping people. A real "way out" should enable review and redress, stemming from the principle of justice. These are difficult matters in general, not just for AI, and they need to be addressed by a set of "Justice AI" that appears to be an aspect of Social Alignment AI. The problem is that an unjust AI decision can be trivially easy to find and fix or devilishly hard, possibly rife with false positives and false negatives, so this class of Social Alignment AI is likely to use triage approaches.

    Separating the concepts of Effectiveness AI and Social Alignment AI will, I think, deliver AI that is both cost-effective and safeguarded. Each AI owner is incentivized to make its AI efficient and effective. The ecosystem then requires a market-making player that enables each Effectiveness AI to check its outputs against the relevant set of Alignment AIs, so that it can continually locate and eliminate misalignments; a sketch of that checking loop follows.
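    As a sketch of how that market-making role might work, the following Python fragment models Alignment AIs as named checkers registered with a clearinghouse. Every name here (AlignmentAI, Clearinghouse, the example rule) is hypothetical, invented purely for illustration.

        from dataclasses import dataclass, field
        from typing import Callable

        @dataclass
        class Decision:
            subject: str   # whom the decision is about
            outcome: str   # what the owner's AI decided

        @dataclass
        class AlignmentAI:
            # A checker maintained by a social-good agency: it encodes that
            # agency's well-defined rules and flags decisions that break them.
            name: str
            is_aligned: Callable[[Decision], bool]

        @dataclass
        class Clearinghouse:
            # The market-making player: it registers Alignment AIs and lets
            # each owner's Effective AI screen outputs against all of them.
            checkers: list[AlignmentAI] = field(default_factory=list)

            def register(self, checker: AlignmentAI) -> None:
                self.checkers.append(checker)

            def screen(self, decision: Decision) -> list[str]:
                # Names of every Alignment AI the decision fails, so the
                # owner can locate and eliminate the misalignments.
                return [c.name for c in self.checkers
                        if not c.is_aligned(decision)]

        if __name__ == "__main__":
            hub = Clearinghouse()
            hub.register(AlignmentAI(
                name="non-discrimination",
                is_aligned=lambda d: "protected-attribute" not in d.outcome,
            ))
            d = Decision(subject="applicant-42",
                         outcome="deny: protected-attribute")
            print(hub.screen(d))   # -> ['non-discrimination']: fix needed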


    ------------------------------
    Rahul Saxena
    FrogData.com
    ------------------------------



  • 4.  RE: National artificial intelligence R&D strategic plan from the White House

    Posted 06-05-2023 03:12

    The European Union Agency for Cybersecurity (ENISA) is hosting this open meeting, available both in person and virtually, which might be of interest:

    ENISA AI Cybersecurity Conference

    On 7 June, 09:00-17:00 CET, ENISA is organizing the AI Cybersecurity Conference in Brussels.

    Registration link to participate: https://www.enisa.europa.eu/events/ai-cybersecurity-conference



    ------------------------------
    Klaus Peter Finke Harkonen
    Principal
    Finke Harkonen Oy
    Espoo
    ------------------------------