
Six Essential Elements Of A Responsible AI Model

By Aaron Burciaga posted 04-01-2024 01:55

  

The United States is in a high-stakes global race to lead and innovate in artificial intelligence, and we can’t compete without making immediate, sweeping changes to our manpower management model. China is on track to reach its goal of AI technology dominance by 2030. Meanwhile, even as AI and data science roles grow rapidly in almost every sector, the U.S. is lagging far behind.


At a high level, our problem isn’t hard to diagnose: we don’t have enough talent to fill existing AI positions, and our current system is too slow and monolithic to keep up with future developments. To catch up with foreign competition, we’ll need broader thinking, a bigger tent and a more inclusive, distributed workforce.


This is the third and final installment (for now) in my series on the importance of building a blue-collar AI workforce. In the first piece, I defined what a blue-collar AI workforce is and why we need it. In the second, I proposed eight actions that citizens, governments and employers can take to build this labor force. In this third piece, I outline the tactical strategy for ensuring success at scale, and explain why ethics and sustainability are fundamental to this approach.


Scaling A Blue-Collar AI Workforce


The clock is ticking. To outpace China, we need to be the AI leader by 2025. The answer isn’t to increase the number of AI graduate programs, white-collar managers or military officers (there has already been a bumper crop of analytics degrees and newly minted graduate programs of questionable value). The real answer is to make these expensive, time-consuming and generally fancy degrees unnecessary, so we can accelerate building out the base of our AI workforce.


Consider the history of innovation in our country: the Wright brothers were bike mechanics, not MIT professors. Bill Gates, Steve Jobs, Mark Zuckerberg, Michael Dell, Larry Ellison: none of these tech pioneers completed a college degree. We won’t achieve our goal by investing in more candidates with master’s degrees or PhDs. We will only succeed if we scale a vast blue-collar AI workforce: an army of data engineers, data visualizers and cybercoders.


Why The AI Workforce Needs An Ethical Framework


Developing an AI labor force is not unlike building a pyramid. First we build the base, then we can add more layers of stability before finishing it with the capstone. The base is formed by our blue-collar AI workforce and shaped by nationwide K-12 education and community college programs, followed by layers of workers with bachelor’s degrees. The capstone is a much smaller element, composed of high-level employees with graduate degrees and doctorates who are supported by the blue-collar AI architecture. And the mortar holding the entire structure together is ethics.


There is no way to build the pyramid first and add an ethical veneer later; ethics must be incorporated from the very beginning. A sustainable AI workforce development strategy is anchored in ethical practices. If we aren’t intentionally teaching, discussing and enforcing a strict code of ethics in AI, in our schools, businesses and government, then we face serious risks as a nation. Data scientists working outside an ethical framework, professional standards and legal limits are more susceptible to dangers like irresponsible data collection and biased analytics.


Navigating Ethics In The Real World


I went to the Naval Academy, where we attended ethical leadership sessions nearly every week and reviewed military case studies from throughout history. By analyzing both sound leadership decisions and shameful atrocities, we put ethics into real-life scenarios. I believe a similar model should be established at every level of AI education and training: in community college programs, conferences, companies, military operations and government agencies.


I’m encouraged to see some new developments in organizations I’m involved with. This summer, I’m speaking at a National Convergence Technology Center workshop that will help attendees learn to add data science ethics curriculum to IT programs. Florida State College at Jacksonville recently created a data ethics course that teaches guidelines and principles for making ethical decisions in a variety of scenarios. And the White House Office of Science and Technology Policy (OSTP) and the National Science Foundation (NSF) just announced the formation of the National Artificial Intelligence (AI) Research Resource Task Force, which will expand educational tools and resources for AI innovation, and the National AI Advisory Committee, which will provide recommendations on AI topics, including ethical and legal issues.


Three Steps For Creating An Ethical Framework


I strongly believe that every organization working in AI has a responsibility to define its own ethical standards and practices. Start with these steps in your organization:


1. Define your vision.


Put your intentions into a short statement. What do you hope to achieve? How will you contribute to a greater cause? For example, the vision statement for ECS, where I lead data and AI, is: “To ensure the responsible research, development, and application of Artificial Intelligence toward public safety, security and prosperity.”


2. Identify your values.


Describe your ethical framework in detail. What ideals and beliefs do you hold? How will you evaluate whether a project or initiative upholds your values? In our company, we follow a framework based on those used in the DoD and intelligence community; a sketch of how it can be applied as a project checklist follows the list below. It dictates that all projects must be:


• Accountable


• Impartial


• Resilient


• Transparent


• Secure


• Governed
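
To make the evaluation question concrete, here is a minimal, hypothetical Python sketch of how these six principles could be tracked as a simple project checklist. The principle names come from this article; the ProjectReview class, its fields and the example findings are illustrative assumptions, not an actual ECS, DoD or intelligence-community tool.

```python
# Hypothetical project checklist built around the six principles named above.
from dataclasses import dataclass, field

PRINCIPLES = [
    "accountable",
    "impartial",
    "resilient",
    "transparent",
    "secure",
    "governed",
]

@dataclass
class ProjectReview:
    """Records whether a project satisfies each principle, with a short note."""
    name: str
    findings: dict = field(default_factory=dict)  # principle -> (satisfied, note)

    def record(self, principle: str, satisfied: bool, note: str = "") -> None:
        if principle not in PRINCIPLES:
            raise ValueError(f"Unknown principle: {principle}")
        self.findings[principle] = (satisfied, note)

    def unmet(self) -> list:
        """Principles that are failing or have not yet been assessed."""
        return [p for p in PRINCIPLES
                if not self.findings.get(p, (False, "not assessed"))[0]]

# Example: a review that an ethics board (see step 3) would flag for follow-up.
review = ProjectReview(name="fraud-detection-model")
review.record("accountable", True, "Named model owner and escalation path")
review.record("impartial", False, "Bias audit not yet completed")
review.record("secure", True, "Data encrypted at rest and in transit")

print(review.unmet())  # ['impartial', 'resilient', 'transparent', 'governed']
```

The point of the sketch is simply that a values statement becomes enforceable once every project is reviewed against the same explicit checklist; the board described in step 3 can then act on whatever comes back unmet.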


3. Establish oversight.


Create an AI ethics board that provides guidance and recommendations for decisions about programs and activities. Make it a force for governance and accountability within your organization.


The U.S. needs to recruit and train an inclusive blue-collar AI workforce on a massive scale. We are under pressure to accomplish this ambitious goal quickly, but to ensure success, we must do it within an ethical and sustainable framework.


It’ll be worth the investment. Research from McKinsey indicates that 82% of companies that have adopted AI and ML are benefiting financially from their investments. Business leaders see clear value in AI and want to expand its footprint within their companies, which is producing a massive surge in demand for AI talent. We can “mind the gap” and meet this surge by doing things differently, acting more tactically and putting more players on our team in key roles, rather than holding our breath for a fleet of perennial unicorns.


*This article was originally published in Forbes. The author retains full rights to republish the content.






