Stakeholders across government, industry, and advocacy groups continue to dissect the landmark Artificial Intelligence Executive Order unveiled by the Biden Administration in early November. Hand-wringing over the 111-page document has not been in short supply, with concerns ranging from whether the EO will stifle innovation to the question of AI fairness and whether potential real-world harms are sufficiently addressed.
I have spent the past several months helping to carve out a responsible path forward for AI as a member of the National Artificial Intelligence Advisory Committee, which advises President Joe Biden. But I’ve also spent decades of my career developing and advancing initiatives to address the impact of nascent technologies on human well-being, agency and equity.
After the EO was signed, I was asked to prepare remarks for Sen. Chuck Schumer’s Senate AI Insight Forum on High Impact AI. I shared my belief that the executive order represents an opportunity to unlock innovation, and I am encouraged by the conversations happening around AI, which are thoughtful, deliberate, and nonpartisan. That said, because of AI’s potential ubiquity, nationally and internationally, there are foundational pillars of responsible AI that all stakeholders should focus on.
AI literacy: done for us, not to us
Just as each of us has a basic understanding of electricity even if we don’t grasp all the physics and math behind it, we need a basic understanding of AI, and the government has a role to play in that public education goal.
While most citizens won’t choose to become advanced AI researchers, they should understand how we all produce data and how it’s collected, analyzed, and fed into AI models. They need to understand the potential for confirmation and automation bias, as well as the need for vigilance with respect to AI being used as a tool of deception.
The Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence catalyzes various agencies with respect to AI literacy in the federal workforce. That is necessary, and it should be expanded to include broader workforce and non-workforce participants as well.
Beyond the EO, consideration should be given to: creating a National AI Literacy Campaign to build engagement and awareness about AI throughout the nation; investing in formal education and existing learning frameworks to advance the AI literacy of the American population; and investing in informal learning opportunities such as standalone public sessions, social media campaigns, and public messaging efforts. AI should not be “done to us” but “done for us,” and it takes an educated public to know the difference.
Inclusive contribution
Most AI practitioners are working in good faith to do what’s beneficial, legal, and profitable. Most have no desire to harm. However, impact doesn’t depend on intention. Absent broad participation and a wider spectrum of perspectives, limited points of view lead to harmful outcomes, as we’ve seen increasingly over the past several years of AI proliferation.
Americans are increasingly more concerned than excited about artificial intelligence (52% in August 2023 vs. 38% in December 2022), according to a Pew Research Center survey. Americans are also among the populations least trusting of technology. In multiple studies, potential job loss, misinformation, and fundamental change to American society were cited as reasons for concern. Such concerns are particularly acute in communities historically underrepresented in the design, development, and deployment of technology, furthering decades of distrust.
Everyone should participate in AI design, creation, and sustenance, not only the consumer demand phase of the lifecycle. AI inclusion will better inform ethical inquiry, aid in reducing harmful biases, and build confidence in the fairness of AI. There also must be a willingness to exist in the “messy middle ground,” where the potential for AI intersects with the realities of our past, present, and the desired future that we all must have a part in designing.
Working in the messy middle means we don’t ignore AI challenges but work to overcome them. This means, for example, involving inclusive domain expertise before applying AI in specific high-impact contexts such as health, finance, and law enforcement; funding the National AI Research Resource and other methods of lowering the economic barriers to entry for AI; and incentivizing workforce education pathways for the technical and, importantly, the non-technical talent needed for more robust AI systems.
Demonstrable trustworthiness
Focusing on the “end of humanity,” with no evidence, over existing AI risks only serves to erode trust further. That said, it would also be shortsighted to discount the concerns of so many AI experts. We should commit resources to exploring those extreme threats in proportion to how probable they are, while devoting the bulk of our energies to the problems of today.
One way to mitigate immediate challenges is by providing a means to “trust but verify” AI. Trustworthy AI should be an end-to-end process, from capabilities ideation to sunset. AI providers should offer a means to measure and monitor performance. Systems should be auditable, with understandable reports showing whether an AI model overstepped or underperformed its intended use.
Responsible AI can take several forms, such as model cards that summarize a model’s training data, intended use, and performance. Similar to nutrition labels for food, model cards are an appropriately transparent way of demonstrating to consumers, creators, and regulators alike that AI models support responsible, ethical, and trustworthy AI goals.
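To make the idea concrete, here is a minimal sketch in Python of the kind of information a model card might capture. The structure, field names, and example values are hypothetical illustrations of the concept, not any particular organization’s format.

from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    """A minimal, hypothetical model card: a structured summary of an AI model."""
    model_name: str
    intended_use: str
    training_data: str  # description of the data the model was trained on
    out_of_scope_uses: list[str] = field(default_factory=list)
    performance: dict[str, float] = field(default_factory=dict)  # metric name -> value
    known_limitations: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Render the card as JSON, e.g. for publication alongside the model."""
        return json.dumps(asdict(self), indent=2)


# Illustrative "nutrition label" for a hypothetical loan-screening model.
card = ModelCard(
    model_name="loan-risk-classifier-v2",
    intended_use="Rank-order consumer loan applications for human review",
    training_data="Anonymized loan outcomes, 2015-2022, US applicants",
    out_of_scope_uses=["Employment screening", "Insurance pricing"],
    performance={"auc": 0.87, "false_positive_rate": 0.06},
    known_limitations=["Underrepresents applicants under 25 in training data"],
)
print(card.to_json())

In practice, a published card would typically go further, recording evaluation conditions, performance across demographic subgroups, and known failure modes, so that auditors and regulators can check a model against its stated intended use.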
Reggie Townsend is the VP of the SAS Data Ethics Practice. Townsend also serves as a member of the National Artificial Intelligence Advisory Committee (NAIAC), which advises President Biden, and as a board member of EqualAI, a nonprofit organization focused on reducing unconscious bias in the development and use of AI.