It’s been quite a year for artificial intelligence. Since November 2022, when OpenAI introduced ChatGPT and brought generative AI into the mainstream, the world has been abuzz with the latest AI innovations.
In response to concerns surrounding rapidly emerging and evolving technologies like AI, earlier this year the Biden Administration released its Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.
While the EO was the first national directive to offer U.S. agencies more context on safely testing, policing, and evaluating potential cybersecurity threats associated with AI, it was noticeably lacking in several core areas. The White House’s swift response to the proliferation of AI is commendable, particularly its recognition of the need to stay ahead of advancements and ensure the tools used to develop these systems and capabilities are safe, secure, and trustworthy. But the EO failed to set timing parameters or offer actionable guidance for federal agencies, both critical elements if the administration hopes to convert this EO on AI (or any of its kind) into action.
AI challenges facing federal agencies
Right now, there are two sides to the AI coin.
While it offers the potential to help agencies improve cyber defenses, enhance productivity, and do more with fewer resources, bad actors are also leveraging the technology to make attacks more sophisticated, more targeted, and harder to detect. This puts federal agencies in an even more precarious position as they work to secure a widening digital landscape in the face of increasingly relentless attacks on critical infrastructure.
AI is just the latest technology or industry trend that underscores the need for a more modernized approach to shore up national cyber resilience. In today’s hyperconnected, hybrid world, an “assume breach” mindset, a core tenet of the zero trust framework, is essential.
Last year, organizations lost an average of $4.1 million to cloud breaches alone. Attacks are inevitable, and bad actors are bound to penetrate some point of the software supply chain or cloud ecosystem. To defend our nation’s most critical systems, agencies must consider how they can proactively mitigate risk while optimizing AI for their own advantage.
Putting the EO into action
It’s a tall order, but here’s what’s needed from the Biden Administration to convert its EO into action. First, it’ll be crucial for the administration to follow up with timebound, actionable guidance that clearly articulates and defines how agencies are expected to proceed on their resilience roadmaps in the AI era.
Additionally, agencies need dedicated funding and resources to help them achieve these goals and objectives, especially as additional mandates and requirements are unfurled. And we still need more incentives in place for the federal government to recruit and retain top talent when it comes to understanding, securing, and better harnessing the power of emerging technologies, like AI, for good.
We know AI and emerging technologies are constantly advancing, and they’re here to stay. We also don’t yet know the impact innovations like these will have on our industry, the federal government, and society at large over the next five or ten years. It’s an ongoing conversation the White House needs to revisit and reassess regularly to ensure the mandates it pushes out accurately reflect the evolving world around us. While the EO does provide some guidance to certain agencies, we have to get more granular and more aggressive.
In addition to more specifics, the EO could have struck a more positive tone. There’s too much doom and gloom and not enough focus on how federal agencies can leverage AI for good. How can agencies use AI to improve cybersecurity? How can they train personnel to use AI more effectively? These are the kinds of details we need to see more of if we want to empower federal agencies to leverage the technology to its fullest potential, with both safety and security in mind.
Looking ahead
Right now, there’s a lot of focus on fighting fire with fire, but the answer to our AI problem isn’t more AI. The answer lies in doing the basics right: reduce the attack surface, control access to sources, contain attacks, and recover quickly and securely. Until federal agencies have a solid foundation in place, the rest of the directives and specifics outlined by the EO on AI (and future legislation like it) will fail to take hold, and federal agencies will continue to find themselves crippled by the latest threats.
While we wait for additional guidance and resources on AI usage to arrive, the best way for federal agencies to address their AI concerns is to leverage AI in the right places and ensure they have the right fundamentals (like “assume breach” and zero trust) in place as new threats emerge.
Gary Barlet is Federal Chief Technology Officer at Illumio