WASHINGTON — Artificial intelligence can help the federal government make sense of the unending flood of data now swamping defense, intelligence and other agencies, according to a study from General Dynamics Information Technology.
GDIT, a division of General Dynamics, this month made public the results of its defensive cyber operations research, which relied on a survey of 200 government leaders working in national security fields.
Some 41% of respondents found themselves “submerged in data,” with more than 30% saying they needed more skilled personnel and more efficient analytics to handle it. More than one-quarter already see the value in AI for cybersecurity, namely for real-time threat detection and automated countermeasures. The study also identified human error as the most significant issue.
“There’s overwhelming volumes of data. I think that challenge only gets worse as we progress, because the threat landscape is ever-increasing,” Matthew McFadden, GDIT’s vice president of cyber, said in an interview. “One of the key findings is: How do we help cyber professionals work smarter, more efficiently? AI and automation is, really, a key way to do that.”
Potential applications of AI, automation and other pattern-recognizing tools are growing as the technologies mature and the public becomes aware of them. Their use for digital defense is gaining steam as hacking threats from small groups and world powers such as China and Russia evolve.
The Department of Defense and federal civilian agencies consider AI a means to quickly parse piles of information and surface useful insights, whether on the battlefield or in delivering public services. Machines and programs can also handle rote tasks, freeing up personnel who are already spread thin and in high demand.
The GDIT study notes that robust cyber defenses comprise both trusted, well-defined capabilities and innovative technologies. Automation is a key piece of the Pentagon’s push toward zero trust, a newer cybersecurity paradigm. The approach assumes networks are already compromised, requiring continuous verification of users, devices and access.
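The study does not prescribe an implementation, but the principle can be illustrated with a minimal sketch. The hypothetical Python example below, with made-up tokens, device IDs and a policy table standing in for an identity provider, a device-compliance service and a policy engine, checks the user, the device and the specific access on every request rather than trusting anything inside a network perimeter.

```python
# Illustrative sketch only, not drawn from the GDIT study or any Pentagon system:
# a hypothetical gate that re-validates identity, device posture and authorization
# on every request, regardless of where the request originates.

from dataclasses import dataclass


@dataclass
class Request:
    user_token: str
    device_id: str
    resource: str


# Hypothetical stand-ins for an identity provider, a device-compliance
# service and a policy engine.
VALID_TOKENS = {"token-abc": "analyst01"}
COMPLIANT_DEVICES = {"laptop-42"}
ACCESS_POLICY = {("analyst01", "threat-feed"): True}


def authorize(req: Request) -> bool:
    """Zero-trust-style check: never rely on network location;
    verify the user, the device and the specific access every time."""
    user = VALID_TOKENS.get(req.user_token)        # who is asking?
    if user is None:
        return False
    if req.device_id not in COMPLIANT_DEVICES:     # is the device trusted right now?
        return False
    return ACCESS_POLICY.get((user, req.resource), False)  # is this access allowed?


if __name__ == "__main__":
    print(authorize(Request("token-abc", "laptop-42", "threat-feed")))   # True
    print(authorize(Request("token-abc", "old-desktop", "threat-feed"))) # False: device not compliant
```

In practice these checks run continuously and automatically, which is where the automation and AI investments the study describes come in.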
“AI is, technically, providing better results to the defender than the attacker at this moment,” Matt Hayden, GDIT’s vice president of cyber, intelligence and homeland security, told C4ISRNET. “When you see responses like this, it’s recognized that all these customers see now is the time to make sure they’re getting the most out of what they’ve already invested in, and to put their chips to the middle of the table to get that defender’s advantage.”
The Pentagon requested $1.8 billion for AI in fiscal 2024. It is juggling more than 800 unclassified AI-related projects, the Associated Press reported.
While the military has led AI spending, funding for similar projects at other agencies, especially NASA, also grew from 2020 to 2022, according to analysis by Deltek. Spending by the Department of Veterans Affairs on AI tripled over a two-year period, spurred by machine-learning and virtual-reality obligations, the software and consulting business said.
“We’re going to start to find that AI is not only in almost everything,” Hayden said, “but we’re going to have to start categorizing it as to how autonomous it is.”