Congress and the executive branch need to make a more concerted effort to address and prepare for the rise of artificial intelligence, Reps. Will Hurd, R-Texas, and Robin Kelly, D-Ill., said in a white paper released Sept. 25.
The congressmen, who serve as the chairman and ranking member of the House IT Subcommittee, compiled information gathered in past congressional hearings and meetings with experts to argue that federal involvement is critical across the many facets of AI.
“In light of that potential for disruption, it’s critical that the federal government address the different challenges posed by AI, including its current and future applications. The following paper presents lessons learned from the Subcommittee’s oversight and hearings on AI and sets forth recommendations for moving forward,” Hurd and Kelly wrote.
“Underlying these recommendations is the recognition the United States cannot maintain its global leadership in AI absent political leadership from Congress and the executive branch. Therefore, the Subcommittee recommends increased engagement on AI by Congress and the administration.”
According to the white paper, under current trends the United States will soon be outpaced in research and development investment by countries such as China that have prioritized artificial intelligence.
“Particularly concerning is the prospect of an authoritarian country, such as Russia or China, overtaking the United States in AI. As the Subcommittee’s hearings showed, AI is likely to have a significant impact in cybersecurity, and American competitiveness in AI will be critical to ensuring the United States does not lose any decisive cybersecurity advantage to other nation-states,” Hurd and Kelly wrote.
Hurd characterized Chinese investment in AI as a race with the U.S.
“It’s a race, we all know this, and one of the things we need [is] a national strategy, similar to what we’ve seen in the conversations around quantum computing yesterday at the White House. What we saw almost a decade ago when it came to nanotechnology. And part of that strategy does include increasing basic research, opening up data sets and making sure the U.S. is playing a part, leader on ethics when it comes to artificial intelligence,” said Hurd in a Sept. 25 press call.
The paper applauded current investments in R&D, such as the Defense Advanced Research Projects Agency’s creation of the Artificial Intelligence Exploration program, and encouraged the government to host more “Grand Challenges” like those conducted by DARPA to spur innovation outside of government.
“I do believe the federal government has a role, because we’re sitting on data sets that could be used as a backbone of a Grand Challenge around artificial intelligence,” said Hurd, who added that the National Oceanic and Atmospheric Administration, healthcare agencies and many other components of the federal government possess the data to administer meaningful AI competitions.
“I think this would be maybe a great opportunity for a public-private partnership,” added Kelly on the press call.
The paper also identified four primary challenges that can arise as AI becomes more prevalent: workforce, privacy, bias and malicious use.
AI has the potential both to displace portions of the workforce as more tasks become automated and to create new jobs for those trained to work with artificial intelligence.
Hurd and Kelly called on the federal government to lead the way in adapting its workforce by planning for and investing in training programs that will enable employees to transition into AI work.
As with many technologies, AI has the potential to infringe on privacy, as intelligent products or systems such as virtual assistants constantly collect data on individuals. That data could be exploited by both the company that created the technology and hackers looking to steal personal information.
“The growing collection and use of personal data in AI systems and applications raises legitimate concerns about privacy. As such, federal agencies should review federal privacy laws, regulations, and judicial decisions to determine how they may already apply to AI products within their jurisdiction, and — where necessary — update existing regulations to account for the addition of AI,” Hurd and Kelly wrote.
The white paper also calls on federal agencies to make government data more available to the public for AI experimentation, while also ensuring that any AI algorithms used by agencies to “make consequential decisions about individuals” are “inspectable” to ensure that they operate without coded bias.
According to Hurd, the question of whether and how that inspectable information would be made available to the public has yet to be addressed.
Finally, Hurd and Kelly called on government entities to consider how AI may be used to perpetrate cyber attacks or otherwise cause harm.
However, while recommending that agencies look to existing regulations and statutes, with only limited changes where necessary, the paper encouraged a hands-off approach similar to the one the federal government took toward the development of the internet.
“The government should begin by first assessing whether the risks to public safety or consumers already fall within existing regulatory frameworks and, if so, consideration should be made as to whether those existing frameworks can adequately address the risks,” Hurd and Kelly wrote.
“At minimum, a widely agreed upon standard for measuring the safety and security of AI products and applications should precede any new regulations. A common taxonomy also would help facilitate clarity and enable accurate accounting of skills and uses of AI.”
Jessie Bur covers federal IT and management.