Serving the GW Community since 1904

The GW Hatchet


The GW Hatchet

Serving the GW Community since 1904

The GW Hatchet

Sign up for our twice-weekly newsletter!

Intelligence, international affairs experts discuss AI, national security

Lexi Critchet | Staff Photographer
From left to right, Jack Shanahan, Aaron Brown and Aaron Bateman speak during the panel at the Elliott School of International Affairs on Tuesday.

Former governmental officials and international affairs experts discussed the impact of artificial intelligence on national security at the Elliott School of International Affairs on Tuesday.

Assistant professor of history and international affairs Aaron Bateman, Lieutenant General Jack Shanahan and former CIA Senior Operations Officer Aaron Brown discussed harnessing AI to maximize the effectiveness of military tools and the technology’s role in heightening competition between the United States and China. The Delta Phi Epsilon Professional Foreign Service Society hosted the conversation, which was moderated by Alex Ryan, the organization’s vice president of external affairs.

Brown said AI’s capacity to increase intelligence agencies’ decision advantage — the speed at which decisions can be made from large amounts of information — will maximize their efficiency and give the U.S. an upper hand over countries with less advanced AI. 

“I think what we’re going to see is the ability to consume larger amounts of data faster to gain insights from those large amounts of data in ways we could not do so previously,” Brown said.

Brown said drones, which cost about $1,000 each, are easily attainable by the Ukrainian military and allow it to achieve military gains, like destroying previously unreachable Russian battle tanks, despite low funds.

Brown said shrinking costs for powerful weaponry lower the barrier for nonstate militant actors and terrorist groups to inflict greater damage with the funds available. Brown, who played a key role in the CIA’s search for Osama bin Laden, said AI has shifted the organizational structure of terrorist groups, with leaders who previously relied on members who struggled to carry out attacks now entrusting AI-operated devices to carry out strikes.

“That competent person that can do the planning can now, or soon will be able to, carry out that act also themselves, potentially,” Brown said. “Or an entity that is able to make themselves much more knowledgeable much more quickly, a la ChatGPT-4, will go up a level of competence.”

Brown said misinformation created with AI can be easily debunked and poses little threat from a cybersecurity standpoint because it is created with easily detectable patterns. He said AI can be useful in mitigating the spread of misinformation, highlighting Taiwan’s implementation of AI to combat Chinese misinformation. 

Brown said combating disinformation requires both technological solutions and increased consumer education on identifying and reporting it. He said this type of education can prevent people from completely distrusting all information online.

“It’s easy to take the human element out of the discussion with artificial intelligence, but human agency matters,” Brown said. “It is permeated throughout this policy challenge.”

Shanahan said the U.S. currently leads in AI development, but other nations like the United Kingdom and Canada are catching up. He said China is the United States’ primary competitor and plans to overtake the U.S. as the dominant AI power by 2030. Shanahan said an economic component makes military development of AI different from other technological advances because private firms conduct most AI research and development in the U.S.

“The dynamics are the same we’ve had with every technology throughout history, throughout the military,” Shanahan said. “It feels different because these are commercial technologies being adapted, and that makes some commercial companies uncomfortable.”

Bateman said American governmental agencies have discussed the advantages of AI since the 1960s and that earlier forms of AI could only analyze data, whereas new generative AI can analyze information and create new data, text or images. Bateman said intelligence and military agents can use generative AI to quickly summarize large amounts of information and use its pattern recognition capacity to predict possible attacks. 

Bateman said collaboration between the U.S. and China would be the most beneficial path for the continued development of AI, but intellectual property protections often make such collaboration difficult. Bateman said engaging in Track II diplomacy, which involves informal negotiations between countries, can help overcome such barriers by allowing country representatives to discuss their views and needs candidly, which can foster an amicable environment for future collaboration.

“Even when international and political tensions are at their highest point, that’s when these dialogues are even more important,” Bateman said.
