Chairman Green: Artificial Intelligence Promises to “Transform the Global Economy and National Security Landscape as We Know It”

WASHINGTON, D.C.—Today, House Committee on Homeland Security Chairman Mark E. Green, MD (R-TN) delivered the following opening statement in a hearing to examine how artificial intelligence (AI) is enhancing homeland security missions and the cybersecurity implications for AI development, deployment, and use. 

Watch Chairman Green’s full opening statement here.

As prepared for delivery:

In this era of rapid technological advancement, I’m especially proud to live in a nation of innovators—some of whom join us today.

Today, American ingenuity is paving the way once again. Artificial intelligence, or AI, promises to transform the global economy and national security landscape as we know it. AI has the potential to create new jobs, catalyze productivity in Americans’ daily lives, and protect our men and women in uniform and law enforcement.

Throughout this Congress, committees in both chambers have convened numerous hearings to understand the countless opportunities and challenges AI presents. Like cybersecurity, AI’s impact is a complex and cross-cutting issue that cannot be handled by one jurisdiction alone. Therefore, we are here to examine what I believe to be one of the most promising areas in which to expand our use of AI: the security and defense of our homeland.

The Committee on Homeland Security has an oversight obligation to make sure we harness AI technologies right. As with any new technology, AI presents new risks, and we must take the time to understand them. This includes prioritizing safety and security throughout AI development, deployment, and use. It also requires us to treat AI with appropriate nuance so that we understand the impact of proposed regulatory measures on our businesses.

Today’s full committee hearing follows up on a productive Cybersecurity and Infrastructure Protection Subcommittee hearing led by Chairman Garbarino last December. The subcommittee specifically considered the role of DHS and CISA in securing AI—a topic we will continue to explore today. As that hearing reaffirmed, the threats facing our country are increasingly complex, and DHS plays a critical role in keeping Americans safe and our country secure.

DHS has a broad mission, and its components have explored, and in some cases implemented, AI for purposes aligned with their unique missions.

For example, U.S. Customs and Border Protection (CBP) has used AI-powered systems to monitor border areas using drones and cameras, which help identify suspicious activity and unauthorized crossings in real time. The Transportation Security Administration (TSA) is currently examining the ways in which AI can enhance its security screening processes, including using AI to augment its X-ray imaging of travelers’ carry-on luggage.

TSA may soon look to AI algorithms, and particularly facial recognition systems powered by AI, to identify security threats among the traveling public and enhance the pre-screening process.

While these AI-powered systems offer the promise of increased security and efficiency, they also bring significant risks that Congress must carefully assess. For instance, AI-powered facial recognition systems capture and store images of Americans and foreign travelers, which present substantial privacy concerns. We must ensure that the use of AI-powered facial recognition by TSA is balanced with strong protections of privacy, civil liberties, and ethical standards.

Furthermore, U.S. Immigration and Customs Enforcement (ICE) is using AI to help identify and track illegal activities, such as human trafficking and smuggling, by analyzing large datasets and detecting patterns. And the Cybersecurity and Infrastructure Security Agency (CISA) is carefully examining the risks and opportunities presented by AI and the ways it can be leveraged to enhance our nation’s resilience against cyber threats. In the years ahead, CISA will play a critical role in addressing and managing risks at the nexus of AI, cybersecurity, and critical infrastructure.

Considering the widespread push for AI adoption within DHS, it is critical that the department collaborate with Congress and with relevant stakeholders, including those from the private sector, to manage AI’s complexities and risks.

In addition to the domestic concerns relating to the emergence of AI, we must also consider the broader strategic implications. Our nation’s primary strategic adversary, the People’s Republic of China (PRC), has made AI development a national priority and is investing heavily in research, talent, and infrastructure. The Communist regime’s aggressive pursuit of AI poses a significant challenge to the United States, not only economically but also in terms of our national security.

In fact, DHS’ 2024 Homeland Threat Assessment warns that “[m]alicious cyber actors have begun testing the capabilities of AI-developed malware and AI-assisted software development—technologies that have the potential to enable larger scale, faster, efficient, and more evasive cyber attacks—against targets, including pipelines, railways, and other US critical infrastructure.”

This is extremely concerning. As complex as these threats are, our nation’s efforts to combat them will be even more challenging if our adversaries lead in AI research, development, and innovation. For these reasons, it is important for Congress, DHS, and the private sector to work together to ensure that we remain at the forefront of AI innovation while safeguarding our national security, economic competitiveness, and civil liberties.

Today, we will hear from a panel of experts who will provide insights into the current state of AI for homeland security and the steps we can take to ensure that the AI we use is secure and trustworthy.

To our witnesses, thank you for being here and for your efforts to educate Members of this Committee and the American people on how we can responsibly advance AI innovation. I look forward to your testimony.  

 ###