The Department of Homeland Security (DHS) is rolling out three $5 million AI pilot programs at three of its agencies, The New York Times reports. Through partnerships with OpenAI, Anthropic, and Meta, DHS will test AI models to help its agents with a wide array of tasks, including investigating child sexual abuse material, training immigration officials, and creating disaster relief plans.

As part of the AI pilots, the Federal Emergency Management Agency (FEMA) will use generative AI to streamline the hazard mitigation planning process for local governments. Homeland Security Investigations (HSI) — the agency within Immigration and Customs Enforcement (ICE) that investigates child exploitation, human trafficking, and drug smuggling — will use large language models to quickly search through vast stores of data and summarize its investigative reports. And US Citizenship and Immigration Services (USCIS), the agency that conducts initial screenings of asylum seekers, will use chatbots to train officers.

DHS’s announcement is scant on details, but the Times report provides a few examples of what these pilots may look like in practice. According to the Times, USCIS asylum agents will use chatbots to conduct mock interviews with asylum seekers. HSI investigators, meanwhile, will be able to search the agency’s internal databases more quickly for details on suspects, which DHS claims could “lead to increases in detection of fentanyl-related networks” and “aid in identification of perpetrators and victims of child exploitation crimes.”

To accomplish this, DHS is building an “AI corps” of at least 50 people. In February, DHS Secretary Alejandro Mayorkas traveled to Mountain View, California, home to Google’s headquarters, to recruit AI talent, wooing potential candidates by stressing that the department is “incredibly” open to remote workers.

Hiring enough AI experts isn’t DHS’s only hurdle. As the Times notes, DHS’s track record with AI is spotty: agents have previously been duped into opening investigations by AI-generated deepfakes. A February report from the Government Accountability Office, which examined two of the department’s claimed AI use cases, found that DHS hadn’t used reliable data in one investigation, while the other hadn’t relied on AI at all, despite DHS claiming it had. Outside of DHS, there are plenty of documented cases of ChatGPT spitting out false results, including an instance in which a lawyer submitted a brief citing nonexistent cases that the model had made up entirely.

Still, this expansion isn’t DHS’s first foray into AI. Some of the surveillance towers Customs and Border Protection (CBP) uses to monitor the US-Mexico border, such as those made by Anduril, use AI systems to detect and track “objects of interest” as they move across the rugged terrain of the borderlands. CBP hopes to fully integrate its network of surveillance towers through AI by 2034. The agency also plans to use AI to monitor official border crossing zones. Last year, CBP awarded a $16 million contract to a tech and travel company founded by its former commissioner, Kevin McAleenan, to build an AI tool that will scan for fentanyl at ports of entry.

The new DHS AI pilot programs, however, will rely on large language models rather than image recognition and will largely be used in the interior of the country rather than at the border. DHS will report the results of the pilots by the end of the year.
