DHS Releases AI Security Guidance for Critical Infrastructure

New AI security guidelines from the Department of Homeland Security (DHS) give critical infrastructure operators a better understanding of the top risks associated with AI systems, and how best to approach the unique security issues those risks could create.

The guidelines, released by the DHS on Monday as directed by the Biden administration’s AI executive order last year, examine how critical infrastructure entities can best be secured against the various risks associated with AI. These include both attacks using AI, such as AI-enabled compromises or social engineering, and attacks targeting the AI systems that support critical infrastructure, such as adversarial manipulation of AI algorithms. The report also accounts for a third significant risk category: potential failures in the design or implementation of AI that could lead to malfunctions in critical infrastructure operations.

“AI can present transformative solutions for U.S. critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyber attacks,” said Secretary of Homeland Security Alejandro N. Mayorkas in a statement on Monday. “Our Department is taking steps to identify and mitigate those threats.”

The guidance lays out a four-phase mitigation strategy that builds on the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework: a governance phase, directing critical infrastructure organizations to prioritize safety and security outcomes in their AI risk management; a mapping phase, to help entities better understand the risks associated with AI; a measurement phase, for organizations to develop systems that can assess and track AI risks; and a management phase, urging organizations to implement risk management controls for AI systems.

The DHS’s guidelines this week give some clarity to CISOs and security teams navigating how best to approach the issues that could crop up as AI systems are deployed in their environments. As generative AI in particular has surged in popularity, several government agencies and private sector companies over the past year have closely studied the best ways to mitigate various AI-associated threats. Still, the guidelines from the DHS and other government entities are not mandatory requirements.

Experts in the security industry have called for regulation, and have also pointed to a significant security challenge: many AI systems are built on large language models (LLMs) that carry inherent risks of their own, such as poisoned training data or opaque model architectures. The DHS in its guidance did say that AI vendors should take on certain mitigation responsibilities, and that critical infrastructure organizations need to understand where dependencies on AI vendors exist in their environments.

“In many cases, AI vendors will also play a major role in ensuring the safe and secure use of AI systems for critical infrastructure,” according to the DHS guidance. “Certain guidelines apply both to critical infrastructure owners and operators as well as AI vendors. Critical infrastructure owners and operators should understand where these dependencies on AI vendors exist and work to share and delineate mitigation responsibilities accordingly.”

The DHS report fulfills one of many mandates in the White House’s AI executive order from October. The executive order, which attempted to set the stage for developing and deploying what it calls “responsible AI,” also directed the DHS to create an AI safety and security board to examine how the AI standards developed by NIST could be applied to critical infrastructure, the potential risks that arise from the use of AI in those sectors, and how AI could be used by the critical infrastructure community to improve security and incident response.

The DHS on Friday officially launched that board, which includes 22 representatives from a range of sectors, among them members from OpenAI, Nvidia, Cisco, Delta Air Lines and Humane Intelligence. In the months since the executive order, the DHS has also released an AI roadmap detailing its current and future uses of AI and has implemented various pilot projects to test AI technology.