<![CDATA[Decipher]]> https://decipher.sc Decipher is an independent editorial site that takes a practical approach to covering information security. Through news analysis and in-depth features, Decipher explores the impact of the latest risks and provides informative and educational material for readers curious about how security affects our world. en-us info@decipher.sc (Amy Vazquez) Copyright 2024 3600 <![CDATA[Decipher Podcast: Chris Langford]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/decipher-podcast-chris-langford https://duo.com/decipher/decipher-podcast-chris-langford

]]>
<![CDATA[Kimsuky APT Using Newly Discovered Gomir Linux Backdoor]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/kimsuky-apt-using-newly-discovered-gomir-linux-backdoor https://duo.com/decipher/kimsuky-apt-using-newly-discovered-gomir-linux-backdoor

The Kimsuky APT group, which is closely linked to the North Korean military intelligence organization, has been deploying a newly discovered Linux backdoor in attacks against organizations in South Korea.

The backdoor is known as Gomir and is closely related to another piece of malware called GoBear, which is built for Windows targets. Researchers from Symantec discovered Gomir and said that it is also linked to Troll Stealer, an info stealer that Kimsuky was distributing in the last few months through compromised software packages. Kimsuky, which Symantec refers to as Springtail, has been active for more than a decade and is associated mainly with attacks on South Korean government and private sector organizations. The group is highly capable and develops an array of custom tools for its attacks. In November, the Department of the Treasury and government agencies from several European countries announced sanctions against Kimsuky and eight North Korean nationals.

The Gomir backdoor is the latest addition to Kimsuky’s arsenal, which is considerable. The group has a wide range of custom and public tools at its disposal and is not shy about deploying them. Symantec’s researchers discovered the Gomir backdoor during investigations into Kimsuky’s use of Troll Stealer and GoBear.

“Symantec’s investigation into the attacks uncovered a Linux version of this malware family (Linux.Gomir) which is structurally almost identical and shares an extensive amount of distinct code with the Windows Go-based backdoor GoBear. Any functionality from GoBear that is operating system-dependent is either missing or reimplemented in Gomir,” Symantec’s analysis of the backdoor says.

Supply chain attacks have emerged as a key technique for many APT groups, and the past few years have seen several high-profile attacks that involved supply chain compromises. The SolarWinds, 3CX, and Kaseya attacks all had significant repercussions across a wide range of sectors, and those results didn’t go unnoticed by other attackers. Supply chain intrusions can provide a tremendous amount of return on investment.

“This latest Springtail campaign provides further evidence that software installation packages and updates are now among the most favored infection vectors for North Korean espionage actors,” the Symantec analysis says.

“The most notable example to date is the 3CX supply chain attack, which itself was the result of the earlier X_Trader supply chain attack. Springtail, meanwhile, has focused on Trojanized software installers hosted on third-party sites requiring their installation or masquerading as official apps. The software targeted appears to have been carefully chosen to maximize the chances of infecting its intended South Korean-based targets.”

The Gomir backdoor has a number of capabilities, including the ability to check arbitrary endpoints for TCP connections, discover and report the configuration of the machine it’s on, create a file on the machine, and exfiltrate any files from the computer.
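
GoBear and Gomir are both written in Go, and a capability like the TCP connectivity check is simple to express in that language. The following is a minimal, hypothetical sketch of such a probe, for illustration only; it is not Gomir’s actual code.

    // Illustration only, with hypothetical names — not Gomir's actual code.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeTCP reports whether a TCP connection to addr ("host:port") succeeds
    // within the timeout, roughly the kind of endpoint check described above.
    func probeTCP(addr string, timeout time.Duration) string {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return fmt.Sprintf("%s unreachable: %v", addr, err)
        }
        conn.Close()
        return addr + " reachable"
    }

    func main() {
        // 192.0.2.10 is a documentation address (RFC 5737), used as a stand-in.
        fmt.Println(probeTCP("192.0.2.10:443", 3*time.Second))
    }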

]]>
<![CDATA[AI Security 'Is a Software Problem']]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/ai-security-is-a-software-problem https://duo.com/decipher/ai-security-is-a-software-problem

SAN FRANCISCO–Trying to figure out where the field of AI is going and how attackers and defenders will be using it is no one’s idea of a good time. AI usage is still in its very early stages, but some of the people working on and thinking about the safety and security of AI systems and LLMs are optimistic about the good outweighing the bad, and they say that many of the same principles used to design and build software securely can be applied to building LLMs and AI systems.

“It’s important to understand that when we talk about securing AI, we’re just talking about software. AI is just software. A lot of the techniques we’ve been developing over the last thirty years apply here as well. It’s not rocket science. We’ve been using machine learning inside systems for a very long time,” Heather Adkins, vice president of security engineering at Google, said during a panel discussion on AI safety at the RSA Conference here last week.

“When we talk about AI, it’s almost like AI is some unique monster. But it’s just a software problem.”

For the most part, when people talk about AI systems, they’re referring to generative AI tools such as ChatGPT that are built on top of large language models (LLMs). Those models are trained on massive data sets, many of which are proprietary and not generally open to public inspection. But despite this inherent opacity, AI systems comprise a set of software components, and humans have been building software for a long time. That process hasn’t always gone well, which is why the security industry exists. But people understand how the software development process should work, what the built-in risks are, and how adversaries tend to attack software systems. That knowledge can be applied to securing AI systems and ensuring that their output is fit for human consumption.

“If the platform they’re built on is built in a virtuous way and we get the right outcomes, then that’s what we’re looking for,” Adkins said.

“We know security is a data problem. What the bad guys do with AI, we don’t know yet. Anything the defense will do with it the bad guys will do with it.”

One of the concerns around the rise of AI tools is that attackers will use them to automate their campaigns and intrusion efforts. To some degree, adversaries have been doing this for many years, but experts believe the advantage lies with defenders at the moment.

"We have a lot of security problems and AI taking over the world is not high on my list."

“It will magnify power in both areas. In our industry we have a data problem and a human resources problem. We can apply these technologies in ways the attackers can’t,” said Bruce Schneier, a cryptographer, technologist, and lecturer at the Kennedy School at Harvard University.

“Right now we are not defending at those speeds because it often requires judgment to do those things. In the short term I think AI will help defense more than offense. Long term, I have no idea.”

Adkins agreed in principle, but also emphasized that because AI is such a young technology, it’s virtually impossible to say where it will go.

“We are very bad at predicting what technology will be used for. We are today looking at our sci fi, a huge body of creative thinking, but the ways we’re deploying AI are very mundane. We are reducing toil in the SOC, automating report writing,” she said.

“It’s fairly difficult to predict what we’re going to do with it in two years, five years, or a hundred years. The important thing is constant watchfulness. There are things we’re going to be delighted by. I love that we’re being very dialog driven, but I think we all have to become AI experts now as a society.”

One thing the panelists are not worried about in the short term is AI replacing humans at a large scale.

“Our ideas and fears about AI come from science fiction. This is why we think about the Terminator. That’s not science, that’s not engineering. That’s not something I’m worried about. We have a lot of security problems and AI taking over the world is not high on my list,” Schneier said.

]]>
<![CDATA[Rather Than Measuring Risk, Fix an Interesting Problem]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/rather-than-measuring-risk-fix-an-interesting-problem https://duo.com/decipher/rather-than-measuring-risk-fix-an-interesting-problem

SAN FRANCISCO–Measuring risk is one of the more difficult tasks that enterprise GRC teams face, as risk itself is a notoriously difficult thing to actually define and pin down. But perhaps fixing the fixable problems that contribute to risk, rather than measuring risk in absolute terms, should be the goal.

The concept of risk is a nebulous one, particularly as it’s typically applied to enterprise security. It can mean different things in different organizations, and different things to different people inside a given organization. And roles matter quite a bit when it comes to why people may want to measure risk and how they think about it. Security teams usually are interested in what kind of risk a given vulnerability or incident presents, and trying to mitigate and change that. CISOs might be more interested in measuring risk for the purpose of communicating it to the CEO and board of directors, because that’s where the money comes from. And the board may just want to compare a risk score from one quarter to the next to see whether things are improving.

“Lots of people want to measure risk, everyone from the CISO, who has to report up to the board, to the vendors, to the security teams,” Andy Ellis, former CSO at Akamai, said during a talk on the difficulty of measuring risk at the RSA Conference here last week. “And they all have different reasons for wanting to do that. The board mostly just wants to compare, the CISO wants to communicate, and the security teams and security vendors want you to change.”

But how much of the data and information that these various constituencies rely on is actually useful? It’s difficult to know. Security products are great at gathering, aggregating, and displaying information in dashboards and charts and heat maps. But without context, that information isn’t of much use. Knowing which vulnerabilities and other problems matter the most to your specific organization is what makes a difference, especially in a larger organization that might have hundreds of issues to address at any given time. Is this a rare but potentially critical issue or is it a common but less interesting problem?

“At the end of the day you are making up these numbers. You don’t have any data that fixing very rare problems matters. No company that isn’t selling risk services actually cares about the score of a risk. The goal is comparison,” Ellis said.

“Tell your teams to pick a problem and go fix it. I don’t care what problem it is. And when they fix it, buy them a cake in front of the CEO so other people will see that and think, I want a cake. It doesn’t matter which problem gets solved. Just fix one. Fix an interesting problem and I will buy you cake.”

Of course, if fixing interesting problems were easy, everyone would do it. And everyone is not doing it. Incentives matter when it comes to deciding which problems to address, and for security teams that often means focusing on the issues the CISO cares about. For CISOs, incentives may dictate prioritizing the things that can have the most visible effect.

“Companies have perceived risk. Humans love to stay in the same spot for risk,” Ellis said.

]]>
<![CDATA[RSA Conference 2024: What We Wish People Were Talking About]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/rsa-conference-2024-what-we-wish-people-were-talking-about https://duo.com/decipher/rsa-conference-2024-what-we-wish-people-were-talking-about

It's hard to separate the signal from the noise at the RSA Conference, so we asked a group of experts which topics they wish people were discussing more, including security metrics, applying engineering concepts to security, and more.

]]>
<![CDATA[F5 Fixes Critical RCE Bugs in BIG-IP Next Central Manager]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/f5-fixes-critical-rce-bugs-in-big-ip-next-central-manager https://duo.com/decipher/f5-fixes-critical-rce-bugs-in-big-ip-next-central-manager

F5 has released updates to fix two vulnerabilities that can allow an unauthenticated remote attacker to gain complete control of the company’s BIG-IP Next Central Manager console. The attacker could then take advantage of three separate bugs to add invisible accounts on other BIG-IP devices controlled by the Next Central Manager.

The flaws affect versions 20.0.1 through 20.1.0 of the console. Researchers from Eclypsium discovered the bugs and disclosed them to F5, which released patches on Wednesday. One of the bugs is a SQL injection vulnerability, while the other is an OData injection vulnerability.

“The vulnerabilities we have found would allow an adversary to harness the power of Next Central Manager for malicious purposes. First, the management console of the Central Manager can be remotely exploited by any attacker able to access the administrative UI via CVE-2024-21793 or CVE-2024-26026,” the Eclypsium advisory says.

“This would result in full administrative control of the manager itself. Attackers can then take advantage of the other vulnerabilities to create new accounts on any BIG-IP Next asset managed by the Central Manager. Notably, these new malicious accounts would not be visible from the Central Manager itself.”
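
Eclypsium’s advisory does not reproduce exploit code, but SQL injection is a well-understood class. As a generic sketch, assuming a hypothetical user-lookup handler (not F5’s code), the difference between an injectable query and a parameterized one looks like this:

    // Generic sketch of the SQL injection class, with hypothetical names —
    // not F5's code. Concatenating untrusted input into SQL text is
    // injectable; a parameterized query is not.
    package example

    import "database/sql"

    // findUserUnsafe is vulnerable: a name like `' OR '1'='1` rewrites the query.
    func findUserUnsafe(db *sql.DB, name string) (*sql.Rows, error) {
        return db.Query("SELECT id FROM users WHERE name = '" + name + "'")
    }

    // findUserSafe sends the value separately from the SQL text, so the
    // driver never interprets it as query syntax.
    func findUserSafe(db *sql.DB, name string) (*sql.Rows, error) {
        return db.Query("SELECT id FROM users WHERE name = ?", name)
    }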

Eclypsium researchers said they have not seen any evidence of active exploitation of these flaws, but given their seriousness and the position that F5 BIG-IP devices occupy in enterprise networks, upgrading affected products should be a priority for organizations.

“Once logged into BIG-IP Next Central Manager, the attacker can abuse an SSRF vulnerability to call any API method on any BIG-IP Next device. In this case, one of the available on-device methods will allow the attacker to create on-board accounts on the devices themselves, which are not visible from the Central Manager, and are not supposed to exist. This means that even if the admin password is reset in the Central Manager, and the system is patched, attacker access might still remain,” the Eclypsium advisory says.
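
As a rough illustration of the SSRF pattern the advisory describes, the sketch below uses hypothetical names and is not the Central Manager’s code. The core flaw is a server-side fetch of a caller-supplied URL; host allowlisting is one common mitigation.

    // Minimal SSRF sketch with hypothetical names — not the Central
    // Manager's code. The flaw: the server fetches a caller-supplied URL,
    // so internal device APIs become reachable.
    package example

    import (
        "errors"
        "net/http"
        "net/url"
    )

    // allowedHosts is the mitigation: only proxy requests to known endpoints.
    var allowedHosts = map[string]bool{
        "device1.example.internal:443": true,
    }

    func proxyFetch(rawURL string) (*http.Response, error) {
        u, err := url.Parse(rawURL)
        if err != nil {
            return nil, err
        }
        // Without this check, an attacker could request any internal API
        // (for example, an account-creation endpoint) on the server's behalf.
        if !allowedHosts[u.Host] {
            return nil, errors.New("host not on allowlist")
        }
        return http.Get(u.String())
    }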

]]>
<![CDATA[How CISA is Preparing For the Influx of CIRCIA Reports]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/how-cisa-is-planning-for-the-influx-of-circia-reports https://duo.com/decipher/how-cisa-is-planning-for-the-influx-of-circia-reports

SAN FRANCISCO - The streamlining of incident reporting is a large part of the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), and at RSA Conference this week, a CISA official outlined how the agency is laying the groundwork for the backend processes related to collecting and analyzing the information in these reports.

In the weeks since CISA released the proposed details for CIRCIA - a law signed in 2022 that directed CISA to develop and implement requirements for critical infrastructure entities to report incidents and ransomware payment information to the agency - it has received feedback from public and private sector organizations, mostly centered around how it has defined what a covered incident is and what types of information are considered reportable, said Brandon Wales, executive director of CISA, during a panel at RSA Conference.

“The goal is to get it right and craft a rule that maximizes the benefits… It’s about spotting campaigns earlier, it’s about novel tactics and techniques, and it’s ensuring that the government, importantly, has insights into what’s happening across the entire cyber ecosystem, not only so it can take action, but also so that we can understand the impacts of policies that we make,” said Wales. “Today, when the U.S. government decides on various policy initiatives, we don’t fully know the impact they are having on the ground because we don’t have consistent reporting for critical infrastructure… This will be an important tool to make sure we are calibrating what we do better in the future.”

The law, which will go into effect in 2025, will also mark a shift for CISA in the scale and scope of reported incidents that it receives. The rules apply to an estimated 316,244 entities across the 16 critical infrastructure sectors. Currently, Wales said, CISA opens up to 150,000 tickets in its operations center annually for incidents reported by government agencies. Wales said that CIRCIA will increase the number of reports from private industry, and CISA in its proposed rules estimated that a total of 210,525 CIRCIA reports would be submitted through 2033.

Keeping up with the sharp increase will in part come down to funding from Congress. CISA also estimated that the cost of the proposed rule would be $2.6 billion over the course of 11 years, driven by “initial costs associated with becoming familiar with the proposed rule,” as well as recurring data and records preservation requirements, help desk calls and enforcement actions. Wales said that CISA needs the appropriate level of resources to develop and sustain modern systems and get the right people on board to analyze the influx of data at scale.

However, “we’re not necessarily concerned about the scale of reporting,” said Wales. “We’re putting in place technology that will… enable improved analytic work inside CISA and improve the relationship even with our existing interagency partners.”

The incident reporting rules also tie into an overall effort to harmonize and streamline reporting requirements across the government, and for CISA that will require the ability to share analyzed information to support policy decisions, threat intelligence and overarching trends in the cybersecurity threat landscape. As part of these efforts, CISA is building on existing processes it already has in place with various regulators. For instance, CISA currently collects information on significant security incidents reported by transportation entities under TSA’s security directive and provides it to the TSA in real time.

“By the time the CIRCIA rule is final… I think we’ll be very confident that reports that come in can go to agencies that are required to receive them,” said Wales.

]]>
<![CDATA['Zero Day Piled on Zero Day']]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/zero-day-piled-on-zero-day https://duo.com/decipher/zero-day-piled-on-zero-day

SAN FRANCISCO–The focus from both state actors and cybercrime groups on exploiting edge devices in the last few months has been a serious challenge for enterprises and government agencies, and from all indications, that is not likely to change anytime soon.

Edge devices make attractive targets for attackers thanks to their position in networks as well as the powerful view they can offer once they’re compromised. There has been a string of vulnerabilities disclosed in security appliances and other edge devices in the last few months, some of which were exploited as zero days. One of the most serious cases involved two separate bugs in the Ivanti Connect Secure and Policy Secure appliances that were disclosed in January. Soon after the disclosure the Cybersecurity and Infrastructure Security Agency issued a rare emergency directive requiring federal agencies to disconnect affected devices from the Internet.

“There was zero day piled on zero day in some cases. Those devices are going to be the focus of both state actors and ransomware crews. They’re Internet-facing, they contain large amounts of data and are attractive places for the bypass of security boundaries,” recently retired NSA Director of Cybersecurity Rob Joyce said during a session at the RSA Conference here Wednesday.

“If you’re using these edge devices for protection and that’s your only protection, it’s not good enough. It gives the actor the opportunity to do credential harvesting, lay down persistent presence, use it as an exfiltration point. We really have to think about a broader set of security than just edge devices,” said David Luber, who is Joyce’s successor at NSA.

“You need to have MFA enabled and zero trust behind those devices so there’s minimal opportunity for actors to move.”

NSA has one of the truly unique views of the threat landscape and attacker activity, and while it’s perhaps the most powerful signals intelligence agency on the planet, Joyce and Luber said that the agency and its counterparts still need assistance and collaboration from private sector experts and security companies to counter advanced attackers.

“It’s more important than ever to have our analysts working side by side to work on these advanced threats. It’s not just about what we can bring to bear from an NSA perspective. In many cases we need to do it from a national perspective,” Luber said.

Joyce, who spent 34 years at NSA and had run the agency’s Cybersecurity Directorate since 2021, said the agency’s unique perspective is an asset, but combining that with information from outside companies is vital.

“The ability to go out and marry those two viewpoints together is what we need,” Joyce said.

]]>
<![CDATA[To Fix IoT Security, 'We Need to Aim at the Security Have-Nots']]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/to-fix-iot-security-we-need-to-aim-at-the-security-have-nots https://duo.com/decipher/to-fix-iot-security-we-need-to-aim-at-the-security-have-nots

SAN FRANCISCO–On the long and ever-growing list of security priorities for enterprises and SMBs, IoT devices tend to fall somewhere near the bottom, something that attackers of all stripes have gladly taken advantage of for many years. But government and private sector experts alike are working to change that through regulatory efforts, advocacy, and technical solutions that they hope will raise the security bar and make it easier for organizations to manage and secure IoT devices.

IoT security often is framed as a consumer issue, tied to devices such as home routers, smart appliances, connected light bulbs and all manner of other devices that mostly have no business being on the public Internet. But there are billions of IoT devices sitting on enterprise networks, as well, and many of them have publicly known vulnerabilities in them, a situation that most security teams and IT administrators understand, but often don’t have the time or resources to address. And in many cases, even if a team does have the resources to update or patch IoT devices, it may not be possible.

“These devices are mostly uninspectable, not just by the user but also by the IT admins. Even if they want to hire the right folks to update them, they can’t inspect everything. Those devices are incredibly unmanageable and vulnerable. They’re not built to even remotely the level of security resilience that you see even on a general purpose consumer computer,” Window Snyder, founder and CEO of Thistle Technologies, and a former Apple and Microsoft security leader, said during a panel discussion on IoT security at the RSA Conference here Tuesday.

“We need to develop the kind of confidence in the update process for these devices that we have for things like our phones. The updates for phones are very reliable and we don’t even think about installing them now. We need that kind of reliability. The degree of management that we expect for the enterprise.”

Cybercrime groups, APT teams, and even lone operators have been making a meal out of IoT devices for more than a decade now and have met very little in the way of resistance. IoT devices typically are built for convenience and are meant to have relatively short lives. Security is usually an afterthought, if it’s considered at all, and for the most part, IoT devices run some form of embedded Linux. When a new vulnerability is disclosed in Linux or one of the libraries used in popular IoT devices, attackers have no trouble finding vulnerable targets to go after. Patching those bugs is no simple task in most cases, thanks to often opaque update processes. For example, the small routers used in many SMBs and homes are favorite targets for attackers because updating them is difficult and a failed update can render the device unusable, so they stay vulnerable for long periods of time, if not indefinitely.

“I look at those routers as the first real IoT devices. They’re kind of the canary in the coal mine. The incentives of the person who owns the router are different from the attacker who might think about how to use that router. A small business owner might think, I don’t have anything to steal, why should I update this? They don’t think it’s a security risk, so the incentives aren’t there for them,” said Chris Wysopal, CTO of Veracode and a longtime security researcher.

Aside from the difficulty of patching, another major component of the IoT security challenge is the short shelf life of many devices. Once a device reaches end of life and the manufacturer stops supporting it and providing security updates, owners often have no real options for addressing any security or usability issues. That obsolescence is a feature, not a bug, for the manufacturers. But for owners, it can be a security nightmare.

"“We need to aim at the true security have-nots.”

“There needs to be a conversation between the government and manufacturers about end of life and updates,” said Allan Friedman, a senior advisor and strategist at the Cybersecurity and Infrastructure Security Agency. “We need to aim at the true security have-nots.”

Finding a way to allow device owners to have some measure of control and manageability of their IoT devices has proven to be a difficult task. Several states have enacted right-to-repair laws that enable device owners to modify or fix their devices, but those are the exception rather than the rule.

“I’d like to find a way to create a graceful default for end of life devices so things go into the public domain after a manufacturer stops supporting it or goes out of business. Right now, how do we know if we’re allowed to work on devices? If the company is gone, there needs to be a way to do this,” said Tarah Wheeler, CEO of Red Queen Dynamics and veteran security technologist.

“Having a de facto graceful exit for people who want to work on those things is important.”

The federal government has made some efforts to address IoT security, but it’s still early days for that.

“Too much of our software, including critical software, is shipped with significant vulnerabilities that our adversaries exploit,” the Biden administration said after releasing an executive order on security in 2021. “This is a long-standing, well-known problem, but for too long we have kicked the can down the road. We need to use the purchasing power of the Federal Government to drive the market to build security into all software from the ground up."

A new non-profit called the Secure Resilient Future Foundation is launching this week in an effort to help address the IoT security problems through advocacy and collaboration with the government and industry. Wysopal and Wheeler are both part of the effort, as is Paul Roberts, a longtime security journalist and right-to-repair advocate.

]]>
<![CDATA[Krebs: ‘Business Risk and Geopolitical Risk Are Intertwined’]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/krebs-business-risk-and-geopolitical-risk-are-intertwined https://duo.com/decipher/krebs-business-risk-and-geopolitical-risk-are-intertwined

SAN FRANCISCO - Businesses navigating cybersecurity risks are dealing with the dual challenges of the exploding threat actor landscape, and technology that’s inherently not secure and that by design must be deployed in an extremely complex way.

The overarching concern on the backend of these issues is the increasingly intertwined nature of business risk and geopolitical risk, said Chris Krebs, chief intelligence and public policy officer with SentinelOne and the former director of the Cybersecurity and Infrastructure Security Agency (CISA), speaking at the RSA Conference on Tuesday with Jen Easterly, the current director of CISA. One very relevant example of this entanglement is the targeting by Volt Typhoon attackers of critical infrastructure entities in the U.S., not solely for espionage purposes but to burrow into the organizations’ networks in order to launch disruptive attacks in the event of a major conflict.

“You think about the threat landscape, and... the range of threats we’re dealing with,” said Easterly. “This is a different threat, and it's why we’re talking so much about resilience and about secure by design... [The threat actors] largely take advantage of known flaws and defects. Why? Because for 40-plus years, the technology that's been created, and that now underpins the infrastructure that Americans rely on every hour of every day, is inherently insecure. It was not created to put security first.”

In order to get ahead of these types of threats, Easterly pointed to CISA’s Secure by Design initiative, which the agency has been heavily promoting as a way to push manufacturers to build in various safety and security measures and processes starting in the development phase of their products. This week, more than 60 companies are signing a voluntary pledge committing to taking steps toward Secure by Design, said Easterly.

“It is a voluntary pledge, but we have a platform to advance radical transparency,” said Easterly. “This is a major effort we’re undertaking. It’s the only way we can make ransomware and cyberattacks a shocking anomaly - to make sure the technology is secure.”

There are other means of improving security at the manufacturing and business level beyond voluntary efforts, including civil litigation, said Krebs, citing the SEC’s lawsuit against SolarWinds after its 2020 breach. There are also regulatory actions and legislative measures, such as the SEC’s cyber rules introduced last year for publicly traded companies. Another significant factor is “an awakening of realizing that [security issues] will drive customers away,” said Krebs, pointing to Microsoft’s recent Secure Future Initiative, which it launched after its intrusion last year by a Chinese state-affiliated threat group and an ensuing report by the Cyber Safety Review Board outlining a number of security failures made by the company. Last week, Microsoft CEO Satya Nadella shared a memo highlighting the importance of “prioritizing security above all else,” which among other measures said that senior executives’ compensation will be tied to progress in meeting security milestones.

While the public and private sectors are moving the needle in these areas, the technology landscape isn't standing still, stressed Krebs. New technologies are emerging that are creating more risks in real time for enterprises, including generative AI, which became generally available in 2022. Right now the defensive applications for AI - like use cases related to threat hunting - appear to be outweighing the uses by threat actors for social engineering or translation purposes, but “we have to take a step back and look at what the risk picture looks like with AI,” said Krebs.

“We don’t fully grok where the chinks in the armor are,” said Krebs. “There’s safety, there’s privacy, regulatory, legal, business operations."

]]>
<![CDATA[Decipher Podcast: Kelly Shortridge at RSA Conference]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/decipher-podcast-kelly-shortridge-at-rsa-conference https://duo.com/decipher/decipher-podcast-kelly-shortridge-at-rsa-conference

]]>
<![CDATA[Proposed Bill Focuses on Voluntary AI Security Incident Reporting]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/proposed-bill-would-create-reporting-database-for-ai-security-incidents https://duo.com/decipher/proposed-bill-would-create-reporting-database-for-ai-security-incidents

Senators this week introduced a new bill that would update cybersecurity information-sharing programs to better incorporate AI systems, in an effort to improve the tracking and processing of security incidents and risks associated with AI.

With both private sector companies and U.S. government agencies trying to better understand the security risks and threats associated with generative AI and the deployment of AI systems across various industries, “The Secure Artificial Intelligence Act of 2024" would specifically look at collecting more information around the vulnerabilities and security incidents associated with AI. Currently, the existing processes for vulnerability information sharing - including the National Institute of Standards and Technology's (NIST) National Vulnerability Database and the CISA-sponsored Common Vulnerabilities and Exposures program - "do not reflect the ways in which AI systems can differ dramatically from traditional software," senators Mark Warner (D-Va.) and Thom Tillis (R-NC) said in the overview of their new bill.

“When it comes to security vulnerabilities and incidents involving artificial intelligence (AI), existing federal organizations are poised to leverage their existing cyber expertise and capabilities to provide critically needed support that can protect organizations and the public from adversarial harm,” according to the overview of the bill. “The Secure Artificial Intelligence Act ensures that existing procedures and policies incorporate AI systems wherever possible – and develop alternative models for reporting and tracking in instances where the attributes of an AI system, or its use, render existing practices inapt or inapplicable.”

Under the new bill, these existing databases would need to better incorporate AI-related vulnerabilities, or a new process would need to be created to track the unique risks associated with AI, which include attacks like data poisoning, evasion attacks and privacy-based attacks. Already, researchers have identified various flaws in and around the infrastructure used to develop AI models, and in several cases these have been tracked through known databases and programs. Last year, for instance, the NVD added critical flaws in platforms used for hosting and employing large language models (LLMs), such as an OS command injection bug (CVE-2023-6018) and authentication bypass (CVE-2023-6014) in MLflow, a platform to streamline machine learning development.
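
Command injection of the kind tracked as CVE-2023-6018 follows a well-known pattern. MLflow is written in Python, so the Go sketch below is purely illustrative, with made-up names; it shows how interpolating untrusted input into a shell command differs from passing it as an argument.

    // Generic sketch of the OS command injection class (MLflow itself is
    // Python; this hypothetical Go handler only illustrates the pattern).
    package example

    import "os/exec"

    // convertUnsafe is vulnerable: input like "model; rm -rf /tmp/x" makes
    // the shell run a second command.
    func convertUnsafe(modelPath string) error {
        return exec.Command("sh", "-c", "convert-model "+modelPath).Run()
    }

    // convertSafe passes modelPath as a single argv element that no shell
    // ever parses. ("convert-model" is a made-up tool name.)
    func convertSafe(modelPath string) error {
        return exec.Command("convert-model", modelPath).Run()
    }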

Another priority is to establish a voluntary public database that would track reports of safety and security incidents related to AI. The reported incidents would involve AI systems widely used in the commercial or public sectors, or AI systems used in critical infrastructure or safety-critical systems, where incidents would result in “high-severity or catastrophic impact to the people or economy of the United States."

The bill would also establish an Artificial Intelligence Security Center at the NSA, which would serve as an AI research testbed for private sector researchers and help the industry develop guidance around AI security best practices. Part of this would be to develop an approach for what the bill calls "counter-artificial intelligence,” which covers tactics for manipulating an AI system in order to subvert the confidentiality, integrity or availability of that system. Additionally, it would direct CISA, NIST and the Information and Communications Technology Supply Chain Risk Management Task Force to create a “multi-stakeholder process” for developing best practices related to supply chain risks associated with training and maintaining AI models.

The Secure Artificial Intelligence Act of 2024 joins an influx of other legislative proposals over the past year, and an overall flurry of government activity like the White House’s AI executive order in 2023, to better understand the security risks associated with AI. The Testing and Evaluation Systems for Trusted AI Act, proposed in October 2023 by senators Jim Risch (R-Idaho) and Ben Ray Lujan (D-N.M.), would require NIST and the Department of Energy to develop testbeds for assessing AI tools and supporting “safeguards and systems to test, evaluate, and prevent misuse of AI systems.” Warner has also introduced previous bills centered around AI security, including the Federal Artificial Intelligence Risk Management Act in November 2023, which would establish guidelines to be used within the federal government to mitigate risks associated with AI.

“As we continue to embrace all the opportunities that AI brings, it is imperative that we continue to safeguard against the threats posed by – and to – this new technology, and information sharing between the federal government and the private sector plays a crucial role,” said Warner in a statement. “By ensuring that public-private communications remain open and up-to-date on current threats facing our industry, we are taking the necessary steps to safeguard against this new generation of threats facing our infrastructure.”

]]>
<![CDATA[Attacker Accessed Dropbox Sign User Authentication Data in Recent Intrusion]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/attacker-accessed-dropbox-sign-user-authentication-data-in-recent-intrusion https://duo.com/decipher/attacker-accessed-dropbox-sign-user-authentication-data-in-recent-intrusion

An unidentified attacker recently gained access to a database that held customer information for Dropbox Sign users, including usernames and emails, and authentication information such as API keys, OAuth tokens, and MFA information.

Dropbox on Wednesday disclosed the breach in a notice to the Securities and Exchange Commission and said that it discovered the intrusion on April 24, but did not say when the attacker gained access or how long the intrusion lasted. The company said that there is no evidence at the moment that the attacker accessed any of Dropbox’s other products or services. The company’s security team has already reset users’ passwords, logged them out of any devices that were signed in to Dropbox Sign and is in the process of rotating API keys and OAuth tokens.

“On April 24th, we became aware of unauthorized access to the Dropbox Sign (formerly HelloSign) production environment. Upon further investigation, we discovered that a threat actor had accessed data including Dropbox Sign customer information such as emails, usernames, phone numbers and hashed passwords, in addition to general account settings and certain authentication information such as API keys, OAuth tokens, and multi-factor authentication,” a Dropbox blog on the incident says.

Dropbox Sign is an online document creation and signing service and was formerly known as HelloSign. Company officials said the infrastructure for Dropbox Sign is largely separated from infrastructure used for other Dropbox services.

The attacker was able to access the customer database by compromising a service account that had a variety of privileges and was then able to access an automated system configuration tool.

“The actor compromised a service account that was part of Sign’s back-end, which is a type of non-human account used to execute applications and run automated services. As such, this account had privileges to take a variety of actions within Sign’s production environment. The threat actor then used this access to the production environment to access our customer database,” the blog says.

In its SEC filing, Dropbox said it does not believe the incident will have a material impact on the company’s operations.

]]>
<![CDATA[RSA Conference 2024 Preview: The Sessions to See This Year]]> dennis@decipher.sc (Dennis Fisher)lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/rsa-conference-2024-preview-the-sessions-to-see-this-year https://duo.com/decipher/rsa-conference-2024-preview-the-sessions-to-see-this-year

In this special episode, Dennis Fisher and Lindsey O'Donnell-Welch are joined by Brian Donohue of Red Canary to preview the RSA conference talks they're excited about and to try to make sense of some of the session titles that are maybe a little indecipherable.

]]>
<![CDATA[Senators Reprimand UnitedHealth CEO in Ransomware Hearing]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/senators-reprimand-unitedhealth-ceo-in-ransomware-hearing https://duo.com/decipher/senators-reprimand-unitedhealth-ceo-in-ransomware-hearing

Senators at a Wednesday government hearing had strong words for UnitedHealth Group CEO Andrew Witty about the organization’s lack of security protections leading up to the February Change Healthcare ransomware attack, and the fallout across the healthcare industry that occurred after the attack.

Witty’s statements during the Senate Finance Committee hearing, and later the Energy and Commerce Committee’s Oversight and Investigations subcommittee hearing, stayed largely within the confines of his written testimony, though he did confirm UnitedHealth Group’s $22 million ransom payment and acknowledge that potentially one-third of Americans’ data was stolen. The questions and criticisms from senators across the board, meanwhile, highlighted overarching concerns about the impact of large corporations coming under attack. In this case, attackers targeted Change Healthcare - owned by UnitedHealth Group, the fifth largest company in the U.S., which touches 152 million individuals overall - via a Change Healthcare Citrix remote access portal that didn’t have multi-factor authentication enabled.

“Mr. Witty owes Americans an explanation for how a company of UHG’s size and importance failed to have multi-factor authentication on a server providing open door access to protected health information, why its recovery plans were so woefully inadequate and how long it will take to finally secure all of its systems,” said Sen. Ron Wyden (D-Ore.) during the hearing.

Wyden condemned the attack as an example of the cybersecurity concerns that could arise should a “too big to fail” organization get hit by ransomware. After threat actors deployed the ransomware in February, nine days after gaining initial access via the stolen Citrix credentials, the fallout from the Change Healthcare attack lasted several weeks and crippled healthcare providers, hospitals and pharmacies across the country.

The question of accountability loomed over the Wednesday hearings, and some of the questions centered around whether Witty knew about the lack of security measures, such as MFA, that enabled the attack. This follows a trend previously predicted by Gartner, where CEOs and board members are being increasingly held personally liable for breaches. As part of its cybersecurity rule finalized last year, the SEC also considered requiring companies to describe their board members’ oversight of security risks and cybersecurity expertise.

“UHG has not revealed how many patients’ private medical records were stolen, how many providers went without reimbursement, and how many seniors were unable to pick up their prescriptions as a result of the hack.”

Wyden said that UnitedHealth’s anti-competitive practices likely prolonged the fallout from the ransomware attack, and that the company and its top executives need to take responsibility for the attack.

“Consistently, your views seem to minimize the impact of your involvement,” said Wyden, speaking to Witty during the hearing. “You say that UnitedHealth’s payments processing accounts for only 6 percent of payments in the healthcare system. My view is that’s basically hiding the ball. In 2022 the Department of Justice said that Change retains records of at least 211 million individuals going back to 2012.”

Witty said during the hearing that it’s UnitedHealth’s policy to have MFA enabled for externally facing applications, and that he did not know that MFA wasn’t enabled on the Change server before the attack. He also said that he was not aware of any audits conducted before the attack that identified a lack of MFA on “this particular server” as a compliance or security risk. When asked why MFA wasn’t enabled on the application, the CEO said that Change Healthcare, acquired by UnitedHealth in 2022, came to the company with legacy technologies, and it was in the process of upgrading this technology when the attack occurred.

One other point of contention during the hearing was the compromised data itself. UnitedHealth Group recently said that attackers gained access to some protected health information and personally identifiable information “which could cover a substantial proportion of people in America,” but it will likely take several more months of investigation to fully understand what data was exfiltrated and who has been impacted. Wyden said beyond the sensitive nature of the data stolen - which could include cancer diagnoses or mental health treatment plans - the fact that government and military personnel information is included makes the hack a “clear national security priority.”

“Leaving this sensitive patient information vulnerable to hackers, whether criminals or a foreign government, is a clear national security threat,” said Wyden. “UHG has not revealed how many patients’ private medical records were stolen, how many providers went without reimbursement, and how many seniors were unable to pick up their prescriptions as a result of the hack.”

]]>
<![CDATA[‘Uncharted Territory:’ Companies Devise AI Security Policies]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/uncharted-territory-companies-devise-ai-security-policies https://duo.com/decipher/uncharted-territory-companies-devise-ai-security-policies

Businesses have been preparing and implementing security policies for the utilization of generative AI in the workplace, but many executives say that they still don’t fully understand how AI works and its impacts, according to a new Splunk report.

Splunk’s State of Security 2024 report, released Tuesday and based on a survey of 1,650 security executives across nine countries, highlights how security teams are mulling over generative AI security and data privacy policies in their organizations; 44 percent of respondents listed AI as a top security initiative of 2024 (with 35 percent pointing to cloud security and 20 percent listing security analytics). Most businesses said their employees are actively leveraging AI, leaving CISOs to navigate the best ways to prepare for potential risks that could crop up as AI systems are utilized in their environments.

Despite its high adoption rate, some businesses - around one-third of report respondents - have not implemented corporate security policies clarifying the best security practices around generative AI. At the same time, while AI policies require a deep understanding of the technology itself and potential impacts across the business, 65 percent of respondents acknowledge that they lack education around AI.

“Many individuals lack a foundational understanding of what AI is, how it works, and its potential applications and limitations,” said Mick Baccio, global security advisor at Splunk SURGe. “I’m not implying mastery of machine learning algorithms, neural networks, and other AI techniques is a necessity, but a basic understanding of the systems being used. Like a car, it’s not necessary to know the details of a combustion engine, but a fundamental understanding of how it operates is critical.”

While having a company policy in place does not eliminate security issues, these types of policies can keep executives on course as they think through the security risks and corresponding mitigations associated with AI. For instance, corporate policies should give further clarity about what type of data can be used in public generative AI platforms, and specify the types of sensitive or private data that shouldn’t be used. AI security policies should also take into account areas like access control, training and awareness, and regulatory compliance, said Baccio.

“I think there needs to be a basic understanding of the potential vulnerabilities of AI systems, such as adversarial attacks, data poisoning, and model inversion attacks,” said Baccio.

Perceptions of how generative AI will assist both security defenders and threat actors are also changing. Both businesses and government agencies have been trying to better understand the security issues behind both the development and deployment of AI systems. A new set of DHS guidelines for critical infrastructure entities, released this week, for example, looked at the best security measures for organizations when it comes to attacks using AI, attacks targeting AI systems that support critical infrastructure, and potential failures in the design or implementation of AI that could lead to malfunctions.

Forty-three percent of respondents thought that generative AI would help defenders, pointing to threat intelligence, security risk identification, threat detection and security data summarization as the top AI cybersecurity use cases. Furthermore, half of the respondents said they are in the middle of developing a formal plan for using generative AI for cybersecurity and for addressing potential AI security risks, though they said the plans aren’t complete or agreed upon.

However, 45 percent of respondents said generative AI will help attackers, and 77 percent believe that it “expands the attack surface to a concerning degree.” Respondents said they think that generative AI will make existing attacks more effective and increase the volume of existing attacks. Data leakage is a major concern for organizations.

“Not all AI threats originate from outside sources; 77% of respondents agree that more data leakage will accompany increased use of generative AI,” according to the report. “However, only 49% are actively prioritizing data leakage prevention - possibly because there aren’t many solutions yet that control the flow of data in and out of generative AI tools.”

]]>
<![CDATA[Verizon DBIR: Enterprises Know the Pain of Zero Day Exploits All Too Well]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/verizon-dbir-enterprises-know-the-pain-of-zero-day-exploits-all-too-well https://duo.com/decipher/verizon-dbir-enterprises-know-the-pain-of-zero-day-exploits-all-too-well

Thanks to the emergence of significant flaws in widely deployed products such as MOVEit Transfer, Barracuda ESG appliances, Atlassian Confluence, and others, the past year has seen a 180 percent increase in the use of vulnerability exploits as the initial access vector for data breaches around the world, according to statistical analysis of more than 10,000 breaches.

The significant spike in vulnerability exploitation as an entry point is tied to the use of several zero days and other vulnerabilities by ransomware groups and other cybercrime organizations last year. The MOVEit Transfer flaw (CVE-2023-34362) was a favorite target of several ransomware groups, notably Cl0p, and other actors targeted significant vulnerabilities in Atlassian Confluence, the Barracuda ESG appliances, and Ivanti servers, as well. The Verizon 2024 Data Breach Investigations Report (DBIR), released today, shows that attackers not only target critical flaws in the days right after (or sometimes before) they’re disclosed, but continue to use them in the weeks and months to come.

“This 180% increase in the exploitation of vulnerabilities as the critical path action to initiate a breach will be of no surprise to anyone who has been following the MOVEit vulnerability and other zero-day exploits that were leveraged by Ransomware and Extortion-related threat actors,” the report says.

“This was the sort of result we were expecting in the 2023 DBIR when we analyzed the impact of the Log4j vulnerabilities. That anticipated worst case scenario discussed in the last report materialized this year with this lesser known—but widely deployed—product.”

The Verizon DBIR comprises data from Verizon’s own breach investigations as well as data contributed by dozens of partner organizations, including law enforcement agencies, security companies, platform providers, and incident response firms from around the world. This year’s report includes data on more than 10,000 confirmed breaches across a broad range of industries. The DBIR investigators identified 1,567 individual breaches directly connected to exploitation of the MOVEit Transfer flaw in organizations across industries. Though the report does not have data on when each breach occurred, a survival analysis of vulnerabilities in the Known Exploited Vulnerabilities catalog maintained by the Cybersecurity and Infrastructure Security Agency shows that patching of critical, known exploited bugs doesn’t really ramp up in most organizations until more than 30 days after the first disclosure.

“But before organizations start pointing at themselves saying, “It’s me, hi, I’m the problem,” we must remind ourselves that after following a sensible risk-based analysis, enterprise patch management cycles usually stabilize around 30 to 60 days as the viable target, with maybe a 15-day target for critical vulnerability patching. Sadly, this does not seem to keep pace with the growing speed of threat actor scanning and exploitation of vulnerabilities,” the report says.

“This is not enough to shake the risk off. As we pointed out in the 2023 DBIR, the infamous Log4j vulnerability had nearly a third (32%) of its scanning activity happening in the first 30 days of its disclosure. The industry was very efficient in mitigating and patching affected systems so the damage was minimized, but we cannot realistically expect an industrywide response of that magnitude for every single vulnerability that comes along, be it zero-day or not.”

“If we can’t patch the vulnerabilities faster, it seems like the only logical conclusion is to have fewer of them to patch."

Patch management on an enterprise-level scale is a constant task, not a monthly or even weekly one. Prioritization becomes paramount, and while organizations with mature security programs can rely on vulnerability management and patch management systems, many companies don’t have that luxury and face the daunting task of trying to decide where to allocate their scant resources in order to be the most effective.
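
What risk-based prioritization can look like in its simplest form: the toy sketch below (hypothetical fields, not from the DBIR) orders vulnerabilities by known exploitation, then severity, then age, mirroring the common advice to patch KEV-listed bugs first.

    // A toy sketch of risk-based patch triage under simple assumptions —
    // real programs also weigh asset exposure, exploit availability, and
    // business context.
    package example

    import "sort"

    type Vuln struct {
        CVE      string
        CVSS     float64
        InKEV    bool // listed in CISA's Known Exploited Vulnerabilities catalog
        DaysOpen int  // days since disclosure
    }

    // prioritize orders known-exploited bugs first, then by severity, then by age.
    func prioritize(vulns []Vuln) {
        sort.Slice(vulns, func(i, j int) bool {
            if vulns[i].InKEV != vulns[j].InKEV {
                return vulns[i].InKEV
            }
            if vulns[i].CVSS != vulns[j].CVSS {
                return vulns[i].CVSS > vulns[j].CVSS
            }
            return vulns[i].DaysOpen > vulns[j].DaysOpen
        })
    }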

“We must remind ourselves that these are companies with resources to at least hire a vulnerability management vendor. That tells us that they care about the risk and are taking measures to address it. The overall reality is much worse, and as more ransomware threat actors adopt zero-day and/or recent vulnerabilities, they will definitely fill the blank space in their notification websites with your organization’s name,” the report says.

“If we can’t patch the vulnerabilities faster, it seems like the only logical conclusion is to have fewer of them to patch. We realize this is the stuff of our wildest dreams, but at the very least, organizations should be holding their software vendors accountable for the security outcomes of their product, even if there is no regulatory pressure on those vendors to do better.”

Ransomware actors typically will use whatever tactic is most convenient at the time in order to gain access to an environment, and if that happens to be a new bug in a widely deployed application, then so be it.

“As we gaze into our crystal ball, we wouldn’t be surprised if we continue to see zero-day vulnerabilities being widely leveraged by ransomware groups. If their preference for file transfer platforms continues, this should serve as a caution for those vendors to check their code very closely for common vulnerabilities. Likewise, if your organization utilizes these kinds of platforms—or anything exposed to the internet, for that matter—keep a very close eye on the security patches those vendors release and prioritize their application,” the report says.

]]>
<![CDATA[Memory Safe: Dennis Fisher]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/memory-safe-dennis-fisher https://duo.com/decipher/memory-safe-dennis-fisher

In a special bonus Memory Safe episode, Dennis Fisher, Decipher’s editor in chief, talks about his decades of experience writing about cybersecurity news, the article he authored that inspired him to get into the industry (hint: it involved phishing) and how the cybersecurity news world has changed over the years.

]]>
<![CDATA[Stolen Citrix Credentials Led to Change Ransomware Attack]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/stolen-citrix-credentials-led-to-change-ransomware-attack https://duo.com/decipher/stolen-citrix-credentials-led-to-change-ransomware-attack

Threat actors behind the Change Healthcare ransomware attack in February gained initial access by using compromised credentials for a Citrix remote access portal that did not have multi-factor authentication enabled. The initial access vector was disclosed in written testimony from Andrew Witty, CEO of Change’s parent company UnitedHealth Group, released ahead of his appearance Wednesday before the House Energy and Commerce subcommittee.
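For illustration only, and not drawn from Witty’s testimony, the hypothetical sketch below shows the kind of control that was reportedly absent: a time-based one-time password (TOTP) check as a second factor after password validation, here using the open-source pyotp library rather than anything Citrix-specific.

```python
# Minimal sketch (hypothetical, not Citrix-specific) of a TOTP second
# factor: login succeeds only if the user's one-time code verifies
# against their enrolled secret, in addition to the password check.
import pyotp

def enroll_user() -> str:
    # Generate a per-user secret at enrollment, share it with the user's
    # authenticator app (typically via QR code), and store it server-side.
    return pyotp.random_base32()

def verify_second_factor(secret: str, submitted_code: str) -> bool:
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)

# Example flow: the password check (not shown) AND the TOTP must both pass.
secret = enroll_user()
print(verify_second_factor(secret, pyotp.TOTP(secret).now()))  # True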

The issue of compromised credentials continues to haunt organizations, especially as attackers increasingly rely on identity-centric tactics. According to Witty, the threat actors on Feb. 12 remotely compromised the account for the Change Healthcare Citrix application used to enable remote access to desktops. After gaining access, they moved laterally within the systems “in more sophisticated ways” in order to exfiltrate data. Nine days later, the threat actors deployed the ransomware. In the testimony, Witty also addressed his decision to pay a reported $22 million ransom to the attackers.

“As we have addressed the many challenges in responding to this attack, including dealing with the demand for ransom, I have been guided by the overriding priority to do everything possible to protect people’s personal health information,” according to Witty’s testimony. “As chief executive officer, the decision to pay a ransom was mine. This was one of the hardest decisions I’ve ever had to make. And I wouldn’t wish it on anyone.”

Witty’s testimony also sheds light on the company’s incident response procedures following the attack. After the attack occurred, connectivity to Change environments was severed. Experts from Google, Microsoft, Cisco, Amazon, Mandiant and Palo Alto Networks, along with government agencies including the Department of Health and Human Services and the FBI, offered support in mitigating the attack.

“Together with our Change Healthcare colleagues, they immediately began the around-the-clock and enormously complex task of safely and securely rebuilding Change Healthcare’s technology infrastructure from the ground up,” according to Witty’s testimony. “The team replaced thousands of laptops, rotated credentials, rebuilt Change Healthcare’s data center network and core services, and added new server capacity. The team delivered a new technology environment in just weeks – an undertaking that would have taken many months under normal circumstances.”

“Given the ongoing nature and complexity of the data review, it is likely to take several months of continued analysis before enough information will be available to identify and notify impacted customers and individuals.”

Over the course of the past two months, UnitedHealth Group has slowly filled in the blanks on the many lingering questions around the ransomware attack. Most recently, Change Healthcare determined that the attackers gained access to some protected health information and personally identifiable information “which could cover a substantial proportion of people in America.” Witty in his testimony said that it will likely take several more months of investigation to fully understand what data was exfiltrated and who has been impacted.

“Given the ongoing nature and complexity of the data review, it is likely to take several months of continued analysis before enough information will be available to identify and notify impacted customers and individuals, partly because the files containing that data were compromised in the cyberattack,” according to Witty’s testimony. “Our teams, along with leading external industry experts, continue to monitor the internet and dark web to determine if data has been published.”

One aspect likely to be discussed further in the Wednesday hearing is the security implications of the sheer number of hospitals, healthcare providers and patients that rely on Change Healthcare. The attack disrupted many of Change Healthcare’s operations, but because the company handles data, payments and claims processing for a huge chunk of the U.S. healthcare industry, it also caused massive delays for thousands of providers and pharmacies around the country.

Witty will face more questions about the ransomware attack, and its impact on the wider healthcare sector, during Wednesday’s House Energy and Commerce subcommittee hearing. An April 15 letter from the subcommittee’s leaders, including Chair Cathy McMorris Rodgers (R-Wash.), requested more information about the timeline of the attack, how the breach was detected and how impacted healthcare organizations were notified and supported. The letter also inquired about Change Healthcare’s security protocols, including whether UnitedHealth modified its cybersecurity incident response, prevention and detection processes after acquiring Change Healthcare in 2022.

“The health care system is rapidly consolidating at virtually every level, creating fewer redundancies and more vulnerability to the entire system if an entity with significant market share at any level of the system is compromised,” according to the letter. “It is important for policymakers to understand the events leading up to, during, and after the Change Healthcare cyberattack.”

]]>
<![CDATA[DHS Releases AI Security Guidance for Critical Infrastructure]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/dhs-releases-ai-security-guidelines-for-critical-infrastructure-sector https://duo.com/decipher/dhs-releases-ai-security-guidelines-for-critical-infrastructure-sector

New AI security guidelines from the Department of Homeland Security (DHS) give critical infrastructure operators a better understanding of the top risks associated with AI systems, and how to best approach the unique security issues that could arise from these risks.

The guidelines, released by the DHS on Monday as directed by the Biden administration’s AI executive order last year, look at how critical infrastructure entities can best be secured against the various risks associated with AI. These include both attacks using AI, such as AI-enabled compromises or social engineering, and attacks targeting AI systems that support critical infrastructure, such as adversarial manipulation of AI algorithms. The report also takes into account a third significant AI risk category: potential failures in the design or implementation of AI that could lead to malfunctions in critical infrastructure operations.
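To make “adversarial manipulation” concrete, here is a toy example of our own, not something taken from the DHS guidance: the classic fast gradient sign method (FGSM) nudges an input just enough to flip a model’s decision. The classifier, weights and inputs below are arbitrary illustrative values.

```python
# Toy illustration of adversarial manipulation via the fast gradient
# sign method (FGSM). The "model" is a tiny hand-set logistic classifier,
# a stand-in, not any real critical-infrastructure system.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.5])  # hand-set model weights
b = 0.1
x = np.array([1.0, 0.5])   # benign input, scored well above 0.5
y = 1.0                    # its true label

# Gradient of the log loss with respect to the input: (p - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: push each feature one step in the loss-increasing direction,
# within a perturbation budget eps.
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

print(f"clean score: {sigmoid(w @ x + b):.2f}")            # ~0.79 -> class 1
print(f"adversarial score: {sigmoid(w @ x_adv + b):.2f}")  # ~0.32 -> flipped
```

Against real models the same idea scales up, which is why the guidance treats attacks on AI systems as a distinct risk category.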

“AI can present transformative solutions for U.S. critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyber attacks,” said Secretary of Homeland Security Alejandro N. Mayorkas in a statement on Monday. “Our Department is taking steps to identify and mitigate those threats.”

The guidance consists of a four-phase mitigation strategy that builds on the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. The four parts are: a governance component directing critical infrastructure organizations to prioritize safety and security outcomes in their AI risk management; a mapping phase to help entities better understand their AI risks; a measurement phase for organizations to develop systems that can assess and track those risks; and a management phase urging organizations to implement risk management controls for AI systems.

The DHS’s guidelines this week give some clarity to CISOs and security teams navigating how best to approach the issues that could crop up as AI systems are deployed in their environments. As generative AI in particular has surged in popularity, several government agencies and private sector companies over the past year have closely studied the best ways to mitigate various AI-associated threats. Still, the guidelines from the DHS and other government entities are not mandatory requirements. Experts in the security industry have called for regulation, and have also pointed to a significant security challenge for AI: many AI systems are built on large language models (LLMs) that carry inherent risks of their own, such as the potential for polluted data or opaque model architectures. The DHS in its guidance did say that AI vendors should take on certain mitigation responsibilities, and that critical infrastructure organizations need to understand where dependencies on AI vendors exist in their environments.

“In many cases, AI vendors will also play a major role in ensuring the safe and secure use of AI systems for critical infrastructure,” according to the DHS guidance. “Certain guidelines apply both to critical infrastructure owners and operators as well as AI vendors. Critical infrastructure owners and operators should understand where these dependencies on AI vendors exist and work to share and delineate mitigation responsibilities accordingly.”

The DHS report is one of many mandates ordered by the White House’s AI executive order in October. The executive order, which attempted to set the stage for developing and deploying what it calls “responsible AI,” also asked the DHS to create an AI safety and security board to look at how the AI standards developed by NIST could be applied to the critical infrastructure sectors, the potential risks that crop up from the use of AI in critical infrastructure sectors, and how AI could be used by the critical infrastructure community to improve security and incident response.

The DHS on Friday officially launched that board, which includes 22 representatives from a range of sectors, among them OpenAI, Nvidia, Cisco, Delta Air Lines and Humane Intelligence. In the months since the executive order, the DHS has also published an AI roadmap detailing its current and future uses of AI and has implemented various pilot projects to test AI technology.

]]>