<![CDATA[Decipher]]> https://decipher.sc Decipher is an independent editorial site that takes a practical approach to covering information security. Through news analysis and in-depth features, Decipher explores the impact of the latest risks and provides informative and educational material for readers curious about how security affects our world. en-us info@decipher.sc (Amy Vazquez) Copyright 2024 3600 <![CDATA[Decipher Podcast: Kelly Shortridge at RSA Conference]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/decipher-podcast-kelly-shortridge-at-rsa-conference https://duo.com/decipher/decipher-podcast-kelly-shortridge-at-rsa-conference

]]>
<![CDATA[Proposed Bill Focuses on Voluntary AI Security Incident Reporting]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/proposed-bill-would-create-reporting-database-for-ai-security-incidents https://duo.com/decipher/proposed-bill-would-create-reporting-database-for-ai-security-incidents

Senators this week introduced a new bill that would update cybersecurity information-sharing programs to better incorporate AI systems, in an effort to improve the tracking and processing of security incidents and risks associated with AI.

With both private sector companies and U.S. government agencies trying to better understand the security risks and threats associated with generative AI and the deployment of AI systems across various industries, the “Secure Artificial Intelligence Act of 2024” would specifically look at collecting more information about the vulnerabilities and security incidents associated with AI. Currently, the existing processes for vulnerability information sharing, including the National Institute of Standards and Technology's (NIST) National Vulnerability Database and the CISA-sponsored Common Vulnerabilities and Exposures program, "do not reflect the ways in which AI systems can differ dramatically from traditional software," senators Mark Warner (D-Va.) and Thom Tillis (R-N.C.) said in the overview of their new bill.

“When it comes to security vulnerabilities and incidents involving artificial intelligence (AI), existing federal organizations are poised to leverage their existing cyber expertise and capabilities to provide critically needed support that can protect organizations and the public from adversarial harm,” according to the overview of the bill. “The Secure Artificial Intelligence Act ensures that existing procedures and policies incorporate AI systems wherever possible – and develop alternative models for reporting and tracking in instances where the attributes of an AI system, or its use, render existing practices inapt or inapplicable.”

Under the new bill, these existing databases would need to better incorporate AI-related vulnerabilities, or a new process would need to be created to track the unique risks associated with AI, which include attacks like data poisoning, evasion attacks and privacy-based attacks. Already, researchers have identified various flaws in and around the infrastructure used to develop AI models, and in several cases these have been tracked through known databases and programs. Last year, for instance, the NVD added critical flaws in platforms used for hosting and deploying large language models (LLMs), such as an OS command injection bug (CVE-2023-6018) and an authentication bypass (CVE-2023-6014) in MLflow, a platform that streamlines machine learning development.
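The data poisoning risk mentioned above can be made concrete with a toy sketch (our own illustration, not drawn from the bill or the CVE entries): flipping a handful of training labels shifts a simple model's learned class boundary, which changes its prediction for an unseen input.

```python
# Toy label-flipping data poisoning demo using a nearest-centroid classifier
# on 1-D points. All names and data here are illustrative.

def centroids(points, labels):
    """Compute the mean of the points belonging to each class."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    """Assign x to the class whose centroid is nearest."""
    return min(cents, key=lambda y: abs(cents[y] - x))

clean_x = [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]
clean_y = ["low", "low", "low", "high", "high", "high"]

# Attacker flips two "high" training labels to "low", dragging the
# "low" centroid toward the "high" region.
poisoned_y = list(clean_y)
poisoned_y[3] = "low"
poisoned_y[4] = "low"

print(predict(centroids(clean_x, clean_y), 5.5))     # → high
print(predict(centroids(clean_x, poisoned_y), 5.5))  # → low
```

Even this trivial model shows why a vulnerability database built around traditional software flaws has trouble describing such attacks: the "bug" lives in the training data, not in any line of code.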

Another priority is to establish a voluntary public database that would track reports of safety and security incidents related to AI. The reported incidents would involve AI systems widely used in the commercial or public sectors, or AI systems used in critical infrastructure or safety-critical systems, and would result in “high-severity or catastrophic impact to the people or economy of the United States.”

The bill would also establish an Artificial Intelligence Security Center at the NSA, which would serve as an AI research testbed for private sector researchers and help the industry develop guidance around best AI security practices. Part of this would be to develop an approach for what the bill calls “counter-artificial intelligence”: tactics for manipulating an AI system in order to subvert the confidentiality, integrity or availability of that system. Additionally, it would direct CISA, NIST and the Information Communications Technology Supply Chain Risk Management Task Force to create a “multi-stakeholder process” for developing best practices related to supply chain risks associated with training and maintaining AI models.

The Secure Artificial Intelligence Act of 2024 joins an influx of other legislative proposals over the past year, and an overall flurry of government activity like the White House’s AI executive order in 2023, to better understand the security risks associated with AI. The Testing and Evaluation Systems for Trusted AI Act, proposed in October 2023 by senators Jim Risch (R-Idaho) and Ben Ray Lujan (D-N.M.), would require NIST and the Department of Energy to develop testbeds for assessing AI tools and supporting “safeguards and systems to test, evaluate, and prevent misuse of AI systems.” Warner has also introduced previous bills centered on AI security, including the Federal Artificial Intelligence Risk Management Act in November 2023, which would establish guidelines to be used within the federal government to mitigate risks associated with AI.

“As we continue to embrace all the opportunities that AI brings, it is imperative that we continue to safeguard against the threats posed by – and to – this new technology, and information sharing between the federal government and the private sector plays a crucial role,” said Warner in a statement. “By ensuring that public-private communications remain open and up-to-date on current threats facing our industry, we are taking the necessary steps to safeguard against this new generation of threats facing our infrastructure.”

]]>
<![CDATA[RSA Conference 2024 Preview: The Sessions to See This Year]]> dennis@decipher.sc (Dennis Fisher) lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/rsa-conference-2024-preview-the-sessions-to-see-this-year https://duo.com/decipher/rsa-conference-2024-preview-the-sessions-to-see-this-year

In this special episode, Dennis Fisher and Lindsey O'Donnell-Welch are joined by Brian Donohue of Red Canary to preview the RSA conference talks they're excited about and to try to make sense of some of the session titles that are maybe a little indecipherable.

]]>
<![CDATA[Attacker Accessed Dropbox Sign User Authentication Data in Recent Intrusion]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/attacker-accessed-dropbox-sign-user-authentication-data-in-recent-intrusion https://duo.com/decipher/attacker-accessed-dropbox-sign-user-authentication-data-in-recent-intrusion

An unidentified attacker recently gained access to a database that held customer information for Dropbox Sign users, including usernames and emails, and authentication information such as API keys, OAuth tokens, and MFA information.

Dropbox on Wednesday disclosed the breach in a notice to the Securities and Exchange Commission and said that it discovered the intrusion on April 24, but did not say when the attacker gained access or how long the intrusion lasted. The company said that there is no evidence at the moment that the attacker accessed any of Dropbox’s other products or services. The company’s security team has already reset users’ passwords, logged them out of any devices that were signed in to Dropbox Sign and is in the process of rotating API keys and OAuth tokens.
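Rotating API keys after a compromise can be sketched in a few lines (a hypothetical illustration in Python; the class and method names are ours, not Dropbox's): the response revokes every issued key at once and forces re-issuance, so credentials the attacker copied stop working even though the attacker still holds them.

```python
# Minimal breach-response sketch: a key store that can revoke and
# reissue every API key in one sweep. Illustrative only.
import secrets

class ApiKeyStore:
    def __init__(self):
        self._keys = {}  # user -> currently valid key

    def issue(self, user):
        """Generate and record a fresh random key for a user."""
        key = secrets.token_urlsafe(32)
        self._keys[user] = key
        return key

    def is_valid(self, user, key):
        # compare_digest avoids leaking information via timing
        current = self._keys.get(user)
        return current is not None and secrets.compare_digest(current, key)

    def rotate_all(self):
        """Breach response: revoke and reissue every key at once."""
        return {user: self.issue(user) for user in list(self._keys)}

store = ApiKeyStore()
old_key = store.issue("alice")
new_key = store.rotate_all()["alice"]
print(store.is_valid("alice", old_key))  # → False: stolen key is dead
print(store.is_valid("alice", new_key))  # → True
```

The same principle applies to OAuth tokens: invalidating them server-side is what makes the rotation effective, since the attacker's copies cannot be clawed back.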

“On April 24th, we became aware of unauthorized access to the Dropbox Sign (formerly HelloSign) production environment. Upon further investigation, we discovered that a threat actor had accessed data including Dropbox Sign customer information such as emails, usernames, phone numbers and hashed passwords, in addition to general account settings and certain authentication information such as API keys, OAuth tokens, and multi-factor authentication,” a Dropbox blog on the incident says.

Dropbox Sign is an online document creation and signing service and was formerly known as HelloSign. Company officials said the infrastructure for Dropbox Sign is largely separated from infrastructure used for other Dropbox services.

The attacker was able to access the customer database by compromising a service account that had a variety of privileges and was able to then access an automated system configuration tool.

“The actor compromised a service account that was part of Sign’s back-end, which is a type of non-human account used to execute applications and run automated services. As such, this account had privileges to take a variety of actions within Sign’s production environment. The threat actor then used this access to the production environment to access our customer database,” the blog says.

In its SEC filing, Dropbox officials said they do not believe this incident will have a material impact on the company’s operations.

]]>
<![CDATA[Verizon DBIR: Enterprises Know the Pain of Zero Day Exploits All Too Well]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/verizon-dbir-enterprises-know-the-pain-of-zero-day-exploits-all-too-well https://duo.com/decipher/verizon-dbir-enterprises-know-the-pain-of-zero-day-exploits-all-too-well

Thanks to the emergence of significant flaws in widely deployed products such as MOVEit Transfer, Barracuda ESG appliances, Atlassian Confluence and others, the past year has seen a roughly 180 percent increase in the use of vulnerability exploits as the initial access vector for data breaches around the world, according to statistical analysis of more than 10,000 breaches.

The significant spike in vulnerability exploitation as an entry point is tied to the use of several zero days and other vulnerabilities by ransomware groups and other cybercrime organizations last year. The MOVEit Transfer flaw (CVE-2023-34362) was a favorite target of several ransomware groups, notably Cl0p, and other actors targeted significant vulnerabilities in Atlassian Confluence, the Barracuda ESG appliances, and Ivanti servers, as well. The Verizon 2024 Data Breach Investigations Report (DBIR), released today, shows that attackers not only target critical flaws in the days right after (or sometimes before) they’re disclosed, but continue to use them in the weeks and months to come.

“This 180% increase in the exploitation of vulnerabilities as the critical path action to initiate a breach will be of no surprise to anyone who has been following the MOVEit vulnerability and other zero-day exploits that were leveraged by Ransomware and Extortion-related threat actors,” the report says.

“This was the sort of result we were expecting in the 2023 DBIR when we analyzed the impact of the Log4j vulnerabilities. That anticipated worst case scenario discussed in the last report materialized this year with this lesser known—but widely deployed—product.”

The Verizon DBIR comprises data from Verizon’s own breach investigations as well as data contributed by dozens of partner organizations, including law enforcement agencies, security companies, platform providers, and incident response firms from around the world. This year’s report includes data on more than 10,000 confirmed breaches across a broad range of industries. The DBIR investigators identified 1,567 individual breaches directly connected to exploitation of the MOVEit Transfer flaw in organizations across industries. Though the report does not have data on when each breach occurred, a survival analysis of vulnerabilities in the Known Exploited Vulnerabilities catalog maintained by the Cybersecurity and Infrastructure Security Agency shows that patching of critical, known exploited bugs doesn’t really ramp up in most organizations until more than 30 days after the first disclosure.

“But before organizations start pointing at themselves saying, ‘It’s me, hi, I’m the problem,’ we must remind ourselves that after following a sensible risk-based analysis, enterprise patch management cycles usually stabilize around 30 to 60 days as the viable target, with maybe a 15-day target for critical vulnerability patching. Sadly, this does not seem to keep pace with the growing speed of threat actor scanning and exploitation of vulnerabilities,” the report says.

“This is not enough to shake the risk off. As we pointed out in the 2023 DBIR, the infamous Log4j vulnerability had nearly a third (32%) of its scanning activity happening in the first 30 days of its disclosure. The industry was very efficient in mitigating and patching affected systems so the damage was minimized, but we cannot realistically expect an industrywide response of that magnitude for every single vulnerability that comes along, be it zero-day or not.”

“If we can’t patch the vulnerabilities faster, it seems like the only logical conclusion is to have fewer of them to patch."

Patch management on an enterprise-level scale is a constant task, not a monthly or even weekly one. Prioritization becomes paramount, and while organizations with mature security programs can rely on vulnerability management and patch management systems, many companies don’t have that luxury and face the daunting task of trying to decide where to allocate their scant resources in order to be the most effective.

“We must remind ourselves that these are companies with resources to at least hire a vulnerability management vendor. That tells us that they care about the risk and are taking measures to address it. The overall reality is much worse, and as more ransomware threat actors adopt zero-day and/or recent vulnerabilities, they will definitely fill the blank space in their notification websites with your organization’s name,” the report says.

“If we can’t patch the vulnerabilities faster, it seems like the only logical conclusion is to have fewer of them to patch. We realize this is the stuff of our wildest dreams, but at the very least, organizations should be holding their software vendors accountable for the security outcomes of their product, even if there is no regulatory pressure on those vendors to do better.”

Ransomware actors typically will use whatever tactic is most convenient at the time in order to gain access to an environment, and if that happens to be a new bug in a widely deployed application, then so be it.

“As we gaze into our crystal ball, we wouldn’t be surprised if we continue to see zero-day vulnerabilities being widely leveraged by ransomware groups. If their preference for file transfer platforms continues, this should serve as a caution for those vendors to check their code very closely for common vulnerabilities. Likewise, if your organization utilizes these kinds of platforms—or anything exposed to the internet, for that matter—keep a very close eye on the security patches those vendors release and prioritize their application,” the report says.

]]>
<![CDATA[‘Uncharted Territory:’ Companies Devise AI Security Policies]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/uncharted-territory-companies-devise-ai-security-policies https://duo.com/decipher/uncharted-territory-companies-devise-ai-security-policies

Businesses have been preparing and implementing security policies for the utilization of generative AI in the workplace, but many executives say that they still don’t fully understand how AI works and its impacts, according to a new Splunk report.

Splunk’s State of Security 2024 report, released Tuesday and based on a survey of 1,650 security executives across nine countries, highlights how security teams are mulling over generative AI security and data privacy policies in their organizations, and 44 percent of respondents listed AI as a top security initiative for 2024 (with 35 percent pointing to cloud security and 20 percent listing security analytics). Most businesses said their employees are actively leveraging AI, leaving CISOs to navigate the best ways to prepare for potential risks that could crop up as AI systems are utilized in their environments.

Despite its high adoption rate, some businesses, around one-third of report respondents, have not implemented corporate security policies clarifying the best security practices around generative AI. At the same time, while AI policies require a deep understanding of the technology itself and its potential impacts across the business, 65 percent of respondents acknowledge that they lack education around AI.

“Many individuals lack a foundational understanding of what AI is, how it works, and its potential applications and limitations,” said Mick Baccio, global security advisor at Splunk SURGe. “I’m not implying mastery of machine learning algorithms, neural networks, and other AI techniques is a necessity, but a basic understanding of the systems being used. Like a car, it’s not necessary to know the details of a combustion engine, but a fundamental understanding of how it operates is critical.”

While having a company policy in place does not eliminate security issues, these types of policies can keep the ship on the right course in helping executives think through the security risks and corresponding mitigations associated with AI. For instance, corporate policies should give further clarity about what type of data can be used in public generative AI platforms, and specify the types of sensitive or private data that shouldn’t be used. AI security policies should also take into account areas like access control, training and awareness and regulatory compliance, said Baccio.

“I think there needs to be a basic understanding of the potential vulnerabilities of AI systems, such as adversarial attacks, data poisoning, and model inversion attacks,” said Baccio.

Perceptions of how generative AI will assist both security defenders and threat actors are also changing. Both businesses and government agencies have been trying to better understand the security issues behind both the development and deployment of AI systems. A new set of DHS guidelines for critical infrastructure entities, released this week, for example, looked at the best security measures for organizations when it comes to attacks using AI, attacks targeting AI systems that support critical infrastructure, and potential failures in the design or implementation of AI that could lead to malfunctions.

Forty-three percent of respondents thought that generative AI would help defenders, pointing to threat intelligence, security risk identification, threat detection and security data summarization as the top AI cybersecurity use cases. Furthermore, half of the respondents said they are in the middle of developing a formal plan for using generative AI for cybersecurity and for addressing potential AI security risks, though they said the plans aren’t complete or agreed upon.

However, 45 percent of respondents said generative AI will help attackers, and 77 percent believe that it “expands the attack surface to a concerning degree.” Respondents said they think that generative AI will make existing attacks more effective and increase the volume of existing attacks. Data leakage is a major concern for organizations.

“Not all AI threats originate from outside sources; 77% of respondents agree that more data leakage will accompany increased use of generative AI,” according to the report. “However, only 49% are actively prioritizing data leakage prevention – possibly because there aren’t many solutions yet that control the flow of data in and out of generative AI tools.”

]]>
<![CDATA[Senators Reprimand UnitedHealth CEO in Ransomware Hearing]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/senators-reprimand-unitedhealth-ceo-in-ransomware-hearing https://duo.com/decipher/senators-reprimand-unitedhealth-ceo-in-ransomware-hearing

Senators at a Wednesday hearing had strong words for UnitedHealth Group CEO Andrew Witty about the organization’s lack of security protections leading up to the February Change Healthcare ransomware attack, and the fallout that followed across the healthcare industry.

Witty’s statements during the Senate Finance Committee hearing, and later the Energy and Commerce Oversight and Investigations subcommittee hearing, stayed largely within the confines of his written testimony, though he did confirm UnitedHealth Group’s $22 million ransom payment and acknowledge that potentially one-third of Americans’ data was stolen. The questions and criticisms from senators across the board, meanwhile, highlighted overarching concerns about the impact of large corporations coming under attack. In this case, attackers targeted Change Healthcare, owned by UnitedHealth Group, the fifth-largest company in the U.S., whose services touch 152 million individuals, via a Change Healthcare Citrix remote access portal that didn’t have multi-factor authentication enabled.

“Mr. Witty owes Americans an explanation for how a company of UHG’s size and importance failed to have multi-factor authentication on a server providing open door access to protected health information, why its recovery plans were so woefully inadequate and how long it will take to finally secure all of its systems,” said Sen. Ron Wyden (D-Ore.) during the hearing.

Wyden condemned the attack as an example of the cybersecurity concerns that could arise should a “too big to fail” organization get hit by ransomware. After threat actors deployed the ransomware in February, nine days after gaining initial access via the stolen Citrix credentials, the fallout from the Change Healthcare attack lasted several weeks and crippled healthcare providers, hospitals and pharmacies across the country.

The question of accountability loomed over the Wednesday hearings, and some of the questions centered around whether Witty knew about the lack of security measures, such as MFA, that enabled the attack. This follows a trend previously predicted by Gartner, where CEOs and board members are being increasingly held personally liable for breaches. As part of its cybersecurity rule finalized last year, the SEC also considered requiring companies to describe their board members’ oversight of security risks and cybersecurity expertise.

“UHG has not revealed how many patients’ private medical records were stolen, how many providers went without reimbursement, and how many seniors were unable to pick up their prescriptions as a result of the hack.”

Wyden said that UnitedHealth’s anti-competitive practices likely prolonged the fallout from the ransomware attack, and that the company and its top executives need to take responsibility for the attack.

“Consistently, your views seem to minimize the impact of your involvement,” said Wyden, speaking to Witty during the hearing. “You say that UnitedHealth’s payments processing accounts for only 6 percent of payments in the healthcare system. My view is that’s basically hiding the ball. In 2022 the Department of Justice said that Change retains records of at least 211 million individuals going back to 2012.”

Witty during the hearing said that it’s UnitedHealth’s policy to have MFA enabled for externally facing applications and said that he did not know that MFA wasn’t enabled on the Change server before the attack. He also said that he was not aware of any audits conducted before the attack that identified a lack of MFA on “this particular server” as a compliance or security risk. When asked why MFA wasn’t enabled on the application, the CEO said that Change Healthcare, acquired by UnitedHealth in 2022, brought legacy technologies with it, and the company was in the process of upgrading that technology when the attack occurred.

One other point of contention during the hearing was the compromised data itself. UnitedHealth Group recently said that attackers gained access to some protected health information and personally identifiable information “which could cover a substantial proportion of people in America,” but it will likely take several more months of investigation to fully understand what data was exfiltrated and who has been impacted. Wyden said that beyond the sensitive nature of the stolen data, which could include cancer diagnoses or mental health treatment plans, the fact that government and military personnel information is included makes the hack a “clear national security priority.”

“Leaving this sensitive patient information vulnerable to hackers, whether criminals or a foreign government, is a clear national security threat,” said Wyden. “UHG has not revealed how many patients’ private medical records were stolen, how many providers went without reimbursement, and how many seniors were unable to pick up their prescriptions as a result of the hack.”

]]>
<![CDATA[Memory Safe: Dennis Fisher]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/memory-safe-dennis-fisher https://duo.com/decipher/memory-safe-dennis-fisher

In a special bonus Memory Safe episode, Dennis Fisher, Decipher’s editor in chief, talks about his decades of experience writing about cybersecurity news, the article he authored that inspired him to get into the industry (hint: it involved phishing) and how the cybersecurity news world has changed over the years.

]]>
<![CDATA[Stolen Citrix Credentials Led to Change Ransomware Attack]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/stolen-citrix-credentials-led-to-change-ransomware-attack https://duo.com/decipher/stolen-citrix-credentials-led-to-change-ransomware-attack

Threat actors behind the Change Healthcare ransomware attack in February were able to gain initial access by leveraging compromised credentials for a Citrix remote access portal, which didn’t have multi-factor authentication enabled. The initial access vector was revealed in written testimony from Andrew Witty, CEO of Change’s parent company UnitedHealth Group, released ahead of his Wednesday appearance before a House Energy and Commerce subcommittee.

The issue of compromised credentials continues to haunt organizations, especially as attackers increasingly rely on identity-centric tactics. According to Witty, the threat actors on Feb. 12 were able to remotely compromise the account for the Change Healthcare Citrix application used to enable remote access to desktops. After gaining access, they then moved laterally within the systems “in more sophisticated ways” in order to exfiltrate data. Nine days later, the threat actors deployed the ransomware. In the testimony, Witty also addressed his decision to pay a reported $22 million ransom to the attackers.
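The missing control here, multi-factor authentication, can be illustrated with a generic sketch (this is a standard RFC 6238 time-based one-time password check written by us, not Citrix's or Change Healthcare's actual MFA implementation): with a check like this in place, a stolen password alone is not enough to log in.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Derive the one-time code for a base32 shared secret at a given time."""
    key = base64.b32decode(secret_b32)
    counter = int((for_time if for_time is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32, submitted, for_time=None):
    """Constant-time comparison of a submitted code against the expected one."""
    return hmac.compare_digest(totp(secret_b32, for_time), submitted)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59 seconds.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, for_time=59))  # → 287082 (RFC 6238 SHA-1 vector, 6 digits)
```

A production deployment would also need a tolerance window for clock drift and replay protection, but even this bare check means an attacker with only the password is stopped at the portal.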

“As we have addressed the many challenges in responding to this attack, including dealing with the demand for ransom, I have been guided by the overriding priority to do everything possible to protect peoples’ personal health information,” according to Witty’s testimony. “As chief executive officer, the decision to pay a ransom was mine. This was one of the hardest decisions I’ve ever had to make. And I wouldn’t wish it on anyone.”

Witty’s testimony also sheds light on the company’s incident response procedures following the attack. After the attack occurred, connectivity to Change environments was severed. Experts from Google, Microsoft, Cisco, Amazon, Mandiant and Palo Alto Networks offered support in mitigating the attack, as did government agencies such as the Department of Health and Human Services and the FBI.

“Together with our Change Healthcare colleagues, they immediately began the around-the-clock and enormously complex task of safely and securely rebuilding Change Healthcare’s technology infrastructure from the ground up,” according to Witty’s testimony. “The team replaced thousands of laptops, rotated credentials, rebuilt Change Healthcare’s data center network and core services, and added new server capacity. The team delivered a new technology environment in just weeks – an undertaking that would have taken many months under normal circumstances.”

“Given the ongoing nature and complexity of the data review, it is likely to take several months of continued analysis before enough information will be available to identify and notify impacted customers and individuals."

Over the course of the past two months, UnitedHealth Group has slowly filled in the blanks on the many lingering questions around the ransomware attack. Most recently, Change Healthcare determined that the attackers gained access to some protected health information and personally identifiable information “which could cover a substantial proportion of people in America.” Witty in his testimony said that it will likely take several more months of investigation to fully understand what data was exfiltrated and who has been impacted.

“Given the ongoing nature and complexity of the data review, it is likely to take several months of continued analysis before enough information will be available to identify and notify impacted customers and individuals, partly because the files containing that data were compromised in the cyberattack,” according to Witty’s testimony. “Our teams, along with leading external industry experts, continue to monitor the internet and dark web to determine if data has been published.”

One aspect that will likely be discussed further in the Wednesday hearing is the security implications behind the sheer number of hospitals, healthcare providers and patients that rely on Change Healthcare overall. The attack disrupted many of Change Healthcare’s operations, but because the company handles data, payments and claims processing for a huge chunk of the U.S. healthcare industry, it also caused massive delays for thousands of providers and pharmacies around the country.

Witty will face more questions about the ransomware attack, and its impact on the wider healthcare sector, during Wednesday’s House Energy and Commerce subcommittee hearing. A letter on April 15 from the House Energy and Commerce subcommittee leaders, including Chair Cathy McMorris Rodgers (R-Wash.), requested more information about the timeline of the attack, how the breach was detected and how impacted healthcare organizations were notified and supported. The subcommittee letter also inquired about Change Healthcare’s security protocols, including whether UnitedHealth modified its cybersecurity incident response, prevention and detection processes after acquiring Change Healthcare in 2022.

“The health care system is rapidly consolidating at virtually every level, creating fewer redundancies and more vulnerability to the entire system if an entity with significant market share at any level of the system is compromised,” according to the letter. “It is important for policymakers to understand the events leading up to, during, and after the Change Healthcare cyberattack.”

]]>
<![CDATA[DHS Releases AI Security Guidance for Critical Infrastructure]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/dhs-releases-ai-security-guidelines-for-critical-infrastructure-sector https://duo.com/decipher/dhs-releases-ai-security-guidelines-for-critical-infrastructure-sector

New AI security guidelines from the Department of Homeland Security (DHS) give critical infrastructure operators a better understanding of the top risks associated with AI systems, and how to best approach the unique security issues that could arise from these risks.

The guidelines, released by the DHS on Monday as directed by the Biden administration’s AI executive order last year, look at how critical infrastructure entities can best be secured against the various risks associated with AI. This includes both attacks using AI, such as AI-enabled compromises or social engineering, and attacks targeting AI systems that support critical infrastructure, such as adversarial manipulation of AI algorithms. The report also takes into account a significant AI risk category: potential failures in the design or implementation of AI that could lead to malfunctions in critical infrastructure operations.

“AI can present transformative solutions for U.S. critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyber attacks,” said Secretary of Homeland Security Alejandro N. Mayorkas in a statement on Monday. "Our Department is taking steps to identify and mitigate those threats."

The guidance consists of a four-phase mitigation strategy, which builds on the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. The four parts include a governance component, directing critical infrastructure organizations to prioritize safety and security outcomes when it comes to AI risk management; a mapping piece for entities to better understand the risks behind AI; a measurement aspect for organizations to develop systems that can assess and track AI risks; and a management phase urging organizations to implement risk management controls for AI systems.

The DHS’s guidelines this week give some clarity to CISOs and security teams navigating how best to approach potential issues that could crop up as AI systems are deployed in their environments. With the surging popularity of generative AI in particular, several government agencies and private sector companies over the past year have closely studied the best ways to mitigate various AI-associated threats. Still, the guidelines from the DHS and other government entities are not mandatory requirements. Experts in the security industry have called for regulation, and have also pointed to a significant security challenge for AI: many AI systems are built on large language models (LLMs) that carry inherent risks of their own, such as the potential for poisoned training data or opaque model architectures. The DHS in its guidance did say that AI vendors should take on certain mitigation responsibilities, and that critical infrastructure organizations need to understand where dependencies on AI vendors exist in their environments.

“In many cases, AI vendors will also play a major role in ensuring the safe and secure use of AI systems for critical infrastructure,” according to the DHS guidance. “Certain guidelines apply both to critical infrastructure owners and operators as well as AI vendors. Critical infrastructure owners and operators should understand where these dependencies on AI vendors exist and work to share and delineate mitigation responsibilities accordingly.”

The DHS report is one of many mandates ordered by the White House’s AI executive order in October. The executive order, which attempted to set the stage for developing and deploying what it calls “responsible AI,” also asked the DHS to create an AI safety and security board to look at how the AI standards developed by NIST could be applied to the critical infrastructure sectors, the potential risks that crop up from the use of AI in critical infrastructure sectors, and how AI could be used by the critical infrastructure community to improve security and incident response.

The DHS on Friday officially launched that board, which includes 22 representatives from a range of sectors, including OpenAI, Nvidia, Cisco, Delta Air Lines and Humane Intelligence. In the months since the executive order, the DHS has also published an AI roadmap detailing its current and future uses of AI and has implemented various pilot projects to test AI technology.

]]>
<![CDATA[Decipher Podcast: Source Code 4/26]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/decipher-podcast-source-code-4-26 https://duo.com/decipher/decipher-podcast-source-code-4-26

]]>
<![CDATA[Cactus Ransomware Group Targets Qlik Sense Servers]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/cactus-ransomware-group-targets-qlik-sense-servers https://duo.com/decipher/cactus-ransomware-group-targets-qlik-sense-servers

In an ongoing campaign that began in November, actors associated with the Cactus ransomware group are exploiting three vulnerabilities in the Qlik Sense data visualization platform to deploy ransomware, and researchers warn that there are thousands of vulnerable instances online at the moment.

The first indications of the activity emerged in November, when researchers observed attackers targeting the Qlik Sense vulnerabilities (CVE-2023-41265, CVE-2023-41266, and CVE-2023-48365) in sporadic attacks. Qlik had released patches for the bugs in August after researchers with Praetorian disclosed them to the vendor. Three months later, the Cactus ransomware attacks began, and they all followed a similar pattern, from intrusion to deployment of post-exploitation tools to deployment of the ransomware itself.

“Following exploitation of Qlik Sense installations, the observed execution chain was consistent between all intrusions identified and involves the Qlik Sense Scheduler service (Scheduler.exe) spawning uncommon processes. The threat actors leveraged PowerShell and the Background Intelligent Transfer Service (BITS) to download additional tools to establish persistence and ensure remote control,” an analysis by Arctic Wolf from November says.

Among the tools the actors downloaded were ManageEngine UEMS, AnyDesk, and PuTTY Link. The attackers also disabled some security applications, changed admin passwords on compromised systems, and set up an RDP tunnel, which they used for lateral movement. Researchers say the attackers also are feeding false information about their intrusions to victims in an effort to confuse them.

“Since November 2023, the Cactus ransomware group has been actively targeting vulnerable Qlik Sense servers. These attacks are not just about exploiting software vulnerabilities; they also involve a psychological component where Cactus misleads its victims with fabricated stories about the breach. This likely is part of their strategy to obscure their actual method of entry, thus complicating mitigation and response efforts for the affected organizations,” Willem Zeeman and Yun Zheng Hu of Fox IT said in a new analysis of the Cactus ransomware campaign.

Based on a scan from April 17, the Fox IT researchers identified more than 3,100 Qlik Sense servers that are vulnerable to the exploits used by the Cactus ransomware actors. The largest number of vulnerable servers are in the United States.

Cactus is a relatively young ransomware group, having emerged in early 2023. The group typically has exploited bugs in VPN appliances, along with the Qlik Sense servers, to gain initial access to a network. The highest profile intrusion on the group’s scorecard is an attack on Schneider Electric in January.

Organizations running potentially vulnerable Qlik Sense instances can check for the presence of two font files, qle.ttf and qle.woff, as indications of compromise. The attackers use those files, which are not part of the default installation of the server, to store command output.

“When the indicator of compromise artefact is present on a remote Qlik Sense server, it can imply various scenarios. Firstly, it may suggest that remote code execution was carried out on the server, followed by subsequent patching to address the vulnerability (if the server is not vulnerable anymore). Alternatively, its presence could signify a leftover artefact from a previous security incident or unauthorised access,” the Fox IT analysis says.
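For defenders who want to automate that check, the indicator-of-compromise lookup can be sketched as a simple recursive file search. This is a minimal illustration, not a vendor tool; the install root below is a placeholder, since actual Qlik Sense paths vary by deployment:

```python
from pathlib import Path

# Hypothetical install root; adjust to the Qlik Sense
# installation directory on the server being examined.
QLIK_ROOT = Path(r"C:\Program Files\Qlik")

# Font files named as IoCs in the Fox IT analysis; they are
# not part of a default Qlik Sense installation.
IOC_FILES = {"qle.ttf", "qle.woff"}

def find_iocs(root: Path) -> list[Path]:
    """Recursively search `root` for the IoC font files."""
    if not root.exists():
        return []
    return [p for p in root.rglob("*") if p.name.lower() in IOC_FILES]

if __name__ == "__main__":
    for hit in find_iocs(QLIK_ROOT):
        print(f"Possible IoC: {hit}")
```

As the Fox IT analysis cautions, a hit warrants forensic review rather than proving an active compromise, since the artifact may be left over from a past, already-remediated incident.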

]]>
<![CDATA[NSA Advisory Sheds Light on Securely Deploying AI Systems]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/nsa-advisory-sheds-light-on-securely-deploying-ai-systems https://duo.com/decipher/nsa-advisory-sheds-light-on-securely-deploying-ai-systems

A recent advisory from the NSA highlighted the ways that operators of national security systems and Defense Industrial Base companies can best securely deploy AI systems that have been designed by third parties.

Last week’s guidance, which comes as companies continue to weigh potential security risks inherent either in AI systems themselves or in how they are deployed, specifically gave recommendations around securely operating AI in the environment and continuously monitoring AI systems for vulnerabilities. The advisory marked the first set of guidelines from the Artificial Intelligence Security Center, which was established by the NSA in September in order to help detect and counter AI flaws, develop and promote AI best practices and drive collaboration across the industry on AI.

“The rapid adoption, deployment, and use of AI capabilities can make them highly valuable targets for malicious cyber actors,” according to the NSA’s cybersecurity guidance, released jointly with a number of other Five Eyes agencies, including the National Cyber Security Centre and the Australian Signals Directorate. “Actors, who have historically used data theft of sensitive information and intellectual property to advance their interests, may seek to co-opt deployed AI systems and apply them to malicious ends.”

With organizations typically deploying AI systems within their existing infrastructure, the NSA said that security best practices and requirements also apply to AI systems. Cybersecurity gaps might arise if teams outside of IT are deploying the systems, and the NSA recommended that companies make sure that the person accountable for AI system security is also responsible for the organization’s cybersecurity in general. If organizations outside of IT are operating an AI system, they should work with IT to make sure the system is “within the organization’s risk level” overall. Organizations should also require AI system developers to provide a threat model for their system, which outlines potential threats and mitigations for those threats.

The question of data security and privacy for AI is critical. Companies implementing AI systems should map out all data sources that the organization will use in AI model training, including the list of data sources for models trained by others, though such lists typically aren’t publicly available. Additionally, security teams should apply existing best practices - like encrypting data at rest, implementing strong authentication mechanisms and ensuring the use of MFA - in the AI deployment environment.

“Do not run models right away in the enterprise environment,” according to the NSA. “Carefully inspect models, especially imported pre-trained models, inside a secure development zone prior to considering them for tuning, training, and deployment. Use organization approved AI-specific scanners, if and when available, for the detection of potential malicious code to assure model validity before deployment.”

The NSA also outlined steps that organizations should take after the initial implementation of AI in order to continuously make sure that data running through the system is secure, including testing the AI model for accuracy and for potential flaws after modifications have been made, evaluating and securing the supply chain for external AI models and data and securing potentially exposed APIs. Metin Kortak, CISO at Rhymetec, said that cybersecurity measures around actively monitoring model behavior are particularly significant because “AI can be unpredictable.”

“Prior to deploying AI systems, companies need to acknowledge and tackle data privacy and security concerns,” said Kortak. “AI systems inherently handle extensive datasets, encompassing sensitive personal and organizational data, rendering them enticing targets for cyber threats.”

]]>
<![CDATA[Ransomware Task Force: We Need to Disrupt Operations at Scale]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/ransomware-task-force-we-need-to-disrupt-operations-at-scale https://duo.com/decipher/ransomware-task-force-we-need-to-disrupt-operations-at-scale

While recent coordinated law enforcement efforts have been successful in temporarily knocking down ransomware groups like LockBit and BlackCat, a new report highlighted how the industry as a whole needs to scale disruption efforts against ransomware in order to see effective, long-term impacts.

The report was released Wednesday by the Institute for Security and Technology’s Ransomware Task Force (RTF), a coalition of more than 60 industry, government and law enforcement experts that made 48 recommendations in 2021 aimed at targeting the ransomware threat ecosystem. Though 24 of these 48 recommendations have seen significant progress, the remaining half have still not been fully implemented, and the RTF pinpointed areas where these measures could use further investment and resource allocations from governments, industry and civil society.

It's important to note that law enforcement agencies have carried out varying types of disruptive measures against ransomware groups over the last year, including efforts to target infrastructure, seize backend servers and take down darknet sites, as seen in the Hive and BlackCat disruptions. But the RTF report said more work is needed beyond these efforts: while they have temporarily disrupted ransomware operations, they don't fully eliminate the issue. The effectiveness of these operations is difficult to measure, for instance, and the threat actors behind the groups have in some cases been able to rebuild their infrastructure or reassemble under new names.

“The purpose of disruptions is to throw as much sand in the gears as possible,” said Taylor Grossman, deputy director for digital security at the Institute for Security and Technology, in a video interview with Decipher. “The disruptions we’re seeing are affecting bottom lines. [Ransomware groups are] still active, which is a problem, and they’re still able to reform... so that’s where I think it’s about prioritization and resource allocation, making sure that governments have the manpower and financial resources to throw more people at this problem, to start to disrupt as much as possible.”

The RTF said that in order to better combat ransomware groups, government agencies need to work more closely with industry partners in order to “increase the costs associated with the ransomware profit model.” Part of that partnership should involve more clarity around lawful defensive measures that the private sector can take against ransomware groups, in order to help assuage concerns about legal liability.

“Providing clearer information about how and when companies can protect themselves without fearing later legal repercussions will increase the likelihood that they do so and enhance the defense of the entire ecosystem,” according to the report.

The report also pointed to increased information sharing as another critical piece for ransomware disruption. While cyber incident sharing measures - like CIRCIA and the SEC’s cyber rules - are coming together, the RTF said the government should also create more incentives for voluntary sharing in other areas that touch the ransomware ecosystem. For instance, more information sharing between cryptocurrency entities and law enforcement could lead to valuable insights about cryptocurrency accounts or transactions associated with ransomware actors.

The disruption of ransomware is complex, in part because it involves several stakeholders across the industry - including law enforcement and cybersecurity government agencies, private sector organizations, security researchers and cryptocurrency firms. At the same time, the ransomware threat landscape continues to evolve. A report released by Chainalysis in February recorded $1.1 billion in ransomware payments in 2023, a significant increase from the $567 million reported in 2022 and the highest total the firm has ever observed.

With all of these different moving pieces, the RTF called for an overhaul in some of the processes that entities are using to fight ransomware. The U.S. government should rethink how it incentivizes companies to adopt security measures outside of merely providing guidance for them, for instance, and do more to draw attention to the worst ransomware offenders. There should also be more “reciprocal sharing” of information in the partnerships formed around mitigating ransomware, the report said.

“Achieving progress on the remaining 24 RTF recommendations will help address the ransomware threat, and the U.S. and other governments worldwide will need to continue to act going forward,” according to the report. “At the same time, they should work toward driving adoption of secure-by-design and default across the ecosystem.”

]]>
<![CDATA[Defusing the Threat of Compromised Credentials]]> bnahorne@cisco.com (Ben Nahorney) https://duo.com/decipher/defusing-the-threat-of-compromised-credentials https://duo.com/decipher/defusing-the-threat-of-compromised-credentials

Let’s say that, during the middle of a busy day, you receive what looks like a work-related email with a QR code. The email claims to come from a coworker, requesting your help in reviewing a document. You scan the QR code with your phone and it takes you to what looks like a Microsoft 365 sign-in page. You enter your credentials; however, nothing seems to load.

Not thinking much of it on such a busy day, you continue to go about your work. A couple of minutes later, a notification buzzes your phone. Before you can pick it up, another notification comes. Then another, and another after that.

Wondering what’s going on, you grab the phone to find a series of multi-factor authentication (MFA) notifications. You had just attempted to log in to Microsoft 365; maybe there was a delay in receiving the MFA notification? You approve one and return to the Microsoft 365 page. The page still hasn’t loaded, so you get back to work and resolve to check it later.

This is very similar to an attack that Cisco Talos Intelligence discusses in their latest Talos Incident Response (IR) Quarterly Report. In this case the Microsoft 365 sign-in page was fake, set up by threat actors. These attackers used compromised credentials to repeatedly attempt to sign in to the company’s real Microsoft 365 page, triggering the series of MFA notifications—an attack technique known as MFA exhaustion. In the end, some employees who were targeted approved the MFA requests and the attackers gained access to these accounts.

More than the annoyance of changing your password

While the use of QR codes is a relatively recent development in phishing, attacks like the one described by Talos have been around for years. Most phishing attacks employ similar social engineering techniques to trick users into turning over their credentials. Phishing regularly ranks among the top means of gaining initial access in the Talos Incident Response Quarterly Report.

Attackers hammering MFA-protected accounts is also a concerning development in the identity threat landscape. But sadly, most successful credential compromise attacks occur with accounts that don’t have MFA enabled.

According to this quarter’s Talos IR report, using compromised credentials on valid accounts was one of two top initial access vectors. This aligns with findings from Verizon’s 2023 Data Breach Investigations Report, where the use of compromised credentials was the top first-stage attack (initial access) in 44.7% of breaches.

The silver lining is that this appears to be improving. Early last year, research published by Oort, now a part of Cisco, found that 40% of accounts in the average company had weak or no MFA in the second half of 2022. Updated telemetry from February 2024 shows that this number has dropped significantly, to 15%. The change has a lot to do with a wider understanding of identity protection, but also with increased awareness driven by an uptick in attacks targeting accounts that rely on base credentials alone for protection.

How credentials are compromised

Phishing, while one of the most popular methods, isn’t the only way that attackers gather compromised credentials. Attackers also mount brute-force and password-spraying attacks, deploy keyloggers, and dump credentials from compromised systems.

These are just a few of the techniques that threat actors use to gather credentials. For a more elaborate explanation, Talos recently published an excellent breakdown of how credentials are stolen and used by threat actors that is worth taking a look at.

Not all credentials are created equal

Why might an attacker, who has already gained access to a computer, attempt to gain new credentials? Simply put, not all credentials are created equal.

While an attacker can gain a foothold in a network using an ordinary user account, it’s unlikely they’ll be able to further their attacks due to limited permissions. It’s like having a key that unlocks one door, where what you’re really after is the skeleton key that unlocks all the doors.

That skeleton key would be a high-level access account such as an administrator or system user. Targeting administrators makes sense because their elevated privileges allow an attacker more control of a system. And target them they do. According to Cisco’s telemetry, administrator accounts see three times as many failed logins as a regular user account.

Another resource threat actors target is credentials for accounts that are no longer in use. These dormant accounts tend to be legacy accounts for older systems, accounts for former users that have not been cleared from the directory, or temporary accounts that are no longer needed. Sometimes an account falls into more than one of these categories, and some even retain administrative privileges.

Dormant accounts are an often-overlooked security issue. According to Cisco’s telemetry, 39% of the total identities within the average organization have had no activity within the last 30 days. This is a 60% increase from 2022.

Guest accounts are an account type that repeatedly gets overlooked. While a convenient option for temporary, restricted access, these often password-free accounts are frequently left enabled long after they are needed.

And their use is increasing. In February 2024, almost 11% of identities examined were guest accounts, representing a 233% jump from the 3% reported in 2022. While we can only speculate, it is possible that cloud adoption and remote work contributed to this rise, as enterprises used temporary accounts to stage new services and applications or enable remote workloads in the short term. The use of temporary accounts is understandable, but if they’re forgotten or ignored, these shortcuts represent a serious risk.

Reducing the impact of compromised credentials

It goes without saying that protecting credentials from being compromised and abused is important. However, eradicating this threat is challenging.

One of the best ways to defend against these attacks is by using MFA. Simply confirming that a user is who they say they are—by checking on another device or communication form—can go a long way towards preventing compromised credentials from being used.

However, it isn’t a silver bullet. There are a few ways that threat actors can sidestep MFA. Some MFA forms, such as those that use SMS, can be manipulated by threat actors. In these cases—frequently referred to as Adversary in the Middle (AitM) attacks—the attacker intercepts the MFA SMS, either through social engineering or by compromising the mobile device. The attacker can then input the MFA SMS when prompted and gain access to the targeted account.

The good news here is that there has been a drop in the use of SMS as a second factor. In 2022, 20% of logins leveraged SMS-based authentication. As of February 2024, this number has declined 66%, to just 6.6% of authentications. That is a tremendous change, and a positive one at that. In addition to AitM attacks, SIM swapping attacks have all but rendered SMS-based authentication checks useless.

This is backed up by the 2024 Duo Trusted Access Report, in which the use of SMS texts and phone calls as a second factor dropped to 4.9% of authentications, compared to 22% in 2022.

Going passwordless

If you really want to reduce your reliance on passwords when confirming credentials, passwordless authentication is another option. Passwordless authentication is a group of identity verification methods that don’t rely on passwords at all. Biometrics, security keys, and passcodes from authenticator apps can all be used for passwordless authentication.

Based on the numbers, passwordless is the new trend. In 2022, phishing resistant authentication methods such as passwordless accounted for less than 2% of logins. However, in 2024, Cisco’s telemetry shows this number is climbing, currently representing 20%, or nearly a 10x increase. This is great news, but still highlights a critical point—80% are still not using strong MFA.

Protecting MFA from threat actors

Recall the MFA exhaustion attack Talos described in its latest IR report. Talos’ example highlights that there are still select circumstances where attackers can get past MFA. A distracted or frustrated user may simply accept a notification just to silence the application.

In this case, user education can go a long way towards preventing these attacks from succeeding, but there is more that can be done. It’s also important to have protections in place to detect unusual identity patterns based on behavior.

To illustrate, consider the point at which the threat actor begins hammering the login with the compromised credentials. Monitoring that can recognize anomalies such as MFA floods, as well as the moment the user gets annoyed and accepts the request, can quickly surface potentially malicious activity.
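One minimal sketch of such monitoring, with illustrative thresholds rather than vendor defaults, is a sliding-window counter over MFA push events per user:

```python
from collections import deque

# Illustrative values, not standards: flag a user who receives
# more than MAX_PUSHES MFA prompts within WINDOW_SECONDS.
WINDOW_SECONDS = 300
MAX_PUSHES = 5

class MfaFloodDetector:
    """Tracks MFA push timestamps per user and flags bursts."""

    def __init__(self):
        self._events: dict[str, deque] = {}  # user -> timestamps (seconds)

    def record(self, user: str, ts: float) -> bool:
        """Record an MFA push for `user` at time `ts`.

        Returns True when the number of pushes inside the sliding
        window exceeds the threshold, i.e. a likely MFA flood.
        """
        q = self._events.setdefault(user, deque())
        q.append(ts)
        # Drop events that have aged out of the window.
        while q and ts - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_PUSHES
```

In practice an alert like this would feed an identity-monitoring pipeline; the key design choice is keeping state per user so a handful of legitimate retries never trips the threshold, while a burst of pushes does.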

It’s also important to keep an eye out for other anomalies, such as a user signing in from an unmanaged device in a location that would be impossible for them to reach—say Peculiar, Missouri—given they had just logged in an hour ago from Normal, Illinois.
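The "impossible travel" check described above can be sketched with a great-circle distance and a maximum plausible travel speed. The speed threshold here is an assumption for illustration (roughly airliner cruising speed), not a standard value:

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

MAX_SPEED_KMH = 900.0  # assumed plausibility bound, roughly airliner speed

def impossible_travel(prev: tuple, curr: tuple) -> bool:
    """prev/curr are (lat, lon, unix_seconds) for two successive logins.

    Returns True when reaching the new location would require
    traveling faster than MAX_SPEED_KMH.
    """
    dist_km = haversine_km(prev[0], prev[1], curr[0], curr[1])
    hours = max((curr[2] - prev[2]) / 3600.0, 1e-6)  # avoid division by zero
    return dist_km / hours > MAX_SPEED_KMH
```

Real identity platforms layer IP geolocation accuracy, VPN egress points and device trust on top of this raw check, since naive geo-velocity alone produces false positives.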

User identities have become one of the most active battlegrounds in the threat landscape. While having MFA in place is critical, as is implementing trusted access policies, it’s just as important to monitor logins for strange and anomalous behavior. Doing so can provide a leg up against attackers increasingly focused on gaining access through compromised credentials.

Ben Nahorney is a threat intelligence analyst at Cisco.

]]>
<![CDATA[Decipher Podcast: Lachlan McGill and Euan Moore]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/decipher-podcast-lachlan-mcgill-and-euan-moore https://duo.com/decipher/decipher-podcast-lachlan-mcgill-and-euan-moore

]]>
<![CDATA[Change Healthcare Says Attackers Accessed PHI and PII]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/change-healthcare-says-attackers-accessed-phi-and-pii https://duo.com/decipher/change-healthcare-says-attackers-accessed-phi-and-pii

Two months after the initial disclosure of the ransomware attack on its network, Change Healthcare officials said the company has now determined that the attackers gained access to some protected health information and personally identifiable information “which could cover a substantial proportion of people in America”.

The company has been investigating the intrusion since it was discovered in late February, but most of the available information about the incident focused on the ransomware deployment and the effects on the company’s systems and its downstream partners and customers. The attack crippled much of Change Healthcare’s operations and, because the company handles data, transaction clearing, and payment and claims processing for a huge chunk of the U.S. healthcare industry, caused massive delays for thousands of providers and pharmacies around the country. On Tuesday, Change Healthcare said that its ongoing investigation has now found that the attackers were able to steal files that included both PHI and PII.

“Based on initial targeted data sampling to date, the company has found files containing protected health information (PHI) or personally identifiable information (PII), which could cover a substantial proportion of people in America. To date, the company has not seen evidence of exfiltration of materials such as doctors’ charts or full medical histories among the data,” the statement says.

“The company, along with leading external industry experts, continues to monitor the internet and dark web to determine if data has been published. There were 22 screenshots, allegedly from exfiltrated files, some containing PHI and PII, posted for about a week on the dark web by a malicious threat actor. No further publication of PHI or PII has occurred at this time.”

The attack on Change Healthcare has developed into one of the more potentially damaging and far-reaching such incidents in recent years. Given the depth of the company’s integration into the healthcare ecosystem in the U.S., the effects from the ransomware attack may still be unfolding in the coming months. Many practices, pharmacies, hospitals, and other organizations have experienced significant delays for both claims and payment processing as a result of the incident, and some pharmacy chains were unable to fill prescriptions for some time, as well.

The attack has been attributed to the ALPHV/BlackCat ransomware group, which had been the target of a disruption effort by law enforcement just two months before the Change Healthcare intrusion was discovered. The company said it paid a ransom to the attackers, reportedly $22 million. But some of the stolen data was published online anyway.

Federal regulators and legislators have followed the details of the breach closely, and Andrew Witty, the CEO of Change Healthcare’s parent company, UnitedHealth Group, will testify in a hearing before the House Energy and Commerce Committee on May 1 to discuss the effects of the attack on providers and patients.

“We know this attack has caused concern and been disruptive for consumers and providers and we are committed to doing everything possible to help and provide support to anyone who may need it,” said Witty.

]]>
<![CDATA[Nation-State Actors Exploited Ivanti Bugs to Hit MITRE]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/nation-state-actors-exploited-ivanti-bugs-to-hit-mitre https://duo.com/decipher/nation-state-actors-exploited-ivanti-bugs-to-hit-mitre

The MITRE Corporation on Friday disclosed a breach impacting one of its collaborative networks used for research, development and prototyping. MITRE said that in January, attackers exploited two known Ivanti Connect Secure vulnerabilities in order to deploy sophisticated backdoors and harvest credentials.

MITRE, a nonprofit organization that manages federally funded research and development centers supporting government agencies in cybersecurity, defense, homeland security and more, is only the latest high-profile organization to be compromised via the vulnerabilities in Ivanti’s Connect Secure and Policy Secure gateways - the U.S. Cybersecurity and Infrastructure Security Agency (CISA) was another recent target, according to officials. MITRE said that, in its specific incident, the nation-state actor behind the attack first performed reconnaissance before exploiting the Ivanti flaws in one of its VPNs and bypassing its multi-factor authentication measures via session hijacking.

“In April 2024 we confirmed that MITRE was subject to an intrusion into one of our research and prototyping networks," said Lex Crumpton and Charles Clancy with MITRE in a Friday post. "MITRE’s security team immediately began an investigation, cut off all known access to the threat actor, and brought in third-party Digital Forensics Incident Response teams to perform their own independent analysis alongside our in-house experts."

After initial access, attackers were able to move laterally and use a compromised administrator account to dig into the network’s VMware infrastructure. Though MITRE had followed best practices and instructions from Ivanti and the U.S. government to upgrade, replace and harden its Ivanti devices, it did not detect the lateral movement into the VMware infrastructure, said Crumpton and Clancy.

During the course of the incident response, MITRE took various measures, including isolating impacted systems and segments of the network to curb the scope of the attack, improving its monitoring of impacted systems and migrating to new systems.

“We launched multiple streams of forensic analysis to identify the extent of the compromise, the techniques employed by the adversaries, and whether the attack was limited to the research and prototyping network or had spread further,” according to Crumpton and Clancy. “While this process is still underway, and we have a lot more to uncover about how the adversary interacted with our systems, trusted log aggregation was perhaps the most important component to enabling our forensic investigation.”

MITRE said the investigation is ongoing and it is still working to determine the scope of the information potentially compromised. The impacted unclassified MITRE research and development system, called the Networked Experimentation, Research, and Virtualization Environment (NERVE), was launched in 2015 as a way to help researchers better collaborate with external labs and partners. MITRE said there is currently no indication that its core enterprise network or partner systems have been impacted.

The incident shows the continued fallout from the Ivanti flaws disclosed in January (CVE-2023-46805 and CVE-2024-21887), which have been widely exploited by threat actors. The flaws also prompted an emergency directive from the U.S. government ordering federal agencies to temporarily disconnect all instances of the appliances from agency networks, perform a factory reset, and then rebuild and upgrade them.

]]>
<![CDATA[Russian Group Forest Blizzard Deploying GooseEgg Tool to Exploit CVE-2022-38028]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/russian-group-forest-blizzard-deploying-gooseegg-tool-to-exploit-cve-2022-38028 https://duo.com/decipher/russian-group-forest-blizzard-deploying-gooseegg-tool-to-exploit-cve-2022-38028

Microsoft researchers have discovered a notorious Russian state-backed threat actor using a previously undocumented tool called GooseEgg to steal credentials and escalate privileges after gaining initial access to a target device.

The tool has been in use for at least four years, and possibly longer, and it exploits a Windows Print Spooler vulnerability (CVE-2022-38028) that wasn't publicly disclosed until 2022. Actors from a threat group that Microsoft calls Forest Blizzard, more commonly known as Fancy Bear or APT28, have deployed GooseEgg in attacks on a variety of targets in Europe and North America in recent years. The tool is relatively simple but effective, with the ability to launch other applications and facilitate lateral movement.

“Microsoft has observed Forest Blizzard using GooseEgg as part of post-compromise activities against targets including Ukrainian, Western European, and North American government, non-governmental, education, and transportation sector organizations. While a simple launcher application, GooseEgg is capable of spawning other applications specified at the command line with elevated permissions, allowing threat actors to support any follow-on objectives such as remote code execution, installing a backdoor, and moving laterally through compromised networks,” Microsoft said in a new analysis.

Forest Blizzard is a threat group associated with Russia’s GRU intelligence service and has been active for nearly 15 years. The group generally targets organizations of strategic value for Russia’s foreign policy objectives, including government agencies, technology providers, and higher education institutions.

“Microsoft has observed that, after obtaining access to a target device, Forest Blizzard uses GooseEgg to elevate privileges within the environment. GooseEgg is typically deployed with a batch script, which we have observed using the name execute.bat and doit.bat. This batch script writes the file servtask.bat, which contains commands for saving off/compressing registry hives. The batch script invokes the paired GooseEgg executable and sets up persistence as a scheduled task designed to run servtask.bat,” Microsoft said in its analysis.

“The GooseEgg binary—which has included but is not limited to the file names justice.exe and DefragmentSrv.exe—takes one of four commands, each with different run paths. While the binary appears to launch a trivial given command, in fact the binary does this in a unique and sophisticated manner, likely to help conceal the activity.”

The first command does little on its own, but the second and third commands launch the actual exploit for the CVE-2022-38028 vulnerability, and the fourth checks whether the exploit worked. Microsoft researchers said GooseEgg creates a new directory, and when the Print Spooler service tries to load a specific driver, it is instead redirected to that attacker-created directory, which contains a function the attacker has modified.

“This results in the auxiliary DLL wayzgoose.dll launching in the context of the PrintSpooler service with SYSTEM permissions. wayzgoose.dll is a basic launcher application capable of spawning other applications specified at the command line with SYSTEM-level permissions, enabling threat actors to perform other malicious activities such as installing a backdoor, moving laterally through compromised networks, and remotely executing code,” the Microsoft analysis says.

]]>
<![CDATA[Decipher Podcast: Source Code 4/19]]> lindsey@decipher.sc (Lindsey O’Donnell-Welch) https://duo.com/decipher/decipher-podcast-source-code-4-19 https://duo.com/decipher/decipher-podcast-source-code-4-19

]]>