
‘Uncharted Territory:’ Companies Devise AI Security Policies


Businesses have been preparing and implementing security policies for the use of generative AI in the workplace, but many executives say they still don’t fully understand how AI works or what its impacts are, according to a new Splunk report.

Splunk’s State of Security 2024 report, released Tuesday and based on a survey of 1,650 security executives across nine countries, highlights how security teams are weighing generative AI security and data privacy policies in their organizations. Forty-four percent of respondents listed AI as a top security initiative for 2024, ahead of cloud security (35 percent) and security analytics (20 percent). Most businesses said their employees are actively using generative AI, leaving CISOs to work out how best to prepare for the risks that could emerge as AI systems are used in their environments.

Despite that high adoption rate, around one-third of respondents said their organizations have not implemented corporate security policies spelling out best practices for generative AI. At the same time, while AI policies require a deep understanding of the technology and its potential impacts across the business, 65 percent of respondents acknowledged that they lack education around AI.

“Many individuals lack a foundational understanding of what AI is, how it works, and its potential applications and limitations,” said Mick Baccio, global security advisor at Splunk SURGe. “I’m not implying mastery of machine learning algorithms, neural networks, and other AI techniques is a necessity, but a basic understanding of the systems being used. Like a car, it’s not necessary to know the details of a combustion engine, but a fundamental understanding of how it operates is critical.”

While having a company policy in place does not eliminate security issues, these policies can help executives think through the security risks associated with AI and the corresponding mitigations. For instance, corporate policies should clarify what types of data can be used in public generative AI platforms and specify the kinds of sensitive or private data that should be kept out of them. AI security policies should also address areas like access control, training and awareness, and regulatory compliance, said Baccio.

“I think there needs to be a basic understanding of the potential vulnerabilities of AI systems, such as adversarial attacks, data poisoning, and model inversion attacks,” said Baccio.
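To make one of those threats concrete, the sketch below shows a label-flipping form of data poisoning against a toy classifier. The synthetic dataset, the scikit-learn model and the 10 percent poisoning rate are illustrative assumptions for this article, not anything described in the Splunk report.

```python
# Minimal, hypothetical sketch of label-flipping data poisoning.
# The dataset, model and 10% poisoning rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker with write access to the training data flips 10% of the labels.
poisoned_labels = y_train.copy()
flip_idx = rng.choice(len(poisoned_labels), size=len(poisoned_labels) // 10, replace=False)
poisoned_labels[flip_idx] = 1 - poisoned_labels[flip_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

print("accuracy with clean training labels:", round(clean_model.score(X_test, y_test), 3))
print("accuracy after label-flip poisoning:", round(poisoned_model.score(X_test, y_test), 3))
```

Even this crude attack can degrade a model without ever touching the inference pipeline, which is why training data handling tends to fall within the scope of the policies described above.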

Perceptions of how generative AI will assist both security defenders and threat actors are also changing. Both businesses and government agencies have been trying to better understand the security issues around the development and deployment of AI systems. A new set of DHS guidelines for critical infrastructure entities, released this week, for example, outlined security measures for organizations facing attacks that use AI, attacks targeting the AI systems that support critical infrastructure, and failures in the design or implementation of AI that could lead to malfunctions.

Forty-three percent of respondents thought that generative AI would help defenders, pointing to threat intelligence, security risk identification, threat detection and security data summarization as the top AI cybersecurity use cases. Half of the respondents said they are in the middle of developing a formal plan for using generative AI for cybersecurity and for addressing potential AI security risks, though those plans aren’t yet complete or agreed upon.

However, 45 percent of respondents said generative AI will instead help attackers, and 77 percent believe that it “expands the attack surface to a concerning degree.” Respondents expect generative AI to make existing attacks more effective and to increase their volume. Data leakage is also a major concern for organizations.

“Not all AI threats originate from outside sources; 77% of respondents agree that more data leakage will accompany increased use of generative AI,” according to the report. “However, only 49% are actively prioritizing data leakage prevention - possibly because there aren’t many solutions yet that control the flow of data in and out of generative AI tools.”
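For illustration only, the sketch below shows the kind of control the report describes as scarce: a pre-submission filter that redacts obviously sensitive values before a prompt leaves the organization for a public generative AI tool. The patterns and the redact_prompt() helper are hypothetical assumptions, far simpler than a real data loss prevention product.

```python
# Hypothetical sketch of a pre-submission redaction filter for prompts bound
# for a public generative AI service. Patterns and helper names are assumptions.
import re

# Simple patterns for data an AI usage policy might bar from public tools.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[A-Z0-9]{16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace policy-restricted values with placeholders before submission."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the ticket from jane.doe@example.com about key AKIAABCDEFGHIJKLMNOP."
    print(redact_prompt(raw))
```

In practice such filters only catch well-formed identifiers; the harder problem the report points to is controlling unstructured business data flowing into these tools.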