CrowdStrike Survey Highlights Security Challenges in AI Adoption

News Room

Do the security benefits of generative AI outweigh the harms? Just 39% of security professionals say the rewards outweigh the risks, according to a new report by CrowdStrike.

In 2024, CrowdStrike surveyed 1,022 security researchers and practitioners from the U.S., APAC, EMEA, and other regions. The findings revealed that cyber professionals are deeply concerned about the challenges associated with AI. While 64% of respondents have either purchased generative AI tools for work or are researching them, adoption remains early-stage: 32% are still exploring the tools, and only 6% are actively using them.

What are security researchers seeking from generative AI?

According to the report:

  • The highest-ranked motivation for adopting generative AI isn’t addressing a skills shortage or meeting leadership mandates — it’s improving the ability to respond to and defend against cyberattacks.
  • AI for general use isn’t necessarily appealing to cybersecurity professionals. Instead, they want generative AI partnered with security expertise.
  • 40% of respondents said the rewards and risks of generative AI are “comparable.” Meanwhile, 39% said the rewards outweigh the risks, and 26% said the risks outweigh the rewards.

“Security teams want to deploy GenAI as part of a platform to get more value from existing tools, elevate the analyst experience, accelerate onboarding and eliminate the complexity of integrating new point solutions,” the report stated.

Measuring ROI has been an ongoing challenge for organizations adopting generative AI products. CrowdStrike found that quantifying ROI was the top economic concern among its respondents, followed by the cost of licensing AI tools and unpredictable or confusing pricing models.

CrowdStrike divided the ways to assess AI ROI into four categories, ranked by importance:

  • Cost optimization from platform consolidation and more efficient security tool use (31%).
  • Reduced security incidents (30%).
  • Less time spent managing security tools (26%).
  • Shorter training cycles and associated costs (13%).

Adding AI to an existing platform rather than purchasing a freestanding AI product could “realize incremental savings associated with broader platform consolidation efforts,” CrowdStrike said.

Could generative AI introduce more security problems than it solves?

Generative AI itself also needs to be secured. CrowdStrike’s survey found that security professionals were most concerned about data exposure to the LLMs behind AI products and attacks launched against generative AI tools.

Other concerns included:

  • A lack of guardrails or controls in generative AI tools.
  • AI hallucinations.
  • Insufficient public-policy regulation of generative AI use.

Nearly all respondents (about nine in 10) said their organizations either have implemented new security policies or plan to develop policies governing generative AI within the next year.

How organizations can leverage AI to protect against cyber threats

Generative AI can be used for brainstorming, research, or analysis, with the understanding that its output must often be double-checked. It can pull data from disparate sources into a single window in various formats, shortening the time it takes to research an incident. Many automated security platforms offer generative AI assistants, such as Microsoft’s Security Copilot.
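
As a rough illustration of that consolidation, the sketch below feeds alerts from several tools into a single model prompt for incident triage. It assumes the OpenAI Python SDK with an API key in the environment; the tool names, model, and prompt wording are illustrative and not drawn from the CrowdStrike report.

```python
# Minimal sketch: consolidate alerts from disparate tools into one LLM
# prompt for incident triage. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; alert sources, model name, and
# prompt wording are illustrative.
import json

from openai import OpenAI

client = OpenAI()

def summarize_incident(alerts: list[dict]) -> str:
    """Ask the model for a short, analyst-readable incident summary."""
    prompt = (
        "You are assisting a SOC analyst. Summarize the following alerts "
        "into a single incident timeline and suggest next steps. Flag "
        "anything you are unsure about rather than guessing.\n\n"
        + json.dumps(alerts, indent=2)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Alerts pulled from different tools (EDR, email gateway, firewall) --
# the kind of disparate data the article describes bringing into one window.
alerts = [
    {"source": "edr", "host": "WS-042", "event": "suspicious PowerShell"},
    {"source": "email", "user": "j.doe", "event": "clicked credential-phishing link"},
    {"source": "firewall", "host": "WS-042", "event": "outbound C2-like beaconing"},
]
print(summarize_incident(alerts))
```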

GenAI can protect against cyber threats via:

  • Threat detection and analysis.
  • Automated incident response.
  • Phishing detection (sketched after this list).
  • Enhanced security analytics.
  • Synthetic data for training.
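
As one concrete example, here is a minimal sketch of the phishing-detection item from the list above, under the same assumptions as the earlier triage example (OpenAI Python SDK, illustrative model name); the JSON verdict format is invented for this sketch.

```python
# Minimal sketch of LLM-assisted phishing detection. Same assumptions as
# the triage example above (OpenAI Python SDK, illustrative model name);
# the JSON verdict format is invented for this sketch.
import json

from openai import OpenAI

client = OpenAI()

def classify_email(subject: str, body: str) -> dict:
    """Return a verdict like {"phishing": true, "reason": "..."}."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                "Classify this email as phishing or benign. Reply with JSON "
                '{"phishing": true|false, "reason": "..."}.\n\n'
                f"Subject: {subject}\n\n{body}"
            ),
        }],
    )
    return json.loads(response.choices[0].message.content)

verdict = classify_email(
    "Urgent: verify your account",
    "Your mailbox is full. Click http://example.com/verify within 24 hours.",
)
print(verdict)
```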

However, organizations must consider safety and privacy controls as part of any generative AI purchase. Doing so can protect sensitive data, comply with regulations, and mitigate risks such as data breaches or misuse. Without proper safeguards, AI tools can expose vulnerabilities, generate harmful outputs, or violate privacy laws, leading to financial, legal, and reputational damage.
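
One such control can be sketched with nothing beyond the Python standard library: redact obvious sensitive values before a prompt ever leaves the organization. The patterns below are illustrative and deliberately simple; a production deployment would rely on a vetted data-loss-prevention tool.

```python
# Minimal sketch of one privacy control: redact obvious sensitive values
# before a prompt reaches an external LLM. The regexes are illustrative
# and deliberately simple; production systems should use a vetted DLP tool.
import re

REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labeled placeholder, e.g. [EMAIL]."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "User j.doe@example.com on 10.0.0.12 reported SSN 123-45-6789."
print(redact(prompt))  # -> "User [EMAIL] on [IPV4] reported SSN [SSN]."
```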
