Harnessing AI for corporate cybersecurity

News Room

Cybersecurity is in the midst of a fresh arms race, and the weapon of choice in this new era is AI.

AI is a classic double-edged sword: a powerful shield for defenders and a potent new tool for those with malicious intent. Navigating this complex battleground requires a steady hand and a deep understanding of both the technology and the people who would abuse it.

To get a view from the front lines, AI News caught up with Rachel James, Principal AI ML Threat Intelligence Engineer at global biopharmaceutical company AbbVie.

“In addition to the built-in AI augmentation that has been vendor-provided in our current tools, we also use LLM analysis on our detections, observations, correlations and associated rules,” James explains.

James and her team are using large language models to sift through a mountain of security alerts, looking for patterns, spotting duplicates, and finding dangerous gaps in their defences before an attacker can.

“We use this to determine similarity, duplication and provide gap analysis,” she adds, noting that the next step is to weave in even more external threat data. “We are looking to enhance this with the integration of threat intelligence in our next phase.”
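The article doesn't describe AbbVie's pipeline in detail, but the duplication-and-gap idea can be illustrated with a toy sketch: score every pair of detection rules for similarity and flag pairs above a threshold as likely duplicates. Here a simple bag-of-words cosine similarity stands in for the LLM analysis James describes; the rule names and texts are invented for illustration.

```python
from collections import Counter
from math import sqrt

def vectorise(text: str) -> Counter:
    # Toy bag-of-words stand-in; a real pipeline would use LLM embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical detection rules.
rules = {
    "R1": "alert on powershell encoded command execution",
    "R2": "detect encoded powershell command execution alert",
    "R3": "alert on outbound dns tunnelling volume anomaly",
}

# Flag likely-duplicate rule pairs above a similarity threshold.
THRESHOLD = 0.8
vecs = {name: vectorise(text) for name, text in rules.items()}
names = sorted(rules)
pairs = [
    (a, b, cosine(vecs[a], vecs[b]))
    for i, a in enumerate(names)
    for b in names[i + 1:]
]
duplicates = [(a, b) for a, b, s in pairs if s >= THRESHOLD]
print(duplicates)  # R1 and R2 describe the same behaviour: [('R1', 'R2')]
```

The same pairwise scores, inverted, hint at coverage gaps: a threat category with no rule scoring highly against it is a candidate blind spot.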

Central to this operation is a specialised threat intelligence platform called OpenCTI, which helps them build a unified picture of threats from a sea of digital noise.

AI is the engine that makes this cybersecurity effort possible, taking vast quantities of jumbled, unstructured text and neatly organising it into a standard format known as STIX. The grand vision, James says, is to use language models to connect this core intelligence with all other areas of their security operation, from vulnerability management to third-party risk.
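STIX objects are ultimately just structured JSON, so the "unstructured text in, standard format out" step can be sketched with the standard library alone. The `extracted` dict below is a hypothetical example of fields an LLM might pull from a free-text report; production pipelines would more typically use the OASIS `stix2` Python library rather than building objects by hand.

```python
import json
import uuid
from datetime import datetime, timezone

def to_stix_indicator(name: str, pattern: str) -> dict:
    # Build a minimal STIX 2.1 indicator object with its required properties.
    now = datetime.now(timezone.utc).isoformat(timespec="milliseconds").replace("+00:00", "Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": name,
        "pattern": pattern,
        "pattern_type": "stix",
        "valid_from": now,
    }

# Hypothetical fields an LLM might have extracted from an unstructured report.
extracted = {
    "name": "Malicious payload hash",
    "pattern": "[file:hashes.'SHA-256' = 'aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f']",
}

indicator = to_stix_indicator(**extracted)
print(json.dumps(indicator, indent=2))
```

Once in STIX form, the object can be ingested by a platform like OpenCTI and correlated against the rest of the intelligence picture.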

Taking advantage of this power, however, comes with a healthy dose of caution. As a key contributor to a major industry initiative, James is acutely aware of the pitfalls.

“I would be remiss if I didn’t mention the work of a wonderful group of folks I am a part of – the ‘OWASP Top 10 for GenAI’ as a foundational way of understanding vulnerabilities that GenAI can introduce,” she says.

Beyond specific vulnerabilities, James points to three fundamental trade-offs that business leaders must confront:

  1. The risk that comes with the creative but often unpredictable nature of generative AI.
  2. The loss of transparency in how AI reaches its conclusions, a problem that only grows as the models become more complex.
  3. The danger of poorly judging the real return on investment for any AI project, where the hype can easily lead to overestimating the benefits or underestimating the effort required in such a fast-moving field.

To build a better cybersecurity posture in the AI era, you have to understand your attacker. This is where James’ deep expertise comes into play.

“This is actually my particular expertise – I have a cyber threat intelligence background and have conducted and documented extensive research into threat actors’ interest, use, and development of AI,” she notes.

James actively tracks adversary chatter and tool development through open-source channels and her own automated collections from the dark web, sharing her findings on her cybershujin GitHub. Her work also involves getting her own hands dirty.

“As the lead for the Prompt Injection entry for OWASP, and co-author of the Guide to Red Teaming GenAI, I also spend time developing adversarial input techniques myself and maintain a network of experts also in this field,” James adds.

So, what does this all mean for the future of the industry? For James, the path forward is clear. She points to a fascinating parallel she discovered years ago: “The cyber threat intelligence lifecycle is almost identical to the data science lifecycle foundational to AI ML systems.”

This alignment is a massive opportunity. “Without a doubt, in terms of the datasets we can operate with, defenders have a unique chance to capitalise on the power of intelligence data sharing and AI,” she asserts.

Her final message offers both encouragement and a warning for her peers in the cybersecurity world: “Data science and AI will be a part of every cybersecurity professional’s life moving forward, embrace it.”

Rachel James will be sharing her insights at this year’s AI & Big Data Expo Europe in Amsterdam on 24-25 September 2025. Be sure to check out her day two presentation on ‘From Principle to Practice – Embedding AI Ethics at Scale’.

See also: Google Cloud unveils AI ally for security teams

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
