
Steve Wilson, Exabeam: Rapid, Real Deployments of Gen AI Technologies Coming in Cybersecurity

Steve Wilson, Chief Product Officer at Exabeam, was recently named Cyber Influencer of the Year - Innovation Leader by Enterprise Security Tech. We sat down with him to discuss Gen AI in the cybersecurity landscape, his leadership in the OWASP Top 10 for Large Language Model Applications project, and his upcoming book.


Steve Wilson, Exabeam

How do you see the landscape of AI, cybersecurity, and cloud evolving in the coming years, and what opportunities and challenges do you anticipate?

 

AI has become increasingly important in cybersecurity over the past 10 years.  Technologies such as User and Entity Behavior Analytics (UEBA) brought AI techniques into the heart of the security operations center.  However, that’s only the tip of the iceberg.  The wave of Large Language Models (LLMs) and Generative AI (Gen AI) we saw this past year is just the start.  In 2023, companies began to experiment with Gen AI in a cybersecurity context, but little was put into production.  In 2024 and 2025, we’ll see rapid developments and real deployments of Gen AI technologies.

 

I expect that every security analyst will have an “AI Co-pilot” that works with them on threat detection, investigation, and response (TDIR) for security threats and incidents.  This will dramatically improve the efficiency and scale of a security operations center.  However, that’s only half the story.  The hackers, from script kiddies to nation-states, will also be adopting AI technologies at a furious pace.  Expect to see everything from rapid, AI-assisted attacks on your cloud infrastructure to incredibly realistic phishing attacks on your executives.  The world is changing fast.  You’ll need to invest in AI to keep up.

 

Can you share a challenging moment in your career and how you navigated it? Were there any lessons that others in the industry might find valuable?

 

As someone who’s spent their career building large-scale projects, I’ve learned to fear the term “I’ll try.”  I once faced a situation where a piece of mission-critical infrastructure for which I was responsible was failing at multiple customer sites.  I needed to understand what had gone wrong and how a piece of software with these problems made its way into the hands of major customers.  When I traced it back, it came down to a discussion about schedules and deadlines.  When a business leader asked an engineering leader, “Can you hit this date?” the engineering leader replied, “I’ll try.”

 

Of course, the business leader heard, “Yes, I’ll do it.”  Commitments were made, and plans were set in motion.  The engineering team tried hard, but there were challenges, and the software wasn’t ready by the deadline.  By then it was too late: it shipped to customers in the hope that we’d patch it up later.  Since then, I’ve learned to be specific in communications around topics like deadlines, requirements, and deliverables.  As Yoda once said, “Do or do not. There is no try.”

 

Your leadership in the OWASP Top 10 for Large Language Model Applications project is notable. The drawbacks and security threats around Large Language Models (LLMs) have gained considerable media attention. What motivated your involvement, and what do you find to be the most pressing risk in LLM security?

 

I started the OWASP Top 10 for LLM Applications project in May 2023.  I expected to find a dozen or so like-minded people who wanted to work in this obscure corner of the cybersecurity landscape.  Instead, I had hundreds of volunteers join the expert group in the first month.  It turns out that many people saw the same problems and opportunities I saw and wanted to help.  Also, I think everyone thought the project sounded fun!  Fortunately, we built a great group, and it has been a really fun, rewarding project.

 

LLMs, the technology that sits under OpenAI’s ChatGPT, Google Bard, and Anthropic’s Claude, are the most exciting technology to come along since the invention of the World Wide Web - which is why ChatGPT became the fastest-adopted app in history.  But these technologies come with a dark side.  We see monthly headlines about another security concern with a chatbot or co-pilot.  Whether it’s a prompt-injection attack or a copyright lawsuit, the risks are real.  However, the greatest risk is our overconfidence.  These technologies appear like magic, and that makes people want to trust them.  But as Spider-Man’s Uncle Ben tells us, “With great power there must also come great responsibility.”  Right now, these LLMs aren’t ready for the level of trust we’re giving them.  We must be vigilant with guardrails, limit their agency, and train users to recognize AI hallucinations.
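
As a minimal, hypothetical sketch of what “limiting agency” can look like in practice (the tool names and the “action: argument” format below are purely illustrative, not tied to any specific product), a guardrail can check an LLM-proposed action against an explicit allowlist before anything runs:

# Illustrative guardrail sketch; tool names and message format are hypothetical.
ALLOWED_ACTIONS = {"search_logs", "summarize_alert"}  # explicitly granted, low-risk tools

def run_llm_action(llm_output: str) -> str:
    """Run an LLM-proposed action only if it is on the allowlist; otherwise escalate."""
    action, _, argument = llm_output.partition(":")
    action = action.strip().lower()
    if action not in ALLOWED_ACTIONS:
        # Limit agency: anything outside the allowlist goes to a human analyst.
        return f"Blocked '{action}' -- escalating for human review."
    # Dispatch to a real tool implementation here; the argument is still untrusted input.
    return f"Running '{action}' with argument: {argument.strip()}"

print(run_llm_action("delete_user: jsmith"))         # blocked
print(run_llm_action("search_logs: failed logins"))  # allowed

The same gating pattern extends naturally to filtering model output and requiring human approval for higher-risk steps.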

 

You have an upcoming book, “The Developer's Playbook for Large Language Model Security.” What motivated you to focus on this specific area, and are there any key takeaways readers can expect?

 

My new book, “The Developer's Playbook for Large Language Model Security,” is scheduled for release this year.  As LLMs become more embedded in our digital infrastructure, it will be critical for CISOs and security teams to have guidance on creating robust security measures.

 

By the end of the book, I want readers to be prepared to address the potential drawbacks of AI and LLMs and to have the confidence to move forward and use them securely. The goal is to expand readers' general knowledge of LLMs and prepare developers for the realities of the future.
