
How the Future of Cybersecurity and the Role of AI Are Evolving Rapidly

We sat down with Aaron Turner, Vectra AI Advisor and IANS Faculty, to explore the evolving landscape of cybersecurity and AI. Aaron shared insights into the limitations of multi-factor authentication (MFA) and why it can't be the sole defense against hackers. He also delved into the implications of AI legislation and how it may impact both the private and public sectors.


Aaron Turner, Vectra AI Advisor & IANS Faculty

Welcome, Aaron. Can you please tell us about yourself and your role at Vectra AI?


I currently serve as an independent technology advisor to Vectra AI, but I have quite a unique and non-traditional professional background. In 1994, I was living in Mexico studying to become a Spanish teacher when I decided to pursue law instead, but I quickly saw that the legal profession was not what I was cut out for. I had been doing white-hat hacking on nights and weekends to pay my way through law school, so I dropped out to focus on penetration testing full time. A few years later, I was invited to join an internet security team at Microsoft, where I spent eight years before leaving to work for the US Department of Energy on the intersection of cybersecurity and critical infrastructure. It was there that I was fortunate enough to work with Congress to lay the groundwork for cybersecurity investment at the government level. After leaving the government, I founded a few companies, including Siriux Security, an identity and SaaS posture management company that was acquired by Vectra AI in 2022. Today, I focus on strategic cybersecurity consulting and serve on the IANS Research faculty, where I have the privilege of training and mentoring professionals in the infosec community.


Since its inception, Vectra AI has invested in the promise of AI for fighting cyber threats. How has the market shifted in the last 12 months since the onset of the generative AI boom, and how has it influenced and changed attacker behaviors, techniques and methods?


AI and LLMs will undoubtedly have an outsized influence on security operations in the coming years. The upside potential is that integrating AI and LLMs into security tools will make it possible to more successfully defend against attackers, even as attack surfaces grow and attack techniques evolve. However, the downside risks are potentially more harmful, as moderately skilled hackers can use AI to automate complex hacking techniques and methods, reducing the barrier to entry for attackers.


The coming years will likely see a proliferation of careless, hasty integrations of AI models into security products. Companies with no experience in language models are beginning to integrate them into their products to analyze and reason about security incidents, without understanding how those models operate, what data they were trained on, or why LLMs can hallucinate answers to questions they shouldn't be able to answer. While the natural language processing power of LLMs is remarkable, an LLM is not necessarily the best tool for cyber incident analysis, investigation, and prioritization.


What are your thoughts on the current trajectory of AI legislation? How do you believe it might impact the private and public sectors in the US?


The US has some of the most mature legal frameworks in the world when it comes to technology, and I believe that trying to create AI-specific legislation is misguided. Any constraint we place on artificial intelligence will be exactly that: an artificial boundary. Setting strict boundaries could let our adversaries get ahead of us in artificial intelligence, advanced cognitive services, and large language models, because they face no legislation that stops them. We need to treat this technology as the next cyber battlefield.


The U.S. has attracted entrepreneurial technologists from all over the globe thanks to its relatively light regulatory environment and faster go-to-market opportunities. Introducing special oversight for AI could undermine the fast-paced innovation we need to stay ahead. In conversations with a wide range of technology leaders, I hear a consistent fear: any stringent intervention by the U.S. government in AI affairs could inadvertently hand our country's adversaries an advantage. The best approach to AI regulation will rely on existing technology laws, focused on intellectual property rights and individuals' right to privacy, and will strike a balance between advocating for transparency and promoting continued innovation rather than erecting artificial guardrails.


You mentioned advocating for legislation that requires transparency about the use of AI technology. Could you elaborate on why transparency is crucial and how it can be implemented effectively?


Many years ago, I was fortunate to work with a U.S. law enforcement official, Steve Murphy, who was made famous by the Netflix series Narcos. He taught me an important principle: every time a well-meaning government official creates a new law targeting criminal activity, it takes time for the system to catch up, and the targets of those laws eventually find ways around them anyway. The same principle holds with AI. If we attempt to create an entirely new legal framework for AI, we may be too late to make a difference in how the technology is delivered.


We can regulate AI development without stifling it by clarifying and strengthening existing legal enforcement frameworks. The US has excelled at developing innovative technologies because our system allows for innovation while holding technology developers accountable along the way. An all-encompassing regulatory framework would be the wrong approach; it could never keep pace with how quickly technology is moving today, and it isn't humanly possible to predict the future innovation paths that different AI platform developers will take.


Transparency plays a significant role in creating trust and holding organizations accountable for honoring existing laws. By fostering transparency, we can better balance security and innovation while ensuring we stay competitive in the evolving threat landscape.


As hackers become more evasive and sophisticated in their infiltration methods, what best practices can you share with organizations on how best to apply AI to stop attacks?


Generative AI lacks specificity and clarity, whereas behavioral AI is trained for specificity and accuracy. Organizations should be thinking about, and investing in, combining these two types of AI. While generative AI can help organizations envision new attacks and reason about the new ways attacks are happening, behavioral AI can look for the artifacts associated with those attacks at a scale and speed that is difficult for a human to match.
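
To make that concrete, here is a minimal, hypothetical sketch of the behavioral side in Python: a detector that baselines each account's own activity and flags statistical outliers. The class name, threshold, and data are illustrative assumptions for this example, not a description of Vectra AI's models.

from collections import defaultdict
from statistics import mean, stdev

class BehavioralBaseline:
    """Flags accounts whose current activity deviates sharply from their own history."""

    def __init__(self, z_threshold: float = 3.0):
        # account -> list of past daily event counts
        self.history = defaultdict(list)
        self.z_threshold = z_threshold

    def observe(self, account: str, daily_count: int) -> None:
        self.history[account].append(daily_count)

    def is_anomalous(self, account: str, daily_count: int) -> bool:
        past = self.history[account]
        if len(past) < 7:  # not enough baseline yet; stay quiet
            return False
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            return daily_count != mu
        return (daily_count - mu) / sigma > self.z_threshold

# Usage: an account that normally logs in ~5 times a day suddenly logs in 60 times.
detector = BehavioralBaseline()
for count in [4, 5, 6, 5, 4, 6, 5]:
    detector.observe("alice@example.com", count)
print(detector.is_anomalous("alice@example.com", 60))  # True

In practice, the generative side would sit upstream of a detector like this, helping analysts enumerate which behaviors are worth baselining in the first place.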


I’ve seen Vectra AI talk a lot about the “spiral of more.” Can you explain what this means and how Vectra AI is helping to solve this market challenge?


The cybersecurity landscape is increasingly plagued by unknown threats, with 72% of security practitioners unaware of their current vulnerabilities. These threats, encompassing cloud-based breaches, account takeovers, and supply chain attacks, have gained prominence in recent years due to the rapid adoption of hybrid cloud services and technologies. As organizations shift more applications, workloads, and data to hybrid cloud infrastructure, security teams must cover more attack surface and face more advanced attackers. This is exacerbated by a cycle of meeting every new challenge with "more": more tools, more rules, and more alerts, resulting in a complex and strained security environment.


Breaking free from this cycle requires two key changes. First, adopting threat detection and response platforms that offer broad attack surface coverage, unification, and simplification. Second, evolving from traditional detection tools that rely on known indicators of compromise (IoCs) to modern AI/ML models that can adapt to the evolving threat landscape. These models empower security teams to think like attackers, identify malicious patterns unique to their environments, and prioritize threats effectively.
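
As a rough illustration of that second shift, compare a static IoC lookup with a toy behavioral score. Both functions below are hypothetical stand-ins written for this example; the hash, event names, and weights are invented, not any vendor's API or threat data.

# Hypothetical contrast: static IoC matching vs. a toy behavioral score.
# The hash, event names, and weights are illustrative, not real threat data.
KNOWN_BAD_HASHES = {"5d41402abc4b2a76b9719d911017c592"}

def ioc_match(file_hash: str) -> bool:
    # Catches only threats someone has already seen and catalogued.
    return file_hash in KNOWN_BAD_HASHES

def behavioral_score(events: list[str]) -> float:
    # Stand-in for a trained model: weights attacker-like behaviors
    # (credential abuse, lateral movement) regardless of the tooling used.
    weights = {"new_admin_grant": 0.5,
               "mass_file_read": 0.25,
               "unusual_geo_login": 0.25}
    return sum(weights.get(event, 0.0) for event in events)

# A never-before-seen payload produces no IoC hit, but its behavior still surfaces:
print(ioc_match("ffffffffffffffffffffffffffffffff"))               # False
print(behavioral_score(["unusual_geo_login", "new_admin_grant"]))  # 0.75

The point of the contrast is that the lookup can only ever catch what has already been catalogued, while a model scoring behaviors can surface and prioritize a payload nobody has seen before.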


At Vectra AI, we combat the unknown by doing more with less. Our approach focuses on collecting and analyzing the right data to enable security teams to understand attacker behavior, reduce noise, and respond to critical threats efficiently. The Vectra AI Platform enables organizations to integrate Vectra AI's public cloud, identity, SaaS, and network signal data with existing endpoint detection and response (EDR) tools to help SOC teams keep pace with attacks.


What’s next for cybersecurity and AI? What do you think we’ll see more of in 2024?


2024 will be a year of continued back-and-forth in the use of AI among attackers and defenders. Look at how rapidly the balance of capability has shifted between platforms like ChatGPT and Google Bard in the last year: OpenAI disrupted the world with the release of its GPT models, and Google had to play catch-up. By the end of this year, Google had brought interesting features to market that, in some cases, make Bard more useful than ChatGPT.


The same sort of pendulum swing will happen between attackers and defenders. We're going to see attackers use LLMs for enhanced spear-phishing attacks; defenders will catch up with behavioral AI that prevents those attacks; then attackers will shift to a different platform and another delivery mechanism for their human-focused attacks. All cybersecurity professionals, from CISOs to the most junior analysts, should expect rapid swings in advantage between attackers and defenders and stay flexible in the tools they use and the degree to which they rely on point-security solutions.


One of the reasons I’m most excited about Vectra’s potential to help in the future is its holistic view of how to consume massive amounts of telemetry from a multitude of sources and then translate it into actionable intelligence.


###

