This guest blog was contributed by Gil Geron, CEO and Co-Founder, Orca Security
In the rapidly evolving technology landscape, artificial intelligence stands as a beacon of innovation for industries across the board. This AI revolution is not just approaching; it's here, and accelerating at an unprecedented pace. New estimates project that the global AI market will reach a staggering $184 billion by the end of 2024, a remarkable 35% surge from the previous year that underscores the frenzied race among organizations to harness its transformative power.
However, as AI becomes increasingly integrated with cloud services, new findings reveal a dangerous underbelly of security vulnerabilities. In the mad dash to innovate, most organizations prioritize speed of delivery over security. This imbalance, compounded by AI's relative infancy, has given rise to a complex web of security risks that demands urgent attention.
The stakes couldn't be higher. By understanding and addressing security head-on, businesses can unlock AI’s full potential while safeguarding their digital assets and maintaining trust in this transformative technology.
Keeping Pace with Innovation
The breakneck speed of AI development is leaving security teams struggling to keep up. Cloud providers are constantly unveiling new AI models, packages, and resources—and these innovations too often prioritize ease of use over security.
Due to its nascent stage, AI security lacks comprehensive resources and seasoned experts. This shouldn't come as a surprise: Google Vertex AI became generally available in May 2021, Azure OpenAI Service launched in preview in November 2021, and Amazon Bedrock became generally available in September 2023. Organizations often find themselves blazing their own trails, developing solutions without the benefit of established best practices or external guidance.
Maintaining pace with these advancements requires ongoing research, development, and cutting-edge security protocols. However, this has not deterred organizations from widely embracing AI. Orca Security's research shows that more than half of organizations (56%) are using AI to build custom applications, and these deployments carry broad exposures: API keys, excessive access permissions, misconfigurations, and more. This number is concerning considering the relative nascency of the technology and the substantial capital investments required. It signals both a long-term commitment to the technology and conditions that demand enhanced AI security.
Taming Shadow AI
The challenge doesn't end there. Lurking in the digital shadows is the growing threat of “shadow AI” - unauthorized AI usage that flies under the radar of security teams. These blind spots increase an organization’s attack surface and risk profile, creating vulnerabilities that malicious actors could exploit.
Shadow AI can manifest in various forms, from employees using unsanctioned AI tools for productivity to developers integrating unapproved AI models into applications. A new survey by the US National Cybersecurity Alliance (NCA) finds that over a third (38%) of employees share sensitive work information with AI tools without their employer's permission.
The risks associated with shadow AI are multifaceted. They encompass potential data breaches, compliance violations, and the inadvertent introduction of biases or errors into critical business processes. The gravity of the issue is underscored by the latest Cost of a Data Breach Report from IBM and the Ponemon Institute, which reveals that 40% of breaches compromised data stored across multiple environments, while more than one-third involved shadow data.
To address the challenges posed by shadow AI, organizations must take proactive steps to secure their AI environments. It starts with gaining visibility into every AI project in the environment, reviewing the default settings of AI resources, and restricting them where appropriate. Limiting privileges is crucial to prevent lateral movement and other threats. Managing vulnerabilities in AI packages is essential, as 62% of organizations have deployed AI packages with at least one CVE. Securing data through more restrictive settings, such as self-managed encryption keys and encryption at rest, is vital - especially considering that 98% of organizations using Google Vertex AI have not enabled encryption at rest for their self-managed encryption keys. Finally, isolating networks by limiting access to AI assets and precisely defining allowed network traffic can significantly reduce risk.
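To make the data-protection step concrete, here is a minimal sketch of enabling a customer-managed encryption key for Vertex AI resources with the google-cloud-aiplatform Python SDK. The project, region, bucket, and Cloud KMS key names are illustrative placeholders rather than recommendations, and your key-management setup will likely differ.

```python
# Minimal sketch: attach a customer-managed Cloud KMS key (CMEK) to Vertex AI
# resources so training data and models are encrypted at rest with a key you
# control. Project, region, bucket, and key names are illustrative placeholders.
from google.cloud import aiplatform

PROJECT = "my-project"      # placeholder
REGION = "us-central1"      # placeholder
KMS_KEY = (
    f"projects/{PROJECT}/locations/{REGION}"
    "/keyRings/ai-keyring/cryptoKeys/vertex-cmek"  # pre-created KMS key
)

# Setting encryption_spec_key_name here makes the customer-managed key the
# default for resources created through this SDK session (datasets, models,
# endpoints, training jobs).
aiplatform.init(
    project=PROJECT,
    location=REGION,
    encryption_spec_key_name=KMS_KEY,
)

# Example: a dataset created now inherits the customer-managed key.
dataset = aiplatform.TabularDataset.create(
    display_name="claims-training-data",
    gcs_source=["gs://my-bucket/claims.csv"],  # placeholder bucket
)
print(dataset.resource_name)
```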
Navigating Regulatory Mazes
Adding to this complexity is the ever-shifting regulatory landscape. Navigating evolving compliance requirements requires a delicate balance between fostering innovation, ensuring security, and adhering to emerging legal standards.
The recent enactment of the EU's AI Act is just the opening salvo in what promises to be an evolving legal environment. Achieving multi-cloud compliance in this context requires full visibility into AI models, resources, and usage - a task made exponentially more difficult by the presence of shadow AI.
AI resources, much like cloud assets, are ephemeral, spinning up and down at a scale and frequency that traditional manual processes cannot effectively manage. This dynamic environment demands automation across the entire compliance lifecycle. Several critical aspects include the following (a brief sketch follows the list):
continuous inventorying of AI assets, associated risks, and ongoing activities.
systematic mapping of these resources and risks to relevant compliance frameworks and controls.
continual monitoring and updating of compliance status for all controls, while facilitating the swift resolution of any non-compliance issues as they arise.
regular and accurate reporting of compliance progress to key stakeholders.
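As a rough illustration of the first, second, and fourth items, the sketch below inventories SageMaker models and endpoints with boto3 and maps each asset to a single illustrative control (a required data-classification tag) before printing a short status report. The control ID and tag key are hypothetical, and a real pipeline would cover every provider and framework in scope.

```python
# Minimal sketch: inventory AI assets, map them to a compliance control, and
# report status. The control ID and required tag below are hypothetical.
import boto3

REQUIRED_TAG = "data-classification"   # hypothetical tag mandated by policy
CONTROL_ID = "CTRL-AI-07"              # hypothetical framework control

def list_ai_assets(region="us-east-1"):
    """Inventory SageMaker models and endpoints as (name, arn) pairs."""
    sm = boto3.client("sagemaker", region_name=region)
    assets = []
    for page in sm.get_paginator("list_models").paginate():
        assets += [(m["ModelName"], m["ModelArn"]) for m in page["Models"]]
    for page in sm.get_paginator("list_endpoints").paginate():
        assets += [(e["EndpointName"], e["EndpointArn"]) for e in page["Endpoints"]]
    return assets

def check_control(arn, region="us-east-1"):
    """Control passes when the asset carries the required classification tag."""
    sm = boto3.client("sagemaker", region_name=region)
    tags = sm.list_tags(ResourceArn=arn).get("Tags", [])
    return any(t["Key"] == REQUIRED_TAG for t in tags)

if __name__ == "__main__":
    results = [(name, check_control(arn)) for name, arn in list_ai_assets()]
    failing = [name for name, ok in results if not ok]
    print(f"{CONTROL_ID}: {len(results) - len(failing)}/{len(results)} assets compliant")
    for name in failing:
        print(f"  remediate: tag {name} with '{REQUIRED_TAG}'")
```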
Battling Resource Misconfigurations
In the rush to deploy new AI services, organizations often overlook the critical step of properly configuring security settings. Orca's State of AI Security Report reveals alarming statistics: 98% of organizations using Amazon SageMaker have a notebook instance with root access enabled; 77% of organizations using Amazon SageMaker have not configured session authentication (IMDSv2) for their notebook instances; and 45% of Amazon SageMaker buckets are still using the non-randomized default bucket name, making them potentially discoverable.
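To make these checks actionable, the snippet below is a minimal audit sketch using boto3 that flags the three issues above: notebook instances with root access enabled, instances not enforcing IMDSv2, and buckets that still carry the predictable default name. Region handling and reporting are simplified for brevity.

```python
# Minimal sketch of an audit for the SageMaker misconfigurations described
# above: root access left enabled, IMDSv2 not enforced, and buckets that keep
# the predictable default name.
import re
import boto3

def audit_notebook_instances(region="us-east-1"):
    """Flag notebook instances with root access enabled or IMDSv2 not enforced."""
    sm = boto3.client("sagemaker", region_name=region)
    for page in sm.get_paginator("list_notebook_instances").paginate():
        for nb in page["NotebookInstances"]:
            detail = sm.describe_notebook_instance(
                NotebookInstanceName=nb["NotebookInstanceName"]
            )
            imds = detail.get("InstanceMetadataServiceConfiguration", {})
            findings = []
            if detail.get("RootAccess") == "Enabled":
                findings.append("root access enabled")
            if imds.get("MinimumInstanceMetadataServiceVersion") != "2":
                findings.append("IMDSv2 not enforced")
            if findings:
                print(f"{nb['NotebookInstanceName']}: {', '.join(findings)}")

def audit_default_buckets():
    """Flag S3 buckets still using the guessable sagemaker-<region>-<account> name."""
    account = boto3.client("sts").get_caller_identity()["Account"]
    pattern = re.compile(rf"^sagemaker-[a-z0-9-]+-{account}$")
    for bucket in boto3.client("s3").list_buckets()["Buckets"]:
        if pattern.match(bucket["Name"]):
            print(f"{bucket['Name']}: default SageMaker bucket name in use")

if __name__ == "__main__":
    audit_notebook_instances()
    audit_default_buckets()
```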
These misconfigurations are like ticking time bombs, waiting to be exploited by cybercriminals. Meanwhile, the pressure to maintain compliance with an ever-growing list of regulations adds another layer of complexity to the AI security puzzle. The report found that 98% of organizations have not configured Azure OpenAI accounts with private endpoints, increasing compliance risks.
To combat these issues, organizations should implement strict configuration management processes for AI services. Regularly auditing and reviewing AI resource configurations is essential.
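On the Azure side, a comparable periodic review can flag Azure OpenAI accounts that remain reachable over the public network. The sketch below assumes the azure-identity and azure-mgmt-cognitiveservices packages and a placeholder subscription ID; a real review would also weigh firewall rules and approved exceptions before raising a finding.

```python
# Minimal sketch: flag Azure OpenAI accounts that allow public network access
# or lack private endpoint connections. The subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

client = CognitiveServicesManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for account in client.accounts.list():
    if account.kind != "OpenAI":
        continue  # only review Azure OpenAI accounts
    props = account.properties
    public = (props.public_network_access or "Enabled") == "Enabled"
    has_private_endpoint = bool(props.private_endpoint_connections)
    if public or not has_private_endpoint:
        print(f"{account.name}: public access={'on' if public else 'off'}, "
              f"private endpoint={'yes' if has_private_endpoint else 'no'}")
```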
Forging a Path Forward
Despite these daunting challenges, there's hope on the horizon. Innovative solutions like AI Security Posture Management (AI-SPM), integrated with Cloud Native Application Protection Platforms (CNAPP), are emerging as powerful allies in the fight for AI security. These tools offer comprehensive visibility into all AI deployments, including shadow AI, advanced risk detection and prioritization capabilities, and automated compliance processes to streamline regulatory adherence.
As we continue to push the boundaries of what's possible with AI in cloud services, it's crucial to remember that security isn't just a checkbox—it's a fundamental pillar of innovation. By addressing these challenges head-on, we can realize AI's promise while preserving the trust this transformative technology depends on.
-------------------------------------
About the author
Gil Geron is CEO & Co-founder of Orca Security. Gil has more than 20 years of experience leading and delivering cybersecurity products. Before becoming CEO, Gil served as chief product officer from Orca's inception. He's passionate about customer satisfaction and has worked closely with customers to ensure they are able to thrive securely in the cloud. Gil is committed to providing seamless cybersecurity solutions without compromising on efficiency. Prior to co-founding Orca Security, Gil directed a large team of cybersecurity professionals at Check Point Software Technologies.