
Automation and Efficiency Are Shaping the New Era of Endpoint Management

The complexities of patching, an essential process for maintaining the security and performance of software, often go unnoticed until a critical issue arises. Recent incidents, such as the global Windows outage caused by a faulty CrowdStrike update, have brought the topic to the forefront and prompted organizations to re-evaluate their patching strategies.

Patching is a daily challenge for IT operations teams, who must balance the urgency of applying updates to mitigate security risks with the need to ensure stability and continuity in business operations. This challenge is compounded by the growing volume and severity of vulnerabilities, making it more important than ever for organizations to adopt efficient, reliable patching processes.


Anne Baker, CMO, Adaptiva

In this Q&A, we speak with Anne Baker, the Chief Marketing Officer of Adaptiva, a leader in autonomous endpoint management and patching that supports some of the world's largest enterprises in navigating these challenges.

Why is patching so challenging for IT teams?

AB: Patching is an incredibly manual and time-consuming process in the way companies have traditionally executed it for decades. The Ponemon Institute reports that 65% of IT pros spend from 10 to more than 25 hours per week deploying patches.

First, there are thousands of applications, operating systems, and drivers that every organization has to maintain and patch continuously. We have over 1,600 applications in our third-party patching catalog alone, for example. And each of those products has dozens of patches released every month that must be downloaded, configured, and rolled out. It is an endless cycle of patching to stay compliant and reduce risk.

Second, the vulnerabilities and risks continue to grow. There were 26,447 vulnerabilities disclosed last year, and 75% of them were exploited within 19 days of disclosure. Additionally, 25% of high-risk vulnerabilities were exploited the same day they were published. This growth in exposure is putting tremendous pressure on IT, and most organizations following a manual approach to patching simply don't have the resources to keep up with the pace of the threats.

Third, the manual patching process many organizations use is filled with delays and roadblocks that leave critical vulnerabilities unpatched. IT administrators are tasked with the thankless job of locating the correct patch, researching it to determine whether it is a priority, configuring it, rolling it out, testing it, troubleshooting it, and performing dozens of other tasks before it can be safely deployed company-wide. Then, when a breach happens because of an unpatched device, the blame falls squarely on IT for not moving fast enough. Today's patching process is a no-win situation for IT teams, and it needs to be completely re-engineered and automated if organizations want to stay ahead of the risk.

In your opinion, how should an organization deploy a patch, like the one CrowdStrike delivered, across its environment?

AB: There is no doubt that a phased deployment approach could have significantly minimized the negative impact of what happened last week. Leading organizations typically use deployment rings to roll out software updates and patches to a limited set of test devices before distributing them company-wide. CrowdStrike even recommended an N-2 or N-1 deployment approach for updates, encouraging customers to run a release one or two versions behind the latest in wide-scale production. The challenge was that many organizations chose not to implement that recommendation for routine threat intelligence updates and used it only for major updates of the CrowdStrike Falcon agent.
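As a rough illustration of how deployment rings and N-1/N-2 version pinning fit together, a minimal sketch might look like the following. All names, structures, and version strings here are hypothetical, not any vendor's actual API:

```python
# Illustrative sketch of ring-based rollout with N-1/N-2 version pinning.
# Names and structures are hypothetical, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class Ring:
    name: str
    devices: list[str]
    versions_behind: int  # 0 = latest, 1 = N-1, 2 = N-2

def version_for_ring(ring: Ring, releases: list[str]) -> str:
    """Pick the release a ring should run, given releases sorted newest-first."""
    index = min(ring.versions_behind, len(releases) - 1)
    return releases[index]

releases = ["7.16", "7.15", "7.14"]  # newest first (hypothetical versions)
rings = [
    Ring("canary", ["test-01", "test-02"], versions_behind=0),
    Ring("early-adopters", ["hq-lab-group"], versions_behind=1),
    Ring("production", ["all-remaining"], versions_behind=2),
]

for ring in rings:
    print(ring.name, "->", version_for_ring(ring, releases))
```

Only the small canary ring ever sees the newest release; production stays pinned two versions back until earlier rings have proven the update out.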

At Adaptiva, we created the concept of deployment waves, which goes beyond the simple, static nature of a phased deployment mechanism to a more dynamic approach that adjusts automatically to the changing landscape of an IT environment. As machines are updated, replaced, or reconfigured, the waves adapt in real time, ensuring that each deployment phase is always optimally configured. That level of automation speeds patching and shifts rollouts from a labor-intensive exercise to one where IT defines the patching strategy and oversees its automated execution.
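To make the general idea concrete (this is a simplified sketch of the concept, not Adaptiva's implementation; every field name is invented), waves could be recomputed from a live device inventory each time it changes:

```python
# Illustrative sketch: recomputing deployment waves from a live device
# inventory, so wave membership adapts as machines are added, retired,
# or reconfigured. Purely hypothetical; not Adaptiva's implementation.
def assign_waves(inventory: list[dict], wave_fractions: list[float]) -> list[list[str]]:
    """Split devices into waves, lowest-risk devices first."""
    # Sort so non-critical, lower-risk devices patch earliest.
    ordered = sorted(inventory, key=lambda d: (d["business_critical"], d["risk_score"]))
    waves, start = [], 0
    for fraction in wave_fractions:
        count = max(1, round(len(ordered) * fraction))
        waves.append([d["hostname"] for d in ordered[start:start + count]])
        start += count
    waves[-1].extend(d["hostname"] for d in ordered[start:])  # remainder
    return waves

inventory = [
    {"hostname": "lab-01", "business_critical": 0, "risk_score": 0.2},
    {"hostname": "hq-042", "business_critical": 0, "risk_score": 0.5},
    {"hostname": "pos-17", "business_critical": 1, "risk_score": 0.4},
    {"hostname": "db-prod", "business_critical": 1, "risk_score": 0.9},
]
print(assign_waves(inventory, [0.25, 0.5]))  # re-run whenever inventory changes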

Should organizations implement stricter approval processes for patches and software updates before a patch is rolled out widely across all of the company's devices?

AB: Risk tolerance varies widely, and every organization is different when it comes to patching approvals. Putting a multi-step approval process in place before rolling out a patch is one way an organization can ensure there are checks and balances. For example, one of our Adaptiva customers is a large automotive manufacturer with many production plants at different locations, each on its own schedule. Patching can only be done outside production hours so that manufacturing is not interrupted. In this case, plant managers must be automatically notified when a patch update is available, and they must approve it before the patch is deployed in their plant to avoid causing production delays. This is the type of custom, multi-tiered approval approach that enables organizations to manage exposure risk while maintaining productivity.
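As a minimal sketch of such an approval gate, loosely modeled on the plant example above (all function and field names are hypothetical):

```python
# Illustrative approval gate: deploy only with manager sign-off and
# outside production hours. Hypothetical names, not a product API.
from datetime import datetime, time

def can_deploy(plant: dict, approved_by_manager: bool, now: datetime) -> bool:
    """Deploy only if the plant manager approved and we're outside production hours."""
    if not approved_by_manager:
        return False
    prod_start, prod_end = plant["production_window"]
    in_production = prod_start <= now.time() <= prod_end
    return not in_production

plant = {"name": "Plant A", "production_window": (time(6, 0), time(22, 0))}
# 23:30 is outside the 06:00-22:00 production window, so this prints True.
print(can_deploy(plant, approved_by_manager=True, now=datetime(2024, 8, 1, 23, 30)))
```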

Organizations should look for patching and software update products that let them set their own approval workflows and timelines, offering greater control over these types of deployments. This also enables security and IT teams to work more closely together on critical patches, like those that address zero-day threats.

Why was it so hard to recover quickly from this patch? And what can organizations do to improve recovery times from bad patch rollouts in the future?

AB: One of the challenges of the recent outage was that it triggered a crash before many recovery tools could load to help roll out the fix. As a technology community, we need to acknowledge that patches break things and mistakes happen. The situation CrowdStrike found itself in won't be the last time we see a buggy software update or faulty configuration rolled out at global scale by a vendor. That doesn't mean we stop patching or patch slower. It means the patch management tools companies use need to provide flexible controls to pause, cancel, or roll back a patch when inevitable issues arise. Having the ability to limit and control the impact of problematic patches is key to recovering quickly.
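One way to picture those controls is an orchestrator loop that re-checks operator intent between waves, so a bad patch can be paused or unwound mid-rollout. This is a hypothetical sketch, not a specific product's API:

```python
# Illustrative rollout controls: the orchestrator reads a shared control
# signal between waves so a problematic patch can be paused or rolled back
# mid-deployment. Hypothetical structure, not a specific product's API.
from enum import Enum

class Control(Enum):
    PROCEED = "proceed"
    PAUSE = "pause"
    ROLLBACK = "rollback"

def run_rollout(waves: list[list[str]], get_control, deploy, roll_back) -> None:
    completed: list[str] = []
    for wave in waves:
        signal = get_control()  # re-read operator intent before every wave
        if signal is Control.PAUSE:
            print("Rollout paused; later waves not started.")
            return
        if signal is Control.ROLLBACK:
            for host in completed:
                roll_back(host)  # undo the patch on everything already done
            print("Rollout rolled back.")
            return
        for host in wave:
            deploy(host)
            completed.append(host)

# Example wiring with stub functions:
run_rollout(
    waves=[["test-01"], ["hq-042", "db-prod"]],
    get_control=lambda: Control.PROCEED,
    deploy=lambda h: print("patched", h),
    roll_back=lambda h: print("rolled back", h),
)
```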

Specifically when it comes to patching, what can IT and security leaders learn from this situation?

AB: The first lesson is not to react based on fear. I'm seeing some conversations about turning away from automation, for fear of outage disruptions, toward slower manual updates because leaders assume that is the safer approach. Respectfully, this is the wrong instinct. Manual patching guarantees that vulnerabilities go unaddressed, which makes them far more likely to be exploited by cybercriminals and nation-state actors. The concern is valid, but manual, reactive patching is not the answer.

Organizations need automated tools that roll out patches in a timely manner, but with controls. The attack surface organizations have to protect far exceeds what employees can handle manually, and that's why automation is a must. A proactive approach means IT and security teams leverage automated platforms with shared dashboards, so both teams have visibility into an organization's vulnerabilities. The right hand needs to know what the left hand is doing. IT and security teams can work in true lockstep when they see the same critical details, such as CVSS scores, asset risk, and patching status, in real time and communicate their objectives and expectations. Real-time, collaborative access to data lets teams track how quickly vulnerabilities are being eliminated and which applications are currently being patched. This reduces the lag time and security gaps associated with communication delays, point-in-time scans, and outdated spreadsheet reporting.
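A toy example of the kind of shared record that gives both teams the same view, with CVSS score, asset risk, and live patching status in one place (field names are invented for illustration):

```python
# Illustrative shared record for IT and security: severity, asset risk,
# and patching status side by side. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class VulnStatus:
    cve_id: str
    cvss: float          # severity of the vulnerability
    asset_risk: float    # 0-1, how critical the affected asset is
    patch_status: str    # "unpatched" | "in_progress" | "patched"

def remediation_queue(records: list[VulnStatus]) -> list[VulnStatus]:
    """Unresolved items first, ordered by combined severity and asset risk."""
    pending = [r for r in records if r.patch_status != "patched"]
    return sorted(pending, key=lambda r: r.cvss * r.asset_risk, reverse=True)

records = [
    VulnStatus("CVE-2024-0001", cvss=9.8, asset_risk=0.9, patch_status="unpatched"),
    VulnStatus("CVE-2024-0002", cvss=7.5, asset_risk=0.3, patch_status="in_progress"),
    VulnStatus("CVE-2024-0003", cvss=5.0, asset_risk=0.8, patch_status="patched"),
]
for r in remediation_queue(records):
    print(r.cve_id, r.patch_status)
```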

Convincing leaders to embrace a proactive approach by investing in the right tools and people can be a hurdle, but relying on manual or reactive vulnerability remediation all but guarantees a cyberattack.

What advice do you have for IT and security teams moving forward?

AB: This experience has been jarring, but it's also an opportunity for a reset. I encourage leaders of organizations of all sizes and industries to take a critical look at their approach to patching. Every organization should ask how it can better balance the risks and rewards of patching aggressively. An audit can identify strengths, weaknesses, and opportunities to address issues before they escalate. I also hope people consider whether they have a crisis plan in place. The reality is that organizations will always be exposed to some level of risk. But when leaders build a culture around security, prioritize the strength of their teams, and invest in the appropriate technologies and processes, their organizations' security posture will be significantly improved, and they will be better able to prevent or withstand the disruptions that come their way.

Thanks for your insights, Anne.
