From legacy to SaaS: Why complexity is the enemy of enterprise security

In this Help Net Security interview, Robert Buljevic, Technology Consultant at Bridge IT, discusses how the coexistence of legacy systems and SaaS applications is changing the way organizations approach security. He explains why finding the right balance between old and new technology is essential for maintaining protection.

As more companies move from traditional on-prem setups to SaaS, how is that changing the way they approach security? Are most organizations handling that shift well, or are they struggling?

Most organizations are dealing with a messy reality: legacy systems combined with SaaS applications. The latter are often driven by business process owners, i.e., departmental leads or “power users”: business needs now set the pace and create demand for ever more applications and technology solutions. In the past, it was IT that dictated the pace of innovation. Nowadays, IT staff are often caught on the back foot, struggling to support the business while maintaining the legacy or “core” systems.

These pressures only add to the complexity and operational challenges. And as we know, complexity is the enemy of security: when resources are limited and IT environments become more complicated, the risk of human error, or even simple oversight, leading to a security incident increases significantly.

The outcome is often a sense of being stuck somewhere in between, with cost pressures mounting and attention being diverted to a “thousand” priorities, widening the security gaps.

Add to that the “sunk cost fallacy”, an entirely self-inflicted IT issue, especially during technology transitions. IT departments continue investing in outdated or inefficient systems simply because they’ve already spent a lot of money, time, or resources on them, even when switching to a different setup would be smarter. You might hear something like “We spent millions building our data center. We can’t just move everything to the cloud now.” Yet continuing to pour resources into maintaining a legacy system can cost more in the long term, while supporting evolving business needs and productivity becomes increasingly difficult within that setup. Other symptoms of the sunk cost fallacy include clinging to custom-built internal tools and renewing licenses for legacy enterprise software that is in reality underused and expensive.

Finally, organizations face external pressures, as threat actors concentrate their efforts on two main areas. First, social engineering targeting employees who lack strong multifactor authentication. Second, on-premises infrastructure, where they weaponize wormable vulnerabilities (no user interaction required) in internet-exposed systems: next-gen firewalls, VPN gateways and remote access tools, or self-hosted public-facing applications in general (think of the recent SharePoint on-prem vulnerability, or the myriad of vulnerabilities affecting firewall and VPN vendors).

IT security budgets are still somewhat misaligned with these trends. Money is being spent on network and endpoint detection and response capabilities. While detecting intruders is important, this focus overlooks the factors that let intruders succeed in the first place: the key predictors of attacker success are self-hosted infrastructure and reliance on weak or phishable authentication (passwords only).

ZTNA is often cited as a best practice. Why is it so difficult to apply ZTNA principles in traditional enterprise environments, and what alternatives are feasible in the short term?

Here where I work (the Central and Eastern European region), it’s safe to bet most organizations’ core infrastructure is based on three key on-prem technologies: Microsoft Active Directory (AD) for authenticating and managing networked assets; a VPN and firewall for remote access into the organization’s network; and some kind of self-managed IaaS, usually based on VMware. Of course, this is an extremely simplified outline, but it holds true across a wide variety of organizations.

This mix of technologies is actually what threat actors have specialized in breaching: from automating discovery, to infiltration and privilege escalation, aided by wormable vulnerabilities and weak authentication. In fact, ransomware operators thrive on these systems, and you will hardly find a successful breach where some or all of the mentioned on-prem technologies are not involved.

Start with the Active Directory network: securing it and applying ZTNA principles is exceedingly difficult, as the architecture offers countless lateral movement opportunities within the network, with little or no MFA. Once attackers are inside the network perimeter, persistence is easy and a variety of living-off-the-land techniques are available to them.

Then consider VPN and remote access scenarios: these are not easy to upgrade or modernize, so hardly any of them includes device posture and user identity checks before allowing granular access to an application, the key ingredients of ZTNA. Also, on-prem self-managed VPN/firewall appliances are under unprecedented scrutiny by threat actors: firewall vendors are being extensively researched for vulnerabilities by well-funded threat actor groups, often backed by nation-states (Sophos, for example, has offered a candid report). These groups develop exploits and launch threat campaigns to compromise as many devices as possible, well before the vendor becomes aware of the exploitation, let alone provides patches.
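
To make those ingredients concrete, here is a minimal sketch, in TypeScript, of the per-request decision a ZTNA broker makes. All type and field names are illustrative assumptions, not any particular vendor’s API:

```typescript
// Illustrative per-request ZTNA access decision; names are hypothetical.

interface DevicePosture {
  diskEncrypted: boolean;
  osPatchLevelCurrent: boolean;
  managedByMdm: boolean;
}

interface AccessRequest {
  userId: string;
  mfaSatisfied: boolean;   // attested by the identity provider
  posture: DevicePosture;  // reported by an endpoint agent
  application: string;     // the single application being requested
}

// Unlike a VPN, access is granted per application, per request, and
// only when both the identity and the device checks pass.
function authorize(req: AccessRequest, allowedApps: Set<string>): boolean {
  const identityOk = req.mfaSatisfied;
  const deviceOk =
    req.posture.diskEncrypted &&
    req.posture.osPatchLevelCurrent &&
    req.posture.managedByMdm;
  return identityOk && deviceOk && allowedApps.has(req.application);
}

// A compliant device with strong MFA gets exactly one application,
// not a network-wide tunnel.
const allowed = new Set(["payroll-web"]);
console.log(authorize({
  userId: "alice@example.com",
  mfaSatisfied: true,
  posture: { diskEncrypted: true, osPatchLevelCurrent: true, managedByMdm: true },
  application: "payroll-web",
}, allowed)); // true
```

The contrast with a traditional VPN, where passing a single perimeter check grants broad network reach, is exactly the gap described above.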

ZTNA vendors will try to address all those gaps around traditional enterprise systems, but the whole implementation becomes much more costly and complicated, a true example of bolt-on security rather than a secure-by-design system.

By modernizing, i.e., moving applications to a more SaaS-like consumption model, the network perimeter and the associated on-prem complexity tend to dissipate, which is actually a good thing, as it makes ZTNA easier to implement. As the main entry point into an organization’s IT system becomes the web application URL (and the browser), attackers’ opportunities shrink and they are forced to focus on the identity layer: subverting authentication, phishing, and so on.

Of course, a higher degree of trust has to be placed (and tolerated) in SaaS providers, but at least we now have clear guidance on what to look for when transitioning to SaaS and cloud: identity protection, MFA, and phishing-resistant authentication mechanisms become critical—and these are often enforced by default or at least much easier to implement compared to traditional systems.
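
To illustrate what “phishing-resistant” means in practice, here is a minimal browser-side sketch using the standard WebAuthn API (the mechanism behind passkeys). The two server endpoints are hypothetical placeholders:

```typescript
// Minimal WebAuthn (passkey) sign-in sketch; the /webauthn/* endpoints
// are hypothetical placeholders for your own backend.

async function signInWithPasskey(): Promise<void> {
  // 1. Fetch a one-time challenge from the server (assumed base64-encoded).
  const { challenge } = await (await fetch("/webauthn/challenge")).json();

  // 2. Ask the authenticator to sign it. The browser binds the assertion
  //    to the site's origin, which is what defeats phishing: a look-alike
  //    domain cannot obtain a valid signature for the real site.
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), c => c.charCodeAt(0)),
      userVerification: "required",
    },
  });

  // 3. Return the assertion for server-side verification. Real code must
  //    base64url-encode the binary response fields before serializing.
  await fetch("/webauthn/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id: assertion?.id }),
  });
}
```

There is no shared secret the user can be tricked into typing, which is the property that makes this class of authentication phishing-resistant.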

As an example, see the British Library (BL) report on the ransomware attack it suffered in late 2023: BL indicates how its on-prem technology made it harder to implement MFA, a gap that was identified as a key contributor to the attackers’ success. BL also uses cloud-based systems, apparently Microsoft 365 as well as applications for finance and payroll functions, and these “have functioned normally throughout the incident”, protected by the MFA-enabled cloud identity provider. Finally, the BL report concludes: “We expect the balance between cloud-based and onsite technologies to shift substantially towards the former in the next 18 months, which will come with its own risks that need to be actively managed, even as we substantially reduce security and other risks by making this change.”

There’s little to add to that. With a willingness to move to SaaS, I believe the organization is much better positioned, both in implementing ZTNA principles and in its overall security risk profile.

In your experience, what are the most common mistakes organizations make when transitioning from legacy to SaaS or cloud-native models, particularly from a security standpoint?

The sunk cost fallacy I mentioned earlier is perhaps the most common mistake. Being overly invested in existing technologies (yes, even emotionally) creates an unwillingness to move and modernize. In larger organizations, one has to factor in politics as well: there are powerful “constituencies” or lobbies pushing for a particular technology stack.

In our region, this is certainly a preference for the on-prem technology mix discussed above: Active Directory, self-managed Windows and Linux servers running on an internal IaaS platform such as VMware, VPN and even remote desktop (RDP) access. It is also the stack preferred by ransomware operators and other threat actors, as it offers so many more opportunities to infiltrate and persist within an IT environment.

The unwillingness to simplify the technology stack by moving to SaaS is then combined with a reluctant and forced move to the cloud for some applications, usually dictated by business priorities or even ransomware attacks (as in the BL case above). This is a toxic mix that increases complexity and reduces a resource-constrained organization’s ability to keep security risks at bay.

Another mistake is not fully recognizing the human factor, as attackers are forced to focus on subverting employee identity to infiltrate. Recall how strong authentication becomes all-important when moving to SaaS and cloud. It also means security awareness, i.e., employees’ preparedness and capacity to recognize novel phishing and scam attempts, becomes more important.

As infrastructure is simplified, the most advanced cyberattacks aren’t necessarily those that exploit particular technical vulnerabilities; they target people. Industry reports such as Verizon’s Data Breach Investigations Report consistently find that the vast majority of breaches involve the human element: stolen credentials or phishing.

This underscores the importance of placing emphasis on continuous security awareness training and identity security within the IT security budget.

We’ve seen uneven adoption of cloud services across the EU. In what ways does this lag create a wider attack surface or increase exposure to threats like ransomware or data exfiltration?

Indeed, official Eurostat data on cloud computing usage by enterprises confirms that more developed EU countries are also more avid cloud adopters, in contrast with eastern and southern EU member states, which lag behind.

I suspect this reflects cultural differences: a more pervasive lack of trust within the economy and society in general holds back change and innovation, including transitions to the cloud. It is true that SaaS implies more reliance on vendors, including delegating more trust to a third party. But increasing levels of trust underlie the division of labor and specialization, which are required for productivity growth and a thriving economy.

There are also more immediate security implications here: I’ve mentioned earlier how threat actors thrive on existing on-prem and self-managed infrastructure, used precisely where organizations are more cloud averse. The opportunities for attackers there are more plentiful, so the incentives to attack are stronger.

SaaS providers are not perfect, nor do they guarantee security. But there is a difference in attack surface exposure between a solution consumed as SaaS and the same vendor’s software consumed the traditional way (i.e., self-installed). When the vendor provides the service as SaaS, any security issue directly and simultaneously impacts its service for all customers and therefore has a more immediate consequence on revenue and reputation, so the incentive to secure is much stronger (no wonder SaaS providers are more interested in driving adoption of strong MFA).

Contrast that with software packaged to be implemented on the customer’s side the traditional way: here most of the responsibility falls on the customer, including patching, securing the setup, monitoring for intrusions, and so on, while the incentive to secure is much more diluted.

Also, on-prem installed software does not give the vendor timely information when threat actors discover and weaponize a new zero-day vulnerability. For the vendor, this means slower detection and telemetry feedback, followed by research and development of security patches, which then require more time for distribution to all customers and, finally, installation. So it’s no wonder we sometimes see months elapse between a vulnerability being widely exploited in the wild and it being patched by a majority of customers.

In the case of SaaS, this feedback loop is much shorter and more immediate, which is especially important now that threat actors increasingly focus on zero-day vulnerabilities, especially in internet-facing services such as VPN devices and web applications.

The earlier-mentioned Data Breach Investigations Report for 2025 confirms this trend. The authors note a significant rise in the exploitation of vulnerabilities as an initial access vector for breaches over the past year: “This was an increase of 34% in relation to last year’s report and was supported, in part, by zero-day exploits targeting edge devices and VPNs.”

It’s worth stressing that all this exploitation keeps occurring in self-managed software or devices on customer premises (as opposed to equivalent SaaS offerings from the same vendors); the vulnerabilities are exploited days or weeks before being noticed by vendors, let alone a patch being released; and it usually takes months for patches to be installed by customers (if ever).

Finally, these events require costly professional security services for cleanup and for assurance that attackers did not infiltrate the network prior to patching (for a recent example of this dynamic, see the SharePoint on-prem vulnerability mentioned above).

How can CISOs in mid-sized or resource-constrained organizations balance the need to modernize with the need to secure high-risk legacy systems in the interim?

Although these might not be cases of “low-hanging fruit” or “quick wins”, there are some areas where focus can yield significant improvement.

First, tackle email, office productivity and collaboration software – this is the part most often migrated first to a SaaS consumption model, and there shouldn’t really be a reason to keep running and maintaining it on-prem. Migration tooling is widely available here, and knowledge of how to migrate is quite common among implementation partners.

Second, reduce VPN and remote access usage by exposing key applications via solutions such as SASE and managed remote access offerings. This reduces the need for self-managed VPNs and remote connectivity, shrinks the entry points available to attackers and, most importantly, increases MFA adoption more or less automatically.
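
As a rough sketch of the pattern behind these offerings: the legacy application is never exposed directly, an identity-aware front door redirects unauthenticated users to the identity provider (which enforces MFA), and only authenticated traffic is forwarded. The cookie name and login URL below are hypothetical:

```typescript
// Skeleton of an identity-aware front door for a legacy application.
import { createServer, type IncomingMessage } from "node:http";

const IDP_LOGIN_URL = "https://login.example-idp.com/authorize"; // hypothetical

// Hypothetical check: was a session established after IdP login?
// A real gateway would verify a signed session cookie or token here.
function hasValidIdpSession(req: IncomingMessage): boolean {
  return Boolean(req.headers.cookie?.includes("idp_session="));
}

createServer((req, res) => {
  if (!hasValidIdpSession(req)) {
    // Unauthenticated users never reach the legacy app; MFA is enforced
    // centrally by the identity provider before any session exists.
    res.writeHead(302, { Location: IDP_LOGIN_URL });
    res.end();
    return;
  }
  // Authenticated requests would be proxied to the internal legacy app
  // here; the proxying itself is elided for brevity.
  res.end("forwarded to legacy app");
}).listen(8443);
```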

Going further, you should rethink your identity infrastructure (most likely based on Active Directory) by decoupling from it and later switching to a cloud identity provider, which will make it much easier to support MFA pervasively across the entire application estate.
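
As a minimal sketch of what that decoupling enables: once sign-in goes through a cloud identity provider, any application can verify the provider’s tokens against its published keys, with MFA policy enforced centrally rather than per application. This uses the open-source jose library; the issuer URL, JWKS path and audience are hypothetical placeholders for a real tenant’s values:

```typescript
// Verifying a cloud identity provider's access token with `jose`.
import { createRemoteJWKSet, jwtVerify } from "jose";

const ISSUER = "https://login.example-idp.com/your-tenant"; // hypothetical
// The JWKS path varies by provider; this one is a placeholder.
const JWKS = createRemoteJWKSet(new URL(`${ISSUER}/.well-known/jwks.json`));

export async function verifyAccessToken(token: string) {
  // Signature, issuer and audience are all checked here; the MFA
  // requirement itself lives in the identity provider's policy.
  const { payload } = await jwtVerify(token, JWKS, {
    issuer: ISSUER,
    audience: "legacy-payroll-app", // hypothetical app identifier
  });
  return payload; // verified claims: subject, expiry, etc.
}
```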

Ask yourself which business apps can be consumed as SaaS, and do your market research. Any new business need should be handled by consuming SaaS applications. As outlined earlier, a key evaluation criterion here should be whether the application supports MFA, ideally the phishing-resistant kind.

With all this in mind, the time and resources required to manage and maintain the IT environment will remain manageable. In fact, this approach allows for more focused attention on the legacy infrastructure as well, including its security.
