Why companies can no longer hide keys under the doormat

For good reason, companies put their trust in encryption, blockchain, zero-trust access, distributed and multi-party strategies, and other core security technologies. At the same time, many are effectively hiding the keys that could undermine all of these protections under a (figurative) doormat.

Strong encryption is of little use when an insider or attacker can gain control of the private keys that underpin it. This vulnerability arises whenever keys must be used on servers for processing. Encryption can protect bits and bytes in storage or transit, but when they are processed on a CPU they must be “in the clear” to perform the necessary computation. As a result, they are accessible to rogue insiders, attackers, and third parties such as consultants, partners, or even the providers of software and hardware components used in data center infrastructure.

This is the nature of encryption. It provides strong security for storage and transit, but when execution is eventually required—and it is always required at some point for data, code, or digital assets to be useful, or to enable a transaction—the process faces its Achilles heel.
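
To make the gap concrete, here is a minimal Python sketch, using the widely available `cryptography` package, of data that is safely encrypted at rest but must be decrypted into ordinary process memory before it can be used. The record contents are invented for illustration; the point is the `plaintext` variable, which is exactly the in-the-clear exposure described above.

```python
from cryptography.fernet import Fernet

# Encrypt a record "at rest": the ciphertext is safe to store anywhere.
key = Fernet.generate_key()
f = Fernet(key)
ciphertext = f.encrypt(b"account=4471 balance=102500")

# To actually use the data (search it, compute on it, display it),
# it must be decrypted into plain process memory for the CPU.
plaintext = f.decrypt(ciphertext)

# At this moment, both `plaintext` and `key` sit in the clear in RAM,
# reachable by anyone or anything with sufficient access to this host.
print(plaintext)
```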

Private keys must be processed on a CPU for their initial creation, for the encryption and decryption involved in key exchange, for producing digital signatures, and for some key-management tasks, such as handling expired public keys. The same principle, the need for in-the-clear execution on a CPU, applies to certain blockchain and multi-party computation (MPC) tasks. More generally, merely executing encrypted application code or processing encrypted data exposes them, since CPUs require code and data to be in the clear.
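
The same applies to the keys themselves. In the hedged Python sketch below (again using the `cryptography` package), producing a digital signature requires the private key to be present in process memory at the moment of use, and serializing it shows how directly the raw key material can be reached by anything running with access to that process.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Key creation happens on a CPU: the private scalar is born in the clear.
private_key = ec.generate_private_key(ec.SECP256R1())

# Signing also requires the private key in memory at the moment of use.
signature = private_key.sign(b"transfer 5 BTC", ec.ECDSA(hashes.SHA256()))

# Anything with access to this process can extract the key outright.
pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)
print(pem.decode())
```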

CIOs need to ask their teams questions to assess this potential exposure and understand the risk, and to put plans in place to address it.

Fortunately, recent breakthroughs make it possible to close this encryption gap and maintain full protection for private keys. Leading CPU vendors have added security hardware to their advanced microprocessors that prevents unauthorized access to code or data during execution, and to whatever remains in memory and caches afterward. These chips now ship in most servers, particularly those used by public cloud vendors, under a technology generally known as confidential computing.

This “secure enclave” technology closes the encryption gap and protects private keys, but it has required changes to code and IT processes that can involve a significant amount of technical work. It is specific to a particular cloud provider (meaning it must be altered for use in other clouds) and complicates future changes in code or operational processes. Fortunately, new “go-between” technology eliminates the need for such modifications and potentially offers multi-cloud portability with unlimited scale. In other words, the technical drawbacks have been virtually eliminated.
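
At its core, what the enclave hardware adds is a measured launch plus remote attestation: secrets are released to code only after its cryptographic measurement matches an expected value. Real platforms (Intel SGX/TDX, AMD SEV-SNP) implement this with signed hardware reports; the toy Python sketch below only shows the shape of that check, and `expected_measurement` and `release_key_to_enclave` are illustrative names, not any vendor's API.

```python
import hashlib
import hmac

# Hypothetical: the measurement an operator expects for the approved
# enclave binary (in real systems, a digest signed by the CPU vendor).
APPROVED_ENCLAVE_CODE = b"...the exact bytes of the audited binary..."
expected_measurement = hashlib.sha384(APPROVED_ENCLAVE_CODE).hexdigest()

def release_key_to_enclave(reported_measurement: str) -> bool:
    """Illustrative gate: hand over the private key only if the
    enclave's reported measurement matches the expected one."""
    return hmac.compare_digest(reported_measurement, expected_measurement)

# A genuine enclave reports the right measurement; tampered code does not.
print(release_key_to_enclave(expected_measurement))                 # True
print(release_key_to_enclave(hashlib.sha384(b"evil").hexdigest()))  # False
```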

CIOs need to ask their management leads or teams how private keys are protected and what exposure gap they might face during processing. The same question applies to data and code that are encrypted at rest and in motion but must ultimately be executed: what gap or exposure do they face?

Companies using proprietary application code with a secret key need to ask how that key is protected and what kind of risk it might face. If applications involve AI or machine learning, the algorithms and models they have developed are likely exceedingly valuable and sensitive.

How are these secured during runtime? Even the testing of algorithms, often done using MPC so that real data (perhaps from customers or partners) can be utilized, may expose data, code, or both; a minimal sketch of the secret-sharing idea behind MPC follows below. What protections are now in place to secure them? Blockchain, too, involves that execution exposure: how is it being managed?
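
For readers unfamiliar with how MPC can keep individual inputs hidden during such testing, the sketch below shows its simplest building block, additive secret sharing over a prime field: each input is split into random-looking shares, and no share on its own reveals anything. This is a teaching toy under simplified assumptions, not a production protocol; real MPC frameworks layer authentication, malicious-security checks, and networking on top.

```python
import secrets

P = 2**61 - 1  # a prime modulus; all arithmetic is mod P

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares: each share alone is
    uniformly random and reveals nothing about the value."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Two parties contribute private inputs (e.g., real test data).
alice_shares = share(123_456, 3)
bob_shares = share(654_321, 3)

# Each of the 3 compute parties adds the shares it holds, locally,
# without ever seeing either original input.
local_sums = [(a + b) % P for a, b in zip(alice_shares, bob_shares)]

# Combining the results reveals only the aggregate, not the inputs.
print(reconstruct(local_sums))  # 777777
```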

The execution gap is not limited to public cloud. Private cloud and on-premises data centers face the same issues. CIOs need to ask if the gap is being mitigated, and how. Perhaps it is counterintuitive, but public cloud, with the use of confidential computing, may be the most secure location for executing code, algorithms, and data. If an organization is not currently using public cloud—over concerns of potential exposure of regulated or proprietary data—perhaps it is time to reexamine its use.

Eschewing public cloud has often been due to control and access concerns. With private clouds and on-premises data centers, organizations generally know, and can control, who has access to what, through combinations of physical, network, and application security, logging and monitoring, and various forms of zero-trust access. The worry with public cloud has been how to prevent access by unauthorized insiders, third parties, would-be attackers, and even the third-party hardware and software components within the provider's infrastructure. Now, with confidential computing, those concerns can potentially be eliminated.

CIOs must challenge the popular notion that encryption is fully secure, and even the assumed surety of blockchain and MPC. With so much hinging on private keys, leaders must ensure that these are protected using the best practices and technologies available.
