How CISOs navigate policies and access across enterprises

In this Help Net Security interview, Marco Eggerling, Global CISO at Check Point, discusses the challenge of balancing data protection with diverse policies, devices, and access controls in a distributed enterprise. He also highlights the significance of security validations, especially internal testing, to understand the organization’s security posture.

Eggerling recommends a preventive approach, leveraging advanced AI tools in the borderless enterprise landscape, and emphasizes the importance of comprehensive authorization and access control in implementing a zero-trust framework.


How can organizations balance the need for data protection with the challenges of managing a diverse array of policies, devices, and access controls across a distributed enterprise?

This balancing act is just one of many that the CISO is orchestrating daily! Risk-based analysis should help the security executive prioritize efforts to secure data and manage the broader security program defending the enterprise. Data classification will determine the appropriate levels of data protection, and securing the proper funding for the tools and personnel to deliver that protection is yet another CISO challenge.

The essential first requirement is to structure and prioritize the data sources, and the assets using this data, according to internal criticality requirements, that is, the importance of the data to the enterprise. Breaking the question down: policies regulate the use of data, devices consume data, and access control manages who can engage with the data, so in many companies the focus lands on the user more than anything else. Access control could, of course, also cover device access, which for historical reasons is less often represented. The balancing act begins when access and availability are put before everything else.
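To make that prioritization concrete, here is a minimal, purely illustrative sketch in Python; the asset names, classification tiers, and scoring weights are assumptions for the example, not a prescription from the interview.

```python
# Illustrative only: rank data assets by criticality so protection effort
# and funding requests can be prioritized. Tiers and weights are assumed.
from dataclasses import dataclass

CLASSIFICATION_WEIGHT = {"public": 1, "internal": 2, "confidential": 3, "restricted": 4}

@dataclass
class DataAsset:
    name: str
    classification: str       # one of the keys above
    business_impact: int      # 1 (low) to 5 (critical), set by the business owner
    exposed_to_third_parties: bool

    def priority_score(self) -> int:
        # Simple additive model: classification drives the base score,
        # business impact scales it, third-party exposure adds a penalty.
        score = CLASSIFICATION_WEIGHT[self.classification] * self.business_impact
        return score + (3 if self.exposed_to_third_parties else 0)

assets = [
    DataAsset("customer-pii-db", "restricted", 5, False),
    DataAsset("marketing-site-content", "public", 2, True),
    DataAsset("source-code-repo", "confidential", 4, True),
]

# Highest-priority assets get protection controls (and budget) first.
for asset in sorted(assets, key=DataAsset.priority_score, reverse=True):
    print(f"{asset.name:28} priority={asset.priority_score()}")
```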

How do security validations contribute to an organization’s understanding of its security posture and risk profile?

Third-party security validation is a helpful tool, but it is typically only part of the analysis required to understand the full posture and effectiveness of a security program. Such validations usually examine only the external posture of an organization and do not look past the external-facing security layers.

To go deeper, penetration tests and internal credentialed vulnerability scanning are necessary to determine the entire risk profile. Third-party reports are helpful for comparing your program to peers and can be used to differentiate your program from others… assuming that your program rates better! Validation of security controls aids both in confirming the current state and in getting ready for future requirements of the security program.

Many cases exist where a comprehensive review triggers immediate action to renovate the program, because some risk vectors were either ignored, forgotten, or have recently emerged and were not yet part of the control framework or the risk register.

So, a cadence of regular reviews of the control framework is imperative to staying on top of the ever-changing risk landscape. The introduction of new technology usually brings with it the need to update or modernize the control framework, and quite often introduces brand-new capabilities and risk vectors.

With the increasing sophistication of cyber threats, what are the key components of effective network security in the cloud, and how do they differ from traditional security tools?

Simply speaking, if existing network controls are now being moved to the cloud, the scope of technical controls does not drastically differ from legacy approaches. The technology, however, has massively evolved towards platform-centric controls, and for good reason. Isolated controls cause complexity, and if you are moving your perimeter to a hyperscaler, both your users and their devices will no longer be managed by the corporate on-prem security controls either.

A good CASB to broker between user and data is key, as is identity and access management. What is new is the workload protection requirement, à la CSAP technology. On top of the increasing sophistication and number of security threats and successful breaches, most enterprises further increase risk through “rogue IT” teams leveraging cloud environments without the awareness or oversight of security teams. Cloud deployments are typically rolled out faster and with less planning and oversight than data center or on-site deployments.

Cloud security tools should be an extension of your other premises-based tools, for ease of management, consistency of policy enforcement, and cost savings from consolidated purchase commitments and non-duplicated training and certification. Cloud security tools need to be able to spot and “auto-remediate” risky issues so that the enterprise can deploy resources safely and quickly, and not experience a breach.
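As one hedged illustration of what “auto-remediate” can mean in practice, the sketch below uses AWS's boto3 SDK to close off public access on storage buckets; the choice of AWS, the bucket loop, and the remediation rule are assumptions for the example and do not describe any specific Check Point product.

```python
# Minimal auto-remediation sketch (assumption: AWS + boto3, not a vendor tool):
# if an S3 bucket has no public-access block configured, apply one.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def ensure_public_access_block(bucket: str) -> None:
    try:
        s3.get_public_access_block(Bucket=bucket)
        return  # a configuration already exists; nothing to remediate
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchPublicAccessBlockConfiguration":
            raise
    # "Auto-remediate": block all forms of public access on the bucket.
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

for bucket in s3.list_buckets()["Buckets"]:
    ensure_public_access_block(bucket["Name"])
```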

What are the top security risks that CISOs need to be aware of in today’s technology-dependent world, and how can they effectively mitigate them?

The big ones are AI and the next-gen ransomware waves; the two will merge sooner rather than later. At this point in time, Check Point Research has not yet seen an industrialization, or rather a weaponization, of the two. It is, however, just a matter of time, and this needs to be thoroughly understood by those steering the strategic direction of the security program inside a company.

In general, there are many risks encountered by all enterprises, but the more effective strategy is to understand and rank the risks that your company and vertical are encountering, and then classify them. Address those risks as necessary with appropriate tools and processes. For all enterprises, email remains the number one threat vector, and companies need to consider replacing old gateway technology with the latest API- and AI-enhanced email protection.

What strategies do you recommend in a borderless enterprise landscape for securing identities, data, code, and cloud infrastructure against evolving threats and vulnerabilities?

Old-school security programs could succeed much more easily when they were protecting all the eggs in one basket, in one place. Now that enterprises have resources and data everywhere, including at third parties, the security program needs to expand to wherever those resources and associated risks are located.

One critical strategy to embrace is “prevention” rather than the old-school, “detection”-based program. Preventing an event is typically 100x more cost-effective than detecting a breach. In addition, leveraging the most advanced AI-enhanced security tools is critical to address new AI-based threats. You must bring a bigger AI gun to the AI gunfight!

Considering the challenges in implementing a zero-trust framework, how can organizations ensure comprehensive authorization and access control to protect their infrastructure from threat actors?

Authentication, authorization, and accounting (AAA) control is typically the very first step in an enterprise’s zero-trust journey and forms the cornerstone of that path. Yes, a fully mature zero-trust environment is difficult to attain quickly and involves multiple technologies, partners, processes, time, cost, and effort, but it is without any doubt a very effective strategy and is in all cases worth the effort.

Once a good AAA program is in place, the next step is segmentation. Threat actors have easily exploited enterprises that lack an effective AAA program or network segmentation; just ask the teams at MGM and Caesars in the US.
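As a minimal sketch of how AAA and segmentation reinforce each other (the segment names, services, and allowlist below are assumptions for illustration, not a recommended production design):

```python
# Illustrative sketch of AAA plus segmentation: a request is allowed only if
# the identity is verified AND the source segment is explicitly permitted to
# reach the destination segment. Anything not listed is denied by default.
from dataclasses import dataclass

SEGMENT_ALLOWLIST = {
    ("user-vlan", "web-tier"): {"https"},
    ("web-tier", "app-tier"): {"https"},
    ("app-tier", "db-tier"): {"postgres"},
}

@dataclass
class Request:
    identity_verified: bool   # result of the AAA step (e.g., MFA-backed login)
    src_segment: str
    dst_segment: str
    service: str

def is_allowed(req: Request) -> bool:
    if not req.identity_verified:           # AAA comes first
        return False
    allowed = SEGMENT_ALLOWLIST.get((req.src_segment, req.dst_segment), set())
    return req.service in allowed           # then segmentation

# A verified user reaching the web tier over HTTPS is allowed...
print(is_allowed(Request(True, "user-vlan", "web-tier", "https")))    # True
# ...but even a verified user cannot jump straight to the database tier.
print(is_allowed(Request(True, "user-vlan", "db-tier", "postgres")))  # False
```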
