Introducing a new security model into your existing infrastructure can be challenging. The task becomes even more daunting when starting with a new host-based or micro-segmentation solution. If you’ve decided on a host-based approach to segmentation, I’d like to share, based on personal experience, some advice and best practices on using this type of solution in your organization.
The business case that drove your organization to adopt a host-based segmentation solution will serve as an anchor for your initial design. However, it’s essential to consider how it will interact with your overall IT security strategy.
I recommend taking the time early on to identify and define who within your organization will be involved as users, support staff, governance, or owners. The solution you choose may have an agent component installed on the host operating system and a cluster for managing the agents, policy, and metadata collection. As a result, the lines of responsibility between security, network, and platform teams may become blurred.
Additionally, there are likely critical programs in flight that may influence the final product selection, so be sure to investigate other areas where your organization is investing its technology budget. For example, consider your company’s current cloud strategy and how host-based segmentation might tie into it.
- Will you target containers, bare-metal, and virtual servers? If so, which operating systems must be supported?
- How will the existing traditional firewall policies be managed with the introduction of host-based policies?
- Are there any regulatory requirements or audit findings that micro-segmentation or its visibility data can address?
Over time, as more visibility information is collected, it will become easier to identify areas where host-based visibility data can provide value and help you define edge use cases. Armed with segmentation use cases and the answers to these questions, it becomes easier to select a product and begin the initial design and testing phases.
Achieving full agent deployment across your organization will surface any limitations and special considerations before you enter the policy design phase. Deploying and implementing policy as you go is possible but, in my experience, poses significant challenges that can force you to redesign your policy and labeling structure down the road. Moreover, label-based rules allow hosts to update their policy dynamically, so if visibility data is missed and a policy is inherited without correct profiling, the application may be impacted.
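To make that inheritance risk concrete, here is a minimal sketch of how label-driven rules resolve against a workload's labels. The label keys ("env", "app", "role") and the rule structure are illustrative assumptions, not any vendor's actual schema:

```python
def matches(selector: dict, labels: dict) -> bool:
    """A rule selector matches a workload if every selector label agrees."""
    return all(labels.get(k) == v for k, v in selector.items())

def resolve_policy(workload_labels: dict, rules: list) -> list:
    """Return the names of the rules a workload inherits from its labels."""
    return [r["name"] for r in rules if matches(r["selector"], workload_labels)]

rules = [
    {"name": "allow-web-to-db", "selector": {"env": "prod", "role": "db"}},
    {"name": "default-deny",    "selector": {}},  # empty selector matches all hosts
]

# A correctly profiled database host inherits both rules ...
print(resolve_policy({"env": "prod", "app": "billing", "role": "db"}, rules))
# ... but a host missing its "role" label silently falls through to
# default-deny alone -- the mis-inheritance scenario described above.
print(resolve_policy({"env": "prod", "app": "billing"}, rules))
```

Because policy follows labels rather than addresses, a profiling gap changes what a host is allowed to do the moment it checks in, which is why full visibility before enforcement matters.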
I’ve found that the product testing phase is key to determining whether the solution you’ve selected meets expectations and functions as advertised. Pay particular attention to validating the solution against your criteria for adherence to organizational security controls, disaster recovery, alerting, OS compatibility, and performance. This phase is also your best opportunity to gather insight into the specifics of how the solution functions.
Test enforcement and labeling thoroughly to see how the solution interacts with your organization’s infrastructure. Try out the potential use cases and analyze how visibility data is handled for traffic that traverses load balancers. Also, keep your test engineers closely involved with the policy design. They will be able to identify potential gaps, and any necessary workarounds can be developed before production roll-out.
It is good practice to keep representatives from any partnered teams involved in testing as well. This ensures they have an opportunity to familiarize themselves with the product and get a chance to validate the solution against the platforms they support. Once testing is complete, secure any necessary sign-offs from stakeholders. Consider network, security, server engineering, and deployment teams. Missing signatures can delay deployment down the road.
Refining your design
The lessons you learned during testing will almost certainly lead to revisions in both the project plan and its design. Issues such as the management of unidentified (unlabeled) assets, policy limitations, existing iptables rules, and agent functionality may need to be addressed. Any infrastructure design changes to support sizing, scalability, and the disaster recovery environment can be finalized. Use cases and scope can be locked in, but also make sure your contextual labeling and policy strategy are well defined.
Many application teams will perform certain functions on a monthly, quarterly, or yearly basis. Take time to reach out to your application teams and determine if and when they perform disaster recovery or special batch processing. Design the profiling timeline to take into account these additional factors if they cannot be addressed in the policy design from the start.
Additionally, before production deployment, finalize your support model. Identify who will ultimately be responsible for managing the product. Consider the different components of the solution and address how each will need to be supported.
For example, make sure you have documented answers to the following scenarios:
- How do you plan to support the agent residing on the host, the management components, disaster recovery, deployment of agents, and policies?
- Will application owners be aware of the security solution and, if so, how can they reach out for assistance in the event of impact?
- If a connection will be added to a host already in enforcement, what will the process look like?
- How will policy exceptions and governance be tracked?
- If suspicious connectivity is detected, what process will that follow and whose responsibility is it to investigate?
- What is your policy recertification strategy?
Micro-segmentation solution: Defining success
After you’ve finalized your use cases and built a strong understanding of the solution, it’s time to define your success criteria and how you will track them. Consider using built-in reporting functions or APIs to collect and parse the relevant data. You may want to report on the number of individual systems or applications in enforcement, the number of deployments, or the overall percentage of threat vector reduction. A Top Talkers report grouped by application or host turns raw traffic data into an actionable view enriched with host visibility information.
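The metrics above can be computed from whatever your product's reporting API exports. A short sketch, assuming hypothetical record fields ("hostname", "app", "enforced", "bytes") standing in for the real export schema:

```python
from collections import Counter

# Sample records as a reporting API might return them (fields are assumptions).
records = [
    {"hostname": "web-01",   "app": "storefront", "enforced": True,  "bytes": 9_500},
    {"hostname": "db-01",    "app": "storefront", "enforced": True,  "bytes": 42_000},
    {"hostname": "batch-01", "app": "billing",    "enforced": False, "bytes": 7_200},
]

# One success metric: percentage of systems in enforcement.
enforced_pct = 100 * sum(r["enforced"] for r in records) / len(records)
print(f"{enforced_pct:.0f}% of systems in enforcement")

# Top talkers rolled up by application, enriched from host visibility data.
traffic = Counter()
for r in records:
    traffic[r["app"]] += r["bytes"]
for app, total in traffic.most_common():
    print(f"{app}: {total} bytes")
```

Feeding a report like this into a dashboard on a schedule gives stakeholders a running view of enforcement progress rather than a one-time snapshot.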
In many organizations, there will be less time to implement integrations and automation once you have entered the enforcement phase. This means that the more time and effort you spend building the foundational components of your host-based segmentation solution project, the easier management will be in the long run. I also recommend that organizations automate early and often.
Create the solution designs and integrate the solution into your CMDB, SOAR, SIEM, and alerting platforms as quickly as possible. Once you begin to enforce segmentation policies, use the visibility data from the host-based solution to identify infrastructure services and designate labels for them. Because policy is applied in layers, it doesn’t need to be maximally restrictive at first. Focus on accuracy and on keeping the label-driven policy as dynamic as possible. Over time you can tighten the controls and apply your specific use cases.
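Seeding that first, coarse layer of labels from observed flows can itself be automated. A sketch under stated assumptions: the port-to-service mapping and flow fields below are illustrative, and anything not matching a known service is deliberately left unlabeled for human review:

```python
# Well-known ports for core infrastructure services (an assumption, not
# a complete list -- extend from your own environment's visibility data).
KNOWN_SERVICES = {53: "dns", 123: "ntp", 389: "ldap", 514: "syslog"}

observed_flows = [
    {"dst": "10.0.0.10", "port": 53},
    {"dst": "10.0.0.10", "port": 53},
    {"dst": "10.0.0.20", "port": 123},
    {"dst": "10.0.0.30", "port": 8443},  # unknown service: leave for review
]

labels = {}
for flow in observed_flows:
    service = KNOWN_SERVICES.get(flow["port"])
    if service:
        # Label the destination so a broad "allow core services" rule can
        # reference the label dynamically instead of pinning individual IPs.
        labels[flow["dst"]] = service

print(labels)  # hosts we can confidently label in the first, coarse layer
```

Starting with labels for DNS, NTP, and similar services keeps the early policy accurate and dynamic; the stricter, application-specific rules come later as profiling matures.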
Faced with the reality of an ever-evolving cybersecurity landscape, organizations are increasingly considering a host-based micro-segmentation solution. Adoption is growing as organizations seek to improve security postures with a defense in depth strategy, reduce cost and complexity by leveraging a flat network design, or simplify compliance efforts. Hopefully, the advice above will help anyone looking to operationalize a new solution of this type avoid common challenges and realize project success.