The momentum behind cloud computing couldn’t be stronger as companies, governments and other organizations move to the cloud to lower costs and improve agility. However, you need look no further than headlines about the latest data breach to know how important a sound security architecture is amid this rapid cloud adoption.
The question is on every CIO’s and security officer’s mind: What are the most efficient techniques to detect threats to cloud services?
Security technology is advancing to answer the challenge. Machine learning, threat intelligence and predictive analytics are among the techniques being combined to catch bad actors. Enterprises can also detect threats more efficiently by using application and situational context in conjunction with machine learning to reduce one of the biggest pitfalls of threat detection – false positives – and ultimately heighten security across the board.
The first step in wrapping your head around cloud security architecture is deciding what to monitor. Remember, the threat landscape in cloud workloads is dynamic. Every source of activity should be monitored: configurations, APIs, end users, administrators and other privileged users, external federated users, service accounts, and the types of transactions users make. Everything.
Second, it’s essential to understand why context is important. It’s the only way to gauge threat severity and decide whether a specific event or particular user behavior is anomalous. Examples of context include a business user performing a mass delete of objects after hours, a part-time contractor performing administrative operations in multiple cloud applications, and an engineer cloning a source code repository from an unknown location.
By implementing a comprehensive approach – activity monitoring and user behavior analysis, and considering the context in which those events happen – organizations can be confident that their clouds are secure. The strategy should follow these six tactics:
1. Threat analytics and detection architecture. It starts with an architecture that can analyze data from various sources to derive early indicators of threats. The architecture should accept data feeds from all of the sources mentioned above and apply machine-learning techniques to consume that data efficiently and identify anomalies. A combination of supervised and unsupervised techniques should be used.
2. Security configurations. The security posture of a service depends on how stringent its security configurations are. A weak configuration provides an entry point for malicious users; examples of the resulting risks include administrator accounts with weak passwords, over-permissive access to servers, and anonymous users accessing sensitive content. It is important to set stringent values and continuously monitor them for drift.
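One way to monitor for drift is to compare the live configuration against a hardened baseline. The sketch below illustrates the idea; the setting names and baseline values are illustrative assumptions, not any specific cloud provider's schema.

```python
# Illustrative baseline of hardened settings (assumed names and values).
BASELINE = {
    "password_min_length": 12,
    "anonymous_access": False,
    "admin_mfa_required": True,
}

def detect_drift(current: dict) -> list:
    """Return the names of settings that have drifted from the baseline."""
    drifted = []
    for key, expected in BASELINE.items():
        if current.get(key) != expected:
            drifted.append(key)
    return drifted

# Example: a scan of the live service finds a weakened password policy
# and anonymous access switched on.
observed = {
    "password_min_length": 8,
    "anonymous_access": True,
    "admin_mfa_required": True,
}
print(detect_drift(observed))  # ['password_min_length', 'anonymous_access']
```

In practice this check would run continuously against configuration snapshots pulled from each cloud service's API, and any non-empty result would raise an alert.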
3. Contextual data feeds. A risk event should be analyzed in the context in which it occurs; without that context, you end up with a high false-positive rate. For example, alerting on a user's anomalous behavior by looking only at her login data in AD is insufficient. For better accuracy, login behavior should be correlated with user attributes such as transaction type, transaction sensitivity, whether the user is traveling, whether the user is a part-time employee, and the user's roles. Contextual data improves threat detection accuracy.
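Enrichment can be as simple as folding user attributes into the event's risk score. This is a minimal sketch; the field names, point weights and 0–10 scale are hypothetical, chosen only to show how context shifts the score.

```python
def contextual_risk(event: dict, user: dict) -> int:
    """Combine event data with user context into a simple additive risk score (0-10).

    Weights are illustrative assumptions, not calibrated values.
    """
    score = 0
    if event.get("transaction_sensitivity") == "high":
        score += 4
    if event.get("location") not in user.get("usual_locations", []):
        score += 3  # login from an unfamiliar location
    if user.get("employment") == "part_time":
        score += 2  # contractors doing sensitive work warrant extra scrutiny
    if "admin" in user.get("roles", []):
        score += 1  # privileged roles amplify any anomaly
    return score

# A sensitive after-hours transaction by a part-time admin from an
# unknown location scores far higher than the same login data alone.
event = {"transaction_sensitivity": "high", "location": "unknown"}
user = {"usual_locations": ["office"], "employment": "part_time", "roles": ["admin"]}
print(contextual_risk(event, user))  # 10
```

The same login event with benign context (familiar location, full-time employee, no admin role) would score near zero, which is exactly how context suppresses false positives.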
4. User behavior analytics. User behavior analytics models and analyzes user-centric behavior, covering both end users and privileged users. A highly privileged user, or an end user with access to many cloud services, generally represents high risk. It is important to monitor such high-risk users continuously by adding them to a watch list. Their behavior, the strength of their passwords, the authentication policy and all sensitive privileges should be monitored and adjusted to limit the risk their activity creates.
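Building that watch list can start from a simple risk score over privilege level and breadth of service access. The scoring rule and threshold below are illustrative assumptions; a real system would weight many more signals.

```python
def risk_score(user: dict) -> int:
    """Score a user by breadth of access and privilege (assumed weights)."""
    score = len(user["services"])  # broad service access raises risk
    if user["privileged"]:
        score += 5                 # admin rights raise it further
    return score

def build_watch_list(users: list, threshold: int = 6) -> set:
    """Collect users whose score meets the (hypothetical) watch-list threshold."""
    return {u["name"] for u in users if risk_score(u) >= threshold}

users = [
    {"name": "alice", "privileged": True,  "services": ["crm", "hr", "erp"]},
    {"name": "bob",   "privileged": False, "services": ["crm"]},
    {"name": "carol", "privileged": False, "services": ["crm", "hr", "erp",
                                                        "billing", "devops",
                                                        "storage"]},
]
print(sorted(build_watch_list(users)))  # ['alice', 'carol']
```

Note that carol lands on the list purely through breadth of access, without any admin privilege, which matches the point that broadly entitled end users deserve the same scrutiny as administrators.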
5. Supervised and unsupervised machine learning techniques. Machine-learning techniques should be used to define a baseline and detect outliers. A practical approach is to use both unsupervised and supervised models to improve accuracy and reduce false positives. Many implementations use one or the other, resulting in either a high false-positive rate or a solution that does not scale because it demands a large volume of labeled data.
To improve the accuracy and scalability of threat detection, use unsupervised learning to model clusters of users with normal behavior; statistical and probabilistic mixture models have proven practical for this purpose. Outliers from these clusters represent users with abnormal behavior, i.e., risky users. Also use supervised models to incorporate hints from security experts about risky users' patterns and actions. Based on these hints, build benchmark datasets for training, validating and testing the supervised models.
Though supervised models require more security expertise and manual effort, they tend to produce a lower false-positive rate than unsupervised models. The best practice is to increase the effectiveness of supervised modeling with an unsupervised pre-processing step: the unsupervised stage identifies high-risk users with a moderate false-positive rate, which the subsequent supervised models then reduce.
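The two-stage pipeline can be sketched in miniature: an unsupervised statistical baseline flags outlier users, and a small supervised model trained on analyst-labeled examples confirms them. Everything here is a toy assumption, including the single "actions per day" feature, the z-score outlier rule and the nearest-centroid classifier; production systems would use richer features and models such as the mixture models mentioned above.

```python
import statistics

def unsupervised_outliers(activity: dict, z_cut: float = 1.5) -> list:
    """Unsupervised step: flag users whose activity deviates from the population."""
    values = list(activity.values())
    mu, sigma = statistics.mean(values), statistics.stdev(values)
    return [u for u, v in activity.items() if abs(v - mu) / sigma > z_cut]

def train_centroids(labeled: list) -> tuple:
    """Supervised step: centroids of benign (label 0) and risky (label 1) examples."""
    benign = statistics.mean(x for x, y in labeled if y == 0)
    risky = statistics.mean(x for x, y in labeled if y == 1)
    return benign, risky

def classify(value: float, centroids: tuple) -> int:
    """Assign the label of the nearest centroid (1 = risky)."""
    benign, risky = centroids
    return int(abs(value - risky) < abs(value - benign))

# Synthetic daily action counts per user; eve is anomalous.
activity = {"alice": 20.0, "bob": 22.0, "carol": 21.0, "dave": 19.0, "eve": 95.0}
candidates = unsupervised_outliers(activity)  # cheap pre-processing pass
# Analyst-labeled hints: (actions_per_day, label).
centroids = train_centroids([(20.0, 0), (22.0, 0), (90.0, 1), (100.0, 1)])
confirmed = [u for u in candidates if classify(activity[u], centroids)]
print(confirmed)  # ['eve']
```

The unsupervised pass keeps the labeled-data requirement small because the supervised model only ever sees the handful of candidates, not the whole user population.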
6. Threat intelligence feeds. Real-time collaboration with security communities and commercial intelligence feeds help detect threats at an early stage. For example, a hacker accessing an application using compromised user credentials from a blacklisted IP address can be detected if external intelligence feeds provide blacklisted network information.
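The blacklist correlation in that example reduces to a set-membership check against the feed. This sketch uses a hard-coded feed snapshot and invented event fields for illustration; real feeds arrive continuously via formats such as STIX/TAXII or vendor APIs.

```python
# Illustrative snapshot of an external intel feed (IPs from the
# TEST-NET documentation ranges, used here as stand-ins).
BLACKLISTED_IPS = {"203.0.113.7", "198.51.100.23"}

def flag_blacklisted_logins(events: list) -> list:
    """Return login events originating from IPs the intel feed marks as malicious."""
    return [e for e in events if e["source_ip"] in BLACKLISTED_IPS]

events = [
    {"user": "alice",   "source_ip": "192.0.2.10"},
    {"user": "mallory", "source_ip": "203.0.113.7"},  # compromised credentials
]
print(flag_blacklisted_logins(events))  # [{'user': 'mallory', 'source_ip': '203.0.113.7'}]
```

Because the feed is external, this check catches the compromised-credential case even when the login itself looks behaviorally normal.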
As an organization’s cloud footprint grows, it’s vital to take a comprehensive approach to security that encompasses machine learning, threat intelligence, predictive analytics and context.