In this interview with Help Net Security, Scott Laliberte, Managing Director at Protiviti, talks about the implementation of AI and ML in cybersecurity programs, why this is a good practice and how it can advance cybersecurity overall.
AI and ML have been widely praised as useful technologies, particularly when it comes to cybersecurity programs. Still, they remain highly underutilized. Why is this so?
AI and ML are underutilized partly because, frankly, change is hard. To adopt these new technologies, the organization must not only change its existing approaches, but also change the mindset of its people and its culture in order to really embrace them. Security practitioners are often resistant initially because they’re concerned about potentially creating another security issue by introducing a hole or weakness through the employment of AI or ML.
ML is also underutilized because for it to be effective, an organization must have access to good data. Historically, security teams have been plagued with a lack of reliable data, except in certain areas such as log review and SIEM (Security Information and Event Management) where ML is already being successfully deployed (though there is still room for growth here).
Many security problems are intricate and involve high-risk situations. For example, an organization can use AI/ML to analyze log data to detect potential issues and could then use AI to make the decision to block the threat, but the consequence could be that it blocks legitimate traffic or acts on the wrong target, with possible legal ramifications. Because many of the actions that could be automated with AI in security have similar high stakes, organizations are understandably hesitant to employ it for this purpose.
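One common way to manage these stakes is to gate automated action behind a confidence threshold, so the model only acts autonomously when it is near-certain and ambiguous cases still reach an analyst. The sketch below is illustrative only; the threshold values, field names, and tiers are assumptions, not a description of any particular product.

```python
# Hypothetical confidence-gated response: automate blocking only for
# high-confidence alerts; queue ambiguous ones for human review.
AUTO_BLOCK_THRESHOLD = 0.95   # assumed cutoff for autonomous action
REVIEW_THRESHOLD = 0.5        # assumed cutoff for analyst review

def decide_action(alert: dict) -> str:
    """Return 'block', 'review', or 'ignore' for a scored alert."""
    score = alert.get("ml_score", 0.0)
    if score >= AUTO_BLOCK_THRESHOLD:
        return "block"    # model is near-certain: act automatically
    if score >= REVIEW_THRESHOLD:
        return "review"   # ambiguous: send to an analyst
    return "ignore"       # likely benign traffic

alerts = [
    {"src": "10.0.0.5", "ml_score": 0.99},
    {"src": "10.0.0.9", "ml_score": 0.72},
    {"src": "10.0.0.7", "ml_score": 0.10},
]
print([decide_action(a) for a in alerts])  # ['block', 'review', 'ignore']
```

The design point is that the automation never removes the human from the highest-consequence decisions; it only narrows what reaches them.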
The problem is compounded by skepticism on the part of organizations. Many security tools and products claim to use “AI and ML” in their product offering, but too often these capabilities don’t work well or are oversold, resulting in resistance to the technologies.
What can be done to bolster AI and ML usage?
It helps to start small. Organizations can begin by applying AI/ML to lower-risk tasks designed to save security personnel’s time. Measurable success in these areas will help them gain confidence in these key technologies, without creating new security concerns. For example, an organization can automate the review of security tickets for trends and root cause analysis to help managers identify process improvements.
Other examples might be to automate user access reviews to automatically identify changes for managers to approve, or to automate user role creation and assignment so that the analyst and manager only need to approve the account rather than create it from scratch. Once this easier, low-hanging fruit has been tackled successfully, the security team can advance to more complex tasks, such as the identification of anomalous activity and attacks.
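The access-review idea above can be sketched in a few lines: diff each user's current entitlements against a role baseline and surface only the deviations for a manager to approve or revoke. The role names and entitlements below are invented for illustration.

```python
# Illustrative automated user access review: compare granted
# entitlements to the role baseline and flag only the excess.
ROLE_BASELINE = {
    "analyst": {"ticket_read", "ticket_write"},
    "manager": {"ticket_read", "ticket_write", "report_approve"},
}

def review_access(user: str, role: str, granted: set) -> set:
    """Return entitlements the user holds beyond their role baseline."""
    return granted - ROLE_BASELINE.get(role, set())

# Only the excess grant reaches the manager's approval queue.
excess = review_access("jdoe", "analyst",
                       {"ticket_read", "ticket_write", "db_admin"})
print(sorted(excess))  # ['db_admin']
```

Instead of eyeballing every entitlement for every user, the manager reviews a short exception list, which is exactly the time-saving the interview describes.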
One of the greatest issues facing security departments today is the lack of resources and the difficulty of finding and hiring qualified professionals. If AI, ML and intelligent process automation can help alleviate some of an organization’s mundane, time-consuming tasks, security resources can be freed up to focus on higher value activities. Security issues and needs are only going to grow as organizations become increasingly dependent on technology. Ultimately, security teams will have to leverage AI and ML for the organization to survive.
How can AI and ML contribute to the efficiency of cybersecurity programs?
Security teams need to focus first on where value can best be achieved and clearly demonstrated to leadership. Value can be measured in terms of time saved, efficiency gains and process speed-ups, so concentrating on activities such as the ability to close tickets faster, provision users in less time, and respond to incidents faster will yield tangible results quickly.
Although value in terms of reduced risk is also an extremely important metric in security, this is typically much more difficult to quantify. Once the organization sees the tangible benefits of the easily measured tasks, the AI and ML activities can be scaled up to start addressing areas where value quantification is more difficult and less clear.
What could be the top use cases for AI and ML? What are the scenarios you have seen them excel in?
Across the industry, we’ve seen many scenarios where AI and ML are making a real difference in cost and process efficiencies. One good example is where an organization’s SOC (Security Operations Center) uses ML for its operations around log review and incident identification. ML can monitor the process, identify patterns and call out incidents and trends. There are now several specialist companies that have really honed their tools for maximum efficiency and effectiveness in this area.
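At its simplest, the pattern behind ML-assisted log review is to learn a baseline for an event's rate and flag periods that deviate sharply from it. Production SIEM tooling uses far richer models; the z-score sketch below, with an invented threshold and sample data, just illustrates the idea.

```python
# Minimal anomaly-flagging sketch: flag hours whose event count is
# far above the historical baseline (z-score over hourly counts).
from statistics import mean, stdev

def flag_anomalies(hourly_counts, threshold=2.0):
    """Return indices of hours more than `threshold` stdevs above the mean."""
    mu, sigma = mean(hourly_counts), stdev(hourly_counts)
    if sigma == 0:
        return []  # flat baseline: nothing stands out
    return [i for i, c in enumerate(hourly_counts)
            if (c - mu) / sigma > threshold]

# Failed-login counts per hour; the spike at index 5 is flagged.
counts = [12, 9, 11, 10, 13, 80, 11, 10]
print(flag_anomalies(counts))  # [5]
```

The value is in triage: analysts review the flagged hour instead of scrolling through every log line.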
We’ve also seen success with process automation, for example, to help organizations automate DSAR (Data Subject Access Requests) for GDPR. This has been achieved by leveraging ML to perform a ‘triage’ service by reviewing a high volume of requests, looking up the customer, completing an initial identification of data on the customer, and creating a ticket for the analyst to review. The tool was also capable of identifying requests that couldn’t be triaged and sending them instead to a queue for analyst action. The automation more than doubled the speed of the process, enabling a smaller team to handle the high volume of requests.
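The triage flow described here can be outlined as: look up the requester, gather an initial inventory of their data, and open a ticket, or route the request to a manual queue when it can't be matched. The customer store and data map below are stand-ins invented for the example, not a real system.

```python
# Hypothetical DSAR triage: auto-build a ticket for known customers,
# route unmatched requests to an analyst queue.
CUSTOMERS = {"alice@example.com": "C-1001"}           # assumed lookup table
DATA_MAP = {"C-1001": ["orders", "support_tickets"]}  # assumed data inventory

def triage_dsar(email: str) -> dict:
    customer_id = CUSTOMERS.get(email)
    if customer_id is None:
        # Can't be auto-triaged: send to the analyst queue.
        return {"queue": "manual", "email": email}
    return {
        "queue": "ticket",
        "customer_id": customer_id,
        "data_sources": DATA_MAP.get(customer_id, []),
    }

print(triage_dsar("alice@example.com")["queue"])  # ticket
print(triage_dsar("bob@example.com")["queue"])    # manual
```

The analyst then reviews a pre-populated ticket rather than starting each request from scratch, which is where the speed-up comes from.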
In another example, we’ve seen organizations partner with specialist product companies that use AI/ML capabilities to address third-party risk by searching for changes to a company’s risk profile, negative news about the company or a change in its credit rating. These tools help organizations continuously monitor their many third-party vendors and focus precious resources on assessing those that carry the most risk and need attention.
How do you see AI and ML advancing cybersecurity programs in the future?
We anticipate that AI/ML will be extensively employed in the future for process automation in areas with high transaction volumes. One example is in identity and access management around account provisioning, deprovisioning and role definition.
We’re also seeing promise in using NLP (Natural Language Processing) capability to review third-party risk requests, examine responses to identify appropriate answers, match evidence to the answer and assign a confidence score. This increased efficiency will enable the analyst to focus on those questions that appear to be out of line. Another cool use case will be when AI/ML can identify a user by behavior, such as keystrokes, voice and unique behavior patterns, alleviating the need for passwords. We aren’t there yet, but could be in the not-too-distant future.
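To make the keystroke idea concrete, one speculative approach is to compare a login attempt's inter-key timing intervals against a user's enrolled profile. Real behavioral-biometric systems use far richer features and models; the distance metric, threshold, and timing values below are invented for illustration.

```python
# Speculative keystroke-dynamics sketch: accept a login attempt if its
# inter-key intervals are close enough to the enrolled profile.
def timing_distance(profile, attempt):
    """Mean absolute difference between two inter-key interval vectors (ms)."""
    return sum(abs(p - a) for p, a in zip(profile, attempt)) / len(profile)

def matches(profile, attempt, threshold=25.0):
    """Assumed acceptance threshold of 25 ms average deviation."""
    return timing_distance(profile, attempt) <= threshold

enrolled = [110, 95, 140, 120, 100]   # user's typical intervals, ms
genuine  = [115, 90, 135, 125, 105]
impostor = [60, 200, 80, 190, 55]

print(matches(enrolled, genuine))   # True
print(matches(enrolled, impostor))  # False
```

As the interview notes, this isn't production-ready today; a deployable system would need many more behavioral signals and a tolerance for natural day-to-day variation.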
In general, organizations need to focus initially on process automation and optimization to alleviate the pain on limited resources. In parallel with this, they should continue to build more complex models to identify anomalous behavior and threats and eventually respond accordingly to that activity.