In this Help Net Security podcast, Maurits Lucas, Director of Intelligence Solutions at Intel 471, discusses the benefits of cyber threat intelligence. He also talks about how Intel 471 approaches adversary and malware intelligence.
Based on your experience, what do you see as the most significant challenges related to intelligence collection?
I think there are several, one of the key ones being the amount of time and effort it takes to get to a position where you are able to collect meaningful intelligence. There really are no shortcuts here. In some cases, gaining access and insights into closed ecosystems literally takes years of hard work. So, you need to plan and invest both time and resources well ahead of time to make sure you’re in the right position at the right time to collect intelligence.
The second challenge is to build the infrastructure that allows you to collect at scale, 24 hours a day, seven days a week, all around the globe, all while blending into the background, and then making sense of all the data you have collected to identify relevant developments, connect the dots and draw the right conclusions. There is no substitute for having experienced researchers to make this possible.
Another challenge is in the reporting: bringing structure to reports and extracting and linking entities, so that reports and observations don’t just stand on their own but can be easily linked to previous reporting or sightings of the same or related subjects. In this way, you can see a particular subject in a broader context or easily observe developments over time, which obviously helps with deducing possible future developments.
And the final challenge is taking a structured approach to how you use intelligence: being able to use the “plan, do, measure and adjust” approach, where you identify what it is you want to achieve, plan how you aim to achieve it, execute on that plan, and measure your performance throughout that process. You can then adjust the approach for the next cycle, so that you are continuously improving efficiency and effectiveness, but also able to measure the value.
We’ve actually done a lot of work internally around this subject and have shared the methodology we developed with our customers, some of whom are now using that same methodology for their own internal CTI process.
The challenges around the investment in time, effort and experience required mean that for almost all organizations, regardless of size, it makes sense to engage a specialist party when it comes to collecting intelligence at scale.
How do SOCs benefit from timely cyber threat intelligence? How does it make their work easier?
It allows them to be proactive when it comes to mitigating threats and it gives them invaluable additional context around what they are seeing, and that makes it easier to prioritize, facilitates an effective response, and helps them understand what it is they are looking at. All of this saves time and helps them be more effective at mitigating threats and reducing risks.
CTI allows the SOC to see beyond the perimeter, so they are aware of threats before they hit their infrastructure. That gives the SOC time to prepare and tweak defenses, such as deploying specific monitoring rules or knowing what to be on the lookout for. And when dealing with incidents or alerts, having this additional context allows them to place the individual alert, or alerts, they are dealing with in the wider context of who is behind it, what their aims are, what typical next steps would be, or maybe even what must have gone before for this to occur. All of that makes it easier to determine how to respond.
And when dealing with multiple alerts or incidents, as SOCs do, having this context allows you to prioritize, separating the wheat from the chaff as it were. And that’s critical as many SOCs are resource strained, and so knowing which items to focus on can help with making the most effective use of limited resources.
How does Intel 471 approach adversary and malware intelligence?
For both adversary and malware intelligence, we have invested in a globally dispersed collection infrastructure, which allows us to connect, ingest, analyze, and make available to our researchers and our customers a huge amount of raw data, such as active communications and malware behaviors.
For the research team, we believe in a “boots on the ground” model. Our research teams are located in the same geographical region as the actors they are tracking. We have multiple teams located and grounded around the globe, each with members who have excellent local cultural knowledge as well as native language skills. In this way, they are able to very effectively blend into the background, but can also capture and understand the subtle local cultural references as they come across them. This is not something we think can be achieved from an air-conditioned office 6,000 miles away, no matter how good your language skills may be.
For malware intelligence, we decided to take a different approach than many others, who try to collect and analyze as many samples as they can. The issue is that analyzing a sample gives you a very brief snapshot in time of particular malware behavior. We want near real-time, continuous overviews of malware activity. So instead, we developed an emulation-based approach, where we have emulators for each of the more than 50 families we track.
Those emulators connect to the malware infrastructure, such as command and control servers, through a network of global proxies, so we can appear to come from any country we like. These emulators receive instructions from the command and control infrastructure, then analyze and act upon them: downloading configurations, updates, payloads, et cetera. All of this data is analyzed and fed back into the same automated systems. So, for example, if a dropper malware we are tracking drops an instance of an info-stealing malware family which we also track, the sample downloaded by the dropper emulator is identified and fed to the emulation and analysis framework for that particular infostealer.
That self-feeding setup means the system is great at finding, identifying, and commencing tracking of new malware instances fully automatically, all by itself. Additionally, we feed it with samples we obtain through other sources, so we’re always increasing our coverage. Once we have made a start tracking a particular instance, we capture all of the details in real time and add those to our collection. So, it’s a continuously growing archive of worldwide malware activity.
Security leaders will undoubtedly ask: where is the ROI when it comes to intelligence?
Yes. And that’s a very good question. And rather than coming up with marketing stories or specific examples, we developed our CUGIR methodology to allow us to unlock and demonstrate that ROI to each of our customers. So, using the Cyber Underground General Intelligence Requirements program, to give it its full name, we can work with our customers to identify stakeholders for CTI, those stakeholders’ use cases and the primary intelligence requirements that correspond to those use cases.
Based on those PIRs, we build collection plans, collect and produce intelligence, and map the product, reports or data, back onto their PIRs. This means that each customer can quickly find those products that are relevant for their intelligence requirements, their use cases and their stakeholders. And we can measure how well we are performing in relation to those same stakeholders, use cases and PIRs, adjusting where necessary. And of course, from those same measurements, we can demonstrate the ROI.
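The stakeholder-to-PIR-to-product mapping described here can be sketched as a simple index. All stakeholder names, use cases, and PIR identifiers below are invented for illustration; this is not the actual CUGIR taxonomy.

```python
# Illustrative sketch: mapping intelligence products back onto
# stakeholders, use cases, and PIRs. All names are hypothetical.

# stakeholder -> use case -> list of PIR identifiers
PIR_MAP = {
    "SOC": {"alert triage": ["PIR-01", "PIR-02"]},
    "CISO": {"risk reporting": ["PIR-03"]},
}

def index_report(report_id: str, tagged_pirs: list[str],
                 index: dict[str, list[str]]) -> None:
    """Tag a finished report with the PIRs it answers, so stakeholders
    can later retrieve everything relevant to their requirements."""
    for pir in tagged_pirs:
        index.setdefault(pir, []).append(report_id)

def reports_for(stakeholder: str, use_case: str,
                index: dict[str, list[str]]) -> list[str]:
    """Walk stakeholder -> use case -> PIRs -> products."""
    pirs = PIR_MAP.get(stakeholder, {}).get(use_case, [])
    return [r for pir in pirs for r in index.get(pir, [])]
```

Because every product is tagged with the PIRs it answers, counting products (or their usage) per PIR over a cycle is also what makes performance measurable, which is the ROI argument above.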
It is a very powerful methodology and some of our customers are now actually using it internally themselves because the challenges that we have as a CTI vendor are the same challenges that many CTI teams have inside organizations. Who are my stakeholders? What are their requirements? How do I meet those requirements? And how do I demonstrate the ROI?
Can you tell us more about TITAN? What differentiates it from other intelligence platforms?
TITAN is our intelligence platform. All of the data and reporting we share with our customers goes through TITAN. It’s our single source of truth: whether you are looking at it through the GUI, connecting to TITAN through our API, or using off-the-shelf integrations, you’re always looking at the same data. And at the same time, you’re looking at all of our data. We share as much data as we can with our customers, both our intelligence and the raw data that underpins it. So, you can go back and check our research if you want, or do your own research, using all of the data that we collect. We structure our data and our reports so that items are linked, and it becomes easy to pivot from one aspect to another.
The final feature worth mentioning is the alerting, or watchers as we call them. You can turn any search into a watcher so that TITAN will alert you if new results come in matching your query. So, for instance, you can be alerted if we publish a new report around a particular intelligence requirement, but you can also be alerted if a particular actor pops up again on a particular forum or someone uses certain keywords in a conversation, for instance.