Security operations and the evolving landscape of threat intelligence

In this podcast, recorded at RSA Conference 2020, we’re joined by the ThreatQuotient team to talk about a threat-centric approach to security operations, the evolution of threat intelligence, and the issues surrounding it.


Our guests are Chris Jacob, VP of Threat Intelligence Engineering; Michel Huffaker, Director of Threat Intelligence; and Ryan Trost, CTO, all at ThreatQuotient.

Here’s a transcript of the podcast for your convenience.

We are here today with the ThreatQuotient team to talk about all things security operations, the human element of cybersecurity, and the evolving landscape of threat intelligence. I am joined by Ryan Trost, Chris Jacob and Michel Huffaker. Will you all please introduce yourselves?

Ryan Trost, co-founder and CTO at ThreatQuotient. I’ve been kind of a SOC dweller for most of my career, from system administration up to security analyst, up to incident response and then SOC manager, most recently at General Dynamics.

Michel Huffaker, I’m the Director of Threat Intelligence at ThreatQuotient. I started my career in the Air Force and kind of moved up through government, eventually landing in the private sector at iSIGHT Partners for five years, and then ultimately came to ThreatQuotient.

I’m Chris Jacob. I’m the Vice President of Threat Intelligence Engineering. I’ve been on the cyber side of things for about the last five or six years; before that, I grew up more on the infosec side of the world, spending most of my time at Sourcefire.

The first question for today’s discussion is about customer challenges. I know at ThreatQuotient you hear a lot about, and this is a direct quote I believe, your “customers struggle with ingesting all the stuff”. Let’s dissect this a little bit. What is the stuff that these customers are referring to that they’re challenged by?

Ryan: From my experience, threat intelligence teams that didn’t come through the military and didn’t have formal training ultimately ended up being pack rats, basically getting their hands on anything and everything they could. That has its benefits, but it also leaves a lot of deep, dark skeletons from a collection standpoint: how do you sort through it all?

And I think teams have to really set goals on “this is my objective, this is what I want to do, this is the data that I need to do it”. You start to really look at data from a “nice to have” versus a “must have” perspective. And then as you meet those objectives, you can widen that net, as they say, versus just trying to boil the ocean, which gets teams in lots and lots of trouble.

Michel: Yeah, I agree. There are a lot of data hoarders. People just wanted to have as much information as they could, but it’s very difficult to operationalize that. I think you still need as much information as you can get, but it needs to be the right information. As the industry has matured over time, people are really starting to understand that you still have to deal with a lot of data, but if you have the relevant data, the right data, you can actually take action on it.

Chris: Unsurprisingly, I agree with both of these guys. I think it’s not a bad thing to have all the data, as long as you can get to the data you need easily, as long as it’s not buried. It should be finding the needle in the haystack, not figuring out which haystack to even look in. So as long as you can get to the data quickly, having it all can be good in some instances, because depending on the tools that you’re using to operationalize the data, if you’re using SIEMs for instance, you can cast a much wider net. They can handle large amounts of data.

But if you’re dealing person to person, or you’re dealing with tools like firewalls, things that have a lower threshold for the amount of data they can handle, you need to make sure that you’re sending the right data there and using that lens. Capture it all, but make sure that what’s really important to your organization bubbles up to the top.
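To make that idea concrete, here is a minimal sketch of one way to keep every ingested indicator in a repository while pushing only the highest-priority entries to a capacity-limited control such as a firewall blocklist. The indicator fields, scoring weights, and example values are hypothetical, not ThreatQ’s data model or any vendor’s API.

```python
# Minimal sketch: keep everything, but export only top-priority indicators
# to a device that can hold a limited number of entries. All fields and
# weights are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Indicator:
    value: str              # e.g. an IP address or domain (illustrative)
    source_confidence: int  # 0-100: how much the feed is trusted
    relevance: int          # 0-100: fit with our industry and assets
    sightings: int          # times seen in our own telemetry


def priority(ind: Indicator) -> float:
    # Hypothetical weighting: internal sightings count heavily, because
    # "who has actually been here before" matters most.
    return (0.4 * ind.source_confidence
            + 0.3 * ind.relevance
            + 0.3 * min(ind.sightings, 10) * 10)


def export_blocklist(indicators: list[Indicator], capacity: int) -> list[str]:
    """Return only the highest-priority values, sized to what the device can hold."""
    ranked = sorted(indicators, key=priority, reverse=True)
    return [ind.value for ind in ranked[:capacity]]


feed = [
    Indicator("203.0.113.7", 90, 80, 3),
    Indicator("198.51.100.2", 40, 20, 0),
    Indicator("192.0.2.55", 70, 90, 1),
]
print(export_blocklist(feed, capacity=2))  # -> ['203.0.113.7', '192.0.2.55']
```

In practice the weights would be tuned per organization; the point is simply that the full data set stays searchable while the lower-capacity tools only receive what matters most.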

So, all of these points remind me a lot of the highly debated “which came first, the chicken or the egg” discussion as it relates to threat intelligence. So, when it comes to security operations, which should a company be implementing first, the threat intelligence feeds or an actual platform? Or does that even matter?

Ryan: Optimally, both. However, teams have to have somewhat of a strategy and a roadmap for it. In previous lives we had the same build-it-or-buy-it decision, and you really need to create those milestones, or that justification, to get the approval to buy certain things and certain tools. So, a lot of teams ultimately focused on “okay, let’s start with open source”. It’s free, it’s widely available, there are so many open source feeds out there, and then they have to figure out where to put all that data.

Early analysts were putting it just into a spreadsheet, so every analyst had their own spreadsheet, and there’s some benefit in that. However, you quickly reach the ceiling of value, and hopefully you hit a couple of milestones that you can really get traction on with the executives, and then escalate to buying something. In conclusion, it’s ultimately both, but it kind of depends on the team, the logistics, and so forth.
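As a rough illustration of the step up from per-analyst spreadsheets, here is a minimal sketch that pulls a hypothetical open source CSV feed and maps each row onto one shared indicator record. The feed URL and column names are invented; every real feed would need its own mapping.

```python
# Minimal sketch of normalizing one open source feed into a common record,
# instead of each analyst keeping their own spreadsheet. URL and columns
# are made up for illustration.
import csv
import io
import urllib.request

FEED_URL = "https://example.org/open-source-feed.csv"  # hypothetical feed


def fetch_and_normalize(url: str) -> list[dict]:
    """Pull a CSV feed and map each row onto a shared indicator schema."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        records.append({
            "value": row.get("indicator", "").strip(),
            "type": row.get("type", "unknown"),
            "source": "example-open-source-feed",
            "first_seen": row.get("first_seen"),
        })
    return records
```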

Chris: I think we focus so much on incoming information, and on that being the purpose for having a platform, but I think we need to spend some more time talking about the delivery of it. The reason a platform like this is so important isn’t just for the analysts to have a tool to store things in and to work in, but ultimately for them to deliver that product, that intel that they’ve refined and polished up.

How do they get that to the security teams? That’s an important part of the platform that, I think, gets overlooked quite a bit. In my opinion, you have to start with a platform. Obviously, there are intel feeds out there, ranging from open source all the way up to very expensive commercial feeds. But you have to have the infrastructure in place, number one, for the analysts to be able to work in, but also, ultimately, to be able to deliver that finished product to their customers, which would be the security teams.

Michel: I agree that bringing external information and intelligence in is important, but at the same time, what’s often overlooked is the wealth of information you have internally. If you have the right tools, the right platforms to pull that kind of metadata out of your own security stack, that’s the best way to understand who’s actually coming after you, who the people are who’ve been there before.

Like Ryan was saying, if you don’t have the budget tolerance to do both and you bring the platform in first, then you can at least see what’s happened in your organization in the past, and make predictions based on that. You kind of create your own feed at the same time that you bring the platform in.

Michel, I heard you say “knowing who’s coming after you”. On that note, attribution has always been a hot topic related to threat intelligence. To some of us, it’s more important to know the motivation behind an attack than to know exactly who that attacker is. What, between the three of you, are your thoughts on this, and how does the theme of the human element tie into the topic of attribution?

Michel: Attribution matters to some people. There are some organizations that have the maturity to care, and I say that because, in the end, it doesn’t really matter. If you’re head down and you’re looking at your organization, trying to figure out who’s coming after you, that’s less important than what they’re after and what their motivations are.

There are some benefits to it, in the sense of an internal marketing effort. If you could put a scary face or a scary mascot on top of something as a threat intel team, it gives you the ability to communicate internally really well. You can say scary guy one, two, three is after us, and that means something to your C-suite.

But on the whole, there’s a huge level of effort for very little gain, in terms of just finding out who that is. From the human perspective, it’s easy for us in the industry to batch all these actions together under one adversary group. But I think it’s important to remember these are humans on the other side, right? It’s humans fighting humans in this weird cyberspace.

If you think about it in that sense, it gives you a little bit of a leg up in understanding operational patterns and things like that. It’s important to remember that they’re actually people.

Ryan: I completely agree with Michel. I think adversaries are just human by nature, and humans are creatures of habit. A lot of the adversaries, they’ll become experts in one attack vector, maybe one or two, and they’ll stick with that because that’s benefited them and that’s what they know.

The more the defenders know about that person, that human element, and what they gravitate towards, the easier it is to defend against them. So, I think it’s very important to know who it is. Maybe not formal attribution; unless you’re prosecuting, it doesn’t really make sense in that capacity. But again, it helps you organize your defense and organize your tools and technologies to stop the adversary left of boom.

Chris: I think, to that point, exactly who it is doesn’t really matter. What matters is being able to put a box around it, to be able to say: “This is the container I’m using to track the tactics and techniques that I see here”. That allows you to test your theories: “This looks familiar to me. I think it’s this adversary, so let me deploy these countermeasures to defend.” And also to test whether this is in fact the same group, the same organization, or someone different.

I think the vast majority of people in the commercial world aren’t directly facing named adversaries. That said, you shouldn’t minimize it. Again, it’s good to be able to group things together so that you can recognize the patterns and know how to protect your organization from specific types of threats.

Pulling on that thread a little bit more: when we talk about a security incident as it’s unfolding, who is responsible for coordinating actions within a company? Is this more of a human response or an automated response from technology, or is it both? Putting ThreatQ into the conversation at this point, can you walk us through what that process might look like internally? How does a tool like ThreatQ Investigations play into this? Who is responsible for those security incidents as they’re happening?

Ryan: In my experience, it ranges drastically based on the team, the budget, the technologies involved, and so on. In two previous roles, the incident or event was largely triggered by a SIEM correlation or some type of hunting expedition. The technology raises the red flag that something is suspicious.

That’s ultimately going to trigger an analyst to really look at it and dive into information gathering, to see if their spidey sense is triggered, or potentially an automated playbook will gather that information, whether it’s snapshotting the host and running it through a couple of smoke tests, and so forth.

Ultimately, an analyst is going to see it and review the information to determine whether this event or alert needs to be escalated to an incident. Once that handoff happens, the incident response team usually gets involved, and it’s run by a team lead who owns it for the life cycle of the case, and so forth. But again, it ranges drastically, whether your team is two people or 50, geographically spread out or not; it really, unfortunately, is all over the place.


Chris: The better question to dig into there is how this is all coordinated, right? Because there are multiple teams involved, and those teams don’t necessarily communicate well with each other. What helps is having a platform that allows those teams to just perform their work, but captures all that information so that all of them are singing off the same sheet of music.

If the SOC is going through SIEM matches and adding color, adding information, then the incident response team has that information at their fingertips through the platform and its integrations. Because ultimately, it’s all about context. Team A might have a piece of information that doesn’t mean anything to them, so they don’t think to share it with the team down the hall that’s working the same incident. But if the team down the hall had that little piece of information, it would change their view of the incident altogether.

It’s about really coordinating across the teams because, to the human element you mentioned, people don’t communicate with each other well. So if we can do it machine to machine, it works out a lot better. And then, to get into ThreatQ Investigations (TQI), that is a chance for all those teams to come back together after each one has worked their piece of the incident separately: let’s get together and build out the evidence map of how we’re going through the incident, and uncover those little pieces that we may not see if we work in our own silos.

Ryan: And Chris is absolutely right. When you get multiple teams working together, and this is where IR tabletop exercises really are critical for a team’s success, a lot of times the IR team is coordinating it, but they don’t have access to the financial databases. So they need to go to the financial team, or they don’t have access to certain apps, or certain things require you to reach out to a completely different department that isn’t security focused and ask for help. And usually they’re completely open to it, especially when it’s wrapped around an incident. It’s essential.

Michel: And there’s a pacing element to that as well. All these teams work at different paces, right? Think of the difference between emergency responders from a fire perspective: there are the people that come in and put the fire out, and then there are the people that do the investigation to see what caused it. Those are two drastically different paces addressing two drastically different problems that ultimately come together.

When you’re talking about who handles things, having a place where people can work at their own pace, but still benefit from each other’s work at the pace that’s necessary for their specific job function, is critical. Because if you let the investigation go on too long from the threat intelligence perspective, you lose the sense of urgency that gets you cooperation from the other business units. So, you need the people who can go out and tactically respond, and then those who come in afterwards and do the overarching, in-depth investigation.

What I’m hearing you all talk about is really how security operations help internally orchestrate all of the technology, all of the people, and ultimately help an organization make better business decisions. So, changing gears a bit, let’s talk about another important piece of that, which is most security teams have to do some sort of reporting. How has this evolved over the years? Where is the process of reporting metrics to executive leadership today? And how important is the ability to generate metrics from threat intelligence tools that organizations are using?

Ryan: From my experience, reporting is a huge benefit to an organization or a tool when it’s done correctly. I think a decade ago, reporting was purely quantitative. How many alerts, how many incidents, how many investigations, how many vulnerabilities, so on and so forth, and that was it. And it only got to the director level, it never went up.

However, with more focus on security and more “okay, why, and what next?”, a lot of reporting has matured. You still get the traditional quantitative stuff, but now it’s “okay, let’s break down those numbers of alerts” based on the attack vector or on adversary attribution. So, it’s a lot more trending versus a point in time. And that’s making it up to the C-level, if not the board of directors level. And that’s huge.

For a lot of security teams, historically, reporting wasn’t a primary focus. I was running a government SOC, and we literally had two FTEs dedicated to reporting, to the point where the reports were beautiful brochures. But that’s what the government wanted. They wanted that sexy eye candy, the charts in the reports, the infographics and stuff like that. That’s what spoke to them.

I think a lot more teams need that little bolster, something that escalates their visibility and really shows the larger organization “this is what I’ve done for you lately, this is how I’m helping, this is what I’m predicting”, and hopefully hits a couple of those milestones.

Chris: Reports, in my mind, fall into two different buckets. On one side you have the more human-consumable reports, where you’re writing about a trend, maybe tracking a specific adversary or TTPs. But the other side that I think can be very interesting is reporting on the efficacy of the tools.

It’s interesting to do a before-and-after report based on implementing a threat intelligence platform. “What effect am I having on the efficacy of my security tools? I had X number of alerts before I started to apply this threat intelligence. Now do I have Y? Did it get better? Did it get worse?” That’s an interesting side of reporting that I don’t think people spend a lot of time thinking about.
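As a rough sketch of that before-and-after comparison, the snippet below splits alert dispositions around the date a threat intelligence platform hypothetically went live. The records, dates, and field names are invented purely for illustration.

```python
# Rough sketch of a before/after efficacy comparison: count alerts by
# disposition on either side of a cutover date. Data is invented.
from collections import Counter
from datetime import date

CUTOVER = date(2020, 2, 1)  # hypothetical go-live date for the platform

alerts = [
    (date(2020, 1, 10), "false_positive"),
    (date(2020, 1, 15), "false_positive"),
    (date(2020, 1, 22), "true_positive"),
    (date(2020, 3, 12), "true_positive"),
    (date(2020, 3, 20), "false_positive"),
]

before = Counter(disp for when, disp in alerts if when < CUTOVER)
after = Counter(disp for when, disp in alerts if when >= CUTOVER)

print("Before:", dict(before), "total", sum(before.values()))
print("After: ", dict(after), "total", sum(after.values()))
```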

Michel: Going back to what Ryan was saying a little bit, the curse of well-done security, just like well-done intelligence, is that you don’t hear anything about it. If everything is effective, there’s nothing to say. It’s all quiet, everything’s good. And it’s expensive to implement a really well-done security operations team, including threat intelligence.

For a long time there were C-suites questioning this huge investment without any sort of feedback on what was happening. And I think that view of security as a cost center has changed a lot, with people actually being able to say: “Look at the loss that we prevented, had this incident occurred within our network. It didn’t, because we have these platforms, we have this intelligence in play, but look at what it would have done. Look at what we saved you.”

I think changing it from a cost center to a loss prevention perspective has really helped. And that’s all built around qualitative metrics of how effective your threat intelligence program is, how effective your tools are, and how well everything is operationalized and working together.

Thank you all so much for the discussion today. Before we wrap up, is there anything else that you would like to add or share with the listeners?

Chris: If you’re interested in learning more, we’ve broken down different use cases for different teams and have that all written up on our website. Whether you live in the SOC or you’re an incident response person, check out the different use cases, write-ups, and videos that we have for each of those personas.
