The dangers of bad cyber threat intelligence programs

Carl Herberger, VP of Security Solutions at Radware

I love a surprise ending in a movie. Whether I’m watching drama, action, or sci-fi, there’s nothing better than a plot twist you can’t predict.

At work, however, I feel the exact opposite. Movies are one thing, but surprise endings in the real world are rarely as welcome or harmless. Much has been written about cyber threat intelligence (CTI): proposed standards for sharing the information (e.g., TAXII and STIX), what the information should look like, what roles governments and private industry play as key stakeholders, how actionable the intelligence should be, and in what formats it should be published. In the U.S., the Federal Financial Institutions Examination Council (FFIEC) is asking member financial institutions to seriously consider adding CTI as a key attribute of their overall risk management strategies.
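For context, an indicator shared under these standards is a small structured object. The sketch below is loosely modeled on the JSON-based STIX 2.x indicator format; the `id`, timestamps, and IP address are illustrative placeholders, not values from any real feed.

```python
import json

# Simplified sketch of a STIX 2.x-style indicator object.
# Field names follow the STIX 2.1 spec; the id, timestamps, and
# pattern value are illustrative placeholders.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",
    "created": "2016-01-01T00:00:00.000Z",
    "modified": "2016-01-01T00:00:00.000Z",
    "indicator_types": ["malicious-activity"],
    "pattern": "[ipv4-addr:value = '203.0.113.7']",
    "pattern_type": "stix",
    "valid_from": "2016-01-01T00:00:00Z",
}

print(json.dumps(indicator, indent=2))
```

A TAXII server would carry objects like this in collections; note that nothing in the object itself tells you whether the source can be trusted, which is exactly the gap discussed below.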

While I largely agree with the need for academic solutions, in many cases these programs introduce more questions and unknowns into an organization’s security environment. To date, there hasn’t been enough discussion about the key success criteria of a CTI program or enough documentation about its potential risks.

I’ll attempt to remedy that below by outlining four key areas where a CTI program can actually harm your organization by exposing vulnerabilities, a surprise ending neither welcome nor easily remedied.

CTI vulnerability 1: Failure to establish bona fides

Many of my security brethren believe deeply in sharing all available information, but they don’t seem to grasp the potential threat of doing so. We don’t need to look far for a powerful analogy: the lessons honed over centuries by nation-state intelligence-gathering apparatuses. These programs are effective because they establish how trustworthy the information is. Nation-states have learned, often the hard way, to assess the worthiness of the intelligence they gather and to tag the data based on its source and the methods used to collect it.

In fact, the military definition of bona fides is:

1. The lack of fraud or deceit: a determination that a person is who he/she says he/she is.
2. In personnel recovery, the use of verbal or visual communication by individuals who are unknown to one another, to establish their authenticity, sincerity, honesty, and truthfulness.

The military and intelligence community rely on bona fides because trusting information simply because we want to believe it is fraught with potential failure. As a result, most security organizations have a process to evaluate the bona fides of the information they collect, protecting themselves from low-quality or deliberately false information. This system requires trust, and it has produced much of the classified information in government today.

We also hear a lot of Pollyannaish, almost innocent cries for the sharing of data, as if openly sharing information will carry us to a higher state of awareness and a higher level of security. History begs to differ.

Free information-sharing services would likely be the least reliable. Meanwhile, organizations that could afford to pay for high-quality, highly vetted, and actionable intelligence would obtain the best CTI in the industry.
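The vetting process described above can be sketched as a simple source-grading gate, loosely based on the Admiralty System used by military intelligence (source reliability graded A to F, information credibility graded 1 to 6). The numeric scores and the acceptance threshold below are illustrative assumptions, not part of any standard.

```python
# Illustrative sketch: grade incoming intelligence by its bona fides before
# acting on it. Grades loosely follow the Admiralty System: source reliability
# A-F (A = completely reliable, F = cannot be judged) and information
# credibility 1-6 (1 = confirmed, 6 = cannot be judged).
RELIABILITY_RANK = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "F": 0}
CREDIBILITY_RANK = {1: 5, 2: 4, 3: 3, 4: 2, 5: 1, 6: 0}

def vet_indicator(reliability: str, credibility: int, min_score: int = 6) -> bool:
    """Accept an indicator only if its combined grade clears the threshold.

    The additive score and min_score threshold are assumptions for
    illustration; a real program would tune its own acceptance policy.
    """
    score = RELIABILITY_RANK[reliability] + CREDIBILITY_RANK[credibility]
    return score >= min_score

# A vetted commercial feed graded "B2" passes; an ungraded paste ("F6") does not.
print(vet_indicator("B", 2))  # True
print(vet_indicator("F", 6))  # False
```

The point is not the arithmetic but the gate itself: every indicator carries a grade derived from its source and collection method, and ungraded data never reaches production controls.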

CTI vulnerability 2: Ethno-centric information and not enough data points

There are, of course, other fallacious assumptions behind a highly functioning CTI program:

  • Government-provided data is trustworthy.
  • Government-provided data is good enough and covers enough.
  • There is no risk to government-sourced data.

Data derived by one nation-state is skewed toward that state and qualified against the threats that state faces. As a result, the information may serve some companies well, but not all. Why?

The closer a company works with one nation-state, the less likely others are to share with it, since they don’t want to risk losing their own data to another nation-state through that company. For example, if a company works closely with the U.S., it stands to reason that intelligence from China and Russia will be hard to come by. Even countries like Switzerland, Canada, Germany, and France have publicly stated reservations about information-sharing with the U.S.

Data from a nation-state comes with numerous restrictions and is mired in laws governing how information may be gathered and released. As a result, the data may be non-actionable, may have been altered in some way to benefit the nation-state, or may not be available at all. In any of these scenarios, the threat to the company comes from relying too heavily or too centrally on government-driven data.

CTI vulnerability 3: Failure to establish “backout” criteria

Imagine a bad guy posing as a good guy who shares “hot” information with a security community and advocates certain actions: onboarding certain signatures, changing configurations, automating a particular defense, and so on. If the ruse succeeds, companies end up implementing security measures that, instead of protecting them against the supposed threats, play right into the attacker’s hands. The perpetrators now have a wonderful new way to quickly open up an industry or set of companies. Without predefined backout criteria (the conditions under which a CTI-driven change gets reversed), an organization has no fast way to undo the damage.
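One way to limit this exposure is to attach explicit backout criteria to every CTI-driven change when it is deployed. The sketch below is illustrative only; the field names, the seven-day expiry, and the false-positive threshold are assumptions, not part of any CTI standard.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative sketch: every CTI-driven change carries its own backout
# criteria, so a bad or malicious indicator can be reversed automatically.
# The expiry window and false-positive threshold are assumed values.
@dataclass
class DeployedRule:
    rule_id: str
    expires_at: datetime              # time-based backout: re-vet or remove
    max_false_positives: int = 10     # impact-based backout threshold
    false_positives: int = 0          # observed blocks of legitimate traffic

    def should_back_out(self, now: datetime) -> bool:
        """True when either backout criterion has been met."""
        return (now >= self.expires_at
                or self.false_positives >= self.max_false_positives)

now = datetime(2016, 1, 1, tzinfo=timezone.utc)
rule = DeployedRule("sig-001", expires_at=now + timedelta(days=7))
rule.false_positives = 12  # the "hot" signature starts blocking real traffic
print(rule.should_back_out(now))  # True: reverse the change before more damage
```

The design choice worth noting is that the backout conditions are decided at onboarding time, before anyone is under pressure, rather than improvised after the community realizes the shared intelligence was poisoned.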

CTI vulnerability 4: The system is too automated (or too manual)

A proper CTI program needs a key decider to determine whether the information is relevant for your organization, not merely for an industry or a particular technology. This necessary step is often overlooked when the industry discusses fast implementations: either the system is so automated that it brushes past the question, or, conversely, it builds such complicated change-management processes around every change that it negates the whole point of actionable, fast-moving CTI.

In the end, the information security function has taken on more roles and responsibilities, including intelligence gathering and risk weighting. These new functions resemble modern-day war-fighting functions, and we would all do well to learn the deep historical lessons of nation-state intelligence organizations before standing up well-heeled intelligence or CTI functions of our own.

Modern-day CTI must evolve to include a key attribute: a process to understand what information needs to be harvested, such as:

  • Operational
  • Technical – tools, techniques, types
  • Source / attacker profile
  • Destination / victim profile
  • Trajectory data – in transit
  • Motivation

As you can tell from the list above, today’s way of looking at the data sharing conundrum is flawed and needs to evolve. You must evolve too! That way, you can better avoid a surprise ending.
