Behavioural profiling: Spotting the signs of cyber attacks and misuse

Behavioural profiling is increasingly recognised as a new level of protection against cyber attacks and systems abuse, offering the potential to pick out new and unknown attacks, or to spot activities that other controls would miss. The basic premise is to establish a sense of how the system and its users normally behave, and thereby provide a basis to protect against compromise by watching out for unwanted activities.

The fundamental value of profiling is that while we may not know who the attackers and cybercriminals are, we know what they’re likely to be doing. Similarly, we ought to be able to develop a picture of what our legitimate users should normally be doing, and then pick out things that appear unusual. While we cannot monitor and inspect everything manually, automating the process enables the system to keep a watch on itself.

Developing an understanding of behaviour is not a new idea in security terms, and it already has uses in a variety of related contexts. For example, behavioural monitoring of some form is a long-standing technique in the context of Intrusion Detection Systems (IDS).

Similarly, a link can be drawn to the use of heuristic analysis in malware detection, where unknown code is assessed to determine whether it performs malware-like actions when executed (i.e. essentially looking to see whether it behaves in ways that have been established by profiling previously known malware examples). In addition, profiling does not just have a role to play in combating external attackers: it can also offer a means to identify insider threats, such as fraudulent behaviour and other misuse of privileges.

The IDS context provides a good example of the contrast between profiling ‘normal’ activity versus spotting the signs of known bad behaviour (termed anomaly-based and misuse-based detection respectively). Both essentially rely upon monitoring current activity in order to spot potential attacks, but they approach the task in different ways. With misuse-based detection, the attacker behaviour has essentially been profiled in advance and then codified as signatures that attempt to describe attacks, misuse and other unwanted activity.
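
As a simple illustration, a misuse-based detector reduces to matching activity against a library of known-bad patterns. The sketch below uses hypothetical signatures and a made-up log format, chosen purely to convey the mechanism rather than to reflect any real IDS ruleset:

```python
# Minimal misuse-based (signature) detection: match log entries against
# patterns describing known bad behaviour. Signatures are illustrative.
import re

SIGNATURES = {
    "sql_injection": re.compile(r"('|%27)\s*(OR|AND)\s+\d+=\d+", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./(\.\./)+"),
    "brute_force_marker": re.compile(r"Failed password for .+ from"),
}

def match_signatures(log_line: str) -> list[str]:
    """Return the names of any signatures that the log line triggers."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(log_line)]

if __name__ == "__main__":
    line = "GET /search?q=' OR 1=1 -- HTTP/1.1"
    print(match_signatures(line))  # ['sql_injection']
```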

Meanwhile, anomaly detection attempts to characterise normal behaviour, and then flags significant departures on the basis that they may denote something bad and are worthy of further examination. The latter option is more explicitly linked to building a profile of behaviour, and is also referred to as behaviour-based detection.
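
By contrast, an anomaly-based approach first characterises normality and then measures departures from it. A minimal sketch, assuming a single numeric indicator (logins per hour) and an illustrative three-standard-deviation threshold, might look as follows:

```python
# Minimal anomaly-based detection: profile 'normal' as the mean and
# standard deviation of past activity, then flag large departures.
from statistics import mean, stdev

def build_profile(history: list[float]) -> tuple[float, float]:
    """Characterise normal behaviour from historical observations."""
    return mean(history), stdev(history)

def is_anomalous(observation: float, profile: tuple[float, float],
                 threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from normal."""
    mu, sigma = profile
    return abs(observation - mu) > threshold * sigma

logins_per_hour = [4, 6, 5, 7, 5, 6, 4, 5]  # historical baseline
profile = build_profile(logins_per_hour)
print(is_anomalous(5, profile))   # False: within normal variation
print(is_anomalous(40, profile))  # True: worthy of further examination
```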

Profiling can also be applied at different levels – for example, looking at the behaviour of the system, the network traffic, and the users (which may be done on an individual basis or in groups). Looking deeper, there are also different indicators that may provide a basis for profiling, and various factors can be used to distinguish between users and the types of activity they may be engaging in. For example, in prior research at the University of Plymouth, we have looked at typing rhythms, application usage, linguistic characteristics, and network traffic patterns as elements that can be profiled.
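
To make one of these indicators concrete, typing rhythm can be profiled by enrolling a template of a user's typical inter-keystroke timings and comparing later samples against it. The figures and tolerance below are illustrative rather than drawn from the Plymouth studies:

```python
# A minimal keystroke-dynamics check: compare a sample of inter-key
# timings (ms) against an enrolled template. Values are illustrative.
def mean_abs_difference(sample: list[float], template: list[float]) -> float:
    """Average absolute difference between corresponding key-to-key intervals."""
    return sum(abs(s - t) for s, t in zip(sample, template)) / len(template)

enrolled = [110, 95, 140, 120, 105]  # template built from enrolment sessions
genuine  = [115, 90, 135, 125, 100]
impostor = [160, 150, 80, 200, 60]

TOLERANCE = 20.0  # ms; in practice tuned per deployment
print(mean_abs_difference(genuine, enrolled) <= TOLERANCE)   # True: fits the profile
print(mean_abs_difference(impostor, enrolled) <= TOLERANCE)  # False: flag for review
```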

Taking the last one as an example, in recent work funded through the Engineering and Physical Sciences Research Council, we profiled network traffic meta-data as a means of identifying users and their application-level interactions, yielding recognition rates of 86%. Applied in an investigative context, this leads to an enormous reduction in the volume of traffic that an investigator needs to analyse, with the use of profiling enabling attention to be directed towards a particular suspect, by disregarding unrelated traffic and focusing upon the rest.
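
In broad terms, this style of recognition can be framed as supervised classification over flow metadata. The following sketch uses synthetic data and a scikit-learn classifier as stand-ins; the features, figures and model are assumptions for illustration, not the published method:

```python
# Identifying users from traffic metadata, framed as classification.
# Synthetic flows for two hypothetical users, each with a different
# typical pattern of [mean packet size, flow duration, flows per hour].
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
user_a = rng.normal([600, 45, 20], [50, 10, 5], size=(150, 3))
user_b = rng.normal([900, 15, 60], [50, 10, 5], size=(150, 3))
X = np.vstack([user_a, user_b])
y = np.array([0] * 150 + [1] * 150)  # label: which user generated the flow

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Recognition rate on held-out flows: {clf.score(X_test, y_test):.0%}")
```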

The ability to spot an impostor provides a frontline defence against unauthorised access. However, a further option is to apply profiling to our own users, either individually (i.e. how does this person normally behave?) or as a group (i.e. how do staff in this role or department usually behave?). Application in this context would be particularly relevant to spotting insider threats, where users have the rights to access whatever they are accessing, but are using those rights in an inappropriate way.
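
A simple way to operationalise the group-level view is to compare each user's recent activity against a baseline built from their peers. The sketch below, with hypothetical users, figures and an illustrative 95th-percentile rule, shows the idea:

```python
# Group-level profiling for insider-threat spotting: flag users whose
# recent activity exceeds what their peers normally do. Data is invented.
import numpy as np

department_access = {  # sensitive records accessed per day, per user
    "alice": [12, 15, 11, 14],
    "bob":   [10, 13, 12, 11],
    "carol": [14, 12, 90, 85],  # sudden surge relative to peers
}

for user, counts in department_access.items():
    # Build the baseline from everyone *except* the user under test.
    peer_counts = np.concatenate(
        [c for u, c in department_access.items() if u != user])
    ceiling = np.percentile(peer_counts, 95)
    recent = np.mean(counts[-2:])
    if recent > ceiling:
        print(f"{user}: recent rate {recent:.0f} exceeds group norm ({ceiling:.0f})")
```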

Of course, the idea of being monitored may not be entirely palatable to our own users. However, behavioural monitoring is arguably an extension of the type of manager-level observation of staff that is regularly advocated in standard security guidance. The difference is the automation that allows it to scale up, and enables the profiles to be identified in the first place. At the implementation level, this can be achieved via machine learning and other artificial intelligence and statistical techniques for data analysis. These in turn enable pattern identification and classification, often capturing characteristics that would be too subtle for human observation to identify.
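
As an indication of what such techniques can pick up, unsupervised methods can flag sessions that look unremarkable on any single feature but are jointly out of character. The sketch below uses scikit-learn's IsolationForest on synthetic session features; the features, figures and contamination setting are all assumptions for illustration:

```python
# An unsupervised profiler: learn what typical sessions look like, then
# isolate the ones that deviate. Feature vectors are synthetic, covering
# [session length (min), files opened, out-of-hours fraction].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
normal_sessions = rng.normal([45, 12, 0.05], [10, 4, 0.03], size=(300, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A session that is only mildly unusual on each feature individually,
# but jointly out of character, is still isolated by the model.
odd_session = np.array([[70, 25, 0.4]])
print(model.predict(odd_session))  # [-1] marks it as anomalous
```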

Unfortunately, while there are many opportunities to use it, behavioural profiling is not perfect in terms of accuracy. As with any fuzzy matching process, there is the potential for false positives and false negatives. Clearly neither is desirable, with the former leading to unnecessary challenges, whereas the latter means we actually miss a security-relevant incident. The potential for false positives means that a measured approach is needed in terms of how to respond to things that are flagged; for example, using the ‘detection’ as a prompt for further investigation rather than imposing an immediate sanction or restriction on the user concerned.
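
The trade-off is easy to see in miniature: moving a detection threshold trades false alarms against missed attacks. The scores below are invented purely to demonstrate the effect:

```python
# Illustrating the false-positive / false-negative trade-off, assuming a
# profiler scores each event (higher = more suspicious). Values are toy data.
benign_scores = [0.1, 0.2, 0.3, 0.35, 0.4, 0.55]  # legitimate activity
attack_scores = [0.45, 0.58, 0.7, 0.85, 0.9]      # known bad activity

for threshold in (0.4, 0.5, 0.6):
    false_positives = sum(s >= threshold for s in benign_scores)
    false_negatives = sum(s < threshold for s in attack_scores)
    print(f"threshold {threshold}: {false_positives} false alarms, "
          f"{false_negatives} missed attacks")
```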

Nonetheless, if accepted within its limitations, behavioural profiling can still make a valuable contribution by highlighting the activities of most interest, urgency or priority. As such, it can clearly help in focusing potentially limited security investigation or response resources towards where they are most likely to be needed.

Steven Furnell is a senior member of the Institute of Electrical and Electronics Engineers (IEEE).
