The anatomy of fake news: Rise of the bots

The spread of misinformation has become such a mainstream topic that 'Twitter bot' is now firmly established in the modern lexicon. Yet whilst the term is well known, the development and inner workings of Twitter bots are arguably far less well understood.

Indeed, even identifying which accounts are bots is considerably difficult, and with good reason: their objective of appearing as legitimate participants requires constant refinement. This continuous innovation from botnet operators is necessary as social media companies get better at identifying automated accounts.

A recent study conducted by SafeGuard Cyber analysed the impact and techniques of such bots, in particular those attributed to Russian disinformation campaigns on Twitter. The research challenges the popular concept of monolithic bot armies: of the 320,000 accounts identified, the bots were divided into thematic categories presenting both sides of a given story.

[Image: fake news bots]

These bot battalions were activated wherever insertion or manipulation of a particular message was needed, but perhaps more fascinating is their effectiveness. The assumption that such accounts simply flood a conversation with a single viewpoint does not necessarily hold.

Whilst the bots often take both sides of a story to spur debate, their effectiveness is remarkable. By leveraging a system of amplification nodes, and by testing messaging (including hashtags) to determine success rates, the botnet operators demonstrate a real understanding of how to manipulate popular opinion on critical issues.

In one example, an account only two weeks old, with 279 followers (most of which were bots themselves), began a harassment campaign against an organization. By leveraging this model of amplification, the account generated 1,500 followers in only four weeks simply by tweeting malicious content about its target. Whilst such activities to manipulate popular opinion have been well documented, a recent campaign shows that cybercriminals can also use this expertise to extort companies into paying to be spared such practices.

Ransomware has long been a form of extortion in which criminals hold data hostage; this new trend demonstrates that reputations are now firmly in their sights. With existing battalions of bots already well versed in manipulating conversations to drive political agendas, their turning attention to individual companies represents a very concerning development.