When it comes to social engineering attacks, one of the main challenges for the attackers is how to maximize the number of targets and the number of victims.
In order to reach a great number of targets, the attack must be automated. But in order to dupe as many people as possible, the attack must also not seem automated – and this is precisely why so many messages sent by spam bots fail to fool careful users.
Four researchers at the EURECOM Institute in France – Tobias Lauinger, Veikko Pankakoski, Davide Balzarotti, and Engin Kirda – have managed to put together an “automated system that supports human-like conversation”, and have achieved a 76.1% click-through rate on “malicious” links introduced in the “conversation”.
They have succeeded in taking control of conversations between human users in order to implement their attack, and they regard it as a kind of man-in-the-middle attack. According to their paper, they were able to insert themselves into a conversation between two human users, influence the topic of the conversation, make it last longer, and get the users to click on the links they provided.
They named this PoC implementation “Honeybot”, and evaluated its effectiveness on IRC and Facebook – although the attack could be carried out on any chat system where users can exchange private messages.
The bot inserts itself in the middle by initiating a conversation with each of the two users and forwarding their messages back and forth, occasionally modifying them or inserting messages of its own.
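The forwarding loop described above can be sketched in a few lines. This is a simplified illustration, not the researchers’ code: the chat API, the `send` callback, the injected link, and the injection probability are all assumptions for the sake of the example.

```python
import random


class HoneybotSketch:
    """Toy sketch of the man-in-the-middle forwarding loop.

    Each of the two users believes they are chatting with the bot;
    the bot relays every message to the other party, occasionally
    rewriting it or injecting a message of its own.
    """

    def __init__(self, send, inject_probability=0.1, seed=None):
        self.send = send                          # callable: send(recipient, text)
        self.inject_probability = inject_probability
        self.rng = random.Random(seed)
        self.log = []                             # (sender, recipient, relayed_text)

    def on_message(self, sender, recipient, text):
        # Optionally rewrite the message before relaying it; link
        # replacement or gender swapping would hook in here.
        relayed = self.rewrite(text)
        self.send(recipient, relayed)
        self.log.append((sender, recipient, relayed))
        # Occasionally insert the bot's own message into the stream.
        if self.rng.random() < self.inject_probability:
            self.send(recipient, "btw, have a look at this: http://example.com/x")

    def rewrite(self, text):
        # Placeholder for message modification; identity by default.
        return text
```

A real deployment would wire `send` to the chat system’s private-message API; the point of the sketch is only that neither human ever talks to the other directly.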
The bot is capable of changing the perceived gender of the chat participants by replacing gendered words with their opposite-sex counterparts, using a “gender change” algorithm. To decide whether the algorithm needs to be applied at all, the bot analyzes usernames and compares them against lists of female and male first names and of words typically associated with gender (“male”, “female”, etc.). When a username contains no terms from those lists, the bot cannot infer the user’s gender.
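A heavily simplified version of both steps – guessing gender from a username and swapping gendered words – might look like the following. The name lists and the word-substitution table are tiny illustrative stand-ins; the paper does not publish the actual lists the researchers used.

```python
# Illustrative lists only; the real system would use far larger ones.
FEMALE_NAMES = {"anna", "maria", "julia"}
MALE_NAMES = {"john", "peter", "david"}
GENDER_WORDS = {
    "girl": "boy", "boy": "girl",
    "woman": "man", "man": "woman",
    "she": "he", "he": "she",
}


def guess_gender(username):
    """Return 'f', 'm', or None when the username gives no clue."""
    tokens = username.lower().replace("-", "_").split("_")
    for token in tokens:
        if token in FEMALE_NAMES:
            return "f"
        if token in MALE_NAMES:
            return "m"
    return None


def swap_gender_words(text):
    """Replace gendered words with their opposite-sex counterparts."""
    out = []
    for word in text.split():
        bare = word.lower().strip(".,!?")
        out.append(GENDER_WORDS.get(bare, word))
    return " ".join(out)
```

As the article notes, `guess_gender` returns nothing useful for usernames like `xyz123` – which is exactly the limitation the researchers describe.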
The “malicious” links that the Honeybot wants users to click on can be inserted at random (after a certain number of messages has been exchanged), as a reaction to keywords, or as a replacement for other links contained in a message.
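The three insertion strategies can be expressed as a simple decision function. The trigger keywords, the payload URL, and the message threshold below are made-up placeholders, not values from the paper:

```python
import re

URL_RE = re.compile(r"https?://\S+")
TRIGGER_KEYWORDS = {"photo", "video", "funny"}     # illustrative triggers
PAYLOAD_URL = "http://tinyurl.example/payload"     # placeholder link


def insert_link(text, message_count, min_messages=20):
    """Apply the three strategies in order: replace an existing link,
    react to a trigger keyword, or insert once enough messages have
    been exchanged."""
    # 1. Replace any link already present in the message.
    if URL_RE.search(text):
        return URL_RE.sub(PAYLOAD_URL, text)
    # 2. React to trigger keywords.
    if any(w in TRIGGER_KEYWORDS for w in text.lower().split()):
        return text + " " + PAYLOAD_URL
    # 3. Insert after a minimum number of relayed messages.
    if message_count >= min_messages:
        return text + " have you seen this? " + PAYLOAD_URL
    return text
```

The link-replacement branch is the most deceptive one: the victim sees a link apparently sent by their conversation partner, in a context that partner genuinely created.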
To avoid detection, the bot never contacts users with administrative privileges, and it filters out spam as well as messages containing email addresses, IM contact details, or links that it has not replaced with its own.
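These filtering rules amount to a predicate over each message before it is relayed. The sketch below covers the email, IM-contact, and foreign-link checks (spam filtering is omitted); the regular expressions and the IM service names are assumptions chosen for illustration:

```python
import re

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
URL_RE = re.compile(r"https?://\S+")
IM_RE = re.compile(r"\b(icq|msn|skype)\b", re.IGNORECASE)
PAYLOAD_URL = "http://tinyurl.example/payload"     # the bot's own link


def should_drop(text, sender_is_admin=False):
    """Return True when a message must not be relayed: anything that
    could expose the bot or let the users reach each other out-of-band."""
    if sender_is_admin:
        return True                    # never interact with admins
    if EMAIL_RE.search(text):
        return True                    # email address -> out-of-band contact
    if IM_RE.search(text):
        return True                    # IM contact details, same risk
    # Links the bot did not plant itself are dropped rather than relayed.
    for url in URL_RE.findall(text):
        if url != PAYLOAD_URL:
            return True
    return False
```

Dropping rather than rewriting such messages keeps the bot from relaying anything that might let the two victims compare notes directly.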
To test the attack on Facebook, the researchers created one male and one female profile, then sent friend requests to students of a local university. Usually, the female profile would send requests to male users, and vice versa. The results? Five conversations were bootstrapped, and four out of ten targets clicked on the offered TinyURL link.
As regards countermeasures that can be employed to avoid falling prey to this kind of attack, the researchers list some but believe that “any technical countermeasure can be invalidated by an attacker with sufficient persuasive power on the victim. Technical countermeasures are necessary to assist users in making their decisions, but they can only work if the users are aware of the problem. For that reason, user education is a primordial part of any countermeasure.”