What analytics can unveil about bot mitigation tactics

The Kasada Research Team has found that bots make up 25% of internet traffic on any given day. In fact, there is a synthetic counterpart for almost every human interaction online.

These bots work to expose and take advantage of vulnerabilities at a rapid pace, stealing critical personal and financial data, scraping intellectual property, installing malware, contributing to DDoS attacks, distorting web analytics and damaging SEO.

Luckily, tools, approaches, solutions and best practices exist to help companies combat these malicious bots, but cybercriminals have not been resting on their laurels and are constantly working on ways to bypass the protections used to block bot activity.

It is important to regularly review what tactics you are using to combat bot traffic and analyze your success rate, as this process will help you understand whether your mitigation approach has already been figured out and worked around by cybercriminals. If you’re not continually evolving your defense along with the attackers, then you’re still a good target for bots.

The shortcomings of traditional approaches

Shortcomings have recently come to light about even the most common and accepted bot mitigation technologies. For example, solutions offering CAPTCHA challenges are not only ineffective at detecting and stopping automated attacks, but they often lead to a friction-filled experience, frustrating customers and leading to lower conversion rates.

Many online retailers and e-commerce providers forgo implementing such protections altogether, fearing that this friction will hurt sales.

Bot mitigation approaches that rely on historical and contextual data (e.g., IP addresses and analysis of known behaviors) and then block similar-looking activity often end up blocking IP addresses or specific user behavior that does not actually indicate an attack (e.g., late-night banking or shopping). These methods create poor experiences and, on analysis, have been shown not to produce the desired mitigation or prevention results.

More recently, use of a rules-based architecture to prevent attacks has grown in popularity. Unfortunately, a rules-based solution falls short when faced with advanced AI- and ML-equipped bots that can morph on the spot to evade an organization’s cyber defenses. As a result, rules-based solutions are always playing catch-up, as they rely on a cache of collected data to make real-time decisions on who is human and who is a bot.

The slow response of a rules-based solution creates gaps within an organization’s defense that can use up bandwidth and resources and slow web servers. This tactic can also impede the customer experience.
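As an illustration of that catch-up problem, here is a minimal sketch of a static, rules-based filter. The rule values and the bot’s behavior are made up; the point is only that rules built from cached, previously observed indicators can be sidestepped the moment a bot rotates the attributes the rules key on.

```python
# Illustrative sketch only: a static, rules-based filter keyed on cached
# indicators (known bad IPs and user-agent strings). All values are made up.

KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}          # previously observed attackers
KNOWN_BOT_UAS = {"python-requests/2.31", "curl/8.4.0"}    # previously observed bot clients

def rules_based_verdict(ip: str, user_agent: str) -> str:
    """Block only what matches rules built from past observations."""
    if ip in KNOWN_BAD_IPS or user_agent in KNOWN_BOT_UAS:
        return "block"
    return "allow"

# A bot that rotates residential proxy IPs and spoofs a mainstream browser
# user-agent matches none of the cached rules and sails through.
print(rules_based_verdict("203.0.113.7", "curl/8.4.0"))                    # block
print(rules_based_verdict("192.0.2.55", "Mozilla/5.0 (Windows NT 10.0)"))  # allow (evaded)
```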

Analyzing your traffic

“You can’t manage what you can’t measure.” – Peter Drucker

Analyzing the success rate of bot attacks on your network is critical. Even if you’ve found that your preferred approach to bot mitigation is stopping 99% of bad bot requests, the remaining 1% can still be considerable and damaging. Say a bot operator is launching an average of 100,000 malicious requests an hour against your site, or roughly 2.4 million a day. A 1% success rate means about 24,000 of those requests got through that day. One successful attack can obtain customer information – 24,000 can ruin your business forever.
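The arithmetic behind that example, using the figures from the paragraph above, is sketched below; the request volume and block rates are purely illustrative.

```python
# Illustrative arithmetic: residual successful bot requests per day at a
# given block rate, using the figures from the example above.

requests_per_hour = 100_000
requests_per_day = requests_per_hour * 24          # 2,400,000 requests/day

for block_rate in (0.99, 0.999, 0.9999):
    got_through = requests_per_day * (1 - block_rate)
    print(f"block rate {block_rate:.2%}: ~{got_through:,.0f} successful requests/day")

# block rate 99.00%: ~24,000 successful requests/day
# block rate 99.90%: ~2,400 successful requests/day
# block rate 99.99%: ~240 successful requests/day
```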

This simple yet devastating equation illustrates why full visibility into, and analysis of, your traffic is so important: you don’t stand a chance of solving the problem until you know for sure how much of your traffic is made up of good bots, bad bots, and human visitors.

Having accurate analytics is essential for informed decision-making – both about how to solve your bot problem, and how to optimize your business operations.

To illustrate one effect that unchecked bots can have on a business, say an organization’s sales and marketing teams depend on analytics from their web and mobile applications to understand the market and the audiences that are using their service.

The introduction of synthetic traffic makes it difficult to gauge the true performance of marketing campaigns, which in turn makes it difficult to be agile and adjust marketing strategies on the fly if they’re not working. Without proper analysis of your traffic, bots make it seem as if every campaign is successful.
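To make that distortion concrete, here is a small illustration with made-up numbers: the same campaign looks far more efficient on the dashboard than it actually was once the synthetic sessions are separated out.

```python
# Made-up numbers illustrating how synthetic traffic flatters campaign metrics.
campaign_spend = 10_000.0        # dollars spent on the campaign
reported_sessions = 50_000       # what the analytics dashboard shows
bot_sessions = 30_000            # synthetic traffic mixed into the same report

human_sessions = reported_sessions - bot_sessions
print(f"reported cost per session: ${campaign_spend / reported_sessions:.2f}")   # $0.20
print(f"actual cost per human session: ${campaign_spend / human_sessions:.2f}")  # $0.50
```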

What to look for

When evaluating your website’s traffic, you can often glean summary information about potential bot activity just by analyzing basic site metrics (a simple scoring sketch follows the list below). Key metrics that could indicate you’re being attacked by bots include:

  • Average session duration: when the average session length is just a few seconds.
  • Geo-location: when the geo-location of the traffic is either non-discernible or from all over the world.
  • Traffic source: when the traffic source is mostly direct for that particular day and it usually isn’t.
  • Bounce rate: when the bounce rate is more than 95%.
  • Service provider: when the majority of the traffic is from the same service provider.
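Below is a minimal sketch of how these signals might be combined into a single “looks like a bot wave” score. The metric names, thresholds, and baseline values are hypothetical and would need to be tuned against your own analytics history.

```python
# Hypothetical daily traffic snapshot pulled from your analytics tool.
# Field names and thresholds are illustrative, not from any specific product.
from dataclasses import dataclass

@dataclass
class DailyTrafficStats:
    avg_session_seconds: float   # average session duration
    distinct_countries: int      # breadth of geo-location
    direct_traffic_share: float  # 0.0-1.0, share of "direct" traffic source
    bounce_rate: float           # 0.0-1.0
    top_isp_share: float         # 0.0-1.0, share from the single largest provider

def bot_wave_score(stats: DailyTrafficStats, usual_direct_share: float) -> int:
    """Count how many of the red flags listed above a day's traffic trips."""
    flags = [
        stats.avg_session_seconds < 5,                        # sessions of just a few seconds
        stats.distinct_countries > 100,                       # traffic "from all over the world"
        stats.direct_traffic_share > usual_direct_share * 2,  # unusually direct-heavy day
        stats.bounce_rate > 0.95,                             # bounce rate above 95%
        stats.top_isp_share > 0.5,                            # one service provider dominates
    ]
    return sum(flags)

today = DailyTrafficStats(3.2, 140, 0.7, 0.97, 0.6)
print(bot_wave_score(today, usual_direct_share=0.25))  # 5 out of 5 -> investigate
```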

While your analytics provider might alert you to your organization’s bot problem, it does not help you manage or mitigate it. At the same time, a typical bot mitigation report is a compilation of what was detected and blocked (in comparison to all of your traffic). This information skews the results and creates a false narrative about how successfully your organization has defended its assets, because it doesn’t show how much bot traffic succeeded in breaching your systems.

Insight into all of your traffic is necessary to fix the problem.

Zero trust and proactive bot mitigation tactics

One approach that’s growing in popularity to overcome these shortcomings is the adoption of a zero trust philosophy. Under zero trust, every bot is treated as “guilty until proven innocent”: interrogation and detection begin at the very first request. Then, once a bot is classified as good or bad, the organization can decide how it wants to manage it. With this approach, no bots make it through to your site unless they have been approved.
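A minimal, framework-agnostic sketch of that “guilty until proven innocent” flow is shown below; the interrogation signals and classification logic are placeholders for whatever detection a real solution would run on the very first request.

```python
# Sketch of a zero-trust request gate: nothing passes unless it is
# explicitly classified and approved. The signals checked here are placeholders.
from enum import Enum

class Verdict(Enum):
    HUMAN = "human"
    GOOD_BOT = "good_bot"
    BAD_BOT = "bad_bot"
    UNKNOWN = "unknown"

def interrogate(request: dict) -> Verdict:
    """Classify on the very first request (placeholder signals)."""
    if request.get("verified_crawler"):          # e.g., a validated search-engine bot
        return Verdict.GOOD_BOT
    if not request.get("client_proof"):          # failed the client-side interrogation
        return Verdict.BAD_BOT
    if request.get("human_signals"):             # passed behavioural/telemetry checks
        return Verdict.HUMAN
    return Verdict.UNKNOWN

def handle(request: dict) -> str:
    verdict = interrogate(request)
    # Zero trust: only explicitly approved traffic reaches the application.
    if verdict in (Verdict.HUMAN, Verdict.GOOD_BOT):
        return "200 OK"
    return "403 Forbidden"   # bad or unclassified traffic is denied by default

print(handle({"client_proof": True, "human_signals": True}))  # 200 OK
print(handle({"client_proof": False}))                        # 403 Forbidden
```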

There’s also something to be said for efforts to proactively respond to bot attacks by wasting the attacker’s time. This can be done with ever-increasing challenges that occupy the bot’s resources and waste the bot operator’s computing power, essentially ruining the economics of an automated attack.

Proactive management dissuades future attacks by bot operators and allows organizations to invest resources elsewhere.
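One way to picture “ruining the economics” is a challenge whose cost roughly doubles with every retry from the same suspicious client, as in the hedged sketch below; the proof-of-work scheme, starting difficulty, and client identifier are illustrative, not a description of any particular product.

```python
# Illustrative escalating proof-of-work: each retry from a suspected bot must
# find a nonce whose hash has one more leading zero bit than the last attempt,
# roughly doubling the attacker's expected compute cost every time.
import hashlib
from collections import defaultdict

attempts = defaultdict(int)   # retries seen per client identifier

def required_difficulty(client_id: str) -> int:
    attempts[client_id] += 1
    return 12 + attempts[client_id]          # leading zero bits demanded

def solve_challenge(seed: str, difficulty: int) -> int:
    """The work a client must burn CPU on before its request is considered."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{seed}:{nonce}".encode()).hexdigest()
        if int(digest, 16) >> (256 - difficulty) == 0:   # top bits all zero
            return nonce
        nonce += 1

difficulty = required_difficulty("suspect-123")
print(f"difficulty {difficulty} bits, nonce {solve_challenge('abc', difficulty)}")
```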

Conclusion

Analytics, and the transparency they provide, are at the heart of successful bot mitigation. The insight afforded by analytics allows organizations to improve customer access and experience, report accurate KPIs, optimize marketing return on investment, increase sales, protect brand reputation, and defend shareholder value.

Understanding where bot attacks originate and distinguishing synthetic traffic from human traffic has implications across your entire business. With increased insight and a zero trust philosophy toward bot mitigation, organizations can plan accordingly and commit resources to improving their customer experience, product offerings, and application speed instead of wasting time, energy, and resources fighting ever-evolving bots with outdated tactics.
