Proliferation of sneakerbots across industries: The long tail of DIY bot operators
Many people’s first exposure to bots came from so-called sneakerbots, which scan websites for inventory and automatically complete the checkout process. Combined with proxy services that supply IP addresses and user agents to make them look like legitimate customers, they can be an effective way to score Yeezys, Air Jordans and other highly desired, limited-edition footwear. Their use has also extended to other consumer goods (e.g., the highly visible launch of the PS5 this past holiday season).
Despite providing an arguably unfair advantage, the use of bots to purchase in-demand items is still legal.
Initially, these sneakerbots were used mostly by large, financially motivated groups seeking a big payoff: they’d purchase large quantities of inventory with plans to resell it, typically on auction sites such as eBay. More recently, though, their use has shifted to the long tail of DIYers who simply want merchandise for themselves or are buying goods to resell as a side hustle.
There are many ways for DIYers to gain access to bots: they can download free or inexpensive plugins, join inexpensive Cook Groups, or purchase turnkey bots-as-a-service, an option that requires zero technical knowledge. But just because these tools are free or inexpensive doesn’t mean they aren’t sophisticated. Driven by the chase for profit, online communities continue to build effective, state-of-the-art bots.
A sneakerbot by any other name
What we are observing now is the proliferation of sneakerbots across all industries. More than 30% of all internet traffic is already generated by unwanted bots, a share expected to exceed 50% within the next few years. The rapid digital transformation of the past several years has acted as a catalyst for this growth in synthetic traffic.
Whether they are large, organized groups or DIYers, bot operators turn to automation because it’s cheap, easy to use, highly profitable, and viable at scale.
Here are some recent examples of sneakerbots being used in different industries:
COVID-19 vaccine line jumping: As doses are pushed out to local health departments, medical practices, and chains such as CVS or Walgreens, fears have risen that bots are being used to jump the line and secure vaccinations ahead of those in greater need. Getting the COVID-19 vaccine to the individuals who need it first is imperative to protect those most at risk; allowing people to jump the line is unfair and will only prolong the pandemic recovery.
Stock manipulation: During the recent GameStop short squeeze, bots were used to post specific stock recommendations on Reddit’s WallStreetBets board in an attempt to sway public opinion and drive community members to buy shares, pushing up the price. An analysis of 30,000 posts published over 24 hours showed that 97% appeared to have been generated by bots.
Disinformation spreading: The 2020 election was rife with disinformation, shared at scale to influence public opinion. Swing states were heavily targeted with false information posted by fake social media accounts created by bot operators. During the election season, thousands of different social accounts posted the same messages at the same time.
HBO Max also recently premiered a documentary, Fake Famous, that looks at the industry of manufacturing Instagram influencers. In it, the producers used bots to add fake followers to three test subjects, showing how easily bots-for-hire can manufacture fame (and eventually free gifts) for just about anyone.
What can be done?
Stopping unwanted bots is an increasingly difficult challenge, which is why it continues to make headlines across industries and varying aspects of online businesses.
Stopping bad bots requires two fundamentally different capabilities, implemented together to detect and mitigate them:
1. Architecture: Zero tolerance by design
The first is an architectural shift in how bots are detected, before they are mitigated.
Most solutions today are reactive: they use rules to decide what to block, and even those that apply machine learning and AI rely on data from the past to make decisions about the future. This approach requires letting bad bots in and watching for suspicious behavior before stopping them. Because bots typically need only one or two requests to acquire what they’re after, and because they leverage scripting tools and proxy services to disguise themselves as humans, they can fly beneath the radar and do their damage before being detected.
Bot mitigation needs to be ruleless and future-proofed, detecting bots on the first request, before letting them into your infrastructure. It must also catch new bots that have never been seen before.
This is analogous to the zero trust philosophy that is becoming increasingly necessary for securing enterprise networks: a more proactive approach in which all requests are considered “guilty until proven innocent.” If every automated request must present itself within the context of a legitimate browser and prove it comes from a source controlled by a human, bots can be stopped without having to inspect their network, device and behavioral attributes.
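To make the idea concrete, here is a minimal sketch in TypeScript: an Express-style middleware that rejects, on the very first request, anything that does not carry a proof minted by challenge code running in a real browser. The header names, the verifyBrowserProof helper and the HMAC scheme are illustrative assumptions for the sake of the example, not a description of any particular product.

```typescript
// Minimal "guilty until proven innocent" sketch: every request must carry a
// proof token produced by client-side challenge code that only runs inside a
// real browser. Header names and the proof scheme are assumptions.
import express from "express";
import { createHmac } from "crypto";

const SIGNING_KEY = "rotate-me-regularly"; // assumption: shared with the challenge script

// Hypothetical check: the token is an HMAC over a server-issued nonce that the
// challenge script computed while executing in the browser.
function verifyBrowserProof(token: string | undefined, nonce: string): boolean {
  if (!token || !nonce) return false;
  const expected = createHmac("sha256", SIGNING_KEY).update(nonce).digest("hex");
  return token === expected;
}

const app = express();

app.use((req, res, next) => {
  const nonce = req.header("x-challenge-nonce") ?? "";
  const proof = req.header("x-browser-proof");
  if (!verifyBrowserProof(proof, nonce)) {
    // Blocked on the first request -- no "observe behavior, then react" window.
    res.status(403).send("Challenge failed");
    return;
  }
  next();
});

app.get("/inventory", (_req, res) => res.json({ sneakers: "in stock" }));
app.listen(3000);
```

The point of the sketch is the ordering: the proof is checked before any application route is reached, rather than after suspicious behavior has already been observed.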
2. Ability to strike back against bad actors
The second requirement is a recognition that we’re up against people, not robots. If it continues to be easy and profitable for criminals to leverage bots at scale, they will continue to do so. There is currently little being done to strike back against or even frustrate bot operators.
One example of striking back is an asymmetric cryptographic proof-of-work challenge that can increase in difficulty: cheap for the defender to verify, but computationally expensive for the client to solve. It is designed to quietly exhaust the compute resources of automated attacks without tipping off the attacker, until bot-driven automation becomes too costly to continue and operators redirect their efforts elsewhere.
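The following hashcash-style sketch (TypeScript, using Node’s crypto module) illustrates that asymmetry: the client must brute-force a counter until a SHA-256 digest has enough leading zero bits, while the server verifies the answer with a single hash and can raise the difficulty for suspicious clients. The function names and difficulty policy are assumptions, not a specific vendor’s design.

```typescript
// Hashcash-style proof-of-work sketch: expensive to solve, cheap to verify.
import { createHash, randomBytes } from "crypto";

// Count leading zero bits in a hex digest.
function leadingZeroBits(hexDigest: string): number {
  let bits = 0;
  for (const ch of hexDigest) {
    const nibble = parseInt(ch, 16);
    if (nibble === 0) { bits += 4; continue; }
    bits += Math.clz32(nibble) - 28; // leading zeros within the 4-bit nibble
    break;
  }
  return bits;
}

// Server side: issue a random challenge, raising difficulty for suspicious clients.
function issueChallenge(suspicionScore: number) {
  const difficulty = Math.min(8 + suspicionScore * 4, 28); // required leading zero bits
  return { nonce: randomBytes(16).toString("hex"), difficulty };
}

// Client side: brute-force a counter -- negligible for one human page load,
// punishing when repeated thousands of times by a bot.
function solve(nonce: string, difficulty: number): number {
  for (let counter = 0; ; counter++) {
    const digest = createHash("sha256").update(`${nonce}:${counter}`).digest("hex");
    if (leadingZeroBits(digest) >= difficulty) return counter;
  }
}

// Server side: verification costs a single hash, regardless of difficulty.
function verify(nonce: string, difficulty: number, counter: number): boolean {
  const digest = createHash("sha256").update(`${nonce}:${counter}`).digest("hex");
  return leadingZeroBits(digest) >= difficulty;
}

const { nonce, difficulty } = issueChallenge(1);
const counter = solve(nonce, difficulty);
console.log(verify(nonce, difficulty, counter)); // true
```

Because the difficulty can be ratcheted up silently, an attacker simply sees each request taking longer and burning more CPU, with no obvious signal that they have been flagged.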
Another method is to make it very difficult for bot operators to retool and reverse-engineer defenses. Many bot operators are skilled in JavaScript and use that knowledge to pick apart bot mitigation defenses and bypass the obfuscation used to hide them.
Instead of relying on plain JavaScript, defenses can be written in a custom interpreted language that changes dynamically and is delivered as bytecode. This changes the playing field, making it much harder for bot operators to build new bots that bypass defenses. Frustrated operators will move on to websites that offer an easier path to the desired outcome.
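The toy sketch below illustrates the principle with a tiny stack machine in TypeScript: the opcode numbering is reshuffled per deployment, so the same logical program compiles to different byte sequences and any hard-coded decoder a bot operator writes quickly goes stale. Real implementations are far richer; this is only meant to show why a moving bytecode target is harder to reverse-engineer than readable JavaScript.

```typescript
// Toy "dynamic bytecode" sketch: a minimal stack machine whose opcode
// numbering is shuffled per deployment. Purely illustrative.
type OpName = "PUSH" | "ADD" | "XOR" | "RETURN";

// Each deployment draws a fresh opcode mapping from a seed.
function shuffledOpcodes(seed: number): Record<OpName, number> {
  const names: OpName[] = ["PUSH", "ADD", "XOR", "RETURN"];
  // Simple deterministic shuffle (assumption: stand-in for a stronger PRNG).
  const order = names
    .map((name, i) => ({ name, key: (seed * 2654435761 + i * 40503) >>> 16 }))
    .sort((a, b) => a.key - b.key);
  const table = {} as Record<OpName, number>;
  order.forEach((entry, code) => { table[entry.name] = code; });
  return table;
}

// Interpret a program against whichever opcode mapping this deployment uses.
function run(program: number[], ops: Record<OpName, number>): number {
  const stack: number[] = [];
  for (let pc = 0; pc < program.length; pc++) {
    const op = program[pc];
    if (op === ops.PUSH) { stack.push(program[++pc]); }
    else if (op === ops.ADD) { stack.push(stack.pop()! + stack.pop()!); }
    else if (op === ops.XOR) { stack.push(stack.pop()! ^ stack.pop()!); }
    else if (op === ops.RETURN) { return stack.pop()!; }
  }
  throw new Error("program ended without RETURN");
}

// The same logical check encodes differently under each deployment's mapping.
const ops = shuffledOpcodes(Date.now() % 1000);
const program = [ops.PUSH, 7, ops.PUSH, 35, ops.ADD, ops.RETURN];
console.log(run(program, ops)); // 42
```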
Conclusion
Just as the internet democratized eCommerce, media and banking, it was inevitable that everyday users would gain easy access to stealthy, specialized and economical bots. What started with sneakerbots and large, financially motivated groups has expanded into a long tail of intelligent, independent bot operators exploiting online businesses.
A fundamental change in how bots are detected and mitigated is needed; missing out on your limited-edition sneakers is only one small aspect of the detrimental impact bots are having on society.