Bad bot traffic continues to rise year-over-year, accounting for nearly a third of all internet traffic in 2023. Bad bots access sensitive data, perpetrate fraud, steal proprietary information, and degrade site performance. New technologies are enabling fraudsters to strike faster and inflict more damage. Bots’ indiscriminate and large-scale attacks pose a risk to businesses of all sizes in all industries.
But there are techniques your business can adopt to address this malicious activity. The categories of techniques below, combined into an advanced, multi-layered strategy, highlight who, or what, is visiting your website, enabling you to restrict access for unwanted visitors.
Unfortunately, there is no magic, one-size-fits-all solution. Combining these approaches empowers you to create a robust defense against bots.
Techniques to detect bots
While not all bots are malicious, even “good” bots (such as search engine crawlers) can potentially hinder performance and skew analytics. Visitor insight is critical to appropriately managing all threat types and generating accurate visitor analytics.
To identify bot activity, companies have traditionally relied on red flags like:
- Traffic spikes
- High bounce rates
- Short sessions
- Strange conversion patterns
- Impossible analytics (such as billions of page views)
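As a rough illustration, these red flags can be checked against aggregated analytics data. The session record format, field names, and thresholds below are hypothetical, not from any specific analytics product:

```python
def flag_suspicious_traffic(sessions, spike_factor=3.0, baseline_visits=1000):
    """Flag classic bot red flags in a list of session dicts.

    Each session is assumed to look like:
    {"duration_s": 2.1, "pages_viewed": 1, "converted": False}
    """
    flags = []
    # Traffic spike: far more sessions than the historical baseline
    if len(sessions) > spike_factor * baseline_visits:
        flags.append("traffic_spike")
    # High bounce rate: most visitors view only a single page
    bounces = sum(1 for s in sessions if s["pages_viewed"] <= 1)
    if sessions and bounces / len(sessions) > 0.9:
        flags.append("high_bounce_rate")
    # Short sessions: most visits last only a few seconds
    short = sum(1 for s in sessions if s["duration_s"] < 3)
    if sessions and short / len(sessions) > 0.8:
        flags.append("short_sessions")
    return flags
```

As the article notes, heuristics like these only fire after the traffic has already arrived, which is why they are a starting point rather than a defense.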
Unfortunately, by the time you spot these signs, it’s often too late to prevent damage. Advanced bots may not even set off these alarms because many detection tools fail to keep up with changing bot technology.
Turning to more robust techniques that evaluate technical characteristics and behavioral data gives you the power to turn back malicious or uninvited bots.
Device characteristics
Browser and device attributes can indicate bot activity. There are several facets to consider.
IP addresses
Specific IP addresses and proxies are known to host bots. A robust bot detection system should leverage a frequently updated database of identified bot-associated IPs, data centers, malicious proxies, and other sources linked to automated activity. While constantly changing bot IPs mean this solution is not foolproof, a dynamic blocklist adds a strong verification signal.
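A minimal sketch of such a check, using Python's standard `ipaddress` module. The networks listed are documentation ranges standing in for a real, frequently refreshed blocklist feed:

```python
import ipaddress

# Hypothetical blocklist of networks tied to data centers, proxies,
# and known bot infrastructure. In practice this would be refreshed
# regularly from a threat-intelligence feed.
BLOCKLIST = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocklisted(ip: str) -> bool:
    """Return True if the address falls in any blocklisted network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKLIST)
```

Because bot operators rotate addresses constantly, a hit on this list is best treated as one signal among several rather than an automatic block.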
Hardware and software configurations
Analyzing a device or browser’s characteristics and settings uncovers suspicious visitors. Sites can examine device attributes like screen dimensions, OS, storage, memory, processors, and graphics rendering capabilities to identify configurations that deviate from baselines. Browser-related factors include how a client executes JavaScript, renders pages, and handles other interactive tasks.
Significant variances from expected behavior are strong indicators of bot-generated traffic. Inconsistencies between reported attributes, such as a mismatched time zone and IP address, also indicate potential manipulation.
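One way to operationalize the mismatch idea is to count inconsistencies between what the client reports and what the server derives independently (for example, from a GeoIP lookup or the User-Agent header). The field names here are illustrative assumptions:

```python
def consistency_score(reported: dict, derived: dict) -> int:
    """Count mismatches between client-reported attributes and
    server-derived values. Higher scores suggest manipulation."""
    mismatches = 0
    # Time zone reported by the browser vs. time zone implied by the IP
    if reported.get("timezone") != derived.get("timezone_from_ip"):
        mismatches += 1
    # Browser language vs. languages expected for the region
    if reported.get("language") not in derived.get("expected_languages", []):
        mismatches += 1
    # Platform claimed by JavaScript vs. platform in the User-Agent
    if reported.get("platform") != derived.get("platform_from_ua"):
        mismatches += 1
    return mismatches
```

A score of zero does not prove a human, but a high score is a strong hint that the client is lying about its environment.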
Leaked data
Bots leak data that human users do not, such as runtime errors, network overrides, and modified browser APIs. Looking for this information allows websites to block unwanted visitors.
Device fingerprinting aids bot detection by using device and browser attributes to create a unique identifier. This approach reveals inconsistencies and unusual configurations that could signal bot activity. To escape detection, bots would need to create a different and realistic device fingerprint per visit to the website.
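The core of this idea can be sketched as hashing a canonical form of the collected attributes into a single identifier. Real systems combine far more signals than the two shown here; the attribute names are illustrative:

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Hash a stable, sorted set of device/browser attributes into one ID.

    Sorting the keys makes the fingerprint independent of the order in
    which attributes were collected.
    """
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Because any change to an attribute changes the hash, a bot that wants a fresh identity per visit must fabricate an entire plausible device profile each time, not just one value.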
Authentication and verification techniques
Robust authentication and verification techniques help block automated bots from accessing accounts, filling out forms, or contributing content (e.g., product reviews).
CAPTCHAs and challenge-response tests
These tests are a longstanding strategy against bots, but they may have outlived their usefulness. We’ve all selected the pictures of cars or typed in characters from an image. Not only are CAPTCHA tests annoying to users, but they aren’t that effective. Studies show bots are actually better than humans at solving these puzzles.
Challenge-response tests can be slightly more secure but still create significant friction for real users. If you choose to use these tests, you should also employ additional security measures like risk-based authentication.
Multi-factor authentication (MFA)
Bots can easily circumvent passwords through credential stuffing. MFA enhances security by requiring additional verification steps, such as providing a code or a biometric. Bots may be able to guess a password, but they likely won’t have access to the second factor, making this a solid additional layer of security.
Device fingerprinting enhances these authentication strategies. When a login attempt comes from a new device or location, you can enable additional security steps, such as MFA. This approach also allows you to catch logins for multiple accounts coming from a single device, which can be another sign of bots.
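A simplified decision function for this flow might look like the following. The account record, the fingerprint-to-accounts store, and the threshold of five accounts per device are all hypothetical:

```python
def login_risk_action(fingerprint: str, account: dict, fp_to_accounts: dict) -> str:
    """Decide whether to allow, step up to MFA, or block a login attempt.

    `account["known_fingerprints"]` holds devices previously seen for this
    account; `fp_to_accounts` maps a fingerprint to the set of account IDs
    seen logging in from that device.
    """
    accounts_on_device = fp_to_accounts.get(fingerprint, set())
    # One device attempting many accounts is a credential-stuffing signature
    if len(accounts_on_device) > 5:
        return "block"
    # Unrecognized device: require a second factor before proceeding
    if fingerprint not in account["known_fingerprints"]:
        return "require_mfa"
    return "allow"
```

This keeps MFA friction off returning users on known devices while still challenging anything unfamiliar.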
Behavioral analysis
A site visitor’s behavior gives insight into its legitimacy. Automated programs act much differently than a real person would. There are several ways to evaluate behavior.
Page interactions
Mouse movements, scrolling cadence, and page element engagements are key indicators. Humans complete these actions intermittently and randomly, while bots are systematic and consistent. Rapid scrolling, clicking, and login attempts signal potential bot activity.
Navigation
Examine user movement between pages and time spent on each page. Bots quickly move through many pages, following predictable URL patterns. Humans spend longer on each page and navigate more randomly as they deliberately search for information.
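The dwell-time contrast described above can be turned into a simple heuristic: flag sessions where per-page times are both short and unusually uniform. The thresholds are illustrative assumptions, not established benchmarks:

```python
from statistics import mean, pstdev

def navigation_looks_automated(dwell_times_s: list[float]) -> bool:
    """Bots move through pages fast and uniformly; humans dwell longer
    and with more variance. Requires at least a few page views to judge."""
    if len(dwell_times_s) < 3:
        return False
    # Short average dwell time AND low variance suggests scripted navigation
    return mean(dwell_times_s) < 1.5 and pstdev(dwell_times_s) < 0.5
```

In production this would be one feature among many fed into a scoring model rather than a standalone verdict.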
Form completion
Bots can fill out multiple fields instantaneously, usually with repetitive, predictable, or nonsensical information. Telltale signs of a human filling out a form include making and fixing typos or skipping optional fields.
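The instantaneous-fill signal is straightforward to check if the client records a timestamp per field interaction. The two-second threshold below is a hypothetical example:

```python
def form_looks_automated(field_timestamps: list[float], min_fill_s: float = 2.0) -> bool:
    """Heuristic: a human takes measurable time between the first and last
    field interaction; a bot populates every field near-instantaneously.

    `field_timestamps` holds one timestamp (in seconds) per field touched.
    """
    if len(field_timestamps) < 2:
        return False
    elapsed = max(field_timestamps) - min(field_timestamps)
    return elapsed < min_fill_s
```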
However, evaluating behavior manually is slow, prone to error, and resource-intensive. Detecting bots in real time requires data collection and analysis tools. Machine learning (ML) enhances the capabilities of these platforms. By analyzing billions of data points, ML programs continuously learn and adapt to identify bot-like behaviors as techniques evolve.
You can also exploit bots’ automation by setting traps with a “honeypot.” These decoy websites mimic real sites but are isolated and monitored. Humans won’t find them, but bots will. If a visitor interacts with the site, such as clicking or filling in a field, you will know it is an automated program and can take appropriate action, like blocking the IP address from your site.
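A minimal server-side sketch of the trap: any client that requests a decoy path, which no human-facing navigation ever links to, gets recorded and blocked. The paths and in-memory block set are illustrative:

```python
# Hypothetical decoy paths, never linked anywhere a human would see them
HONEYPOT_PATHS = {"/admin-old/", "/wp-login-backup/"}
blocked_ips = set()

def handle_request(path: str, ip: str) -> str:
    """Trap and block any client that touches a honeypot path."""
    if ip in blocked_ips:
        return "blocked"
    if path in HONEYPOT_PATHS:
        # Only automated crawlers discover these paths, so the visit
        # itself is the evidence of bot activity
        blocked_ips.add(ip)
        return "trapped"
    return "ok"
```

A real deployment would persist the block decisions and feed them back into the IP blocklist described earlier, but the principle is the same.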
A multi-layered approach
Relying on only one of these approaches is inadequate to detect bots and has a high chance of impacting many legitimate users, all while missing a significant proportion of advanced automated scripts.
The ideal strategy encompasses behavior, device characteristics, and authentication techniques. Bot detection tools that leverage device intelligence combine fingerprinting with intent analysis to deliver this coverage.
When you can assess device attributes and user behavior together, suspicious user detection becomes more accurate. A solution with ML further enhances analysis capabilities and keeps pace with growing bot sophistication. With this level of precision, you can confidently flag or block bots while reducing friction for legitimate users.
Bots are getting more advanced, but so are the tools to thwart them. Instead of relying on legacy tools and mindsets that have not kept up with evolving technology, businesses need to adopt a modern approach to detecting bad bots. Technology like device intelligence enables businesses to proactively prevent malicious activity rather than merely mitigating damage.