Fail-Open Architecture for Secure Inline Protection on Azure

Every inline deployment introduces a tradeoff: enhanced inspection versus increased risk of downtime. Inline protection is important, especially for APIs, which are now the most targeted attack surface, but so are consistent uptime and performance. This is where a fail-open architecture comes in. 

This Wallarm How-To blog outlines how to deploy Wallarm’s Security Edge platform on Azure using a fail-open design, ensuring high availability and zero disruption, even if the filtering infrastructure becomes unresponsive. 

The Challenge: Inline Security Without the Downtime

APIs drive business-critical operations. As such, their availability is non-negotiable. Any inline solution, no matter how effective, introduces the possibility of becoming a single point of failure. If the traffic filtering node goes offline or becomes unresponsive, users could face delays, broken integrations, or full application outages. 

This is one of the most common objections to inline deployments. While legacy WAFs might require tradeoffs between protection and availability, modern cloud architectures allow for both. By using Azure Front Door alongside Wallarm’s distributed Security Edge nodes, organizations can architect a highly available, auto-failover system that maintains protection without jeopardizing performance. 

What is Wallarm Security Edge? 

Wallarm Security Edge is a cloud-native, managed service that deploys filtering nodes across multiple geographic regions. These nodes inspect traffic inline in real time, identifying and blocking malicious API calls before they can reach your origin servers. 

Unlike traditional security appliances, Security Edge doesn’t require you to install or manage any on-prem hardware. You simply route your API and web traffic through the Wallarm filtering nodes and benefit from real-time detection of OWASP Top 10 threats, API exploits, and emerging attacks like LLM prompt injections. 

But what happens if the filtering cluster becomes unreachable? 

Introducing Fail-Open Logic with Azure Front Door 

By integrating Azure Front Door’s active/passive routing capabilities, organizations can implement a resilient, fail-open architecture that bypasses the filtering nodes in the rare event of failure, thus ensuring uninterrupted API availability. 

  1. Set Up Azure Front Door with Origin Groups

Azure Front Door acts as the global entry point for incoming traffic. When you create a Front Door instance, it provides a fully qualified domain name (FQDN) – for example, azureFrontDoor-a7ajbwefb6bza6ez.z01.azurefd.net. 

Typically, you’d configure a CNAME record that maps your public subdomain (api.example.com, for example) to this FQDN, allowing all requests to route through Front Door. 
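
If you want to verify the DNS mapping from a script, a quick lookup of the CNAME record should return the Front Door FQDN. The sketch below assumes the third-party dnspython package; api.example.com is a placeholder subdomain.

```python
# Optional sanity check (requires: pip install dnspython): confirm the public
# subdomain's CNAME record points at the Front Door FQDN.
import dns.resolver

answers = dns.resolver.resolve("api.example.com", "CNAME")  # placeholder subdomain
for record in answers:
    print(record.target)  # expect the *.azurefd.net name assigned by Front Door
```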

To enable automatic failover, you’ll configure two origin endpoints within a single origin group: 

  • Primary origin: Points to the Wallarm filtering node cluster.
  • Secondary origin: Points directly to your application backend, bypassing Wallarm. 
  2. Configure Priority-Based Routing

Azure Front Door lets you assign priority levels to each origin. A lower number means higher priority: 

  • Priority 1: Wallarm filtering node cluster. 
  • Priority 2: Direct-to-origin backup path. 

Traffic is always routed to the highest-priority healthy origin. If the Wallarm node cluster becomes unavailable, Azure Front Door automatically switches to the secondary origin, ensuring continuous service. 

This is the essence of a fail-open architecture: if security infrastructure fails, availability wins by design. 
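
Conceptually, the result is one origin group containing two origins: the Wallarm filtering cluster at priority 1 and the direct backend at priority 2. The Python sketch below is only an illustration of that shape; the field names are descriptive rather than exact Azure resource properties, and the hostnames are placeholders.

```python
# Illustrative only (not the Azure SDK). Field names and hostnames are
# placeholders that mirror the Front Door origin group described above.
origin_group = {
    "name": "api-origin-group",
    "origins": [
        {
            # Priority 1: all traffic flows here while health probes pass
            "name": "wallarm-edge",
            "host_name": "example.edge.wallarm.com",  # assumed Wallarm cluster hostname
            "priority": 1,
        },
        {
            # Priority 2: fail-open path straight to the application backend
            "name": "direct-backend",
            "host_name": "backend.example.com",  # your origin server
            "priority": 2,
        },
    ],
}
```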

  3. Customize Health Probes for Fine-Grained Control

To detect failure conditions, Azure Front Door relies on health probes. These periodic checks validate whether the filtering node cluster is responsive. If the probe fails for a set number of consecutive attempts, traffic is redirected to the healthy fallback origin. 

You can customize these probes with: 

  • Specific HTTP paths or headers
  • Timeouts and response thresholds
  • Frequency of health checks

This flexibility gives your security and infrastructure teams precise control over failover behavior. 
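
As a rough illustration, the knobs you would tune map to probe and load-balancing settings like the following. The field names are descriptive stand-ins rather than exact Azure property names, and /healthz is an assumed health endpoint exposed by the filtering cluster.

```python
# Illustrative probe and load-balancing settings (descriptive names, not exact
# Azure properties). Failover to the priority-2 origin is triggered once the
# configured number of probe samples fails.
health_probe = {
    "probe_path": "/healthz",          # assumed health endpoint on the cluster
    "probe_protocol": "Https",
    "probe_interval_seconds": 30,      # frequency of health checks
}

load_balancing = {
    "sample_size": 4,                  # probes evaluated per decision window
    "successful_samples_required": 3,  # below this, the origin is marked unhealthy
}
```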

Once deployed, here’s what a typical request flow looks like: 

  1. A user makes a request to api.example.com.
  2. The DNS CNAME record points the request to the Azure Front Door FQDN.
  3. Azure Front Door checks the health of the primary origin (Wallarm filtering cluster).
  4. If healthy, traffic is routed through Wallarm for inline inspection.
  5. Wallarm forwards clean requests to the actual backend server as defined in the configuration.
  6. If the Wallarm cluster is unavailable, Azure Front Door automatically reroutes traffic to the direct origin path, without the need for manual intervention (see the sketch below). 
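
The failover decision itself is easy to reason about: route to the lowest-numbered (highest-priority) origin that is currently passing its health probe. Below is a minimal sketch of that selection logic; is_healthy is a stand-in for the probe result, not an Azure API.

```python
# Minimal sketch of priority-based origin selection (fail-open behaviour).
def pick_origin(origins, is_healthy):
    """Return the highest-priority origin that passes its health probe."""
    for origin in sorted(origins, key=lambda o: o["priority"]):
        if is_healthy(origin):
            return origin
    raise RuntimeError("no healthy origin available")

origins = [
    {"name": "wallarm-edge", "priority": 1},    # filtering cluster
    {"name": "direct-backend", "priority": 2},  # fail-open path
]

# If the Wallarm cluster fails its probe, traffic fails open to the backend.
print(pick_origin(origins, is_healthy=lambda o: o["name"] != "wallarm-edge"))
# -> {'name': 'direct-backend', 'priority': 2}
```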

This architecture provides the best of both worlds: 

  • Real-time, inline protection: All traffic is inspected for threats when the Wallarm cluster is healthy. 
  • High availability by default: If filtering fails, users still get uninterrupted access to your APIs and applications. 
  • Fully managed deployments: No appliances, no manual patching, no maintenance headaches. 

Together, Wallarm’s Security Edge nodes and Azure Front Door offer a resilient, cloud-native security model tailored for modern API environments. To learn more about deploying Wallarm Security Edge inline with Azure and building your own fail-open architecture, check out the official Wallarm documentation.

