By Jason Dover, VP of Product Strategy at Progress
In today’s competitive landscape, every business seeks ways to carve out even a small edge over its competitors. The influx of new digital and web 2.0 entrants into just about every industry is one of the key drivers that has made this more important than ever. Out of this operating environment have emerged the concepts of digital transformation and business agility, where enterprises seek to use IT as a revenue generator and differentiator rather than simply a necessary utility. This environment has proven to be the perfect incubation ecosystem for cloud adoption.
Drivers for Hybrid Cloud
While many approaches and patterns for cloud usage exist, the most common is hybrid cloud. By definition, a hybrid cloud includes at least one private cloud ecosystem along with one or more public cloud environments that are managed using unified models, tooling and playbooks. The primary benefits of this model are agility and the ability to provide the best platform for given applications and services while minimizing swivel chair management.
Despite these benefits, not every workload belongs in a public cloud. Other services and applications, however, are far more efficient when coupled with the just-in-time deployment and horizontal scaling options that a public cloud natively provides. To remain efficient, the additional complexity of a hybrid environment must be tied together with common management, unified policies and a consistent security model.
Load Balancing in a Cloudy World
In cloud operating environments, whether on-premises or hosted, the principles around load balancing have changed significantly. While the core of load balancing is still fundamentally centered on providing intelligent traffic distribution across endpoints, data centers or clouds, there are new considerations that must be taken into account.
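The core of that traffic distribution logic is simple enough to sketch. The following is a minimal, hypothetical illustration (not any particular vendor’s implementation) of two classic strategies, round-robin and least-connections, over a static endpoint pool:

```python
import itertools

class RoundRobinBalancer:
    """Cycle through endpoints in a fixed order."""
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def pick(self):
        return next(self._cycle)

class LeastConnectionsBalancer:
    """Send each new request to the endpoint with the fewest active connections."""
    def __init__(self, endpoints):
        self._active = {ep: 0 for ep in endpoints}

    def pick(self):
        endpoint = min(self._active, key=self._active.get)
        self._active[endpoint] += 1
        return endpoint

    def release(self, endpoint):
        """Call when a connection to the endpoint closes."""
        self._active[endpoint] -= 1

rr = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
picks = [rr.pick() for _ in range(4)]  # wraps back to the first endpoint
```

A production load balancer layers health checks, session persistence and weighting on top of this, but the distribution decision itself reduces to a selection policy like the ones above.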
In the past, the common approach was to consolidate as many applications as possible onto a centralized, multi-tenant set of physical appliances. With advancements in x86 architecture and efficiency, virtualized load balancers have grown in popularity because they can now deliver significantly more performance. The idea of creating smaller blast radii that allow for more frequent changes with micro-impact footprints has driven the adoption of per-app or per-service load balancers that are responsible for proxying only a very small part of the environment.
With this approach, modifications and updates with unintended consequences affect only a small portion of the overall environment. Similarly, when this architecture is used to facilitate segmentation and limit lateral movement, successful breaches by threat actors impact only a small part of the ecosystem.
Cloud-native application architecture is another key driver for enterprise strategy around load balancing. When building new apps with cloud-native architecture principles as the cornerstone, it’s very likely that these workloads will be deployed alongside traditional infrastructure. Load balancers with the right capabilities can help to bridge gaps in this scenario, for example, by enabling the scaling dynamism that exists within a containerized environment to be reflected in the physical network automatically.
The need for this is reflected by the increase in customer RFPs that call out the need for load balancers under consideration to have the ability to understand the context and schema of Kubernetes container management environments.
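At its heart, that Kubernetes awareness is a reconciliation problem: keep the load balancer’s backend pool in sync with the endpoints the container platform currently reports. The sketch below is hypothetical (the function name and data shapes are illustrative, not any product’s API) and shows the diff a controller would compute after observing, say, a Kubernetes Endpoints or EndpointSlice change:

```python
def reconcile(lb_pool, observed_endpoints):
    """Compute the add/remove operations needed to make the load balancer's
    backend pool match the endpoints currently reported by the container
    platform (e.g. from a Kubernetes Endpoints watch)."""
    desired = set(observed_endpoints)
    current = set(lb_pool)
    return {
        "add": sorted(desired - current),     # pods that scaled up
        "remove": sorted(current - desired),  # pods that scaled down or died
    }

# One pod was replaced between observations: 10.244.2.7 is gone, 10.244.3.9 is new.
ops = reconcile(
    lb_pool=["10.244.1.5:8080", "10.244.2.7:8080"],
    observed_endpoints=["10.244.1.5:8080", "10.244.3.9:8080"],
)
```

A load balancer that runs this loop continuously is what lets container scaling events be reflected in the physical network automatically, without an operator touching the backend pool.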
Blueprint for a Sustainable Strategy
Principles are key in IT because they allow operators to be flexible and respond in a way that’s not possible with a fixed set of static rules. Good principles drive good decision-making in agile and dynamic environments.
Given the complexity of modern IT, the demands on the office of the CIO and the operational challenges facing I&O teams, it is vital to have a set of foundational principles to drive the strategy around the selection and use of the critical component of load balancing.
Here are a few principles to consider:
1: Identify Application Business Goals
Load balancing selection must be based on the outcomes of the applications, services and workloads being serviced. Despite the general trend toward virtualizing network functions such as load balancing, if a specific application or environment requires compliance with higher-level versions of standards such as FIPS 140-2 or a very high rate of TLS transactions, a hardware solution may be the ideal option.
On the other hand, a highly scalable, modernized enterprise deployment that needs strong isolation, combined with the ability to prevent independent tenants from impacting their neighbors’ performance, may prefer a virtual deployment of a fabric of per-service micro-instances. The main point is that instead of letting your incumbent vendor drive the development of your RFP, it’s important to first evaluate key outcomes and objectives.
2: Consider How What You Implement Will Impact Security Posture
With the increase of cyber threats, it’s become more popular for organizations to consider how they can apply existing components within their environments to improve their security posture. One of the most under-utilized components is the load balancer. As the point of ingress for all client application requests and egress for all service responses, the load balancer occupies a privileged position. When optimally implemented with the right product capabilities, this position can be leveraged to help address security requirements.
As an example, certain key PCI DSS compliance requirements can be addressed with the implementation of a web application firewall (WAF). Most security-minded load balancer vendors have implemented WAF functionality as a core load balancing function. By design, a load balancer serves as a rudimentary firewall by preventing access to proxied services other than what’s explicitly defined to be allowed. When combined with embedded authentication and authorization services that can be integrated with third-party identity providers, a properly equipped load balancer can serve as a key supporting pillar of a zero trust strategy for application access.
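That “rudimentary firewall” behavior comes from the default-deny nature of a proxy: only explicitly published routes are forwarded. A minimal, hypothetical sketch of that check (the patterns and function name are illustrative only):

```python
import fnmatch

# Hypothetical per-service publishing policy: only these paths are proxied;
# anything else is rejected before it ever reaches a backend.
ALLOWED_PATTERNS = ["/api/v1/orders*", "/api/v1/catalog*", "/healthz"]

def is_request_allowed(path: str) -> bool:
    """Default-deny: a path is forwarded only if it matches a published pattern."""
    return any(fnmatch.fnmatch(path, pattern) for pattern in ALLOWED_PATTERNS)

is_request_allowed("/api/v1/orders/42")  # allowed: explicitly published
is_request_allowed("/admin/console")     # denied: never defined, so dropped
```

A full WAF goes much further (payload inspection, signature and anomaly rules), but the proxy’s position at the point of ingress is what makes those controls possible at all.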
Additionally, as a common consolidation point for certificate management, a load balancer can further be used as an enforcement point for the prevention of the use of insecure ciphers that provide potential conduits for threat actors. The ability to identify the characteristics of incoming requests can also be used to control access policies to applications and services for internal traffic versus external traffic and to bolster a defense-in-depth strategy.
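Because the load balancer typically terminates TLS, that enforcement point can be expressed in a few lines of configuration. A minimal sketch using Python’s standard `ssl` module (the specific cipher string is an illustrative hardening choice, not a mandated baseline):

```python
import ssl

# Hypothetical hardening applied at the load balancer's TLS termination point:
# refuse legacy protocol versions and restrict the cipher suites offered.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2          # reject TLS 1.0/1.1
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!MD5:!3DES")     # forward secrecy, AEAD only

# The negotiable suites now exclude known-weak algorithms entirely.
cipher_names = [c["name"] for c in ctx.get_ciphers()]
```

Centralizing this at the proxy means every backend behind it inherits the policy, even legacy services that cannot be upgraded to modern TLS stacks themselves.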
3: Ensure Licensing and Consumption Flexibility
Today’s approach to IT requires that flexibility and future-proofing be integral to all implemented solutions. This is a critical buying criterion in support of the typical office of the CIO’s objective of achieving greater agility. One way this emerges in the context of load balancing is licensing and consumption. Historically, load-balancing solutions were primarily licensed through perpetual, per-instance licenses combined with an annual or multi-year maintenance contract. When the environment scaled and capacity limits were breached, a “rip-and-replace” approach was required to scale the environment vertically with higher-performance instances.
While this is often still sufficient, there are many cases where flexible approaches, such as buying access to capacity pools or having scale-out/scale-in mechanisms directly tied to actual usage, are desirable as well. When assessing a provider, ensure that these options exist and that there is a way to transition between models based on the changes in your business’s requirements. This will ensure that you will be able to achieve maximum agility.
What load balancing looks like now is the result of a progressive evolution over the past couple of decades. It’s likely to change even more significantly over the next few years thanks to a convergence of cloud and digital transformation trends combined with foundational changes to how modern applications are built. The underlying principles that made these solutions such a critical part of current application infrastructure, however, still remain: applications and services need to be highly available, performant and secure, and certain functions are still best handled outside of the application itself.
This means that when considering what solutions to implement, enterprises should consider how the options they review will support those needs in the environment they expect to have over the next several years. Given the likelihood of having a hybrid cloud model, businesses should significantly factor in the characteristics associated with a heterogeneous cloud ecosystem. In doing so, they can ensure an optimal application experience enabled by a well-configured and adaptable load balancer and can evolve their infrastructure wherever their cloud infrastructure strategy takes them.
About the Author
Jason Dover, Vice President of Product Strategy at Progress, has over a decade of technology leadership experience across enterprise organizations. At NYSE Euronext and Deutsche Bank, he provided consultative services for directory and messaging integration projects, including the integration of key systems between Liffe, AEMS, AMEX and Euronext with the New York Stock Exchange. At Kemp Technologies, he held various roles across sales, marketing and product management. He is currently responsible for overall application experience (AX) product portfolio direction, product marketing, support of corporate development activities, strategic partner engagement and Horizon 2 initiatives.
Jason can be reached online at jason.dover@progress.com, on Twitter at https://twitter.com/jaysdover, and at the company website https://www.progress.com/