When one of our customers approached us, their priorities were clear – high availability and smooth traffic distribution were essential. They needed to manage an application composed of microservices running on Azure VMs, with every service able to communicate seamlessly and scale on demand, so we designed a solution to meet both requirements.
After analysing their environment, we implemented a dual-layer design using Azure Load Balancer (ALB) for raw TCP/UDP traffic and Azure Application Gateway (AGW) for HTTP/HTTPS workloads. This hybrid model gave them raw performance and reliability for backend services, plus routing intelligence and security for frontend web workloads.
Why Two Azure Load Balancing Solutions Were Required

The customer’s setup supported a mix of workloads – non-web and web-based.

- Non-web workloads (custom TCP-based apps) needed fast, efficient distribution without protocol overhead.
- Web-facing applications demanded features like SSL offloading, routing rules, and WAF protection.
By combining Layer 4 (ALB) and Layer 7 (AGW) capabilities, we ensured every traffic type was handled through the most optimised path, improving performance and resilience.
Implementing Azure Load Balancer for Non-Web Workloads
We deployed Azure Load Balancer to manage the customer’s TCP-based workloads. Its pass-through architecture evenly distributed traffic across backend VMs without additional latency.
Our configuration included:
- Frontend IPs to receive inbound TCP connections.
- Backend pools made up of VM Scale Sets.
- Load balancing rules mapping frontend and backend ports.
In addition, we set up TCP health probes that continuously checked VM availability to ensure consistent uptime. If a probe failed, traffic was automatically rerouted. Through Azure Metrics monitoring, the customer gained visibility into the health of every instance, reinforcing uptime and operational confidence.
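The probe behaviour described above can be sketched in a few lines of Python. This is an illustration of the health-check logic, not the Azure implementation; the hosts and ports are placeholders.

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection succeeds within the timeout,
    mimicking how a TCP health probe marks an instance healthy."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_backends(backends: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Keep only instances that answer the probe; the load balancer
    reroutes traffic away from the rest."""
    return [b for b in backends if tcp_probe(*b)]
```

An instance that stops accepting connections simply drops out of the healthy set, which is the automatic rerouting the customer relied on.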
Deploying Azure Application Gateway for Web Applications
For web workloads, we implemented Azure Application Gateway to handle HTTP/HTTPS traffic.
Configuration highlights:
- Listeners on ports 80 and 443 with SSL/TLS termination for encryption offload.
- Backend settings enabling HTTP/HTTPS communication with VMs.
- Routing rules to direct traffic based on URL paths.
We also deployed custom health probes to validate endpoint health (e.g., /healthcheck), ensuring routing decisions reflected application availability, not just server status.
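The difference between "server is reachable" and "application is healthy" can be sketched as follows. This is an illustrative stand-in for the gateway's probe, and the /healthcheck path is just the example used above.

```python
from http.client import HTTPConnection

def http_probe(host: str, port: int, path: str = "/healthcheck",
               timeout: float = 5.0) -> bool:
    """Return True only for a 2xx/3xx response, so an endpoint that is
    up but failing its health check is still treated as unhealthy."""
    try:
        conn = HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        status = conn.getresponse().status
        conn.close()
        return 200 <= status < 400
    except OSError:
        return False
```

A VM that answers on port 80 but returns 500 from its health endpoint would fail this probe, which is exactly the distinction the custom probes gave the customer.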
Intelligent routing was a major benefit: for instance, /images requests were handled by one backend pool, while /api requests were directed elsewhere. This improved performance isolation and workload efficiency.
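The path-based rules behave like a longest-prefix match over a small routing table. The sketch below shows the idea; the pool names are hypothetical, and a real URL path map uses wildcard patterns such as /images/*.

```python
# Hypothetical routing table mirroring the path-based rules above.
ROUTES = [
    ("/images", "images-pool"),
    ("/api", "api-pool"),
]
DEFAULT_POOL = "default-pool"

def select_backend_pool(path: str) -> str:
    """Longest-prefix match: the most specific configured path wins,
    and unmatched requests fall through to the default pool."""
    for prefix, pool in sorted(ROUTES, key=lambda r: -len(r[0])):
        if path == prefix or path.startswith(prefix + "/"):
            return pool
    return DEFAULT_POOL
```

Keeping image traffic and API traffic in separate pools is what produced the performance isolation the customer saw.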
Our Decision Framework for Choosing Azure Load Balancing Services
We applied a straightforward decision framework to ensure scalability and future readiness:

- For non-web protocols (TCP/UDP) → use Azure Load Balancer.
- For web applications requiring SSL offload, WAF, or path-based routing → use Azure Application Gateway.
This approach provided flexibility and clear operational boundaries between traffic types.
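The framework is simple enough to express as a single function. This is a deliberately minimal sketch of the rule of thumb above, not an exhaustive selection guide.

```python
def choose_service(protocol: str) -> str:
    """TCP/UDP workloads go to Azure Load Balancer; HTTP/HTTPS workloads
    needing SSL offload, WAF, or path routing go to Application Gateway."""
    if protocol.lower() in ("http", "https"):
        return "Azure Application Gateway"
    return "Azure Load Balancer"
```

In practice each new service in the customer's environment is classified once by protocol, which keeps the operational boundary between the two layers unambiguous.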
The Result: Resilient, Secure, and Scalable
By deploying both solutions, we delivered an infrastructure that’s:
- Resilient: load-balanced traffic across multiple instances to prevent downtime.
- Secure: protected web workloads with SSL offload and WAF features.
- Scalable: ready to handle future growth and service expansion.
The customer now benefits from a highly available, monitored, and intelligent load balancing setup that supports their microservices architecture seamlessly.

Not sure which Azure load balancing setup suits your workloads? Get in touch via our contact page to discuss architectures that improve performance, resilience, and scalability.
At Cloud Elemental, we design cloud environments that are secure, efficient, and built to grow with your business.