AWS Elastic Load Balancing Setup
Introduction: The Load Balancer Is Your Traffic Cop (Without the Attitude)
If you’ve ever wondered how a handful of EC2 instances can serve an embarrassing number of users without collapsing into a pile of timeouts and tears, welcome to the world of Elastic Load Balancing. In plain English: AWS Elastic Load Balancing (usually shortened to ELB) helps route incoming requests to healthy back-end targets. It does this while scaling, detecting failures, and generally behaving like a responsible adult—at least compared to many developers at 2 a.m.
But “setting up a load balancer” can sound like one of those cloud tasks that begins with confidence and ends with a screenshot of a console error you can’t interpret. This article is your guide to setting up AWS Elastic Load Balancing with clarity, structure, and minimal mystery. We’ll cover ALB (Application Load Balancer), NLB (Network Load Balancer), and the legacy Classic Load Balancer (which you probably shouldn’t choose unless you’re maintaining someone else’s museum exhibit).
By the end, you’ll know how to create a load balancer, configure listeners and target groups, tune health checks, and wire everything together with sensible defaults. We’ll also include common pitfalls—because reality is a cruel teacher and health checks are often the final boss.
Before You Click “Create Load Balancer”: Pick the Right ELB Type
AWS offers multiple load balancer types, and choosing the wrong one can turn your setup into a “why is nothing working” comedy show. The best type depends on your application and traffic patterns.
Application Load Balancer (ALB)
Use ALB when you’re working with HTTP/HTTPS and want advanced routing based on content. ALB can make decisions using hostnames and paths (for example: send /api to one target group and /images to another). It’s also a great fit for modern web applications, microservices, and Kubernetes ingress.
Think: “smart traffic cop” who reads the street signs and changes routes accordingly.
Network Load Balancer (NLB)
Use NLB for TCP/UDP traffic and when you need high performance with extremely low latency. NLB is less about HTTP features and more about fast, efficient forwarding at the network layer.
Think: “broad-shouldered bouncer” who simply directs traffic to the right door without asking what the visitor is doing.
Classic Load Balancer (Classic ELB)
Classic ELB is older. It’s generally not the first choice for new projects. If you’re starting fresh, aim for ALB or NLB depending on your needs. Classic ELB is like a flip phone: it technically works, but you probably don’t want to build your life on it.
The Big Picture: How ELB Works
It helps to visualize the components, because ELB setups often fail due to “I thought those steps were optional.” Here’s the conceptual model:
- Load Balancer: The front door that receives connections from clients.
- Listeners: Rules that define what the load balancer listens for (protocol and port) and what to do with incoming traffic.
- Target Groups: Collections of back-end targets (EC2 instances, IPs, or other services) that receive traffic.
- Health Checks: Mechanisms that determine whether targets are healthy enough to receive traffic.
- Routing Rules (ALB especially): Logic to forward traffic to the appropriate target group.
If you keep these pieces in mind, configuration errors become easier to diagnose. It’s like knowing which part of your car is making the noise: engine, wheel, or the mystery grinding from the trunk.
Step-by-Step: Set Up an ALB for a Typical Web App
We’ll walk through a common scenario: you have an application running on instances (or container tasks) and you want to expose it over HTTPS with health checks and clean routing.
Step 1: Decide Where Your Targets Live
Target groups typically point to EC2 instances in specific subnets, to IP addresses, or to resources like ECS tasks, depending on your architecture.
For EC2 targets, you’ll generally ensure:
- Your instances are reachable on the target port (for example, 80 or 8080).
- Your security groups allow traffic from the load balancer to the instances.
- Your application is actually listening on the expected port.
Yes, it sounds obvious. No, it’s not always obvious at 11:47 p.m. when you realize the app runs on 3000 but you configured 8080. Humans are resilient, but servers are not.
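That 11:47 p.m. discovery is cheap to automate. Here's a minimal reachability probe you can run from the instance itself or a nearby host; the hostnames and ports are placeholders to adapt to your setup:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder ports: confirm the app really listens where you think it does.
    print("3000 open:", is_port_open("localhost", 3000))
    print("8080 open:", is_port_open("localhost", 8080))
```

If the app port comes back closed from the instance itself, no amount of load balancer configuration will fix it.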
Step 2: Create the Application Load Balancer
In the AWS Management Console:
- Go to the EC2 service.
- Find the “Load Balancing” section, which contains both “Load Balancers” and “Target Groups” (the UI varies slightly over time).
- Choose “Create load balancer.”
Then:
- Select Application Load Balancer.
- Choose scheme: usually “internet-facing” if you need public access, or “internal” if only private clients should access it.
- Select the VPC.
- Select subnets: typically one or more public subnets for internet-facing deployments.
Availability Zone coverage matters. A load balancer spanning multiple Availability Zones helps maintain resilience when an AZ has issues. It’s called “elastic” for a reason, even if your deployment pipeline doesn’t always behave elastically.
Step 3: Configure the Listener
Listeners define how the load balancer accepts traffic. A typical setup includes:
- Port 80 with redirect to HTTPS (optional but common)
- Port 443 using HTTPS, forwarding to a target group
If you want HTTPS, you’ll need an SSL certificate. AWS offers ACM (AWS Certificate Manager). You can either use an existing certificate or create/import one.
In the listener configuration:
- Choose protocol: HTTPS
- Set port: 443
- Select certificate: from ACM
- Default action: forward to a target group
If you don’t set a sensible default action, you might end up with traffic that goes nowhere. Like sending a letter without an address, except it’s your users who are stuck waiting.
Step 4: Create a Target Group
Now you define where requests should go. In the target group settings:
- Target type: usually “instance” if pointing to EC2 instances.
- Protocol and port: for example HTTP on port 80, or HTTP on port 8080 depending on your app.
- VPC: the same VPC as your load balancer.
Choose a target group name that doesn’t make you regret your life choices. “tg-prod-1” is okay; “tg-final-final-2” is the sort of naming that haunts on-call engineers.
Step 5: Health Check Configuration (The Final Boss)
Health checks are often the reason “it should work” doesn’t. A target will only receive traffic if it passes health checks.
Common health check settings include:
- Protocol: HTTP (for web apps) or HTTPS or TCP.
- Path: for HTTP health checks, often something like /health or /status.
- Port: either use the traffic port or specify a custom one.
- Healthy threshold and Unhealthy threshold: how many consecutive successes/failures mark a target as healthy/unhealthy.
- Timeout: how long to wait for a response.
- Interval: how frequently to check.
A practical recommendation:
- Use an explicit health endpoint that returns a fast response and a 200 status when healthy.
- Make sure the health endpoint doesn’t require heavy authentication flows.
- Ensure firewall rules allow the load balancer to reach the health check port.
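That “fast 200” endpoint can be just a few lines. Here’s a minimal sketch using Python’s standard library; the /health path is a common convention, and the port in the comment is an assumption to match to your target group:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Serves a cheap, unauthenticated /health endpoint for load balancer checks."""

    def do_GET(self):
        if self.path == "/health":
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep health-check noise out of the access log

# To run standalone (port is an assumption; match your target group's port):
# HTTPServer(("0.0.0.0", 3000), HealthHandler).serve_forever()
```

In a real app you’d add a route like this to your existing framework rather than run a second server, but the contract is the same: fast, unauthenticated, 200 when healthy.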
Also, beware of “slow boot” applications. If your app takes 45 seconds to start but health checks run too aggressively, your instances might be marked unhealthy before they’re ready. Tweak thresholds and intervals accordingly.
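You can estimate the interaction between boot time and health check settings with back-of-envelope arithmetic. The helper below sketches a worst-case bound; the numbers are illustrative, not AWS defaults:

```python
def worst_case_time_to_healthy(startup_s, interval_s, healthy_threshold):
    """Rough upper bound on seconds before a target first receives traffic:
    the app finishes booting, then must pass `healthy_threshold` consecutive
    checks spaced `interval_s` apart (the first check may land just after boot)."""
    return startup_s + healthy_threshold * interval_s

# A 45-second boot with a 10s interval and 5 required passes:
print(worst_case_time_to_healthy(45, 10, 5))  # 95 seconds, not 45
```

The point: the time your app needs and the time the target group grants are different numbers, and the second one is the only one ELB cares about.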
Step 6: Register Targets
Once your target group exists, you attach EC2 instances (or other targets). In the target group configuration:
- Select “Add targets.”
- Choose instances.
- Select the port (if required).
- Confirm registration.
After registration, watch the “Health” column. Initially, you might see an “initial” state. Give it time, and then verify it flips to healthy.
Step 7: Security Groups and Networking (Where Plans Go to Die)
ELB doesn’t magically bypass networking rules. You must configure security groups so that:
- The load balancer can reach the target instances on the target port.
- Clients can reach the load balancer on listener ports (80/443) depending on scheme and exposure.
Common approach:
- ALB security group allows inbound from 0.0.0.0/0 on ports 80 and/or 443 (if internet-facing).
- Instance security group allows inbound from the ALB security group on the app port (for example 80 or 8080).
If your target group shows unhealthy, check:
- Is the app listening on the correct port?
- Does the health check path exist and return 200?
- Are security groups allowing traffic from the ALB to the instance?
- Are the instances in the expected subnets/AZs?
- Is the target group protocol/port matching your application?
Yes, you’ll check more than once. No, that doesn’t make you a failure. It makes you a professional.
Step 8: Create Optional Routing Rules (ALB Flavor)
With ALB, you can define rules beyond the default action. Examples:
- Forward requests for /api/* to an API target group
- Forward requests for /static/* to a static asset target group
- Use host-based routing (different domains to different services)
Typical workflow:
- Edit listener rules.
- Add a rule with priority.
- Define condition (path pattern, host header, etc.).
- Choose action forward to the target group.
Be aware of rule priority. The first matching rule wins. If you add a broad catch-all rule above a more specific one, you may accidentally route everything into the wrong hallway.
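The catch-all foot-gun is easy to see in miniature. The sketch below is a toy model of first-match-wins evaluation, not ALB’s actual matcher (real path patterns have their own syntax); it uses Python’s fnmatch as a stand-in wildcard:

```python
from fnmatch import fnmatch

def match_rule(path, rules):
    """Evaluate rules lowest priority number first; the first match wins.
    Rules are (priority, path_pattern, target_group) tuples, '*' is a
    wildcard -- a loose imitation of ALB listener rule evaluation."""
    for priority, pattern, target_group in sorted(rules):
        if fnmatch(path, pattern):
            return target_group
    return "default-tg"  # hypothetical default action

# A broad catch-all with a smaller priority number shadows the specific rule:
bad  = [(1, "/*", "web-tg"), (2, "/api/*", "api-tg")]
good = [(1, "/api/*", "api-tg"), (2, "/*", "web-tg")]
print(match_rule("/api/users", bad))   # web-tg  (oops)
print(match_rule("/api/users", good))  # api-tg
```

Same rules, different priorities, very different hallway.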
Testing: Confirm Your Load Balancer Isn’t Just a Decoration
Once you create and deploy, test systematically:
- Use the load balancer DNS name (or your domain) to access the service.
- Verify HTTPS is working properly (certificate valid, correct TLS behavior).
- Check that requests arrive at the correct targets (logs, metrics, or instance response).
- Confirm health check status is healthy for all targets.
If traffic fails but health checks show healthy, your application might return errors for normal requests (like 500s for real routes) even though the health endpoint returns 200. That’s not always wrong, but it’s a sign your health check endpoint and real behavior aren’t aligned.
Also, avoid the classic mistake: “I can open the health endpoint in a browser but the load balancer can’t.” Browsers go through the public internet; health checks go through your VPC and security groups. Different paths, different rules, same disappointment.
Tuning and Best Practices (So It Doesn’t Break When You Get Popular)
Load balancers can be set up with defaults, but serious reliability comes from thoughtful configuration. Let’s discuss the common areas you’ll likely want to tune.
Stickiness: When You Need It, and When You Don’t
Some applications rely on session stickiness (keeping a user on the same backend instance). ALB supports stickiness in target groups.
However, stickiness is like a speed bump: it can help certain setups, but it can also reduce distribution fairness. If you’re using shared session storage (like a database or cache), stickiness might be unnecessary.
Rule of thumb: prefer stateless services when possible. If you need stickiness, enable it deliberately and document why, so future you doesn’t remove it “for cleanliness” and break everything.
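To see how stickiness changes distribution, here is a toy router contrasting round-robin with cookie pinning. The instance IDs are made up, and real ALB stickiness uses a load-balancer-generated cookie, not this logic; this only illustrates the trade-off:

```python
from itertools import cycle

targets = ["i-aaa", "i-bbb", "i-ccc"]  # hypothetical instance IDs
round_robin = cycle(targets)
sticky_table = {}  # cookie value -> pinned target

def route(cookie=None):
    """Sticky when a known cookie arrives; otherwise pick the next target
    round-robin and (if a cookie is present) pin that client to it."""
    if cookie is not None and cookie in sticky_table:
        return sticky_table[cookie]
    target = next(round_robin)
    if cookie is not None:
        sticky_table[cookie] = target
    return target

print(route("user-1"))  # pins user-1 to i-aaa
print(route())          # anonymous traffic keeps rotating: i-bbb
print(route("user-1"))  # i-aaa again: the session stays put
```

Notice that pinned clients never rebalance, which is exactly why a few heavy sticky sessions can leave one instance sweating while the others nap.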
Connection Handling and Timeouts
Load balancers manage connections and forward traffic. Timeouts matter, especially for slow endpoints.
If you see client timeouts or gateway errors, review settings like idle timeout and ensure your application timeouts are compatible.
Also, consider what happens with long-running requests. If your architecture doesn’t support long-lived HTTP requests, you might need asynchronous patterns (queues, background jobs, and polling) rather than hoping the load balancer will wait patiently like a barista who’s seen it all.
Access Logs and Monitoring
For troubleshooting and auditing, enable access logs for your load balancer (where supported). Then use metrics and logs to understand traffic patterns.
In practice, you want visibility into:
- Request counts by target
- HTTP codes and response times
- Health check failures and reasons
- Rule routing behavior
Cloud setups get confusing when you can’t tell who is doing what. Observability turns “mystery meat” into “oh, that’s what happened.”
Common Errors and How to Fix Them (Without Crying)
Here are several frequent issues people run into while setting up AWS Elastic Load Balancing. If you’ve already encountered one, congrats: you’re human. If not, congratulations: you’re about to learn in advance.
Targets Stay Unhealthy
This is the most common. Likely causes:
- Health check path returns non-200.
- Application isn’t listening on the configured port.
- Security group blocks health check traffic.
- Wrong protocol (HTTP vs HTTPS).
- Instances are in the wrong subnets or not registered correctly.
Fix approach:
- Validate your health endpoint responds from the instance itself.
- Confirm the target group’s protocol/port/path match your application.
- Check security group inbound rules from the ALB security group.
- Use logs on your instances to see what requests arrive (if any).
Listener Rules Don’t Match What You Expect
Another classic: you configure a path rule like /api/* but everything still hits the default target group.
Common reasons:
- Priority is wrong, and another rule matches first.
- Pattern syntax doesn’t match your URLs (for example, missing trailing wildcard behavior).
- Host header conditions don’t match your request domain.
Fix: test with a few example URLs and confirm which rule matches in the load balancer rule evaluation (where applicable) or via logs.
HTTPS Works, but Redirect Loops Appear
If you set up HTTP-to-HTTPS redirects and also have application-level redirects, you might create a redirect loop. For example, ALB redirects to HTTPS, but the app redirects based on headers it doesn’t recognize.
Fix: ensure your app respects the correct proxy headers (for example, X-Forwarded-Proto) and that your framework is configured to treat the load balancer as a proxy.
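The decision boils down to one question: does the app trust the proxy header or not? X-Forwarded-Proto is the real convention ALB uses; the function below is an illustrative sketch of the app-side logic, not any particular framework’s implementation:

```python
def needs_https_redirect(headers, trust_proxy=True):
    """Decide whether the app should redirect to HTTPS. Behind a load balancer
    that terminates TLS, the backend connection is plain HTTP, so the only
    reliable signal is the X-Forwarded-Proto header set by the proxy."""
    if trust_proxy:
        proto = headers.get("X-Forwarded-Proto", "http")
    else:
        proto = "http"  # the app only ever sees its own plain-HTTP listener
    return proto != "https"

alb_headers = {"X-Forwarded-Proto": "https", "Host": "example.com"}
print(needs_https_redirect(alb_headers, trust_proxy=True))   # False: no redirect
print(needs_https_redirect(alb_headers, trust_proxy=False))  # True: hello, loop
```

With trust_proxy off, every HTTPS request looks like HTTP to the app, the app redirects to HTTPS, the ALB forwards it as HTTP again, and around you go.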
Health Checks Pass, But Users Still Get Errors
This happens when:
- Your health endpoint is simple and always returns 200.
- Your real endpoints fail due to database connectivity, missing env vars, or auth configuration.
Fix: align health checks with real readiness. For example, have the health endpoint verify critical dependencies (carefully, so it doesn’t become too expensive). Or add separate readiness/liveness logic in more advanced architectures.
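One way to align health checks with real readiness is to aggregate cheap dependency probes into a single status. A hypothetical sketch, where the check names and callables are placeholders for your actual probes:

```python
def readiness(checks):
    """Run named dependency checks; return (status_code, failed_names).
    `checks` maps a name to a zero-argument callable returning True/False;
    an exception counts as a failure."""
    failures = []
    for name, check in checks.items():
        try:
            if not check():
                failures.append(name)
        except Exception:
            failures.append(name)
    return (200 if not failures else 503, failures)

# Placeholder probes; keep them cheap so the health check stays fast.
status, failed = readiness({
    "database": lambda: True,
    "cache": lambda: False,  # simulate an unreachable cache
})
print(status, failed)  # 503 ['cache']
```

Wire the result into your health endpoint’s status code, and the target group will stop sending traffic to instances whose dependencies are down instead of serving users a 500 with a smile.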
Staging Works, Production Fails (Because Production Is a Different Planet)
Usually it’s environment drift:
- Different ports or target group configuration
- Different security group rules
- Certificates missing in the target environment
- Different subnets or routing rules
Fix: keep IaC (infrastructure as code) and use environment-specific variables. If you rely on manual clicks for production changes, you will eventually discover the human “select the wrong thing” tax.
What About NLB? A Quick, Practical Contrast
If your use case fits NLB, you’ll configure similar pieces: load balancer, listeners, and target groups, but with protocol differences.
- NLB commonly uses TCP/UDP and can forward connections quickly with low latency.
- Health checks may be TCP-based or HTTP(S) based depending on your setup and supported options.
- Advanced HTTP routing rules are generally not the focus; you’re routing at lower layers.
In other words: ALB is great for “route based on web content.” NLB is great for “forward traffic with minimal fuss.” Picking the right one avoids unnecessary complexity.
Production-Ready Checklist: Your Setup Review Before You Go Live
Use this checklist before you declare victory and go back to your regularly scheduled programming.
Load Balancer Basics
- ALB/NLB type selected correctly for your traffic (HTTP/HTTPS for ALB, TCP/UDP for NLB).
- Internet-facing vs internal scheme matches your needs.
- Subnets cover multiple Availability Zones (where applicable).
Listeners and Certificates
- Listener protocols/ports match your expectations (80/443 typical for ALB).
- HTTPS certificate is valid and correct for your domain.
- Default action forwards to the intended target group.
Target Groups and Health Checks
- Target group protocol/port matches your application.
- Health check path exists (for HTTP checks) and returns 200 quickly.
- Health check intervals/timeouts fit your app startup time.
- Security groups allow the load balancer to reach targets.
- Targets are registered and show healthy status.
Routing Rules (If Using ALB)
- Rule priorities are correct and specific routes are not overridden.
- Host/path conditions match what clients actually send.
Security and Observability
- Least privilege security group rules are applied.
- Access logs and metrics are enabled (or at least planned).
- You have a way to trace requests to targets (logs, response headers, metrics).
A Small “Realistic” Example Scenario (Because Configs Love Context)
Imagine you have a Node.js service running on port 3000 on three EC2 instances. You want HTTPS on port 443 and you want to route /api to the service. You could do this with an ALB:
- Create an ALB in your VPC with public subnets.
- Create a target group using HTTP on port 3000.
- Create a health check on /health returning 200.
- Configure a listener on 443 with an ACM certificate.
- Set default rule to forward to your target group.
- Optionally add path-based rules for /api/* to forward to the same or different target groups.
The key is that port 3000 is your application port, while port 443 is your public HTTPS port. Confusing these is like mixing up your seat number and your gate number: both are numbers, neither is where you want to be.
Conclusion: You’re Now the Person Who Knows What “Healthy” Means
Setting up AWS Elastic Load Balancing isn’t just about creating a load balancer and hoping for the best. It’s about making deliberate choices: the right ELB type, correct listener configuration, accurate target group settings, and health checks that truly reflect your application’s readiness.
If you remember one thing, make it this: health checks and security groups are where most setups succeed or fail. Treat them like first-class citizens, verify your assumptions early, and you’ll save yourself from the classic “it works on my instance” tragedy.
Now go forth and balance traffic like the responsible traffic wizard you were always meant to be. And if you run into problems, don’t panic—logs, health status, and security group rules will tell you the story. You just have to read it closely, like a detective who suspects the culprit is always “the health check path.”

