As I discussed in my post about “Strategic Points of Control,” F5 LTMs are in a great position to capture and report on information. I’ve recently encountered several issues where I needed to log the systems sending HTTP 404/500 responses and the URLs for which they were triggered. While this information can be obtained from a packet capture, I find it much easier to simply leverage iRules to log the information.


If you don't know much about iRules, I'd encourage you to head over to DevCentral and do some reading. One of the first things you'll learn is that there are several "events" in which an iRule can inspect and react to traffic. Each event has different commands that can be used; while some commands can be used in multiple events, others cannot.


As an example, HTTP::host and HTTP::uri can be used in the HTTP_REQUEST event, but not in the HTTP_RESPONSE event. Since an HTTP error response sent by a server occurs in the HTTP_RESPONSE event (between the server and the LTM), we can't simply log the value of HTTP::host or HTTP::uri there, as those commands aren't usable in the HTTP_RESPONSE context. Fortunately, variables can be set in one event and referenced in another, which still allows us to access the proper information.


Here’s an overview of what we’re trying to accomplish:


1. A client makes a request to a Virtual Server on the LTM.

2. The LTM sends this request to a pool member.

3. If the pool member (server) responds with an HTTP Status code of 500, we want to log the Pool Member’s IP, the requested HTTP Host and URI, and the Client’s IP address.


We'll be using the "HTTP::status" command to check for 500s. Since this command must be executed within the HTTP_RESPONSE event, which doesn't have access to HTTP::host or HTTP::uri, we'll need to use variables.

In the HTTP_REQUEST event, we'll set those variables to capture the values of HTTP::host, HTTP::uri, and IP::client_addr.

The HTTP_REQUEST event in our iRule will look something like this:


when HTTP_REQUEST {
    set hostvar [HTTP::host]
    set urivar [HTTP::uri]
    set ipvar [IP::client_addr]
}

Now, we’ll check the HTTP status code from within the HTTP_RESPONSE event and if it’s a 500, we’ll log the value of the variables above.


when HTTP_RESPONSE {
    if { [HTTP::status] == 500 } {
        log local0. "$ipvar requested $hostvar $urivar and received a 500 from [IP::server_addr]"
    }
}


Now, whenever a 500 is sent, you can simply check your LTM logs and you'll see the client who received it, the server that sent it, and the URL that caused it. This is a fairly vanilla implementation. I've had several situations in which I also needed to report on the value of a JSESSIONID cookie so our app folks could check their logs too. In a situation like that, you'd simply set and reference another variable.


set appvar [HTTP::cookie JSESSIONID]


log local0. "session id was $appvar"
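For clarity, here's what the whole thing looks like assembled into a single iRule — this is just the snippets above stitched together, with the JSESSIONID logging folded into the same 500 check:

```tcl
when HTTP_REQUEST {
    # Capture request details here; these commands aren't available in HTTP_RESPONSE
    set hostvar [HTTP::host]
    set urivar [HTTP::uri]
    set ipvar [IP::client_addr]
    set appvar [HTTP::cookie JSESSIONID]
}

when HTTP_RESPONSE {
    # HTTP::status is only available in the response context,
    # so we log the variables saved during the request
    if { [HTTP::status] == 500 } {
        log local0. "$ipvar requested $hostvar $urivar and received a 500 from [IP::server_addr]"
        log local0. "session id was $appvar"
    }
}
```

Attach this to the Virtual Server's iRule list and the log lines will show up in /var/log/ltm on the LTM.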


This was a good example of how easily iRules can be leveraged to report on issues. Unfortunately, this isn't always a scalable option, which is why I thought I'd talk about a product I've really enjoyed using.

The folks behind ExtraHop call it an "Application Delivery Assurance" product. Since both co-founders came from F5, they have a great handle on Application Delivery and the challenges involved. Since I'm typically only concerned with HTTP traffic nowadays, I use ExtraHop to track response times, alert on error responses, and baseline our environment. As an F5 user, I'm very pleased to see the product's help section making recommendations on BIG-IP settings to tune if certain issues are seen.

I'd definitely encourage you to go check out some product literature. Since it's not always fun to arrange a demo and talk to sales folks, they offer free analysis online: simply upload a packet capture, it'll be run through an ExtraHop unit, and you can see the technology in action.




One of my coworkers is doing a relatively simple infrastructure redesign for one of our sites. Essentially, the site can work over HTTP or HTTPS. As part of a new project, the application team requires that all data be sent over HTTPS so it can be encrypted. Since the current user behavior is to enter “http://site”, the user typically sends information unencrypted.

So, since the user must now send encrypted data, they must use “https://site” instead. That leaves us with the following options.

1. Instruct the user to always type “https://site”

2. Remove the HTTP Virtual Server so any traffic destined for HTTP will be dropped. This will hopefully force the user to enter “https://site”

3. Create an iRule that automatically redirects “http://site” to “https://site”

We obviously chose option 3, as it doesn't require a change in behavior for the user and also covers the case where they accidentally hit the site over HTTP. Unfortunately, a lot of people don't understand their infrastructure well enough to realize that such an ability exists within an Application Delivery Controller. In some environments, the project would have taken much longer and been much more troublesome for the end users.
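The redirect iRule itself is a well-known one-liner; attached to the plain-HTTP (port 80) Virtual Server, a minimal version looks something like this:

```tcl
# Redirect any plain-HTTP request to the same host and URI over HTTPS
when HTTP_REQUEST {
    HTTP::redirect "https://[HTTP::host][HTTP::uri]"
}
```

HTTP::redirect sends the client a 302, so the browser automatically retries the request against the HTTPS Virtual Server with no change in user behavior required.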

Someone recently asked me what "application delivery" meant. For those who have read my blogs, you'll notice many of the topics touch on the subject of application delivery but don't really offer a simple definition of what it means. From my perspective, it's the effort of getting content from the web server to the client.

It's such a simple concept and yet there's so much involved. An infrastructure must be designed that allows for spreading requests across multiple servers, monitoring the availability of those servers to ensure they can service requests, offloading tasks from the servers where it's efficient to do so, monitoring the performance of transactions, and possibly even optimizing delivery speed using WAN acceleration, compression, or caching.

The more requests a site handles, the more magnified each of those components becomes. While our infrastructure is small enough that we don't notice much of an impact by altering our session dispatch method from "round-robin" to "least amount of traffic" or "fewest connections," someone like Amazon obviously notices. For Amazon, optimizing their dispatch method can mean requiring hundreds fewer servers and in turn realizing hundreds of thousands of dollars in cost savings. Add the ability to offload SSL and Layer 7 manipulation from a server to an Application Delivery Controller and the savings are even greater, not to mention the increased revenue brought about by clients completing their transactions more quickly.

One thing I have definitely touched on before is how most companies should have a single person manage their "application delivery design." The manager of these technologies, which we've called the Application Delivery Architect, is a position that should pay for itself in reduced infrastructure costs and increased customer spending. Unfortunately, as companies have become complacent and stuck in their designs, an App Delivery Architect is rarely required because a re-design is often impossible. I've noticed most places simply re-design one component of their infrastructure at a time. It's a very "year-to-year" form of thinking. For instance, if customer demand has caused stability issues, a company might simply add more servers to a farm rather than looking at how scalable the front-end application is or whether offloading tasks to an ADC might be a more effective solution. When there isn't anyone tasked and empowered with creating an actual vision for application delivery, a company will likely struggle to reach a truly efficient solution.

As I mentioned in this post, businesses need their IT staff to communicate well. While they need employees who are experts at specific applications or technologies, they also need ones who might not know specific technologies in depth but can understand how multiple systems interact. The former role is often filled by an "engineer" while the latter is filled by an "architect."

As functions are consolidated and hardware footprints decrease, the need for architecture has increased dramatically. As companies continue to virtualize, engineers who used to be hardware experts are being asked to be "application experts." Since virtualization management software allows for allocating additional resources to servers as needed, server engineers no longer need to be as thorough at gathering system specs. They're now able to just change the "hardware" configuration until the application works. This has resulted in them essentially becoming application support for their users. Given the cost of maintenance agreements with vendors like Microsoft, the engineer is often just the liaison between the customer and vendor.

It’s like Tom from Office Space! I’m a People Person!

So far, we've only discussed how application delivery has changed from the server perspective. There are still network, security, and storage implications! Since storage and network hardware still need to exist, they should be considered when deciding how best to deliver applications and content to end users. This is where the "application delivery architect" comes in.

If you look at the diagram below, you’ll notice the typical flow for an application.

From left to right, the user connects to the internet and, if possible, uses some sort of WAN optimization solution. Once they reach the hosting DC, they hit the infrastructure behind which the application lives. The application delivery infrastructure can include the network, server, security, and storage infrastructure as well as an Application Delivery Controller. The image below provides a bit more granularity to the traffic flow.

In “typical” environments, the Network team is responsible for Layers 2 and 3 (Switching and Routing) after which traffic is handed off to servers. Layers 4 through 7 can be handled differently depending on the environment. In some cases, the traffic might go straight from the switches and routers to a server. In other cases, it goes through an Application Delivery Controller before getting to the servers and onto the application. When an ADC comes into play, the system can perform a number of functions to offload traffic from the servers. This can include SSL Offloading, Caching, Compression, and even some Firewalling. The idea is that you can purchase “cheaper” servers if not so much is needed from them. This strategy lends itself very well to Virtualization since companies can experience much better consolidation ratios.

Since there are multiple ways to go about delivering an application, a true “application delivery” role needs to exist. Let’s consider the necessary components to properly deliver an application:

1. Availability – Application owners need to determine the availability requirements for their application. From there, the Infrastructure requirements can be derived.

2. Performance – Application owners should determine the performance requirements for their application. From there, Infrastructure requirements are again derived.

3. Security – Application owners determine security requirements for their application. Given these requirements, risks are either mitigated via the application code or an Infrastructure component like an Application Firewall.

4. Access – Here’s where an Application Delivery Architect really shines. Based on information from the developers, Infrastructure should be designed that allows the Application to reach customers as quickly and efficiently as possible.

5. Monitoring/Measurement – This is the component by which the success criteria for the other components are measured. If the application must meet a particular performance target, advanced monitoring should be put into place to verify that the behavior is actually occurring. The more the monitoring resembles real user behavior, the better.

Given unlimited resources, it's not difficult to excel at every one of these components. The need for an Application Delivery Architect role comes into play because, in the real world, solutions need to be delivered that meet numerous constraints including cost, time, and effort. Ideally, such an architect is given an application, requirements, and a budget and told to deliver. Since the role requires a good understanding of all the pieces involved (storage, server, network, security, and monitoring), the architect should excel.

IT employees always have a multitude of choices when designing a solution. Ask a server person to design an always-available solution and they'll probably propose expensive, resilient physical servers or build out a very resilient virtual infrastructure. Ask a network person the same question and you'll get expensive, resilient switches and routers. The same goes for storage. Since an Application Delivery Architect understands all the components, they're able to choose the most effective solutions. That person should understand that if you invest in a good Application Delivery Controller, you can spend less on servers and networking gear. That person should also understand enough about the development process to immediately recognize when an application is designed in such a way that it will abuse the infrastructure.

In conclusion, if a company's goal is to deliver applications and content as efficiently as possible, an Application Delivery Architect role might be just what it needs. In most cases, the role should pay for itself through increased efficiency and reduced infrastructure spend.

For those who know me in a professional fashion, you know I'm a big fan of Application Delivery. For those who don't, Application Delivery is essentially the process by which content and applications get from a web server to your client machine, be it a desktop, laptop, or phone. Since most websites require constant availability, many companies utilize load balancing hardware that ensures requests are only sent to servers that can properly serve the content. This requires that load balancers be able to monitor which servers are up, active, and able to send that content. The reason most load balancers are now considered Application Delivery Controllers (ADCs) is that they now do much more than just load balancing. They can offload SSL (HTTPS) functionality from servers, rewrite requests, and even assist in compressing and caching data to minimize the load on servers while speeding up delivery time.

There are many methods by which ADCs monitor server availability. Some simply ping servers while others post actual data to ensure the resource responds correctly. Because the company I work for wants to add more intensive applications to our sites, simply load balancing requests in a round-robin fashion is no longer good enough. We now want our ADC to poll a server for information such as CPU load, requests per second, virtual memory usage, and TCP errors. By obtaining this information, the ADC will be able to send sessions to whichever system is best able to serve them. Essentially, we want the ADC to have as much information as possible when making its decisions.

Unfortunately, I've run into a bit of a complication. The ADC accesses a script on the web server, which then gathers the necessary data via Windows Management Instrumentation (WMI). While granting access to the script is easy, it's been difficult to determine the permissions needed to allow the script to pull the WMI information from the server. Fortunately, we've got a good security staff who will hopefully be able to help. I know granting the application Administrator access allows it to work, but that might have other implications. Once everything is figured out, we'll be using "Dynamic Load Balancing." Dynamic load balancing, as opposed to static load balancing, continually gathers information from its resources and adjusts its load balancing decisions accordingly. This is typically the best load balancing method because it allows for sending requests to the most capable servers.