For a while now, F5 has been referring to their BIG-IP products as “Strategic Points of Control.” When I first heard that phrase, I didn’t really understand what they were trying to say and assumed it was “marketing speak.” As I’ve gotten better at leveraging F5 technologies to solve my very complicated requirements, I’ve begun understanding what they meant.

I was going to write a blog post about “Strategic Points of Control” a couple months ago, but Lori MacVittie had already beaten me to it.

She defines Strategic Points of Control as “Locations within the data center architecture at which traffic (data) is aggregated, forcing all data to traverse the point of control.”

I think that’s a great definition, so I’ll happily use it here. For our example, let’s assume we’re hosting an E-Commerce site. Naturally, traffic traverses our F5 LTMs on its way to our application instances. This means the F5s are not only a point of failure, but also a point of control. They see all inbound and outbound traffic for this application. Since F5 does a wonderful job of building L7 visibility into their devices, LTM becomes a great candidate for altering or reporting on the traffic flowing through it. Of course, just because it can doesn’t mean it should.

Someone posted a question on DevCentral (F5’s User Community) wondering when it was prudent to use iRules. Naturally, most of us answered “it depends.”

While almost everyone appreciates the flexibility of iRules, some fear they might be used when they shouldn’t be.

I recently worked on a project that required us to ensure an HTTP application only used HTTPS. Since this application was being fronted by an F5 LTM pair, it made sense to terminate the SSL there and send cleartext between the F5 and the application. While sometimes it’s as easy as making an HTTPS Virtual Server and applying an SSL profile containing the proper cert, I wasn’t that lucky. This particular application sent redirects to the user based on how it was being accessed. If it was being hit over HTTP, it sent redirects specifying http; if it was being hit over HTTPS, it sent redirects specifying https. In this case, even though we were using HTTPS between the client and the LTM, the application would still see traffic over HTTP since we weren’t re-encrypting the data between the LTM and the application. Naturally, this would cause a user to stop using SSL as soon as they clicked a link.

Fortunately, since LTM sees the traffic between itself and the application, it can see these redirects and rewrite them. By using “redirect rewrite,” I was able to rewrite the redirects sent by the application to use https. Unfortunately, this application also had JavaScript buttons that, when clicked, would cause the user to send a GET request specifying HTTP. Again, since LTM is a “strategic point of control” and sees the traffic, I simply wrote an iRule to redirect all HTTP requests for this Virtual Server to HTTPS.
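For reference, the http-to-https piece is only a few lines of iRule. The snippet below is a minimal sketch of that kind of iRule rather than our exact production version; it would be applied to the plain-HTTP (port 80) Virtual Server, while the redirect rewriting itself is just the “redirect rewrite” setting on the HTTP profile.

    when HTTP_REQUEST {
        # Applied to the plain-HTTP (port 80) Virtual Server: bounce the
        # client over to HTTPS for the same host and URI they asked for.
        HTTP::redirect "https://[HTTP::host][HTTP::uri]"
    }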

After creating the iRule for the redirect, I let the application team know that we were ready for them to start testing. They were somewhat surprised that I was able to make the application use HTTPS without them making any changes. One of them actually said, “awesome, I like when it’s easy like this and we don’t have to hack crap together.” With a huge smile on my face, I said, “that’s pretty much exactly what I just did.”

It only took me about 10 minutes to brush up on “redirect rewrites,” and since I had written plenty of “http-to-https” iRules, this was extremely easy. At the end of the day though, I used iRules to fix an application “issue.” While this is one of the best features of iRules, it also demonstrates how easily they can become a workaround for application problems. What if I were the only person with a good understanding of iRules, or of how we were using LTM to handle the redirects for this application? If someone accidentally altered or removed that iRule, the application would start having issues. If the application code were rewritten to only use HTTPS, there really wouldn’t be any concerns. Of course, there are a ton of application instances, so by making the change on the F5s we keep traffic from having to reach the apps just to be redirected, and we only have to make the change in one place.

One of the most enjoyable posts I’ve written dealt with using iRules to generate heatmaps illustrating where site visitors come from. Even though I tested this iRule and got it working well, I ended up choosing not to use it. Because my site leverages Akamai’s DSA product, we have access to very similar information through their portal. By using their site to track this info, I essentially traded one Strategic Point of Control for another. Obviously I saved myself a performance hit on our F5s, but it really came down to whether tracking users like this was a proper use of my LTMs. The answer, as always, is that “it depends.” For sites that don’t have Akamai or some other product with visibility into this kind of information, F5 might be your best option.
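For the curious, the core of that kind of geo-tracking iRule can be tiny. The sketch below is a rough illustration of the idea rather than the actual heatmap iRule from my earlier post; the subtable name is arbitrary, and it simply tallies requests per US state in the session table.

    when HTTP_REQUEST {
        # Look up the client's state via LTM's built-in geolocation data
        # and bump a counter in an arbitrary subtable ("geo_counts").
        set state [whereis [IP::client_addr] state]
        if { $state ne "" } {
            table incr -subtable geo_counts $state
        }
    }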

Assuming you’re using Akamai and have an F5 deployment, you’ll run into several areas of overlapping technologies:

1. Using Context to handle different users…differently.

2. Protecting application resources by throttling users based on whether cookies exist.

3. Web Application Firewalling

4. Redirects

5. Limiting access to a site to certain geographic areas/types of users (a rough sketch follows this list)

6. Compression, Caching, Acceleration
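To illustrate number 5 on the LTM side, a geographic restriction can be as simple as the sketch below. The country code and response text are placeholders, and Akamai can enforce the same kind of policy at its edge before the request ever reaches your infrastructure.

    when HTTP_REQUEST {
        # Placeholder policy: only serve clients that geolocate to the US.
        if { [whereis [IP::client_addr] country] ne "US" } {
            HTTP::respond 403 content "Access is not available in your region."
        }
    }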

The list could easily go on, but it demonstrates some potential challenges an architect might face. Since both Akamai and the F5s are strategic points of control, which should you use? I think the most accepted rule is “the closer to the user, the better.” In reality, it comes down to a cost/benefit comparison. While handling these functions in Akamai-land both limits traffic to your infrastructure and accelerates the user experience, there’s a price for that. Assuming you already have capacity on your LTM, it would be free (save labor) to use it instead, whereas Akamai would likely charge for each feature.

I had an interesting discussion with a coworker about where certain application logic should lie. In this case, our dialog revolved around whether an HTTP redirect should lie on a web server or on an F5 Application Delivery Controller. Naturally, being the ADC guy, I would want it on my system. Even with that said, I think it’s pretty obvious that logic like this should lie on the F5 device.

1. The F5 is closer to the user than the web server. If the F5 handles the redirect, the Web server doesn’t have to see the initial request, just the post-redirect one.

2. Instead of the redirect existing on multiple servers, it only has to exist on 1 (or 2) F5 devices.

Today, when most people discuss serving content on the “edge” or “closer to the customer,” they’re likely doing so because of the performance implications. The initial motivation for companies to utilize CDNs like Akamai was to reduce dependence on their own infrastructure. By offloading static content to a CDN, companies could reduce their bandwidth costs and potentially even their server footprint. As demand for content-rich applications has increased, the main motivation for utilizing a CDN has changed. The price of bandwidth has dropped dramatically, while server consolidation technologies like virtualization and blades have made server resources cheaper than ever. Now, when a company chooses to utilize a CDN, it’s likely so its content can be even closer to its clients/users. Using technologies like geographic delivery, a user requesting a page from California can be sent to a CDN resource in California. This helps deliver the rapid response time users have come to expect from modern web applications.

There’s really no disputing that compression, caching, security, and redirects should be done as close to the user as possible. The only potentially valid argument I see for not utilizing such services is a financial one. In retail, customers demand fast response times. In some environments, that isn’t the case. If users are apathetic to load time, the most cost-effective solution would likely be one that doesn’t require a CDN at all…it’s all about finding which solution fits best for your environment.