Security Laboratory

Sec Lab: Security Products

In 1995, if you wanted a security product, you downloaded the source and compiled it on your Sun 3; today we buy supported commercial products. This series from the Security Lab introduces you to some of the products out there and, when possible, the movers and shakers on the teams that create them.

F5 is a Security Company?

By Stephen Northcutt

Ken Salchow is a manager with F5 Networks[1] and he has been kind enough to share his thoughts with us. Ken, can you please tell us a bit about your background?

I’m working on my 8th year with F5. I started out as a field engineer in the north-central US, where we closed F5’s first $1M PO, and I was the first Regional Security SE and the first Security Systems Architect. I now manage our core technical marketing team, which is responsible for many of our whitepapers, articles and industry association memberships. Prior to F5, I was actually a customer of F5 at Best Buy Corporation, where I was a member of the team that developed the hardware architecture behind—when I started there it was on one machine under a buddy’s desk, and when we were done it was over 150 servers running behind F5’s BIG-IP. The team I eventually worked for became the Internet security group at Best Buy, responsible for all the firewalls, VPNs and related security infrastructure. I’ve run my own consulting company (competing directly with Geek Squad when they started) and have been a network admin and a systems admin, and even did component-level repair of main boards for about a month. Along the way I’ve been CNE, MCSE, CCNP and Network+ certified. I also currently hold my CISSP, Certified Ethical Hacker (C|EH) and Certified Computer Examiner (CCE) certifications—all in good standing, thank you.

Everyone in networking operations is familiar with F5; they are the load balancer provider. Can you give us the history of F5?

F5 has been around for over a decade now providing application delivery networking tools--everything that bridges the gap between the network and the application, making sure every application interaction is fast, secure and available. As you said, this started out with a simple load-balancer and has grown to encompass not only local and global load-balancing, but also compression, SSL acceleration/termination, full-session content inspection/modification, WAN optimization, secure remote access and even a web application firewall. Most recently, we also acquired a company called Acopia that does the same thing we do with applications for your data storage devices.[2] F5 headquarters is in Seattle, WA, but we have offices all around the globe.

Thanks for that, Ken. In the late 90s I was writing IDS signatures to detect the presence of F5. Are there still packet artifacts that make it possible to detect if an organization is using your load balancer?

If you remember back to those days, you might recall that one of the first signatures to detect F5 devices was simply the lack of response to most, if not all, of the invalid packets sent; essentially our signature was a lack of signature. By intentionally trying to account for all of the 'weird' packets that might reveal a signature, we actually ended up creating one anyway--so, of course, just like other systems, there will most likely always be artifacts or clues that you are talking to an F5 box. We aren't overly concerned about that though. We are more concerned with two things: 1) making sure that those signatures don't give any additional information (like variances that might give away exact version numbers); and, 2) making sure that that's *all* you see, i.e. we can do a lot of things to make sure that you can't fingerprint the devices behind our gear where all the real action is taking place.
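The "lack of a signature" heuristic Ken describes can be sketched as a simple classifier: probe a target with malformed packets and flag a device whose responses are uniformly silent. This is an illustrative sketch of the idea only, not an F5-specific detector; the probe names, responses and threshold are hypothetical.

```python
# Sketch of the "silence as a signature" heuristic described above.
# Probe names, responses and the threshold are hypothetical examples.

def looks_like_silent_dropper(responses, threshold=0.9):
    """responses maps probe name -> reply (None means the probe was
    silently dropped). A device that drops nearly every invalid packet
    is itself exhibiting a fingerprint."""
    if not responses:
        return False
    dropped = sum(1 for reply in responses.values() if reply is None)
    return dropped / len(responses) >= threshold

# A stack that answers invalid probes (e.g. with RSTs) is not flagged:
chatty = {"bad-flags": "RST", "bad-checksum": "RST", "tiny-frag": "ICMP"}
# A device that silently drops everything stands out:
silent = {"bad-flags": None, "bad-checksum": None, "tiny-frag": None}

print(looks_like_silent_dropper(chatty))   # False
print(looks_like_silent_dropper(silent))   # True
```

In practice a scanner would generate the probes on the wire; the point here is only that uniform silence is itself distinguishing behavior.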

I understand that F5 is moving into the security space. Can you help us understand why you decided to get into this fairly crowded space?

I'm glad you asked that, because this is a common misunderstanding and there are really two answers.

First, F5 has always been in the 'security space'. We have been load-balancing firewalls, providing DoS protection, signature masking and a convenient tap point for IDS/IPS systems for years. Furthermore, anyone who has a passing familiarity with the way our devices are configured is aware that our virtual servers - which are defined by IP and Port - provide very similar functionality to traditional rule-sets in network firewalls; not to mention consolidated authentication, SSL certificate management and, as a matter of necessity, sophisticated session-state management. As far as I'm concerned, we've been an integral part of the enterprise security posture since the very beginning.

Second, our customers have told us that application availability isn't enough--it has to be fast and it has to be secure. If the server is running, but it takes 60 seconds to respond to a request--it isn't really available to process requests. In the same vein, if the application is running and performing, but the user can't access it because it is an internal, sensitive application inside the organization and they are on the road at a hotel--it isn't really available to the user. In response to this, we added optimization functionality and secure remote access capability. This process continues to this day. If the web-based application gets taken off-line by a malicious user--no amount of load-balancing and optimization is going to help, so we added a Web Application Firewall, and so on.

What it comes down to is that our customers only care about their users being able to access their applications when they need them. We can't guarantee that by just supplying HA and optimization--we also have to make sure the application is as secure and protected as it can be, or everything else we’ve done is just wasted time and money. At the same time, adding security can often impact the availability and performance of the application, so we feel that the two have to be done, if not in an integrated manner, then at least in concert with one another.

OK, that makes sense. Now that you are shipping security products, can you tell us about them? What is your focus?

As I mentioned before—we have been part of the security infrastructure for years and we tend to think of solutions more than products. That being said, we have products/features that enable existing security devices, products/features which incrementally add security to existing security devices, and then products/features which are more clearly focused on security. Really, to have a full discussion on all the security aspects of our products would take considerable time, so why don’t I just highlight a few?

BIG-IP® Local Traffic Manager™ (LTM) - the progeny of the original load balancer, it provides numerous security features, including packet filtering, resource masking, SSL offload and certificate management, VLAN management, Port/VLAN/Virtual Server traffic mirroring, content manipulation and many other security services. For instance, LTM can run our Message Security Module, which allows the LTM device to work in conjunction with Secure Computing’s TrustedSource™ source/sender IP reputation database to prevent spam from even entering your messaging systems. This is a great example where F5 was already providing load balancing and optimization for our customers’ SMTP systems—and could seamlessly integrate security services as well.
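The reputation-gating idea behind the Message Security Module can be sketched as a verdict function applied before a connection ever reaches the mail servers. TrustedSource is the real database Ken mentions; the score table, thresholds and verdict names below are hypothetical stand-ins for a live reputation-service lookup.

```python
# Sketch of reputation-based SMTP gating at the delivery tier.
# Scores and thresholds are hypothetical, not TrustedSource's schema.

REPUTATION = {            # hypothetical score per sender IP, 0 (bad) to 100 (good)
    "203.0.113.9": 5,     # known spam source
    "198.51.100.4": 55,   # neutral sender
    "192.0.2.10": 95,     # well-known good sender
}

def smtp_verdict(sender_ip, reject_below=20, greylist_below=60):
    """Decide, before the message reaches the messaging systems,
    whether to reject, greylist, or accept the connection."""
    score = REPUTATION.get(sender_ip, 50)  # unknown senders score neutral
    if score < reject_below:
        return "reject"
    if score < greylist_below:
        return "greylist"
    return "accept"

print(smtp_verdict("203.0.113.9"))   # reject
print(smtp_verdict("198.51.100.4"))  # greylist
print(smtp_verdict("192.0.2.10"))    # accept
```

The design point is that the worst traffic is shed at the same device already terminating the connection, so the mail servers never spend cycles on it.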

BIG-IP® Application Security Manager™ (ASM) – is probably what people think of as our most specific ‘security’ product, as it is our web application firewall. ASM is available as a stand-alone appliance, but it is most often run as a module directly on the LTM device. This allows us to apply application-layer policies and filtering at the same point where we are already providing advanced application delivery policies, such as acceleration, optimization and even SSL termination, allowing ASM to secure traffic during delivery, even inspecting SSL-encrypted traffic. Also, as I mentioned before, this allows us to apply acceleration techniques to the traffic, which helps mitigate the potential latency that an application firewall might inject. This really shows the whole ‘secure, fast and available’ ideal.

FirePass™ - is another more ‘security’-centric product, as it is our SSL-based VPN. It provides secure remote access using SSL rather than legacy IPsec. One of the advantages, of course, is that SSL-based systems don’t require shims in the TCP stack and can therefore be delivered dynamically, which is really a great thing—especially in disaster recovery and business continuity situations. The ease of use of SSL VPN, however, is also one of its greatest risks, and that’s why FirePass also has advanced client-integrity checking features, which allow you to manage the security posture of the devices you allow to connect. For example, it can identify the OS version down to the patch level and can recognize over 100 different anti-virus engines—all of which can be used to make a security decision about letting that device on the network.
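A client-integrity check of the kind described for FirePass boils down to comparing what the endpoint reports against a minimum policy. This is a hypothetical sketch of that decision; the field names, platform list and policy values are illustrative, not the product's actual schema.

```python
# Hypothetical sketch of an endpoint posture check: OS patch level and
# anti-virus state decide how much access the device gets.

REQUIRED_PATCH = {"windows-xp": 2, "windows-2000": 4}  # minimum service pack

def access_level(device):
    os_name = device.get("os")
    patch = device.get("service_pack", 0)
    av_ok = device.get("antivirus_running", False)

    if os_name not in REQUIRED_PATCH:
        return "deny"                       # unrecognized platform
    if patch < REQUIRED_PATCH[os_name] or not av_ok:
        return "quarantine"                 # limited, remediation-only access
    return "full"

corporate_laptop = {"os": "windows-xp", "service_pack": 2, "antivirus_running": True}
kiosk = {"os": "windows-xp", "service_pack": 1, "antivirus_running": False}

print(access_level(corporate_laptop))  # full
print(access_level(kiosk))             # quarantine
```

The useful property is the middle state: a device that fails the check is not simply turned away but can be steered to a remediation network.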

That’s just a quick down and dirty—like I said, a complete rundown of the security impacts of all our products is well beyond the scope of this conversation.

Great. Can I ask you to take off your marketing hat for a second and focus on process? What advice would you give a new CIO? What are three things you think are important for a new CIO to do as soon as they start on the job?

You know, the single greatest piece of advice I could give a new CIO is to immediately attempt to break down the silos between the network team, the applications team and the security team. I’ve read a lot of analyst reports that say this is starting to happen in the enterprise, but every time I visit with a customer—it isn’t the case. The applications team still builds their application with little input from the network and security teams. The network team is left to try making the application work effectively and the security team is always left to try fixing things that would have been cheaper and easier to fix if they had been involved in the application design in the first place. Cross-functional, matrix-based teams are really the only way to effectively create and deliver world-class applications and, with things like SOA and Web2.0 on the horizon, any enterprise that can’t get beyond those siloed development cycles simply won’t be successful.

Another thing that I would suggest the new CIO quickly realize is the impact things like the iPhone are going to have on the way their organization does work. The hype around the iPhone is just adding fuel to the fire of mobile computing and greatly increasing the usability of mobile interfaces. It is no longer just the geeks who will be running Web2.0 applications on their handhelds—the fashion-conscious will be as well. The implications of this are huge—especially from a security standpoint. We have to quit trying to look at the world as “internal or external” and realize that all access to our data and applications must be treated with the same care and respect—and identical policies. It’s no longer enough to simply provide a username and password to access an application; the ‘context’ of that request is equally important, i.e. where is the user coming from, what device are they using, what is the security posture of the device AND the environment they are in, etc.

I know you asked for three things—but those two should keep any new CIO pretty busy.

There has been a lot of discussion about convergence in the security space: VoIP and traditional security services running over what used to be data networks, not to mention video. What can you tell us about the impact of this on our networks in the next few years?

I can tell you one thing, it sure won’t be boring. Seriously, I’ve been spending a considerable amount of time working with the IP Multimedia Subsystem (IMS) architecture lately that deals specifically with these issues within the service provider space; the cable networks, mobile carriers and fixed-line providers. There are some potentially society-altering capabilities when you start converging all of these services—like the fact that your location or the access network you are using to access those services becomes irrelevant. Instead of calling my home, mobile or office number to reach me—you will be able to call ‘me’ and wherever I am—I’ll get the call; or the fact that video phones will most likely become the rule as opposed to the exception. Things like being able to transfer a mobile call to your landline or redirecting the video component to your plasma TV, instead of using your mobile screen or computer monitor, are all technology innovations that until now have really been relegated to science fiction and cartoons.

I know you asked me to take my marketing hat off, but this type of flexibility and the consequent complexity really requires an intelligent layer that sits between the ‘network’ and the ‘applications’; one that provides for the secure, fast and available delivery of these applications. And by intelligent, I mean the ability to dynamically change the way the applications are delivered based on that changing context mentioned above. Delivering voice or video to a mobile handset over general packet radio service (GPRS) is entirely different than sending that same content to the same device over Wi-Fi, which is entirely different from then delivering the voice to a landline receiver and the video to a plasma over broadband; the codecs are different, the need for and types of compression are different, the required sampling rates are different—everything is different. Our networks will need to be able to detect, understand and adapt to those changes—on the fly.

And then there is a different type of convergence, what some people call unified threat management, where one appliance serves as firewall, VPN concentrator, IPS, anti-malware, and so forth. This seems to be an unstoppable trend, can you share your thoughts about the good, the bad and the ugly?

I started a presentation at DoDIIS this past year by saying that “Defense in Depth is Crap”. I stole that from Mike Fratto—I’m sure he wishes I’d quit quoting him—but I think I understood where he was coming from and frankly, I agree. It’s not that the theory of defense in depth is wrong—it’s just that 95% of the time we end up deploying it in the same physical sense as you would if you were still defending castles with 15 different, *physical* devices that you must pass through in order to get to the inner sanctum. Unified Threat Management is really talking about the same thing—there is no need, or really advantage in the digital world, to physically separate everything into different devices. We can achieve nearly the exact same functionality with a unified system.

The good? That’s easy: the hope for improved performance, better reporting/correlation/auditing, reduced management complexity and a single place to apply a global policy instead of having 17 different policies in 17 different places. I always say: Complexity is the enemy of good security. Unfortunately, I think they have a ways to go to achieve these results.

The bad? While, statistically, having one policy instead of 17 makes mistakes less likely (you have 16 fewer places to make one), the fact that it now potentially takes only one mistake to completely circumvent your entire security infrastructure is the big one that everyone brings up. I’m also concerned that the UTM market seems enamored with signatures but doesn’t seem to give much credence to any aspect of positive security measures. Part of making it easier to manage is not having to deal with so many signatures: in many cases, you can replace several hundred discrete signatures with one well-crafted application policy defining the allowed interaction. Lastly, until there is some standardized method of integration, the customer is not always able to get ‘best of breed’ solutions.
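The positive-security point can be made concrete: instead of matching hundreds of attack signatures, declare the one interaction the application allows and reject everything else. The policy below is a hypothetical example, not any product's policy language.

```python
# Sketch of a positive security model: a declared allow-policy replaces
# a signature list. Paths and parameter patterns are hypothetical.
import re

POLICY = {
    # path -> allowed parameters and the pattern each value must match
    "/account/view": {"id": re.compile(r"^\d{1,8}$")},
    "/search":       {"q": re.compile(r"^[\w \-]{1,64}$")},
}

def allowed(path, params):
    """A request passes only if its path is declared and every parameter
    matches its declared pattern -- no per-attack signatures needed."""
    rules = POLICY.get(path)
    if rules is None:
        return False
    return all(key in rules and rules[key].fullmatch(value)
               for key, value in params.items())

print(allowed("/account/view", {"id": "1234"}))      # True
print(allowed("/account/view", {"id": "1 OR 1=1"}))  # False: SQL injection blocked with no SQLi signature
print(allowed("/admin/backup", {}))                  # False: undeclared path
```

Note that the injection attempt is rejected not because it was recognized as an attack, but because it was never declared as legitimate input, which is exactly the trade Ken describes.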

The ugly? The email I’ll probably get after this goes to press. This is a really hot debate—probably one of the most religious arguments I’ve seen since the original PC vs. Mac debates or Linux vs. Windows discussions. It does get ugly sometimes.

I like to give people a bully pulpit,[3] a chance to share their thoughts about what they feel is important. What is your heart burning to say?

I’ll just take the opportunity to tie together everything we’ve been talking about. I’ve mentioned intelligence and context several times. These are the things that, if added to the UTM discussion, really make for an interesting future. What if that UTM device included client-side integrity checking (à la NAC/NAP/etc.), SSL VPN and the intelligence to dynamically react to changes in the context of a user’s request—on a per-session, per-application basis? In some respects, it makes the concept of UTM much more believable, because it pushes the “depth” of your defenses all the way out to the client itself, while still being, really, a single box.

Think of it this way—you want to access a particular resource at your work. The first time you access it, it is from a corporate network, from a corporate laptop during business hours and you are, at least provisionally, authorized to use that resource. When you attempt to access the resource, the context of your access (who, when, how) is compared to the policy of the resource (what) and the device creates a session-specific, dynamic ‘firewall’ policy based on that context. Now let’s say you take your laptop on the road and try to access the same resource—your context has changed. It may still be you, it may still be business hours and you are still using a corporate machine—but you are no longer accessing from a corporate network. In this case, you might want to apply a different access policy because the context has changed. Maybe you simply force the connection to be over SSL or maybe you decide that during this session you want to run all traffic through an IDS/IPS system. In any case, what you are allowed to do changes dynamically based on the context. If you go home and use your home machine to access the resource, your context again changes and a new access policy is defined for that session. Global policy merged with individual resource policies could give the ability to have very simple policy statements that combine into very granular, specific access rules.
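The scenario above is essentially a function from request context to a per-session policy. Here is a minimal sketch under that assumption; the network names, context fields and control strings are hypothetical illustrations, not any product's policy model.

```python
# Sketch of the per-session, context-driven policy walked through above.
# Context fields and control names are hypothetical.

def session_policy(ctx):
    """Map the context of an access request (who, when, how) to the
    controls applied for that session."""
    controls = []
    if not ctx["corporate_device"]:
        controls.append("browser-only access")    # untrusted home machine
    if ctx["network"] != "corporate":
        controls.append("force SSL")              # on the road
        controls.append("route through IDS/IPS")
    if not ctx["business_hours"]:
        controls.append("log and alert")          # off-hours access
    return controls or ["direct access"]

office = {"corporate_device": True,  "network": "corporate", "business_hours": True}
hotel  = {"corporate_device": True,  "network": "hotel",     "business_hours": True}
home   = {"corporate_device": False, "network": "home",      "business_hours": False}

print(session_policy(office))  # ['direct access']
print(session_policy(hotel))   # ['force SSL', 'route through IDS/IPS']
print(session_policy(home))    # adds browser-only access and off-hours logging
```

Because the policy is recomputed per session, a change in any one context field (device, network, time) yields a different rule set without anyone editing the global policy.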

Take this even further—what if the policy states that your application traffic must pass through an IDS/IPS system and that system detects an attack? This changes the context of your access—and could trigger a dynamic regeneration of your access policy based on the new information and the security policy in effect. Maybe we simply terminate your ability to use that resource for x hours; maybe we kill your entire session and make you re-authenticate; or maybe we just log the activity.

And that’s not even the end. What if that policy wasn’t just about access and security? What if this new device wasn’t just a UTM, but something that also encompassed optimization, availability and/or provisioning functions? A lot of people don’t realize that applying standard web compression to traffic over broadband actually *increases* the latency and diminishes throughput—the overhead of compression isn’t made up for, because you aren’t usually bandwidth-limited. At the same time, applying compression to traffic over dial-up or, more commonly today, over mobile networks can tremendously increase the performance of a web application. Because this device knows about the context of a request and changes in it, it could dynamically adjust the delivery of the application just as easily as it adjusts the access policy. You see how this ties into that ‘convergence’ we talked about before? As you move from a mobile device to a landline, not only could we change the security characteristics, but we could dynamically change the way we deliver the application based on the best parameters for the new device.
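The compression trade-off Ken describes can be shown with simple arithmetic: compression trades a fixed processing delay for a smaller payload, which only pays off when the link is slow. The link speeds, page size, compression ratio and overhead below are illustrative assumptions.

```python
# Sketch of the compression trade-off: on fast links the processing cost
# outweighs the bytes saved; on slow links it is a clear win.
# All numbers are illustrative assumptions.

def transfer_time(size_bits, link_bps, compress, ratio=0.5, overhead_s=0.05):
    """Rough model: compression halves the payload (ratio) but adds a
    fixed processing delay (overhead_s)."""
    if compress:
        return overhead_s + (size_bits * ratio) / link_bps
    return size_bits / link_bps

PAGE = 400_000          # a 50 KB page, in bits
BROADBAND = 10_000_000  # 10 Mb/s
MOBILE = 100_000        # 100 Kb/s, GPRS-class

def should_compress(size_bits, link_bps):
    return transfer_time(size_bits, link_bps, True) < transfer_time(size_bits, link_bps, False)

print(should_compress(PAGE, BROADBAND))  # False: 0.07s compressed vs 0.04s raw
print(should_compress(PAGE, MOBILE))     # True: 2.05s compressed vs 4.0s raw
```

A context-aware delivery device would evaluate exactly this kind of decision per session, switching compression on or off as the user moves between link types.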

I’ve been calling this Unified Access and Application Delivery (UAAD), but I’m not alone as there is at least one other similar model out there: the NSA RAdAC model.[4] Their model is a little more, should I say ‘visionary’? Their implementation of ‘intelligence’ uses AI technology and I’m not sure we need to go to that extent, but hey—they have the money and resources to probably make it happen, right?

I’ll bet you’re sorry you gave me the soapbox on that one.

Not at all, Ken. Last question, can you tell us a bit about yourself? What do you do when you are not in front of a computer?

What? You can live without a computer? In all honesty, I work from home—so it seems like I’m always in front of the computer. When I’m not traveling around the world for F5, I spend the time with my wife and children (my wife and I have 5 between us)—my daughters and I play a lot of Guitar Hero. They keep me pretty busy. Other than that, I love to read (I’m rereading Cryptonomicon right now), I have a fairly large DVD and music collection (was just listening to Scott Joplin ragtime) and I have two ferrets named Mushu and Kekoa that make sure no day is uneventful. It’s a simple life.