Judging from what I saw while testing Juniper’s DX3680 load balancer, it’s clear that these devices have come a long way from their humble beginnings. In its simplest form, a load balancer is a device that sends TCP/IP requests to more than one host, creating a cluster of servers that all present the same Web site. In fact, basic load balancing can be accomplished by adding multiple IP addresses to a host entry in the domain name service (DNS) system. Doing this, however, creates a blind round-robin system that sends each request to the next IP address in turn, whether or not a working server answers at that address and regardless of whether that server is the best one to handle the request.

Most advances in load-balancing technology are aimed at ensuring that requests go to the server in the group best able to handle them. Various algorithms decide which server gets the next request — the least loaded or fastest responding server, for example. Proprietary algorithms, and even agents running on each server, can provide more granular and accurate information on how heavily loaded each server is. The process of determining whether servers are available has also grown more complex and precise, from a basic TCP/IP ping to see whether the server has a responsive network connection, to detailed checks that ensure a particular service — whether it’s the HTTP daemon or a back-end SQL server — is running and returning a proper response to a query.
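To make the idea concrete, here is a minimal sketch of the two pieces described above — a health check and a least-connections selection rule. The server names, ports, and connection counts are purely illustrative, and real load balancers implement this in hardware or tuned C, not Python:

```python
import socket

# Hypothetical server pool: (host, port, active_connections).
# Hosts and counts are made up for illustration.
servers = [
    ("web1.example.com", 80, 12),
    ("web2.example.com", 80, 4),
    ("web3.example.com", 80, 9),
]

def is_reachable(host, port, timeout=1.0):
    """Basic availability check: can we open a TCP connection?
    A richer check would issue an HTTP request and verify the response."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server(pool, healthy=is_reachable):
    """Least-connections selection over the servers that pass the health check."""
    candidates = [s for s in pool if healthy(s[0], s[1])]
    if not candidates:
        return None  # no healthy servers; a real device would fail over or error out
    return min(candidates, key=lambda s: s[2])
```

Swapping `is_reachable` for an application-level probe — fetching a known URL, or running a test query against the database — is exactly the progression from ping-style checks to service-level checks the article describes.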
As these basic capabilities mature, load-balancer companies strive to differentiate themselves in other ways. From the early systems built on PCs with two Ethernet cards, load balancers have evolved to include up to 24 switched Ethernet ports and custom ASICs running routing rules at Gigabit wire speed. Some systems add protection for Web servers and other application servers, guarding against buffer overflows, denial-of-service and other hacker attacks. Still others can route incoming traffic to specialized clusters of Web servers depending on the needs of the customer, so that e-commerce requests go to one cluster while video viewing is handled by another. A recent trend is to add Web acceleration technologies, including HTTP compression, caching, and consolidation of hundreds of TCP/IP connections down to a few. Many products offer a basic level of load-balancing functionality and let the customer add features via add-on software modules. F5, for instance, offers functionality similar to the Juniper system I recently reviewed. Zeus, on the other hand, offers basic functionality at a lower price, as does the Coyote Point system — I’ll have a review of that in the coming weeks.
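As a rough illustration of why the HTTP compression feature mentioned above pays off, consider how well a repetitive, text-heavy HTML payload shrinks under gzip — the same transformation a load balancer applies before sending a page to the browser. The payload here is invented for the example:

```python
import gzip

# Illustrative only: a repetitive HTML-like payload, such as a long table.
html = ("<tr><td>product</td><td>price</td></tr>\n" * 500).encode("ascii")

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)
# HTML markup is highly redundant, so the compressed size is typically a
# small fraction of the original, cutting bandwidth per response sharply.
```

Because the compression happens on the load balancer rather than on each Web server, the servers behind it are spared the CPU cost, which is part of the appeal of doing acceleration at this layer.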