by Chad Dickerson

RSS bandwidth blues

feature
Jul 30, 2004

Making RSS more manageable on the server side takes extra effort

My recent column “RSS growing pains” provoked passionate discussion in the blogosphere, which really picked up speed when the article was linked from Slashdot. Many readers pointed out ways to make RSS more manageable on the server side. I got the sense from various Weblog posts and e-mails that the word isn’t out on these methods, so consider this column my attempt to help.

In a post on his Weblog, Dare Obasanjo suggested two approaches that would help InfoWorld and other RSS feed providers limit bandwidth consumption. The first is HTTP compression, a simple but seldom-used capability of Web servers and browsers. HTTP compression is best illustrated by a simple example. Most Web browsers send a header to Web servers indicating whether they accept compressed content. The header generally looks something like “Accept-Encoding: gzip,” which indicates that the browser can decompress files compressed with gzip. When browsers make requests to an Apache server with the mod_gzip module installed, the Apache server applies gzip compression on the fly as clients request files. The result is a substantially smaller file sent to the client, thereby reducing bandwidth requirements.
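To get a feel for the savings, here is a small, self-contained Python sketch (the feed content is made up for the example) that compresses a repetitive RSS-style payload with gzip, the same codec mod_gzip negotiates via the Accept-Encoding header:

```python
import gzip

# Hypothetical RSS payload: repetitive XML, typical of a feed with many items.
feed = (
    "<?xml version='1.0'?><rss version='2.0'><channel>"
    "<title>Example Feed</title>"
    + "<item><title>Story</title>"
      "<description>Text of the story goes here.</description></item>" * 50
    + "</channel></rss>"
).encode("utf-8")

# What mod_gzip would send to a client that advertised gzip support.
compressed = gzip.compress(feed)
print(f"original: {len(feed)} bytes, compressed: {len(compressed)} bytes")
```

Because feed XML is highly repetitive, the compressed copy is typically a small fraction of the original, which is exactly the bandwidth reduction Obasanjo describes.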

The second method Dare proposed is the use of the HTTP conditional GET. A full explanation of this method requires more space than I have here, but a Google search for “HTTP conditional GET” will turn up Charles Miller’s “HTTP Conditional Get for RSS Hackers” page with all the details. To quote from Miller’s page, the logic behind a conditional GET request is simple: “If this document has changed since I last looked at it, give me the new version. If it hasn’t, just tell me it hasn’t changed and give me nothing.” The conditional GET combined with HTTP compression can make a huge performance difference — most newsreaders won’t pull an RSS feed unless it has changed, and when they do, the file will be compressed.
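The server side of that exchange can be sketched in a few lines of Python. The `respond` function below is a hypothetical helper, not part of any real server, but it captures the If-Modified-Since logic Miller describes:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def respond(last_modified, if_modified_since=None):
    """Return (status, body) for a GET, honoring If-Modified-Since.

    last_modified: datetime when the feed last changed.
    if_modified_since: raw header value from the client, if it sent one.
    """
    if if_modified_since is not None:
        client_copy = parsedate_to_datetime(if_modified_since)
        if last_modified <= client_copy:
            return 304, b""  # Not Modified: no body, almost no bandwidth
    return 200, b"<rss>...full feed here...</rss>"

feed_changed = datetime(2004, 7, 1, tzinfo=timezone.utc)
# Client last fetched on July 15 -- nothing new since then, so it gets a 304.
status, body = respond(
    feed_changed,
    format_datetime(datetime(2004, 7, 15, tzinfo=timezone.utc)),
)
print(status)  # prints 304
```

A real deployment would also handle ETag/If-None-Match validators, but the principle is the same: most polls end in a tiny 304 response rather than a full feed.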

In my experience, the annoyances in serving RSS have less to do with bandwidth and more to do with supporting regular surges of simultaneous connections from newsreaders. This is not a new problem, and there are a number of ways to solve it, listed here from cheapest to most expensive. First, configure your Web servers to handle a higher number of simultaneous connections; in the Apache world, that means setting MaxClients as high as your server can realistically support. Alternatively, you could use a high-speed front-end caching server such as the open source Squid to serve RSS clients more quickly. Finally, you can sign up with a third-party CDN (content delivery network) service such as Akamai or Speedera to handle some or all of your RSS load.
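For the Apache case, the relevant knobs live in httpd.conf. The fragment below is a sketch assuming the prefork MPM of Apache 2.0; the numbers are purely illustrative, and the right values depend on your server's memory and traffic:

```apache
# httpd.conf sketch -- illustrative values only, tune for your hardware
<IfModule prefork.c>
    ServerLimit        512
    MaxClients         512    # ceiling on simultaneous client connections
    KeepAliveTimeout   5      # release workers quickly between requests
</IfModule>
```

A short KeepAliveTimeout matters for newsreader traffic in particular, since each client makes one quick request per feed and then goes away until its next poll.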

Aside from the actions RSS providers can take to mitigate performance issues on their server farms, we can also pull for certain companies to succeed. One of the companies I’m pulling for is Bloglines, which provides a nice Web-based aggregator that I use daily. Bloglines not only acts as a proxy for a large pool of users (making one hourly request for each of our RSS feeds to serve hundreds of users) but also tells me the number of subscribers I have to each of my feeds in the requests they make to my Web server.

RSS traffic is not absolutely crushing InfoWorld’s Web servers, but scaling RSS traffic does require conscious thought and effort. With the right approach, mild annoyances can be overcome.