I got an interesting email from a client of mine today. He is the VP of IT for a far-flung health organization. Most of his infrastructure is Citrix MetaFrame-driven thin clients, with travelers using laptops to connect from the office and the road. As he moves through his various locations replacing aging PC and thin-client hardware with new clients, he's finding that he has quite a bit of lower-end PC hardware at (and requiring) his disposal.

One of the issues currently confronting the organization is that image updates for the Embedded NT thin-client devices come over the WAN, as do virus definition files for laptops, and so on. These updates aren't huge, but trying to update a few dozen or more clients from an update source on the other end of a 384k frame-relay link is painful at best, and it adversely impacts the network during business hours. Since there are no local servers at any remote location, it isn't possible to create a local repository for this data on-site, and adding a server at each location to the 100% Windows 2000 + Active Directory infrastructure would require more than $30k in licensing alone, plus the hardware. Instead, he asked me about the possibility of deploying several dozen reclaimed P-II 450 Dell systems running Linux to handle this task. What a great idea! I was honestly surprised to get this question from him, since we'd discussed Linux in the past and he'd thought it best to remain homogeneous throughout the enterprise.
Once the idea was in the air, though, I started engineering it: define a reference build of RH9, kickstart every build to accommodate unattended installation on mixed hardware, deploy custom post-install scripts to streamline the build process, and create static reservations in DHCP for the boxes, permitting a box to be built and shipped very quickly. In the build, integrate authentication with the AD domain, permit access to a few tasks via Webmin to somewhat flatten the learning curve, use Samba for the file sharing of course, and schedule rsync synchronization of the relevant directory trees after business hours, eliminating the need to push files across the WAN during the day unless it's extremely urgent. The hardest part would be the first one, of course, but once the kickstart process was streamlined, deploying the reference build on a wide variety of hardware would be very simple, and the price couldn't be better. The WAN links are better utilized, updates are much quicker, and the IT department can move faster to implement changes throughout the enterprise.

So with a whisper, enter Linux.
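The unattended-install piece could be sketched with a kickstart file along these lines. This is a minimal illustration, not the actual build: the NFS server name, partitioning choices, and package list are all assumptions.

```
# ks.cfg -- minimal unattended RH9 reference build (illustrative only)
install
nfs --server=buildhost.example.org --dir=/exports/rh9
lang en_US.UTF-8
langsupport --default en_US.UTF-8 en_US.UTF-8
keyboard us
network --device eth0 --bootproto dhcp
rootpw changeme              # placeholder; use --iscrypted in practice
timezone America/New_York
bootloader --location=mbr
clearpart --all --initlabel  # wipe the reclaimed disk
autopart
reboot

%packages
@ base
samba
rsync

%post
# site-specific post-install scripts would be fetched and run here
```

With mixed reclaimed hardware, the point of `autopart` and DHCP networking is that one file covers every box; only the post-install step needs to know anything site-specific.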
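The DHCP static reservations would be one `host` block per depot box in ISC dhcpd's configuration; the hostname, MAC address, and IP here are invented for illustration:

```
# dhcpd.conf excerpt -- pin each depot box to a known address
host depot-site01 {
    hardware ethernet 00:06:5b:aa:bb:cc;
    fixed-address 10.1.1.10;
    option host-name "depot-site01";
}
```

This way a box can be built at headquarters, shipped, and plugged in at the remote site with no on-site configuration at all.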
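The AD integration and file sharing could look roughly like the following smb.conf fragment, assuming a Samba build with ADS/winbind support; the workgroup, realm, idmap ranges, and share path are placeholders:

```
# smb.conf excerpt -- join the Windows 2000 AD domain via winbind
[global]
    workgroup = EXAMPLE
    realm = EXAMPLE.ORG
    security = ads
    encrypt passwords = yes
    winbind use default domain = yes
    idmap uid = 10000-20000
    idmap gid = 10000-20000

[updates]
    path = /srv/updates
    read only = yes
    guest ok = no
```

After Kerberos is configured, something like `net ads join -U Administrator` would enroll the box in the domain, so the thin clients and laptops authenticate against AD exactly as they do everywhere else.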
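And the after-hours synchronization is a one-line cron job pulling the update trees from headquarters; the schedule, rsync module name, and paths are assumptions:

```
# /etc/cron.d/sync-updates -- mirror the update trees nightly at 2:30 AM
30 2 * * *  root  rsync -az --delete rsync://hq.example.org/updates/ /srv/updates/
```

`--delete` keeps the local mirror an exact copy of the master, so stale definition files never linger at the remote sites.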