PDC 2005, Day 3 @ 1:44 pm (Windows Compute Cluster Solution)
One of the keynote presentations today was on Windows Compute Cluster Solution. Now, I've been working with Windows Clusters in a High Availability environment for some time now, so I've been very interested in what Microsoft's message was going to be in this product space. Microsoft has been getting their lunch handed to them in this area by Linux clusters for a long, long time. While it is currently possible to build High Performance Clusters on Windows without the Compute Cluster Solution, it is certainly not straightforward.
There are a number of hurdles that Microsoft faces in this product space:
* Linux clusters are a mature solution. Most existing clusters are built on Linux, and programmers/administrators are familiar with this environment.
* Linux is more flexible and scriptable from the command line. While third-party tools make Windows relatively scriptable (for example, you can install a Unix-like shell on Windows using Cygwin), these tools are merely bolt-ons for functionality that comes standard with Linux.
* Linux clusters are inherently cheaper. If I am building a 40- or 100-node cluster, per-server operating system licensing costs become a major portion of my budget; across 100 nodes, even a few hundred dollars per license adds up to tens of thousands of dollars. I'd rather spend this money on additional nodes.
* Linux clusters are easier to auto-rollout. Windows has RIS, and there are third-party imaging tools like Ghost, but it is trivial to script an automated installation on Linux. I'm mostly a Windows guy, but I was able to use Intel network cards with PXE to download a bootloader over TFTP, and then auto-image a Linux installation onto the box (a minimal config sketch follows this list). I do understand that all of these things are possible on Windows Server, but the infrastructure overhead is a bit higher.
* Linux HPC clusters may be configured for High Availability. Windows Compute Cluster Solution is not designed for High Availability. More on this later.
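For the curious, that PXE rollout boils down to surprisingly little configuration. Here is a minimal sketch of a PXELINUX config file; the paths, server address, and kickstart file name are hypothetical, and the details vary by distribution:

    # /tftpboot/pxelinux.cfg/default (hypothetical paths and addresses)
    # A node PXE-boots, pulls pxelinux.0 over TFTP, reads this file, then
    # fetches the kernel, initrd, and a kickstart answer file to image itself.
    DEFAULT linux
    PROMPT 0
    LABEL linux
        KERNEL vmlinuz
        APPEND initrd=initrd.img ks=http://10.0.0.1/ks/node.cfg

Every new node that network-boots picks up this file and images itself with zero keyboard time, which is exactly the kind of hands-off rollout I mean.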
--
So, I was disappointed in this morning's keynote coverage of Microsoft Compute Cluster Solution. As I'm a bit familiar with this product space, I found the demos misleading at best.
A quick rundown of my observations:
Bob Muglia (Senior VP, Windows Server Division) was demonstrating job distribution in a Microsoft Compute Cluster environment, and seamlessly added a node to an existing cluster. This was actually pretty impressive, though one would expect this functionality in a cluster solution. Everything worked smoothly, and the new node automatically took on existing jobs.
To demonstrate that cluster nodes may be removed (fail), Bob pulled the network cable from an existing node, and the head node removed that system from the list of available worker nodes.
What nobody seemed to notice was that the failover demo failed. I think a job got stuck, and they quickly hit the KVM switch before anyone caught it.
They then went on to demo a new feature of Excel 12: "Excel Server". The idea is that you can run an Excel spreadsheet on a server. For this demo, they ran a complicated spreadsheet on the cluster. The cool part was that you could upload the spreadsheet to the cluster, and the scheduler would dispatch the job to an available node. The node completed the work, and the results were returned to the client.
While this demo was cool, it totally misses the purpose of a cluster. Running the Excel spreadsheet on a cluster demonstrates what might be called an "embarrassingly serial" problem. The point of running on a cluster is that you have a problem that can be split into many parallel subtasks. Some problems, such as searching for prime numbers, are referred to as "embarrassingly parallel": no piece of the problem set is dependent on any other piece, so each node can work on its job in isolation.
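To make the distinction concrete, here is a rough Python sketch of an embarrassingly parallel prime count. This is purely my own illustration (it has nothing to do with the Compute Cluster APIs), with local worker processes standing in for cluster nodes:

    import multiprocessing

    def is_prime(n):
        # Trial division: slow, but every test is independent of every other.
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def count_primes(bounds):
        # Count primes in [lo, hi): one self-contained chunk of the problem.
        lo, hi = bounds
        return sum(1 for n in range(lo, hi) if is_prime(n))

    if __name__ == "__main__":
        limit, workers = 1_000_000, 4
        step = limit // workers
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]
        # No chunk needs data from any other chunk; that independence is
        # what makes the problem "embarrassingly parallel" and a good fit
        # for a cluster.
        with multiprocessing.Pool(workers) as pool:
            print(sum(pool.map(count_primes, chunks)))

Swap the process pool for real cluster nodes and the structure is the same. The Excel demo, by contrast, shipped one indivisible job to one node.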
Jobs like running an Excel spreadsheet are really not the class of problem that an HPC cluster is intended to solve. While the spreadsheet logic could be rewritten in C++ using similar algorithms to solve the same problem set, using an HPC cluster to run serial jobs is not the best use of this type of resource.
There was another statement that really got to me, however. In the demo, the phrase "high availability" was used. Specifically, when letting one of the cluster nodes fail, we were told that Compute Clusters support high availability, because the failure of a single compute node does not bring down the entire system.
However, it is extremely important to understand that Microsoft Compute Cluster Solution is NOT suitable for high availability environments. As currently designed, the head node (scheduler) is a single point of failure, and this service will NOT fail over to other nodes. So, if the head node goes down, your cluster is effectively down.
This was a really misleading demonstration, and I am disappointed with the 'sleight of hand' that was used here.
Don't be misled by the hype: Microsoft Compute Cluster Solution may have its place in number crunching in your organization. But don't think it's a way to distribute your existing applications across a clustered environment. It's not.