Tuesday, April 19, 2005

What does it mean to be scalable?

I just saw a reference to Linux being "scalable", but the term has been so abused in recent years that it has lost all real meaning. To my mind, the issue should be whether the operating system can automatically re-balance system load between processors and host computers, and even dynamically re-partition individual applications (at least at the thread level), rather than the system designer or administrator having to tediously set up scripts and parameter files (or even stay in the loop with a GUI) to do the balancing. In theory, you should be able to simply add boxes (or "blades") to a LAN (or WAN) and have the network operating system infrastructure do the balancing. Does Linux (or any other OS) actually do this today?
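
To make the idea concrete, here is a toy sketch (in Python, with entirely hypothetical node names and load figures) of the kind of automatic re-balancing I have in mind: a tiny greedy routine that keeps shifting work from the most loaded box to the least loaded box until the loads even out, with no human in the loop. This is not how Linux or any other real operating system does it; it only illustrates the concept.

    # Toy illustration only: greedily even out per-node load figures.
    # Node names and numbers are hypothetical, not measurements of any real system.

    def rebalance(loads, threshold=1.0):
        """Shift load between nodes until the max-min spread is within threshold."""
        loads = dict(loads)   # work on a copy of the load table
        moves = []
        while True:
            hot = max(loads, key=loads.get)    # most loaded node
            cold = min(loads, key=loads.get)   # least loaded node
            spread = loads[hot] - loads[cold]
            if spread <= threshold:
                break                          # close enough to balanced
            shift = spread / 2.0               # move half the difference
            loads[hot] -= shift
            loads[cold] += shift
            moves.append((hot, cold, shift))
        return loads, moves

    if __name__ == "__main__":
        # Hypothetical three-box cluster with uneven load.
        cluster = {"box-a": 9.0, "box-b": 2.0, "box-c": 4.0}
        balanced, moves = rebalance(cluster)
        for src, dst, amount in moves:
            print(f"move {amount:.1f} units of work from {src} to {dst}")
        print("resulting loads:", balanced)

The point of the sketch is simply that the decisions are made by the infrastructure itself, from observed load, rather than by an administrator editing scripts and parameter files.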

I believe that this is an essential requirement for future robust networked applications, but I strenuously object to claims that it is here today, since such claims pull the rug out from under any effort to initiate the kind of research projects needed to underpin this important computing concept.

I recognize that a common usage of the term is that an OS is "scalable" if the designers and developers of the OS are able to repackage it for hardware computing platforms of different sizes, ranging from embedded systems to mainframe-class servers. Unfortunately, that does not make software applications themselves truly "scalable" when it comes to handling load.

-- Jack Krupansky
