What does virtualization in the cloud mean to you? Or, maybe a better question is, where will ‘the cloud’ take us over the next decade?
Today's technical news is filled with talk of cloud computing, distributed and delegated computing, server virtualization, application virtualization, Virtual Desktop Infrastructure (VDI), zero clients, and so many other buzzwords. Google is all about managing your information in the cloud, Twitter hosts its data on Amazon's AWS, the newly released Windows Phone 7 leans heavily on cloud computing through Microsoft's Azure platform, and websites and tools everywhere are moving to a cloud-based model.
Virtualization is a key component of many technologies and innovations these days, but no one quite knows where we will be with virtualization in the next several months, let alone the next decade. There is one thing we can all agree on, though: growing and improving virtualization technologies will enable more cloud computing in the future.
The Stage Is Set
I think back to when my father brought home an amber-screen terminal in the mid-eighties with a 300bps modem, so that he could connect to his corporate mainframe from our home in eastern Nova Scotia, Canada, if he was stuck in a snowstorm. I remember him talking about the differences between thin-client and thick-client computing. In the 25 years since then, I've watched the tech world oscillate between thin-client computing, thick-client computing, and various hybrid models in between. I think back on the first Citrix MetaFrame seminar I attended in the '90s, where I saw how a single server could host multiple desktop sessions, each user retaining a personalized desktop experience (a turn back toward thin-client computing).
Fast forward to today and entire operating systems can be virtualized so that one server can host dozens of virtual servers. The application layer is virtualized so that complex applications can be housed in the data center and the tiniest little client app is all that is needed to run it from anywhere. Companies like Vaasnet (http://www.vaasnet.com) offer training labs, delegated customizable demo machines, and R&D environments instantly available on-demand.
In addition, high-speed Internet access is available to the vast majority of the modernized world, so virtualization is no longer just for corporate networks; virtualized infrastructure can be used by anyone, from anywhere (even over your home ISP's broadband connection). Cell phones provide hotspots while on the road, and even airlines are starting to provide Internet access at 30,000 feet.
So many puzzle pieces are falling into place to create a perfect environment for the world of virtualization to progress to the next level. The question now is how this applies to us today, and what the state of virtualization will look like in the future.
They say that history repeats itself. We saw a major technology cycle just over a decade ago with the Dot-com bubble, bust, and leveling out. I expect that we will see the same here, although I believe that we are at the early stages of this cycle for cloud computing.
The Future of Virtualization Technology in the Cloud
From a technology standpoint, what is needed to see cloud computing fully mature? I see four major underlying concepts still lacking but heading in the right direction.
1. Stabilization of Cloud-related Technologies
First, there needs to be a maturing of the cloud technologies and services that exist today. Quest Software recently released a survey reporting that 40% of respondents are not even interested in cloud services, and only 10% of IT organizations currently use cloud services in production! The survey revealed that this is not because cloud computing does not apply to the vast majority of the IT community, but because it is still new, not fully stabilized, and the jury is still out on how security will pan out. There are a lot of unknowns, but tremendous investment is being made to see these questions answered. Do not expect full answers soon, though; there will be years of maturing and fighting for standards before we see universal adoption. Yet that won't stop many organizations from adopting these virtualization technologies in the meantime. Look at HTML5 as an example, where adoption of a technology has gone ahead before the standards have been approved. We are already tracking in the right direction; we just need a stabilizing and maturing of what is already around us today.
2. Data Deduplication
Second, with so much data in the world, much of it identical, there is a pressing need for solutions in the area of data deduplication. Data deduplication (http://en.wikipedia.org/wiki/Data_deduplication) keeps just one copy of identical data to save valuable space. The technology is still in its infancy. Options are available for secondary storage such as backups and archiving, but there are few solutions that can deduplicate at line speed, at the performance levels that virtualization workloads require. Instead, storage capacity keeps growing at astronomical rates to make up for the lagging deduplication technology. Virtual servers, one of the core tenets of the cloud, are perfect candidates for data deduplication, since the operating system files and service packs are nearly identical between the different virtual server instances.
Not everything dedupes well though—take, for example, the 70 gigapixel picture of Budapest (http://70gigapixel.cloudapp.net/index_en.html), currently the largest picture in the world, which has very little data that will dedupe. But still, for the most part virtual servers dedupe very well.
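As a simplified illustration of why virtual servers dedupe so well, here is a toy content-addressed chunk store sketched in Python. The class name, chunk size, and sample "disks" are all invented for this example; a real dedup engine adds variable-size chunking, reference counting, and line-speed hashing.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}  # sha256 digest -> chunk bytes (stored once)

    def write(self, data):
        """Split data into fixed-size chunks; return the list of chunk keys."""
        keys = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            key = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(key, chunk)  # store only if not seen before
            keys.append(key)
        return keys

    def read(self, keys):
        """Reassemble the original data from its chunk keys."""
        return b"".join(self.chunks[k] for k in keys)

# Two "virtual disks" whose OS portions are identical:
disk_a = b"OS" * 8192
disk_b = b"OS" * 8192 + b"user data"

store = DedupStore()
keys_a = store.write(disk_a)
keys_b = store.write(disk_b)

# Physical storage used is far less than the logical total:
logical = len(disk_a) + len(disk_b)
physical = sum(len(c) for c in store.chunks.values())
```

Because every chunk of the shared OS data hashes to the same key, the store keeps a single copy no matter how many virtual disks reference it, which is exactly the property that makes near-identical virtual servers such good dedup candidates.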
3. Operating System Virtualization Offloading
The third area where we will see major growth as virtualization technologies mature will be operating system support in the cloud. Until now most of the progress in virtualization has been at the data center level. Even operating systems from years ago can be virtualized nearly as well as the newest operating systems. There is some client integration component support for newer operating systems, but for the most part, that is the extent of the operating system support for cloud computing.
Right now, most operating systems have no support for offloading small slices of their computing requirements to a third party. Imagine if an operating system could pull resources from disparate cloud providers around the world. Today the operating system tightly couples storage space, storage I/O, CPU, memory, user state, and every other aspect of the system. Expect to see these become separate units that can be decoupled and offloaded to third-party providers in the cloud.
As an example of operating system virtualization offloading, look at storage space within the operating system. Today I troubleshoot far too many servers that do not have enough storage to keep up with years of service packs and hotfixes. How I wish I could offload at least the C:\Windows\WinSXS folder to a cheap hard drive or even a cloud provider. In fact, why not point to a trustworthy storage location in the cloud that offers read-only access to all of the latest operating system binaries, and let Microsoft take care of maintaining them off the server? Of course that places large demands on the cloud and on Internet access to it, so offline solutions are still required. The question is how storage can be decoupled from the local hard drives on the server. Dropbox, Windows Live SkyDrive, and others are heading in the right direction but, again, have a long way to go.
Another example of operating system virtualization offloading is patching of the operating system, which is currently a mess for a number of reasons. Usually it is disruptive to the operation of the system, requiring services to be restarted or reboots to be performed. I realize that the *nix operating systems do better in this regard than Windows, but no operating system is without some impact during most upgrades. Additionally, the software is so tightly coupled to the operating system that issues during patching can affect unrelated technologies on the server. We need to see operating systems decouple the software they run from the operating system itself and allow online patching. It is no easy matter; otherwise we would have seen it years ago. But it is possible in time, and it is a key requirement for the future of cloud virtualization.
Here is one other example of operating system virtualization offloading: RAM. When a modern client or server operating system starts up, it uses hundreds of MB of RAM just for the base operating system. Why not share that among dozens, hundreds, or thousands of servers, so that only a pointer to the data is required for the base operating system? VMware's virtualization technologies offer memory deduplication, which helps some, but today the operating system itself does not support this. In fact, the latest versions of Windows client and server purposely randomize the layout of data in RAM so that it cannot be deduped. Security is the primary reason, but there are better ways to address this. We need operating system support before cloud computing can fully evolve. Google's soon-to-be-released Chrome OS is heading in the right direction by making the client as thin as possible and leaving the processing to the servers. However, it still does not address delegated processing of resources when there are heavy client-side processing demands.
4. Standardization of Virtualization Units
Finally, I believe we need to see standards evolve so that standard cloud units mean the same thing to everyone. If I want two minutes of compute power to render a video, I should be able to point to any cloud provider and expect the same results, with the process hidden from me while it works in the background. One place we already see this today is with hard disks: it is effortless to pull out a hard drive and replace it with one from a different vendor. Sure, there are some weaknesses at times, but the point stands that cloud computing needs standards so that what occurs under the hood is irrelevant to the consumer.
Wrapping It Up
The four areas described above are not an exhaustive list of the places where we will see major growth as virtualization technologies evolve. Today, the world of cloud computing is mature enough that there are many ways it can benefit you immediately. And as virtualization technologies mature, we will certainly see startups such as Vaasnet (http://www.vaasnet.com) take advantage of them by providing very useful cloud services to users. We have only just seen the beginning of virtualization technologies and cloud computing. It is exciting to imagine what the future may hold, and how the various hurdles that currently exist for cloud computing and virtualization will be overcome. There is no doubt that cloud computing is the way of the future, so we can embrace the cloud of today while looking forward with optimism to what is yet ahead.
AppCmd.exe and Configuration Editor are very useful tools for IIS 7.x administrators. Today I had a situation come up where I wanted to use AppCmd to script the trust level for all framework versions. It wasn’t immediately apparent how to target all framework versions, so I thought I would share my findings.
Changes to IIS are made to applicationHost.config or to a site’s web.config file(s). If settings are targeted at integrated mode, classic mode, bitness or framework version, that is achieved by filtering modules and handlers using the preCondition property. For example:
<add name="ManagedEngine64" preCondition="integratedMode,runtimeVersionv2.0,bitness64" image="...\Framework64\v2.0.50727\webengine.dll" />
However, changes to ASP.NET require that you update each configuration file for every framework version. If you have ASP.NET 1.1, 2.0 and 4.0 on a 64-bit server, this means that you have five configurations to manage if you want to support all five combinations.
Here are the five paths to the config folders for the five combinations mentioned above. Note that ASP.NET 1.1 only has a 32-bit version.

%windir%\Microsoft.NET\Framework\v1.1.4322\CONFIG
%windir%\Microsoft.NET\Framework\v2.0.50727\CONFIG
%windir%\Microsoft.NET\Framework64\v2.0.50727\CONFIG
%windir%\Microsoft.NET\Framework\v4.0.30319\CONFIG
%windir%\Microsoft.NET\Framework64\v4.0.30319\CONFIG
To edit the ASP.NET config files, you can edit them directly in your favorite editor, like Notepad, UltraEdit, or Notepad++. That can be a lot of work, though, and doesn't lend itself to automation. Alternatively, you can use tools like Configuration Editor or AppCmd.exe.
Configuration Editor at the site, application or path level
In Configuration Editor you can make changes at the server level or at the site/application level. If you make a setting at the site or application level, it either applies the setting in your site's web.config file, which works for all framework versions, or, if the change needs to go into the framework config files, it automatically knows where to apply it based on the framework version defined on the application pool.
Configuration Editor at the server level
However, for a server-level change you potentially have five files to target. You can change the framework version that Configuration Editor uses with the "Change .NET Framework Version" link in the Actions pane. Unfortunately this doesn't allow you to change the bitness. Hopefully we will see that level of control in future versions.
There is a solution for AppCmd, and that’s what I needed for my situation today. I asked Carlos Aguilar Mares for assistance since I couldn’t find the answer elsewhere. My blog post is inspired by his answers so he gets the credit for the answers on how to target the framework versions.
With AppCmd you can target the framework version and bitness by using a combination of the /clr:2 (or /clr:4) switch and two different copies of AppCmd.exe. AppCmd.exe doesn't currently have a switch to target the bitness directly, but there are two copies of AppCmd available to you, each targeting a different bitness. They are located at:
%windir%\System32\inetsrv\appcmd.exe (64-bit version)
%windir%\SysWOW64\inetsrv\appcmd.exe (32-bit version)
With this knowledge in hand, it's easy to create powerful scripts to automate installations for you. Following is an example of how to apply a setting (in this case, registering a custom HTTP handler) to each of the .NET 2.0 and 4.0 configurations. Take note of the AppCmd path at the beginning and the /clr switch at the end of each command.
c:\windows\system32\inetsrv\appcmd.exe set config -section:system.web/httpHandlers /+"[path='example.axd',type='...',validate='False',verb='GET']" /commit:webroot /clr:2
c:\windows\system32\inetsrv\appcmd.exe set config -section:system.web/httpHandlers /+"[path='example.axd',type='...',validate='False',verb='GET']" /commit:webroot /clr:4
c:\windows\SysWOW64\inetsrv\appcmd.exe set config -section:system.web/httpHandlers /+"[path='example.axd',type='...',validate='False',verb='GET']" /commit:webroot /clr:2
c:\windows\SysWOW64\inetsrv\appcmd.exe set config -section:system.web/httpHandlers /+"[path='example.axd',type='...',validate='False',verb='GET']" /commit:webroot /clr:4
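Since the same command must be repeated for every bitness/CLR combination, a small helper can generate the full set of command lines for you. Here is a minimal sketch in Python; the `build_commands` function is my own invention for illustration, and the commands are emitted as text rather than executed, so the %windir% variable would be expanded by the shell that eventually runs them.

```python
# Sketch: expand one AppCmd command across every bitness/CLR combination,
# producing the four command lines needed for the .NET 2.0/4.0 configs.
APPCMD_PATHS = [
    r"%windir%\System32\inetsrv\appcmd.exe",   # 64-bit AppCmd
    r"%windir%\SysWOW64\inetsrv\appcmd.exe",   # 32-bit AppCmd
]
CLR_VERSIONS = ["2", "4"]

def build_commands(args):
    """Return the same AppCmd arguments expanded across each combination."""
    return [f"{exe} {args} /clr:{clr}"
            for exe in APPCMD_PATHS
            for clr in CLR_VERSIONS]

# Example: print the four commands for a system.web change
for cmd in build_commands("set config -section:system.web/httpHandlers "
                          "/commit:webroot"):
    print(cmd)
```

Feeding the generated lines into a batch file (or running them via subprocess on the server) keeps the setting identical across all four configuration files.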
With this in mind, you have full flexibility to target any ASP.NET configuration location that you need.