This article introduces Virtualization, explains how it can be used in your school's IT system, and shows how Virtualization leads to Cloud Computing. The first part of the article – an explanation of Virtualization – is based on the excellent work in this area by David Chappell.
Virtualization is currently a hot topic for three main reasons:
- It reduces costs
- It allows the easier management of systems
- It’s a key component in Private Cloud Computing
To understand Virtualization, let's first look at a familiar computing scenario – running an application on top of an operating system, which in turn sits on some physical hardware. The application's user interface is presented on a display that's directly attached to the physical machine.
Whilst this scenario is extremely common, it’s not the only choice for how to deliver IT services – nor is it necessarily the best choice. Virtualization enables us to “uncouple” these elements, to deliver more manageable and cost effective computing services.
There are three principal Virtualization methods:
- Hardware Virtualization – where the Operating System is uncoupled from the Physical Machine that it runs on.
- Application Virtualization – where the Application is uncoupled from the Operating System.
- Presentation Virtualization – where the Application user interface is uncoupled from the Physical Machine that the application runs on.
These methods make the links between components easier to change and manage.
Let’s now look at each of these in turn.
With Hardware Virtualization, Virtual Machines (VMs) that emulate a physical computer are created on either a server or a client (e.g., a laptop or desktop) computer. This approach allows several Operating Systems, each with its own Applications, to run simultaneously on a single Physical Machine.
Desktop Virtualization can be used to run more than one operating system on a single computer to deal with application incompatibility. For example, an old application may not run with the latest version of Windows, so a VM can be set up to run an older version of Windows, enabling the older application to run.
Server Virtualization can bring significant economic benefits by enabling the consolidation of workloads onto a smaller number of physical (server) machines. In a Data Center it's common to find many under-utilized server machines, each dedicated to a specific workload. Server Virtualization allows those workloads to be consolidated onto a smaller number of more optimally used machines. The economic benefits include lower electricity consumption and less physical hardware to purchase, house and maintain.
A school network will usually have one server for each major IT service function, such as the Management Information System (MIS), Learning Management System (LMS), accounts, printing, and library systems. When a system is virtualized, these physical servers are replaced with VMs housed in clusters on a smaller number of physical servers. This has significant benefits in terms of savings, efficiency and reliability.
West Hatch, a Secondary School in England, shrank the number of physical servers needed to run their system effectively from 24 to 9. Virtualization increased the efficiency of their network whilst saving $18,000 a year in hardware, maintenance and electricity. A detailed case study is available here – West Hatch_Virtualization_Case_Study.
Virtualization provides the system with the ability to deal seamlessly with the failure of a server by automatically moving all its services to another – the system users wouldn’t even know it’s happened. VMs are stored as files, and so restoring a failed service can be as simple as copying the VM file onto a new machine. Since VMs can have different hardware configurations from the physical machine on which they’re running, this approach also allows restoring a failed service onto any available machine. There’s no requirement to use a physically identical system.
So what has this got to do with the Cloud? Virtualization enables a key feature of Cloud services – elasticity. For example, a service that runs only once every academic term (e.g., processing large volumes of academic data) only needs to be hosted on a server for the time it's actually required. The rest of the time, its VM can be stored away, reducing service costs.
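Some rough arithmetic illustrates the saving. The hourly rate and job duration below are invented for illustration – they are not real pricing from any provider:

```python
# Illustrative figures only: assumed hosting rate and usage pattern.

HOURS_PER_YEAR = 365 * 24
RATE_PER_HOUR = 0.10  # assumed cost of hosting one VM for one hour

# Keeping the service's VM running all year:
always_on_cost = HOURS_PER_YEAR * RATE_PER_HOUR

# Running the VM only while the termly job executes
# (say three runs a year, 72 hours each):
elastic_cost = 3 * 72 * RATE_PER_HOUR

print(f"Always on: ${always_on_cost:.2f} / year")
print(f"Elastic:   ${elastic_cost:.2f} / year")
```

Even with these made-up numbers, the point stands: a workload that exists only while it's needed costs a fraction of one that's hosted permanently.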
At West Hatch, the key technology used for Hardware Virtualization is Microsoft Hyper-V Server run from within Windows Server 2008.
Every application depends on its Operating System for a range of services, including memory allocation, device drivers, and much more. Applications commonly share various things with other applications on their system, and this can be problematic. For example, one application might require a specific version of a dynamic link library (DLL) to function, while another application on that system might require a different version of the same DLL. To avoid this, organizations often have to perform extensive testing before installing a new application – a time-consuming and expensive activity.
Application Virtualization solves this problem by creating application-specific copies of all shared resources. The objects that an application might share with other applications on its system — registry entries, specific DLLs, and more — are packaged with it in a Virtual Application (VA). When a VA is deployed, it uses its own copy of these shared resources.
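The idea can be sketched in a toy model. This is not how App-V is implemented – the class, resource names and version numbers below are invented for illustration:

```python
# A toy model of per-application copies of shared resources.
# Resource names and versions are invented.

shared_system = {"common.dll": "2.0"}  # the one machine-wide copy


class VirtualApplication:
    """Packages its own copies of resources it would otherwise share."""

    def __init__(self, name, packaged_resources):
        self.name = name
        self.resources = dict(packaged_resources)

    def resolve(self, resource):
        # The virtual application sees its packaged copy first, falling
        # back to the machine-wide copy only if it has none of its own.
        return self.resources.get(resource, shared_system.get(resource))


# An old application packaged with the DLL version it requires:
legacy = VirtualApplication("LegacyApp", {"common.dll": "1.0"})
# A newer application packaged with a different version of the same DLL:
modern = VirtualApplication("ModernApp", {"common.dll": "2.0"})
```

Both applications can now run side by side, each seeing the DLL version it was packaged with, with no conflict over the machine-wide copy.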
Application Virtualization makes deployment significantly easier since applications no longer compete for shared resources, eliminating the need to test new applications for conflicts with existing ones before they're rolled out. These virtual applications can run alongside ordinary applications.
Microsoft Application Virtualization, called App-V for short, is Microsoft's technology for this area. An App-V administrator can create virtual applications, and then deploy those applications as needed. West Hatch also virtualized applications; for a detailed description from Alan Richards, ICT Technical Lead at West Hatch, of how they did this using Microsoft App-V, click here.
Common applications, such as Microsoft Office, run and present their user interface on the same machine. Sometimes, however, it makes sense to decouple the running and presentation of an application – this is presentation virtualization: letting an application execute on a remote server while its user interface is displayed on another computer.
Presentation Virtualization allows applications to run in Virtual Sessions, each projecting their user interfaces to a remote client computer. Each session can run single or multiple applications.
Presentation Virtualization offers several benefits. For example, data isn't spread across many different systems; it's stored safely on a central server rather than on multiple desktop machines. Instead of updating each application on each individual desktop, only a single shared copy on the server needs to be changed. It also allows simpler desktop operating system images and "Thin Client" technology, both of which can lower management costs.
It's sometimes easier to run an application on a central server, and then use presentation virtualization to make it accessible to clients running any operating system. This can eliminate incompatibilities between an application and a desktop operating system. Presentation virtualization can also improve performance. If a client/server application pulls large amounts of data from a central database down to the client, and the network link between the client and the server is slow, the application will also be slow. One way to improve performance is to run the entire application – both client and server – on a machine with a high-bandwidth connection to the database, then use presentation virtualization to make the application available to its users. This way, only screen refreshes, mouse clicks and keystrokes travel over the slow connection.
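Some rough numbers make the bandwidth argument concrete. The link speeds and data sizes below are assumptions, not measurements:

```python
# Assumed figures for illustration: a 2 Mbps WAN link, a 1000 Mbps LAN,
# a 500 MB query result, and ~5 MB of screen updates for the session.

def transfer_seconds(megabytes, link_mbps):
    """Seconds to move `megabytes` of data over a `link_mbps` link."""
    return (megabytes * 8) / link_mbps  # 8 bits per byte

# Traditional client/server: the full result set crosses the slow WAN.
traditional = transfer_seconds(500, 2)

# Presentation virtualization: the bulk data moves over the fast LAN
# next to the database; only screen updates cross the WAN.
remote_display = transfer_seconds(500, 1000) + transfer_seconds(5, 2)
```

With these assumed figures, the traditional approach spends 2,000 seconds moving data over the WAN, while the remote-display approach spends about 24 – the heavy traffic never leaves the server room.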
Many schools use Thin Client computing, but it's not without limitations – e.g., the requirement for a high-bandwidth connection between the terminal and the server, especially if users need multimedia.
Presentation Virtualization technology is included in Remote Desktop Services – a standard part of Windows Server 2008 R2 with SP1.
Private Cloud exploits virtualization but takes it further. A Private Cloud shares many of the characteristics of Public Cloud computing – including resource pooling, self-service, elasticity and pay-per-use, delivered in a standardized manner – with the additional control and customization available from dedicated resources.
While virtualization is an important technological component of private cloud, the key differentiator is the continued abstraction of computing resources from infrastructure and the machines (virtual or otherwise) used to deliver those resources.
Other key components of Private Cloud are:
- Packaging and Managing Services
- Cross Platform Capabilities
- Cross Environment Capabilities
Packaging and Managing Services
A key differentiator between an ordinary Data Center and a Private Cloud solution is how services are packaged and managed.
Let’s start with a look at Services Management. As organizations begin to move from virtualized infrastructure to private cloud implementations, their focus begins to shift from virtual machines to applications and services.
One approach to this is to think of a service as a logical representation of an application. For example, consider a line-of-business application composed of a web tier, a business logic tier and a database tier. You then define a "service template" that captures the blueprint of this application service – the template would include hardware profiles, operating system profiles, application profiles, health/performance thresholds, update policies, scale-out rules etc. This is how each tier of your application service can be given the Cloud attributes described above.
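A service template of this kind is essentially structured data. Here is a hypothetical sketch of one for the three-tier application above – the field names and values are illustrative, not the actual System Center schema:

```python
# Hypothetical service template for a three-tier application.
# Field names and values are invented for illustration.

service_template = {
    "name": "LineOfBusinessApp",
    "update_policy": "rolling",
    "tiers": [
        {
            "tier": "web",
            "hardware_profile": {"cpus": 2, "memory_gb": 4},
            "os_profile": "Windows Server 2008 R2",
            "application_profile": "web front end",
            "health_thresholds": {"cpu_percent": 80},
            "scale_out": {"min_instances": 2, "max_instances": 8},
        },
        {
            "tier": "business_logic",
            "hardware_profile": {"cpus": 4, "memory_gb": 8},
            "os_profile": "Windows Server 2008 R2",
            "application_profile": "application server",
            "health_thresholds": {"cpu_percent": 75},
            "scale_out": {"min_instances": 2, "max_instances": 4},
        },
        {
            "tier": "database",
            "hardware_profile": {"cpus": 4, "memory_gb": 16},
            "os_profile": "Windows Server 2008 R2",
            "application_profile": "database server",
            "health_thresholds": {"cpu_percent": 70},
            "scale_out": {"min_instances": 1, "max_instances": 1},
        },
    ],
}
```

Because the whole service is described in one blueprint, the management tooling can deploy, monitor and scale each tier from that single definition rather than treating every VM individually.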
Designing and operationalizing such a set of services could potentially be complex, but System Center 2012 enables a simplified, visual approach.
The health and performance of all aspects of IT infrastructure – the physical layer, the virtualization layer, the operating system and the applications – also need to be managed, something that can be accomplished with System Center Operations Manager.
Cross Platform Capabilities
Services are rarely built from the ground up, so it's critical to ensure good interoperability between the different system components, which can be layered as follows:
- Application frameworks – e.g., .NET; Java; PHP; Ruby
- Management – System Center; HP; CA; BMC; EMC
- Operating Systems – Windows Server; Red Hat; SUSE; CentOS
- Virtualization (multiple-hypervisor management) – Hyper-V; Citrix; VMware
- Hardware – HP; Dell; Fujitsu; IBM; NEC; Hitachi; Cisco
Cross Environment Capabilities
As mentioned in previous "Cloud Watching" articles, it's unlikely that any education organization is going to want to migrate everything to a Public Cloud immediately. Rather, organizations are much more likely to spread workloads across on-premises systems, Virtualized Data Centers, and Private and Public Clouds.
Therefore, a key Private Cloud capability is a "single pane of glass" view to manage and run applications across private and public cloud environments. System Center App Controller 2012 offers full visibility and control to deploy, manage, and consume applications across these scenarios.
In conclusion, virtualization is a good starting place for developing Cloud capabilities within a single school environment. At municipality level and above, we can start using virtualization more extensively in Data Centers, and begin turning those Data Centers into Private Clouds by packaging and managing services and developing cross-platform and cross-environment capabilities.