Cloud Watching #5 – From Virtualization to Private Cloud

This article is written as an introduction to Virtualization: to help you understand how it can be used in your schooling system, and to explain how Virtualization leads to Cloud Computing. The first part of this article – an explanation of Virtualization – is based on the excellent work in this area by David Chappell.

Virtualization is currently a hot topic for three main reasons:

  1. It saves costs
  2. It allows the easier management of systems
  3. It’s a key component in Private Cloud Computing

To understand Virtualization, let's first look at a familiar computing scenario – running an application on top of an operating system, which in turn sits on some physical hardware. The application's user interface is presented via a display that's directly attached to a physical machine.

Whilst this scenario is extremely common, it’s not the only choice for how to deliver IT services – nor is it necessarily the best choice. Virtualization enables us to “uncouple” these elements, to deliver more manageable and cost effective computing services.

There are three principal Virtualization methods:

  • Hardware Virtualization – where the Operating System is uncoupled from the Physical Machine that it runs on.
  • Application Virtualization – where the Application is uncoupled from the Operating System.
  • Presentation Virtualization – where the Application user interface is uncoupled from the Physical Machine that the application runs on.

These methods make the links between components easier to change and manage.

Let’s now look at each of these in turn.

Hardware Virtualization

With Hardware Virtualization, Virtual Machines (VMs) that emulate a physical computer are created on either a server or a client (e.g., laptop or desktop) computer. This approach allows several Operating Systems, with Applications, to run simultaneously on a single Physical Machine.

Desktop Virtualization can be used to run more than one operating system on a single computer to deal with application incompatibility. For example, an old application may not run with the latest version of Windows, so a VM can be set up to run an older version of Windows, enabling the older application to run.

Server Virtualization can bring significant economic benefits by enabling the consolidation of workloads onto a smaller number of physical (server) machines. In a Data Center it’s common to find many under-utilized server machines, each dedicated to a specific workload. Server Virtualization allows those workloads to be consolidated onto a smaller number of better-utilized machines. The economic benefits include less electricity consumed and less physical hardware to purchase, house and maintain.

A school network will usually have one server for each major IT service function, such as the Management Information System (MIS), Learning Management System (LMS), accounts, printing, and library systems. When a system is virtualized, these physical servers are replaced with VMs that are housed in clusters on a smaller number of physical servers. This has significant benefits in terms of savings, efficiency and reliability.

West Hatch, a Secondary School in England, shrank the number of physical servers needed to effectively run their system from 24 to 9. Virtualization increased the efficiency of their network whilst saving $18,000 a year in hardware, maintenance and electricity. A detailed case study is available here – West Hatch_Virtualization_Case_Study.

Virtualization provides the system with the ability to deal seamlessly with the failure of a server by automatically moving all its services to another – the system users wouldn’t even know it’s happened. VMs are stored as files, and so restoring a failed service can be as simple as copying the VM file onto a new machine. Since VMs can have different hardware configurations from the physical machine on which they’re running, this approach also allows restoring a failed service onto any available machine. There’s no requirement to use a physically identical system.

So what has this got to do with the Cloud? Virtualisation enables a key feature of Cloud services – elasticity. For example, a service that only runs once every academic term (e.g., processing large volumes of academic data) only needs to be hosted on a server for as long as it’s required. The rest of the time, it can be stored away, reducing service costs.
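
To make the cost argument concrete, here is a minimal sketch with entirely assumed figures (hourly VM price, hours of processing per term) – illustrative arithmetic only, not West Hatch data or real Azure pricing:

```python
# Illustrative arithmetic only - every figure below is an assumption,
# not a real tariff or real West Hatch data.
HOURS_PER_RUN = 72        # assumed time to process one term's data
RUNS_PER_YEAR = 3         # once per academic term
HOURS_PER_YEAR = 24 * 365
HOURLY_VM_COST = 0.12     # assumed hosting cost per VM hour (USD)

always_on = HOURS_PER_YEAR * HOURLY_VM_COST
elastic = HOURS_PER_RUN * RUNS_PER_YEAR * HOURLY_VM_COST

print(f"Dedicated server all year: ${always_on:,.2f}")
print(f"VM run only when needed:   ${elastic:,.2f}")
print(f"Saving from elasticity:    ${always_on - elastic:,.2f}")
```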

At West Hatch, the key technology used for Hardware Virtualization is Microsoft Hyper-V, run from within Windows Server 2008.

Application Virtualization

Every application depends on its Operating System for a range of services, including memory allocation, device drivers, and much more. Applications commonly share various things with other applications on their system, and this can be problematic. For example, one application might require a specific version of a dynamic link library (DLL) to function, while another application on that system might require a different version of the same DLL. To avoid this, organizations often have to perform extensive testing before installing a new application – a time-consuming and expensive activity.

Application Virtualization solves this problem by creating application-specific copies of all shared resources. The objects that an application might share with other applications on its system — registry entries, specific DLLs, and more — are packaged with it in a Virtual Application (VA). When a VA is deployed, it uses its own copy of these shared resources.

Application Virtualization makes deployment significantly easier since applications no longer compete for shared resources, eliminating the need to test new applications for conflicts with existing applications before they’re rolled out. These virtual applications can run alongside ordinary applications.

Microsoft Application Virtualization, called App-V for short, is Microsoft’s technology for this area. An App-V administrator can create virtual applications, and then deploy those applications as needed. West Hatch also virtualized applications – for a detailed description from Alan Richards, ICT Technical Lead at West Hatch, of how they did this using Microsoft App-V, click here.

Presentation Virtualization

Common applications, such as Microsoft Office, both run and present their user interface on the same machine. Sometimes, however, it makes sense to de-couple the running and the presentation of an application – this is presentation virtualization. It lets an application execute on a remote server while displaying its user interface on another computer.

Presentation Virtualization allows applications to run in Virtual Sessions, each projecting their user interfaces to a remote client computer. Each session can run single or multiple applications.

Presentation Virtualization offers several benefits. For example, data isn’t spread across many different systems – it’s stored safely on a central server rather than on multiple desktop machines. Instead of updating each application on each individual desktop, only a single shared copy on the server needs to be changed. It also allows the use of simpler desktop operating system images and “Thin Client” technology – both of which can lower management costs.

It’s sometimes easier to run an application on a central server, and then use presentation virtualization to make the application accessible to clients running any operating system. This can eliminate incompatibilities between an application and a desktop operating system. Presentation virtualization can also improve performance. If a client/server application pulls large amounts of data from a central database down to the client, and the network link between the client and the server is slow, the application will also be slow. One way to improve performance is to run the entire application – both client and server – on a machine with a high-bandwidth connection to the database, then use presentation virtualization to make the application available to its users. This way, only screen refreshes, mouse clicks and keyboard strokes are sent over the connection.
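
A rough back-of-envelope comparison makes the point. All of the figures below are assumptions chosen only to show the ratio between shipping query data and shipping screen updates:

```python
# Rough, assumed figures - the point is the ratio, not the absolute numbers.
ROWS_PER_REPORT = 200_000      # rows a client/server app might pull per report
BYTES_PER_ROW = 500            # assumed average row size
LINK_MBPS = 2                  # assumed slow link between client and server

data_megabytes = ROWS_PER_REPORT * BYTES_PER_ROW / 1_000_000
data_seconds = data_megabytes * 8 / LINK_MBPS       # megabits over the slow link

SCREEN_UPDATE_KB = 50          # assumed size of one compressed screen refresh
screen_seconds = SCREEN_UPDATE_KB / 1_000 * 8 / LINK_MBPS

print(f"Shipping the data to the client: ~{data_megabytes:.0f} MB, ~{data_seconds:.0f} s")
print(f"Shipping one screen refresh:     ~{SCREEN_UPDATE_KB} KB, ~{screen_seconds:.2f} s")
```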

Many schools use Thin Client computing, but it’s not without its limitations – e.g., the requirement for a high-bandwidth connection between the terminal and the server, especially if users require multimedia.

Presentation Virtualization technology is included in Remote Desktop Services – a standard part of Windows Server 2008 R2 with SP1.

Private Cloud

Private Cloud exploits virtualization but takes it further. A Private Cloud shares many of the characteristics of Public Cloud computing – including resource pooling, self-service, elasticity and pay-by-use, delivered in a standardized manner – with the additional control and customization available from dedicated resources.

While virtualization is an important technological component of private cloud, the key differentiator is the continued abstraction of computing resources from infrastructure and the machines (virtual or otherwise) used to deliver those resources.

Other key components of Private Cloud are:

  • Packaging and Managing Services
  • Cross Platform Capabilities
  • Cross Environment Capabilities

Packaging and Managing Services

A key differentiator between an ordinary Data Center and a Private Cloud solution is how services are packaged and managed.

Let’s start with a look at Services Management. As organizations begin to move from virtualized infrastructure to private cloud implementations, their focus begins to shift from virtual machines to applications and services.

One approach is to think of a service as a logical representation of an application. For example, consider a line-of-business application composed of a web tier, a business logic tier, and a database tier. You then define a “service template” which captures the blueprint of this application service – the template would include hardware profiles, operating system profiles, application profiles, health/performance thresholds, update policies, scale-out rules and so on. This is how each tier of your application service can then be given the Cloud attributes described above.
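
As a rough illustration, a service template can be thought of as a structured document. The sketch below uses hypothetical field names that loosely mirror the profiles listed above – it is not the actual System Center 2012 template format:

```python
# Hypothetical structure only - field names are illustrative, not the
# actual System Center 2012 service template schema.
service_template = {
    "name": "LineOfBusinessApp",
    "tiers": {
        "web": {
            "hardware_profile": {"vcpus": 2, "ram_gb": 4},
            "os_profile": "Windows Server 2008 R2",
            "application_profile": "WebFrontEnd",
            "scale_out": {"min_instances": 2, "max_instances": 6,
                          "add_instance_when_cpu_above": 0.75},
            "health_thresholds": {"max_response_ms": 500},
        },
        "business_logic": {
            "hardware_profile": {"vcpus": 4, "ram_gb": 8},
            "os_profile": "Windows Server 2008 R2",
            "application_profile": "AppServer",
            "scale_out": {"min_instances": 1, "max_instances": 4},
        },
        "database": {
            "hardware_profile": {"vcpus": 4, "ram_gb": 16},
            "os_profile": "Windows Server 2008 R2",
            "application_profile": "SQLServer",
            "update_policy": "maintenance_window_only",
        },
    },
}
```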

Designing and operationalizing such a set of services could potentially be complex, but System Center 2012 enables a simplified and visual approach.

The health and performance of all aspects of IT infrastructure – including the physical layer, the virtualization layer, the operating system and the applications – need to be managed too, something that can be accomplished with System Center Operations Manager.

Cross-Platform Capabilities

Services are rarely built from the ground up, so it’s critical to make sure that there is good interoperability between different system components, which can be layered in this way:

  • Application frameworks – e.g. .NET; Java; PHP; Ruby
  • Management – System Center; HP; CA; BMC; EMC
  • Operating Systems – Windows Server; Red Hat; SUSE; CentOS
  • Virtualisation (multiple hypervisor management) – Hyper-V; Citrix; VMware
  • Hardware – HP; Dell; Fujitsu; IBM; NEC; Hitachi; Cisco

Cross Environment Capabilities

As mentioned in previous “Cloud Watching” articles, it’s unlikely that any education organisation is going to want to migrate everything to a Public Cloud immediately. Rather, organisations are much more likely to spread workloads across on-premises, Virtualised Data Centers, Private and Public Clouds.

Therefore, a key Private Cloud capability is to have a “single pane of glass” view to manage and run applications across private and public cloud environments. System Center App Controller 2012 offers full visibility and control to deploy, manage, and consume applications across each of these service scenarios.

In conclusion, virtualisation is a good starting place for developing Cloud capabilities within a single school environment. At municipality level and above we can start thinking about using virtualisation more extensively in Data Centres, and start turning Data Centres into Private Clouds by packaging and managing services and developing cross-platform and cross-environment capabilities.

Further information: http://www.microsoft.com/investor/downloads/events/Microsoft_Private_Cloud_Whitepaper.pdf

Cloud Watching #4 – Managing Learning Content

In the old days it was simple. Agree a curriculum; approve and distribute the books; get teachers to push the contents into empty minds.

Since then everything has changed, especially:

  • The need for students to learn more effectively
  • Students’ appetite for active rather than passive learning experiences
  • Explosive growth of content and ease of access to it

So what does all this mean for learning content, and how it gets managed? On the one hand it could mean chaos as schooling systems deal with extreme complexity – infinite permutations of content types, authoring, storage, categorization, search, access, retrieval, and rendering methods. On the other hand, managed properly, it means the right content built or used by the right person at the right time – making learning significantly more effective. The ease with which learners acquire ideas, concepts and knowledge is a function of the availability of engaging learning content and how it is used, so managing content effectively is critical to improving learning effectiveness.

It’s no longer sufficient to think of learning content as a one-way street terminating in the minds of “empty headed” learners. It’s pretty clear that learning is much more effective when students create content rather than just consume it, and the proliferation of easy-to-use content development tools means that students themselves can produce professional standard learning content.

Given the explosion of web content and ease of access to it, the role of publishers is changing quickly too. Publishers have long been considered bastions of authoritative content, but back in 2005 Nature Magazine concluded that Wikipedia and Encyclopedia Britannica were virtually equal in terms of the accuracy of their scientific articles. The challenge for publishers now is to be authoritative, relevant and engaging – not just providing the answers but the conditions in which learners construct their own answers. Learning content has to become much more interactive, immersive, challenging and fun, and it also has to connect to systems that enable intelligent intervention, manage the learning process, and provide analysis.

Schooling systems are faced with bewildering choices when it comes to architecting Learning Content Management Systems (LCMS), so a good place to start is with some questions about what outcomes should be expected from investments in this space. E.g. how do we:

  • Manage content to ensure that the most effective learning takes place
  • Exploit content creation, management, and consumption technologies
  • Leverage new models of content production
  • Ensure that publishers can maintain profitability and invest in R&D
  • Minimise costs and maximise the “Content Economy”

To help frame this discussion we can look to the work of Microsoft Research and their Higher Education project entitled “Technologies for the Scholarly Communications Lifecycle”. Here they describe six distinct areas for supporting the lifecycle of scholarly content. Adapting this for managing learning content within a Schooling Enterprise Architecture, we arrive at the following model:

Figure 1. Learning Content Lifecycle for Schooling Enterprises

But before we go any further, what exactly do we mean by learning content?

WHAT IS LEARNING CONTENT?

At one end of the spectrum there are widely available digital entities from which someone can learn – from sophisticated Silverlight or Flash applications to video clips to plain text. At the other end of the spectrum there are highly structured learning content packages designed to meet specific learning objectives.

A key concept in learning content is the “Learning Object” – a self-contained package with a clear educational purpose, containing:

  • Learning content – digital entities including text, images, sound, video
  • Learning tasks
  • Interface to a workflow system so the next learning task can be appropriately set
  • The means by which to assess what learning has resulted
  • Metadata including – learning objective; prerequisite skills; topic; the “interaction model”; technology requirements; educational level; relationships to other learning objects; rights

Ideally, it should be possible to:

  • Edit a Learning Object so it can be tailored to precise requirements
  • Group it into larger collections of content, including longer course structures

Conveniently, there is a standard for how learning objects should be constructed and used. The Sharable Content Object Reference Model (SCORM) is a standard that defines communications between learning content and learning management systems, and how a learning object should be packaged into a transferable ZIP file. (See below for further details).
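
To make the packaging idea concrete, here is a minimal sketch of building a SCORM-style package (the “Package Interchange Format” ZIP) in Python. The manifest is heavily simplified and not schema-valid; it only shows the general shape of a package with an imsmanifest.xml at its root:

```python
import zipfile

# Heavily simplified: a real SCORM manifest must follow the full IMS schema;
# this sketch only shows the general shape of a Package Interchange Format ZIP.
manifest = """<?xml version="1.0"?>
<manifest identifier="fractions-intro">
  <organizations>
    <organization>
      <title>Introduction to Fractions</title>
      <item identifierref="res1"><title>Lesson 1</title></item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="res1" type="webcontent" href="lesson1.html"/>
  </resources>
</manifest>
"""

with zipfile.ZipFile("fractions-intro.zip", "w") as pif:
    pif.writestr("imsmanifest.xml", manifest)       # manifest sits at the package root
    pif.writestr("lesson1.html", "<h1>What is a fraction?</h1>")
```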

Advances in technology are also changing views about what actually counts as content.  For example, it could be argued that threads of dialogue through blogs, wikis and instant messaging are forms of content production.

CREATING LEARNING CONTENT

The old steps-and-stages, linear, age-cohorts and classes-dominated, subject-orientated curriculum is being superseded. Its successor is a “Thinking Curriculum”, based on a search for knowledge and on developing competencies rather than consuming content. The Thinking Curriculum is information rich, multi-layered, and connected.

With the creation of high quality content now relatively easy to accomplish, we have to ask a fundamental question – “who gets to produce learning content?” As explored in “High Performance Schools”, a key way to achieve effective learning is to get students to create their own content and then have peers review it. With cheap webcams; basic video editing software; drawing, graphics, and productivity software; and web development and portal tools, it’s increasingly easy to get great results from this approach.

There will always be a role for professionally produced, authoritative content. However, the world of publishing needs to embrace the idea that students and teachers will increasingly want to build their own learning resources from individual learning objects, in much the same way as building models using Lego®.

MANAGING CONTENT

There are essentially two types of content – structured and unstructured. Structured content is that which has been classified, and stored in a way that makes it easy to be found and used. Unstructured content is all other content.

Imposing structure and order on the exponentially expanding unstructured world of user-generated content is a major challenge for all organizations.

 

Figure 2. Unstructured content grows exponentially

Key concepts in Content Management include:

  • Document Management
  • Web Content Management
  • Rich Media Management
  • Archiving and Library Services
  • Scanning (Image and Capture)
  • Document Output Management
  • Workflows
  • Learning Process Management

Learning Content Management Systems (LCMS) help schooling systems organise and facilitate the collaborative creation of learning content, providing developers, authors and subject matter experts with the means to create and use learning content. They enable the management of the full life cycle of content – from initial creation to consumption and re-creation by end users. They feature repositories, library systems, curriculum frameworks, curriculum systems, curriculum exemplars and resource assemblers.

A LCMS enables:

  • Efficient search and retrieval
  • Ease of authoring across a learning community
  • Rapid customisation for various audiences

An LCMS should enable seamless collaboration between subject matter experts, designers, teachers, and learners. It should enable content to be made available through a wide array of output types – such as structured e-learning courses, lesson plans, single learning objects – and output devices such as PC, phone or TV.

Learning Content Management Systems differ significantly from Learning Management Systems (LMS) in as much as an LCMS should be used to “feed” content to one or more LMS.

Figure 3. LCMS feeds learning content to LMS

Key LCMS Functions

Based on the Association for Information and Image Management’s specifications, a Learning Content Management System should have the following features and functions:

Categorization/Taxonomy

A taxonomy provides a formal structure for information, based on the specific needs of a schooling system. Categorization tools automate the placement of content (learning objects, documents, images, email, text etc) for future retrieval based on the taxonomy. A key question is: who is responsible for, and who is allowed to, categorise content and edit the categorisation data?

Indexing

Additional meta-data supporting information retrieval – this can be based on keywords or full-text.

Document Management

Document management technology helps organisations better manage the creation, revision, approval, and consumption of documents used in the learning process. It provides key features such as library services, document profiling, searching, check-in, check-out, version control, revision history, and document security.

Web Content Management

This addresses the content creation, review, approval, and publishing processes of Web-based content. Key features include creation and authoring tools, input and presentation template design and management, content re-use management, and publishing capabilities.

Digital Asset Management (DAM)

Similar in functionality to document management, DAM is focused on the storage, tracking, and use of rich media documents (video, logos, images, etc.). Digital assets typically have high intellectual property (IP) value.

Repositories

A repository can be a sophisticated system that costs hundreds of thousands of dollars, or a simple file folder system. The key is to have information that can be found once it is placed in the system.

Syndication

Distribution of content for reuse and integration into other content.

Personalization

Based upon data about a student’s learning history, their learning style and what they need to learn next, types of content and specific learning objects can be delivered to best match the student’s needs.
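
A toy sketch of the matching logic might look like the following – the data, field names and rules are entirely hypothetical, and a real system would draw them from the student profile and learning object metadata described elsewhere in this article:

```python
# Hypothetical data and field names, purely to illustrate the matching idea.
learning_objects = [
    {"id": "LO-101", "objective": "fractions", "style": "visual", "level": 4},
    {"id": "LO-102", "objective": "fractions", "style": "text", "level": 4},
    {"id": "LO-201", "objective": "decimals", "style": "visual", "level": 5},
]

student = {"next_objective": "fractions", "preferred_style": "visual", "level": 4}

def recommend(objects, student):
    """Return objects matching the student's next objective, level and style."""
    return [o for o in objects
            if o["objective"] == student["next_objective"]
            and o["level"] == student["level"]
            and o["style"] == student["preferred_style"]]

print(recommend(learning_objects, student))   # -> the "LO-101" record
```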

Search/Retrieval

One of the greatest benefits of a well-architected LCMS is the ability to get out what you put in with the minimum of effort. Indexing, taxonomy, repository services, relevance, and social cues should make locating specific content in a schooling system easy. Search functions should include:

  • Best Bets
  • Metadata-based Refinement
  • People and Expertise Search
  • Recently Authored Content
  • Defined Scopes
  • Focused Search – site, local, enterprise and web
  • Taxonomy and Term Store Integration
  • View in Browser

Infrastructure Technologies

Supporting these functions are core infrastructure technologies including:

  • Storage
  • Content Integration
  • Migration
  • Backup/Recovery

DRM

Protecting copyrighted content is essential to drive a vibrant “Content Economy”. Ensuring that creators of content get what they deserve for their work is a cornerstone of the Knowledge Economy – the development of which is the aim of many governments. DRM does this by encrypting content so that usage and copying are limited to what has been agreed between the publisher and the customer.

EXPOSING CONTENT

Producing content and storing it is relatively easy, but organizing it to make it easy to find is an altogether different matter. People in large enterprises spend huge amounts of time looking for content, and making it easier to find specific content in schooling systems is core to making them more effective.

Search can help, of course, but the key to making content easy to find is in structuring it well. There is no one right answer for this, but one way of thinking about it is to start by categorising people first and then categorising the content:

Communities

Ideally, content should be exposed to people according to what role they have in the organisation – this is known as “role-based” knowledge architecture. A teacher, for example, should be able to access different content from that available to learners.

Sites

Once communities of users have been defined, sites can be created to serve their specific content needs. Sites are aggregation points for a mix of types of content and methods for surfacing this content.

Libraries

Within a site there can be several libraries, each one categorising content by subject, topic, phase of learning, etc. Categorised content should contain metadata making it easier to find what the user is looking for.

Galleries

For more visual content, it may be easier for the user to flick through a set of images to find what they are looking for – galleries provide this function.

Wikis

A wiki is a website that allows the collaborative creation and editing of interlinked web pages via a browser. This technology has been around for at least 15 years, but its use as a general teaching tool is still in its infancy. However, an increasing number of universities are now adopting them as a teaching tool – see http://www.nytimes.com/2011/05/02/education/02iht-educSide.html?ref=education.

Blogs

Personal spaces for building and publishing content such as blogs or “MySites” give users a way of quickly exposing their thinking to a wider audience to express viewpoints and get feedback.

Figure 4. Structuring content starts with classifying users

LEARNING CONTENT MANAGEMENT ARCHITECTURES

Key Concepts

Roles

A key starting point in architecting a LCMS is determining who the users of the system are and what roles can be assigned to them.

Across the schooling enterprise, publishing house staff, experts, teachers, teaching assistants, administrators, students, and even parents could all – in theory at least – take on one or more of these roles:

  • Creator – responsible for creating and editing content.
  • Editor – responsible for tuning the content message and the style of delivery, including translation and localisation.
  • Publisher – responsible for releasing the content for use.
  • Administrator – responsible for managing access permissions to folders and files, usually accomplished by assigning access rights to user groups or roles. Admins may also assist and support users in various ways.
  • Consumer, viewer or guest – the person who uses the content after it is published or shared.

Questions raised by the SULINET experience suggest the following considerations:

  • Who is the principal audience – teachers, students, parents?
  • Who can publish – teachers, students, parents, experts, 3rd party publishers?
  • What incentives are there to encourage contributions?
  • How will Quality Assurance work?
  • What about peer review/rating systems?
  • Should all contributors be allowed to create, publish or edit a Learning Object?
  • Who is the legal owner of a Learning Object – teacher, school, or district?
  • How will logical groupings work? Is it possible/desirable to have national level admin and users, or should groupings work at lower levels such as:
    • District or conglomerate of schools
    • Individual School
    • Grade levels (e.g. Year 10)
    • Subject areas (e.g. Maths)

Standards

Another key consideration is the role of standards. There are many standards covering content, and the following are the key standards specifically designed for learning content:

SCORM – Sharable Content Object Reference Model – is a collection of standards and specifications for learning objects (Shareable Content Objects, or SCOs). It defines communications between learning objects and a host learning management system. SCORM also defines how content can be packaged into a transferable ZIP file called “Package Interchange Format”. SCORM defines:

  • Content Aggregation Model
  • Runtime Environment
  • Sequencing & Navigation

IMS Global Learning Consortium is concerned with establishing interoperability for learning systems and learning content. IMS publishes specifications for content packaging, enterprise services and digital repositories.

Dublin Core, defined by the International Organization for Standardization (ISO), provides metadata descriptions for most learning resources – digital and physical – so they can be described and catalogued. Implementations of Dublin Core typically make use of XML.

CDN

A content delivery network or content distribution network (CDN) caches data at various nodes of a network. A CDN can improve access to the data it caches by increasing access bandwidth and redundancy and reducing access latency. Data content types often cached in CDNs include web objects, downloadable objects, applications, realtime media streams, and database queries.

Blobs

A blob (alternately known as a binary large object, basic large object, BLOB, or BLOb) is a collection of binary data stored as a single entity in a database management system. Blobs are typically images, audio or other multimedia objects, though sometimes binary executable code is stored as a blob.

Scenarios

In the simplest model, the “industrial schooling” approach of pushing book based content into the “empty minds” of learners is digitized:

1. Government sets the curriculum

2. Publishers convert curriculum into content

3. Schools buy content

4. Teacher delivers content

5. Students receive content

Figure 5. Top down approach has limited effectiveness

The SULINET example featured earlier in this blog offers a more sophisticated, “connected learning community” approach. Here, reusable combinations of learning units are stored in a central database. Classification, and the use of metadata and sophisticated enterprise search, makes it easy for users to locate and retrieve content. The smallest digital objects can be independently used or combined together to form learning objects. A curriculum editor application enables users to develop their own learning content.

Extending this further still, in the model below the central repository is connected to external content publishers, online content marketplaces and the World Wide Web. It exploits Cloud technology to drive out infrastructure and management costs, enable flexible scale, and increase reliability and speed.

1. Publishers research and develop new learning packages and make these available for different learning styles

2. Teachers look for materials for specific learning opportunities, and assemble objects into packages for students

3. Teacher assigns learning packages to students

4. Students work in teams to create new content from learning packages

5. Students submit their assignments to the teacher

6. The best new content from teachers and students gets added to content repository

7. The repository receives content through online market places and the web

8. Standards and processes are overseen by a curriculum content committee which uses data to make editorial decisions

Figure 6. An integrated “learning content economy”

Conceptual Design

Converting this usage scenario into a high-level conceptual design, we can break down the key processes into three chunks – Creation, Management and Consumption. As discussed at the outset, however, Consumption and Creation should increasingly be seen as part of the same process – i.e. learning is part consuming and part producing content.

Figure 7. Conceptual design for a Cloud based Learning Content Management System

Key Products

Creation

Technologies such as Expressions, Visual Studio, and the Adobe Creative Suite are used extensively by professional content developers. DreamSpark is enabling a growing number of students to produce professional quality content too.

Management

Windows and SQL Azure

In the above Schooling Enterprise Architecture Learning Content Management model, the core Cloud-based content management technologies are Windows Azure and SQL Azure, and the following features are exploited:

  • Compute is a service which runs managed applications in an Internet-scale hosting environment.
  • Storage stores data including blobs – large binary objects, such as videos and images.
  • AppFabric manages users’ permissions and authenticated use of web applications and services, integrated with Active Directory and web based identity systems including Windows Live ID, Google, Yahoo! and Facebook.
  • Content Delivery Network – places copies of web objects (images and scripts), downloadable objects (media files, software, and documents), applications, real time media streams, and other components, close to users. This results, for example, in the smooth streaming of video to Silverlight and Android clients without requiring any software development, management or configuration.

Figure 8. Windows Azure CDN speeds up delivery of content

  • Marketplace – data, imagery, and real-time web services from leading commercial data providers and authoritative public data sources. The Windows Azure Data Marketplace will also contain demographic, environmental, weather and financial datasets. An Application Marketplace will enable developers to easily build applications for Azure.

SQL Azure can also be exploited to provide the following services:

  • Database – a relational database, providing services to multiple organisations.
  • Data Sync – synchronisation between an organisation’s current SQL on-premises databases and SQL Azure Databases in the Cloud.
  • Reporting – a complete reporting infrastructure that enables users to see reports with visualizations such as maps, charts, gauges, sparklines etc.

Live@Edu

Live@Edu provides a suite of communication, collaboration and storage services for students. It also provides a single account and password for access to many Microsoft Cloud services including Windows Azure. Later this year, Live@Edu will be superseded by Office 365 for Education.

SharePoint Online

SharePoint Online offers a core set of Content Management capabilities including:

  • Document Management
  • Collaboration (team sites), Extranet
  • People Search
  • Content Search
  • Social Computing – including wikis and blogs
  • Publishing Portal (custom theming/branding)
  • Rich Media Management
  • Data Visualization
  • Workflows

 

Figure 9. Through SharePoint, end users get a “control panel” for consuming and creating learning content

Through the SharePoint portal, end users can quickly find the learning content they need, consume and create new content with others, and publish this to a wider connected learning community.

Consumption (and recreation)

Silverlight

Silverlight is a great way for learners to experience learning content. A free, cross-platform browser plug-in, Silverlight is designed for Web, desktop, and mobile applications – online and offline. It supports multimedia, enhanced animation, webcam, microphone, and printing.

Microsoft Learning Content Development System (LCDS)

LCDS is a free tool that enables users to create interactive, online courses and Silverlight learning objects. It can be used to create highly customized content, interactive activities, quizzes, games, assessments, animations, demos, and other multimedia.

Office

PowerPoint is the most widely used content creation tool in schools, and many schools create highly interactive and challenging content with it – e.g. see this archive at the University of North Carolina Wilmington.

The MediaWiki extension for Word allows learning materials developed in Microsoft Office to be saved directly to MediaWiki-based repositories such as WikiEducator.

For creating SCORM objects with relatively low levels of technical skill, Hunterstone’s Thesis “Light” is available as a free download with Learning Essentials; it integrates into Microsoft Office so that the SCORM learning content standards can easily be applied to Office documents.

OneNote

OneNote is designed as a personal productivity application, not an Enterprise-wide content management solution – however, used in the right way, it can be a quick and cost-effective way to enable content development, management, search and retrieval amongst small, distributed groups. For example, a teacher could have a “master” OneNote file held on a Windows Live SkyDrive site (in the Cloud). This can contain several “books”, each book subdivided into classes with learning content – videos, links, text etc. Each class can then be further subdivided with an area for each learner. In this way a Science class – students and teacher – can, for example, collaborate with Science classes in other schools.

 

Figure 10. OneNote enables small-scale learning content management

Looking to the Future

HTML 5

The next version of HTML – a language for structuring and presenting content for the World Wide Web – will have profound implications for how learning content can be consumed. It will encourage more interoperable learning content solutions, and will make it easier to include and handle multimedia and graphical content on the web without having to resort to proprietary plugins and APIs.

Conclusion

Providing students with the right kind of learning content at scale is a critical component in making schooling more effective. It’s no longer sufficient to think of content systems as delivery mechanisms, rather they should be thought of as integrated “learning content economies” where learning value is added by all participants and stakeholders. Cloud computing can help facilitate this new approach, driving down costs, increasing connectivity and collaboration, and enabling scalable, flexible and highly available learning content management systems to emerge.

Thanks to David Langridge, Brad Tipp and Sven Reinhardt for support in writing this article.

Cloud Watching #3 – Managing Student Relationships

How could junk-mail and schooling effectiveness possibly be linked? The answer is “CRM” – Customer Relationship Management software. CRM is now firmly entrenched across a vast spectrum of businesses as a way of managing sales and marketing relationships with customers. Anyone possessing a loyalty (rewards or club) card will have their purchasing behaviours tracked by CRM, which then automatically triggers direct marketing activities such as special offers and tailored messages. But CRM is being increasingly used to support the learning process too.

Derivatives of CRM – known as XRM solutions – have been developed for a range of sectors. In healthcare, for example, XRM is used for activities such as notifying patients of upcoming appointments and helping them manage their illnesses.

As the schooling process becomes more data-driven, we are seeing a sharp increase in the use of CRM in education too. SRM – Student Relationship Management, an adaptation of CRM for students – is rapidly on the increase.

SRM has been used extensively in Higher Education for a long time for a variety of purposes – e.g. implementing targeted marketing campaigns aimed at prospective students and alumni. SRM is used in HE to support enrolment and to track financial matters such as the payment of fees. For similar reasons, SRM is also used extensively in private schooling.

In Brazil, Gestar – an independent software vendor – built an SRM system for private schools that handles not only the administrative “mechanics”, but academic matters too. The objective was to apply the concept of “one-to-one marketing” to the complete relationship cycle with students – from the initial recruiting process to completion of school and beyond. By gathering and using the information generated in the Management Information System (MIS) and Learning Management System (LMS) – e.g. attendance and individual assessments – it was possible for the schools served by Gestar to improve their effectiveness.

In schools using the Gestar SRM system, dropout rates are reduced by cross-checking data across a range of “risk factors”. This makes it possible to identify students at risk of dropping out, which automatically triggers processes such as setting up interviews, identifying the causes of dissatisfaction, and aligning the student’s objectives with what the school can offer.

Through linking with the LMS, SRM is able to determine if students are accessing the e-Learning tools, completing assignments within given deadlines, and if they are satisfied with their learning activities. Through automated workflows, “intelligent intervention” can be used to address specific problems.

Pre-defined workflows and escalations, in some cases completely automated, make it easier for a teacher to be more “granular” in how they address students’ individual needs. The benefit for the teacher is that their administrative burden is reduced. The benefit to the student is that they get a more personalised service.

So, as SRM is based on software used to manage sales and marketing, a key question is “what is the difference between a learning programme and a marketing campaign?” The answer, actually, is “not a lot”. The mechanics are very similar – place people into groups according to what you want them to learn or do; then step them through a series of linked actions until the goal is reached; then recycle the data to make ever improving interventions.

Another company offering SRM solutions for schooling systems is UK company lookred®. Working with New Line Learning Academy (NLL) – a consortium of public schools – in Kent, UK, founders Chris Poole and Matthew Woodruff had the innovative insight that it’s practically impossible to personalise relationships with thousands of students without using technology. To meet the goal of tailoring learning experiences for all students in the NLL consortium, Chris and Matthew designed a solution centred on SRM and the extensive use of Business Intelligence software.

Crucially, Chris and Matthew made the link between SRM and Intelligent Intervention. This involves setting up a set of “risk factors” that may affect learning performance, finding students who fit the risk profile, and then intervening through goal-orientated actions. Imagine, for example, that a school has found that students with the lowest reading ages perform the worst in examinations; reading age can then clearly be considered a risk factor. The same could be said for other attributes such as attendance, behaviour, or socio-economic factors.

To illustrate how SRM works, let’s explore the ‘reading age’ example further. Using SRM, a teacher could run a report to identify all students with a reading age more than two years below their actual age. Armed with this data, the teacher can then trigger a whole set of automated events and escalations – e.g. getting students into reading clubs, persuading parents to encourage more reading at home, and asking teachers to give extra reading support where needed. Doing the same analysis and running the intervention programme using a paper-based approach would be extremely resource intensive.
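
The sketch below illustrates the general pattern – identify students matching a risk profile, then generate the follow-up actions. The records, threshold and actions are invented for illustration; this is not how lookred’s product is implemented:

```python
# Hypothetical records and thresholds - not how any specific SRM product works.
students = [
    {"name": "A", "age": 13.0, "reading_age": 10.5, "parent_email": "a@example.org"},
    {"name": "B", "age": 13.0, "reading_age": 12.8, "parent_email": "b@example.org"},
    {"name": "C", "age": 14.0, "reading_age": 11.6, "parent_email": "c@example.org"},
]

RISK_GAP_YEARS = 2.0   # reading age more than two years below actual age

def at_risk(student):
    return (student["age"] - student["reading_age"]) > RISK_GAP_YEARS

def trigger_interventions(student):
    """Stand-ins for the automated workflow steps described above."""
    yield f"Enrol {student['name']} in reading club"
    yield f"Email {student['parent_email']} about reading at home"
    yield f"Flag {student['name']} for extra reading support in class"

for s in filter(at_risk, students):
    for action in trigger_interventions(s):
        print(action)
```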

The goal of intelligent intervention isn’t just to react to a string of unrelated scores, however, but rather to tackle deeper personal needs by addressing a range of student attributes. At the heart of the SRM is the student profile. This builds up over time, and as more data is added, the smarter the interventions can get.

At New Line Learning, the data that is held in the student record could be easily used to make comparisons between groups of students.

A different example of how CRM can be exploited in schooling systems is in the area of professional development. In Maryland, USA, the State Education department used CRM to improve the administration of certification. At any one point in time, there will be 160,000 people in the Maryland State Education System requiring certification of one kind or another. Overwhelmed with a backlog of requests, the department saw processing times for new certificates extend to as long as 18 months. Working with Avanade, Maryland introduced a CRM system that reduced certificate-processing times to as little as five days and virtually eliminated dependence on paper.

WHY SRM IN THE CLOUD?

Besides the core advantages of scaling, resource management and cost that apply to most aspects of Cloud-based services, there are two additional advantages that SRM in the Cloud brings:

1. Scaling interventions – there is technically no reason why an intervention – say for absences – can’t be deployed across multiple schools. If the risk factors, triggers and escalation paths are the same or similar, then a centralised system could potentially manage interventions across several schools simultaneously.

2. Better data – the more schools contribute data to understanding risks and how best to mitigate them, the better. The more data, the more variables can be considered and the richer the decision-making process.

IMPLEMENTING SRM

In the business world, CRM is as much a philosophy as it is a software service. At its core, CRM is seen as a more customer-centric way of doing business, enabled by technology. The focus of CRM is also shifting to encompass social networks and user communities.

For SRM to work in a schooling system the organisation must analyse its workflows and processes; some will need to be re-engineered to better serve the overall goal of tailoring services to students.

If student relationships are the heart of effective schooling, then SRM can be the engine that mediates relationships at scale.

Further information

http://www.gartner.com/technology/media-products/newsletters/datatel/issue1/gartner1.html

http://download.microsoft.com/download/6/9/3/693d3df0-9202-42cd-a961-1bb7b1b8b301/MSDynamicsCRM_EDU.xps

https://partner.microsoft.com/40062157

Cloud Watching #2 – How to Manage 30Bn Trees Worth of Data

Data is fundamental to operating schooling systems. Without data, schooling systems would grind to a halt – teachers wouldn’t get paid; students wouldn’t get transported, taught or fed; and essential services would cease to operate.

As the value of good data for decision making becomes more widely understood, the quantity of data in the world’s schooling systems is ballooning. But how much data are we talking about, how fast is it growing, and how can it be better managed?

To get a sense of how big the issue is, let’s start by looking at Charlotte Mecklenburg in the US – a School District that has paid a lot of attention to its data and information systems recently. According to David Fitzgerald, Vice President of the Education Group at Mariner, Charlotte Mecklenburg School District plans to use 70 Terabytes for a system with 140,000 students – 524.3MB per student.

The US and Western Europe account for ~10% of the world’s school student population – 0.12bn students. So, assuming similar levels of consumption across these regions, we can estimate that in these areas alone there are 60,000TB of data in schooling systems. 1TB equates to roughly 50k trees’ worth of paper and print, so we’re looking at 3bn trees’ worth of data. Imagine that every student on the planet used the same amount of data as Charlotte Mecklenburg – that would add up to 30bn trees.

Whilst it’s currently unlikely that the amount of data in schooling systems adds up to this amount yet, there are several factors pushing it hard in this direction.

For example, major countries such as Russia, Mexico and Brazil are developing and running massive student data operations, increasing both the quantities and sophistication of data used.

UNESCO (2003) states that most countries develop education databases, and it also specifies the optimal datasets that should be maintained. Let’s suppose that this adds up to a minimum of half a typewritten page on each of the students living outside the USA and Western Europe – roughly 1 kilobyte each. Rounding off, we can estimate that 1bn students x 1KB = 954GB. It’s interesting to think that this could be kept on a single external hard drive no bigger than a paperback book. However, add other data, say a single low-resolution image per student, and that rises by a factor of 8. Add digital work produced by students and this number grows exponentially.
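
For readers who want to check the arithmetic, the estimates above can be reproduced as follows (all inputs are the article’s own assumptions, not measured figures):

```python
# Back-of-envelope checks of the estimates above.
students_outside_us_we = 1_000_000_000        # ~1bn students
per_student_bytes = 1_024                     # ~half a typed page, ~1 KB

total_gb = students_outside_us_we * per_student_bytes / 1024**3
print(f"Basic records only: ~{total_gb:,.0f} GB")            # ~954 GB

with_photo = total_gb * 8                     # one low-resolution image each
print(f"Adding one small image per student: ~{with_photo:,.0f} GB")

# Charlotte Mecklenburg benchmark: 70 TB across 140,000 students.
per_student_mb = 70 * 1024**2 / 140_000
print(f"Charlotte Mecklenburg: ~{per_student_mb:.1f} MB per student")  # ~524.3 MB
```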

Also, there is a sharp increase in the rate at which data is used in developed countries. Take New South Wales, for example. Last year, the New South Wales Department of Education and Training – which has 1.3m students – used 280TB of storage space, and this has been doubling every year for the last five years!

The amount of data used in schooling can only increase as governments around the world recognise that it is core to improving effectiveness.

WHY IS MANAGING DATA CORE TO IMPROVING SCHOOLING EFFECTIVENESS?

Driven by the need for better accountability for how public funds are spent, and the widespread use of international benchmarks such as PISA, there is a sharp increase in the number of governments and private companies that are investing in solutions for data driven decision making. These investments aim to use data to:

  • Improve student performance: Give students, parents, teachers and administrators a clear picture of student performance at an individual or group level so they can adjust and personalise learning accordingly
  • Make better management decisions: Inform routine decisions and strategic planning across all enablers and disciplines with accurate, readily-available data
  • Increase accountability: Quickly and easily understand performance across organisations
  • Manage resources more effectively: Gain a better understanding of projected revenues and expenditures; keep track of financial health; compare costs against those of other organisations
  • Drive administrative efficiencies: Improve time and effort taken to report information. Improve quality and presentation of information.

SO WE HAVE TO TALK ABOUT DATABASES THEN?

Why is it that people’s eyes glaze over when you start talking about databases? Most web pages that you will experience – including this one – are driven by databases. For most people databases are “black boxes”, and few care about how they work or what they do. However, a basic understanding of databases and how they work is essential to understanding how ICT can make schooling more effective – so let’s take a quick database 101:

WHAT IS A DATABASE?

Databases arrange data as sets of records, and these records are arranged as rows. Each record consists of several fields which are arranged in columns. The rows and columns combine to form a table.

 

Most large scale databases are Relational, which means that they can connect data from two or more tables.

  • Forms are a main way to enter data into a database
  • Queries are used to get data out of a database.
  • Reports format and display data from the database.

Indexes improve the speed of data retrieval operations – for example, by allowing a row to be looked up quickly via a unique key that identifies it within a table. Metadata – data about data – can include tables of all tables, with their names, sizes and number of rows; or tables of columns, showing which tables they are used in and the type of data stored in each column.
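
A minimal, hands-on illustration of these ideas – a table of rows and columns, a query, and an index – can be written in a few lines using SQLite, which stands in here for any relational database engine:

```python
import sqlite3

# SQLite stands in here for any relational database engine.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE students (
                 id INTEGER PRIMARY KEY,    -- unique key identifying each row
                 name TEXT,
                 year_group INTEGER)""")

db.executemany("INSERT INTO students (name, year_group) VALUES (?, ?)",
               [("Ana", 10), ("Ben", 10), ("Chloe", 11)])

# An index speeds up retrieval on the column(s) it covers.
db.execute("CREATE INDEX idx_year ON students(year_group)")

# A query pulls a subset of rows back out of the table.
for row in db.execute("SELECT name FROM students WHERE year_group = 10"):
    print(row[0])
```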

DATABASE ESSENTIALS

At the heart of a database is the Database Engine – software for storing, processing and securing data, providing controlled access and processing capabilities. The structure of the database is described in a Schema, and this is usually written in a language called Structured Query Language (SQL). This language determines how data is inserted, queried, updated and deleted. Different database vendors have their own extensions to SQL – T-SQL is Microsoft’s.

A Data Warehouse is a database that extracts data from operational systems for reporting. It can aggregate data from different sources, and ensure that the integrity of operational data isn’t compromised by the processes associated with analysing it.

Integration Services are the means by which data from various sources can be integrated, extracted, transformed, and loaded into data warehouses.

OLAP – or Online Analytical Processing – enables data to be manipulated and analysed from multiple perspectives. For example, a longitudinal analysis could involve the study of student progress over time, taking advantage of an OLAP Cube to interrogate a number of different dimensions over a given period.
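
As a small illustration of the idea, a pivot table (a close cousin of an OLAP cube) can aggregate toy assessment data along two dimensions – student and term. The data below is invented, and pandas is used purely as an illustration rather than as part of the Microsoft stack described here:

```python
import pandas as pd

# Toy assessment data - an OLAP cube would hold the same kind of facts,
# aggregated along dimensions such as term, subject and year group.
scores = pd.DataFrame({
    "student": ["Ana", "Ana", "Ana", "Ben", "Ben", "Ben"],
    "term":    ["T1", "T2", "T3", "T1", "T2", "T3"],
    "score":   [54, 61, 66, 47, 52, 58],
})

# Longitudinal view: each student's progress across terms.
progress = scores.pivot_table(values="score", index="student",
                              columns="term", aggfunc="mean")
print(progress)
```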

 

Analysis Services supports OLAP by allowing the design, creation, and management of multidimensional structures that contain data aggregated from a range of data sources, such as relational databases.

Data Mining is about extracting patterns from large sets of data to yield Business Intelligence (BI) – for example, high achievement correlating with the number of books in the family home, or low reading ability impacting examination results. Data Mining Services enable the design, creation, and visualisation of data mining models.

Reporting Services enable reports to be published in various formats, drawing on content from a variety of data sources; they also centrally manage security and subscriptions. Portal Integration – it’s crucial for end-users to be able to work with operational data, ideally in ‘dashboard’ format, through a portal site.

 

Being able to manage databases is crucial, and several key tools are used for this. Master Data Services is the means by which all applications across the organization can rely on a central, accurate source of information. Replication copies and distributes data and database objects from one database to another, synchronizing between databases to maintain consistency. Automated compression and backup are also key tools.

WHAT HAS THIS GOT TO DO WITH THE CLOUD?

With massive growth in the amount of data used in schooling comes questions about sustainability, cost and management. The Cloud offers some major advantages here:

1. Ubiquity

Having data in the cloud makes it easier for authorized users with internet access to access that data from almost anywhere.

2. Management

In an enterprise architecture where resources are distributed, organisations usually have a single SQL Server back-end with WAN links and/or multiple distributed SQL Server installations that replicate data with each other. Maintaining this kind of environment is time consuming and expensive. With the cloud, replication, backup, compression etc are all taken care of.

3. Pricing

As with other Cloud services, you only pay for what you use. During the peaks and troughs of schooling system operations, one can expect data storage requirements to vary.

SQL AZURE

SQL Azure is Microsoft’s Cloud Database solution, and it offers the following benefits:

  • No physical administration required – software installation and patching are included, as SQL Azure is a platform as a service (PaaS)
  • High availability and fault tolerance are built in
  • Simple provisioning and deployment of multiple databases
  • Scale databases up or down based on business needs
  • Multitenant – i.e. a single database can provide services to multiple organisations
  • Integration with SQL Server and tooling including Visual Studio®
  • Support for the familiar T-SQL-based relational database model
  • Option for pay-as-you-go pricing

The SQL Azure suite currently comprises the following offerings, some of which are currently on limited availability:

SQL Azure Database – a Platform as a Service (PaaS) relational database. Highly available and scalable.

SQL Azure Data Sync – allows organisations to extend their current sets of data into the Cloud. It provides synchronisation between an organisation’s current SQL on-premises databases and SQL Azure Databases in the Cloud.  Currently available in Community Technology Preview.

SQL Azure Reporting – a complete reporting infrastructure that enables users to see reports with visualizations such as maps, charts, gauges, sparklines etc. Currently available in Community Technology Preview. 

The Windows Azure Platform Appliance – currently under limited trials, this will eventually enable organisations to deploy their own Cloud Services from within their own datacentres. The Windows Azure Platform Appliance consists of Windows Azure, SQL Azure and a Microsoft-specified configuration of network, storage and server hardware.

TAKING ADVANTAGE OF CLOUD DATABASE SERVICES

Taking full advantage of the Cloud is not something that is going to happen overnight. Besides careful analysis and planning for migrating existing services, Cloud computing opens up a whole set of questions around what new services could be offered. For example, the rise of virtual schooling across the world – as brilliantly analyzed in the US by Clayton Christensen in his book “Disrupting Class” – will be a major beneficiary of cheap, ubiquitous database services at massive scale.  

As pointed out in Cloud Watching #1, moving to the Cloud is not without effort and risk. David Chappell, in his excellent paper “The Benefits and Risks of Cloud Platforms: A Guide for Business Leaders”, points out that storing data outside their organization makes people nervous. Many countries have regulations about where certain kinds of data can and can't be stored, so before putting data into a Cloud platform, it's important to ensure compliance.

A key question to ask is whether any given in-house data centre is really more secure than those of the major Cloud service providers. A significant data breach would mean a huge financial loss for a Cloud services provider, so there's a very strong incentive for them to keep the data they hold secure.

David Chappell also advises – “as with any new technology, starting small can be a good approach. Perhaps your first cloud application should be important, for instance, but not truly mission critical”. The same can be said for data.

CONCLUSION

Whilst it's early days for Cloud-based database services in Education, we're beginning to see interest turning into plans and action. For example, Curtin University in Perth, Australia, has started to move some of its services to the Cloud and intends to take advantage of SQL Azure.

The Educause Horizon Report 2010 includes an analysis of the Cloud amongst other key and emerging technologies – http://wp.nmc.org/horizon2010/chapters/trends/ – and states:

“The abundance of resources and relationships made easily accessible via the Internet is increasingly challenging us to revisit our roles as educators in sense-making, coaching, and credentialing”.

Cloud will no doubt change how data is gathered, manipulated and interrogated, and by making vast amounts of storage available at extremely low prices we can look forward to seeing innovative organisations build completely new services to reach growing numbers of learners in completely new ways.

FURTHER INFORMATION

A great introduction to databases: http://www.microsoft.com/student/en/us/techstudent/handson/database.aspx

Getting started with SQL Azure: http://msdn.microsoft.com/en-us/magazine/gg309175.aspx   

Migrating to SQL Azure: http://msdn.microsoft.com/en-us/library/ee730904.aspx  

“How much data is that?” – http://www.jamesshuggins.com/h/tek1/how_big.htm

Thanks to Sven Reinhardt, database guru, for input into this article.

Cloud Watching #1 – Cloud 101

This article is the first in a series on Cloud computing, and focusses on the basics – the “what, why and how” of Cloud computing as it relates to Schooling.

INTRODUCTION

When New South Wales Department of Education and Training (DET), the largest School District in the Southern Hemisphere, wanted to put an annual Science Standard Attainment test online they faced a simple choice – $200,000 for server infrastructure or $500 to use a Cloud computing service from Microsoft. Watch the video here to find out what the NSW DET gained from their implementation: http://vimeo.com/18637271 

WHY CLOUD?

We don’t normally expect a schooling system to generate its own electricity. There’s no building with a bank of generators, no “Manager of Electrical Generation”, leading a team of technicians. But we have expected our schooling systems to be experts at running their own “IT Power Stations”, generating their own utility service.

 

So why not provide computing power in the same way as electricity? “Cloud-based” IT services can be “generated” remotely by a factory-size bank of powerful computers (“servers”) and delivered over the internet to subscribing consumers who can take as much, or as little as they need.

Cloud computing changes the game of delivering schooling services by addressing the following challenges:

The Scale Challenge

Schooling has scaling issues like no other service. With 1.2bn learners, 55m teachers, and 4.3m institutions, schooling represents one of the biggest single human enterprises on the planet. Providing cost effective learning services to entire populations is one of the opportunities that Cloud computing potentially addresses.

The Cost and Seasonality Challenge

Students are typically only in their physical school environment for 15% of the year. Schooling services undergo huge peaks and troughs, on daily, weekly, monthly and annual basis. When schooling systems run their own IT services, they have to pay for these whether they are being used or not.

Currency, Relevance and Interoperability

The next problem that schooling systems face is the rate of progress and change in IT. Choices usually come down either to systems stagnating and providing out-of-date services, or to enormous cost just to keep pace with change. Technology is advancing so fast that a student leaving a secondary school is likely to be comfortably using technology that did not exist when they started.

BUSINESS MODELS

Cloud computing addresses these issues through three main kinds of business models:

Software as a Service (SaaS)

Subscription based or free Cloud application services deliver Software as a Service (SaaS) over the Internet, eliminating the need to install and run the application on the customer’s own computers, and simplifying maintenance and support. Activities are managed from central locations rather than at each customer’s site, enabling customers to access applications remotely via the Web. Click here for more details, or here for architectural guidance.

Microsoft SaaS offers include:

Live@Edu – institutions can use their own domain names to provide students with a complete “consumer” set of e-mail, collaboration and storage services. Live@Edu will be superseded by Office 365 for Education.

Microsoft Business Productivity Online Suite delivers a suite of services for hosted communication and collaboration. 

Microsoft Exchange Hosted Services – filtering, archiving, encryption, and continuity.

Microsoft Dynamics CRM Online – student relationship management, automated workflows and centralised information.

Windows Live – worldwide there are 500 million Windows Live users of its package of communication, collaboration and storage services.

Infrastructure as a Service (IaaS)

With “Infrastructure as a Service” (IaaS), customers get on-demand computing and storage to host, scale, and manage applications and services. IaaS delivers computer infrastructure – typically a platform virtualization environment – as a service. Rather than purchasing servers, software, data-centre space and network equipment, customers buy those resources as fully outsourced services. Suppliers typically bill on a utility computing basis, according to the amount of resources consumed, so the cost reflects the level of activity. Click here for more details.

Microsoft IaaS offers include the following Datacentre tools for in-house or external service provision:

System Center – dynamically pool, allocate, and manage virtualized resources

Windows Server – provides a foundation for data centre services, including web-apps, power management, and server and desktop virtualization between on-premises, private cloud, and public cloud computing

Dynamic Data Center Toolkit for Hosters allows you to create a private or public cloud offering, including services for provisioning and managing servers

Platform as a Service (PaaS)

“Platform as a Service” (PaaS) delivers a computing platform and/or solution stack as a service. PaaS facilitates deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers. Typically, customers (such as NSW DET) will rent a set amount of capacity for specific periods of time, and turn their applications on or off and scale according to demand. They will only get billed for the time and capacity consumed. Delivering an annual test online for example, becomes significantly more cost effective through PaaS than through other means. Click here for more details.

Microsoft PaaS offers include the following:

Windows Azure platform is a version of Windows that runs in Microsoft datacentres. It includes SQL Azure (database) and enables applications and services to be run in the Cloud.

AppFabric provides a range of services including access control; connections between applications in the cloud; caching; integration; and APIs for developing and hosting an application on Azure

Bing Maps – a complete set of geo-data services enabling functions such as visualisation of enrollment trends, or tracking assets such as buses.

Microsoft .NET – a programming environment for writing applications across a variety of devices, application types, and programming tasks.

ADVANTAGES OF CLOUD

Cloud offers a way to tackle the issues of cost, scale, change, currency, relevance and interoperability, and flexibility of demand. In addition, Cloud services by their nature tend to be designed for reach, and work across multiple open-standards-based devices. Cloud services are designed to run at internet scale, supporting millions of users at prices an order of magnitude lower than traditional solutions.

The cost of migrating between versions, or of staying up to date, is outsourced to the Cloud service provider. This also has the effect of removing capital expenditure (Capex) from IT provision and transferring it to an operational expenditure (Opex) model that does not have the same associated peaks and troughs.

Cloud services are designed to be simple to deploy, provision and deprovision. Indeed, when using Platform as a Service you only pay for the services you are using, while you are actually using them, which fits perfectly with education's seasonality.

DISADVANTAGES OF CLOUD

With significant advantages come a degree of disadvantage and risk, which should be carefully considered. These can be summarised as follows:

  • The risks of outsourcing
  • Storing data outside the institution or organisation
  • Service provider tie-in

For an excellent and unbiased guide outlining the advantages and disadvantages of the Cloud, download “The Benefits and Risks of Cloud Platforms: A Guide for Business Leaders” by David Chappell.

MIGRATING TO THE CLOUD

Most on-premises applications will not have been built with Cloud architecture in mind, so the first set of decisions focuses on what kind of architecture you want. For example, a key consideration here is whether to use a multi-tenant model – i.e. a single instance of the software serving multiple client organisations – as sketched below.
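
To make the multi-tenant idea concrete, here is a minimal, hypothetical T-SQL sketch of the shared-schema approach, in which every row carries a tenant identifier so that one database and one application instance can safely serve several schools. Table and column names are illustrative only; real designs also need indexing, row-level filtering in the application, and careful thought about isolation.

  -- One shared schema; every row is tagged with the owning tenant (school).
  CREATE TABLE dbo.Tenant
  (
      TenantId   INT           NOT NULL PRIMARY KEY,
      SchoolName NVARCHAR(200) NOT NULL
  );

  CREATE TABLE dbo.Assessment
  (
      TenantId     INT           NOT NULL REFERENCES dbo.Tenant (TenantId),
      AssessmentId INT           NOT NULL,
      StudentId    INT           NOT NULL,
      Score        DECIMAL(5, 2) NOT NULL,
      PRIMARY KEY (TenantId, AssessmentId)
  );

  -- Every query filters on TenantId, so one school never sees another's data.
  SELECT AVG(Score) AS AverageScore
  FROM dbo.Assessment
  WHERE TenantId = 42;   -- hypothetical tenant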

There are several resources available to help with migrating to the Cloud, including:

SKILLS FOR THE CLOUD

CONCLUSION

Live@Edu has tens of millions of accounts, proving that Cloud models can deliver quality services at massive scale. Aside from New South Wales DET, many schooling and learning services organisations around the world are beginning to take advantage of Cloud computing – for example, the Kentucky Department of Education moved more than 800,000 people to Live@Edu, a move that will help them save more than $6.3 million over the next four years. Florida Virtual School saved $2 million by switching to BPOS. Another interesting use of Cloud technology comes from Eduify – a small company providing research and writing assistance to students. Read the case study here.

MORE RESOURCES

Thanks to Brad Tipp for his input.

David Chappell has some excellent Cloud resources on his blog and a great summary of Cloud Platforms


What does Cloud based computing mean for schooling?

Cloud based computing is generating a lot of questions in Schooling Technology circles, but what does it really mean? How can it be exploited? What are the potential benefits?

The first thing we need is a definition of Cloud. Cloud-based computing is generally thought of as centrally hosted services that can scale according to demand, with the advantage of significantly reducing costs. This differs from conventional hosted services in that it is elastic – i.e. service levels can grow or shrink in response to varying demand.

But what of significantly reducing costs? In any one country there are usually data centres at Ministry, State and Local Education Authority levels. Each data centre will handle workloads that are common to other data centres – e.g. student information systems (SIS) and management information systems (MIS). In some cases these data centres are used to distribute content and manage learning. All this is underpinned by core infrastructure, security, identity, system management and so on.

In other words, Ministry, state and local authority education departments solve very similar information management and technology problems in isolation, which is expensive and wasteful. It’s quite possible to aggregate the kinds of functions needed at various organisational levels and sell these on as hosted services, enabling individual organisational units to make savings on energy, hardware and platform maintenance costs.

So, the main opportunity behind cloud based services is to centralise datacentre functions, then let individual organisational units choose the services they want from a menu.

There are some early examples of this principle at work. For example, the Kentucky Department of Education has just announced they are moving all their students, teachers and staff – more than 700,000 people – to Live@Edu, which will help them save more than $6.3 million over the next four years.

Live@Edu is known primarily as an e-mail service, but probably one of the most popular features of Live@edu is SkyDrive, which provides 25 gigabytes of cloud based storage for homework, documents, and projects.

Microsoft have also just made publicly available the final versions of the Office Web Apps on SkyDrive in the US, UK, Canada and Ireland. This opens up the exciting prospect of consuming productivity tools as a web-based service.

So what do you do if you want to start to exploit the Cloud in your schooling system?

First, start by exploring Live@Edu. E-mail is often one of the most expensive workloads to run, and the savings that Live@Edu can bring are enormous.

Secondly, look at what workloads are common between different datacentres and see where there could be significant savings and improvements in services.

Thirdly, through Public-Private Partnerships, move relevant services into hosting environments.

As Cloud offerings evolve, these steps will put you in a good position to exploit them.

We are in the early days of Cloud computing in schooling but the prospect of making huge savings, improving services and increasing effectiveness justifies the excitement we are seeing.