How much virtualization do we need?

Earlier this year I had a discussion with my technical sales colleagues about what the right level of virtualization is. The question may sound a bit superfluous these days, as there are plenty of technically excellent implementations like KVM for Linux out there. But do we need to use a sledgehammer to crack a nut?

To help sort out the right technical solution for virtualization, let's start with the business requirements (something we techies don't like, but it's necessary to fit the technology to the problem and to get a project funded).

Typical business drivers for considering virtualization (and often this includes multi-core as well) are:
- Lower cost
- Reduce time to market
- Increase safety/security
- Better performance
- Add new features/differentiators

Typical usage scenarios where virtualization and/or multi-core can help are:
- Consolidate existing systems (for example: consolidate a system with 3 separate single-core CPU boards into 1 multi-core CPU board)
- Separate applications / run-time environments (for example: run two different OSes and their applications on one CPU)
- Add extra security by shielding the management application from the execution application
- Migrate legacy software to new hardware (including the self-developed real-time scheduler and API)
- Split the part of the system that needs to be certified from the non-critical part (especially in the aerospace & defense and industrial vertical markets)

With this overview of the different requirements that may benefit from virtualization, it becomes easier to select from the platform virtualization types available today:

1.) No virtualization
Think about what happens if you add no virtualization to your new system, and what impact and risks that would bring. This is the baseline against which you rate the other types, each with its specific pros and cons.

2.) Operating-system level virtualization

This provides multiple, isolated user spaces and is suitable if the host and guest OS are the same, for example chroot jails on Linux systems.
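
As a minimal illustration of the mechanism behind a chroot jail, consider the following C sketch (the directory name ./jail is just a placeholder for this example, and the chroot(2) call needs root privileges):

    /* Minimal sketch of a chroot jail: confine this process (and anything it
     * executes) to its own root directory. Assumes ./jail exists and contains
     * a /bin/sh; requires root (CAP_SYS_CHROOT). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        if (chroot("./jail") != 0) {
            perror("chroot");
            return EXIT_FAILURE;
        }
        if (chdir("/") != 0) {          /* make the new root the working directory */
            perror("chdir");
            return EXIT_FAILURE;
        }
        /* From here on, "/" refers to ./jail; the rest of the host file system
         * is no longer visible to this process. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");                /* only reached if the exec fails */
        return EXIT_FAILURE;
    }

The guest gets its own file system view, but it still shares the one host kernel, which is why host and guest OS must match.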

3.) Para-virtualization
This provides virtual hardware that is similar, but not identical, to the physical hardware, and it can benefit from hardware-assisted virtualization where available.
A type 1 hypervisor runs directly on the hardware (examples: Citrix Systems, Inc. Xen® in paravirtualization mode and the announced Wind River® hypervisor), while a type 2 hypervisor runs on top of an operating system (examples: User Mode Linux, lguest).
Para-virtualization requires the guest operating system to be modified to use the hypervisor's interfaces, so it may not be an option for binary-only operating systems.
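
On x86, a hypervisor typically advertises itself and its (paravirtual) interface to the guest through dedicated CPUID leaves, which a modified guest kernel probes at boot to decide which hooks to enable. A small user-space sketch of that probe, assuming GCC or Clang on an x86 machine:

    /* Sketch: detect a hypervisor and its vendor signature from inside an x86
     * guest via CPUID. Assumes GCC or Clang on x86/x86-64. */
    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char signature[13] = { 0 };

        /* CPUID leaf 1: ECX bit 31 is set when running under a hypervisor. */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx) || !(ecx & (1u << 31))) {
            puts("no hypervisor detected");
            return 0;
        }

        /* Leaf 0x40000000 returns the hypervisor vendor signature in EBX/ECX/EDX,
         * e.g. "XenVMMXenVMM", "KVMKVMKVM" or "VMwareVMware". */
        __cpuid(0x40000000, eax, ebx, ecx, edx);
        memcpy(signature + 0, &ebx, 4);
        memcpy(signature + 4, &ecx, 4);
        memcpy(signature + 8, &edx, 4);
        printf("hypervisor signature: %s\n", signature);
        return 0;
    }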

4.) Full virtualization
A fully virtualized system presents a complete simulation of the (host) hardware to the guest, so it can run unmodified (binary-only) guest operating systems.
The virtual machine monitor runs either directly on the hardware (examples: Citrix Systems® Xen® in hardware-assisted full virtualization mode, VMware® ESXi) or on top of a host operating system (examples: Kernel-based Virtual Machine (KVM), QEMU™ in emulation mode, VMware® Workstation).
To get acceptable performance, hardware-assisted virtualization such as Intel® VT or AMD-V™ is required.
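
To give a rough feel for what a hosted hypervisor exposes, the sketch below talks to Linux/KVM through its /dev/kvm ioctl interface. It only checks the API version and creates an empty virtual machine, and it assumes a Linux host with the kvm module loaded and hardware-assisted virtualization enabled:

    /* Sketch: minimal use of the Linux KVM API. Checks the API version and
     * creates an empty virtual machine; running real guest code needs much
     * more setup (memory regions, vCPUs, a KVM_RUN loop). */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) {
            perror("open /dev/kvm");    /* module missing or VT-x/AMD-V unavailable */
            return 1;
        }

        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);    /* 0 = default machine type */
        if (vmfd < 0) {
            perror("KVM_CREATE_VM");
        } else {
            printf("created an empty VM (fd %d)\n", vmfd);
            close(vmfd);
        }

        close(kvm);
        return 0;
    }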

Several of the implementations mentioned are well known and used in today's enterprise IT (clients and servers), but do they apply to the embedded market as well?

You may find an answer yourself after comparing the different requirements:

Virtualization requirements in enterprise IT
- Typical: Intel/AMD x86, Sun™ SPARC®
- Multi-core with hardware-assisted virtualization
- Generic Linux, commercial *nix and Microsoft® Windows®
- No real-time requirements
- Ideally 100% CPU utilization
- Server clusters, support blade servers
- Application 24/7 up-time
- Seamless migration between (physical) servers

Virtualization requirements in embedded devices
- Various CPU architectures: Freescale™/IBM® PowerPC®, Intel/AMD x86, MIPS®, ARM®
- Single or multi-core, often without hardware-assisted virtualization
- Modified Linux, commercial RTOS, legacy roll-your-own (RYO) OS, or no OS at all
- Hard real-time requirements
- Small memory footprint
- Low-latency inter-virtual machine communication (IPC)
- Certifiable/Safety

Running through the many customer projects and case studies we have done so far, it became pretty clear that there is hardly a single standard solution for all those different requirements. What is required is a configurable virtualization implementation that can be adapted and tailored to the project-specific need-to-have list in terms of scalability, hardware support, isolation and usability.

The following virtualization technology checklist may be helpful to select the best fitting implementation for your project:
- Hardware: Available for x86, PowerPC, MIPS, ARM, …, hardware assistance, single/multi-core?
- Adoption: Providing para/full virtualization, AMP/SMP, 32/64 bit, different guest/host OS?
- Isolation: Separation and control of system resource usage, certifiable for safety regulations?
- Usability: Proven technology, rich tool set, integration with hardware/OS, supported?
- Embedded: Hard real-time capable, small memory footprint, configurable, runs headless, allows for direct hardware access?

So what’s the virtualization technology you’ve selected for your project and why? Did you find it easy to use? Does it fit the bill?

(For another point of view on virtualization please see Richard’s recent blog about Virtual Machines for Embedded Developers.)
