So, whether you are planning a move to embedded Linux or are considering the investment needed to convert your existing application to run on embedded Linux, this paper will help you to understand the transition process, assess the challenges and risks involved, and appreciate the benefits realised from such a move.
While Linux increasingly takes the place of traditional RTOSes, executives and kernels, the architecture of the Linux operating system is very different from legacy OS architectures. Moreover, there is more than one way to host legacy RTOS-based applications on a POSIX-type OS such as Linux.
Three migration approaches exist: RTOS API emulation over Linux; run-time partitioning with virtualisation; and a full native Linux application port.
RTOS emulation over Linux
For legacy applications to execute on Linux, some mechanism must exist to service RTOS system calls and other APIs. Many RTOS entry points and standalone compiler library routines have exact analogs in Linux and the glibc run-time library, but not all do. Frequently new code must intervene to emulate missing functionality. And even when analogous APIs do exist, they may present parameters that differ in type and number.
A classic RTOS can implement literally hundreds of system calls and library APIs. For example, VxWorks documentation describes more than 1,000 unique functions and subroutines. Real-world applications typically use only a few dozen RTOS-unique APIs, calling standard C/C++ library functions for the rest.
To emulate these interfaces for purposes of migration, developers need only a core subset of RTOS calls.
Many OEMs choose to build and maintain lightweight emulation libraries themselves; others look to more comprehensive commercial offerings from vendors such as MapuSoft. Open source emulation projects also exist.
Partitioned run-time with virtualisation
Virtualisation involves hosting one operating system as an application "over" another: a piece of system software (running on "bare metal") hosts the execution of one or more "guest" operating system instances. In enterprise computing, Linux-based virtualisation technology is a mainstream feature of the data centre, but it also has many applications on the desktop and in embedded systems.
Data centre virtualisation enables server consolidation, load-balancing, secure "sandbox" environments and legacy code migration. Enterprise virtualisation projects and products include the Xen hypervisor, VMware and others. Enterprise virtualisation implements execution partitions for each guest OS instance, and the different technologies enhance performance, scalability, manageability and security.
Embedded virtualisation entails partitioning of CPU, memory and other resources to host an RTOS and one or more guest OSs (usually Linux), to run higher-level application software. Virtualisation supports migration by allowing an RTOS-based application and the RTOS itself to run intact in a new design, while Linux executes in its own partition. This arrangement is useful when legacy code not only has dependencies on RTOS APIs, but on particular performance characteristics, for example real-time performance or RTOS-specific implementations of protocol stacks.
Embedded virtualisation as such represents a short and solid bridge from legacy RTOS code to new Linux-based designs, but that bridge exacts a toll: OEMs will continue to pay legacy RTOS run-time royalties and will also need to negotiate a commercial licence from the virtual machine supplier.
A wide range of options exist for virtualisation, including the mainstream KVM (Kernel-based Virtual Machine) and Xen. Embedded-specific paravirtualisation solutions are available from companies such as VirtualLogix. Open source options include the L4 partitioned microkernel.
Native Linux port of application
Emulation and virtualisation can provide straightforward migration paths for prototyping, development, and even deployment of legacy RTOS applications running on Linux. They have the drawback, however, of including additional code, infrastructure, and licensing costs. Instead, "going native" on Linux reduces complexity, simplifies licensing, and enhances portability and performance.
The choice does not have to be exclusive. The first time OEMs approach migration, they are likely to leverage emulation and virtualisation technologies. As they gain familiarity with the development tools and run-time attributes of Linux, OEMs can re-engineer legacy applications incrementally for native Linux execution.
One approach is to choose individual legacy programs for native migration and to host them under Linux in separate processes. This technique works best with software exhibiting minimal or formalised dependencies on other subsystems.
Another sensible practice is to implement new functionality only as native code, even if employing emulation or virtualisation.
Mapping legacy constructs onto Linux
The above architecture descriptions suggest a straightforward mapping for porting RTOS code to Linux: the entire RTOS application code base (minus kernel and libraries) migrates into a single Linux process; RTOS tasks translate to Linux threads; RTOS physical memory spaces (i.e. entire system memory complements) map into Linux virtual address spaces; and a multi-board or multi-processor architecture (such as a VME rack) becomes a multi-process Linux application.
Bill Weinberg is general manager of the Linux Phone Standards Forum and Jim Ready is founder and CTO of MontaVista Software