Keeping legacy Linux code up to speed through parallelisation

ZDNet has published an interesting interview about Moblin with Imad Sousou, director of Intel's Open Source Technology Center. The Moblin initiative (short for Mobile Linux) aims to provide optimised Linux technology for netbooks and MIDs (mobile Internet devices).

Questions posed include:

  • There seems to be some confusion over what Moblin entails: it appears to be a full Linux distribution, but we have seen SUSE and Linpus flavours, and Canonical are about to release an Ubuntu flavour. What is Moblin?
  • Will we see Moblin devices in the UK market soon?
  • What changes have been made since the first version?
  • Moblin is also tailored for MIDs, which is a segment that hasn’t taken off yet. Will MIDs become more popular?

Read the full interview with Imad Sousou, which took place at the Open Source In Mobile 09 event in Amsterdam.

“Adding parallel processing to legacy code is a desire of every software company that has an existing product which is significant in complexity and which needs to run faster,” writes Tom Spyrou on the Intel Software Network blog.

(Tom works for Cadence Design Systems as a Distinguished Engineer.)

He is addressing the issue of how to keep legacy Unix or Linux software up to speed now that processor clock rates are not increasing much and multiple cores are being added to chips instead.

As he sees it, the problem of speeding up software is “moving from a hardware improvement problem to a software parallelisation problem”.

His suggested approach revolves around the copy-on-write (COW) mechanism of fork() in Unix applications: forked worker processes share the parent's memory pages until one of them writes to a page, so large legacy data structures can be read in parallel without first making the code thread safe.

(Note: this is a follow-on to another of his posts, Why Parallel Processing? Why now? What about my legacy code?)
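To see why COW helps, here is a minimal sketch of my own (not code from Spyrou's post): the parent builds a large data set once, then forks a handful of workers that each read their slice of the shared pages and report a partial result back through a pipe. The worker count and data size are arbitrary placeholders.

```c
/*
 * Illustrative sketch: parallelising a read-mostly workload with fork()
 * and copy-on-write instead of threads. Each child reads its slice of the
 * COW-shared pages and sends a partial result back over a pipe.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define N       (1 << 24)   /* size of the shared data set (placeholder)  */
#define WORKERS 4           /* hypothetical number of child processes     */

int main(void)
{
    double *data = malloc((size_t)N * sizeof *data);
    if (data == NULL)
        return 1;
    for (long i = 0; i < N; i++)
        data[i] = (double)i;          /* legacy data structure, built once */

    int pipes[WORKERS][2];
    for (int w = 0; w < WORKERS; w++) {
        pipe(pipes[w]);
        if (fork() == 0) {            /* child: pages are shared via COW   */
            long lo = (long)w * (N / WORKERS);
            long hi = lo + N / WORKERS;
            double sum = 0.0;
            for (long i = lo; i < hi; i++)
                sum += data[i];       /* reads never copy the shared pages */
            write(pipes[w][1], &sum, sizeof sum);
            _exit(0);
        }
    }

    double total = 0.0;
    for (int w = 0; w < WORKERS; w++) {
        double part = 0.0;
        read(pipes[w][0], &part, sizeof part);  /* collect partial results */
        total += part;
    }
    while (wait(NULL) > 0)            /* reap all children                 */
        ;
    printf("total = %f\n", total);
    free(data);
    return 0;
}
```

Because the children only read the data, the kernel never actually copies the pages, and none of the legacy data structures have to be made thread safe.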

He writes:

Typically with multi-core processors, the first thought is to use multiple threads in a shared memory programming paradigm to parallelize a software algorithm. This approach can work very well, especially for software designed from the ground up to be thread safe and thread efficient. Thread safety means that the threads and data structures are written in such a way that there are no race conditions between the threads for shared data.

Thread efficient is a term I use to discuss the efficiency of the scheme used to avoid race conditions as well as the code’s ability to efficiently use the processor and its cache and memory bandwidth to keep the processors busy. Making legacy code thread safe and thread efficient is often a difficult task, especially for large pre-existing code bases and/or code bases that have been developed over a long period of time. In such code a top level understanding of the code call chains and architecture is often not complete. Simple locking can make the code thread safe but often leads to locks which have a long duration and make the code thread safe but not thread efficient. Re-coding the data structures and code can be prohibitively expensive and lengthy for the short term needs of the software’s user.
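To make his thread safe versus thread efficient distinction concrete, here is a small POSIX-threads sketch of my own (not from the quoted post); the function names and iteration count are illustrative. The coarse variant holds one lock for the whole loop, so it is safe but effectively serial; the efficient variant accumulates locally and takes the lock only once per thread.

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define ITER     1000000L            /* hypothetical per-thread workload  */

static long total;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Thread safe but not thread efficient: the lock is held for the whole
 * loop, so the threads run one after another. */
static void *worker_coarse(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    for (long i = 0; i < ITER; i++)
        total += 1;
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Thread safe and thread efficient: work on a local value, take the lock
 * only once per thread to merge the result. */
static void *worker_efficient(void *arg)
{
    (void)arg;
    long local = 0;
    for (long i = 0; i < ITER; i++)
        local += 1;
    pthread_mutex_lock(&lock);
    total += local;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(int argc, char **argv)
{
    (void)argv;
    /* pass any argument to run the coarse-locking variant instead */
    void *(*work)(void *) = (argc > 1) ? worker_coarse : worker_efficient;

    pthread_t threads[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, work, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);

    printf("total = %ld\n", total);  /* both variants print NTHREADS * ITER */
    return 0;
}
```

Both variants produce the same answer; the difference is only in how much of the work actually proceeds in parallel, which is exactly the gap Spyrou describes between safety and efficiency.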

Read the full blog post >>
