Sequential Programmes And Parallel Processing Don’t Mix

Current attempts to use multi-cores in the mainstream computing world, like the efforts made by Intel, Microsoft and some US universities, are doomed, according to Professor David May FRS, Professor of Computer Science at Bristol University.


“To try and take a sequential programme and run it on several cores is impossible,” says May, “they (Intel and Microsoft) are taking all the PC applications and putting them on multi-cores and, in my view, they won’t be very successful. Taking sequential programmes and trying to make them run in parallel is virtually impossible.”


May, the architect of the multi-core Inmos Transputer, sees three approaches to programming multi-cores being followed, none of which, he argues, has any chance of success.


First, the shared memory approach. “Shared memory is incredibly difficult to do,” said May. “Intel have tried for years to optimise access to memory, and now they’re trying to put several cores to accessing one memory and it becomes more and more complex. If you take a general purpose processor, the cores are fighting to get access to the memory system.”


Secondly, people put their faith in compilers. “Some people will bet on complex heterogeneous architectures and compilers that do magical optimisations – if they don’t know that compilers take much longer to develop than hardware.”


Thirdly: “Some people will bet on abstraction layers to allow legacy software to be ported to parallel machines – if they haven’t yet discovered why their mobile phone takes so long to boot,” said May.



  1. There are certain realities that have come to pass since the mid/late 1980s, when the 5GL initiatives were underway:
     1. The technologies to exploit any parallelism/concurrency/sequencing inherent in a program (processors, algorithms, code generators etc.) now exist, or are quite mature.
     2. The per-unit cost of the aspects in 1 is now quite low.

     Correspondingly, there are certain realities that make all the above pointless, the primary one being that s/w developers are taught, and work, in a sequential manner. These people need to work at a far more declarative/abstract level than they currently do, which would allow tool chains to be constructed to target/exploit a particular environment. The h/w industry has been doing this for years (VHDL, RTL, SPICE etc.). Granted, their equivalent of the 5GL (“silicon compilation” etc.) has also not happened, but they are much further down that path than their s/w peers.

  2. I seem to remember a number of very interesting companies doing similar things (as far as I can tell). Certainly Improv Systems had an approach that, at least conceptually, is similar to XMOS (albeit programming in Java rather than C). Specify the program(me) in a high level language, and at the other end pops out the architecture. Tensilica at a macro level sort of mines the same vein. Then there was PICO from Hewlett-Packard (Program In, Chip Out) which, as my friend Roger Shepherd knows, was Pretty Interesting Stuff. I wish Dr. May and the entire crew at XMOS success. New approaches should be roundly encouraged. Alan R. Weiss, Austin, Texas USA

  3. I think what’s being said is that the current attempts by people like Intel and Microsoft to take programmes written for one processor (i.e. all the legacy x86-based stuff) and run them on several cores are doomed, BUT if you take the XMOS approach, where the programmes are written from the start specifically to run on several processors, then it is possible to use multiple processors efficiently.
    And you’ll find XMOS in our Tag Cloud and quite a few posts about what it’s doing.

  4. No mention of Prof May’s current company? Not even an Xmos in the tags? Clearly his view of the future of parallel processing is much more positive than this post suggests.
