Hadrian’s Wall and the Multi-Core Processor

“The shared memory approach of Intel and AMD to general purpose multi-core processing is like building Hadrian’s Wall with 100 builders spread between Newcastle and Carlisle with one guy with a wheelbarrow delivering the bricks”, says Peter Robertson, managing director of Edinburgh multi-processing company 3L.

The shared memory approach is a US route, whereas the Europeans take the different approach of spreading computing resources. “The US approach is to use shared resources, the European approach is separate resources”, says Robertson.

“There’s not a solution to the problem the way Intel and AMD are approaching it”, says Flemming Christensen, managing director of Sundance, “the whole concept of multi-processing using shared memory is flawed.”

“They try to take a sequential language like C to capture parallelism”, says Robertson. “People want to take code written for uniprocessors and magically turn it into something that will run on multiple processors and can be made to run as fast as you like just by throwing more processors at it. This is nonsense.”

“The dual-core approach running on Windows is to have completely separate programmes running on different processors”, added Robertson, “it hasn’t solved the problem of getting one programme to run faster.”

“The approach Flemming and I take is that you have to recognise you need a vast number of processors, only some of which talk to each other, and none of which talk to the whole system”, said Robertson, “and then you have to write the programme in such a way that the bits that need to talk to each other do talk to each other. You have to break the problems down; then you have a hope of distributing them across the processors.”

Will Intel and AMD ever get there? “They’ll suddenly realise where they’ve been going wrong and take the right approach, and then they’ll say they invented it”, replied Robertson. “I suspect Intel are doing it already, they’re just too embarrassed to admit it.”



  1. It strikes me that Robertson and Christensen both come from a DSP-centric view of the world, where data parallelism exists in their code bases and is the main tenet facilitating the type of multiprocessing they describe.
    AMD and Intel have a different problem: limited data parallelism exists in the code bases they want to take forward in terms of performance (except at the display end… which is already offloaded to graphics processors that are focussed on data parallelism!). So they look for instruction-level parallelism instead. In such environments shared memory is a perfectly valid way forward, in my opinion. I am not saying it solves the problem in its entirety, because it just shoves a lot of the problems of managing multiprocessing into the programmer’s lap. So there’s more work to be done here yet, that’s for sure. But I am not convinced that taking the ‘separate resources’ approach is the way forward for this type of software, with its types of inherent parallelism, or lack thereof.
    Different horses for different courses. Let’s not get confused about the different types of multiprocessing that exist, and, more importantly, why they exist!

  2. Thanks Roberto, that’s a nice analogy for the Intel approach.
    And where are Morgan and RR now?
    Do you think there’s an answer to programming general purpose processors which use multi-cores?

  3. Many of the multi-core architectures already do have local memory: picoChip, Ambric, Tilera, Stream for example. Indeed, their programming paradigm is quite similar to what’s described here.
    Be interesting to know how 3L’s architecture compares to these.
    Rather than “Hadrian’s Wall”, my favourite analogy is Henry Ford’s assembly line: a multicore architecture has lots of essentially independent, simple processors (workers), each dedicated to one task, linked together by the assembly line (interconnect).
    Intel’s approach is more like Morgan or Rolls-Royce: a very skilled artisan who must do everything.


  5. Thanks Alun, it’s good to hear from someone who liked OCCAM. So many people have said it was an innovation too far. With Intel and AMD desperate to find ways to programme multi-core processors, maybe it will undergo a revival.

  6. > Haven’t we been here before with the transputer and OCCAM?
    I remember programming in Occam (as a student, only) back in the late ’80s – the folding editor in the IDE was excellent, letting you ‘fold out of view’ sections of code that you didn’t need visible. I’m afraid that idea lasted longer than the language… a very good idea, applicable to other areas, such as word processing in general, let alone other programming languages. See: Folding Editor – Wikipedia
    Although the Transputer HQ was in the South-West I remember there was a national Transputing computing centre of some sort in Sheffield.

  7. You’re absolutely right. Inmos solved the problem using an architecture based on separated memory – sometimes called the European approach.
    The ‘US approach’ so-called, has always been to go for shared memory architectures.
    Peter and Flemming are basically saying the US approach is a dead end; the European approach might, but only might, be the solution.

  8. Haven’t we been here before with the transputer and OCCAM?
