Intel, Microsoft enter multicore initiative

The thorniest problem in multi-processing, how to share computing tasks among the cores of a general-purpose processor, is to be addressed by an Intel- and Microsoft-funded initiative at the University of California at Berkeley. The initiative is expected to be announced later today.

Although it is well understood how to share tasks between multiple cores on a chip designed for a specific purpose, it has proved almost impossible to do so effectively when the chip is intended for general-purpose use, as in a PC. The result is that one core does most of the work while the others sit largely idle, leading to poor overall performance.

And the more cores you add to the processor, the smaller the incremental performance gain from each additional core.
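
Amdahl's law is the standard way to model this diminishing return, although the Berkeley announcement is not framed in those terms: whatever fraction of a workload is inherently serial caps the speed-up, no matter how many cores are added. A minimal Python sketch, assuming an arbitrary, purely illustrative 10 per cent serial fraction:

    # Amdahl's law: speed-up on n cores when a fraction s of the work
    # is inherently serial and cannot be spread across cores.
    # s = 0.10 is an illustrative assumption, not a measured figure.
    def speedup(n, s=0.10):
        return 1.0 / (s + (1.0 - s) / n)

    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d} cores -> {speedup(n):.2f}x")

    # Output: 1.00x, 1.82x, 3.08x, 4.71x, 6.40x. Each doubling of
    # the core count buys less extra speed-up than the one before.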

Asked a couple of years ago how Texas Instruments was dealing with the problem, Alan Gatherer, the company's CTO for communication infrastructure, memorably replied: “I’m not sure anyone knows how to build a generic multi-core architecture. It’s a great goal, but the chances of failure are 100 per cent.”

Gatherer’s point was that multi-core delivers performance when it is targeted at a specific application, where you know what each part of the chip will be doing. It becomes a nightmare when you try to produce a processor that can be programmed for many different applications, which, of course, is supposed to be the whole point of a microprocessor.

Intel has been obliged to take the multi-core approach to general-purpose processors because leakage current on sub-90nm chips became so high that it was no longer possible to gain extra performance by raising the clock frequency, which was how Intel got more performance out of its processors for over 20 years.

Surprisingly, the money being put into a project so important to Intel’s future competitiveness is only $2m a year over five years.