Check your speed

DRAM manufacturers are looking to speed up the SDRAM bus to drive PCs. Richard Ball reports
Bus architectures are an underrated aspect of any computer, whether a PC, workstation or embedded system. The design and choice of the various busses can have more effect on the final product than any other component in the system.
PCs today contain a multitude of busses (and acronyms), including PCI, ISA, AGP and USB.
Closest to the processor is the main memory bus which is attracting a great deal of interest. Intel, with its dominance of the processor and chipset market, is pushing for Rambus DRAM to supersede synchronous DRAM.
This will reduce the width of the memory bus from 64 to 16 bits, reducing pin count and, if Rambus is believed, simplifying PCB design.
“Intel has stated that direct Rambus is the next step forward for desktop PCs,” said Intel’s Graham Palmer. “SDRAM will become the limiting factor in overall performance.”
However, various factors, including Rambus’ larger die size and royalties leading to higher prices, mean that DRAM manufacturers are starting a small revolution. They are championing a speed-up of the SDRAM bus, from 66 or 100MHz to 133MHz.
With the memory bus 64 bits wide, this results in a peak bandwidth of just over 1Gbyte/s. But double data rate versions will quickly raise this to equal Rambus.
Rambus, shifting data on both edges of a 400MHz clock, reaches a rather impressive 1.6Gbyte/s.
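The peak figures quoted above follow directly from bus width, clock rate and transfers per clock. A minimal Python sketch of the arithmetic (the function name is illustrative; the widths and clocks are those given in the article):

```python
def peak_bandwidth_mbytes(bus_width_bits, clock_mhz, transfers_per_clock=1):
    # bytes per transfer x millions of transfers per second = Mbyte/s
    return (bus_width_bits // 8) * clock_mhz * transfers_per_clock

# 64-bit SDRAM bus at 133MHz, one transfer per clock
sdram_133 = peak_bandwidth_mbytes(64, 133)    # 1064 Mbyte/s, just over 1Gbyte/s
# 16-bit Rambus channel, data on both edges of a 400MHz clock
rambus = peak_bandwidth_mbytes(16, 400, 2)    # 1600 Mbyte/s, i.e. 1.6Gbyte/s
```

The same arithmetic shows why a double data rate SDRAM bus (two transfers per clock) would pull level with Rambus despite the narrower Rambus channel.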
Intel concedes there is space for SDRAM, but only at the lower end of the market, where processors such as Celeron and Cyrix devices reside. As Pentium-III moves to Rambus, Celeron will shift to 100MHz SDRAM, said Palmer.
Cache busses are also very important to processor manufacturers and PC makers. However, it’s been the x86 cloners, not Intel, that have been active in changing cache architectures. AMD, Cyrix and IDT have all capitalised on better semiconductor processes and moved second level caches on chip.
“From K6-2 to K6-3, the external cache has moved from the main memory bus to a backside, on-die cache running at the full processor speed,” said AMD’s Richard Baker.
While such a step forces a smaller cache, running at the full processor speed can add up to 15 per cent in performance.
But there is still space for that cache on the motherboard, said Baker. “So some motherboards are coming with up to 2Mbyte of level three cache,” Baker said. “This is almost worth an extra speed grade – around ten per cent. It’s an economical way of boosting performance.”
From AMD’s point of view, the argument between Rambus and SDRAM has less relevance. Like the other clone makers, AMD still uses the Socket 7 or Super 7 format, so its processors will rely on SDRAM for the time being. SDRAM at 133MHz and double data rate will be sufficient for some time to come.
In the Summer, however, AMD will introduce its next generation chip – the K7. “When we go on to our seventh generation chip, the bus speed is starting to become limited at 100MHz,” said Baker. “We wanted to enable multi-processor systems, so we decided to use the Alpha EV6 bus.”
First developed by Digital Semiconductor for the Alpha processors, this 64-bit bus runs at 200MHz, and is claimed to be much more efficient than existing busses.
And AMD has quietly set up a cross-licensing deal with Rambus. This would seem to indicate the choice of memory for K7 motherboards.
For several years, PCs have relied on the PCI (peripheral component interconnect) bus to link the processor and memory subsystem with storage and the outside world.
But PCI is under increasing pressure to handle more and more data.
One of the bigger culprits – the graphics system – has circumvented the problem by having its own bus – Intel’s accelerated graphics port (AGP). “Streaming video data would not have been possible without AGP,” said Intel’s Palmer.
When Intel brings out the next round of chipsets later this year – those supporting Rambus – it will double the speed of AGP to around 1Gbyte/s.
“Before AGP the graphics card sat on the PCI-bus and graphics data was vying for bandwidth on that pipe,” Palmer pointed out. “By creating a dedicated graphics bus, you remove the bandwidth hungry element and take away some of the bottleneck from PCI.”
This is enabling some quite remarkable applications on the humble desktop PC. With the launch of Pentium-III comes the ability to record and compress TV programmes onto hard disk.
Moreover, an image can be frozen, while the processor is still storing the live video data to disk. When the freeze frame is cancelled, the stored data is played back at 125 per cent speed, eventually catching up with the live TV picture.
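How long the catch-up takes follows from the playback rate: at 125 per cent speed, playback gains a quarter of a second on the live picture for every second played. A small illustrative calculation (the 30-second freeze length is an assumption, not from the article):

```python
freeze_seconds = 30.0    # assumed length of the freeze-frame (illustrative)
playback_rate = 1.25     # 125 per cent speed, as in the article

# the lag shrinks by (rate - 1) seconds for each second of playback
catch_up_seconds = freeze_seconds / (playback_rate - 1.0)   # 120.0 s to rejoin live TV
```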
This, said Palmer, also hints at why Rambus is a better bet than SDRAM for main memory.
“Rambus is not just about peak bandwidth – it’s also about efficiency in terms of turning around reads and writes,” said Palmer. “Sustainable bandwidth is much higher than SDRAM – it’s a big step.”
For PCs at the higher end of the market, such as servers and workstations, the situation and applications are different. These machines are primarily interested in shifting large amounts of data.
One solution to the limited bandwidth of the PCI-bus is relatively simple – just use a faster or wider bus. The standard 32-bit, 33MHz bus can be extended under version 2.1 of the PCI specification up to 64-bits wide and 66MHz operation. The silicon is difficult to make and more expensive, but it gives a four-fold increase in data throughput. Maximum data rate is 528Mbyte/s.
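The four-fold figure can be checked with the same width-times-clock arithmetic, using the values from the PCI 2.1 specification quoted above (a back-of-envelope sketch, not an implementation):

```python
# Peak PCI bandwidth in Mbyte/s: bus width in bytes x clock in MHz
pci_standard = (32 // 8) * 33    # 132 Mbyte/s (32-bit, 33MHz)
pci_extended = (64 // 8) * 66    # 528 Mbyte/s (64-bit, 66MHz)

speedup = pci_extended / pci_standard    # 4.0 - the four-fold increase
```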
But for certain servers and workstations, in particular multi-processor systems, even the faster rate PCI-bus isn’t going to solve the problem. In some multi-processor applications that are I/O intensive, one of the processors can end up doing nothing but handle interrupts.
Extensions to PCI have been suggested, the latest and most credible coming from Compaq, Hewlett-Packard and IBM. Their PCI-X proposal gives twice the previous bandwidth.
Intel, which originally developed PCI, has also spent time on the problem and come up with a more radical, longer term solution.
Next generation I/O, or NGIO, uses a serial packet based approach akin to that of Ethernet. It promises bandwidths starting at 1.25Gbit/s for each link.
Which of PCI-X and NGIO will succeed is uncertain. The PCI special interest group will decide what happens in the short term. That could mean PCI-X for now, with an NGIO-style approach in the long term.
