Will FinFETs Affect ARM?

With Intel finally getting its FinFETs off the ground, it seems apposite to ask exactly what the process will achieve.

And who better to ask than Mike Bryant, CTO of Future Horizons?

“The tri-gate transistor designer can choose to improve any parameter, but the technology limits are 37% faster OR 50% more dynamic-power-efficient OR 90% less static leakage. Thus, in theory, if you want the same speed and leakage you get 50% less dynamic power. Or, if you accept the same dynamic and static (leakage) power, your circuit runs 37% faster. Or, for the same speed and dynamic power, you get ten times less leakage.”


“Obviously you actually choose some middle ground of all three, and at the moment Intel have chosen a single option that is best for their MPUs, but the optimal solution for SoCs is still being discussed,” adds Bryant. “Once decided, this will appear next year in the Silvermont processor.”
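Bryant's three single-axis numbers are roughly consistent with classic first-order CMOS scaling, where dynamic power goes as C·V²·f and achievable frequency scales roughly linearly with supply voltage near these operating points. A minimal sketch of that back-of-envelope (an illustrative model, not Intel's actual process characterisation):

```python
# First-order CMOS scaling sketch: trading the "37% faster" headroom
# for lower supply voltage instead of higher clock speed.
# Assumptions (illustrative only):
#   - dynamic power  P ~ C * V^2 * f
#   - achievable frequency scales roughly linearly with V

speed_gain = 1.37          # Bryant's "37% faster" option at unchanged voltage

# Alternative: keep frequency the same and bank the headroom as lower voltage.
v_ratio = 1 / speed_gain   # supply can drop by roughly the inverse of the headroom
p_ratio = v_ratio ** 2     # same f, so dynamic power scales with V^2

print(f"voltage ratio:       {v_ratio:.2f}")
print(f"dynamic power ratio: {p_ratio:.2f}")  # ~0.53, close to the quoted "50% less"
```

The ~0.53 result shows why the "37% faster" and "50% less dynamic power" options are two points on the same trade-off curve rather than independent gains.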


Next year Intel is slated to introduce Atom-based handset SoCs on its 22nm FinFET process. Will that make Intel’s wireless chipsets competitive?


No, says ARM’s CEO Warren East. “We expect the 22nm process will give them some advantage in power consumption, but whether that advantage is sufficient to make them competitive remains to be seen,” East told me a couple of weeks ago. “They’re saying the process will give them 20% more power efficiency, but they’re a lot more than 20% less efficient than ARM.”


Asked the same question, Mike Bryant replies: “Unlike in small systems, where it is key, in big systems (which smartphones most definitely are nowadays) the instruction set used is far less relevant, with only 10% to 15% of power spent on it. So even if ARM waved magic dust on their instruction set, which isn’t perfect anyway, they can only optimise 10% to 15% of the power used. More power goes in moving data around the chip, and most goes to moving data on and off the chip to memory and peripherals.”


“ARM programs can actually use a few percent more codespace than Intel’s, but let’s say they are the same,” adds Bryant. “Next, of course, the data either processor has to move around, be it pictures, voice data, HTML or just the call set-up protocol, is identical for each processor. Finally, the video data has to be transferred onto the display, and for all the hype ARM’s Mali is about the same as GPUs from Intel or Nvidia, though there are slightly more efficient solutions from Imagination Technologies or Qualcomm’s Adreno.”


“So for a pure ARM system against a pure Intel system, 85% of the power usage is independent of the processor instruction set and dependent purely on the process technology,” concludes Bryant.
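Bryant's arithmetic can be restated as an Amdahl-style bound: if only 10–15% of SoC power depends on the instruction set, even a perfect ISA caps the achievable saving. A quick sketch using his figures (the 10–15% split is Bryant's estimate, not a measurement):

```python
# Amdahl-style bound on ISA-related power savings, using Bryant's
# estimate that 10-15% of SoC power depends on the instruction set.

def total_power_after(isa_share, isa_saving):
    """Fraction of original SoC power remaining if the ISA-dependent
    share is reduced by isa_saving (0..1); the rest is untouched."""
    return (1 - isa_share) + isa_share * (1 - isa_saving)

# Even a magically perfect ISA (100% saving on the ISA-dependent part)
# leaves at least 85% of the SoC power untouched:
best_case = total_power_after(0.15, 1.0)
print(f"best-case remaining power: {best_case:.0%}")   # 85%

# A more plausible 30% ISA-level improvement moves the total by under 5%:
modest = total_power_after(0.15, 0.30)
print(f"30% ISA saving leaves:     {modest:.1%}")      # 95.5%
```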



  1. Yes indeed, Ian, and I also assume that it’s easier to bring up a new node for one customer than for hundreds.

  2. Intel do have a technology lead in performance, hence the push for TSMC to bring FinFETs forward earlier than 14nm.
    But the other driver for smartphones is cost. One simple reason that Intel have the most advanced technology is that they can afford to have the most expensive wafers and don’t need to worry quite so much about yield, because high-margin x86 CPUs can absorb all this: the silicon cost is a much smaller part of their selling price, and higher performance raises their ASP by far more than the increase in die cost.
    It’s unlikely that this is true for smartphones, where they’re competing against relatively low-priced ARM SoCs which use foundry processes optimised more for lower cost and higher yield than for balls-out performance.

  3. OK I see where you got the statement now. I meant ARM was a few percent less codespace but the difference is minimal. I’m sure you can find ARM programs with larger gains, and I could find Intel programs that are smaller, but both would be unusual.
    The 10 to 15% is the consumption of the CPU itself as a percentage of the SoC, and is the only thing that is totally ISA-dependent. It will vary depending on where the instructions come from: if they are served from the local instruction cache without a miss, as should be the case in any virtual machine, the cost is significantly less than fetching from the L2 cache or memory.
    The GPU is not dependent on the CPU’s instruction set and, as you say, its power consumption has increased vastly, as has that of the caches and other parts of the SoC as these have been speeded up.
    But anyway, the new Atom is most definitely NOT the old Atom with tweaks, and I think its designers would take umbrage at that comment. A huge amount of effort was expended to improve the old design, which was virtually just a skunkworks project.
    I don’t know which reviews you read, but the unbiased ones tended to show Medfield was better than 40/45nm ARM SoCs in performance, though a little worse on power. The Samsung-fabbed 32nm A5 is slightly better at everything, as I said previously, but the gap is closing fast.
    Finally, Intel are most definitely not incompetent. They have the best of everything, and competing with them once they have 22nm SoCs will be a massive challenge, hence the demands of TSMC’s customers to roll out 20nm ASAP.

  4. Well you explicitly said “ARM programs can actually use a few percent more codespace than Intel but let’s say they are the same”. That’s not true.
    Note that a CPU fetches instructions almost every cycle, which can take 30% of a core’s total power. Smaller code size means less power is required, as you transfer more instructions per fetch. Note it doesn’t matter whether you run native code, a JIT or an interpreter (rare nowadays).
    Where do you get your 10-15% number from? It seems out of date. If anything, the CPU’s share of SoC consumption is increasing fast as SoCs become more powerful – core count and clock frequency have quadrupled in just two years. And there is the GPU too… The new iPad, for example, uses 10W max, and the SoC is 50% of that despite the huge, power-hungry screen.
    The “new” Atom is the same old Atom with the same old microarchitecture – besides a few minor tweaks, the main difference is the new process. Whether microcode is ever a good idea is a different discussion, but you can’t deny the x86 ISA adds a lot of complexity and extra transistors, which leak power even when not used. My point was that a simpler ISA means there are far fewer transistors wasting power – so simple, in fact, that you can build an out-of-order core in a fraction of the size of an in-order x86 core.
    All the reviews compare 32nm Medfield with older 40/45nm ARM SoCs, and Medfield doesn’t come out well. The 32nm A5 SoC improves iPad 2 battery life by 15-30%, so when 28/32nm ARM SoCs appear in mobiles, things won’t be looking good at all for Medfield.
    Yes I agree that 22nm will help Atom in competing with 28/32nm ARM SoCs. But that’s exactly my point: Intel has to stay one process node ahead in order to compete at all. And unless you believe Intel is incompetent, the main reason is the x86 ISA penalty.

  5. I totally agree, Robert. Obviously Samsung are never going to change to Intel, but the problem Apple has is that its SoC is made by its biggest competitor. That cannot be a happy feeling either. And Apple is one of Intel’s largest customers.
    Another possibility is that Intel buys an existing phone company – it has become one of the top SSD suppliers, so it can deliver things other than chips.

  6. I’m not competent to argue power numbers, but a more interesting question for me is WHY either of the two big smartphone sockets would change over to Intel.
    They would be putting their jewels in a vice where Intel alone decides how hard to squeeze. There is no reciprocity in this relationship. Unfortunately, history is not on Intel’s side: they gelded a few too many partners for me to trust them unconditionally.
    Now if you consider 2nd tier sockets (maybe the Chinese guys) you not only need to win at the phone maker level, you have to break the close relationships that MTK/Mstar/SPRD have with these guys. MTK’s business relationship model will be hard for Intel to replicate, it’s just not the American way!
    That kinda leaves HTC, LG and dare one mention it NOK.
    Unfortunately, I think all they’ll ultimately get is a RIM tie-up; otherwise we are in for a tumultuous time watching INTC try to dance with Moto (Google). I wonder who leads in that pairing?

  7. @Wilco – well, obviously you have your opinion and I have mine. But I never claimed ARM and Intel code sizes were the same, though I’ve never found any ARM code being 40% smaller. However, my point is that most memory accesses are for data, not processor instructions. For example, smartphone apps generally use bytecodes, and these are the same for any processor. The bulk of the interpreter sits in the instruction cache almost permanently, so there is just an initial program data transfer from memory, after which the data transfers will be identical. This is the core of my argument: most data transfers around a SoC are now more or less independent of the actual processor used.
    Returning to my “10 to 15%” number, I’ll happily accept that ARM is nearer 10% and Intel nearer 15% of the total SoC consumption, but my argument is that this difference is now becoming small in relation to the total consumption of the phone and no longer represents a huge gain in battery life.
    Your sizes and quoted performance appear to correspond to the old Atom, which David will confirm I called a dog. But in any case you cannot just omit the L2 cache, as it’s about 50% of the size of the processor, which narrows the size differential considerably. And the microcode memory is how Intel have always designed their processors. It’s different to ARM, but that doesn’t make it the right or wrong way, just different.
    If you compare the 32nm Medfield processor with a 32nm dual-core ARM A9, Medfield gives quite comparable power and processing performance. But the key thing is that the lessons learned from Medfield will be put into future 22nm SoCs, and that is when Intel will be able to compete aggressively with ARM.
    But at the end of the day it will be the customer who decides which is the better processor, not either of us. At the moment I’d say the winner will be ARM, but it would only take Apple to change processor and it would be a whole different ballgame. And swapping processors for Intel’s is something Apple has good practice in.

  8. Firstly, your statement about ARM vs x86 code size is incorrect: it is well known that Thumb-2 code is 30-40% smaller than either x86 or x64 code. This means the I-cache is effectively larger, resulting in lower power consumption. While I agree a lot of data movement may not be ISA-specific, I-fetch is a significant portion of CPU power consumption, so any ISA-related savings are worthwhile.
    Also, the claim that “the ISA no longer matters” is misleading. The ISA still matters a lot, as its influence goes way beyond the decode stage – it affects the whole core, including L1 and L2, and thus a very significant proportion of the SoC.
    To give a specific example, an out-of-order ARM core is much smaller than an in-order x86 core (excluding L2, you can fit around four Cortex-A9s or two Cortex-A15s in the same area as a single Atom). ARM CPUs simply don’t need 140KB of microcode ROM or a 256KB dedicated SRAM to implement the x86 power-saving states. The much larger die size of x86 CPUs doesn’t translate into performance either – an Atom is easily outperformed by an A9 running at a lower frequency. And 32nm Medfield-based phones are mediocre in power efficiency compared with older 40/45nm ARM-based phones. If the ISA didn’t matter, 32nm Medfield would have beaten 45nm SoCs by a large margin thanks to its better process technology.
    If you had said “the x86 ISA penalty can be mostly hidden by using a better process” then that would be more accurate (but perhaps inconceivable to admit…). The truth is, both current Atom and next-gen Atom are crucially relying on having a process node advantage in order to compete at all.
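Wilco's "effectively larger I-cache" point is simple arithmetic: the same cache holds more of a denser instruction stream, so fewer fetches go out to L2 or memory. A sketch with illustrative average instruction sizes (the bytes-per-instruction figures are ballpark assumptions chosen to match the 30-40% code-size claim, not measurements):

```python
# Effective instruction-cache capacity for a denser ISA encoding.
# Average bytes/instruction below are illustrative assumptions only.

CACHE_BYTES = 32 * 1024        # a typical 32KB L1 instruction cache

avg_insn_bytes = {
    "Thumb-2": 2.7,            # mostly 16-bit encodings, some 32-bit
    "x86":     3.7,            # variable length, commonly 2-6 bytes
}

for isa, size in avg_insn_bytes.items():
    print(f"{isa}: ~{CACHE_BYTES / size:,.0f} instructions cached")

ratio = avg_insn_bytes["x86"] / avg_insn_bytes["Thumb-2"]
print(f"Thumb-2 fits ~{ratio - 1:.0%} more instructions in the same cache")
```

With these assumed densities the same 32KB cache holds roughly 37% more Thumb-2 instructions, which is the sense in which a denser encoding makes the I-cache "effectively larger".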

  9. Just to clarify a few points :
    The 10 to 15% is the percentage of the IC power budget used to push data around INSIDE the CPU proper. More power is used to move data to and from L2 cache, memory and peripherals, but my argument is that in a high-performance smartphone these will be very similar no matter what the processor is – Intel, ARM, MIPS, Renesas, SPARC…
    Of course this number varies with phone usage: when watching a video, power consumption is dominated by the GPU and memory bus, whilst when actually making a phone call the data movement is quite small and the CPU will be a higher percentage. But 10 to 15% is a reasonable average.
    Moving on to Intel’s policy, I think it is pretty settled that Intel won’t be opening their fabs to outsiders for custom designs. They are offering a more or less complete 32nm SoC in Medfield at a reasonable price, whilst gaining enough knowledge to make the 22nm version the true market-entry product. Since Intel will be choosing their customers, they will be able to set an acceptable price without killing their PC pricing strategy. They won’t be making a PC level of profit, as they will have to compete with Qualcomm’s prices, but they won’t be bleeding to death either: the fab will have gone through its most expensive start-up phase and be some way down the depreciation curve on PC processors before SoCs are thrown at it. TSMC, on the other hand, have to depreciate their fabs primarily on ARM-based SoCs. And an Intel fab and a TSMC fab cost more or less the same to build and run.
    Finally on PC processor performance, rule 1 is a PC is never fast enough 🙂 But in any case the server market is totally dominated by Intel and the growth here is massive, driven mostly by smartphone usage of course.

  10. Thanks Xavier, interesting figures.

  11. Not to be forgotten in the ARM-based vs Intel-based competition landscape is this observation from David: “…there’s a wild card of course – FD-SOI”. Like FinFET, it relies on fully depleted transistor technology, but it retains a planar transistor architecture, creating an opportunity for chip makers to close the power/performance gap while leveraging existing designs and process technologies, already at 28nm and then at 20nm. STMicroelectronics will have samples this year using 28nm FD-SOI and claims impressive benefits, such as over 5x the performance of 28nm low-power technology at 0.6V, as well as better peak performance than G-type technology at a fraction of the total power. Their partner STE expects a 35% total power reduction for an ARM-based application processor at maximum performance by moving it to FD at the same 28nm node, giving, for example, an additional four hours of high-speed Web browsing… So overall an interesting wild card indeed.

  12. Yes indeed, Ian, what goes around comes around.

  13. The real problem Intel have is that in the desktop PC space (and soon in the laptop space) CPUs are already good enough for 99% of users, and have been for a couple of years. So offering even more performance or more CPU cores won’t sell new PCs when most people don’t use what they’ve already got.
    The same will happen very soon in laptops; once performance is enough and the CPU no longer dominates the overall power consumption there’s again no reason to replace/upgrade for more performance.
    Then the CPU becomes a commodity where, so long as performance is good enough, price is what matters; and once OSes like Windows break away from the x86 duopoly, Intel (and AMD) will face a real problem as their high-priced CPU business gets eaten alive by low-cost ARM SoC solutions.
    As was pointed out, it’s like high-priced DEC getting killed by the low-cost PC all over again…

  14. Mike, I accept the process-related figures you give. It’s the dynamic power level that Intel’s process advantage must be used to bring down. They will no doubt rely on dual-core to get the performance, and must move to the same level of power-management attention-to-detail that ARM’s licensees have evolved.
    However, I don’t get the 10-15% CPU share of total power argument. Are you perhaps looking at the whole phone, including LCD and radios? Even then I would say 15% is lowish.
    As others have eloquently explained, if Intel can use its process lead to equal or even to slightly better ARM in power/performance, then the focus is on the business model, which will be interesting.
    I’ve a feeling that ARM will be able to defend leadership in the smartphone market, but the interesting battlefield will be tablets, and the crossover territory with ultralight notebooks, where the percentage contribution of CPU consumption is indeed lower, and where it also now looks likely that the x86 version of Windows 8 will have some advantages over the ARM version.

  15. Torben Mogensen

    While I agree that most power goes to driving signals off chip, this is exactly where ARM wins: ARM SoCs are a lot more integrated than Intel’s, so there is less need to push signals off chip. Also, due to the large number of different SoCs (more than a single vendor could support), end-product makers can choose a SoC that pretty much exactly fits their needs, without unused parts of the SoC or extra coprocessors outside it.
    So I can’t see Intel winning this game unless they change their business model to allow other vendors to build SoCs around Intel cores — at a reasonable price. But I can’t really see this happening anytime soon. Intel _may_ agree to tailor SoCs to major customers (if Apple wanted an Intel-based SoC for iPad, Intel would be happy to help), but I can’t see them going for a model as liberal as ARM’s.

  16. I believe the elephant in the room is the business model. Intel’s business model is built on selling processors for hundreds of dollars at 60%-plus GPM with limited competition, while Intel’s ARM-based competitors rely on business models tuned to be profitable at 40% GPM and with ASPs in the tens of dollars. The likes of Nvidia, Qualcomm, TI and so forth – not to mention Taiwanese and Chinese vendors such as MediaTek, MStar and Spreadtrum – are also used to operating under much tougher competition than Intel, because they could not leverage a de facto monopoly for 20 years.
    The risk for Intel is ASP deflation (e.g. a $100 laptop CPU substituted by a $20 tablet SOC) leading to lower % GPM and much, much lower $ GPM. For example, a $100 mobile CPU at 60% GPM yields $60 with which to pay operating costs and generate PFO. A $20 tablet SOC at 40% GPM yields only $8 GPM to carry the business.
    Intel could win some battles but they do not have the business model to win the war and the business model does define to a large extent the capabilities of a firm over the long term.
    Also, as several people have commented before me, Intel might close the gap in terms of technology over the next couple of years, but what is their unique value proposition and advantage in the mobile space? Intel need to do much more than close the gap to become relevant in the mobile and tablet space and to make the kind of money that will keep them in the game over the long term. My bet is that the P&L of Intel’s mobile business will keep losing money, or be marginally profitable at best, for the remainder of this decade (assuming they do not shut it down before the decade ends).
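The margin arithmetic in this comment is worth making explicit, since the whole business-model argument rests on it. A one-liner per case, using the commenter's own illustrative figures:

```python
# Gross-margin dollars per unit: the PC-CPU vs tablet-SoC example
# from the comment above (figures are the commenter's illustrations).

def gross_margin_dollars(asp, gpm):
    """Dollars of gross margin per unit at a given ASP and GPM fraction."""
    return asp * gpm

pc_cpu  = gross_margin_dollars(100, 0.60)   # ~$60 per unit to carry the business
tab_soc = gross_margin_dollars(20, 0.40)    # ~$8 per unit

print(f"PC CPU:     ${pc_cpu:.0f} gross margin per unit")
print(f"Tablet SoC: ${tab_soc:.0f} gross margin per unit")
print(f"shortfall:  {pc_cpu / tab_soc:.1f}x fewer margin dollars per socket")
```

The 7.5x gap in margin dollars per socket is the core of the "ASP deflation" risk: percentage margin can look survivable while absolute dollars to fund R&D collapse.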

  17. I’d just add a couple of points to Stooriefit’s: will Intel open the door to their process technology with toolkits for fabless manufacturers? I think not. Does the $40 SoC fit the Intel business model, which has been focused on ~$300 average-price CPUs? I don’t see it.
    I’d also add that in all the years I’ve been in the computer business, I don’t recall a company which has managed to migrate its architecture (system and ISA) down to a disruptively smaller form factor. As Intel itself has shown, the momentum favors the upward migration path.

  18. Hear, hear, Stooriefit. I heartily concur with everything m’learned and honorable friend has said.

  19. May I refer the house to my written answer of the 19th of April:
    Can the margins on smartphone processors support Intel’s current process R&D + MDF overheads? No.
    Can Intel get enough performance out of x86 architecture (even if it is only 10% of all power consumption) when fabbed on an outside foundry partner’s process? No.
    Will smartphone vendors tie themselves in to a single supplier of processor and leave themselves open to business practices from the PC industry which have been judged to amount to extortion? No.

  20. As I said to David, the MPU process is 20% better than the 32nm process, but Intel will choose a different place in the trade-offs for Silvermont, possibly to the extreme of 50% better power with no speed gain.
    The availability of other GPUs is mentioned by David, but the full conversation with David was specifically to compare ARM on TSMC 28nm with Intel on FinFET 22nm and I stand by the numbers given.
    My general point was that the power consumed in the logic is now significantly less than that dissipated in the inter-block interconnect. There is thus little noticeable difference between instruction sets, and the recent A15 or other additions to the ARM instruction set make little real difference.
    I can happily offer a much more technical analysis but it isn’t going to fit into a Mannerisms post. Feel free to grab me at an industry event if you want to discuss it in more detail.

  21. Thanks Ian, you’re right. The Intel claims are a comparison with Intel’s planar 32nm. I’ve heard TSMC people claim that their 28nm planar process has better power/performance characteristics than Intel’s 22nm FinFET process. This is the key comparison. And if TSMC can get their 20nm process into production this year – as they say they intend to – then the Intel/TSMC process gap won’t be a generation, it’ll be six months. And there’s a wild card of course – FD-SOI.

  22. It looks as if it’ll be ARM’s architecture vs Intel’s process, David. The proof of the pudding will be in the eating – sometime in 2013 when Atom-based SOCs for handsets come out.

  23. This is not a proper technical evaluation.
    Let’s take one sentence: “Finally the video data has to be transferred onto the display and for all the hype ARM’s Mali…”
    It isn’t the job of the GPU to transfer video data onto the display. Also, ARM-based processors don’t have to use ARM’s Mali – they usually use GPUs from third parties.
    The overall picture of the CPU shifting data around is not what application processors are about. In layman’s terms (and at best that’s what this post reflects; at worst you might call it pseudo-techno-speak), they are servicing a demanding OS which in turn provides the APIs that allow the device applications to do the things that put the “smart” in smartphone.
    The “magic dust” comments fail to offer any technical analysis of ARM’s ever-expanding instruction set as it pushes on to A15 and beyond.
    At least Warren East’s assertions are borne out by the status quo. Today, ARM processors in smartphones are much more power-efficient than Intel’s. Fact. A worthwhile analysis would consider whether Intel can overcome this disadvantage with 22nm FinFETs versus the best available 28nm ARM cores. If it can, Intel could seize the initiative.

  24. With all due respect to Mike Bryant, I’m not convinced by his numbers. I don’t think they compare FinFET and planar at the same geometry (I think Intel compared 22nm FinFET with 32nm planar, i.e. two successive Intel process technologies). The 20% difference ARM quoted is much more realistic: it’s about one process generation at the same geometry.
    Intel are good at doing high-speed, high-power CPU cores *and peripherals*; it’s where their real expertise lies and why they win in the PC market. For an SoC solution you need good power efficiency in both the CPU cores and all the peripheral circuits which, as Mike says, do all the moving of data around, and ARM (and all their partners) have done much work over the years to keep the power of these down as well.
    So the comment that “they’re a lot more than 20% less efficient than ARM” is probably still true overall for CPU plus SoC peripherals.
    But Intel are catching up, and if they do have a one-generation process lead, this is certainly not to be ignored – or over-hyped 😉
