From C Code To GDSII In 20 Weeks

Silicon Valley start-up Algotochip claims to reduce SoC design to 8-16 weeks by directly converting customers’ C algorithms into an optimal IC implementation with unprecedented power savings.

Algotochip’s CEO, Satish Padmanabhan, says that the company has turned an LTE design from Germany’s MimoOn from C code into final GDSII in 12 weeks.

Depending on the complexity of the application, the company says it takes 8-16 weeks to create GDSII from C code.


The company is currently on the hunt for funding, said Padmanabhan. Its customers are in the consumer industry.


Algotochip answers the question posed by Altera’s Ron Wilson last year: “Whatever became of the idea that we could define an embedded system in C, push the Compile button, and watch the tool spit out a complete hardware and software design system?”


Algotochip claims to do all the work in creating the hardware, firmware and software from a customer’s C-code. A customer does not need to use, or possess any knowledge of, Algotochip’s technology and tools.
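To make the input side concrete, here is a hypothetical sketch of the kind of C kernel a customer might hand over: a small fixed-point FIR filter, the sort of self-contained signal-processing loop that C-to-silicon flows typically consume. The function name, tap count and Q15 format are illustrative assumptions, not Algotochip’s actual interface.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: a generic fixed-point DSP kernel, not Algotochip's
 * real input format. NUM_TAPS and the Q15 scaling are arbitrary choices. */
#define NUM_TAPS 4

/* 4-tap FIR filter with Q15 coefficients: accumulate products in 32 bits,
 * then shift right by 15 to return a Q15 result. */
int16_t fir_q15(const int16_t history[NUM_TAPS],
                const int16_t coeff[NUM_TAPS])
{
    int32_t acc = 0;
    for (size_t i = 0; i < NUM_TAPS; i++)
        acc += (int32_t)history[i] * coeff[i];
    return (int16_t)(acc >> 15);  /* scale back to Q15 */
}
```

A flow of this kind would map the multiply-accumulate loop to a datapath and the tap storage to registers or memory, which is why bounded loops and fixed-point arithmetic tend to synthesize far better than dynamic memory or recursion.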


The resulting solution is optimized for target implementations so, for the same application, the appropriate RTL will be generated for each target.


There is no need for Algotochip’s customers to license any other third-party IP cores for the SoC, says the company. The IP in a customer’s C code is realized directly in silicon.


The company claims its Power-Aware Architecture offers ‘superior’ dynamic and leakage power consumption. Customers can choose their own process node and foundry.




  1. But Ian, transistors are free. Someone somewhere said that I’m sure.
    NOT !!!

  2. I agree that tools like CatapultC and Forte reduce the time to RTL; how efficient the result is depends on how well the coder understands the implications on the back-end design of choices made at the architecture level.
    It may be that the fact that the tools are quicker/easier to use than RTL coding means that the designers using them are less experienced at spotting these consequences than designers who understand how to code efficiently, or it may be that the tools just don’t generate as good code as an experienced RTL coder — I don’t know, but our experience is that the gate count, area and power are bigger.

  3. Hello,
Compilation from C code (or SystemC) has existed for a while: CatapultC from Mentor Graphics or Forte Cynthesizer, for example. I have tried the latter and it proved efficient when designing IPs that process data-paths.
There are also custom CPU/DSP design tools that are efficient for implementing complex audio-video algorithms or software-defined radio.
All in all, after an initial learning curve, you gain 50% on development time for many complex digital IPs. But this is more “C to IP”: the “C to GDSII” concept is purely marketing. Besides, these tools are rather tricky to use, and once converted to RTL, just forget the idea of making a metal fix for a potential bug.

  4. Processors are fine if you want/need flexibility, but they are pretty much *never* lower-power than hard-coded logic for known compute-intensive applications like signal processing, regardless of what Icera say — they may be lower power than FPGA, but nobody would use FPGAs in power-sensitive applications, or even cost-sensitive high-volume ones.
    The exception is if you’ve got a lot of varied code each bit of which only executes occasionally, in this case the complexity of the logic grows exponentially and a processor is usually better.
    And if you need the flexibility to change algorithms afterwards a logic solution is obviously not the best 😉
    Neither is it if you can’t afford the massive NRE or development for a 40nm/28nm/20nm custom chip, even though you might *really* want one…

  5. The problem with this sort of approach is that it converts the C algorithm into hard gates, which implies that the designers knew what they wanted in the first place. In today’s complex systems it makes a lot more sense to design a processor for the class of application and then just write C code that runs on that. This can also work out more power efficient as well (ask the ex-Icera guys about that).

  6. Thanks Ian.

  7. Having seen similar ASIC designs from multiple customers using different design flows, our experience is that C-to-silicon type flows deliver bigger higher-power chips in (maybe) shorter timescales.
    And even then the overall timescales in 40nm and below are still dominated by the back-end design (RTL to silicon) not the front-end design (architecture/algorithm to RTL). So if the C-to-silicon approach gives higher gate count the back-end takes even longer, so you probably don’t save any time overall.
    Note that this is for very large designs (50M gates and up), the same design time scaling may not apply for smaller designs where time-to-market is highest priority — but larger chip size is not usually a price that can be paid in consumer designs which are cost-critical.
    Of course it’s always possible that they’ve made some real breakthrough, but then this has been claimed many times over the years 🙂

  8. “Its customers are in the consumer industry.”
    And so can amortize the probably very high NRE over millions of units. Worth it if it reduces the silicon size and/or reduces time-to-market significantly.

  9. I am surprised you are naive enough to believe such a silly claim. The result of this C-to-chip push-button approach will be, as usual, unpredictable in terms of quality of results, and will anyway dramatically underperform human design … the sole purpose of such a claim is definitively to lure investors … I give you a rendez-vous in a few months … and you will see that this “technology” will not take the world by storm.
    This is not the first time (and probably not the last time either) such a claim has been made. By comparison, PCM is almost a promising technology, if you see my point.
    C to VHDL is clearly another of these techno Ponzi schemes. Welcome to Techno Bubble 2.0.
