SoC design verification at all levels is key, says Cadence

Verification technology has advanced significantly over the last few years. With today’s SoC designs, verification needs to be done at all levels in parallel, from the system down to the silicon, moving up and down the levels as appropriate, writes Paul McLellan, Cadence Design Systems.

The key verification technologies are formal approaches, simulation, virtual platforms, emulation and FPGA prototyping. Each technology serves a unique purpose. However, based on our system design approach, a complete verification strategy for a large system on chip (SoC) should make use of all of them.

Adapting Verification Techniques as Your Design Progresses

As a design progresses, the appropriate verification techniques change; they also depend on the type of design. Some designs are amenable to starting at a very high level of abstraction, where verification consists of running C code.

This approach works well for many visualisation algorithms, for example, which can then be compiled to RTL using high-level synthesis.
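As a purely illustrative sketch (the function name and weighting constants are not taken from any specific flow), the kind of C code involved might be a simple pixel-processing kernel like the one below; a real high-level synthesis flow would add tool-specific pragmas and interface directives on top of it.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative C-level reference: convert an interleaved RGB image to
     * 8-bit greyscale using integer arithmetic only. Code in this style can
     * be run and checked at the C level, then compiled to RTL with
     * high-level synthesis, with the same function kept as the golden model
     * when simulating the generated RTL. */
    void rgb_to_grey(const uint8_t *rgb, uint8_t *grey, size_t pixels)
    {
        for (size_t i = 0; i < pixels; i++) {
            /* Fixed-point approximation of 0.299*R + 0.587*G + 0.114*B */
            uint32_t r = rgb[3 * i + 0];
            uint32_t g = rgb[3 * i + 1];
            uint32_t b = rgb[3 * i + 2];
            grey[i] = (uint8_t)((77u * r + 150u * g + 29u * b) >> 8);
        }
    }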

Most designs, however, are a combination of RTL brought into the design from an external source (either third-party IP or IP developed by a specialized internal group) and RTL blocks created by the design team itself.

Obviously a fair amount of verification can be done on IP blocks. If those blocks are external interfaces such as USB or DDRx, there is usually verification IP (VIP) available to check that the block implements the protocol correctly.
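To illustrate the idea rather than any particular commercial VIP, the sketch below is a toy C++ checker for a hypothetical request/acknowledge handshake: every request must be acknowledged within a fixed number of cycles, and an acknowledge must never appear with no request outstanding. Real VIP for a protocol such as USB or DDRx encodes a very large number of rules of this kind, plus stimulus generation and coverage.

    #include <cstdio>

    // Toy handshake checker (illustrative only): a request must be
    // acknowledged within max_latency cycles, and an acknowledge must never
    // arrive when no request is outstanding.
    class HandshakeChecker {
    public:
        explicit HandshakeChecker(int max_latency) : max_latency_(max_latency) {}

        // Call once per clock cycle with the sampled req/ack values.
        void sample(int cycle, bool req, bool ack) {
            if (ack && !outstanding_) {
                std::printf("cycle %d: ack with no outstanding request\n", cycle);
            }
            if (ack) {
                outstanding_ = false;
            } else if (outstanding_ && cycle - req_cycle_ > max_latency_) {
                std::printf("cycle %d: request from cycle %d not acknowledged in time\n",
                            cycle, req_cycle_);
                outstanding_ = false;  // report once, then re-arm
            }
            if (req && !outstanding_) {
                outstanding_ = true;
                req_cycle_ = cycle;
            }
        }

    private:
        int max_latency_;
        bool outstanding_ = false;
        int req_cycle_ = 0;
    };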

Internally created blocks rarely come with true VIP. The design engineer creates the block, and specialized verification engineers do the actual verification. The primary tool for this is simulation of the RTL, complemented by formal verification.

Formal Now Much Easier to Use

Formal has an increasingly important part to play, partly because it has become much easier to use and no longer requires PhD-level knowledge. That, in turn, is because formal proving engines have improved so much in power and performance.

Some blocks are especially appropriate for formal techniques. Blocks related to the security of encryption keys, for example, really need the certainty that comes with a formal proof rather than a “coverage looks good” verdict, which is the best that simulation can offer.

As Edsger Dijkstra famously said, “Testing shows the presence, not the absence, of bugs.” Dijkstra was, of course, speaking about software development, but the point applies here too: simulation has the same limitation, whereas formal can actually prove the absence of bugs.
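As a purely illustrative example (the signal names are hypothetical), a security property of the kind a formal tool can prove exhaustively might be written in temporal logic as:

    \mathbf{G}\,\bigl(\mathit{key\_loaded} \rightarrow \lnot\,\mathit{debug\_readable}\bigr)

That is, in every reachable state, once a key has been loaded it is never visible through the debug interface. Simulation can only show that this held on the cycles that were actually exercised; a formal proof shows it holds in every reachable state.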

Eventually, verification needs to move to a software-based approach, where the actual software stack that will run on the system is used both to test the software and to verify that the hardware runs it correctly.

This requires emulation since simulation is too slow to boot even a small operating system (never mind a behemoth like Linux or Android). With access to virtualized devices, the SoC can even be tested in its environment, writing and reading data to and from the real world.
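At the lowest level, such software-based verification often starts with bare-metal tests of the kind sketched below, which simply writes patterns to a memory-mapped peripheral register and reads them back; the register address and name are hypothetical, and in practice the memory map comes from the SoC specification.

    #include <stdint.h>

    /* Hypothetical memory-mapped scratch register of an on-chip peripheral;
     * the address is illustrative only. */
    #define PERIPH_SCRATCH_REG ((volatile uint32_t *)0x40001000u)

    /* Minimal bare-metal check run on the emulator (or FPGA prototype) long
     * before the full operating system boots: write a pattern, read it back
     * and compare. Returns 0 on success, or the index of the failing
     * pattern plus one. */
    int scratch_register_test(void)
    {
        const uint32_t patterns[] = { 0x00000000u, 0xFFFFFFFFu,
                                      0xA5A5A5A5u, 0x5A5A5A5Au };
        for (unsigned i = 0; i < sizeof(patterns) / sizeof(patterns[0]); i++) {
            *PERIPH_SCRATCH_REG = patterns[i];
            if (*PERIPH_SCRATCH_REG != patterns[i]) {
                return (int)i + 1;
            }
        }
        return 0;
    }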

Another approach, which offers the highest performance of all but is still a little tricky to use, is FPGA prototyping. This creates a device that is as close to the SoC as it is possible to get without actually taping out the chip (which is not normally done as part of the design process). The real software can be run, and the system runs as close as possible to its real performance level.

With all of these verification technologies, either everything runs as expected or something anomalous happens. In the latter case, the verification engineer needs to dig into the details to work out why. This is verification debug.

Why Debug is One of the Fastest Growing Parts of Verification

Debug is one of the fastest-growing parts of verification because, when an error is detected in a large SoC, getting to the root cause is incredibly difficult. The larger the SoC, the more difficult it is. Even determining whether an error is in the software or the silicon is not straightforward.

Another driver of this part of the market is the increasing ability to run a huge simulation (or more likely emulation) and save what occurred. The design can then be investigated offline without requiring constant resimulation or re-emulation.
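As a simple illustration of that offline workflow, the sketch below scans a text dump of the saved run, assumed here to be one “cycle signal value” triple per line (an illustrative export format, not any particular tool’s database), and reports the first cycle at which a chosen signal changes value.

    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    // Scan a saved trace (one "cycle signal value" triple per line, an
    // illustrative export format) and print the first cycle at which the
    // named signal changes value. Offline analysis like this avoids
    // re-running the simulation or emulation just to answer one question.
    int main(int argc, char **argv)
    {
        if (argc != 3) {
            std::cerr << "usage: first_change <trace-file> <signal>\n";
            return 1;
        }
        std::ifstream trace(argv[1]);
        std::string line, prev_value;
        bool have_prev = false;
        while (std::getline(trace, line)) {
            std::istringstream fields(line);
            long cycle;
            std::string signal, value;
            if (!(fields >> cycle >> signal >> value) || signal != argv[2]) {
                continue;
            }
            if (have_prev && value != prev_value) {
                std::cout << "first change of " << signal
                          << " at cycle " << cycle << "\n";
                return 0;
            }
            prev_value = value;
            have_prev = true;
        }
        std::cout << "no change of " << argv[2] << " found\n";
        return 0;
    }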

In summary, delivering an optimized SoC today requires verification to be done at all levels, from the system down to the silicon. As your design progresses, you need to select the best option from a portfolio of verification technologies.

 

