
Broadcom moves from simulation to emulation with Mentor

Chip designs today have more functionality, more black-boxed intellectual property (IP), and shorter tape-out schedules.

Yet they require more design verification than ever, the compressed schedules leave little time for it, and re-spins are more expensive than ever.

Traditional waveform-based debugging is slow and laborious, making it almost impossible to fully verify a design within desired time schedules. There is a pressing need to increase debug and verification productivity.

Typical verification flows lack the speed and capacity to perform system-level verification. Importantly, they also lack the mechanisms to leverage the assertions and coverage effort done with simulation and formal tools (at the block and sub-module levels) in emulation (at the chip and system levels).

As a consequence, there is a great deal of redundancy among the discrete verification suites used at the different design levels, and there is a lack of shared data to allow system integrators to focus on untested functionality. All of this makes it difficult to find coverage holes and impedes expeditious tape outs.

Overcoming these shortcomings requires a single, scalable verification flow and a common test environment that can be shared by designers, verification engineers, and system integrators as a design moves from the block to the system level. It also requires a platform that supports standard languages and formats.

Assertion and coverage-based verification are integral to such a solution, and these must scale as well. Finally, this flow requires sufficient transfer speeds to handle the enormous amount of assertion and coverage data traffic between the emulator and workstation.

Broadcom recently developed such a unified, scalable verification methodology based on the Veloce emulation platform.

In order to test this new scalable test environment, Broadcom needed to make sure it was possible to propagate assertion checkers and protocol monitors from the block to the system level. They ran a test case using a bus functional model (BFM) design, first on a small block and then on the entire SoC. The test case proved that they could take assertions, compile them into the emulator, and verify that they fired accurately.

In so doing, they were able to provide proof-of-concept for their primary goal: the creation of an internal flow to go from simulation verification with assertions to emulation with assertions.

Establishing a single block-to-system-level verification flow

A unified assertion-based flow enables information to be passed from the block to the subsystem and finally to the system integration level. This allows teams to begin measuring verification activity and results at the block level and share the aggregated data all the way up to the system level.

Coverage and assertions play a major role in increasing debug productivity by providing greater functional coverage and immediately informing engineers that they’ve covered a particular function without resorting to slow, traditional debug methodologies.

Broadcom carries out a majority of its design validation at the system level. Our flow starts with simulation and formal equivalence checking at the sub-block and block levels, followed by emulation at the block and chip levels. Broadcom's verification environment is based on the industry-standard SCE-MI (Standard Co-Emulation Modeling Interface), and most of our tests are developed using C++ and SystemVerilog.
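As a hedged sketch of that C++/SystemVerilog split: a transactor on the emulator side can pull stimulus from C++ test code through a DPI-C import, the function-call use model that SCE-MI 2 standardises. All names below (`next_stimulus`, `stimulus_transactor`) are illustrative, not Broadcom's actual infrastructure.

```systemverilog
// Illustrative transactor: the C++ side supplies stimulus through a
// DPI-C import, so the same test source can drive the design in
// simulation and, via the SCE-MI 2 function-call use model, in emulation.
import "DPI-C" context function int next_stimulus(output bit [31:0] data);

module stimulus_transactor (
  input  logic        clk,
  output logic        valid,
  output logic [31:0] data
);
  bit [31:0] next;
  always @(posedge clk) begin
    if (next_stimulus(next) != 0) begin  // non-zero: a transaction is pending
      data  <= next;
      valid <= 1'b1;
    end else begin
      valid <= 1'b0;
    end
  end
endmodule
```

Because the interface is a plain function call rather than a signal-level protocol, the C++ test does not care whether the RTL is running in a simulator or on the emulator.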

Figure 1: The block to system level unified flow.

Because many assertions are written at the block level, we wanted to propagate that effort and verification knowledge to the system level. This means that everyone (designers, verification engineers, and system integrators) must write assertions and coverage in the same format. It also requires an emulator that can determine what has already been covered in simulation by reading the coverage database, and exclude those areas.
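A minimal sketch of what "the same format" means in practice: a block-level checker written in the synthesizable SystemVerilog Assertions subset, so the identical source compiles in simulation, formal, and emulation. The signal and module names here are hypothetical, not taken from Broadcom's design.

```systemverilog
// Hypothetical block-level checker, kept within the synthesizable SVA
// subset so the same file follows the block from unit simulation up to
// full-SoC emulation.
module handshake_checker (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);
  // Every request must be granted within 1 to 8 cycles.
  property p_req_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:8] gnt;
  endproperty

  a_req_gnt: assert property (p_req_gnt)
    else $error("req not granted within 8 cycles");

  // Coverage in the same format, so it merges into the same database.
  c_req_gnt: cover property (
    @(posedge clk) disable iff (!rst_n) req ##[1:8] gnt
  );
endmodule
```

Attaching such a checker with a `bind` directive, rather than editing it into the RTL, is one common way to keep the design source untouched while the checker travels with the block between levels.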

Building and sharing coverage data in a single format makes the entire verification flow more efficient and focused, delivering faster verification closure.

Making a unified assertion-based verification methodology a reality requires the emulator to possess certain capabilities. The emulator must support a single format, using the same constructs to describe assertions and coverage at every level of the verification flow. The assertions are then compatible from one environment to the next, so the investment in assertions and coverage at one level carries over to the others. This establishes a single environment from simulation, to formal, to emulation, accumulating assertions and coverage along the way.

The emulator should support the Unified Coverage Database (UCDB) and the Accellera Unified Coverage Interoperability Standard (UCIS). This allows the various design and verification teams to accumulate coverage data from various tools, review the coverage metrics at each stage of verification, and exclude redundant coverage.
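For example, a functional covergroup written once in SystemVerilog can be sampled by simulation and emulation runs alike, with each tool recording into the shared coverage database so the cross-level merge is mechanical. The packet fields below are illustrative, not from any real design.

```systemverilog
// Hypothetical coverage collector: the same covergroup is sampled in
// simulation and in emulation, and each run writes its results into a
// UCDB that can later be merged and mined for redundant coverage.
module pkt_coverage (
  input logic       clk,
  input logic       pkt_valid,
  input logic [1:0] pkt_kind,
  input logic [3:0] pkt_len
);
  covergroup cg @(posedge clk iff pkt_valid);
    cp_kind : coverpoint pkt_kind;        // all packet kinds observed
    cp_len  : coverpoint pkt_len {
      bins short_pkt = {[0:3]};
      bins long_pkt  = {[4:15]};
    }
    kind_x_len : cross cp_kind, cp_len;   // kind/length combinations
  endgroup

  cg cov = new();
endmodule
```

Because the bins are defined once, a system integrator reviewing the merged database sees the same bin names regardless of which tool filled them.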

The coverage data and assertions from simulation, formal, and previous emulation runs must be merged into a single UCDB file that can be easily viewed and analysed. With coverage and assertion reports written in one format, it is much easier to determine that enough coverage has been achieved before going to tape-out.

Figure 2: A single standardised coverage database.

When needed, the emulator should automatically add assertion-based checks to capture unexpected behaviour. Because verification performance is critical, design teams cannot afford for these checks to compromise emulation speed.

A high-speed physical link and on-demand bandwidth make it possible to transfer vast amounts of raw coverage data between the emulator and the workstation.

Debug productivity is enhanced when assertions can be synthesised within the emulator. When an assertion fires, the emulator should provide an efficient flow for tracing the error from the system level down to the module level, and then down to the individual assertion.

A shared testbench between simulation and emulation gives us a unified test environment with similar capabilities in both, built on a common internal infrastructure, so we can switch between the two. Veloce also allows us to verify software running on our chips in emulation.

Emulation is becoming more powerful with the acceleration of advanced verification techniques, including support for the Universal Verification Methodology (UVM), so we can take advantage of the expertise and techniques embodied in that standardised methodology. Finally, because the emulator supports our standardised SCE-MI environment, we have the flexibility to use whichever tools and IP are most suitable for a specific project.

Ajeya Prabhakar is senior principal engineer at Broadcom, and Vijay Chobisa is product marketing manager at Mentor Graphics.
