IC Design: Preparing for the next node

David Abercrombie & Michael White

INTRODUCTION

Despite all the rumors of Moore’s law dying or falling behind, it seems that most of the semiconductor industry continues to push forward to new process nodes and increasingly complex designs. As a result, companies find themselves in an almost never-ending mode of preparing for and transitioning to the next node.

For foundries, that preparation centers on new devices, new process tools, and new process flows. At the same time, they must ensure that qualified design enablement tools and decks will be available to their customers. Design companies focus on defining circuit functionality and performance targets, while also ensuring they have and are ready to use the design software and hardware they need to enable design signoff in reasonable turnaround times.

Although rarely discussed, the electronic design automation (EDA) industry is also constantly in a state of preparing for the next node. All those new process technologies and new design functionality add up to ever-increasing pressure for more automation in verification using a set of foundry-qualified tools, all while maintaining the highest level of accuracy without driving up runtimes. We take an inside look at the challenges of next node development, and how Mentor, a Siemens Business, works to prepare the Calibre nmPlatform for each “next node.”

THE COMPUTATIONAL CHALLENGES OF THE NEXT NODE

The classic metric for measuring the forward momentum of the semiconductor industry is the integrated circuit (IC) transistor count in a design. Moore’s law describes the empirical observation that, historically, the transistor count per IC has approximately doubled every two years. Lately, there always seem to be voices in the background claiming that Moore’s law is dying, but the empirical evidence continues to show otherwise. Figure 1 shows the latest composite graph of the transistor count of the most well-known IC chips over time. The data shows a consistent slope of increase throughout four and a half decades, with the most modern chips pushing close to 20 billion transistors.

Figure 1: IC transistor count over time.

Design rule checking (DRC) complexity is directly proportional to the number of polygons in a design. While transistor count has a direct impact on the front-end-of-line (FEOL) layer polygon counts, it doesn’t, by itself, account for the total increase in overall polygon count. The middle-of-line (MOL) and back-end-of-line (BEOL) layers not only show increases in polygon count per layer, but advanced process nodes typically require additional interconnect layers. These multiple sources of additional polygons mean verification tools like the Calibre suite must contend with a polygon processing count that grows even faster than the Moore’s law rate of increase.

Of course, the design rules associated with a foundry process are not just a function of the total number of layers in a design. The types of issues that must be checked on any given layer have also increased over time, as more complex, context-aware, and variation-sensitive design components and process techniques have been incorporated into the most advanced process nodes. Figure 2 shows just some of the new process techniques and design sensitivities that require not just more checks, but entirely new types of checks. When the increase in layer count is paired with the increase in the types of checks now required, the graph demonstrates how both the number of design rules and the operations needed to implement those rules have increased from process node to process node. Each check requires many lines of code to implement, so the average DRC operation line on the chart illustrates the number of steps the software actually has to execute to properly check a design.

Figure 2: New functional requirements and DRC rule/code complexity by process node.
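To make “operations per check” concrete, consider what even a basic spacing check involves. The sketch below is a conceptual illustration in Python using the shapely geometry library, not Calibre code; the layer data and the 0.05 µm threshold are invented for illustration.

```python
# Conceptual sketch of a single DRC spacing check (NOT Calibre code).
# Uses the shapely geometry library; layer data and the 0.05 um
# threshold are invented for illustration.
from itertools import combinations
from shapely.geometry import Polygon

MIN_SPACE = 0.05  # hypothetical minimum metal1 spacing, in microns

# Toy "metal1" layer: a real design would have millions of polygons.
metal1 = [
    Polygon([(0.00, 0.0), (0.10, 0.0), (0.10, 1.0), (0.00, 1.0)]),
    Polygon([(0.14, 0.0), (0.24, 0.0), (0.24, 1.0), (0.14, 1.0)]),
    Polygon([(0.50, 0.0), (0.60, 0.0), (0.60, 1.0), (0.50, 1.0)]),
]

def spacing_violations(polys, min_space):
    """Report every pair of shapes closer than min_space.

    Even this toy check implies several operations: pair enumeration,
    a distance measurement, and a threshold compare. Production checks
    add context (same-net filtering, corner cases, run-length
    dependence), each of which is another operation.
    """
    for a, b in combinations(polys, 2):
        d = a.distance(b)
        if 0 < d < min_space:
            yield (a, b, d)

for a, b, d in spacing_violations(metal1, MIN_SPACE):
    print(f"spacing violation: {d:.3f} um < {MIN_SPACE} um")
```

A production implementation would replace the quadratic pair enumeration with hierarchy handling and spatial indexing, and add the context-aware filtering that multiplies the operation count per rule.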

In the end, the compute power and resources needed to verify a modern IC are driven by rule complexity multiplied by the total polygon count of the design. As any math-savvy person will instantly notice, multiplying two exponentially increasing trends produces quite a demanding problem to overcome. At Mentor, we realize that the solution to what can seem an overwhelming challenge lies in thinking outside the box of traditional solutions, and pursuing all possible avenues of expanding and improving the performance and productivity of the toolset. The Calibre team continually adds fundamental new capabilities to the Calibre tool base to provide accurate automated checking of these new and expanded requirements while still enabling companies to meet their go-to-market schedules.
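A back-of-the-envelope model (with invented doubling periods, since the article quotes none) shows why the product of two exponentials is so demanding:

```python
# Back-of-the-envelope model of verification workload growth.
# The doubling periods below are invented for illustration only.
POLY_DOUBLING_YEARS = 2.0   # assumed polygon-count doubling period
OPS_DOUBLING_YEARS = 3.0    # assumed rule-operation doubling period

def workload(years):
    """Relative verification workload: polygons x operations."""
    polygons = 2 ** (years / POLY_DOUBLING_YEARS)
    operations = 2 ** (years / OPS_DOUBLING_YEARS)
    return polygons * operations

# The product of the two exponentials is itself exponential, with a
# doubling period of 1/(1/2 + 1/3) = 1.2 years -- faster than either
# trend on its own.
for t in (0, 2, 4, 6):
    print(f"year {t}: {workload(t):6.1f}x baseline workload")
```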

BOOSTING THE BASICS

The two most obvious components for combating this explosive compute challenge are raw engine speed and memory. Although the Calibre suite has been around for decades in name, the underlying code base is continually optimized, and even completely rewritten, not just to add new functionality, but also to dramatically improve its performance on existing functionality and to take advantage of modern distributed and cloud computing infrastructures.

Figure 3 shows a trend of normalized runtimes for the same Calibre nmDRC runset by software version release. Each data point is an average of 20 real-world customer designs, demonstrating (by holding everything else constant) the isolated improvement of the underlying Calibre engine over multiple software releases. Over this three-year time span, engine speed increased by 80%. This trend is indicative of the way Mentor optimizes performance for all of the Calibre physical and circuit verification tools.

Figure 3: Normalized Calibre engine runtime trend by software release.
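As a hypothetical sketch of how a trend line like Figure 3 can be produced: normalize each design’s runtime against the baseline release and average the ratios. The runtimes and release names below are fabricated, and the geometric mean is an assumption (the article does not state its averaging method), though it is the usual choice for runtime ratios.

```python
# Sketch of computing a normalized-runtime trend. All runtimes and
# release names are fabricated; the geometric mean of per-design
# ratios is an assumed averaging method.
from math import prod

# runtimes[release] = per-design runtimes in hours, same rule deck
runtimes = {
    "v2016.1": [10.0, 22.0, 7.5],
    "v2017.1": [8.1, 16.9, 6.0],
    "v2018.1": [5.6, 12.3, 4.1],
}

baseline = runtimes["v2016.1"]
for release, times in runtimes.items():
    ratios = [t / b for t, b in zip(times, baseline)]
    geo_mean = prod(ratios) ** (1 / len(ratios))
    print(f"{release}: normalized runtime {geo_mean:.2f}")
```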

Memory usage is also a key aspect of improving tool performance. Figure 4 compares two recent Calibre nmDRC versions across six different 7 nm designs. There is a consistent 40-50% decrease in memory usage, achieved by improving the underlying data structures and memory management techniques. Again, this progress is representative of the performance improvements achieved across the Calibre nmPlatform. While the Calibre nmPlatform already leads the industry in memory efficiency, Mentor continually seeks opportunities to make further improvements.

Figure 4: Comparison of recent release-over-release improvement in memory usage for the Calibre nmDRC tool.

In addition to leaving no stone unturned to improve the base engine performance, we must also address the explosive growth in computational requirements by effectively utilizing modern compute environments and distributed CPU resources to increase overall compute power. Mentor invests heavily in this space, constantly pushing the Calibre platform’s capability to effectively scale over larger and larger numbers of CPUs, as shown in Figure 5.

Figure 5: Calibre engine scaling by CPU count. This graph represents a full-chip Calibre nmDRC run for a production customer 16nm design and foundry rule deck.
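Why scaling to ever more CPUs is hard is captured by Amdahl’s law: any serial fraction of the run caps the achievable speedup. The sketch below assumes a 5% serial fraction purely for illustration; the article does not publish Calibre’s actual figure.

```python
# Amdahl's-law view of distributed DRC scaling. The 5% serial
# fraction is an assumption for illustration only.
SERIAL_FRACTION = 0.05  # assumed non-parallelizable share of the run

def speedup(cpus, serial=SERIAL_FRACTION):
    """Ideal speedup on `cpus` processors under Amdahl's law."""
    return 1.0 / (serial + (1.0 - serial) / cpus)

for n in (1, 8, 64, 512, 4096):
    print(f"{n:5d} CPUs -> {speedup(n):6.1f}x speedup")

# With 5% serial work the curve flattens near 20x no matter how many
# CPUs are added, which is why engine work that shrinks the serial
# fraction matters as much as raw distribution.
```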

The Calibre platform has maintained its industry-leading performance advantage through the combination of continuous runtime, memory, and scaling performance improvements in the underlying engine. And yet, even this is not enough by itself to keep up with the overall complexity associated with moving to the most advanced process nodes. Many other not-as-intuitive improvement avenues must also be pursued.

PARTNERING WITH THE FOUNDRIES

Most IC companies probably think that the only people who really care about the foundry adoption of an EDA tool are the marketing and sales folks at the EDA company. On the surface, the table in Figure 6 may look like just such a “marketing advertisement.” However, on closer examination, this table actually offers a completely different value proposition.

Figure 6: Calibre software adoption by foundry compared to other EDA companies.

The primary key to being ready for the next node is being in lockstep with the foundry as they create that next node. The analysis and development of new checking capabilities takes time. Maturing that new functionality into a verification flow with high performance, low memory, and good scaling takes even longer. An EDA company can’t wait for the foundry to have the next node completely ready before building the needed capabilities into its verification tools. At the same time, it can’t successfully develop robust new functionality based solely on the requirements the foundry has in the early stages of node development. Experience shows that foundries learn as they go during a typical process development cycle. Not only do the actual process requirements change, but the expectations of what the verification tools must be able to do change as a function of actually trying them out on the new design requirements.

That’s why the most valuable return from a foundry partnership is having the foundry use Calibre tools internally as they develop a new process. That real-time, iterative cycle of developing new functionality and evaluating the results with Calibre tools not only helps the foundry fine-tune the design requirements, but also allows Mentor to simultaneously fine-tune and mature our verification tools, long before design customers begin using them. And for mutual customers, the benefit extends beyond this collaborative learning.

The foundry’s “golden” DRC tool is used for the development and verification of test chip intellectual property (IP) through early versions of the sign-off deck. In that process, the DRC tool is used to help define and validate the new process design rules, validate all IP developed by the foundry, and help develop and validate the regression test suite against which other DRC tools will be validated. All of this work, and the thousands of DRC runs on thousands of checks and tens of thousands of checking operations, hones the accuracy of the foundry’s primary DRC tool and helps set the standard for other DRC tools. All of this typically happens months to years before those other DRC tools can even begin their validation. Think of it this way: if the Calibre nmDRC tool were a new car model, it would have 100,000 miles of experience/testing (along with its associated foundry deck) before the other DRC tools even got to the test track. That is truly what it means to be a foundry’s development DRC tool, and the Calibre nmDRC physical verification tool plays that role for all the major foundries.

If the verification software a design house uses is not the same software the foundry itself is using during the development process, then that company won’t have the chance to learn about all these changes and new requirements until sometime after the process node is production-ready, which means its designs will lag the introduction of that process node. That is why, although the Calibre platform is the most fully certified tool suite in the industry, certification is a minor part of what it means for Mentor to partner with the foundries.

One measurable example of this advantage can be seen by tracking the performance of DRC decks across the pre-production release cycle. Physical verification tools are, at their core, a specialized programming language for writing design rule checks. Just as in any programming language, there are efficient and inefficient ways to write that code. Mastering the art of writing good checks takes a lot of skill, and a lot of time. Working directly with the foundry teams that write the design rule decks, and with the teams that use those decks while they are being written, enables Mentor to identify opportunities for coding optimizations throughout the development process. This interactive feedback means that by the time the process goes into production, those decks run significantly faster. Figure 7 shows just how much deck optimization can affect runtime. Version 1.0 (the first full production version) of the deck runs nearly 70% faster than version 0.1, written very early in the process development cycle. This improvement has nothing to do with the verification tool itself; it is entirely the result of implementing best coding practices.

Figure 7: DRC normalized runtimes by foundry release version.
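The flavor of such coding optimizations can be suggested with a hypothetical sketch, written in Python rather than rule-deck syntax: compute an expensive derived layer once and share it across checks, instead of re-deriving it inside every check that needs it. Everything here (layer names, widths, checks) is invented for illustration.

```python
# Hypothetical sketch of a deck-level coding optimization, expressed
# in Python rather than rule-deck syntax; the layer, widths, and
# checks are all invented for illustration.

metal1 = [{"width": 0.3}, {"width": 0.7}, {"width": 1.2}]  # toy layer

def derive_wide_metal(layer):
    """An 'expensive' derived layer: shapes wider than 0.5 um."""
    return [s for s in layer if s["width"] > 0.5]

# Early-deck style: every check re-derives the layer it needs.
def checks_v0(layer):
    wide_a = derive_wide_metal(layer)   # derived for check A...
    wide_b = derive_wide_metal(layer)   # ...and again for check B
    return len(wide_a), len(wide_b)

# Production-deck style: derive once, feed both checks.
def checks_v1(layer):
    wide = derive_wide_metal(layer)     # single shared derivation
    return len(wide), len(wide)

assert checks_v0(metal1) == checks_v1(metal1)  # same answers, less work
```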

This partnership with all the major foundries enables Mentor to offer IC design companies Calibre tools and decks with the best possible performance and highest accuracy achievable.

PARTNERING WITH THE DESIGNERS

An often-overlooked opportunity for combating the complexity of next node design is increasing the productivity of the designers themselves. The key to this opportunity is tool integration and usability. It is easy to become fixated on the basic function of checking in isolation, because that is where the metrics of performance and accuracy reign supreme. However, measuring performance in isolation does not capture the reality of the use model, or the overall turnaround time (TAT) required to reach a clean tape-out. This overall TAT is ultimately the metric that really matters to design companies.

Most companies today have many design teams spread over many parts of the world. They also use a plethora of design implementation tools, and much of their IP is purchased from outside companies and must be incorporated into a single functional chip. The industry clearly understands that each EDA company has a few strong tools, and others that are merely functional. Best-in-class companies use best-in-class EDA flows composed of a heterogeneous set containing the best tool for each verification need. It’s not practical, and ultimately not as cost-effective, to rely on a single-vendor solution.

To enable this optimal flow strategy, EDA tools must prioritize integration over proprietary formats and interfaces. For example, in physical verification, integration includes the ability to configure, launch, review, and debug within the design tool environment, regardless of which design tool is used. For operations like multi-patterning coloring (which includes back annotation of the colors) and fill (in which fill data is generated in the native format of the design tool), stream-out and batch execution are necessary, but not sufficient by themselves for the latest design nodes. The verification tool must also be able to run inside the design tool directly on the design tool database, as if it were a native part of the design tool integrated directly into the toolbar.

Calibre tools have always been built to be design tool-independent. As you can see in Figure 8, the Calibre platform is one of the most universally integrated toolsets in the industry. Not only has this allowed computer-aided design (CAD) teams to build best-in-class collections of design tools, but it enables them to use Calibre interfaces as the glue to piece together entire flows that facilitate and automate fill, multi-patterning coloring, chip integration, and engineering change orders (ECOs), just to name a few.

Figure 8: Calibre integrations into design tools.
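As an illustration of what that flow “glue” can look like, here is an entirely hypothetical batch wrapper; the tool name, command-line options, and report format are invented and do not represent an actual Calibre interface, only the general shape of such scripting.

```python
# Entirely hypothetical flow-glue sketch: the tool name, arguments,
# and report format are invented and do NOT reflect any real Calibre
# interface. It only illustrates the shape of batch integration.
import subprocess
from pathlib import Path

def run_batch_drc(layout: Path, deck: Path, report: Path) -> bool:
    """Launch a batch verification run and return True if clean."""
    subprocess.run(
        ["drc_tool", "-layout", str(layout),      # invented CLI
         "-deck", str(deck), "-report", str(report)],
        check=True,
    )
    # Assume one violation per non-empty line in the report file.
    violations = [ln for ln in report.read_text().splitlines() if ln.strip()]
    return not violations

if run_batch_drc(Path("top.oas"), Path("node7.deck"), Path("drc.rpt")):
    print("clean: ready for the next flow step (fill, ECO, tape-out)")
```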

Designer productivity may begin with integration, but it lives or dies by usability. Usability is more than making it easy to launch jobs and get results. Effectively managing, navigating, and visualizing all of the complex errors that occur in leading-edge design nodes is critical. Isolating fundamental issues in early-stage “dirty” designs can be overwhelming. Trying to separate out errors in chip integration when the IP blocks are still dirty is frustrating. Debugging errors for checks like multi-patterning, delta-voltage, and antennas is extremely difficult.

Calibre has led the industry with innovative capabilities like special debug layers for double patterning debugging, and automated waiver processing for masking out IP errors during chip integration debugging. Features like these have saved designers countless hours of debug time and frustration, and can, in many cases, decrease time-to-market more than improving raw tool performance.

CONCLUSION

The challenges of preparing for the next process node are truly immense for the foundries, the design companies, and the EDA industry. Meeting that challenge takes not only commitment and expertise in the traditional skills of software performance, memory, and scaling, but also skill and experience in partnering with foundries and designers in ways that optimize all the available avenues for overall productivity and performance. The Calibre platform has a track record not only of software leadership, but also of foundry partnerships that have enabled a history of success, and sustain confidence that Mentor will always be ready to meet and overcome the ongoing challenge of the “next node.”

Article Courtesy: www.mentor.com 
