
Artificial intelligence “invades” chip manufacturing

Artificial intelligence (AI) is currently transforming many industries, and one interesting phenomenon is that AI is helping to advance the very chips that run AI. As early as June 2021, Google used AI to design its TPU chips. Google says AI can complete chip design work in less than six hours that would take a human designer several months. A review in Nature called the research an “important achievement” and noted that such work could help offset the end of Moore’s Law. Nvidia has likewise begun using AI to improve and accelerate GPU design, and Samsung has also discussed using AI to design chips.

But chip design is far from AI’s only application in semiconductors. AI technology is penetrating more core parts of the chip industry, and in manufacturing, a key link in the industry chain, AI is also quietly making its presence felt.

In chip manufacturing, yield faces a growing test

Almost all of today’s applications, including 5G, IoT, automotive, and the data center, are built on chips that deliver higher performance, lower power consumption, and greater computing power. Demand for chips has surged while supply has failed to keep pace, so improving the yield of existing products is widely recognized in the industry as an effective remedy.

Improving yield, however, poses a great challenge to chip designers and manufacturers alike.

Manufacturing is a key link in the semiconductor industry chain. The overall process is divided into eight steps: wafer processing, oxidation, photolithography, etching, thin-film deposition, interconnection, testing, and packaging, and each step in turn involves hundreds of individual process operations. A production cycle often takes two to three months and generates a huge amount of data covering a wide range of parameters; any small variation can affect the yield of the finished chip.

Following Moore’s Law to ever more advanced process nodes is one of the most effective ways to achieve high-performance computing chips, and it remains the direction the industry chases. But as processes advance to 5nm and 3nm, design complexity increases geometrically, production flows keep lengthening, and manufacturing becomes extremely complex and precise, making yield very hard to maintain. According to semiconductor equipment giant Applied Materials, the number of process steps in chip manufacturing increased by 48% from 2015 to 2021, and the baseline yields of advanced nodes are also lower than those of mature nodes.

In commercial semiconductor production, yield is directly tied to chip output, production cost, and corporate profitability. Yet it is becoming increasingly difficult to improve PPA (power, performance, area) through process technology alone, and from a price/performance standpoint, tape-outs are becoming so expensive that only a very small number of chip companies can afford them.
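To see why yield dominates cost, consider the classic Poisson die-yield model, in which yield falls exponentially with die area and defect density. The sketch below is a minimal illustration in Python; the defect density, wafer cost, and die count are invented round numbers, not figures from the article.

```python
import math

def die_yield(defect_density: float, die_area_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-D0 * A).

    defect_density: killer defects per cm^2 (D0)
    die_area_cm2:   die area in cm^2 (A)
    """
    return math.exp(-defect_density * die_area_cm2)

def cost_per_good_die(wafer_cost: float, dies_per_wafer: int, y: float) -> float:
    """Wafer cost spread over only the dies that pass."""
    return wafer_cost / (dies_per_wafer * y)

# Illustrative numbers only (not from the article):
y = die_yield(defect_density=0.1, die_area_cm2=1.0)   # ~0.905
print(f"yield = {y:.1%}")
print(f"cost per good die = ${cost_per_good_die(17000, 600, y):.2f}")
```

Because cost per good die scales as 1/Y, even the gap between 90% and 60% yield means a 1.5x difference in unit cost, which is why fabs chase every fraction of a percent.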

Therefore, improving chip yield while remaining economically viable demands a multi-pronged search for innovative methods. In today’s highly automated era, introducing technologies such as AI and machine learning to drive the chip manufacturing process and improve yields will in turn help close the gap between the supply of and demand for computing power.

AI’s strong presence

Chip manufacturing is one of the most expensive production processes in the world, and chip yields can determine the success or failure of fab operators such as Intel, Samsung, and TSMC. These companies go to great lengths, investing heavily to keep their fabs running 24/7, to maximize long-term profits.

Semiconductor manufacturers rely on scanning, testing, and diagnostics to support failure analysis and solve yield problems, and back-end defect inspection is undoubtedly a major “gatekeeper” for improving chip yields. Most advanced SoCs are now built on very small process nodes, some even using EUV lithography, which makes it harder for manufacturers to locate tiny faults and defects on the chip. When manufacturing 3D structures and performing complex multi-patterning steps, small variations can accumulate into yield-killing defects, and if their detection is delayed, all subsequent process steps are essentially a waste of time and money. The longer it takes to detect defects, the more money is lost.

To address this industry challenge, semiconductor equipment supplier Applied Materials has brought artificial intelligence into wafer inspection. It began developing the Enlight system with its ExtractAI technology in 2016 and introduced the next-generation Enlight optical wafer inspection system, which incorporates big data and AI, in 2020. The Enlight system can map millions of potential defects on a wafer in less than an hour.

Applied Materials says that by combining Enlight optical inspection, ExtractAI technology, and SEMVision eBeam review, it has solved the hardest inspection challenge: distinguishing yield-affecting defects from noise while learning and adapting to process changes in real time. By generating big data, the Enlight system also cuts the cost of capturing critical defects by a factor of three. Fabs can thus receive more actionable data faster than ever, resulting in lower cost of ownership, faster yield ramps, and faster time to market. These latest toolsets are already installed in multiple fabs, where they are being used to accelerate the yield ramp of the newest technologies.
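Applied Materials has not published ExtractAI’s internals, but the task it describes, separating yield-relevant defects from nuisance noise using labels confirmed by e-beam review, maps onto a standard supervised classification setup. The sketch below is a minimal Python illustration under that assumption; the features, data, and model choice are invented, not the actual system.

```python
# Hypothetical sketch: classifying wafer-inspection signals as
# yield-killing defects vs. nuisance noise. Feature names and data
# are invented; this is not Applied Materials' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [size_nm, brightness, aspect_ratio, local_density]
n = 2000
X = rng.normal(size=(n, 4))
# Synthetic ground truth: larger, denser signals are more often killers.
y = (0.8 * X[:, 0] + 0.6 * X[:, 3] + rng.normal(scale=0.5, size=n)) > 0.5

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"holdout accuracy: {clf.score(X_test, y_test):.2f}")

# In production, e-beam review (e.g. SEMVision) would supply verified
# labels, and the model would be retrained as the process drifts.
```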

Applied Materials said Enlight is the first system in its product line to use artificial intelligence to improve production processes, with more AI-enhanced systems in the pipeline.

Inspection equipment improves yield in the later stages of manufacturing. If the necessary measures can instead be taken at the physical design stage of IC development, gradually shifting yield control to the front end to ensure the design can be manufactured accurately, yield improves and defects that would otherwise surface after delivery to the customer are prevented. The industry calls this DFM (Design-for-Manufacture), a concept that exists in almost every engineering discipline.

On the DFM side of chip design, EDA vendors are working to integrate various AI capabilities into the tool flow.

For example, Siemens EDA’s Calibre SONR tool embeds a TensorFlow-based machine learning engine, making the EDA tool run faster by bringing parallel computing and ML into the flow. The Calibre product line continues to expand, with complementary products that genuinely extend from the design side of the chip all the way to manufacturing. This not only helps designers complete physical verification and hand off designs with confidence, but also dramatically improves tape-out yield, shortens time to market, and accelerates innovation.

Simulation has always been a pain point for chip designers. With the development of advanced processes and ultra-low-voltage requirements, the field faces huge volumes of data, long timing-library extraction times, brute-force corner enumeration that is far too slow, and STA tools whose internal interpolation is not accurate enough. Machine learning offers a different route: by analyzing the existing databases with big-data methods and interconnecting multiple response-surface models, a multi-dimensional model can be built and used to infer a new database at a given corner. Measured against SPICE simulation or interpolation, such an approach is a generational leap, with large advantages in both speed and accuracy. The Solido machine learning technology introduced by Siemens EDA can accelerate the extraction of a single timing-library file by nearly 100x (compared with the traditional SPICE approach) and speed up overall timing-library extraction by 2 to 3x, while keeping accuracy within acceptable limits.
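Solido’s implementation is proprietary, but the underlying idea of fitting a model over already-characterized corners and inferring a new one can be shown with a small regression sketch. Everything here, the corner data, the feature normalization, and the choice of a Gaussian-process model, is an illustrative assumption.

```python
# Hypothetical sketch: predict cell delay at an uncharacterized PVT
# corner from delays already characterized at other corners.
# Data, features, and model choice are illustrative assumptions,
# not Solido's actual method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Characterized corners: (voltage V, temperature degC) -> delay (ps)
X_known = np.array([[0.9, -40], [0.9, 125], [1.0, -40],
                    [1.0, 125], [1.1, -40], [1.1, 125]], dtype=float)
delay_ps = np.array([112.0, 131.0, 95.0, 108.0, 84.0, 93.0])

# Normalize features so the kernel length-scales are comparable.
mean, std = X_known.mean(axis=0), X_known.std(axis=0)
Xn = (X_known - mean) / std

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-3)
gp.fit(Xn, delay_ps)

# Infer a new corner (0.95 V, 85 degC) without running SPICE there.
x_new = (np.array([[0.95, 85.0]]) - mean) / std
pred, sigma = gp.predict(x_new, return_std=True)
print(f"predicted delay: {pred[0]:.1f} ps  (+/- {sigma[0]:.1f})")
```

The model’s predictive uncertainty is also useful in practice: corners where the predicted error band is wide are the ones worth sending back to full SPICE characterization.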

Verification, too, is becoming increasingly complex and difficult as SoCs grow, and it is taking up an ever larger share of chip development, because such a heavy verification effort must be guaranteed correct to ensure a successful tape-out. This challenge can also be handed to AI: in Siemens EDA’s OneSpin, machine learning automatically selects solver strategies to carry out the proofs of assertions in formal verification.
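OneSpin’s mechanism is not publicly documented in detail, but “using machine learning to select a proof strategy” can be pictured as a classifier trained on past solver runs that chooses which engine to try first for each new assertion. The engine names, features, and data below are invented for illustration.

```python
# Hypothetical sketch: learning which formal-proof engine to try first
# for a given assertion, based on features of past runs. Engine names,
# features, and data are invented; OneSpin's actual mechanism is not public.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

ENGINES = ["bmc", "induction", "ic3"]

# Features per assertion: [cone_of_influence_size, num_flops, num_constraints]
X_history = np.array([
    [120,  40, 2], [3000, 900, 10], [450, 150, 4],
    [80,   25, 1], [5200, 1400, 12], [600, 210, 5],
], dtype=float)
# Label: index of the engine that solved each assertion fastest.
best_engine = np.array([0, 2, 1, 0, 2, 1])

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_history, best_engine)

new_assertion = np.array([[700.0, 260.0, 6.0]])
choice = ENGINES[model.predict(new_assertion)[0]]
print(f"try engine '{choice}' first")
```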

As processes and designs move forward and the root causes of yield loss grow more complex, fault-isolation techniques are being challenged, and improving diagnostic resolution has become a top priority for shortening the yield ramp. Here, the layout-aware and cell-aware technologies of Siemens EDA’s Tessent Diagnosis, combined with Tessent YieldInsight’s unsupervised machine learning technology known as Root Cause Deconvolution (RCD), can find the most likely defect distribution and prune low-probability suspects, improving both resolution and accuracy. The technique is already used by Lattice, UMC, and SMIC to quickly pinpoint the exact root causes of defects and achieve rapid yield improvement.
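Siemens describes RCD as unsupervised learning over volume diagnosis data: each failing die’s report lists several candidate root causes, and the algorithm estimates which causes actually dominate the population. The sketch below illustrates that idea with a simple expectation-maximization loop; the root-cause names, data, and details are invented and do not reproduce the published RCD algorithm.

```python
# Hypothetical sketch of the idea behind Root Cause Deconvolution:
# each failing die's diagnosis report lists several candidate root
# causes; an EM-style loop estimates which root causes actually
# dominate across the population. Data and details are invented.
import numpy as np

ROOT_CAUSES = ["open_M2", "bridge_M3", "cell_X_defect"]

# Each report: indices of the root causes it considers plausible.
reports = [[0, 1], [0], [0, 2], [1, 2], [0, 1], [0]]

p = np.full(len(ROOT_CAUSES), 1.0 / len(ROOT_CAUSES))  # uniform prior
for _ in range(50):
    counts = np.zeros_like(p)
    for suspects in reports:
        w = p[suspects] / p[suspects].sum()  # E-step: responsibility
        counts[suspects] += w                # soft assignment
    p = counts / counts.sum()                # M-step: re-estimate mixture

for name, prob in zip(ROOT_CAUSES, p):
    print(f"{name}: {prob:.2f}")
```

The loop concentrates probability on causes that keep appearing across many reports, which is the sense in which low-probability suspects get pruned.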

As these examples show, with AI/ML technology, EDA tools are becoming ever more powerful weapons against yield challenges.
