Through many years of research and development, technology computer-aided design (TCAD) has matured to a stage where complex semiconductor processes and devices can be simulated with reasonable accuracy. As we move into the deep-submicron technology era, we are facing many challenges as well as opportunities for the predictive TCAD approach to semiconductor technology development and ultra-small transistor design and optimization.
In this article, we first discuss some of the problems associated with
the use of TCAD in technology development. Then, we introduce some
ideas about an alternative approach, namely, the multi-level TCAD
synthesis approach, which is being carried out as a joint research
project, Project DOUST: Design and Optimization of
Ultra-Small Transistors, with Chartered Semiconductor
Manufacturing, Ltd. (CSM). It is hoped that the successful completion
of the project will be a first step towards the establishment of a Virtual
Fab Foundry (VFF) to provide TCAD services to technology developers,
circuit designers, and device researchers.
THE MYTH OF TCAD PREDICTABILITY AND ACCURACY
The question of whether TCAD is truly predictive has attracted as much attention and interest as confusion and frustration among semiconductor technology developers, TCAD tool vendors, and device researchers. Predictive TCAD, as implied by its very nature, has set an “ultimate standard” that remains an “elusive goal” for modeling semiconductor technologies and devices. The problem stems from an ambiguous definition of predictability and from unrealistic expectations, since in many cases it is expected that a “match” of the TCAD-simulated I–V characteristic (through all process steps and device analyses) to the measured one can be obtained. In fact, there is great potential danger when a “perfect” match is obtained and the model is claimed to be “accurately” calibrated without stating the conditions.
Definition of Predictability and Accuracy
In a sense, the answer lies in how the question itself is asked. By and large, it depends on how much accuracy is required for the target variable being predicted.
Metrology analogy. A similar question is: “can we measure linewidths?” The answer depends on whether you mean a 1-µm or a 0.1-µm linewidth.
Physics analogy. The well-known uncertainty principle states that one cannot precisely determine both the position and momentum (or energy and time) of an object simultaneously. In analogy, the precision of a target to be predicted by TCAD depends on many variables, not just one. “Absolute accuracy,” in general, does not make sense.
Mathematics analogy. In the mathematical definition of the limit of a function f(x), for any arbitrarily small ε > 0, one can always find a positive number δ such that for all values of x satisfying 0 < |x – a| < δ, |f(x) – A| < ε; i.e., f(x) approaches A as x approaches a. This is analogous to the fact that, no matter how accurately the TCAD result matches the measured data, we can always find a better model as we gain deeper understanding of the process/device physics.
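For reference, the ε–δ definition alluded to above can be written out in full (standard textbook notation, added here only as a reminder):

```latex
\lim_{x \to a} f(x) = A
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 :\;
0 < |x - a| < \delta \;\Rightarrow\; |f(x) - A| < \varepsilon
```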
Philosophical viewpoint. From a philosophical point of view, the art of science is the constant search for the ultimate truth, which lies only beyond one’s imagination. The history of mankind has been a history of breaking our own physical limits in approaching (instead of reaching) the ultimate truth through scientific and technological advancements.
The point is that an unambiguous definition of predictability must be associated with the required accuracy. With this in mind, the question of whether TCAD is predictive should be rephrased as: can TCAD predict a given target parameter to within the accuracy required for the task at hand? The real questions, then, are the ones addressed below: what are the problems, where do they come from, and how can they be solved?
Problems with TCAD Calibration
What are the problems? Predictive TCAD obviously requires a very high degree of accuracy. This is determined by its own nature and objective -- emulating the very complex phenomena of a complete semiconductor processing and characterization flow with physics-based models. This often turns out to be a formidable task because, to predict a complex phenomenon, the physical models and the model parameters must be well understood and calibrated.
TCAD calibration refers to the process of selecting appropriate models and adjusting the model parameters so that the response of the physical model (assuming it is physically correct) can predict, over a wider range and within a specified tolerance, the measured response it represents. This task itself creates many problems. First, there are simply too many variables to be adjusted, especially when a “full-loop” calibration (i.e., calibrating to the device electrical characteristics from the process variables) is performed. Second, in some cases the physical models (required for the desired accuracy) are not well understood, or not even implemented. Third, the measured data one is calibrating to (e.g., SIMS or I–V) are bound to contain experimental errors, some of which cannot be controlled or estimated. Last, but not least, if the goal is to predict a new technology that is still being developed and is changing, TCAD calibration essentially becomes a dynamic process.
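To make the notion of calibration concrete, here is a minimal Python sketch of fitting the coefficients of a toy, diffusion-like model to measured data by least squares. The model form, parameter names, and numbers are hypothetical illustrations of the idea, not any vendor's actual calibration procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical "measured" junction depths (um) versus anneal time (s).
t_meas = np.array([10.0, 30.0, 60.0, 120.0, 300.0])
xj_meas = np.array([0.051, 0.088, 0.123, 0.176, 0.278])

# Toy physics-like model: xj = x0 + A*sqrt(t), with the diffusivity lumped into A.
def xj_model(t, x0, A):
    return x0 + A * np.sqrt(t)

# "Calibration": adjust (x0, A) so the model response reproduces the measurements
# within tolerance; real full-loop calibration involves far more coupled parameters.
p_opt, p_cov = curve_fit(xj_model, t_meas, xj_meas, p0=[0.0, 0.01])
residual = xj_meas - xj_model(t_meas, *p_opt)
print("calibrated parameters (x0, A):", p_opt)
print("rms error (um): %.4f" % np.sqrt(np.mean(residual**2)))
```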
Where do the problems come from? Aside from the difficulties intrinsic to TCAD calibration, such as measurement errors and the limits of our physical understanding, some problems arise from the way we develop and use the models; these, in principle, can be avoided or minimized if the question is properly addressed.
TCAD users face two conflicting problems. On one hand, they need more detailed (atomic-level) physical models to account for the increased complexity of deep-submicron device problems; on the other hand, they are already overwhelmed by an excessive selection of models and coefficients, and the burden of selecting and calibrating the models rests with the user. The latter problem is partially due to the fact that the TCAD tools developed by vendors are meant to be general tools with maximum user flexibility. In fact, in many cases the coefficients of a given implemented model are not supposed to be adjusted arbitrarily, and a good portion of the models implemented in a simulator is probably irrelevant to the problem at hand.
Most software vendors claim to be the leader in providing “solutions” to the problems in their respective product areas. In fact, what they provide are “tools,” and it is up to the users to find their own solutions using those tools. This is not to blame the CAD tool vendors, but from the user’s perspective there is an increasing need to “get the job done, and fast,” rather than “tweak coefficients all day.”
How to solve the problems? The key to the effective use of TCAD tools is to identify the specific objectives and make the best use of available resources to achieve the goals. For example, if the problem is to predict or optimize a particular process step (e.g., implantation or diffusion), a calibrated 1-D process simulator may be sufficient. If the objective is to study the performance of a sub-0.1-µm transistor, depending on the region of operation, existing device models may not be adequate and new (atomic-level) models must be developed.
For the most difficult task of full-loop calibration, the urgent need is to come up with a “standard” procedure and benchmarks. However, so far there is no commonly agreed-upon standard and, perhaps, there never will be one. To develop a general framework and a systematic approach to the use of TCAD in aiding technology development and circuit design, it is important to understand the nature of TCAD and to make the best use of it.
Vendor’s perspective. From the vendor’s perspective, TCAD tools are products that are supposed to cover a wide spectrum of applications and should include state-of-the-art physical models and numerical algorithms. User interface, user friendliness, tool integration, support, and so on are also important concerns. Commercial tool vendors and in-house tool developers also have different objectives and strategies in their software development.
User’s perspective. From the user’s perspective, everyone has his/her own objectives. Technology developers (process engineers) are more interested in getting the trends and trade-offs while reducing split-lots. Device physicists use TCAD to study physical limits and to optimize device performance. Circuit engineers expect a set of SPICE parameters that can be made available before the design is fabricated. Very often, engineers are working on a multi-target optimization problem in a multi-variable design space, which involves many trade-offs, whereas physicists are more interested in a specific physical mechanism.
Industrial vs. academic. Industry and academia also view TCAD applications from different perspectives. One problem in the TCAD arena is that new physical models from the research community often lag behind the technology in industry; another is that software vendors are not in a co-development mode with technology developers. Commercial tools with complex models and comprehensive data post-processing, although attractive to researchers probing device physics, often complicate the problems for “non-expert” process engineers.
There are many trade-offs in the development as well as the use of TCAD software. Some are general and some are specific. Different concerns arise depending on who develops/uses the tool, when it is to be used, how it is to be used, and so on.
Flexibility vs. complexity. Commercial TCAD tools are developed for the general public with maximum user flexibility, which increases the complexity of the tool (e.g., number of implemented models and coefficients). However, the particular user, for the specific problem, may use only a subset of the available models.
Interactive vs. batch mode. Commercial TCAD tools have excellent interactive user interfaces and comprehensive data post-processing. The user can create his/her own design using the built-in design of experiments (DOE) or response surface modeling (RSM) facilities, and probe detailed device structures, profiles, and distributions of physical quantities at will. However, he/she must be experienced enough to understand the models in order to use these tools well. There are also times when it is advantageous to run the computer experiment in batch mode and use “templates” to reduce the amount of repeated work.
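As an illustration of the batch-mode/template idea, the sketch below fills a hypothetical simulator input-deck template for every point of a small full-factorial design; the deck keywords and file layout are placeholders, not the syntax of any particular commercial tool:

```python
import itertools
import pathlib

# Hypothetical input-deck template for a process simulator (keywords are invented).
TEMPLATE = """\
implant boron dose={dose:.2e} energy={energy:.0f}
diffuse time={time:.0f} temp=1000
"""

# Small full-factorial DOE over three split variables.
doses = [1e13, 3e13]        # cm^-2
energies = [20, 40]         # keV
times = [30, 60]            # s

for i, (dose, energy, time) in enumerate(itertools.product(doses, energies, times)):
    deck = TEMPLATE.format(dose=dose, energy=energy, time=time)
    pathlib.Path(f"run_{i:03d}.in").write_text(deck)
    # Each deck would then be submitted to the simulator in batch, e.g. via
    # subprocess.run(["process_simulator", f"run_{i:03d}.in"])  # hypothetical command
```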
Accuracy vs. speed. The accuracy/speed trade-off is always a big concern in computer simulation. It is especially so in TCAD, since the accuracy and speed of a simulation are highly grid dependent. In TCAD simulation, one has to make sure that the error associated with the grid is smaller than the required accuracy for the target variable being simulated while, at the same time, maintaining a reasonable simulation time.
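One practical way to honor this caveat is a simple grid-refinement study: keep refining until the target variable changes by less than the required accuracy, then stop. Below is a minimal sketch with a stand-in function in place of a real TCAD run:

```python
def simulate(n_grid_points):
    """Stand-in for a TCAD run; hypothetically, the grid error decays as 1/n^2."""
    return 0.45 + 0.8 / n_grid_points**2   # 'true' target value is 0.45 V

def refine_until_converged(tol=1e-3, n0=20, max_refinements=10):
    n, prev = n0, simulate(n0)
    for _ in range(max_refinements):
        n *= 2                       # refine the grid
        cur = simulate(n)
        if abs(cur - prev) < tol:    # grid error below the required accuracy
            return cur, n
        prev = cur
    raise RuntimeError("target did not converge within the allowed refinements")

value, n_used = refine_until_converged()
print(f"target ~= {value:.4f} V using {n_used} grid points")
```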
In order to understand the nature of TCAD and make use of its features, it is helpful to make some comparisons with electronic CAD (ECAD).
Skeptical or optimistic. There has always been skepticism about TCAD, even among experts; yet very few people question the use of ECAD tools in IC design. In fact, it is hard to imagine designing ICs without ECAD tools. This is largely due to different expectations: one never expects a “very accurate” transistor timing delay from a logic-level simulator, even though the device being simulated (and to be fabricated) is the same one as in TCAD.
Statistical or deterministic. The numerical results from TCAD always have statistical fluctuations, while ECAD (such as SPICE) is continuous and deterministic in nature. This is because TCAD is based on the numerical solution of partial differential equations (PDEs), which imitates more closely the behavior of the actual device, as opposed to ECAD, which is based on macroscopic, analytic, logic, or closed-form equations. This feature of TCAD, however, is not a fatal one, since computer experiments are, in principle, exactly reproducible (even for a Monte Carlo device simulator). The important thing is to know the source of fluctuation or error (physical or numerical) and to be able to quantify it.
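The reproducibility point can be demonstrated with any seeded Monte Carlo estimate: rerunning with the same seed reproduces the “statistical” result exactly, so the fluctuation can be quantified rather than merely tolerated. A generic sketch, not tied to any device simulator:

```python
import numpy as np

def mc_estimate(seed, n=10_000):
    """Toy Monte Carlo estimate with a quantified statistical error."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(loc=1.0, scale=0.1, size=n)    # stand-in for sampled events
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n)

a = mc_estimate(seed=42)
b = mc_estimate(seed=42)
assert a == b   # same seed -> exactly reproducible "random" result
print("estimate = %.5f +/- %.5f" % a)
```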
Linkage. Although TCAD and ECAD are quite different in
terms of their features and level of abstraction, they become more closely
linked as we go into the deep-submicron regime. One area is electrical
characterization in which SPICE circuit parameters can be extracted from
TCAD-simulated I–V characteristics. The other area is technology
characterization in which interconnect delays can be modeled by TCAD and
RC-delay models can be extracted for design-rule checking (DRC) and timing
verification.
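As an example of the first linkage, the sketch below extracts two first-order MOSFET parameters (threshold voltage and transconductance factor) from a linear-region I–V sweep by a straight-line fit. The square-law model and the numbers are illustrative assumptions, not the extraction flow of any particular SPICE model:

```python
import numpy as np

# Hypothetical TCAD-simulated linear-region I-V data (Vds = 50 mV, fixed).
vgs = np.array([0.8, 1.0, 1.2, 1.4, 1.6, 1.8])                # V
ids = np.array([4.1, 12.3, 20.2, 28.5, 36.4, 44.6]) * 1e-6    # A

# First-order linear-region model: Ids ~ k * (Vgs - Vth) * Vds, i.e. linear in Vgs.
slope, intercept = np.polyfit(vgs, ids, 1)
vds = 0.05
k = slope / vds              # transconductance factor (A/V^2)
vth = -intercept / slope     # threshold voltage (V)
print(f"extracted: Vth ~= {vth:.2f} V, k ~= {k * 1e6:.0f} uA/V^2")
```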
There are many “right” questions to ask about predictive TCAD. Some large projects, such as Computational Prototyping for 21st Century Semiconductor Structures (hierarchical simulation from atoms to interconnect) at Stanford University and the Michigan Synthesis Tools for ICs (process compilation and device synthesis) at the University of Michigan, are each addressing important, but different, problems in TCAD.
If we ask the question, “how do we provide TCAD solutions for process/device engineers in aiding technology development?” we face all the problems discussed in the previous section, such as calibration and the various perspectives and trade-offs.
In this section, we propose one approach to the above question -- multi-level
TCAD synthesis -- for getting “approximate” solutions with “satisfactory”
accuracy.
The motivation behind the idea of multi-level TCAD synthesis is based on the following general belief about T-C-A-D:
Looking at what has happened in the ECAD and IC design community may help in setting directions for using TCAD to aid technology development.
ASIC analogy. In the IC design community, the semi-custom design approach (as opposed to full-custom design) for application-specific integrated circuits (ASICs) has become popular; it uses gate arrays, standard cells, or programmable logic devices as building blocks that have been thoroughly studied or even partially implemented in silicon. In analogy, we can develop “application-specific TCAD” (or AS-TCAD) for specific device structures and processes (e.g., CMOS, SOI, DRAM, LDD) based on the generic TCAD tools. This will help users trade off generalization/specialization and flexibility/complexity concerns, and arrive at solutions faster.
Multi-level simulation analogy. In the circuit simulation community, especially for mixed-signal simulation, a hierarchy of simulation tools has been developed (e.g., gate-level, switch-level, timing, electrical, behavioral, table lookup) to trade off speed and accuracy. Although TCAD is mainly a bottom-up analysis tool, it is also advantageous to adopt the notion of “multi-level TCAD” (or ML-TCAD) and come up with compact models (CM) for the devices and processes based on the TCAD results, similar to the idea of table lookup (e.g., TimeMill from EPIC). The compact models can be formulated through empirical nonlinear regression of the TCAD data and can be considered higher-level models that incorporate all the nonuniformities and nonlinearities of the original TCAD data. This will help users quickly examine trends in device/process parameters, generate response surfaces and process boxes, and perform multi-target optimization.
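As a sketch of the response-surface flavor of such a compact model, the snippet below fits a quadratic polynomial in two process variables to a handful of hypothetical TCAD-generated target values; a real compact model would use physically motivated terms and far more data points:

```python
import numpy as np

# Hypothetical TCAD results: Vth (V) versus gate length (um) and channel dose (1e13 cm^-2).
L    = np.array([0.18, 0.18, 0.25, 0.25, 0.35, 0.35])
dose = np.array([1.0,  2.0,  1.0,  2.0,  1.0,  2.0])
vth  = np.array([0.38, 0.47, 0.42, 0.52, 0.45, 0.56])

# Quadratic response surface: vth ~ c0 + c1*L + c2*dose + c3*L^2 + c4*dose^2 + c5*L*dose.
X = np.column_stack([np.ones_like(L), L, dose, L**2, dose**2, L * dose])
coef, *_ = np.linalg.lstsq(X, vth, rcond=None)

def vth_compact(Lq, dq):
    """Evaluate the fitted compact model at a query point inside the fitted range."""
    x = np.array([1.0, Lq, dq, Lq**2, dq**2, Lq * dq])
    return float(x @ coef)

print("Vth(0.22 um, dose 1.5) ~= %.3f V" % vth_compact(0.22, 1.5))
```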
High-level synthesis analogy. In the systems design community, it is simply too complex to design a whole system from the bottom up. With the accumulation of design expertise and advances in knowledge-based systems, high-level synthesis has become a powerful solution to systems design. In general, synthesis means an ensemble of answers waiting for the right question. In principle, a TCAD-Synthesis approach can be applied to the prediction and characterization of semiconductor technologies and devices as long as a “large” process/device-variable space can be covered.
Objectives, Scope, and Advantages
Objectives. The primary objective of the proposed application-specific, multi-level TCAD synthesis approach is to construct a general “framework” for the design and optimization of ultra-small transistors for a given technology. This framework will provide a link between the device performance parameters (“targets”) and the process variations (“variables”), and hence, a guide for process/device engineers in technology development and transistor optimization. Other major objectives are listed below:
Advantages. There are several advantages of the proposed synthesis approach over the full-loop calibration approach to technology development:
The first question that comes to mind is whether this approach is feasible. Basically, this is the question of emulating a real, complex phenomenon (deep-submicron transistor electrical characteristics arising from process and structural variations) using an ideal, yet also complex, model (the process and device TCAD model).
Hypothesis. If the “ideal” device covers a large enough design-variable space, the behavior of the “real” device should fall somewhere within it, as long as the process/device physics is reasonably well understood.
Challenge. It seems that there are simply too many (maybe “infinite”) variables for the design space, and it is difficult to determine how “large” a database is “enough” for accurately predicting the device performance.
Rebuttal. It seems that we are limited by computing resources in constructing such an “infinite” database, but if you ask the question “what could you do if computer speed and memory capacity were infinite?” you will find that the real question is how we approach the problem and whether the effort is worth the investment.
Assumptions. There are two major assumptions behind the
whole idea of TCAD synthesis. First, the physical (process and device)
models implemented in the TCAD tools are valid in the region of the database
to be constructed. Secondly, the device performance (target parameters)
can be predicted by interpolation of the TCAD data either through table
lookup or compact model. These two prerequisites call for proper
use and careful treatment of the available models in the database construction
as well as appropriate definition and formulation in the compact modeling.
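The second assumption (prediction by interpolation of the database) can be pictured with a plain table-lookup interpolator over a gridded variable space; the grid and values below are hypothetical placeholders for actual TCAD results:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical database: Idsat (uA/um) tabulated on a (gate length, oxide thickness) grid.
L_um   = np.array([0.15, 0.18, 0.25, 0.35])
tox_nm = np.array([3.0, 4.0, 5.0])
idsat  = np.array([[620.0, 560.0, 500.0],
                   [585.0, 530.0, 470.0],
                   [540.0, 490.0, 430.0],
                   [495.0, 450.0, 395.0]])

lookup = RegularGridInterpolator((L_um, tox_nm), idsat)

# Interpolation inside the database range is the supported use; extrapolation is not.
query = np.array([[0.20, 4.5]])
print("Idsat(0.20 um, 4.5 nm) ~= %.0f uA/um" % lookup(query)[0])
```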
Problem Specification and Approach
The following is a brief discussion of the main problems, considerations, and approaches in the proposed TCAD synthesis.
Design specification: targets and variables. The first step is to specify the target parameters and the design variables. Since our proposed TCAD-Synthesis is intended to be multi-level (ML-TCAD) and application-specific (AS-TCAD), the design specification will depend on the particular technology to be developed. For example, if we are trying to predict a 0.18-µm CMOS technology based on the 0.25-µm technology for logic applications, a list of major targets and variables could be as follows:
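As a purely hypothetical illustration (the names and ranges below are assumed, not the article's actual list), such a target/variable specification might be encoded as a simple mapping:

```python
# Hypothetical design specification for a deep-submicron logic NMOS device (illustrative only).
design_spec = {
    "targets": {              # device performance parameters to be predicted
        "Vth_lin": "V",       # linear-region threshold voltage
        "Idsat": "uA/um",     # saturation drive current
        "Ioff": "nA/um",      # off-state leakage current
    },
    "variables": {            # process/structure variables and their assumed ranges
        "L_gate_um": (0.13, 0.25),
        "t_ox_nm": (2.5, 4.5),
        "vt_dose_cm2": (5e12, 3e13),
    },
}
```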
Compact modeling. Compact models are the essential part of the proposed multi-level TCAD synthesis approach, although table lookup can always be used once the database is constructed. The philosophy behind compact modeling is to come up with closed-form equations that represent the TCAD data, with all the nonlinearities included, and yet are efficient to use. The following is a brief guideline for formulating compact models:
Database range. There are some anticipated problems with this synthesis approach. The range of the variables for the database may be one problem. Since in general extrapolation will not be reliable, the variable range should be large enough, but yet reasonable, for interpolation. However, in some cases the conditions may be non-physical and the TCAD solutions may not converge. For example, at the extreme conditions of short channel length and low channel doping, the device may be punched through so that the target electrical parameters will not be available.
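A small bookkeeping sketch for this issue: flag the design points whose simulations fail (e.g., punch-through or non-convergence) and keep only the valid region of the database for interpolation. The failure criterion and values are invented purely for illustration:

```python
import numpy as np

# Hypothetical corner of a database: channel length (um) x channel dose (1e13 cm^-2).
L    = np.array([0.10, 0.13, 0.18, 0.25])
dose = np.array([0.5, 1.0, 2.0])

vth = np.full((L.size, dose.size), np.nan)
for i, Li in enumerate(L):
    for j, dj in enumerate(dose):
        # Invented stand-in for a non-converged run: short L plus low dose punches through.
        punched_through = (Li < 0.12) and (dj < 1.0)
        if not punched_through:
            vth[i, j] = 0.20 + 0.15 * dj - 0.30 * (0.25 - Li)   # toy target value (V)

valid = ~np.isnan(vth)
print(f"{valid.sum()} of {vth.size} database points are usable for interpolation")
```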
Inverse modeling: multiple solution. Another major difficulty
is the “multiple solution” problem. One example is the inverse modeling
of reverse short-channel effect (RSCE) in which the pileup of the 2D channel
doping profile is modeled to account for the observed increase in the threshold
voltage at shorter channel lengths. In principle, there are many
(infinite) combinations of the 2D profiles that may give rise to the desired
RSCE. Another example is the compact model for the nonuniform doping
profile (used in threshold voltage formulation). If a nonuniform
doping profile is characterized (defined) by some parameters (say, surface
concentration, peak concentration, and peak location) to be used in the compact
model, there will be multiple profiles that satisfy the same set of parameters.
Although commercial TCAD tools are developed with comprehensive models and interactive user interfaces, they remain a novelty for the general, non-expert process/device engineers who need them most. Borrowing from the concept of IC design houses and wafer fabs that provide “foundry services” to customers, the proposed multi-level TCAD synthesis approach can be a first step towards the establishment of a Virtual Fab Foundry that will provide TCAD modeling services to technology developers and device/circuit engineers.
The proposed TCAD synthesis approach has several unique features for commercial exploitation.
AS-TCAD tool development. From the tool development point of view, what process engineers need are application-specific TCAD tools that can give them solutions and guidance. Currently, TCAD environments (such as the TMA WorkBench from Technology Modeling Associates and ATHENA/ATLAS from Silvaco) are developed by commercial vendors with highly user-friendly, interactive interfaces. However, it is still up to the users to create their own experiments, which requires expertise in using TCAD tools. The virtual fab foundry can be in a position to develop AS-TCAD tools that generate specific target parameters for the user while hiding all the physical models and parameter extraction behind the scenes. The background simulators can even come from any vendor; the user does not have to care which. Graphical plotting can also be “standardized” for the specific types of plots, minimizing the “flexibility” (or repeated work such as labeling) of generic tools so that the user can concentrate on getting the solutions.
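A sketch of what such an AS-TCAD wrapper could look like from the user's side: one call, application-specific inputs in, target parameters out, with simulator choice, models, and extraction hidden behind the interface. Everything here (names, fields, backend string) is a hypothetical interface, not an existing product:

```python
from dataclasses import dataclass

@dataclass
class NmosSplit:
    """Application-specific inputs a process engineer actually varies (hypothetical)."""
    gate_length_um: float
    tox_nm: float
    vt_dose_cm2: float

@dataclass
class NmosTargets:
    """Target parameters handed back to the user (hypothetical)."""
    vth_v: float
    idsat_ua_um: float
    ioff_na_um: float

def simulate_nmos(split: NmosSplit, backend: str = "any-vendor") -> NmosTargets:
    """AS-TCAD entry point: model selection, gridding, and extraction are hidden inside."""
    # Placeholder for: generate input decks, run process + device simulations, extract targets.
    raise NotImplementedError(f"would dispatch to the '{backend}' simulator chain")

# Intended usage (once a backend is wired in):
# targets = simulate_nmos(NmosSplit(gate_length_um=0.18, tox_nm=3.5, vt_dose_cm2=1.5e13))
```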
Professional service. From the professional service viewpoint,
the complete TCAD database developed by the VFF, which builds in the dedicated
modeling expertise and the state-of-the-art models and parameters, can
be a valuable resource for technology developers. Even for a non-calibrated TCAD database, the user can get a rough idea of what he/she would obtain by using the same simulator, same models, same grids, and same processing and boundary conditions. With the efforts in compact modeling, the compact models can be very useful and efficient design aids for the user. All these developments and services can, in principle, be applied to the notion of internet-based TCAD.
Successful implementation of the virtual fab foundry based on the TCAD synthesis approach would have great potential impact on the chip design and fabrication industry. Combining the expertise of TCAD tool vendors, process and device modeling efforts, and a TCAD flow calibrated to a specific wafer fab, the virtual fab foundry will provide a guide
to technology developers, a bridge between the wafer fab and the
design house, and a dynamic solution to the future technologies
and design methodologies.
What has been discussed and proposed in this article is an alternative
approach to the design and optimization of ultra-small transistors for
the new deep-submicron technology development. The central idea is
to use the multi-level synthesis approach for application-specific problems
in aiding new technology development. The approach emphasizes multi-target optimization and the speed/accuracy trade-off in providing trends with quantifiable accuracy for non-expert users. It is not claimed that the proposed approach is better than other practices (such as full-loop calibration, hierarchical modeling, or process compilation); rather, the validity and relevance of the synthesis approach will depend upon simulator calibration and new physical models. The proposed approach, however, is generic and dynamic; it can and must be adapted to new technologies as well as to the accumulation of our knowledge.