Now consider the producibility analysis of design alternative A
under the constraint of short-term capability data as related to the process,
material, and components. Essentially, whenever a producibility study is based
on this type of parameter capability data, the analytical output reflects
"instantaneous producibility" or the capability of any given parameter free of
perturbing temporal influences.

Hence, if the short-term capability data is used as the basis of
computer modeling, the outcome projects what would be expected when only
random variation is present.

For example, assume that the widget
simulation was conducted using the short-term capability database associated
with the i, j, and k performance parameters. In this context, also suppose
that Y_{RT} for i, j, and k was .99994, .99998, and .99962,
respectively. From this, it may be concluded that the joint yield would be
.99994 x .99998 x .99962 = .99954, or 99.95%. Again, the normalized
rolled-throughput yield would be found by virtue of .99954^{1/m}, where
m = 5000. In this instance, the short-term rolled-throughput yield
expectation would be Y_{RT;N} = .99954^{1/5000} = .9999999, or approximately
99.99999%. Calibrating this to area under the standard normal curve would
reveal that .9999999 cumulative area is given by 5.21σ. Hence, the
short-term or "instantaneous" producibility benchmark is 5.21σ.
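The arithmetic above can be sketched in a few lines (a minimal reproduction of the worked example, using Python's standard-library `NormalDist` for the normal-curve calibration):

```python
from statistics import NormalDist

# Short-term per-parameter yields for i, j, and k from the example.
yields = [0.99994, 0.99998, 0.99962]

# Joint (rolled-throughput) yield is the product of the individual yields.
y_rt = 1.0
for y in yields:
    y_rt *= y

# Normalize over m = 5000 opportunities: Y_RT,N = Y_RT ** (1/m).
m = 5000
y_rt_n = y_rt ** (1 / m)

# Calibrate to area under the standard normal curve: the z value whose
# cumulative area equals the normalized yield is the sigma benchmark.
z = NormalDist().inv_cdf(y_rt_n)
```

Running this yields a joint yield near .99954 and a benchmark near 5.21σ, matching the figures above.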

#### Analysis of Design Robustness

Now, if the discrepancy between the normalized short-term and
long-term producibility estimates were considered for any given design, the
degree of robustness could be readily determined. For example, study design
alternative A. Since the short-term estimate of producibility was 5.21σ (per
opportunity) and the long-term was 3.85σ (per opportunity), the discrepancy
was 5.21σ - 3.85σ = 1.36σ. That 1.36σ represents an equivalent mean shift.
Thus, the design robustness may be given as (3.85σ/5.21σ)100 = 73.9, or the
design is 74% robust to perturbing influences of a temporal nature. On the
flip side, the instantaneous design producibility was degraded by (1.00 -
.74)100 = 26%.
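The robustness calculation reduces to simple arithmetic (values taken directly from the design alternative A example):

```python
# Short- and long-term producibility benchmarks, in sigma per opportunity,
# from the design alternative A example.
z_short = 5.21
z_long = 3.85

shift = z_short - z_long                 # equivalent mean shift in sigma
robustness = (z_long / z_short) * 100    # percent robustness
degradation = 100 - robustness           # percent degradation
```

The shift works out to 1.36σ, with robustness of about 73.9% and degradation of about 26.1%.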

#### Producibility Optimization Alternatives

It is easy to see how m could be an item such as the number of
process steps, operations, or specifications. In fact, m could be just about
anything that relates to complexity. In essence, normalized rolled-throughput
yield allows the study of producibility from many different perspectives.
However, only such traditional variables as part count and process operations
should be used as the basis of normalization. Also note that m can be weighted
by relative cost; however, it must be recognized that most weighting schemes
introduce an additional layer of computational complexity. This is not to say
that cost should not be considered in a quantitative producibility analysis,
but rather that there are better ways of studying it.
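One way such a cost weighting might be carried out — purely illustrative; the specific scheme and cost figures below are assumptions, not part of the method described here — is to replace the uniform exponent 1/m with cost-proportional exponents, giving a cost-weighted geometric mean of the per-parameter yields:

```python
# Purely illustrative cost-weighted normalization: a cost-weighted
# geometric mean of per-parameter yields replaces Y_RT ** (1/m).
yields = [0.99994, 0.99998, 0.99962]   # per-parameter yields from the example
costs = [4.0, 1.0, 2.0]                # hypothetical relative costs

total_cost = sum(costs)
y_weighted = 1.0
for y, c in zip(yields, costs):
    y_weighted *= y ** (c / total_cost)   # costlier parameters weigh more
```

With equal costs this reduces to the unweighted Y_RT^{1/m} normalization, which is the extra computational layer the text cautions about.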

If any given design fails to reveal a satisfactory index of
producibility, several alternatives are available. First, it would be
necessary to study the producibility metrics hierarchically, such as by
creating a Pareto breakdown of the design so as to isolate leverage. Once
leverage is isolated, different scenarios can be run on the baseline design
through manipulation of part count and manufacturing capability. For example,
the possible product and/or process design changes that would result in a
reduced complexity factor could be investigated; m could be minimized such
that the desired rolled-throughput yield was obtained. Also, various aspects
of the design could be optimized in terms of variation robustness. Another
exploratory procedure would be to artificially adjust the process, component,
and material capabilities so as to determine the feasibility of a
manufacturing control solution. Yet another alternative would be some
combination of all three.
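The complexity-reduction alternative can be sketched by inverting the normalization relation Y_RT = Y_RT,N^m (the target yield below is an assumed figure for illustration):

```python
import math

# Invert Y_RT = Y_RT,N ** m to find the largest complexity factor m that
# still meets a desired rolled-throughput yield.
y_n = 0.9999999          # short-term normalized yield from the example
y_rt_target = 0.9990     # hypothetical desired rolled-throughput yield

m_max = math.log(y_rt_target) / math.log(y_n)
```

Any design whose opportunity count stays below m_max would meet the assumed yield target at this level of per-opportunity capability.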

Regardless of optimization strategy, it is clear that the
independent and joint effects of design complexity (product and process) and
manufacturing capability largely determine the functional producibility of a
product design. Such post-mortem optimization efforts can be avoided through a
proactive design process. For example, the presented methodology can be
worked in reverse so as to establish initial baseline design goals.
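Working the methodology in reverse might look like the following sketch (the 6σ target and complexity factor are assumed for illustration, not taken from the text):

```python
from statistics import NormalDist

# Hypothetical goals: a 6-sigma short-term producibility benchmark for a
# design with an assumed complexity factor of m = 5000.
z_goal = 6.0
m = 5000

# The normalized yield goal is the cumulative normal area at z_goal; the
# rolled-throughput yield goal follows from Y_RT = Y_RT,N ** m.
y_n_goal = NormalDist().cdf(z_goal)
y_rt_goal = y_n_goal ** m
```

The resulting Y_RT goal becomes an initial baseline design requirement before any capability data is collected.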

#### Producibility Assessment Model Synopsis

As a consequence of the arguments presented, rolled-throughput
yield and robustness can be related to all three of the circles given earlier.
The questions remain, however, of how to collect such data and to what
indices the data should be reduced.

As a final point on the topic of producibility metrics, note that any
given quantitative assessment of producibility is only as good as the data and
assumptions which underlie the final numbers. In addition, too many real-world
practical constraints can easily invalidate the abstract domain of
mathematical precision and quantitative analysis. On the other hand, too much
of the latter can lead to what is termed analysis paralysis.