Since the available data usually only constitute a sample from
the total population, statistical methods are used to estimate the reliability
parameters of interest, e.g., MTBF, failure rate, and probability of survival.
The main advantage of statistics is that it can provide a
measure of the uncertainty involved in a numerical analysis. A secondary
advantage is that it provides methods for estimating effects that might
otherwise be lost in the random variations in the data.
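For instance, with complete (uncensored) failure-time data and an assumed exponential model, the MTBF point estimate and a rough measure of its uncertainty can be sketched as follows. The sample times and the large-sample normal-approximation interval are illustrative only; exact intervals use chi-square quantiles.

```python
import math

def mtbf_estimate(failure_times, z=1.645):
    """Point estimate of MTBF and an approximate 90% confidence interval,
    assuming exponentially distributed times to failure.
    Sketch only: a normal approximation stands in for the exact
    chi-square-based interval."""
    n = len(failure_times)
    total = sum(failure_times)
    mtbf = total / n                 # MLE of the mean under the exponential model
    se = mtbf / math.sqrt(n)         # large-sample standard error of the estimate
    return mtbf, (mtbf - z * se, mtbf + z * se)

# Hypothetical times to failure, in hours
times = [52.0, 110.0, 87.0, 203.0, 148.0, 95.0, 61.0, 174.0]
mtbf, (lo, hi) = mtbf_estimate(times)
print(f"MTBF = {mtbf:.1f} h, approx 90% CI = ({lo:.1f}, {hi:.1f}) h")
```

The interval is the statistical "measure of uncertainty" referred to above: it widens as the sample shrinks, making explicit how far the estimate may sit from the true population value.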
It is important to keep in mind that the data constitute a
sample from the total population, that random sampling peculiarities must be
smoothed out, that population density parameters must be estimated, that the
estimation errors must themselves be estimated, and - what is even more
difficult - that the very nature of the population density must be estimated.
To achieve these ends, it is necessary to learn as much as possible about the
possible population density functions, and especially about the kinds of results
to expect when samples are drawn, the data are studied, and one attempts to go
from the data backward to the population itself. It is also important to know what
types of population densities are produced from any given set of engineering
conditions. This implies the need to develop probability models, that is, to go
from a set of assumed engineering characteristics to a population density
function.
It is customary, even necessary, in statistical analysis to
develop, from physical engineering principles, the nature of the underlying
distribution. The sample of data is then compared against the assumed
distribution.
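One common way to make that comparison is a goodness-of-fit statistic. Below is a minimal sketch of the one-sample Kolmogorov-Smirnov statistic against an assumed exponential distribution, with the mean estimated from the sample itself (an assumption that, strictly speaking, alters the critical values, as in the Lilliefors correction):

```python
import math

def ks_statistic_exponential(sample):
    """Kolmogorov-Smirnov statistic: the largest gap between the
    empirical CDF of the sample and the assumed exponential CDF.
    Illustrative sketch; critical values are not computed here."""
    xs = sorted(sample)
    n = len(xs)
    mean = sum(xs) / n                        # estimated exponential mean
    d = 0.0
    for i, x in enumerate(xs, start=1):
        cdf = 1.0 - math.exp(-x / mean)       # assumed exponential CDF at x
        # Compare against the empirical CDF just before and after the jump at x
        d = max(d, abs(i / n - cdf), abs(cdf - (i - 1) / n))
    return d

# Hypothetical times to failure, in hours
d = ks_statistic_exponential([10.0, 35.0, 52.0, 80.0, 120.0, 200.0])
print(f"KS statistic = {d:.3f}")
```

A small statistic indicates the assumed distribution is consistent with the data; a large one suggests the physical model behind the assumption should be revisited.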
The usual quantity of interest in reliability is the
distribution of times to failure, described by the probability density function
or failure density function. The failure density function may be discrete, that
is, only certain (integer) values may occur, as in tests of an explosive
squib. Success or failure will occur on any trial, time not being considered.
Or it may be continuous, any value of time to failure being possible.
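The two cases can be illustrated side by side. The function names and parameter values below are only for illustration: a binomial model for the squib (success or failure per firing, time playing no role) and an exponential density for a continuously operating item.

```python
import math

def squib_pmf(k, n, p):
    """Discrete case: probability of exactly k successes in n firings
    of an explosive squib, each succeeding with probability p
    (binomial model; only integer outcomes are possible)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def exp_failure_density(t, failure_rate):
    """Continuous case: exponential failure density, where any
    time to failure t >= 0 is possible."""
    return failure_rate * math.exp(-failure_rate * t)

print(f"P(9 of 10 squibs fire | p=0.95) = {squib_pmf(9, 10, 0.95):.4f}")
print(f"f(t=100 h | rate=0.01/h)        = {exp_failure_density(100.0, 0.01):.5f}")
```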
Typically, histograms (e.g., time-to-failure plots) are
plotted, and statistical techniques are used first to test the data to determine
the applicable form of the probability distribution, and then to identify and
evaluate the relationship between the reliability parameter(s), such as failure
rate, and the critical hardware characteristics/attributes that affect
reliability (such as technology, complexity, and application factors) as defined
by the data.
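The first two steps of that workflow can be sketched as follows. The seed, true rate, and bin width are arbitrary; simulated data stand in for field data, and an exponential (constant-failure-rate) model is assumed for the single-parameter fit.

```python
import math
import random

random.seed(1)
true_rate = 0.01                              # assumed true failure rate, per hour
times = [random.expovariate(true_rate) for _ in range(200)]

# Crude histogram of times to failure: the usual first look at the data
width = 50.0                                  # bin width, hours
bins = {}
for t in times:
    bins[int(t // width)] = bins.get(int(t // width), 0) + 1

# If the histogram decays roughly geometrically from bin to bin, an
# exponential model is plausible; its one parameter is then estimated:
lam_hat = len(times) / sum(times)             # MLE of the failure rate
print(f"estimated failure rate = {lam_hat:.4f} per hour")
```

Relating the fitted parameter to hardware attributes (technology, complexity, application factors) would then be a regression problem across many such data sets, which is beyond this sketch.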