Background
In 1994, the Office
of the Director, Operational Test and Evaluation (DOT&E), asked the Test
and Evaluation Department (T&E Department) to research the relationship, if any, between
the number of test articles used in the Engineering and Manufacturing
Development (EMD) Phase of a program's acquisition and the success of that program in
EMD. For major defense acquisition programs that include a Low Rate
Initial Production (LRIP) effort within the EMD phase, the DOT&E is responsible
for approving the number of test articles required for the Initial
Operational Test and Evaluation (IOT&E)1 accomplished near the end of EMD.
Since 1991, congressional law has required the DOT&E to specify, at Milestone II,
the number of test articles required for this test.2 The determination
of test quantities is a difficult trade-off among several factors; the office
of the DOT&E wanted to know whether data were available that would help in making
this important decision. Indications were that factual references or metrics
relating to the subject were nonexistent.3
This research was completed
and the results published in a Defense Systems Management College Technical
Report dated May 1995,4 hereafter called the original research report.
Interest in the original research methodology and conclusions has led to
the follow-on research effort described in this technical report.
Overview
This research extends the prior research
in two important ways. In order to answer the specific T&E question, it
was necessary to devise a research methodology and accumulate considerable
EMD program management data. This larger body of general management data,
including comparative Operational Test Activity (OTA)5/DOT&E operational test
evaluations, has become the focus of the current research. The initial research
sought to identify the relationship of test articles to EMD program success in
terms of cost and schedule overruns only. During dissemination of the results
of the original research, the question "What about performance?" was asked. We
had originally established criteria for cost, schedule, and performance success
but collected no performance data. The current research adds performance data
to the database; this is the second significant addition to the research
methodology.
In most other
respects this research uses the original research methodology. The original
database of 24 programs that completed the EMD phase has been extended to 53
programs, 41 of which have nearly complete data within the database.
Programs continue to be added as they near completion of their EMD phase.
All data used
within this report are unclassified. The number of data points varies among
parameters because some data were unavailable.
It is important to recognize that our research approach
measures EMD program management success in terms of cost, schedule, and
performance metrics rather than eventual weapon system success. No attempt was
made to survey the effectiveness of the systems in their operational
roles in the field.
The
subsequent chapters of this report are organized as follows. Portions of the
original research report that remain relevant are repeated. Chapter 2
discusses recent literature search efforts and repeats some findings from the
original report. Chapter 3
essentially repeats the description of the original research methodology and
provides an overview of the spreadsheet contained in Appendix C. Chapter 4
describes the limitations and assumptions implicit in this research. Chapter 5
analyzes the results of this effort. Chapter 6
discusses the opinions we formed from the facts contained in the data bank.
Figure 1 shows the
relationship of the EMD phase, between Milestone II and Milestone III, to the
total system acquisition and to the test and evaluation activities
within it.