A review of a sample of past maintainability demonstrations showed a 100% success rate for both large and small systems, primarily at the organizational and intermediate maintenance levels. Only a small percentage of the systems reviewed, however, specifically addressed testability. Those that did also had a 100% demonstration success rate, determined by calculating the percentage of faults detected and isolated. Although maintainability demonstrations are quite successful, testability-related problems, especially those associated with Built-In-Test (BIT), have continued to plague the maintainability performance of many complex systems. Metrics such as the cannot duplicate (CND) rate, retest OK (RTOK) rate, and false alarm rate have remained at unacceptable values in actual operations, resulting in too many resources being spent on the maintenance of systems and equipment.
There are several reasons why maintainability demonstrations are usually successful while testability performance in the field continues to fall short of both expectations and demonstrated values. Chief among them, current demonstration techniques are inadequate for demonstrating testability metrics such as the fraction of faults detected and fault isolation resolution. Most maintainability demonstrations are performed in laboratory environments using the fault insertion methods previously described, and the faults selected for insertion represent a small percentage of those likely to occur during fielded operation. The number of inserted faults is limited because faults that would damage equipment, or that cannot be easily inserted, are not selected for demonstration. Only hard faults, such as open leads or shorted components, that are relatively easy to detect, isolate, and repair are selected. Also, many of the faults that result in CNDs or RTOKs are not easily simulated in a demonstration test. Finally, it is not possible to simulate the failures or intermittent conditions that produce false alarms, eliminating the ability to demonstrate any specified false alarm rate for BIT.
Given the preceding facts, effective demonstration of testability is probably not possible in the near future. It should be considered part of future development programs only if significant progress is made in developing methods that can demonstrate meaningful testability metrics. This does not mean that maintainability demonstration, as described in this appendix, is not useful. The need to demonstrate ease of maintenance and the adequacy of logistical support elements such as technical manuals, support equipment, sparing levels, and training remains extremely important to maintainability. Furthermore, if the diagnostic system designed into a system cannot detect and isolate even the hard failures induced as part of a maintainability demonstration, that is an indication that a redesign is warranted.
If it is not possible to adequately demonstrate the testability characteristics of the design in terms of the aforementioned metrics, the question remains as to how the customer can be given some assurance that the diagnostic system will allow the system to meet its overall performance requirements. The key is to do a better job, early in development, of determining exactly what the system's diagnostic needs are, and then to develop a process by which higher-level requirements are properly allocated to subsystems. Further, as part of the systems engineering approach to design, wherein an integrated product development (IPD) team is assembled to manage the program and make decisions regarding requirements, allocations, and so forth, a single individual must be given overall authority for testability. This person must also be given equal status in the decision-making process, so that testability needs and requirements do not take a back seat to other performance needs. In this manner, any design decision must consider the impact on testability before any approach is finalized.
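One common way to check an allocation of a higher-level diagnostic requirement is to weight each subsystem's allocated value by its predicted failure rate, so that the system-level fraction of faults detected (FFD) is the failure-rate-weighted average of the subsystem values. The sketch below assumes this weighting scheme; the subsystem names, failure rates, and allocated values are invented for illustration:

```python
def system_ffd(subsystems):
    """Failure-rate-weighted system fraction of faults detected.

    `subsystems` is a list of (failure_rate_per_hour, allocated_ffd) tuples.
    Each subsystem contributes in proportion to how often it fails.
    """
    total_rate = sum(rate for rate, _ in subsystems)
    return sum(rate * ffd for rate, ffd in subsystems) / total_rate

# Invented candidate allocation for three subsystems.
allocation = [
    (200e-6, 0.98),  # e.g. processor subsystem
    (150e-6, 0.95),  # e.g. power supply
    (50e-6,  0.80),  # e.g. chassis and interconnect wiring
]
requirement = 0.90  # assumed system-level FFD requirement
achieved = system_ffd(allocation)
print(f"{achieved:.3f}", achieved >= requirement)  # 0.946 True
```

A check like this lets the IPD team see early whether a proposed set of subsystem allocations rolls up to the system requirement, and which subsystems dominate the result, before any subsystem design is finalized.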