Testability analysis is important at all levels of design and can be accomplished in a variety of ways. For instance, when designing complex integrated circuits (ICs), such as Application Specific ICs, or ASICs, it is important to develop test vectors that will detect a high percentage of 'stuck-at' faults (i.e., a signal stuck at logic '1' or '0'). This is almost always determined via logic simulation, wherein a model of the design is developed in an appropriate fault simulation language. Once the model is compiled and ready to be simulated, a set of test vectors is applied to the model. The fault simulation program then produces a list of faults detected by the test vectors, as well as reporting the percentage (or fraction) of faults detected. Many such programs also identify specific faults that were not detected, so that adjustments can be made either to the design or to the test vectors themselves in order to increase the fault detection percentage.
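The stuck-at fault simulation process described above can be illustrated with a minimal sketch. The two-gate circuit, signal names, and test vectors below are invented for illustration; a real fault simulator operates on a compiled model of the design, but the principle is the same: simulate the circuit once fault-free, then once per injected fault, and count a fault as detected when any vector produces an output that differs from the fault-free response.

```python
# Minimal stuck-at fault simulation sketch (illustrative only).
# Hypothetical 2-gate circuit: c = a AND b, output d = c OR e.

SIGNALS = ["a", "b", "c", "e", "d"]

def evaluate(a, b, e, fault=None):
    """Evaluate the circuit; 'fault' is (signal_name, stuck_value) or None."""
    def value(name, v):
        # Override a signal with its stuck-at value when it is the faulted one.
        return fault[1] if fault and fault[0] == name else v
    a, b, e = value("a", a), value("b", b), value("e", e)
    c = value("c", a & b)
    d = value("d", c | e)
    return d

# One stuck-at-0 and one stuck-at-1 fault per signal.
faults = [(s, v) for s in SIGNALS for v in (0, 1)]
vectors = [(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]

detected = set()
for vec in vectors:
    good = evaluate(*vec)                  # fault-free response
    for f in faults:
        if evaluate(*vec, fault=f) != good:
            detected.add(f)                # vector distinguishes this fault

coverage = len(detected) / len(faults)
print(f"Detected {len(detected)}/{len(faults)} faults "
      f"({coverage:.0%} stuck-at coverage)")
```

Dropping a vector from the list and re-running shows how the undetected-fault report guides the addition of new vectors.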
For non-digital electronics, fault detection efficiency is typically determined with the aid of an FMEA. The FMEA identifies those faults that result in an observable failure and can therefore be detected. The test engineer must then develop a test that will verify operation and detect any malfunctions identified in the FMEA. The fault detection percentage is then determined by dividing the number of faults identified in the FMEA that are detected by the total number identified as being detectable. This process can occur at all levels of design. The fault grading methods described in the first paragraph above are primarily applied at the IC and printed circuit card levels.
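The FMEA-based bookkeeping reduces to a simple ratio. A hedged sketch, with hypothetical failure modes and a flat data layout standing in for a real FMEA worksheet:

```python
# Computing fault detection percentage from FMEA data (illustrative).
# Each entry: (failure mode, observable/detectable?, covered by a test?).
# The failure modes listed here are hypothetical examples.
fmea = [
    ("regulator output low", True,  True),
    ("op-amp input open",    True,  True),
    ("capacitor short",      True,  False),  # detectable but not yet tested
    ("resistor drift",       False, False),  # not observable: excluded
]

detectable = [f for f in fmea if f[1]]
detected = [f for f in detectable if f[2]]
pct = 100 * len(detected) / len(detectable)
print(f"Fault detection: {len(detected)}/{len(detectable)} = {pct:.0f}%")
```

Note that non-observable failure modes are excluded from the denominator; only faults the FMEA identifies as detectable count against the test design.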
In addition to determining fault detection percentage, a testability analysis should be performed to determine the fault isolation effectiveness of the designed tests. For digital electronics, many of the tools used to grade test vectors also provide statistics on fault isolation percentages. This is typically accomplished by creating a fault dictionary. During fault simulation, the response of the circuit is determined in the presence of each fault; these responses collectively form the fault dictionary. Isolation is then performed by matching the actual response obtained from the circuit or test item with one of the previously computed responses stored in the fault dictionary. Fault simulation tools can determine from the fault dictionary the percentage of faults that are isolatable to an ambiguity group of size n (n = 1, 2, 3, ...). These tools can be used to verify fault isolation goals or requirements via analysis, prior to actual testing.

For non-digital circuits, hybrid circuits, or even digital systems above the printed circuit card level, analysis of fault isolation capability can be performed with the aid of a diagnostic model and a software tool that analyzes that model. Examples are dependency modeling tools such as the Weapon System Testability Analyzer (WSTA), System Testability Analysis Tool (STAT) or the System Testability and Maintenance Program (STAMP)13. These tools, and others like them, can be used to determine the fault isolation capability of a design based on the design topology, the order of test performance, and other factors such as device reliability. Statistics such as the percentage of faults isolatable to an ambiguity group of size n are provided, as is the identification of which components or modules are in an ambiguity group for a given set of tests. Test effectiveness and model accuracy remain the responsibility of the test designer, however.
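The fault dictionary mechanics can be sketched briefly. The fault names and response signatures below are invented; in practice each signature is the circuit's simulated output sequence over the test vectors in the presence of that fault. Faults whose signatures are identical cannot be told apart and therefore fall into the same ambiguity group:

```python
# Sketch of fault-dictionary-based isolation statistics (illustrative).
# Each signature is the (hypothetical) output for three test vectors.
from collections import Counter, defaultdict

fault_dictionary = {
    "U1 stuck-at-0": (0, 1, 0),
    "U1 stuck-at-1": (1, 1, 1),
    "U2 stuck-at-0": (0, 1, 0),   # same signature as U1 stuck-at-0
    "U2 stuck-at-1": (1, 0, 1),
    "U3 stuck-at-0": (0, 0, 0),
}

# Group faults whose responses are indistinguishable.
groups = defaultdict(list)
for fault, signature in fault_dictionary.items():
    groups[signature].append(fault)

# sizes[n] = number of ambiguity groups containing exactly n faults.
sizes = Counter(len(members) for members in groups.values())
total = len(fault_dictionary)
for n in sorted(sizes):
    count = n * sizes[n]  # faults that land in groups of size n
    print(f"isolatable to ambiguity group of size {n}: "
          f"{count}/{total} ({100 * count / total:.0f}%)")

# Isolation itself is a dictionary lookup on the observed response.
observed = (0, 1, 0)
print("suspects:", groups[observed])
```

Here the two faults sharing the signature (0, 1, 0) form a size-2 ambiguity group; adding a test vector that distinguishes U1 from U2 would split it.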
13 STAT is a registered trademark of DETEX Systems, Inc. and STAMP is a registered trademark of the ARINC Research Corporation. WSTA is a tool developed by the US Navy and available to most US Government contractors and US Government employees.