Assessing testability via dependency analysis has gained popularity recently, and it is therefore prudent to provide some additional information on this technique. Dependency analysis starts with the creation of a dependency model of the item to be analyzed. The model is designed to capture the relationship between tests or test sites within a system and those components and component failure modes that can affect each test. As an example, consider the simple functional block diagram shown in Figure 15.
Figure 15 - Simple System Showing Test
The dependency model for the system, in the form of a tabular list of tests and their dependencies, is provided in Table IX.
Figure 15 has been labeled to identify each potential test site within the system, where in this example, exactly one test is being considered at each node. The dependency model shown in Table IX is a list of "first-order dependencies" of each test. For example, the first-order dependency of test T3 is C2 and T2. This indicates that T3 depends upon the health of component C2 and any inputs to C2, which in this case is T2. For this simple system, it is also obvious that T3 depends on C1 and T1, but these are considered higher-order dependencies. Each of the tools mentioned previously (i.e., STAT, STAMP, and WSTA) determines all higher-order dependencies based on a first-order dependency input model.
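The derivation of higher-order dependencies from a first-order model is essentially a transitive closure over the dependency lists. The sketch below uses a hypothetical model consistent with the Figure 15 example (the actual input formats of STAT, STAMP, and WSTA differ): T3's first-order list contains only C2 and T2, yet its full closure also picks up C1 and T1.

```python
# Hypothetical first-order dependency model for the Figure 15 example.
# Each test lists the components and upstream tests it directly depends on.
FIRST_ORDER = {
    "T1": ["C1"],        # T1 depends on component C1
    "T2": ["T1"],        # T2 depends on the signal verified by T1
    "T3": ["C2", "T2"],  # T3 depends on C2 and any inputs to C2 (here, T2)
}

def closure(test, model=FIRST_ORDER):
    """Return every component and test the given test depends on,
    found by walking first-order dependencies transitively."""
    seen = set()
    stack = [test]
    while stack:
        node = stack.pop()
        for dep in model.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(closure("T3")))  # ['C1', 'C2', 'T1', 'T2']
```

As the output shows, the higher-order dependencies C1 and T1 fall out of the walk automatically, which is exactly the computation the tools perform on the analyst's first-order input.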
Dependency modeling is attractive due to its applicability to any kind or level of system. Note in the example that neither the nature nor the level of the system is required to process the model. Consequently, this methodology is applicable to almost any type of system technology and any level (i.e., component to system).
Based on the input model, the analysis tools can determine the percentage of cases in which isolation to an ambiguity group of n or fewer components will occur. In addition, each of the tools discussed will identify which components or failures will fall into the same ambiguity group as other components or failures. Furthermore, any test feedback loops that exist, including the components contained within them, will also be identified. Note that the ambiguity group sizes and statistics are based on a binary test outcome (i.e., the test either passes or fails), and in most cases the tools assume that the test is 100% effective. This means that if the model indicates that a particular test depends on a specified set of components, the tools assume that, should the test pass, all components within the dependency set are good. Conversely, a failed test makes all of the components within the dependency set suspect. Therefore, the accuracy of the model, in terms of which components and component failure modes are actually covered by a particular test, is the responsibility of the model developer. Coverage is very much dependent upon test design and knowledge of the system's functional behavior.
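Under this binary, 100%-effective-test assumption, two components end up in the same ambiguity group exactly when every test either covers both or covers neither, so no combination of pass/fail outcomes can separate them. A minimal sketch of that grouping, using invented coverage data rather than any tool's real model format:

```python
from collections import defaultdict

# Hypothetical coverage map: the set of components each test's
# binary pass/fail outcome depends on.
COVERAGE = {
    "T1": {"C1"},
    "T3": {"C1", "C2", "C3"},
}

def ambiguity_groups(coverage):
    """Group components with identical test signatures; such
    components can never be distinguished by these tests alone."""
    signatures = defaultdict(set)
    all_components = set().union(*coverage.values())
    for comp in all_components:
        sig = frozenset(t for t, deps in coverage.items() if comp in deps)
        signatures[sig].add(comp)
    return sorted(sorted(group) for group in signatures.values())

print(ambiguity_groups(COVERAGE))  # [['C1'], ['C2', 'C3']]
```

Here C2 and C3 land in the same ambiguity group because only T3 observes either of them; adding a test point between the two components would split the group, which is precisely the kind of visibility improvement the analysis is meant to expose.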
Even before detailed knowledge of the tests to be performed is available, such as in the early stages of system development, a model can be created that assumes, for instance, a test at every node. The system design can then be evaluated as to where feedback loops reside, which components are likely to be in ambiguity, and where more visibility, in terms of additional test points, needs to be added to improve the overall testability of the design. Once the design is more developed, and knowledge of each test becomes available, the dependency model can be refined. Once the analyst is satisfied with the model results, each of the tools discussed can be used to develop optimal test strategies based on system topology and one or more weighting factors such as test cost, test time, component failure rates, time to remove an enclosure to access a test point, etc.
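The flavor of such weighted test-strategy optimization can be illustrated with a greedy half-split heuristic: at each step, choose the test whose outcome most evenly divides the remaining failure probability, penalized by test cost. This is only a sketch of the general idea, not the actual algorithm used by STAT, STAMP, or WSTA, and the failure rates and costs below are invented:

```python
def next_test(suspects, coverage, fail_rate, test_cost):
    """Pick the test whose pass/fail outcome best splits the failure
    probability of the remaining suspects, lightly penalized by test
    cost (a crude stand-in for the tools' weighting factors)."""
    total = sum(fail_rate[c] for c in suspects)
    best, best_score = None, float("inf")
    for test, covered in coverage.items():
        p_fail = sum(fail_rate[c] for c in suspects if c in covered)
        if not 0 < p_fail < total:
            continue  # this test cannot split the suspect set
        score = abs(p_fail - total / 2) + 0.01 * test_cost[test]
        if score < best_score:
            best, best_score = test, score
    return best

# Invented data: three suspect components, three candidate tests.
suspects = {"C1", "C2", "C3"}
coverage = {"T1": {"C1"}, "T2": {"C1", "C2"}, "T3": {"C1", "C2", "C3"}}
fail_rate = {"C1": 0.5, "C2": 0.3, "C3": 0.2}
test_cost = {"T1": 1.0, "T2": 1.0, "T3": 1.0}

print(next_test(suspects, coverage, fail_rate, test_cost))  # T1
```

T1 is chosen because its pass/fail outcome splits the suspect failure probability exactly in half; T3, which implicates every suspect regardless of outcome, is rejected as uninformative. A full strategy would apply this selection recursively to each outcome branch.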
One past drawback of dependency modeling has been the time required to create a model. However, translation tools exist, and continue to be developed, that can translate a design captured in a CAD format, such as the Electronic Data Interchange Format (EDIF), into a dependency model compatible with the specific dependency analysis tool being used. The analyst is still responsible for verifying the accuracy of the model, however, as in some cases not all dependencies will be correctly translated. Despite this fact, the amount of time saved by translation outweighs any additional time it may take to verify the model.