Technical barriers are perhaps the easiest to envision. They include interoperability concerns and limitations in what can be effectively and affordably modeled.
Interoperability challenges arise from the desire to seamlessly share models and simulations between programs and across services to analyze tradeoffs in a system-of-systems synthetic environment. As discussed previously, the DMSO is pursuing various initiatives in the Common Technical Framework to facilitate interoperability and reuse. More technical improvements will certainly continue to emerge, such as computer-aided tools that facilitate the development and deployment of reuse standards and policies such as the High Level Architecture.
Programs can also experience interoperability challenges internally, as they attempt to move information up and down the hierarchy of models and simulations (for example, from campaign to mission to engagement to engineering, and back up again). It is difficult to make different tools communicate with each other, particularly tools developed for use in a context different from the one for which they are now intended. For example, a logistics model that helps determine the optimum stockage level for spare parts may not be compatible with another tool that predicts reliability, or, even worse, the two may give conflicting results, as the sketch below illustrates.
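To make the conflict concrete, here is a minimal, hypothetical sketch in Python; the tools, interfaces, and numbers are invented for illustration and do not come from any actual program. One tool reports a failure rate in failures per million operating hours, while the logistics model expects a mean time between failures in hours; integrating the two without an explicit conversion produces a wildly wrong stockage recommendation.

```python
# Hypothetical illustration: two tools built in different contexts
# disagree because they encode reliability in different units.

def reliability_tool_output():
    """Reports failure rate in failures per million operating hours."""
    return 50.0  # 50 failures / 1e6 hours, i.e., an MTBF of 20,000 hours

def stockage_level(mtbf_hours, fleet_hours_per_year, turnaround_hours):
    """Spares needed to cover expected failures during repair turnaround."""
    failures_per_year = fleet_hours_per_year / mtbf_hours
    return failures_per_year * (turnaround_hours / 8760.0)

# Naive integration: the logistics model treats the raw number as an MTBF.
wrong = stockage_level(reliability_tool_output(), 100_000, 2_000)

# Correct integration requires an explicit unit conversion between tools.
mtbf = 1e6 / reliability_tool_output()
right = stockage_level(mtbf, 100_000, 2_000)

print(f"naive: {wrong:.1f} spares, converted: {right:.1f} spares")
```

The deeper interoperability problem is that nothing in either tool's output declares its units or assumptions, so the error surfaces only as conflicting results.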
The MSRR will give programs increasing access to help them find what models and simulations are available for their system-of-systems analyses, which will facilitate maximum reuse. This is particularly evident with items that have been designed for reuse, such as threat and environment data. However, designing for reuse costs approximately three times as much as designing for a specific program,3 and is only feasible when all of the programs intending to use the models and simulations are known in advance. HLA begins to solve this problem by helping groups that choose to interact with each other define specific system architectures called federations. But this creates a problem when a program wants to use another program's model or simulation that was developed as part of a different federation. An extreme but probable example would be attempting to use models and simulations from several other federations to assemble a true picture of the battlefield, as the sketch below suggests.
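The following toy sketch, with entirely hypothetical object definitions, suggests why cross-federation reuse is hard: each federation agrees internally on how a battlefield object is represented, so a program composing simulations from several federations must build and validate a mapping for every representation it encounters. (Real HLA federations formalize their shared representation in a federation object model; this sketch only illustrates the mismatch problem.)

```python
# Hypothetical track objects as two different federations might define them.
fed_a_track = {"latitude_deg": 34.45, "longitude_deg": -118.20, "speed_kts": 22.6}
fed_b_track = {"lat": 34.47, "lon": -118.18, "velocity_ms": 11.7}

def to_common_track(track):
    """Normalize either representation into one common battlefield picture.

    In practice this mapping (attribute names, units, coordinate frames,
    update rates, semantics) must be negotiated and validated for every
    federation whose models a program wants to reuse.
    """
    if "speed_kts" in track:
        return {"lat": track["latitude_deg"], "lon": track["longitude_deg"],
                "speed_ms": track["speed_kts"] * 0.5144}
    return {"lat": track["lat"], "lon": track["lon"],
            "speed_ms": track["velocity_ms"]}

picture = [to_common_track(t) for t in (fed_a_track, fed_b_track)]
print(picture)
```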
In an ideal SBA process, models and simulations can be reused as easily as drawing them from a library or repository. Historically, however, this type of ad hoc reuse has yielded only about 20 percent savings in software code reuse.4 Considerable time is often spent analyzing code to determine what is applicable for reuse, and frequently the most valuable result is the insight gained into how to functionally design the new code rather than the reuse of the code itself. An SBA process may not reuse models and simulations in this traditional software sense, but the comparison points out the considerable difficulty experienced in a similar process that is much simpler than what SBA envisions.
Some of the easiest areas of reuse to envision are common areas that many programs will need, such as models of the environment in which systems will operate and of the threats those systems will face. Efforts are already ongoing in some of these areas, such as the SEDRIS effort mentioned earlier and the Virtual Proving Ground under development at the Army's Test and Evaluation Command. The DoD has a vested interest in reducing duplication of effort in these areas, and also in having a common depiction of them across all systems. Programs will be able to use the DoD's environment to evaluate many aspects of their systems. If a program has a requirement that is not satisfied by the common environment, the environment can be expanded to meet the need and then brought back into the aggregate for reuse by other programs, so the common environment grows as a result. A common environment will also provide credibility and help minimize issues that could arise if each contractor developed his own environment and threat models, especially if those systems were in a competitive selection process. The DoD should also consider providing the analytical tools that will be used to evaluate contractors' models. For example, if the DoD planned to use a constructive model such as the Army's CASTFOREM (Combined Arms and Support Task Force Evaluation Model) to evaluate a contractor's proposed model, then the program office should consider providing CASTFOREM to the contractor early in the program.
There are also technical limitations in what can be effectively and affordably modeled, including reliable cost figures, failure and reliability prediction, and human behavior. Cost models that accurately project TOC need to be available as early as possible to enable informed tradeoff analyses that all communities and decision makers can believe. Discrepancies will otherwise persist between cost estimates prepared by different agencies; for example, the General Accounting Office might predict higher costs than the program's own estimates. Such discrepancies could effectively cripple the program's ability to design a more affordable system, because the credibility of the total ownership costs being used to make design tradeoffs is called into question. Cost curves for each alternative considered allow quantification of alternatives and operational tradeoffs with cost as an independent variable. The objective is to use modeling and simulation to help determine the "bend in the knee" when looking at performance versus cost: the point where the cost of providing greater performance begins to exceed the benefit it provides, or its point of marginal return. By showing the user where the bend in the knee occurs on a given performance characteristic, we can ask whether the extra capability is worth the extra cost when each additional dollar buys incrementally less. The earlier we can show the user the cost implications of his requirements, the sooner we can agree on the tradeoffs necessary for an affordable design. This forces the user to make tough calls early in the system's development and avoids pursuing unaffordable solutions. The sketch below illustrates the idea with notional numbers.
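As a notional illustration (all numbers invented), the following Python sketch computes the marginal cost of each performance increment along a cost-performance curve and flags where that marginal cost first exceeds what the user is willing to pay per unit of performance.

```python
# Illustrative sketch: locating the "bend in the knee" on a notional
# cost-versus-performance curve. Data points are invented for the example.

# (performance units, total cost in $M) for candidate design alternatives
curve = [(60, 100), (70, 115), (80, 135), (90, 170), (95, 230), (98, 340)]

# Marginal cost: extra dollars per extra unit of performance between points.
for (p0, c0), (p1, c1) in zip(curve, curve[1:]):
    marginal = (c1 - c0) / (p1 - p0)
    print(f"{p0}->{p1}: ${marginal:.1f}M per performance unit")

# If the user values a performance unit at, say, $5M, the knee is where
# marginal cost first exceeds that value; beyond it, requirements should
# be traded rather than bought.
value_per_unit = 5.0
knee = next(p1 for (p0, c0), (p1, c1) in zip(curve, curve[1:])
            if (c1 - c0) / (p1 - p0) > value_per_unit)
print(f"knee occurs near performance level {knee}")
```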
We need to be able to apply CAIV at the earliest stages by incorporating cost considerations into computer-aided design and engineering tools; otherwise it is impossible to get cost on the table as a trade mechanism. We are still using cost models that rely on dollars per pound as the basis of estimating, rather than conducting detailed cost tradeoffs. Just as computer-numerically-controlled (CNC) machines can manufacture items directly from digital design data, cost models could use that same digital data to project manufacturing costs. Even more sophisticated cost models could address costs across the life cycle of the system: operations, upgrade, maintenance, and disposal costs, for example. The contrast between the two estimating approaches is sketched below.
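The following sketch contrasts the two approaches; the parts, rates, and costs are invented for illustration. A dollars-per-pound model sees only total weight, while a model reading per-component attributes from the digital design responds to the actual design choices being traded.

```python
# Contrast sketch (invented numbers): a weight-based parametric estimate
# versus a roll-up that reads per-component data from the digital design.

def dollars_per_pound_estimate(weight_lbs, rate=1500.0):
    """Traditional parametric estimate: cost scales only with weight."""
    return weight_lbs * rate

# A digital design model could carry cost-relevant attributes per part,
# letting the same data that drives CNC machining drive the cost model.
design = [
    {"part": "housing",  "weight_lbs": 42, "material_cost": 3800, "machine_hrs": 11},
    {"part": "actuator", "weight_lbs": 9,  "material_cost": 6200, "machine_hrs": 4},
    {"part": "harness",  "weight_lbs": 6,  "material_cost": 900,  "machine_hrs": 2},
]

def design_based_estimate(parts, shop_rate=120.0):
    """Roll up material plus machining cost directly from the design data."""
    return sum(p["material_cost"] + p["machine_hrs"] * shop_rate for p in parts)

total_weight = sum(p["weight_lbs"] for p in design)
print(dollars_per_pound_estimate(total_weight))  # coarse: weight proxy only
print(design_based_estimate(design))             # reflects actual design choices
```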
Today's state of the art in modeling and simulation does not provide the ability to predict failures fully and accurately enough to replace physical reliability testing with reliability testing through modeling and simulation. Durability algorithms are in short supply because of the complexity of durability models. The ability to predict failure varies from one product area to another; physics of failure is farthest along in electronics.5 For example, transistor manufacturers are able to predict with high confidence when transistors will fail. With chips, we know exactly how they'll function and how they'll behave, but the same does not hold for the mechanical world. We currently lack good models for predicting failure and determining the reliability of mechanical systems or their components. Predicting the failure of a new system requires an understanding of the internal physics of the materials within it. The ability to model this level of fidelity within components and within a system is in its infancy, but some fields are making strides by applying recent increases in computing power. Meteorologists are beginning to model weather systems at increasing levels of fidelity to better understand how thunderstorms, hurricanes, and tornadoes behave. Pharmaceutical companies are beginning to model new drugs at the atomic level and then immerse themselves in the model to help them create the drug. We can expect similar advances in other fields as the technology matures. We need to develop physics-of-failure and cost models to help predict reliability through simulation. If, in simulation, we can test a system to the point of failure, we can improve overall reliability by improving the area that failed. We can also use these failure data to predict the required maintenance, manpower, spares, and life cycle cost for a system given its operating hours, as the sketch below illustrates.
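As a simple illustration of that last step, the following sketch assumes a constant failure rate (an exponential model, which is a simplification) and notional numbers to turn a predicted mean time between failures into expected failures and maintenance cost over a system's service life.

```python
# Sketch under a simplifying assumption: failures follow a constant-rate
# (exponential) model, so expected failures scale with operating hours.
# All numbers are notional.

mtbf_hours = 1_200          # predicted (or simulated) mean time between failures
fleet_size = 300
hours_per_system_per_year = 400
repair_cost = 15_000        # dollars per failure event
service_life_years = 20

fleet_hours_per_year = fleet_size * hours_per_system_per_year
expected_failures_per_year = fleet_hours_per_year / mtbf_hours

print(f"expected failures/year: {expected_failures_per_year:.0f}")
print(f"maintenance cost over life: "
      f"${expected_failures_per_year * repair_cost * service_life_years:,.0f}")
```

The same expected-failure count would feed estimates of spares stockage and maintenance manpower; a better physics-of-failure model would replace the assumed MTBF with a simulated one.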
The limitations on our ability to build high-fidelity physics-of-failure models stem from such things as our limited understanding of how materials fail, variations in the manufacturing processes of materials, and variations in material properties. One of the most significant limitations is the cost and time required to capture all the stress loads that a system or component will experience during operations. Each test conducted on a system captures data for only one set of operational conditions. Program schedule and cost constraints limit the number of samples that can be taken, so assumptions are necessary to extrapolate from this limited sample to conclusions about all conditions that may be experienced. If we could accurately model the physics of the system, we could conduct many more tests in the synthetic environment than are cost-effective with today's approach; the Monte Carlo sketch below illustrates the idea.
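Where each physical test samples a single load condition, a physics model can be exercised across the entire distribution of loads. The stress-strength sketch below makes this concrete; the distributions are assumed for illustration, not measured.

```python
# Monte Carlo stress-strength sketch: estimate failure probability by
# sampling the full distributions of load and strength, rather than the
# handful of conditions a physical test program can afford.
import random

random.seed(1)

def component_strength():
    # Strength varies with manufacturing processes and material properties.
    return random.gauss(mu=100.0, sigma=8.0)

def operational_stress():
    # Stress load varies with mission profile, terrain, operator, etc.
    return random.gauss(mu=70.0, sigma=12.0)

trials = 100_000
failures = sum(operational_stress() > component_strength() for _ in range(trials))
print(f"estimated failure probability: {failures / trials:.4f}")
```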
A program manager must demonstrate that the system he is developing will meet the reliability required by the user. Durability testing is the process that provides the evidence of whether a system is reliable enough to proceed toward production. It is very demanding of program resources, requiring the PM to schedule and pay for hundreds, if not thousands, of hours of system operations and testing. Confidence in the reliability of the systems we field requires an acceptable level of understanding about why and how often a system will fail, which can be a significant cost driver for a program.6 The challenge is to reduce the number of reliability testing hours required. If we can reduce the reliability testing hours required of an end item, we can reduce both the schedule and the budget a program needs during this phase. As an example, the M1A2 tank program dedicated four physical prototype vehicles just to support durability testing.7 The sketch below shows why demonstration testing consumes so many hours.
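The scale of the problem follows from standard reliability demonstration arithmetic. Under the common exponential (constant failure rate) assumption, demonstrating a required MTBF with zero observed failures takes test time T = -MTBF x ln(1 - confidence); the sketch below uses a notional requirement.

```python
# Why durability testing consumes so many hours: under a standard
# exponential (constant failure rate) assumption, demonstrating an MTBF
# with zero observed failures requires test time
#     T = -MTBF_required * ln(1 - confidence)
import math

def zero_failure_test_hours(mtbf_required, confidence):
    """Test hours needed to demonstrate mtbf_required with zero failures."""
    return -mtbf_required * math.log(1.0 - confidence)

# Notional requirement: demonstrate a 500-hour MTBF at 80% confidence.
print(f"{zero_failure_test_hours(500, 0.80):,.0f} test hours required")
# Roughly 805 hours with zero failures; each failure observed pushes the
# total higher, which is why programs dedicate multiple prototypes to it.
```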
Today, when we develop a new system it has no known reliability.8 Only through use and time do we establish reliability, and the only true way to gain this knowledge is to run the physical system. Doing so gives us confidence in the reliability of the one system we tested, and we then infer reliability for any system built to the same specifications. The two critical factors are therefore, first, that the prototype we test conform to the system's specifications, and second, that we be able to accurately repeat builds of subsequent systems. The weakness of this approach is that the physical prototype is likely to have flaws and inconsistencies compared with the final production version, because of human error in building the prototype and configuration changes made as we gain more knowledge about the system and as technology continues to develop.
Modeling and simulation actually improves our confidence that we can accurately build and test a prototype that is truly representative of a production item. Using M&S, we can keep the design open to changes longer because we have greater confidence that the final design will meet the required need and that the production run will be successful. This means we need less time allocated for last-minute changes and have more time to learn about the system as it matures, to incorporate lessons learned back into the system, and to integrate the latest technology if desired. Changes made using M&S early in the design can be much less expensive than changes to a physical prototype, which would not be possible until much later in the program.
A final technical limitation is human behavior modeling. Conceptually, SBA could include modeling and simulating the acquisition process itself to help produce better acquisition strategies. Human behavior is difficult to model, however, because our assumptions cannot be correct for all individuals. Factors that influence how an individual will react in a given situation include experience, education, values, self-confidence, and fatigue. The variability in reactions is so great that accurately predicting behavior is not possible, and any process, such as acquisition, that depends heavily on human interaction and decisions is difficult to predict and model. Probably the best we'll be able to achieve in this area will be through wargaming techniques. Wargaming allows for shortfalls in models and data, and provides a way for program managers to simulate interactions with entities that cannot be directly controlled. The insights gained by understanding the implications of decisions should enable managers to develop more effective strategies.9