Realizing the Potential of C4I
As argued above, an essential underpinning of C4I interoperability is architecture and requirements specification. Ensuring that the architecture and requirements are in fact successfully implemented, and that the required level of interoperability is achieved (which is not guaranteed by conformance to specifications), requires comprehensive testing and evaluation. Testing is critical to achieving interoperability and has an especially large payoff if conducted concurrently with development. Many interoperability problems are subtle, manifesting themselves only under certain combinations of circumstances; they are therefore hard to uncover and demand a great deal of empirical work and testing to resolve.
Testing compares actual performance with requirements.
It can take place in a laboratory, a field location, or at someone's desk with
early system designs. Typically, systems are tested at different stages in
their life cycle: during development, preproduction, and in the field (Box 2.7 describes DOD's efforts in these areas):
- Developmental testing assesses progress in meeting system-level requirements ranging from functionality to performance (including software stability). To ensure correct intent, a system's "paper" requirements may be tested against user-stated needs. Systems may be tested against requirements to ensure correct architecture and design. Subsystems may be tested against designs to ensure correct development.
- Preproduction testing is undertaken when a system has completed the development process but before it has been accepted for production.
- Conformance testing focuses on the
stand-alone functionality and performance of a particular system. Through
a paper or laboratory test, it validates the system in terms of stated
requirements or specifications. The result of conformance testing
typically is formal certification of compliance with the relevant
standards. Today, commercial suppliers are commonly regarded as having the
primary responsibility for ensuring conformance to customers'
requirements, transforming conformance testing from an adversarial test
conducted by the purchaser into a more cooperative process.
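To make the distinction concrete, a conformance test exercises one system in isolation against its stated specification. The sketch below is purely illustrative and assumes a notional position-report format; it is not any real C4I standard.

```python
# Hypothetical conformance check: a single system's output message is
# validated against a stated specification, with no second system involved.
# The field names and value ranges are invented for this sketch.

SPEC_REQUIRED_FIELDS = {"unit_id", "latitude", "longitude", "timestamp"}

def conforms(message: dict) -> bool:
    """Return True if a position report meets the (notional) spec."""
    if not SPEC_REQUIRED_FIELDS <= message.keys():
        return False          # a required field is missing
    if not -90.0 <= message["latitude"] <= 90.0:
        return False          # latitude out of the specified range
    if not -180.0 <= message["longitude"] <= 180.0:
        return False          # longitude out of the specified range
    return True

report = {"unit_id": "A-12", "latitude": 38.9, "longitude": -77.0,
          "timestamp": "2024-01-01T00:00:00Z"}
assert conforms(report)                   # meets the notional spec
assert not conforms({"unit_id": "A-12"})  # missing required fields fail
```

Passing such a check certifies compliance with the written specification, but, as the text notes, it says nothing about how the system behaves when connected to another supplier's implementation.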
- System-to-system testing determines how well
a system interoperates with other systems. It is typically performed in a
laboratory, where two or more systems can be interconnected. Involving
multiple systems and suppliers, it is usually more complex and expensive
than conformance testing. Its scope can range from "lower-layer" (e.g.,
communications) to "higher-layer" (e.g., applications/data) interoperability.
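A system-to-system test, by contrast, connects independently built components and checks the combination. The following sketch, with invented component names and an invented wire format, shows the basic shape: one system's output is fed directly into another's input and the round trip is verified.

```python
# Illustrative system-to-system check: a notional sender ("system A") and an
# independently written receiver ("system B") are interconnected, and the
# test verifies that data survives the round trip. Format is invented.

def system_a_encode(lat: float, lon: float) -> str:
    """System A emits a position as 'lat,lon' in decimal degrees."""
    return f"{lat:.4f},{lon:.4f}"

def system_b_decode(msg: str) -> tuple:
    """System B parses the same wire format back into numbers."""
    lat_s, lon_s = msg.split(",")
    return float(lat_s), float(lon_s)

# The interoperability test: connect A's output to B's input.
lat, lon = system_b_decode(system_a_encode(38.8951, -77.0364))
assert abs(lat - 38.8951) < 1e-3 and abs(lon - -77.0364) < 1e-3
```

Each component here could pass its own conformance test and still fail this check if, say, the two suppliers disagreed about coordinate order or units; that class of mismatch is exactly what system-to-system testing exists to catch.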
- Field testing assesses the extent to which a system satisfies users' operational needs in a "real-world" setting, which differs from the controlled environment of developmental and preproduction testing: system configurations in the field (e.g., software releases, intermediate communications) are quite likely to differ, in detail, from the ideal configuration envisioned in the system design; the personnel operating the systems are typical field personnel rather than technically trained engineers; and nuances of system usage--often not apparent until a system is fielded--will arise, especially under non-ideal scenarios. Field testing is also essential because end-to-end interoperability involves critical non-technical dimensions such as people, procedures, and training. Additional complications that require field testing to resolve may arise because corporate or organizational information systems are typically systems of independently developed systems (or components) in which unsynchronized component insertions can alter the interoperability properties of the overall system. Field testing involves functional testing and follow-on testing:
- Functional testing, the initial test in the field, cannot occur until people in the field have been trained in both the system and the business processes that the system will support. Functional testing involves configuring systems to meet the unique demands of particular customers, integrating products with the embedded base of systems (including earlier generations of the same product), and evaluating the resulting system of systems from the end-to-end functional perspective of the user.
- Follow-on testing assesses a system's
performance after it has been fielded, reverifying interoperability
periodically or as changes occur and providing a mechanism for tracking
progress in addressing known problems. Some requirements cannot be
adequately tested during the functional testing phase, and are best
assessed during ongoing operations. Follow-on testing draws on information
from multiple sources, including problem reports and lessons learned in
joint operations and exercises, vendor information about features and bugs
in new releases, and periodic monitoring of system performance and
failures during field use.
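One of the information sources follow-on testing draws on can be sketched in code. The records below are invented for illustration; the point is simply that aggregating field problem reports by software release makes regressions after a component insertion visible and provides the "mechanism for tracking progress" described above.

```python
# Hedged sketch of one follow-on-testing input: aggregating problem reports
# from field use so that a new release's effect on reliability is visible.
# The report records and release numbers are invented for illustration.

from collections import Counter

problem_reports = [
    {"release": "3.1", "symptom": "track drop"},
    {"release": "3.2", "symptom": "track drop"},
    {"release": "3.2", "symptom": "stale position"},
]

failures_by_release = Counter(r["release"] for r in problem_reports)
assert failures_by_release["3.2"] == 2  # more field problems after 3.2
assert failures_by_release["3.1"] == 1
```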
In an ideal world, with an absolutely complete set of interface requirements and complete exercise of each system, conformance testing would catch all possible flaws. However, requirements are seldom complete enough to allow thorough testing, and complete testing takes too long. Often, requirements are strong in specifying behavior under ideal (sunny-day) conditions and weak about what should happen when it rains--for example, what the response of a system to a failure somewhere should be. System-to-system and field testing compensate by testing actual systems under a variety of conditions that go beyond those typically stated in requirements.
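The sunny-day/rainy-day gap can be illustrated with a small sketch. The parser and wire format below are invented; a specification might fully describe well-formed input (the sunny day) while saying nothing about garbled input (the rain), yet fielded systems will encounter the latter.

```python
# Sketch of a "rainy-day" case that requirements often omit: behavior on
# malformed input. The format here is an invented 'lat,lon' string; the
# design choice shown is to degrade gracefully rather than crash.

def decode_position(msg: str):
    """Return (lat, lon) from a 'lat,lon' string, or None if malformed."""
    try:
        lat_s, lon_s = msg.split(",")
        return float(lat_s), float(lon_s)
    except ValueError:
        return None  # explicit, testable failure behavior

assert decode_position("38.8951,-77.0364") == (38.8951, -77.0364)  # sunny day
assert decode_position("garbled!!") is None                        # rainy day
```

A conformance suite built only from the stated requirements would exercise the first assertion; it is system-to-system and field testing that tend to surface the need for the second.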
Testing should also be seen as an integral part of requirements definition and system development. Particularly in functional and follow-on testing, the value comes as much from having a process for learning about new requirements and feeding those requirements back from the operators to the developers as from identifying and correcting mistakes. As spiral development (see the discussion of evolutionary acquisition in section 4.3.2) becomes the normal mode for acquiring C4I systems, such mechanisms for rapid feedback become especially important.
Thus testing must be essentially continuous, and "stability" is a state that is never reached in any meaningful sense. Only when information is fed back to system developers and maintainers can processes and systems be modified to help ensure continuing high performance as the operating environment changes. Without ongoing feedback, initial implementations of processes and systems may interoperate satisfactorily at first, but not later.