10.1 Purpose. This appendix contains a default set of definitions for the evaluation criteria appearing in the associated graphs. These definitions shall be implemented by the contractor if an alternative set has not been proposed in the Software Development Plan and accepted by the contracting agency. The definitions specified in this appendix are a mandatory part of this standard.
10.2 Criteria definitions. The following definitions are listed in the order that the criteria appear in the associated graphs. For convenience, the definitions use the word "document" for the item being evaluated, even though in some instances the item being evaluated may be other than a document.
10.2.1 Internal consistency. Internal consistency as used in this standard means that: (1) no two statements in a document contradict one another, (2) a given term, acronym, or abbreviation means the same thing throughout the document, and (3) a given item or concept is referred to by the same name or description throughout the document.
10.2.2 Understandability. Understandability, as used in this standard, means that: (1) the document uses rules of capitalization, punctuation, symbols, and notation consistent with those specified in the U.S. Government Printing Office Style Manual, (2) all terms not contained in the U.S. Government Printing Office Style Manual or Merriam-Webster's New International Dictionary (latest revision) are defined, (3) standard abbreviations listed in MIL-STD-12 are used, (4) all acronyms and abbreviations not listed in MIL-STD-12 are defined, (5) all acronyms and abbreviations are preceded by the word or term spelled out in full the first time they are used in the document, unless the first use occurs in a table, figure, or equation, in which case they are explained in the text or in a footnote, and (6) all tables, figures, and illustrations are called out in the text before they appear, in the order in which they appear in the document.
10.2.3 Traceability to indicated documents. Traceability, as used in this standard, means that the document in question is in agreement with a predecessor document to which it has a hierarchical relationship. Traceability has five elements: (1) the document in question contains or implements all applicable stipulations of the predecessor document, (2) a given term, acronym, or abbreviation means the same thing in the documents, (3) a given item or concept is referred to by the same name or description in the documents, (4) all material in the successor document has its basis in the predecessor document, that is, no untraceable material has been introduced, and (5) the two documents do not contradict one another.
10.2.4 Consistency with indicated documents. Consistency between documents, as used in this standard, means that two or more documents that are not hierarchically related are free from contradictions with one another. Elements of consistency are: (1) no two statements contradict one another, (2) a given term, acronym, or abbreviation means the same thing in the documents, and (3) a given item or concept is referred to by the same name or description in the documents.
10.2.5 Appropriate analysis, design, and coding techniques used. The contract may include provisions regarding the requirements analysis, design, and coding techniques to be used. The contractor's Software Development Plan (SDP) describes the contractor's proposed implementation of these techniques. This criterion consists of compliance with the techniques specified in the contract and SDP.
10.2.6 Appropriate allocation of sizing and timing resources. This criterion, as used in this standard, means that: (1) the amount of memory or time allocated to a given element does not exceed documented constraints applicable to that element, and (2) the sum of the allocated amounts for all subordinate elements is within the overall allocation for an item.
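The two elements of this criterion amount to a simple budget roll-up, sketched below with hypothetical memory allocations in kilobytes (the element names and budget figures are illustrative, not taken from this standard):

```python
# Hypothetical sizing check: (1) no element's allocation exceeds its
# documented constraint, and (2) the sum of the subordinate allocations
# stays within the overall allocation for the parent item.

def allocation_ok(parent_budget, subordinate_budgets):
    """Return True if the subordinate allocations fit within the parent's budget."""
    return sum(subordinate_budgets) <= parent_budget

# Example: an item allocated 512 KB with three subordinate elements.
print(allocation_ok(512, [200, 180, 100]))  # True: 480 KB is within 512 KB
print(allocation_ok(512, [300, 180, 100]))  # False: 580 KB exceeds 512 KB
```

The same check applies unchanged to timing budgets, with milliseconds in place of kilobytes.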
10.2.7 Adequate test coverage of requirements. This criterion, as used in this standard, means that: (1) every specified requirement is addressed by at least one test, (2) test cases have been selected for both "average" situation and "boundary" situations, such as minimum and maximum values, (3) "stress" cases have been selected, such as out-of-bounds values, and (4) test cases that exercise combinations of different functions are included.
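Element (1) of this criterion, that every specified requirement is addressed by at least one test, can be sketched as a set comparison; the requirement and test-case identifiers below are illustrative assumptions:

```python
# Illustrative requirements-to-test coverage check: flag any specified
# requirement that is not addressed by at least one test case.

requirements = {"SRS-001", "SRS-002", "SRS-003"}
test_coverage = {                       # test case -> requirements it addresses
    "TC-01": {"SRS-001"},
    "TC-02": {"SRS-001", "SRS-002"},
}

covered = set().union(*test_coverage.values())
uncovered = requirements - covered
print(sorted(uncovered))  # ['SRS-003'] -- no test addresses SRS-003
```

Elements (2) through (4) concern the selection of individual test cases (average, boundary, stress, and combined-function cases) and are judged per test rather than by this kind of tally.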
10.3 Additional criteria. The following definitions apply to criteria that are not self-explanatory and that appear in the NOTES column of the associated graphs. These criteria are not included in every graph, but appear only as appropriate.
10.3.1 Adequacy of quality factors. This criterion applies to the quality factor requirements in the Software Requirements Specification (SRS). Aspects to be considered are: (1) trade-offs between quality factors have been considered and documented, and (2) each quality factor is accompanied by a feasible method to evaluate compliance, as required by the SRS DID.
10.3.2 Testability of requirements. A requirement is considered to be testable if an objective and feasible test can be designed to determine whether the requirement is met by the software.
10.3.3 Consistency between data definition and data use. This criterion applies primarily to design documents. It means that each data element is defined in a way that is consistent with its usage in the software logic.
10.3.4 Adequacy of test cases and test procedures (test inputs, expected results, evaluation criteria). Test cases and test procedures should specify exactly what inputs to provide, what steps to follow, what outputs to expect, and what criteria to use in evaluating the outputs. If any of these elements are not specified, the test case or test procedure is inadequate.
10.3.5 Completeness of testing. Testing is complete if all test cases and all test procedures have been performed, all results have been recorded, and all acceptance criteria have been met.
10.3.6 Completeness of retesting. Retesting consists of repeating a subset of the test cases and test procedures after software corrections have been made to correct problems found in previous testing. Retesting is considered complete if: (1) all test cases and test procedures that revealed problems in the previous testing have been repeated, their results have been recorded, and the results have met acceptance criteria, and (2) all test cases and test procedures that revealed no problems during the previous testing, but that test functions that are affected by the corrections, have been repeated, their results have been recorded, and the results have met acceptance criteria.
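The selection rule in elements (1) and (2) reduces to the union of two sets: tests that revealed problems, plus passing tests whose functions are affected by the corrections. A minimal sketch, with hypothetical test identifiers:

```python
# Hypothetical retest selection per this criterion: repeat (1) every test
# that revealed a problem, and (2) every test that passed but exercises a
# function affected by the corrections.

failed_tests = {"TC-03", "TC-07"}            # revealed problems previously
passed_tests = {"TC-01", "TC-02", "TC-05"}   # revealed no problems
affected_by_fix = {"TC-02", "TC-07"}         # exercise corrected functions

retest = failed_tests | (passed_tests & affected_by_fix)
print(sorted(retest))  # ['TC-02', 'TC-03', 'TC-07']
```

Retesting is then complete only when every test in this selected set has been repeated, its results recorded, and the acceptance criteria met.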