

DAU RMG 5th: Risk Management Guide for DoD Acquisition 
 

2.6.4.3.1 Risk Rating and Prioritization/Ranking
Risk ratings are an indication of the potential impact of risks on a
program; they are a measure of the probability/likelihood of an event
occurring and the consequences/impacts of the event. They are often expressed
as high, moderate, and low. Risk rating and prioritization/ranking are
considered integral parts of risk analysis.
A group of experts who are familiar with each risk source/area (e.g.,
design, logistics, production) and product WBS element is best qualified to
determine risk ratings. The experts should identify rating criteria for
review by the PMO, which includes them in the Risk Management Plan. In most
cases, the criteria will be based on the experience of the experts, as opposed
to mathematically derived, and should establish levels of
probability/likelihood and consequences/impacts that provide a range of
possibilities large enough to distinguish differences in risk ratings. At the
program level, consequences/impacts should be expressed in terms of impact on
cost, schedule, and performance. Tables 22 and 23 are examples of
probability/likelihood and consequence/impact criteria, and Table 24
contains an example of overall risk rating criteria, which considers both
probability/likelihood and consequences/impacts. Table 25 provides a sample
format for presenting risk ratings.
Level | What is the Likelihood the Risk Event Will Happen?
a     | Remote
b     | Unlikely
c     | Likely
d     | Highly Likely
e     | Near Certainty

Table 22. Probability/Likelihood Criteria (Example)
Given the Risk is Realized, What is the Magnitude of the Impact?

Level | Performance                                     | Schedule                                                  | Cost
a     | Minimal or no impact                            | Minimal or no impact                                      | Minimal or no impact
b     | Acceptable with some reduction in margin        | Additional resources required; able to meet need dates    | <5%
c     | Acceptable with significant reduction in margin | Minor slip in key milestones; not able to meet need date  | 5-7%
d     | Acceptable; no remaining margin                 | Major slip in key milestone or critical path impacted     | 7-10%
e     | Unacceptable                                    | Can't achieve key team or program milestone               | >10%

Table 23. Consequences/Impacts Criteria (Example)
Risk Rating | Description
High        | Major disruption likely
Moderate    | Some disruption
Low         | Minimum disruption

Table 24. Overall Risk Rating Criteria (Example)
Priority | Area/Source Process | Location | Risk Event                   | Probability   | Consequence                 | Risk Rating
1        | Design              | WBS 3.1  | Design not completed on time | Highly Likely | Can't achieve key milestone | High
2        |                     |          |                              |               |                             |
3        |                     |          |                              |               |                             |

Table 25. Risk Ratings (Example)
Using these risk ratings, PMs can identify events requiring priority
management (high or moderate risk probability/likelihood or
consequences/impacts). The document prioritizing the risk events is called a
Watch List. Risk ratings also help to identify the areas that should be
reported within and outside the PMO, e.g., milestone decision reviews. Thus,
it is important that the ratings be portrayed as accurately as possible.
A simple method of representing the risk rating for risk events, i.e., a
risk matrix, is shown in Figure 26. In this example matrix, the PM has
defined high, moderate, and low levels for the various combinations of
probability/likelihood and consequences/impacts. The matrix is structured
somewhat symmetrically; programs should tailor the scales and risk rating
blocks to match their unique risk management requirements.
Figure 26. Overall Risk Rating (Example)
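The assignment of matrix cells to High, Moderate, and Low is program-defined. As a minimal sketch, assuming a hypothetical cell assignment (not the actual Figure 26 mapping), a risk matrix lookup might be coded as:

```python
# Hypothetical risk matrix lookup. Levels 'a'-'e' index both the
# probability/likelihood scale (Table 22) and the consequence/impact
# scale (Table 23). The thresholds below are illustrative only; a
# real program tailors the cell-to-rating mapping to its needs.
LEVELS = "abcde"

def risk_rating(likelihood: str, consequence: str) -> str:
    """Return High/Moderate/Low for a (likelihood, consequence) pair."""
    i = LEVELS.index(likelihood)   # 0 = Remote ... 4 = Near Certainty
    j = LEVELS.index(consequence)  # 0 = minimal impact ... 4 = unacceptable
    if i + j >= 6 or (i == 4 and j >= 2) or (j == 4 and i >= 2):
        return "High"
    if i + j >= 3:
        return "Moderate"
    return "Low"

print(risk_rating("d", "e"))  # Highly Likely x Unacceptable -> High
print(risk_rating("a", "b"))  # Remote x minor impact -> Low
```

Note that the matrix need not be symmetric; a program that is consequence-averse can push more high-consequence cells into the High band regardless of likelihood.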
There is a common tendency to attempt to develop a single number to portray
the risk associated with a particular event. This approach may be suitable if
both probability/likelihood and consequences/impacts have been quantified
using compatible cardinal scales or calibrated ordinal scales whose scale
levels have been determined using accepted procedures (e.g., the Analytic
Hierarchy Process). In such a case, mathematical manipulation of the values
may be meaningful and provide some quantitative basis for the ranking of
risks.
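When both factors genuinely are cardinal, for example an estimated probability and a dollar-quantified consequence, their product is a meaningful expected impact and can support a defensible ranking. A sketch with made-up numbers (the risk names and values are illustrative, not from the guide):

```python
# Illustrative only: with genuinely cardinal values -- an estimated
# probability and a quantified cost consequence -- multiplication
# yields an expected cost impact that can be ranked meaningfully.
risks = {
    "Design slip":  (0.7, 2_000_000),  # (probability, cost impact in $)
    "Vendor delay": (0.3, 5_000_000),
    "Test failure": (0.1, 8_000_000),
}

expected = {name: p * c for name, (p, c) in risks.items()}
for name, value in sorted(expected.items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected impact ${value:,.0f}")
```

The ranking can differ from intuition: the low-probability, high-cost event may fall below a likelier but cheaper one, which is exactly the insight cardinal arithmetic provides.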
In most cases, however, risk scales are actually just raw (uncalibrated)
ordinal scales, reflecting only relative standing between scale levels and not
actual numerical differences. Any mathematical operations performed on results
from uncalibrated ordinal scales, or a combination of uncalibrated ordinal and
cardinal scales, can provide information that will at best be misleading, if
not completely meaningless, resulting in erroneous risk ratings. Hence,
mathematical operations should generally not be performed on scores derived
from uncalibrated ordinal scales. (Note: risk scales expressed as decimal
values, e.g., a 5-level scale with values 0.2, 0.4, 0.6, 0.8, and 1.0, still
retain the ordinal-scale limitations discussed above.) For a more detailed
discussion of risk scales, see Appendix G of the reference
Effective Risk Management: Some Keys to Success.
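The pitfall is easy to demonstrate. Ordinal levels encode rank order only, so arithmetic on them treats unlike risks as equal. A small sketch (the level numbers are illustrative):

```python
# Why raw ordinal arithmetic misleads: levels 1-5 encode rank order,
# not magnitude. Multiplying them makes two very different risks
# appear identical, and a "score" of 4 is not twice as risky as 2.
risk_a = (1, 5)  # Remote likelihood, Unacceptable consequence
risk_b = (5, 1)  # Near Certainty likelihood, minimal consequence

score_a = risk_a[0] * risk_a[1]
score_b = risk_b[0] * risk_b[1]
print(score_a, score_b)       # both 5
print(score_a == score_b)     # True -- yet the risks are not equivalent
```

A rare but catastrophic risk and a near-certain but trivial one receive the same score, so the ranking the scores imply is spurious.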
One way to avoid this situation is to simply show each risk event’s
probability/likelihood and consequences/impacts separately, with no attempt to
mathematically combine them. Other factors that may significantly contribute
to the risk rating, such as time sensitivity or resource availability, can
also be shown. The prioritization or ranking — done after the rating — should
also be performed using a structured risk rating approach (e.g., Figure 26)
coupled with expert opinion and experience. Prioritization or ranking is
achieved through integration of risk events from lower to higher WBS levels.
This means that the effect of risk at lower WBS elements needs to be reflected
cumulatively at the top or system level.
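One simple roll-up convention, assumed here for illustration and not prescribed by the guide, is to carry the worst child rating up the WBS tree so that lower-level risk is visible at the system level:

```python
# Hypothetical WBS roll-up: a parent element inherits the worst
# (highest) rating among its children. The guide requires that
# lower-level risk be reflected cumulatively at the top level; the
# max rule below is one simple convention for doing so.
ORDER = {"Low": 0, "Moderate": 1, "High": 2}

def roll_up(child_ratings):
    """Return the worst rating among a WBS element's children."""
    return max(child_ratings, key=ORDER.__getitem__)

# Children of a notional WBS element 3.0
children = {"3.1": "High", "3.2": "Low", "3.3": "Moderate"}
print(roll_up(children.values()))  # High
```

More elaborate schemes weight children by cost or criticality, but any roll-up should preserve the visibility of a High rating buried deep in the WBS.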




 
 