P10, P50 and P90 (or, more generally, P(x)) are statistical terms used to describe the outcome of a risk event. These terms have been both widely and *wildly* used in project management over many years to forecast uncertain outcomes. But before we can investigate these terms, we must first understand the definition of risk.

According to the international risk management standard, ISO 31000, risk is the 'effect of uncertainty on objectives'. The "effect of uncertainty" includes both positive (opportunity) and negative (threat) impacts on objectives. Uncertainty, on the other hand, is the probability of such an "effect" taking place.

In project management, risk can be generalised into two categories: the known-unknowns and the unknown-unknowns. Examples of known-unknowns include technical assumptions, inaccurate cost/schedule estimating rates, or any event that might lead to a change of information. These known-unknowns can be identified through a brainstorming session during a risk workshop. With sufficient care and caution, they can be quantified and treated to minimise the impact on the overall project objective.

Unknown-unknowns, on the other hand, are events that lie beyond what a reasonable project team would expect or imagine. Since these events and their impacts are unforeseeable, the only way to quantify and analyse them is through statistical analysis of completed projects. Since all completed projects would ideally have had similar exposure to unknown-unknown risks, a quantitative analysis can be carried out on all measurable parameters of a project (i.e. duration, cost, quality, etc.). This analysis can be performed at an activity level (for example, digging a hole or welding pipe) or at a project level, by normalising each parameter against its unit or baseline value (as a ratio or percentage) respectively.
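As a minimal sketch of what "normalising against a baseline value" might look like in practice (the project names and cost figures below are entirely hypothetical, invented for illustration):

```python
# Hypothetical completed-project data: baseline vs actual cost.
# All names and figures are illustrative only.
projects = {
    "Project A": {"baseline_cost": 100.0, "actual_cost": 118.0},
    "Project B": {"baseline_cost": 250.0, "actual_cost": 240.0},
    "Project C": {"baseline_cost": 80.0,  "actual_cost": 96.0},
}

def cost_growth_ratio(baseline: float, actual: float) -> float:
    """Normalise the outcome as a ratio of actual to baseline cost.

    A ratio of 1.0 means on budget; 1.18 means 18% cost growth.
    """
    return actual / baseline

# Dimensionless ratios are comparable across projects of different sizes,
# which is what makes a cross-project statistical analysis possible.
ratios = {name: cost_growth_ratio(p["baseline_cost"], p["actual_cost"])
          for name, p in projects.items()}
```

Once every completed project is reduced to such a ratio, the sample can be analysed as one population regardless of each project's absolute size.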

All of this statistical analysis can be very difficult to understand, particularly if one comes from an organisation which typically 'slaps' on a percentage figure for contingency and calls it an allowance. The problem with this approach is that it creates a bucket of additional funding with no traceability for what the allowance was supposed to cover.

So let's dive into some mathematical detail and try to understand how a simulation can be a better approach.

The outcome of a statistical approach (such as Monte Carlo Analysis) is usually a spread of data over a measurable scale, which can be represented using a Gaussian distribution (a mean and standard deviation) or a binomial distribution. It is here that P10, P50 and P90 come into play.

The above image shows a classic Gaussian Distribution, with a P50 intersection

By definition, P10, P50 and P90 are values on an ascending or descending scale, representing the point where the integral (total area) from one end of the statistical curve to the defined value equals 10%, 50% and 90% of the total area respectively. In other words, P(x) is the value which, historically, only x% of sample/trial outcomes were able to achieve or outperform (that is, x% of outcomes fell at or below it).

In the above example, we demonstrate that 50% of values are less than or equal to $12,507 (in other words, there is a 50% chance that the outcome of a subsequent analysis will fall in this same range).
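To make the P(x) idea concrete, here is a small sketch that reads P10, P50 and P90 off a sample using only the Python standard library. The sample is simulated with parameters loosely inspired by the figure above, not taken from any real data set:

```python
import random
import statistics

random.seed(42)
# Simulated cost outcomes (illustrative: a bell curve centred near $12,500)
outcomes = [random.gauss(mu=12_500, sigma=1_500) for _ in range(10_000)]

# statistics.quantiles with n=10 returns the nine decile cut points:
# P10, P20, ..., P90 in ascending order.
deciles = statistics.quantiles(outcomes, n=10)
p10, p50, p90 = deciles[0], deciles[4], deciles[8]

# By construction, roughly 10% of outcomes fall at or below p10,
# 50% at or below p50, and 90% at or below p90.
```

Reading P50 off this sample is exactly the "50% of values are less than or equal to" statement above, just computed from raw outcomes rather than a fitted curve.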

## An introduction to Risk Simulation

It is important to remember that these P(x) values are always calculated as the result of statistical analysis. A common tool for this is the Monte Carlo Analysis (MCA) technique. In MCA, an algorithm simulates a project's execution many times over. The algorithm works (crudely) as follows:

```
Begin simulation
  For each risk in the register:
    Generate a random number between 0 and 1 (expressed as a percentage)
    If the number is less than the risk's probability of occurrence:
      The risk has occurred: obtain a random impact value framed by
      the risk's three-point estimate
    Else:
      The risk does not occur; move to the next risk
End simulation
```
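The steps above can be sketched in runnable form. The risk register below is a toy example (the risk names, probabilities and three-point figures are invented for illustration), and a triangular distribution is one common way to frame a three-point impact:

```python
import random

# Toy risk register: each risk has an occurrence probability and a
# three-point cost impact (min, most likely, max). Figures are illustrative.
risk_register = [
    {"name": "Ground conditions", "probability": 0.30, "impact": (10_000, 25_000, 60_000)},
    {"name": "Weld failures",     "probability": 0.15, "impact": (5_000, 12_000, 30_000)},
    {"name": "Late delivery",     "probability": 0.50, "impact": (2_000, 8_000, 20_000)},
]

def simulate_once(register, rng):
    """One iteration: roll each risk; if it occurs, draw an impact
    from a triangular distribution framed by its three-point estimate."""
    total = 0.0
    for risk in register:
        if rng.random() < risk["probability"]:
            low, mode, high = risk["impact"]
            total += rng.triangular(low, high, mode)
    return total

def run_simulation(register, iterations=10_000, seed=1):
    rng = random.Random(seed)
    return sorted(simulate_once(register, rng) for _ in range(iterations))

results = run_simulation(risk_register)
p50 = results[len(results) // 2]  # median simulated risk cost
```

Sorting the iteration totals makes the P(x) values trivial to read off: P50 is simply the middle of the sorted list, and P10/P90 sit at the 10th and 90th positions (as percentages of the list length).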

This algorithm essentially simulates the project taking place over thousands of iterations. As it runs, the algorithm activates risks randomly throughout the project, and the impact of each risk is then framed by its three-point estimate. The downside to this approach, however, is that the output is only as good as the estimate ranges applied to the risks *(i.e. garbage in, garbage out)*.

Unfortunately, as human beings we tend to be slightly pessimistic when it comes to quantifying risk outcomes, and we often forget to include opportunity as part of the analysis; hence the calculated results can end up biased.

## Final Thoughts

There are several benefits to undertaking risk analysis, along with a few practices that help realise them:

- If a full risk register is developed and each risk appropriately costed, then a project team has a firmer basis on which to defend its access to contingency. This is especially valuable when a project applies for contingency in a political or cash-strapped environment.
- With more emphasis on saving cost and enhancing project delivery performance, being able to fully quantify and qualify risk events is even more important when applying for budgets
- By identifying risk as well as opportunity within the simulation, a project team can get a feel for the most likely outcome for the project
- The contingency value derived is based on a statistical method not just a "number pluck"
- When deriving risk impact values, engage independent subject matter experts (consultants) who may take an unbiased perspective of your project
- Engage senior management in the process so they can fully understand and appreciate the impacts of risk on your project

But to get to this point, a series of successful risk workshops must be held to obtain unbiased data.