22 September 2014
Risk Acceptance

One of the key types of decisions that safety analysis supports is whether or not a particular amount of risk is acceptable. This may be risk associated with a whole industry, such as nuclear power, a single platform, such as an airliner, or a single hazard, such as the risk associated with cars failing to stop at a level crossing. Everything in life has risk. Even after we’ve done our best to control a hazard, we always need to ask whether we’ve done enough.

Even if people don’t agree about whether particular amounts of risk are acceptable, you would think that we could at least agree on how to ask the question. Unfortunately that isn’t the case. There are a number of different frameworks for arguing about risk acceptance, and in this section I’ll try to cover the main approaches.

The first, and most subjective, approach is to judge risk based on public perception. We have a fairly good understanding of the psychology behind risk decision making. People prefer risks that they perceive as voluntary, under their own control, clearly beneficial, natural, and familiar. People dislike risks that they perceive as imposed by outsiders, exotic, unfair, or more likely to affect children. Just because these preferences are emotional doesn’t make them irrational: they are legitimate value judgements that people make. Unfortunately, they don’t help us much with managing risk. If we used public perception as our benchmark, we would spend our effort managing perception rather than actually improving safety. That would be similar to defending airport security on the grounds that it makes people feel safer, regardless of its actual effectiveness.

The extreme case of this was proposed by Senator James Delaney, in what has come to be known as the Delaney Principle. Delaney argued for a total ban on all carcinogens, and was successful in enshrining this principle in legislation. Such an approach is similar to trying to prevent high tide via maritime regulations. Slogans such as “zero harm” or “you can’t place a value on human life” ignore the reality of risk, and result in excessive mitigation of some hazards at the expense of failing to deal appropriately with other, more serious hazards.

The second approach is to apply an absolute test. This involves defining some benchmark for what is an acceptable amount of risk, and comparing all risks to that benchmark. An example of an absolute test is the German MEM, or Minimum Endogenous Mortality. MEM says that any new risk shouldn’t significantly increase the endogenous mortality – the risk of dying from all causes. For a young person, endogenous mortality is around one in five thousand each year, so MEM is set at one in one hundred thousand per year for any new risk. Using MEM is a bit tricky, because instead of working out the risk of hurting anyone, you need to work out the risk of hurting any particular person. For example, let’s say that my new train exposes train drivers to a one in ten thousand risk per year. I could make my train ten times safer, or I could hire more train drivers so that each driver is exposed for one tenth of the time. Either way the risk to any individual driver drops to one in a hundred thousand and MEM is satisfied, even though only the first option reduces the total amount of harm.
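To put numbers on that example, here is a minimal sketch of the MEM individual-risk arithmetic. The figures are the ones used above; the function and variable names are my own illustration, not part of any standard.

# Illustrative sketch of the MEM calculation described above.
MEM_LIMIT = 1e-5  # one in one hundred thousand per year

def individual_annual_risk(collective_annual_risk, number_of_drivers):
    """Risk to any one driver when the same exposure is shared equally."""
    return collective_annual_risk / number_of_drivers

# One driver carrying the full one-in-ten-thousand risk fails the MEM test.
print(individual_annual_risk(1e-4, 1) <= MEM_LIMIT)   # False

# Ten drivers sharing the exposure each see one in a hundred thousand, which passes,
# even though the total expected harm is unchanged.
print(individual_annual_risk(1e-4, 10) <= MEM_LIMIT)  # True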

In the United Kingdom absolute tests are used indirectly, to filter out risks that are too high or too low to deserve detailed treatment. Risks above one in ten thousand per year are considered so extreme that they are automatically unacceptable. Risks below one in a million per year are considered so low compared to background risk that they don’t deserve detailed consideration.
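As a rough sketch of how those two thresholds act as a filter (the wording of the verdicts is mine, not regulatory language):

def screen_individual_risk(annual_risk):
    """Compare an individual annual risk against the two UK screening thresholds."""
    if annual_risk > 1e-4:   # above one in ten thousand per year
        return "unacceptable regardless of benefit"
    if annual_risk < 1e-6:   # below one in a million per year
        return "broadly acceptable, no detailed treatment needed"
    return "in between: needs detailed treatment and justification"

print(screen_individual_risk(5e-4))  # unacceptable regardless of benefit
print(screen_individual_risk(5e-6))  # in between: needs detailed treatment and justification
print(screen_individual_risk(5e-7))  # broadly acceptable, no detailed treatment needed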

Absolute tests are widely used in civil aviation regulation.

The third approach is to use a relative test. This involves comparing the risk to other similar risks. An example of a relative test is the French GAMAB or GAME, standing for Globally At Least As Good or Globally At Least Equivalent. This works very well when replacing an old system with a similar new system. If the old system was safe, and the new system is at least as good, then the new system must be safe. The logic breaks down when technology changes significantly, or when social expectations change. If we only ever built systems at least as safe as the old ones, there would be no impetus to make systems safer.
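The word “globally” is doing real work here: the comparison is of the system as a whole, so an individual hazard can get worse provided the total does not. A small sketch, with invented hazard names and figures:

# Hypothetical per-hazard annual risk figures for an old system and its replacement.
old_system = {"collision": 4e-5, "derailment": 2e-5, "door trapping": 1e-5}
new_system = {"collision": 1e-5, "derailment": 2e-5, "door trapping": 3e-5}  # one hazard is worse

def globally_at_least_as_good(new, old):
    """GAMAB-style test: total risk of the new system must not exceed the old one."""
    return sum(new.values()) <= sum(old.values())

print(globally_at_least_as_good(new_system, old_system))  # True: 6e-5 <= 7e-5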

The fourth approach is to apply a trade-off test. Rather than treating all risks as equal, a trade-off test compares risks against benefits. Simple trade-off tests are not very useful, because they don’t take possible mitigations into account, and because the risks and the benefits usually fall on different people. The most commonly used trade-off test is called “As Low As Reasonably Practicable”, or ALARP, introduced in the United Kingdom in a legal case called Edwards v National Coal Board. ALARP doesn’t compare risk against benefit directly, but instead weighs up the costs and benefits of further mitigation.

For most risk acceptance tests, the output can be described by saying “This is the amount of remaining risk, and it is acceptable”. Under ALARP, you describe the output by saying “These are the other mitigations I considered, and this is why I haven’t used them”.
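To make that weighing concrete, here is a minimal sketch of the kind of cost-benefit check that sits behind an ALARP argument. The value placed on preventing a fatality and the disproportion factor are illustrative placeholders, not prescribed figures; the point is the shape of the comparison, not the numbers.

def mitigation_required(cost_of_mitigation, fatalities_averted_per_year, years_of_operation,
                        value_of_preventing_a_fatality=2e6, disproportion_factor=3.0):
    """Under ALARP a further mitigation is required unless its cost is grossly
    disproportionate to the safety benefit it buys."""
    benefit = fatalities_averted_per_year * years_of_operation * value_of_preventing_a_fatality
    return cost_of_mitigation <= disproportion_factor * benefit

# A barrier costing 500,000 that removes a one-in-ten-thousand annual fatality risk
# over a 30-year life buys a benefit of 1e-4 * 30 * 2e6 = 6,000. The cost is far more
# than three times the benefit, so declining it can be argued to be reasonable,
# provided the reasoning is recorded, as described above.
print(mitigation_required(500_000, 1e-4, 30))  # False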

The final approach is implied acceptability. This is where you follow an accepted process or design, and the risk is assumed to be acceptable because of the guidelines you followed. In the UK, one of the ways of demonstrating ALARP is by following good practice for the industry you are in – this is a type of implied acceptability. Software safety often uses implied acceptability – rather than trying to measure software risk, you instead base your safety argument on the processes you used to develop and test the software.

There are strengths and weaknesses of all of these tests. I discussed them in detail in a 2007 paper which I’ve linked to in the shownotes.
