Recipes for Risk Reduction

By Kevin McManus, Chief Excellence Officer, Great Systems

Rarely do I come across organizations that are not interested in minimizing the risk of their operations. At the same time, I consistently come across people who underestimate the types of systemic changes that are needed to actually realize and sustain such a goal. What practices do your people use daily to help keep risk levels low? How well do these practices actually work? How effective are your recipes for risk reduction?

How Well Do Your Fixes Match the Significance Potential of the Error?

In the 400-plus TapRooT® root cause analysis courses I have facilitated, I have seen too many cases where an incident with significant potential, or an actual loss, has occurred, yet the proposed corrective actions are weak. For example, I have seen near misses in which heavy objects nearly fell on someone addressed with nothing more than procedure modifications and safety meeting reminders.

All too often, I see only one or two relatively weak corrective actions being proposed for what could have been a very serious problem. We do the same thing from a proactive perspective. What types of safeguards do you use to help perform error free work? How effectively are you ‘mixing and matching’ your use of safeguards to optimize human performance, while also minimizing the risk associated with potential errors?

The hierarchy of controls model provides a great place to start when one is beginning to create recipes for risk reduction. Consistent, process-driven risk assessments, along with process-based error rate and type analysis, also help calibrate the team to potential risk levels. Most organizations underestimate their current risk levels, simply because they underestimate the actual error rates that occur on the job each day.
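As a rough illustration of how the hierarchy of controls can calibrate a 'mix of fixes', here is a minimal Python sketch that ranks proposed corrective actions by control strength. The ordering follows the commonly cited hierarchy; the action names and numeric scores are hypothetical examples for illustration, not TapRooT® outputs.

```python
# Relative strength of each control type, strongest first
# (ordering per the commonly cited hierarchy of controls).
HIERARCHY = {
    "elimination": 5,
    "substitution": 4,
    "engineering control": 3,
    "administrative control": 2,  # procedures, training, reminders
    "ppe": 1,
}

def strongest_control(proposed_fixes):
    """Return the proposed fix with the highest control strength.

    proposed_fixes: list of (description, control_type) tuples.
    """
    return max(proposed_fixes, key=lambda fix: HIERARCHY[fix[1]])

# Hypothetical corrective actions for a dropped-load near miss.
fixes = [
    ("Revise the lifting procedure", "administrative control"),
    ("Remind crews at the safety meeting", "administrative control"),
    ("Install a load-rated barricade under the lift path", "engineering control"),
]

best = strongest_control(fixes)
print(f"Strongest proposed safeguard: {best[0]} ({best[1]})")
```

If every proposed fix scores at the administrative or PPE level, that is a signal the recipe may be too weak for the risk involved.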

LEARN MORE: The Psychology of Failing Fixes

How Effective are Your Safeguards at Risk Reduction?

When a human error occurs, at least one, and often more than one, safeguard has failed. In many cases, people rely on memory as their only 'real time' safeguard. Weak safeguards such as policies, low-engagement training, and supervisory reminders are used in an attempt to sustain low error rates, but they rarely achieve the desired reduction. As task complexity increases, team chemistry changes, and job scopes shift, the holes in these safeguards grow larger and larger.

Reason’s Swiss cheese model also provides guidance for gauging the relative strength of a given set of improvements. In this model, each slice of cheese represents a safeguard. Each safeguard has a probability for success, as reflected by the relative size of the holes in a given slice of cheese. Theoretically, more slices of cheese equal less risk, as it is harder for the holes to line up and let the hazard get through to the target (or let a defect reach the customer). All too often, we overestimate the power of a given safeguard, however.
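To make the 'more slices, smaller holes' idea concrete, the short sketch below estimates the chance that a hazard slips past every safeguard, assuming (optimistically) that the safeguards fail independently. The failure probabilities used here are illustrative placeholders, not measured values.

```python
from math import prod

def breakthrough_probability(failure_probs):
    """Probability that a hazard passes every safeguard,
    assuming each safeguard fails independently."""
    return prod(failure_probs)

# Illustrative 'hole sizes': a memory-based check, a peer check,
# and an engineered interlock.
safeguards = [0.20, 0.10, 0.01]

print(f"Chance all safeguards fail: {breakthrough_probability(safeguards):.4%}")
# Real safeguards often share failure modes (fatigue, time pressure,
# staffing shortfalls), so the true breakthrough probability is
# usually higher than an independence calculation suggests.
```

The point of the arithmetic is the last comment: because safeguards share failure modes, stacking several weak, memory-based layers rarely buys as much protection as the independent model implies.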

If your recipe for risk reduction – your ‘mix of fixes’ – is sound, and your improvements are effective, the likelihood of the errors repeating themselves should be low. This shift alone reduces risk. If the improvements also help reduce the potential severity associated with an error, risk is further decreased. Our common problem here is that we overestimate the relative impact of many of our proposed fixes. One-time meetings to review newly written procedures or to reinforce the rules rarely have a significant behavior change or risk reduction impact.

EXPLORE MORE: Measuring Safeguard Effectiveness

Building a Risk Reduction Cookbook

Different types of work carry different levels of risk with them. Risk of error for a given task also varies with who is performing the work, how frequently the work is performed, and where the work is being performed. That’s why risk assessments prior to work can be very effective safeguards themselves. Consider using an effective risk assessment process to determine the mix of safeguards required to achieve the desired level of risk.
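As one way to picture a 'mix to taste' risk assessment, the sketch below scores a task on likelihood and severity and maps the score to a suggested mix of safeguards. The thresholds, ratings, and recommended ingredients are assumptions chosen for illustration; your own assessment process should define them.

```python
def risk_score(likelihood, severity):
    """Simple likelihood x severity score, each rated 1 (low) to 5 (high)."""
    return likelihood * severity

def suggested_safeguards(score):
    """Map a risk score to an illustrative mix of safeguards."""
    if score >= 15:
        return ["job redesign / engineering controls",
                "in-hand, step-by-step procedure with double signoff",
                "pre-job huddle"]
    if score >= 6:
        return ["in-hand work instructions", "pre-job huddle"]
    return ["well-communicated policy"]

# Hypothetical high-risk lift: likely to go wrong, severe if it does.
score = risk_score(likelihood=4, severity=5)
print(score, suggested_safeguards(score))
```

The same lookup run with a low likelihood and low severity returns only the policy-level safeguard, which is the 'get the kids fed' end of the spectrum described below.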

Keep in mind that you will need to ‘mix to taste’ here. You don’t want to be investing the effort to prepare a gourmet meal when the goal is to only get the kids fed before the baseball game starts. Too many organizations waste limited resources focusing on weak safeguards. They also often use more layers of protection than they need.

Let’s look at the inspection safeguard, for example. When higher risk work is being performed, a procedure-driven inspection, complete with step-by-step double signoffs and photographs, may be needed to help ensure that the risk of missing a defect has been reduced. In low-risk cases, an undocumented, memory-guided inspection, or even no inspection at all, may be sufficient.

Creating a Recipe for Risk Reduction

To get started, think about the human behaviors you are trying to change. What types of errors are you trying to help people avoid making? If the tasks are consistent and simple, and the staff is experienced, simply clarifying expectations with a well-written and well-communicated policy may be enough. As risk related to potential errors increases, you may want to remove potential memory loss from the ‘what causes errors?’ equation by requiring the use of in-hand work instructions.

You should never rely on paper references alone as safeguards for error-free work. At a minimum, commit the tasks to memory through practice-based instruction, refresh memories prior to work with an engaging, focused team huddle, or do both. For even higher risk work, consider guarding the target and working to minimize exposure through effective job redesign.

In summary, here are some possible ingredients that you might consider using as you develop your own recipes for risk reduction:

  1. Utilize well-designed policies and procedures to define work methods and performance expectations (how time is used and what measurable goals apply).
  2. Minimize the use of memory when performing relatively high-risk work by using well designed, in-hand procedures.
  3. Require all process owners to effectively communicate performance expectations in a daily, positive, and meaningful manner (huddle or tailboard).
  4. Let the potential risk level of the work determine the level of safeguarding required. If the potential for error is relatively high, a more in-depth approach to job preparation and the use of in-hand instructions might be useful, for example.
  5. Include adequate time for practice in training – require on-the-job competency demonstration for skills that involve relatively high-risk work.
  6. Engineer out the potential for error whenever possible through effective human factors engineering and organizational ergonomics (job design).
  7. Install a process for capturing key daily errors at the process level, aggregate this data for analysis over time, and address high-risk findings with effective fixes (a minimal sketch of this capture-and-aggregate pattern follows this list).
  8. Use a root cause analysis process, such as TapRooT®, that is designed to consider the impact of missing safeguards and low safeguard effectiveness on the potential for human error.
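For ingredient 7, here is a minimal sketch of how daily error capture and aggregation might look in Python, using a plain CSV-style log. The field names, thresholds, and file name are hypothetical; the goal is only to show the capture-then-aggregate pattern, not a prescribed format.

```python
import csv
from collections import Counter

def load_error_log(path):
    """Read daily error records: date, process, error_type, severity (1-5)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def high_risk_findings(records, min_count=3, min_severity=4):
    """Flag error types that recur often or carry high severity."""
    counts = Counter(r["error_type"] for r in records)
    frequent = {etype for etype, n in counts.items() if n >= min_count}
    severe = {r["error_type"] for r in records if int(r["severity"]) >= min_severity}
    return frequent | severe

# Usage (assumes a local file named errors.csv with the columns above):
# findings = high_risk_findings(load_error_log("errors.csv"))
# print(sorted(findings))
```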

Keep improving!

By Kevin McManus, Chief Excellence Officer, Great Systems

NOTE: If you found value in this article, you might also benefit from reading my new book “Error Proof”, which is now for sale on Amazon.com.

www.greatsystems.com            kevin@greatsystems.com

FOLLOW me on Twitter: @greatsystems

LIKE Great Systems on Facebook

CONNECT with me on LinkedIn


About the Author:

Kevin McManus serves as Chief Excellence Officer for Great Systems! and as an international trainer for the TapRooT® root cause analysis process. During his thirty-five-plus years in the business world, he has served as an Industrial Engineer, Training Manager, Production Manager, Plant Manager, and Director of Quality. He holds an undergraduate degree in Industrial Engineering and an MBA. He has served as an Examiner and Senior Examiner for the Malcolm Baldrige National Performance Excellence Award for eighteen years. Kevin also writes the monthly performance improvement column for Industrial and Systems Engineering magazine, and he has published a new book entitled “Vital Signs, Scorecards, and Goals – the Power of Meaningful Measurement.”