Qualified, But Not Competent

by Kevin McManus, Chief Excellence Officer and Systems Guy, Great Systems


How Broken are Your Competency Testing Processes?

The driver’s licensing process across our fifty United States is perhaps the best example of a systemically broken competency testing process. In the state of Texas alone this year, traffic fatalities are occurring at a rate of more than nine people a day. I am pretty confident in saying that one systemic root cause of many of these fatalities is a broken driver qualification process. I am also pretty sure that being able to text at 30 words per minute with 95% accuracy while driving is not part of any state’s licensing process.

Every day, we accept the risk, knowingly or unknowingly, of allowing people to do things they don’t really know how to do very well. We have many more people who are qualified for (approved to do) a given task or skill set than we have people who are competent in that skill set (able to do). Competency models continue to be created as the seconds tick by. How effectively do we measure an employee’s performance against those models? How often do we even attempt to do this in a measurable way?

DISCOVER More: Process Improvement Strategies


Where are Our Competency Testing Processes Broken?

Worse yet, this problem becomes magnified as we move up the organizational ladder. In most companies, competency testing is more effective on the front lines because formal, standardized certification processes exist there. We can more easily tell whether a welder is qualified and competent for a job than we can assess a vice president of marketing’s true abilities against job expectations.

Personally, gaining competency and moving towards mastery in my own skill development has taught me a lot about what it means to be approved to do a job versus able to do a job. I became approved to teach TapRooT® root cause analysis courses by myself after demonstrating the ability to do so effectively across 25 or so courses. More than 400 courses later, I am a MUCH better instructor on the topic. When did I actually become competent to teach it? When does skill mastery come into play? Which positions need elements of skill mastery built into their personal development plans?

If you want to improve the effectiveness of your competency testing processes, four systemic gaps should be addressed first: weak curriculums / competency modeling, limited practice time across defined skill sets, weak trainer development processes, and weak testing protocols.


  • Weak curriculums / competency modeling – These models are often developed in a hurry, and their development is often a one-time event. Failing to keep our lists of necessary skills and abilities for a given job up-to-date could prove quite costly as customization becomes the new service standard.
  • Limited practice time across defined skill sets – To cement a skill into long-term memory, it typically has to be practiced repeatedly. Most of today’s training courses contain only a small percentage of practice time. Too much content, too little practice!
  • Weak trainer development processes – How many of your ‘on the job’ trainers have gone through a formal trainer development process? Too many of our trainers know their content, but are quite weak at delivering it effectively and at providing useful coaching feedback.
  • Weak testing protocols – Few organizations consider Kirkpatrick’s four levels of training evaluation when they design the competency testing protocols for a given skill set. We often limit competency testing to only certain skills, certain positions, and a written evaluation test form (limited skill demonstration). What percentage of a given course’s content are we truly remembering and applying correctly?


Challenges with employee access and excessive time requirements head the list of excuses for not addressing these types of competency testing challenges. Ironically, if we only knew the TRUE cost of the daily errors, process failures, and service delivery goofs our people make, we would quickly invest the time to fix these processes, whose effectiveness is a prerequisite for high performance. Even worse, failing to address the process design gaps now will only allow these costs to escalate as job skill requirements become more fluid and numerous in our attempts to satisfy the ever-shifting expectations of the customer.

EXPLORE More: How Effective are Your Learning Systems?


How Do We Improve Our Competency Testing Processes?

With today’s technology, we could significantly improve state level driver testing protocols to require more practice time (via VR/game console and simulator) across a much higher percentage of potential driving scenarios. It’s not the same as really doing the driving, but it is light years ahead of what we do now. We can use this same technology to help improve our own competency assessment processes.

Effectively utilizing training technology is not the only solution we need to focus on, however. In the management ranks in particular, we need to get better at assessing soft skill competency. For example, how competent are your leaders when it comes to communicating effectively in writing and face-to-face, facilitating groups, and inspiring others?

This can be done. Techniques exist for observing workplace behavior using a formal assessment tool and process. We have to want to fix this problem, however. We have to recognize that there are tremendous waste pools out there waiting to be drained. The training system needs improvements in multiple areas if we truly want to maintain effective competency levels in our organizations.

How effective are the competency testing processes that you use? What percentage of your key tasks, or even positions, have no, or limited, competency assessments developed for them? How many of your people are making errors because they are considered qualified for a job, but have not yet really become competent at doing it? Is it possible that you might gain significant benefits from improving the effectiveness of your competency assessment processes across ALL employee groups?

Keep improving! – Kevin McManus, Chief Excellence Officer and Systems Guy, Great Systems


If you would like more information about the improvement tools and systems I have to offer, please send me an e-mail at kevin@greatsystems.com.

FOLLOW me on Twitter: @greatsystems

LIKE Great Systems on Facebook

CONNECT with me on LinkedIn