Post-Training Evaluation Strategies - Part 1


One of the biggest challenges organizations face when they start down the road of performance improvement with their employees is understanding the difference between training and learning.

Rather than give you an overly academic textbook definition, we can differentiate the two in the following way.

"Training" is the deliberate practice of skills to improve the performance of that skill.
"Learning" is the acquisition of knowledge to improve understanding.
It's crucial for business people to understand the difference between training and learning, because without these operational definitions it becomes all too easy for organizations to waste precious training budgets on ineffective "training" that doesn't result in any performance gains on the job. To highlight this difference, let's walk through a case study.


One of my former client organizations, for whom I worked full time, had a customer service training program that most employees were required to complete. A senior director had organized this program as a means to gain prestige within the company, and it's worth noting this person had no formal experience with training systems development or instructional design.

I was asked to evaluate whether this program was producing any value for the company as part of my role in stewarding investments made in employee training. On the surface, this request appeared rather basic and straightforward until I learned a bit more about the program sponsor and her program. This individual was a rather ambitious person who excelled at organizational politics. Her customer service training program was a highly visible program that had a lot of executive buy-in and had been in operation for two years. More than 500 employees had been through the program, and it had a part-time coordinator dedicated to scheduling and co-facilitating it. Employees enrolled in the program were excused from a half-day of work, and there was a lot of top-down pressure from the VP level to run as many employees through this workshop as possible.



I am rather proud of my origins and often joke about being, "...just another hick from the mountains of Idaho." However, this mountain hick didn't fall off the potato truck yesterday. Once I got the basic lay of the land and was asked to determine if the program was providing value, a single thought immediately sprang to mind.


Now I couldn't just back away from this request. My boss had handed it to me and briefed me on how politically sensitive the situation was. She also gave me the low-down on the program sponsor. The sponsor had a reputation for working organizational politics in her favor, even if that meant, "leaving a trail of bodies behind her," as one of my colleagues put it. So from the start, I had to be very careful with my evaluation strategy.

When In Doubt Start With Alignment On A Commonly Valued Outcome

This company produced an Electronic Health Record (EHR) system, an industry that shares a common customer service rating system provided by an impartial third party, which reports the latest customer service ratings on a quarterly basis. Rule #1 when developing a training program is to align the content and design of the training solution with an organizational outcome you can track and measure. The Holy Grail of the learning analytics game is to track return on investment (ROI), but this can be difficult for a variety of reasons. If you know tracking ROI is not realistic, then customer service ratings can be a good outcome on which to align your program. The added benefit of using an industry-standard set of metrics is that it has the gravitas to cut through organizational politics. However, as you will see, this can be a double-edged sword.

Compare The Dimensions of The Outcome To The Content Of The Program

The third-party customer service survey was significant in that it provided a framework against which we could evaluate the program. It deconstructed "customer service" into several dimensions that we could use as reference points for evaluating whether the program's content aligned with the survey outcomes.

Let's use a tried-and-true method of program evaluation: the good ol' Logic Model.

Follow this link for more details about Logic Modeling.


With Logic Modeling, we can work backward from the impact the business desires to create all the way to the inputs needed for the training materials. The diagram shows a high-level explanation of how various aspects of a training solution align with the customer service rating my client was chasing.

We need to work from right to left to get a sense of how I reverse engineered an evaluation strategy.


Logic Modeling is not just a robust evaluation strategy; it's also a great planning strategy for creating a program. Sadly, I don't believe the program sponsor used this approach. When a colleague and I sat down to evaluate the program, these were our core findings.

  • My company was middle-of-the-road when it came to the competition.

  • When we compared internal customer service metrics to the third-party metrics, we noticed a lack of alignment that could be improved.

  • There was no objective measure of how customer-centric my company's employees were.

  • The training was a 4-hour stand-alone workshop with no post-training coaching or performance support plan.

  • The content was more attitudinal and inspirational in nature; there was zero alignment between the content and the other metrics.
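That zero-alignment finding can be made concrete with a simple coverage check in the spirit of the Logic Model: map each outcome dimension the third-party survey measures to the training content that supports it, and flag any dimension left uncovered. Here is a minimal sketch; the dimension and module names are invented for illustration, not taken from the actual survey.

```python
# Hypothetical sketch: does the training content cover the outcome
# dimensions the third-party survey actually measures?
# Dimension and module names are assumptions for illustration.

survey_dimensions = {
    "responsiveness": ["handling support tickets"],
    "product knowledge": [],
    "communication clarity": ["writing status updates"],
    "issue resolution": [],
}

# Working backward from outcomes (Logic Model style): any survey
# dimension with no supporting training content is a gap in the program.
gaps = [dim for dim, modules in survey_dimensions.items() if not modules]

print("Uncovered outcome dimensions:", gaps)
```

A table in a spreadsheet does the same job; the point is to force an explicit mapping from every measured outcome back to some piece of content.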

In short, employees were being pulled away from their work for a half-day motivational TED Talk about how much our customers needed our product. The only evaluation the program had was a post-workshop survey about how much people liked the content; there was no assessment of whether the workshop changed employees' on-the-job customer service behaviors. My colleague and I were a little disappointed and came to the following conclusion...




When my colleague and I looked at the internal customer service metrics, we saw some immediate opportunities for improvement. In some cases, we could cleanly map the higher-level customer service metrics tracked by the industry third party onto my company's internal customer service department's survey. However, some of these metrics were tied directly to the opinions of senior executives at individual health systems who simply wouldn't participate in telephone surveys after a tech-support call.

From there, my colleague and I planned to monitor the survey results for a business quarter and then identify the pain points around certain metrics. In some cases, we expected to find issues we could solve with customer service training; in other situations, I predicted that some of the customer support folks evaluated on the internal company survey would need improved access to information via a better knowledge base tool.
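That monitoring step amounts to scanning a quarter's worth of survey scores for metrics that fall below a target. A minimal sketch of the idea follows; the metric names, scores, and target value are all assumptions for illustration.

```python
# Hypothetical sketch: flag survey metrics whose quarterly average falls
# below a target threshold, to locate "pain points" worth a closer look.
# Metric names, monthly scores, and the target are invented examples.

quarterly_scores = {
    "first-call resolution": [3.9, 4.1, 4.0],
    "support responsiveness": [3.1, 2.9, 3.3],
    "knowledge of product": [2.8, 3.0, 2.7],
}

TARGET = 3.5  # assumed internal target on a 5-point scale

# Average each metric over the quarter and keep only the laggards.
pain_points = {
    metric: sum(scores) / len(scores)
    for metric, scores in quarterly_scores.items()
    if sum(scores) / len(scores) < TARGET
}

for metric, avg in pain_points.items():
    print(f"{metric}: quarterly average {avg:.2f} is below target {TARGET}")
```

Only after a metric shows up here would we ask whether the fix is training or something else, like a better knowledge base tool.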

Regardless of the solution, tight alignment between your evaluation methodology on the front end and the actual training topics gives you a much better chance of producing a valued result when it comes time to design the program.


To paraphrase Robert Burns' poem, "To a Mouse": "The best-laid plans of mice and men often go awry." Despite the sound analysis and corrective action plan my colleague and I laid out, our boss shut us down before we could even get the plan off the ground. Sadly, when we explained our plan to the boss, she was thrilled with the quality of our work but bluntly stated that it would go over like a lead balloon.




Remember the politically ambitious senior director I mentioned earlier in this post? You know, the program sponsor. She still needed to be handled with care. Based on all the stories I had heard about this woman, she frightened me. She had been ruthless about securing her position within this company through a combination of hard work, faithful service, and stepping on the necks of others. I think this was due in part to her personality and in part to the fact that this organization was awash in data but had little in the way of actionable insight or analysis. The executives were good at personality politics but hit or miss when it came to data-driven evaluation. As a result, it became much easier for those with high ambition, but without the skill to achieve it, to hide in the shadows and whitewash the facts.

This political whitewashing is both a survival mechanism and a means of advancing one's career. Regrettably, it's all too common a trait among workplace bullies. As unprofessional as this may sound, there is a famous character from contemporary literature who comes to mind whenever I think of this woman: Cersei Lannister, from George R. R. Martin's Game of Thrones series. Cersei has a famous line in the first book of the series.

"When you play the game of thrones, you win or you die. There is no middle ground."

This quote pretty much sums up how many of my fellow employees perceived her. My boss was very concerned that our "Cersei" would hijack any attempt by the training department to build upon her work and treat it as an attack on her credibility and position. "Cersei" had reacted this way when other process improvement initiatives strayed into her sphere of influence and the people driving those initiatives rarely fared well.

In The End, The Plan Died On The Vine

The real shame here is that this kind of scenario is all too common in major enterprises. Wasteful spending on programs that produce little to no value is typical. In my opinion, these boondoggles for the overly ambitious do more to undermine the credibility of the training profession than any other cause. My boss regretfully shelved our plan, mostly to protect my career since I was so new to the company. I remember feeling at the time that the situation was too much like Game of Thrones, and I kept wondering, "Is winter coming?"

Ironically, shortly after I went to work for this company, there were multiple rounds of layoffs. One was so politically contentious that many people dubbed it "The Red Wedding," after the scene in HBO's production of Game of Thrones in which allies of House Stark murdered Robb Stark, his bride, his mother, and his entire retinue in exchange for political favor from House Lannister.

I left the company in less than a year, and my boss left about six months after that.


I have a mantra I try to instill in my clients as much as possible when it comes time to talk about learning analytics and the importance of defining your evaluation plan.

"Those who can't manage moralize. Those who can't produce politicize. Those who can't measure manipulate."

Well-developed and well-managed evaluations have real power to shine a light into the dark corners of your business and empower you to clean up messes you didn't know were there. They also help expose the creepy crawlies that are poisoning your organization and driving away your best people.

Here's my advice to organizational leaders considering making a significant investment in training.

  • Don't let the person who knows the most about a topic develop the training; hire a training expert to help you.

  • Grill your training expert about how to measure the impact the training will have on your business, and make sure you listen to and heed their advice.

  • If your need for training is so immediate that you can't take the time to build a data collection system, at least plan to run focus groups with workers and then with managers to hear what each group has to say about the training.

  • Above all, keep your training focused on what people need to do on the job, not on what they need to know or the attitudes you expect; there are better solutions to address those needs.

All is not doom and gloom when it comes to training evaluations. I plan to publish a three-part series on this topic. Next up in the series will be a case study of the United States Coast Guard and how they develop curriculum architectures that ensure training can be evaluated for impact after trainees complete their basic training.