Hard Measures versus Soft Measures
We explore a common training evaluator's dilemma: whether to use hard or soft data in evaluating a training program.
Evaluating the effectiveness of the training programs you deliver helps you and your program sponsor determine whether the program was worth the effort. Evaluation also helps you identify areas for future improvement. Worthwhile evaluation starts with the setting of organizational goals for your program. At the outset, agreeing on measurable goals with your program sponsor sets the scene for a smooth evaluation exercise later on.
Measurable goals set the benchmark from which you can much more easily gauge the extent to which they were achieved. Using the SMART principle (Specific, Measurable, Achievable, Relevant, Time-bound) for setting goals is an excellent way to ensure that the goals you agree on are relevant and measurable against an objective standard.
For your program evaluation report to earn credibility with your key stakeholders, the reported program results need to be verifiable by independent observers. So, whether the measurement is made by an external consultant, a representative front-line employee or a department manager, all should arrive at the same value for the outcome.
For many training programs, this requirement for objective measures is not too difficult to achieve. A sales training program may set a sales target for each sales executive of $50,000 worth of new sales by end of year. A quality engineering training program may set a goal of a 20% reduction in defects for the next quarter. Measures such as sales volume and defect rates are called "hard" measures because they can be assigned definite, objective numbers. These numbers are easily extracted from sales reports, inspection sheets, and so on.
For some training programs, though, no hard measures may appear applicable. Many trainers ask in desperation, "What are the measurable outcomes from team building and leadership development programs?" Consider for a moment a mechanical repair shop implementing self-managed work teams. A Working in Teams training program is scheduled over a two-week period, with the program forming an integral part of this comprehensive change initiative. Isn't the ability to work in teams a "soft" skill, not amenable to hard measures?
The first step to developing hard measures for soft skills training is to get clear on what the program sponsor wants from the program. To get to the hard measures that might apply, ask the program sponsor and key stakeholders what business or organizational impact they expect from the program. The facilitative process for constructing an Impact Map is a useful approach for guiding the discussion and getting everyone on the same page.
In this case, the auto repair shop owners were clear on their intentions: improved customer focus and increased efficiency. Their plan was to achieve these gains through each self-managed team dealing directly with customers, scheduling its own work, managing its own performance issues, and selecting and recruiting new members.
Working with the training consultant, the owners agreed on the following measurable targets for the program, to be assessed after the first year of operation:
- improve customer loyalty by 40%
- reduce overtime costs by 30%
- reduce average job turnover time by 20%
The data needed to calculate progress toward each of these targets is easily obtained from the company's existing service and payroll reports.
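To make the calculation concrete, here is a minimal sketch of how progress toward a percentage-reduction target could be checked. The figures are hypothetical; the real numbers would come from the shop's service and payroll reports.

```python
def percent_change(baseline, current):
    """Percentage change from baseline to current (negative means a reduction)."""
    return (current - baseline) / baseline * 100

# Hypothetical figures drawn from payroll reports
baseline_overtime = 12000.0  # monthly overtime cost before the program
current_overtime = 8400.0    # monthly overtime cost after the first year

change = percent_change(baseline_overtime, current_overtime)
print(f"Overtime costs changed by {change:.0f}%")  # prints: Overtime costs changed by -30%
```

The same calculation applies to the customer loyalty and job turnover targets, with each baseline captured before the program begins.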
Other programs for which it may seem difficult to construct hard measures are compliance-type programs. These include health and safety, employee discrimination, financial auditing, customer privacy protection, and so on. One type of hard measure that works well for these kinds of programs is an avoidance measure, which you can readily turn into a business target. Examples here include:
- reduce the annual number of critical safety incidents to zero by 2016
- reduce the number of employee complaints by 30% next quarter
- reduce the number of non-conformances by 70% for the next audit
- reduce the monthly incidence of customer privacy complaints to less than 10 by 2016
Another option is to target reduced expenses resulting from the new way of working. This could include reduced:
- insurance premiums
- litigation costs
- external auditing costs
- employee turnover costs
- overtime costs
- advertising costs
and so on.
Avoidance and expense goals can be combined using complementary leading and lagging measures, as in this example. The partners in an accounting firm wanted to roll out harassment prevention training to all employees. They decided to target a reduction in the number of employee complaints (including harassment complaints) for each department. This became their leading (process) measure. At the same time, the firm also set up a combined legal expenses measure that aggregated all employee litigation expenses. Because legal expenses trail employee complaints, this latter measure became the firm's lagging (outcome) measure.
What if you are evaluating a training program for which defining such hard targets is not an option? The program sponsor may not want to, or may not know how to, tie the training program to measurable business or organizational outcomes. This reluctance may be for political reasons or simply because the intended focus of the program does not directly bear on business results.
In this case, soft measures that focus on participant attitudes and behaviors are appropriate. Soft measures are often perceived as subjective: more influenced by personal opinion and less readily verifiable by an independent observer. A common misperception is that the impact of a person's "soft" skills, such as skills in communication, creativity, conflict resolution and resilience, can't be measured. The message here is that even without hard data on business results, the application of a soft skill can be measured credibly with soft data.
Let's look at team building and communication skills programs. Outcomes gauged using soft measures for these kinds of programs may include the extent to which program participants:
- communicate vision
- encourage participation
- display empathy
- give constructive feedback
and so on.
You typically measure these behavioral outcomes via 360° questionnaires. This kind of multi-rater instrument has the distinct advantage of giving you a multiple-stakeholder view that is not constrained by a single, more subjective perspective. If you are training participants in soft skills, the good news is that these skills can be gauged by measuring the extent of behavior change in the training participants. This measurement of behavior change is done indirectly via the questionnaire.
If you are evaluating the outcomes of a leadership training program, for example, stakeholders you could survey include the manager's manager and the manager's direct reports and peers. For a customer service representative undergoing customer service training, you could include the person's manager, their customers and their peers.
Targets using soft measures are typically expressed in the form of a desired improvement in survey scores. The target is usually stated as a certain point improvement or a certain percentage improvement in the average or median score. The measure can encompass an entire survey, a specific section of the survey or select questions on the survey.
Following are two examples of this kind of target.
- improve average employee satisfaction score on employee survey by 30% by end of next year
- increase median customer satisfaction score to 4 on Customer Care survey by end of 2015
Soft measures used in targets such as these can form the basis for leading measures that complement and underpin hard measures. A case in point is the second example: the measure of median customer satisfaction. This measure was used during the evaluation of the Working in Teams training program mentioned above. It supported the hard measure used in the target:
- improve customer loyalty by 40%
How did this work? Very satisfied customers tend to become loyal customers. So, to gauge how the new self-managed team structure was impacting customers day to day, the receptionist gave each customer a survey as they picked up their car. Questions on the survey were designed to determine the extent to which each customer's needs were satisfied.
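Checking a median-score target of this kind is a one-line calculation. The sketch below uses hypothetical 5-point survey scores; the real scores would come from the completed Customer Care surveys.

```python
from statistics import median

# Hypothetical 5-point satisfaction scores collected at car pick-up
scores = [4, 5, 3, 4, 4, 2, 5, 4, 3, 5]

target = 4  # agreed target for the median Customer Care score
current_median = median(scores)
print(f"Median satisfaction: {current_median} (target met: {current_median >= target})")
```

Because the median resists distortion by a handful of extreme responses, it is often a steadier target than the average for small survey samples.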
Measure Attitudes or Behavior?
When constructing questionnaires for evaluating soft skills training programs, some trainers ponder whether to include questions related to training participants' attitudes or their behaviors. Including questions that gauge participant attitudes seems natural. Having a genuinely empathetic attitude, for example, is important for listening actively. An inclusive attitude toward other nationalities and traditions is a precursor to acting impartially.
The principal aim of soft skills training, though, is not to change participants' attitudes and perceptions. The prime objective is to change participants' behaviors. In many cases, trainers work to first change participants' attitudes and perceptions as a way of changing their behaviors.
In some cases, though, trainers can short-circuit the step of changing attitudes and perceptions and go straight to influencing behavior. For example, a company bleeding from excessive litigation costs following a string of harassment complaints focused its training on the consequences of harassing behavior. The training highlighted the company's new harassment policy and procedures and walked participants through what happens when a participant transgresses the policy (counseling and then termination).
Of course, the policy and written procedures were backed up by supporting systems that allowed follow-through on complaints. The company reasoned that the direct, punitive strategy would achieve a more dramatic reduction in harassment complaints in a shorter period of time. In this case, with harassing behaviors diminishing quickly, genuinely respectful attitudes followed suit over time.
Once we recognize that the principal aim of training is to change participants' behaviors, we can appreciate that evaluations need to concentrate on measuring that change. People's behavior is also much easier to measure reliably than internal attitudes. So, for example, for a team building training program evaluation, some of the following factors could be measured:
- punctuality at team meetings
- proportion of people contributing at team meetings
- number of entries in the changeover log
- number of newly developed ground rules
- number of incident reports submitted
The results could then be compared with the data collected before the training began. This approach will yield much more insightful and actionable results than sending out a questionnaire asking whether the team building training improved team bonding and cooperation.
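A before-and-after comparison of these factors can be sketched as follows. The factor names and counts are hypothetical; real figures would come from meeting records and the team's logs.

```python
# Hypothetical baseline and post-training measurements for one team
before = {"punctual arrivals (%)": 60, "members contributing (%)": 40, "new ground rules": 2}
after = {"punctual arrivals (%)": 85, "members contributing (%)": 70, "new ground rules": 7}

# Report the change in each behavioral factor since the baseline
for factor, baseline in before.items():
    delta = after[factor] - baseline
    print(f"{factor}: {baseline} -> {after[factor]} ({delta:+})")
```

Capturing the baseline before training begins is essential here: without it, there is no credible way to attribute any of the movement to the program.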
With proper thought and planning, you can construct a useful and complementary mix of both hard and soft measures for your training program. As we have seen, designing a training program in soft skills is no barrier to developing hard measures for the evaluation phase. The hard measures comprise the lagging indicators for the final business outcomes expected from the program. These can then be complemented with the softer leading measures. This soft data reveals the extent of participant behavior change that can be used to predict business outcomes.
The key to a credible program evaluation is deciding, up front, what business outcomes are wanted from the program. If your program sponsor has not presented you with clearly articulated goals, work with them to flesh out measurable objectives. And, as a corollary, you will need to decide at the start the participant behaviors required to achieve those outcomes.
If you cannot say what you expect as the outcome of your training program, in measurable terms, you ought to think twice about devoting resources to it. The funds may be better used elsewhere. Putting the effort into devising measurable goals right at the start provides both a valuable reality check on the utility of the proposed program and sets a solid grounding for the post-program evaluation.
Leslie Allan's comprehensive toolkit can help you at all stages of your training evaluation exercise. From initial planning, data selection and analysis to reporting results, the guide has over 20 customizable tools and templates to make your evaluation task as easy as possible.
Plus, you will learn the pros and cons of the various evaluation methods and how to isolate the impact of non-training factors on performance results. The toolkit contains everything you need to undertake a credible evaluation exercise in one volume.