HRD’s next “top model”: programs and evaluation frameworks

October 10, 2009

Werner and DeSimone (2005) state that the effectiveness of an HRD program is measured by how well the desired goal is achieved, and this begins with measuring where performance is now and identifying where performance should go. From these pre- and post-measures, metrics can be developed and expectations for achievement established. The process of evaluation begins with an assessment of the organization’s needs, which facilitates formulation of the program’s objectives. Once interventions are designed, they are implemented and subsequently evaluated. Evaluation is “the systematic collection of descriptive and judgmental information necessary to make effective training decisions related to the selection, adoption, value and modification of various instructional objectives” (Werner & DeSimone, 2005, p. 187).
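The pre/post measurement logic can be sketched as a simple gain calculation. This is an illustrative sketch only (the scores and function names below are hypothetical, not drawn from Werner & DeSimone):

```python
# Illustrative sketch: a simple gain metric computed from hypothetical
# pre- and post-training assessment scores.

def mean(scores):
    """Arithmetic mean of a list of scores."""
    return sum(scores) / len(scores)

def training_gain(pre_scores, post_scores):
    """Return the mean improvement from the pre-measure to the post-measure."""
    return mean(post_scores) - mean(pre_scores)

pre = [55, 60, 58, 62]   # hypothetical baseline assessment scores
post = [70, 72, 68, 74]  # hypothetical post-training scores

print(training_gain(pre, post))  # prints 12.25 (mean gain in score points)
```

In practice the gain would be expressed in whatever metric the needs assessment established (error rates, sales figures, assessment scores), and compared against the expectation for achievement set before the intervention.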

Essentially, the process of evaluation asks, “Are we training the right people with the right materials?” and “Are the delivery methods appropriate and timely?” To this end, both subjective and objective data are necessary to determine whether the program is answering the organization’s needs. Data inform the practice of education and development as well as decisions about how best to leverage the organization’s human resources. Data can also identify strengths and weaknesses of current development practices and assign value for determining a cost-benefit ratio (not an easy task). Different types of development may benefit certain types of workers more than others, and data can assist here as well. Finally, data regarding development practices can be utilized in marketing the corporation, both for attractiveness to potential recruits and for retaining current employees.

The difficulty in assessing HRD programs is that return on investment (ROI) is challenging to express in dollar amounts, which poses real obstacles for practitioners. The values used in ROI calculations can differ greatly depending on the type of organization and the type of work for which the ROI is being calculated. Regardless of the difficulty of calculating a particular program’s worth, the need is great: ROI provides both a means of demonstrating the value of HRD and a mechanism for accountability. Further, a value-added approach accompanies such metrics and puts HRD on par with other high-functioning areas of the organization.
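The basic ROI arithmetic itself is straightforward; the hard part, as noted above, is assigning the dollar figures. A minimal sketch of the standard ROI formula (net program benefits over program costs), using hypothetical figures:

```python
# Illustrative sketch of the standard ROI formula commonly used in HRD
# evaluation; the benefit and cost figures below are hypothetical.

def roi_percent(program_benefits, program_costs):
    """ROI (%) = (benefits - costs) / costs * 100."""
    return (program_benefits - program_costs) / program_costs * 100

# A hypothetical program costing $100,000 that yields $150,000 in
# measurable benefits returns 50% on the investment.
print(roi_percent(150_000, 100_000))  # prints 50.0
```

The formula is trivial; the practitioner’s real work lies in defending the benefit figure, which is exactly the obstacle described above.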

The most popular framework for evaluation is the Kirkpatrick model, which focuses on reaction (how trainees respond), learning (did they learn what they were supposed to?), job behavior (will the information be utilized on the job?), and results (did the training improve the organization’s effectiveness?). One issue with the Kirkpatrick model is that many organizations do not evaluate at all four levels, and the focus is on post-training outcomes rather than inter-stage improvements. Additionally, scholars have debated the model, arguing that it provides no measurable elements and rests not on scientific research but on loosely coupled, unsubstantiated theory (Holton, 1996). Nonetheless, Kirkpatrick provided a mechanism for the emergence of other HRD evaluation models, such as the CIPP (context, input, process, product) model; the Brinkerhoff model (similar to Kirkpatrick’s); the Kraiger, Ford and Salas model (cognitive, skill-based, and affective outcomes); and Holton’s framework (secondary influences, motivation elements, environmental elements, outcomes, and ability/enabling elements). Phillips’ model is similar to Kirkpatrick’s but adds ROI as a component, alongside reaction and planned action, learning, applied learning on the job, and business results.

As is true of most HRD dimensions, program design is never going to be a “one size fits all” proposition. The gap-analysis questions (where the organization has been, where it is now, and where it desires to be) must be answered in order for the correct program design to be chosen. Understanding the people for whom the program is designed is critical to its success, and an amalgamation of two or more designs may be appropriate given the needs of the organization.

Holton, E. F., III. (1996). The flawed four-level evaluation model. Human Resource Development Quarterly, 7(1), 5–21.
Kirkpatrick, D. (1998). Evaluating training programs (2nd ed.). San Francisco, CA: Berrett-Koehler.
Phillips, J. (1997). Handbook of training evaluation and measurement methods (3rd ed.). Houston, TX: Butterworth-Heinemann.
Werner, J., & DeSimone, R. (2005). Human resource development. Mason, OH: South-Western Cengage Learning.
