AN ALTERNATIVE STRATEGY FOR IMPROVING PERFORMANCE APPRAISAL COMMUNICATIONS

William Kilbourne
Paul Reed
Sam Houston State University
Huntsville, Texas

Introduction

Managers at all levels of the organization are continually faced with the difficult task of making decisions concerning the use of human resources. These strategic decisions often involve making trade-offs between alternatives, determining value judgments, and ascertaining the preferences of superiors and/or subordinates. Managers responding to such problems have been moving away from subjective trait evaluation and toward objective assessment of the degree to which goals have been met. While improvements have been made, one problem persists: employees frequently do not understand exactly what their superior desires in terms of performance. If a superior fails to communicate the "real" performance criteria, the value of any appraisal system will be diminished, increasing potential ambiguity and conflict. Bernardin and Beatty [1] suggest that "just good communication" can be effective in reducing such conflict. It is to the end of better communication of performance appraisal criteria that this article is directed.

Performance appraisal strategies have some elements in common with decision strategies in other areas, such as marketing, in that they frequently require simultaneous ranking of multiple criteria. Marketing researchers have, for some time now, employed techniques for evaluating such judgment processes. One technique that has been employed by marketing researchers and appears to have great potential for human resource management as well is conjoint measurement analysis. Exploratory application of this technique has shown real promise in such areas as job choice, selection of applicants, and benefit preference [7].
Reed and Johnson [6] found that conjoint measurement offered a potentially valuable tool for quantifying the relative values a supervisor attaches to various levels of performance appraisal criteria. With such information available, a subordinate could, they concluded, be better able to evaluate his/her strengths in relation to what could be considered desirable performance. The subordinate then would be in a position to make decisions concerning how best to achieve optimal performance. Optimal performance in this sense refers to achievement of the maximum performance rating available, given the subordinate's energy and ability constraints.

Journal of Business Strategies, Volume 6, Number 1 (Fall 1989)

This study extends the Reed and Johnson study and attempts to further refine and validate the use of conjoint measurement as a strategic option to assist in improving performance appraisal communication.

Performance appraisal strategies usually include attempts at measuring how successfully the various tasks which make up a job are performed. Ideally, every subordinate should carry out each task in an optimal manner, though practically this seldom happens. Most subordinates excel at some tasks and not at others. Assuming an employee is interested in scoring high on his/her performance evaluation and being eligible for available rewards, the problem becomes one of effectively allocating his/her energies among the various tasks. This requires the subordinate to make trade-offs. For instance, more hours spent on task X may mean he/she will receive a higher score on a certain performance criterion, but task Y might have to be neglected, with a possibly lower evaluation on another criterion. What should the optimum allocation of time and energy be in order to gain the most value among the various performance criteria?
Going to the supervisor for assistance may prove frustrating because of his/her inability to provide definitive guidance [2]. This may be particularly true if there are many tasks and associated performance evaluation criteria that go together to form a job. Obviously, a subordinate's decisions on trade-offs could be greatly enhanced if the supervisor's attitude concerning the importance of the various appraisal criteria were known.

Conjoint Analysis

The difficulty raised by these circumstances is not unresolvable. While infrequently used in human resource management applications, there are measurement strategies used in psychology and marketing research for assessing human judgment. One method which has great potential in the situation outlined here is conjoint analysis [4]. Conjoint refers to measuring relative values of things considered jointly that might be unmeasurable taken individually, thus permitting the development of a set of relative values [5]. A typical application in marketing might involve analyzing the relative importance to potential consumers of various options on an automobile that might be offered in "special option packages" at various prices. The objective might be to determine what package of options results in the most positive buyer response given the additional cost.

A manager's situation is analogous to that of a potential buyer making choices in that the relative desirability of various subordinate work performance components is compared. In applying conjoint analysis, the manager is asked to rank the possible packages of employee performance components. Each package consists of varying combinations of performance levels associated with the specific evaluation criteria used in the organization's performance appraisal system. Conjoint analysis of the ranking of these packages is decompositional in nature, as it attempts to identify consistencies in the rater's judgment of desirability.
Here, "decompositional" refers to an analysis of the data to determine the impact each variable has on the ranking arrived at by the rater. The algorithm first identifies the variable the supervisor implicitly has chosen as the most influential in a subordinate's rating, then the second most influential, and so on. Weights are then assigned to each variable so that, when summed, the result is a ranking of performance packages that approximates the manager's ordering [2].

Methodology

Unlike the Reed and Johnson study, the approach taken in the present study to demonstrate the value of conjoint analysis was to apply it to an actual appraisal situation. By comparing the actual results to those that would be predicted on the basis of conjoint analysis, the value of the technique in effectively communicating the appraiser's true weighting of performance criteria becomes apparent.

Performance Criteria

Since most performance appraisal strategies, as they stand, do not conform to the requirements of conjoint analysis, some minor modifications may be necessary in scoring. This was true in the present study, which uses actual subordinate evaluations in an organizational setting as the demonstration vehicle. The evaluation system and its modifications are described here. There were three criteria evaluated in the present study: quality of work, quantity of work, and interpersonal relations. While the system as constructed is relatively formal, the evaluator still has latitude in effectively weighting the importance of the different measures and, thereby, the final overall evaluation of the ratee. Each of the areas was scaled to an ordinal ranking of excellent, good, and poor by splitting at the 33rd and 67th percentiles. This reduced the actual interval-scaled ratings, in a straightforward way, to the ordinal data required by MONANOVA, the specific type of conjoint analysis to be used.
Each individual's rating was then reassigned according to this system. Thus, for each of the three evaluation areas (quality, quantity, and relations), there were three possible evaluations: excellent (E), good (G), and poor (P).

Analysis

In the next phase of the study, the evaluator's "true weights," or utility scores, for the various performance criteria were determined using MONANOVA. To accomplish this, the three evaluation criteria were each defined in terms of the three levels of performance. Criteria, or performance standards, for determining the appropriate level of performance for each of the three evaluative areas were provided to the rater. A set of cards was then constructed such that each card contained one combination of varying levels of the three performance criteria. Further, there was one card for every possible combination, or in this case, 27 cards. The evaluator's task was to rank order the set of cards from lowest to highest preference. In the event that there are too many combinations for the task to be feasible, orthogonal arrays [3] can be developed to effectively reduce the number of combinations to be evaluated, though this was not considered necessary in the present study. The results of this procedure are presented in Table 1.

Table 1
Rank Ordering of Criteria-Evaluation Combinations

                         Quantity
              Excellent      Good         Poor
Quality       E   G   P    E   G   P    E   G   P
Relations
  Excellent  27  24  22   21  19  12    9   6   3
  Good       26  23  15   20  16  11    8   5   2
  Poor       25  17  13   18  14  10    7   4   1

(E = Excellent, G = Good, P = Poor)

Rank Orderings

Before discussing the MONANOVA results, several things about Table 1 should be noted. First, the evaluator preferred a combination of poor quantity, poor quality, and poor relations (rank = 1) least. The most preferred performance combination was excellent quantity, excellent quality, and excellent relations (rank = 27).
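The full-factorial card construction described above is easy to sketch. The fragment below (a Python illustration, not part of the original study, which used physical cards; the variable names are ours) enumerates one card per combination of the three criteria at three levels each:

```python
from itertools import product

criteria = ["quantity", "quality", "relations"]
levels = ["E", "G", "P"]  # excellent, good, poor

# A full factorial design: one "card" per combination of levels
# across the three criteria, yielding 3 x 3 x 3 = 27 cards.
cards = [dict(zip(criteria, combo)) for combo in product(levels, repeat=3)]

print(len(cards))  # 27
print(cards[0])    # {'quantity': 'E', 'quality': 'E', 'relations': 'E'}
```

Because 27 cards remain a feasible ranking task, no orthogonal-array reduction of this set was needed here.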
Next, the evaluator was willing to trade off first relations and then quality in order to retain excellent quantity as long as possible, i.e., until his 7th most preferred combination (rank = 21). This trade-off, of relations first and then quality, is particularly noticeable in the last nine combinations, i.e., rank = 9 through rank = 1.

MONANOVA Results

On the basis of the set of responses noted in Table 1, MONANOVA was used to determine the appropriate utility values for each evaluative level of each performance criterion. The results of this analysis, using Kruskal's PC-MDS version of MONANOVA [8], are presented in Table 2. From this table, a second set of performance evaluations can be determined for each of the ratees, i.e., the actual set and a new set using the rater's utility values applied to the ordinal ranking created previously.

Table 2
Derived Utilities for Each Level of Each Criterion

             Excellent    Good      Poor
Quantity       1.492      0.835    -2.327
Quality        0.506      0.029    -0.534
Relations      0.246      0.000    -0.246

To find the total utility associated with the evaluator's most desirable combination (rank = 27) of the performance criteria in Table 1, the individual utilities can be read from Table 2 and summed. By adding the utility scores for the Excellent ratings on each of the criteria (quantity, quality, and relations), the total utility for the most preferred combination is seen to be 2.244. Similarly, the least preferred combination can be found by summing the Poor utility values for each of the performance criteria, yielding a utility score of -3.107. The evaluator's preference for any other combination can be found in a similar fashion.
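The summation just described is a plain additive part-worth model. A brief sketch (Python; the function and variable names are ours, not from the paper) using the Table 2 utilities:

```python
# Part-worth utilities from Table 2 (derived by MONANOVA).
utilities = {
    "quantity":  {"E": 1.492, "G": 0.835, "P": -2.327},
    "quality":   {"E": 0.506, "G": 0.029, "P": -0.534},
    "relations": {"E": 0.246, "G": 0.000, "P": -0.246},
}

def total_utility(combo):
    """Sum the part-worths for one performance-level combination."""
    return sum(utilities[criterion][level] for criterion, level in combo.items())

best  = {"quantity": "E", "quality": "E", "relations": "E"}
worst = {"quantity": "P", "quality": "P", "relations": "P"}
print(round(total_utility(best), 3))   # 2.244, the most preferred combination
print(round(total_utility(worst), 3))  # -3.107, the least preferred combination
```

Any of the 27 combinations can be scored the same way, which is what makes the derived utilities directly usable by a subordinate planning trade-offs.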
A test of consistency of the rankings based on the utilities generated by MONANOVA indicates that the statistical ranking generated was significantly correlated (Spearman's rho = .98) with the respondent's actual rankings shown in Table 1. A graph of the derived utility functions is shown in Figure 1. It clearly portrays the almost overwhelming importance of quantity to the evaluator. A poor quantity evaluation in effect negates the positive effects of excellent quality and relations in the total utility structure of the evaluator. Conversely, excellent quantity more than makes up for poor quality and relations.

Figure 1
Criterion Utility Values
[Graph of the derived utility values (vertical axis, roughly -2.5 to 2) for quantity, quality, and relations across the evaluation levels Excellent, Good, and Poor.]

Strategy Correspondence

It should be recalled that in the original appraisal strategy, the actual evaluation was made on the basis of interval-scaled data that were subjectively interpreted by the evaluator, with no clear indication of how the different areas would affect the total. Furthermore, this study was conducted two months after, and unrelated to, the actual evaluation. Hypothetically, the evaluator could have used the same weights indicated by MONANOVA, but there is no assurance of this. A comparison of the two methods is presented in Table 3.

Table 3
Actual Performance Ratings vs. MONANOVA Ratings

Employee   Actual Rating   Rank   MONANOVA Rating   Rank
   1           5.96          5        -2.067          5
   2           6.91          1         2.244          1
   3           6.48          3         1.275          3
   4           2.87          9        -3.107        8-9 (T)
   5           5.48          7        -2.298        6-7 (T)
   6           6.20          4        -0.055          4
   7           5.58          6        -2.298        6-7 (T)
   8           5.05          8        -3.107        8-9 (T)
   9           6.85          2         1.998          2

(T = tie)

As can be seen, there is a very close correspondence between the two rankings.
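The consistency test reported above can be reproduced, at least approximately, from Tables 1 and 2 alone: sum the part-worths for each of the 27 combinations, rank those sums, and correlate that ranking with the evaluator's observed ranking. A sketch follows (Python; the paper does not report its exact tie-handling or MONANOVA internals, so the coefficient obtained this way may differ slightly from the published .98):

```python
from itertools import product

# Part-worth utilities from Table 2.
part_worths = {
    "quantity":  {"E": 1.492, "G": 0.835, "P": -2.327},
    "quality":   {"E": 0.506, "G": 0.029, "P": -0.534},
    "relations": {"E": 0.246, "G": 0.000, "P": -0.246},
}

# Observed ranks from Table 1, keyed by (quantity, quality, relations).
observed = {
    ("E","E","E"): 27, ("E","E","G"): 26, ("E","E","P"): 25,
    ("E","G","E"): 24, ("E","G","G"): 23, ("E","G","P"): 17,
    ("E","P","E"): 22, ("E","P","G"): 15, ("E","P","P"): 13,
    ("G","E","E"): 21, ("G","E","G"): 20, ("G","E","P"): 18,
    ("G","G","E"): 19, ("G","G","G"): 16, ("G","G","P"): 14,
    ("G","P","E"): 12, ("G","P","G"): 11, ("G","P","P"): 10,
    ("P","E","E"):  9, ("P","E","G"):  8, ("P","E","P"):  7,
    ("P","G","E"):  6, ("P","G","G"):  5, ("P","G","P"):  4,
    ("P","P","E"):  3, ("P","P","G"):  2, ("P","P","P"):  1,
}

def utility(combo):
    qn, ql, rel = combo
    return (part_worths["quantity"][qn]
            + part_worths["quality"][ql]
            + part_worths["relations"][rel])

combos = list(product("EGP", repeat=3))
# Rank 1 = lowest total utility, rank 27 = highest, matching Table 1's convention.
by_utility = sorted(combos, key=utility)
predicted = {c: rank for rank, c in enumerate(by_utility, start=1)}

# Spearman's rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)); no ties occur here.
n = len(combos)
d2 = sum((predicted[c] - observed[c]) ** 2 for c in combos)
rho = 1 - 6 * d2 / (n * (n * n - 1))
print(round(rho, 2))  # 0.99
```

The utility-based ranking thus recovers the evaluator's card ordering almost perfectly, which is the sense in which the derived weights "fit" his judgments.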
The only deviations are the ties for sixth and seventh place and for eighth and ninth place. It should be noted that the individuals in these ties held sixth and seventh place and eighth and ninth place in the original ranking. Consequently, no individual was reranked to a lower or a higher position. This indicates the evaluator was very consistent in making the actual evaluations and then later ranking the criteria combinations that were used to determine the utilities calculated by MONANOVA.

Debriefing Session Results

Many subordinates of the evaluator were surprised at the results of this study. The rater's emphasis on quantity had been underestimated. Quality was found to carry less weight than expected and relations more. A few ratees stated that they planned to change their relative emphasis on performance criteria. The chairperson concurred with the results concerning his evaluation of quantity and, like the subordinates, was somewhat surprised at his weighting of quality and relations. After a few days' thought, he indicated that the MONANOVA utilities for these latter two criteria were probably correct.

Discussion

If one can assume that these various utility values are made available to employees at the beginning of an evaluation period, then each can determine the precise incremental benefit to be derived from expending more time and effort in one area and less in another. For example, if an individual desired to move from good quantity (0.835) to excellent quantity (1.492), he/she could analyze nine different combinations of quality and relations in order to find the one that permitted the most available time and energy with the least loss of utility. For those utility values to be of any real use, however, they should correspond closely to how the evaluator actually scored those combinations in the most recent annual evaluation period.
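The incremental reasoning in that example can be made concrete. Assuming, purely as an illustration not taken from the paper, an employee who currently holds excellent quality and excellent relations, the sketch below computes the net utility change from each of the nine quality/relations combinations after upgrading quantity from good to excellent:

```python
from itertools import product

# Part-worths for the two criteria being traded off, from Table 2.
part_worths = {
    "quality":   {"E": 0.506, "G": 0.029, "P": -0.534},
    "relations": {"E": 0.246, "G": 0.000, "P": -0.246},
}

# Utility gained by moving quantity from good (0.835) to excellent (1.492).
quantity_gain = 1.492 - 0.835  # 0.657

# Net change for each quality/relations combination, relative to a
# hypothetical starting point of excellent quality and excellent relations.
nets = {}
for ql, rel in product("EGP", repeat=2):
    cost = ((part_worths["quality"]["E"] - part_worths["quality"][ql])
            + (part_worths["relations"]["E"] - part_worths["relations"][rel]))
    nets[(ql, rel)] = quantity_gain - cost
    print(f"quality={ql}, relations={rel}: net utility change {nets[(ql, rel)]:+.3f}")
```

Under these numbers, only four of the nine combinations yield a net gain (keeping quality excellent, or dropping quality one step while relations stay excellent), so the arithmetic, rather than guesswork, tells this hypothetical employee which trade-offs would pay.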
Given the close correspondence between the actual and MONANOVA-based ratings, the informed subordinate of this evaluator should be able, at least in the short run, to make knowledgeable decisions concerning the selection of trade-offs from among the various criteria. Because of these informed decisions, the ratee should then stand a better chance of receiving a higher performance rating by adopting a different performance strategy given his/her energy and ability constraints. In brief, the findings of this study validate the earlier research by Reed and Johnson and indicate that, as a performance evaluation strategy, conjoint analysis offers a unique method for communicating performance appraisal criteria. Whether the ratee does in fact receive a higher appraisal because of the use of MONANOVA-based utilities is open to question. This question can best be answered by conducting a longitudinal study of multiple evaluators and ratees over at least a two-year period.

References

1. Bernardin, J. and Beatty, R. Performance Appraisal: Assessing Human Behavior at Work. Boston: Kent Publishing (1984).

2. Churchill, G. Marketing Research: Methodological Foundations. Chicago: Dryden Press (1983).

3. Green, P. "On the Design of Choice Experiments Involving Multifactor Alternatives." Journal of Consumer Research, Vol. 1 (1974), pp. 61-68.

4. Green, P. and V. Srinivasan. "Conjoint Analysis in Consumer Research: Issues and Outlook." Journal of Consumer Research, Vol. 5 (1978), pp. 103-122.

5. Johnson, R. "Trade Off Analysis of Consumer Values." Journal of Marketing Research, Vol. 11 (1974), p. 121.

6. Reed, P. and H. Johnson. "Quantifying Trade-offs Among Multiple Appraisal Criteria." In Proceedings, Human Resource Management and Organizational Behavior (1987).

7. Sukla, P. and J. Bruno.
"A Review of Conjoint Measurement Analysis Applications for Human Resource Management." In Proceedings, Human Resource Management and Organizational Behavior (1984).

8. Smith, M. PC-MDS Multidimensional Scaling and Conjoint Analysis. Provo, UT: BYU Press (1987), pp. 6-1 to 6-19.