Industrial Psychology - Unit 4.6

Q.8. What are errors in Performance Appraisal? What do you mean by Rating and Ranking systems? (AKTU 2010-11)
Ans. Errors in Performance Appraisal: -
Guilford (1954) classified the different kinds of constant errors which are apt to occur in the rating process and suggested certain precautions which can be taken. Constant errors are those which result from some systematic bias on the part of the rater; they are usually somewhat easier to cope with than rater errors which are nonsystematic or random.
Error Of Leniency: -
One of the major problems is to equate different raters for differences in their standards so that their ratings can be compared. Some raters might best be described as being generally “easy” or lenient, while others may be classed as being “hard” or severe in their judgments or ratings. When a rater is overly severe he is said to be making an error of negative leniency, while easy raters make the error of positive leniency.
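One common statistical remedy, offered here as a sketch rather than a method taken from the text, is to standardize each rater's scores within rater; this removes a constant leniency or severity offset before ratings from different judges are pooled. The numbers below are invented for illustration.

```python
import statistics

# Hypothetical ratings of the same six employees on a 7-point scale.
ratings = {
    "lenient_rater": [6, 7, 6, 5, 7, 6],   # positive leniency: everyone high
    "severe_rater":  [3, 4, 2, 3, 4, 3],   # negative leniency: everyone low
}

# Standardizing within each rater removes the constant offset, so scores
# from raters with different personal standards become directly comparable.
for rater, xs in ratings.items():
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    print(rater, [round((x - mu) / sd, 2) for x in xs])
```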

The Halo Effect: -
The halo error is a tendency to let our assessment of an individual on one trait influence our evaluation of that person on other specific traits. Therefore, if we felt that Worker X was a top-notch employee in one respect, we might tend to rate him very high on all traits, even though he may be rather mediocre on some.
This is a very common type of error and is also one that is very difficult to correct; a simple statistical check for it is sketched after the list below. Symonds (1925) has suggested that it is most likely to occur with the following traits:
1. Traits not easily observed
2. Unfamiliar traits
3. Traits not easily defined
4. Traits involving interpersonal reactions
5. Character traits
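One classic check for halo in a set of completed ratings - offered as a rough sketch with made-up data, not a method described in the text above - is to inspect the intercorrelations among the rated traits: when the rater's overall impression bleeds into every trait, the trait-by-trait correlations come out uniformly and implausibly high.

```python
import numpy as np

# Hypothetical ratings: rows are ratees, columns are three distinct traits.
ratings = np.array([
    [6, 6, 7],
    [3, 3, 3],
    [5, 6, 5],
    [2, 2, 3],
    [7, 7, 6],
    [4, 5, 4],
])

# Uniformly high off-diagonal correlations between traits that ought to be
# fairly independent are one symptom of halo in the ratings.
print(np.corrcoef(ratings, rowvar=False).round(2))
```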
Logical Rating Errors: -
This error is quite similar to the halo error. A logical error occurs when a rater gives a person a high score on one specific trait simply because he feels the individual possesses a lot of a second specific trait, and he feels the two traits are logically related. A rater who tends to overestimate the true relationship between traits is likely to commit this rating error.
Contrast And Similarity Error: -
A contrast error refers to a general tendency on the part of a rater to judge others in a manner opposite from the way in which he perceives himself. If he perceives himself as being very honest, for example, his tendency would be to rate others slightly lower than normal on the “honesty” dimension. The opposite of a contrast error, which might be called a similarity error, is for the rater to rate other people in the same way he perceives himself.
Central Tendency Error: -
Some raters are reluctant to assign extreme judgments of either kind, and this reluctance results in their tending not to use the extreme scale scores on the rating instrument. This, in turn, results in a substantial change in the shape of the distribution of scores for that rater, as shown in the figure below. Notice that the dispersion (variability) of the judgments is much less for the rater making a central tendency error. This kind of error thus results in a restriction of range of the criterion scores - an artifact which can have an effect on subsequent validity coefficients. One of the better ways to avoid this error is to use the forced-distribution system.


The effect of a central tendency error upon the shape of a distribution of ratings
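The restriction-of-range effect described above can be illustrated with a small simulation - a sketch with invented numbers, not part of the original text. A rater who avoids the extreme categories squeezes everyone into the middle of the scale, which shrinks the dispersion of the ratings and attenuates their correlation (validity coefficient) with an outside criterion.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Simulated true performance and a criterion that correlates with it.
true_perf = rng.normal(0.0, 1.0, n)
criterion = 0.6 * true_perf + 0.8 * rng.normal(0.0, 1.0, n)

# An unbiased rater uses the whole 1-7 scale.
full_scale = np.round(np.interp(true_perf,
                                (true_perf.min(), true_perf.max()), (1, 7)))
# A central-tendency rater refuses the extremes and stays within 3-5.
central = np.clip(full_scale, 3, 5)

print("dispersion:", full_scale.std().round(2), "vs", central.std().round(2))
print("validity:  ",
      np.corrcoef(full_scale, criterion)[0, 1].round(2), "vs",
      np.corrcoef(central, criterion)[0, 1].round(2))
```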
Proximity Error: -
The last error we shall consider usually comes about from the way in which various items have been placed or ordered on the rating form. Sometimes referred to as an “order-effect,” this error illustrates the influence that surrounding items have on the rating one gives a person on a particular item.
The most common procedure for minimizing proximity error is to have several different forms of the rating scale, with the items in a different order on each form.
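Producing several forms of the same scale is mostly a clerical matter; the sketch below (item names invented for illustration) shows one way to generate alternate forms, each with its own item order, so that no item keeps the same neighbours on every form.

```python
import random

# Illustrative rating items; any real form would have its own.
items = ["quality of work", "dependability", "initiative",
         "cooperation", "judgment", "attendance"]

def make_forms(items, n_forms, seed=0):
    """Return n_forms copies of the item list, each independently shuffled."""
    rng = random.Random(seed)
    forms = []
    for _ in range(n_forms):
        order = list(items)
        rng.shuffle(order)
        forms.append(order)
    return forms

for i, form in enumerate(make_forms(items, 3), start=1):
    print(f"Form {i}:", ", ".join(form))
```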
Rating System: -
The mechanics of any rating scale system are quite simple. The task of the judge is to make a judgment concerning the degree to which the individual possesses, or is described by, a particular characteristic. While a rating scale may take many different forms, its one distinguishing characteristic is that the judge may give two individuals the same score - a feature not found in ranking methods.
Numerical rating scales: -
This form is probably the most popular and common type of rating scale. Such scales are often called “graphic scales.” An example is given below:
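The original illustration has not survived in this copy; a typical item on a numerical (graphic) scale looks something like the following, with the trait name and anchor wording purely illustrative:

Quality of work:
1 (poor)   2 (below average)   3 (average)   4 (above average)   5 (excellent)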

Standard Scale: -
Another type of rating scale involves the use of a set of standards or examples for comparison purposes. These are used in place of, or along with, the verbal anchor points found with the graphic scale. A standard scale designed to evaluate dependability might look like this:
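The original example is likewise missing here; an illustrative reconstruction (the anchor names are hypothetical) might read:

Dependability - check the standard this employee most resembles:
______ 5  As dependable as A. Brown, who never needs follow-up
______ 4
______ 3  About as dependable as C. Davis, who occasionally needs reminding
______ 2
______ 1  As dependable as E. Ford, who must be checked constantly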

Cumulative Points Scale (Check List): -
Many rating scales require the judge to check or indicate which of a number of statements, adjectives, or attributes are descriptive of the person being rated. For example, the rating form shown below is representative of an adjective check list.
Check only those adjectives which describe the person being rated.
______friendly       ______tenacious       ______selfish
______eager          ______willing         ______radical
______withdrawn      ______cruel           ______greedy
______aggressive     ______stingy          ______stubborn
______spoiled        ______defiant         ______helpful
______happy          ______conservative    ______quiet
A typical descriptive adjective check list
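Scoring such a check list as a cumulative points scale is straightforward: each checked adjective contributes a weight, and the weights are summed. The weights below are invented for illustration - a real instrument would derive them empirically.

```python
# Hypothetical weights: +1 for favorable adjectives, -1 for unfavorable,
# 0 for neutral. A real scale would set these from empirical keying.
weights = {
    "friendly": 1, "eager": 1, "willing": 1, "helpful": 1, "happy": 1,
    "tenacious": 1, "quiet": 0, "conservative": 0, "radical": 0,
    "withdrawn": -1, "cruel": -1, "greedy": -1, "selfish": -1,
    "aggressive": -1, "stingy": -1, "stubborn": -1, "spoiled": -1,
    "defiant": -1,
}

def cumulative_points(checked):
    """Sum the weights of every adjective the rater checked."""
    return sum(weights[adj] for adj in checked)

print(cumulative_points({"friendly", "eager", "stubborn"}))  # 1 + 1 - 1 = 1
```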
Critical Incident Check List: -
The critical incident technique is a procedure developed by Flanagan (1954). As normally used, it can probably best be described as a check list rating procedure. However, it is also sufficiently different in its development that it deserves separate mention. The method involves three distinct steps:
1. Collecting critical incidents
2. Scaling the incidents
3. Constructing the check list scale
Ranking System: -
A characteristic of a rating system is that it permits two or more individuals to have the same rating or scale value; a ranking system does not. Simple ranking requires that the judge order the individuals from highest to lowest. Thus, a particular group of supervisors might rank ten employees as shown below:
        Supervisor A    Supervisor B    Supervisor C
 1.     Axel            Cerny           Axel
 2.     Bond            Axel            Bond
 3.     Cerny           Dixon           Dixon
 4.     Dixon           Bond            Cerny
 5.     Engle           Engle           Frye
 6.     Frye            Green           Engle
 7.     Green           Frye            Houghton
 8.     Houghton        Jones           Green
 9.     Inman           Inman           Inman
10.     Jones           Houghton        Jones
Each man can then be assigned an average rank.
            A    B    C    Average Rank
Axel        1    2    1        1.3
Bond        2    4    2        2.7
Cerny       3    1    4        2.7
Dixon       4    3    3        3.3
Engle       5    5    6        5.3
Frye        6    7    5        6.0
Green       7    6    8        7.0
Houghton    8   10    7        8.3
Inman       9    9    9        9.0
Jones      10    8   10        9.3
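The average ranks above are simple arithmetic; the short sketch below reproduces the computation from the three supervisors' orderings in the first table.

```python
# Each supervisor's ordering, from the table above (position = rank).
rankings = {
    "A": ["Axel", "Bond", "Cerny", "Dixon", "Engle",
          "Frye", "Green", "Houghton", "Inman", "Jones"],
    "B": ["Cerny", "Axel", "Dixon", "Bond", "Engle",
          "Green", "Frye", "Jones", "Inman", "Houghton"],
    "C": ["Axel", "Bond", "Dixon", "Cerny", "Frye",
          "Engle", "Houghton", "Green", "Inman", "Jones"],
}

# Collect each man's rank from every supervisor, then average.
ranks = {}
for order in rankings.values():
    for rank, name in enumerate(order, start=1):
        ranks.setdefault(name, []).append(rank)

for name, rs in sorted(ranks.items(), key=lambda kv: sum(kv[1])):
    print(f"{name:10s} average rank = {sum(rs) / len(rs):.1f}")
```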
Ranking systems have the inherent advantage that they are extremely simple to explain and are usually easily accepted by the persons assigned as judges - the procedure “makes sense” to them. They also have the advantage of permitting a rater to rank fairly large numbers of individuals without great difficulty; as a rule of thumb, satisfactory and reliable results can be achieved with a ranking system with Ns as large as 50 to 60.
Guilford (1954) has pointed out another advantage of ranking: since the judge is forced to make man-to-man rather than absolute comparisons, each placement rests on a direct comparison of one individual with another instead of on the judge's own internal standard.