SR/SSD 98-17
5-1-98

Technical Attachment

360° Performance Feedback
Jose M. Garcia, Jr.
Meteorologist In Charge
National Weather Service
Amarillo, Texas

I. Introduction

One sees very little in writing concerning managerial thoughts and theories from scientific and technical supervisors and managers. For meteorologists, perhaps a research paper concerning this season's El Nino might be an easier task. However, for meteorologists who are supervisors, both are equally important areas that deserve research. Managers in scientific and technical fields should consider sharing their thoughts on managerial issues that affect scientific or technical personnel. With this in mind, this paper has been prepared to relay findings and thoughts on a very important tool for scientific and technical managers and supervisors.

Working in the rotating shift environment of a National Weather Service field office, one often hears that managerial theories or practices used in the private sector will never work in this environment. After all, in the provision of weather services, rotating shifts are used to collect weather data, make forecasts, and issue life saving warnings. How could teamwork, diversity, 360° feedback or any other workforce theories relate to this type of work? A process such as 360° feedback is actually highly attractive, not in spite of, but because of the shift environment. Managers are not available at each shift. Therefore, what better way to obtain feedback than to have it from a variety of sources, especially from those who have firsthand knowledge of operational work. Also, "research indicates that supervisors rate more honestly and more rigorously when their ratings are supported by other informal sources, such as 360° feedback."

According to Mark Edwards and Ann Ewen, in their book 360° Feedback, "the 360° feedback process, also called multi-source assessment, taps the collective wisdom of those who work much closer with the employee: supervisor, colleagues (peers), direct reports (subordinates), and possibly internal and often external customers. The collective intelligence these people provide on critical competencies or specific behaviors and skills gives the employee a clear understanding of personal strengths and areas ripe for development." To paraphrase, 360° feedback is simply a process by which supervisors and employees can receive much more information concerning performance. It allows employee empowerment, since each individual has a say in their own and also another's performance evaluation. Much of what will be related in this paper is based on Edwards and Ewen's book, as well as firsthand experience, having used 360° assessments over the past two years. All citations in this paper can be attributed to Edwards and Ewen. This paper will offer new users of 360° feedback some insight on this process, and some pitfalls to avoid.

II. The First Attempt

To begin the 360° process, established models were sought. Based on consultations with colleagues, a model used at another weather office was found. Modifications were made to this model based on the knowledge and workings of the Amarillo office. A form for each individual position was created, consisting of two parts. The first part pertained to basic skills, knowledge, teamwork and communication abilities. Items in each section were rated on a scale from one to four, with one indicating the strongest agreement and four the strongest disagreement with each survey item. A four-point scale was selected, providing two levels of agreement and two of disagreement. A midpoint in the scale was avoided so that raters would make a definite decision; however, an option existed to simply choose a "don't know" response. The second part was a narrative response to three questions. See Exhibit I for an example of the form used for forecasters. Keep in mind that each individual position had a form with questions reflecting the duties of that position.

Labor-management partnerships have recently received a great deal of attention, and have become an important part of National Weather Service operations. Therefore, once the forms and procedures were prepared, the local Union Steward was then consulted. All the survey forms were provided to the steward for comments and revisions. More important, a clear consensus between labor and management was agreed upon in determining the process. This was an extremely important step to accomplish before presenting the 360° process to the staff. Once a consensus was reached, a staff meeting was held to begin and explain the process to all employees.

The process was made anonymous and, for the most part, voluntary. Completing the evaluations was mandatory for senior forecasters and program managers, since these individuals are responsible for performance input as required in their positions. Evaluation packages were prepared for each individual that included rating forms for every person in the office (around 20 individuals encompassing 11 different positions). It was recommended that individuals provide their own input beyond the evaluation survey, but no specific form was provided; those who wished to do so could simply submit written documentation on their own performance.

Employees were asked to complete evaluation surveys of fellow employees, then seal them in an envelope and turn them in at a central receiving point. Evaluations of the Meteorologist In Charge were likewise sealed, but returned to the Union Steward. Evaluations were not collected and opened until the deadline, approximately two weeks after the initial staff meeting. Employees were asked to provide honest, constructive input, and not to use the evaluations as a platform to "trash" someone they disliked.

The Meteorologist In Charge compiled each individual evaluation and the responses for the staff on a fresh form. Compiled responses for the staffs of subordinate supervisors were passed to those supervisors for their use. All original evaluation sheets were destroyed to protect anonymity. Comments were also typed, further ensuring the anonymity of the rater. Evaluations for the Meteorologist In Charge were compiled by the Union Steward, and never handled or seen by the Meteorologist In Charge. The Meteorologist In Charge received his feedback after the Union Steward compiled the results. He and the steward met and reviewed the feedback together.

III. Pitfalls and Successes

The introductory staff meeting was one immediate success. The staff and office leadership were highly motivated and excited about their first chance to have formal input in the rating process. It is believed that for many this was their first taste of true empowerment, and they felt they finally had some input into the management of the office.

All went well with the logistical aspects, such as employees filling out forms and anonymously returning sealed envelopes. At the deadline, all of the forms were collected and compiled. Each employee's evaluation surveys, which totaled as many as 15 to 20, were gathered and results compiled. In the open question area, responses were compiled and retyped to ensure anonymity. This did take some effort and time, especially considering the large response rate for each employee. In reviewing the feedback, only a few employees had provided separate documentation on their own performance.
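The compilation step described above was done by hand at the time, but its logic is simple to state. The following is a hypothetical sketch (the data structure and function name are illustrative, not from the original process): numeric responses are averaged per survey item, while comments are pooled with no record of who wrote them.

```python
from collections import defaultdict

def compile_surveys(surveys):
    """Aggregate one employee's sealed surveys.

    surveys: list of dicts like {"ratings": {item: score}, "comments": [str]}
    Returns per-item average scores and an anonymized pool of comments.
    """
    totals = defaultdict(list)
    comments = []
    for s in surveys:
        for item, score in s["ratings"].items():
            totals[item].append(score)
        comments.extend(s["comments"])  # rater identity is never kept
    averages = {item: sum(v) / len(v) for item, v in totals.items()}
    return averages, comments
```

Retyping the pooled comments, as the office did, serves the same purpose as the comment list here: the text survives, the handwriting (and thus the rater) does not.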

In compiling the results, some pitfalls in this first year process became evident. First, the scale used in the numerical rating section was a four-point scale. As a result, little variation in ratings from one employee to the next occurred on any particular survey item. From a supervisory standpoint, this feedback provided little assistance in gauging employee performance. Secondly, despite the training effort and the reiteration that the narrative sections be used to provide honest, constructive feedback, the pattern of responses became quickly evident. There were notably fewer responses to the question on strengths of the employee. Instead, raters focused on the areas that needed improvement and their suggestions for change. Many responses focused on legitimate performance concerns, but a large number of respondents relayed concerns on behavioral characteristics, unrelated to performance.

Edwards and Ewen suggest, "when work associates are assured that they will remain anonymous, they are willing to provide insight they might not reveal in a face to face meeting." Based on the comments received, there was little doubt the process had successfully ensured the anonymity of raters. However, considering behavioral comments created a dilemma. As a supervisor, two options were available to solve the dilemma. One, the behavioral comments could simply be tossed away. Second, all responses, whether behavioral or performance related, could be compiled. The latter option was chosen since it was the first attempt at the process, and it was felt employees would not like supervisors deciding what comments they should or should not receive. This decision was fortuitous, because Edwards and Ewen show that in a similar survey done at an electronics manufacturer, the supervisors edited and sanitized comments. They say this action "devastated the integrity and acceptance of the multi-source process" at that location.

Following the compilations, each employee was given a verbal review of the process during their evaluation meeting. They were informed how the data were compiled, and how the feedback was used as an additional informational source in the overall performance rating. It was relayed that the 360° feedback comprised only part of the evaluation. Most of the evaluation (75% or more) was dependent on supervisor and employee input.

Despite the verbal review, employees were not properly prepared to receive and interpret the 360° data. They continued to be openly concerned over how much supervisory emphasis the 360° feedback would receive in the final performance rating. Their concern likely resulted from the uncomfortable feeling of receiving feedback from individuals they may not often work with. However, compilation of the data suggested these concerns to be unfounded. The 360° process had been successful, in that most respondents were quite honest. Many even stressed they had not rated an individual because they did not work with that person, or worked very few times with that person.

Unfortunately, when employees received their feedback they looked directly to the narrative response section. In particular, they focused on the improvement areas and interpreted these as negative comments. Although employees may have received many comments praising their strengths, any comments related to improvement or change, even those which were constructive, were interpreted negatively. The wording of the narrative questions, along with a failure to provide some advance information on interpretation, produced these results.

Going into the first attempt, the goal had been additional input and fair indicators into an employee's performance evaluation. Some reading and research on 360° models had been done initially but, as it turned out, little was actually known and the preparation could have been much better. In reviewing this first attempt, and after the benefit of Edwards and Ewen's book, it was concluded that a disaster was narrowly averted. "The first exposure to 360° feedback is the most difficult for most participants . . . " according to Edwards and Ewen. The first year process certainly was difficult at this office, but in surveying employees, they felt a 360° feedback evaluation deserved further consideration.

IV. Course Corrections

Preparation for the initial process was clearly poor. So with disaster averted, the first course of action was to find out more about this process. An Internet search generally yielded companies or individuals who could do the process for you. With an office of 25 persons or fewer, it was felt this solution would be cost prohibitive. A literature search at the local bookstore yielded a dozen or so titles related to 360° feedback - enter Edwards and Ewen and their book 360° Feedback.

Immediately, it was discovered that what had been done on the first attempt was far from 360° feedback. We had actually performed 180° feedback. A true 360° process involves input from other customers, external sources and teams an employee may participate in. While we continue to call this a 360° process, it should be remembered that there are other potential feedback sources besides those within the employee's immediate environment.

The first step in revamping the process was to re-engineer the survey form. "The best design is to keep surveys and survey elements simple and understandable." Consideration was given to simplifying the form to just numerical ratings, without the narrative feedback. However, Edwards and Ewen indicate that while "narrative comments are not as anonymous as rating scale input... most people quickly learn to make comments that are constructive and specific yet anonymous." Consequently, a simple general comment section was put on the new survey form (see Exhibit 2), and the narrative questions were removed. Survey forms for each position in the office were again created.

"A reasonable 360° feedback survey probably should use between 20-35 items. The survey response time should be short - less than 15 minutes." To create a concise form, rating items that reflected the employee's work were created. Items in which peers had firsthand knowledge were preferred. For this, the original critical elements and items within the employee's existing performance plan were evaluated and given much more focus as potential survey questions. Using these items directly reflected the work the employee was responsible for. Critical elements and items with which peers may have little knowledge were left out of the 360° rating survey form for the second year process.

Finally, in formatting the rating survey form, the scale required serious reconsideration. As mentioned earlier, the original four-point scale provided little useable feedback. With a narrow range, Edwards and Ewen say "all scores look the same, and it is hard for those who receive feedback to feel motivated to change a behavior that is less than half a scale point different from a strength. A wider scale, such as a 10-point scale, results in far richer profiles that show a greater difference among behavior criteria." Thus, the 10-point scale was introduced on the new rating survey, along with a "not applicable or not observed" choice. Once the rating items, scale and comments section were determined, this left a short, generally one or two page survey.

Next, a process was needed to include a person's own input, as well as satisfy the employee's concern about who provided the feedback. Edwards and Ewen suggest that each employee should complete a self-evaluation and determine their own evaluation team. They suggest a small number, six or less, for each evaluation team. For the Amarillo office, guidelines were set for each employee to choose evaluation teams of five individuals, preferably from persons they work with most frequently. Employees were asked to choose individuals they felt could judge them fairly. Edwards and Ewen also suggest supervisors may add to the team, but not subtract. Therefore, two individuals were added to the team by requiring employees to select two program managers as respondents. For some employees, this still allowed them the option to select from four program managers. This brought the evaluation team for each employee to a total of seven, in addition to their self-rating. The only limit applied was that the employee not select their immediate supervisor, since that person would ultimately provide overall performance feedback regardless.
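The team-selection rules above reduce to a few simple constraints. As a hypothetical sketch (the function and its arguments are illustrative, not part of the office's actual procedure), a team is valid when it has five self-chosen peers plus two program managers, with no duplicates and never the employee's immediate supervisor:

```python
def valid_team(peers, program_managers, supervisor):
    """Check the evaluation-team rules described in the text:
    five self-chosen raters, two program managers (seven in all),
    no duplicates, and not the employee's immediate supervisor."""
    team = peers + program_managers
    return (len(peers) == 5
            and len(program_managers) == 2
            and supervisor not in team
            and len(set(team)) == 7)  # seven distinct raters
```

The self-rating sits outside this team, which is why it is not counted among the seven.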

Except where noted above, all other aspects of the first attempt were applied. An introductory meeting was again conducted, but detailed instructions on the process were provided. Much more time was spent highlighting the use of the survey and its importance in the performance appraisal process. The "weighting" of the feedback was also discussed, with the emphasis put on the importance of employee feedback. With this in mind, it was stressed that employees should complete and use the self-evaluation as a method to reflect their work and accomplishments.

V. Greater Success

The second year attempt went far better. Employees were very excited about having control over selections for their evaluation team, and overall they did quite well with this. Only two employees, out of 20, did not get all respondents to turn in the rating survey. Compilation of survey responses did not suggest any bias in employees' choices of evaluation teams. One might think that each employee would choose their best friends and thereby skew the results. If this were indeed the case, outlier responses would have appeared from program managers. Since very few outlier responses were shown in the results, it appears most employees followed the basic guidelines and chose individuals who truly provided constructive feedback.

The 10-point numerical scale ratings were more useful than the four-point scale. As an experiment, the overall rating for each element was calculated as an average of the total number of respondents. A final rating was then calculated eliminating any outliers (a response that is far different from the majority). Edwards and Ewen suggest that "one highly discrepant respondent, or outlier, can substantially skew the average score because there are so few respondents." In this case, there were only seven respondents for each employee.

The difficulty was in determining outlier responses. On the 10-point scale, it was determined that if only one respondent was three or more points different from the majority, that response should be considered an outlier. If two responses had this difference, neither was considered an outlier. Based on this method, there were very few outliers, but in cases where there were, a definite difference in scores resulted as compared to the total average. This method may need further refinement, but Edwards and Ewen's outlier assessment was supported in the Amarillo survey. Eliminating outliers provides much more accurate information, especially for employees who are high or low performers. Otherwise, averaging with outliers may bring down a high performer, or build up a low performer, on any particular item.
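The outlier rule above can be expressed compactly. The sketch below is one reading of the text's rule, with the "majority" approximated by the median (an assumption; the original does not say exactly how the majority was determined): if exactly one response sits three or more points from the majority it is dropped before averaging, while two or more such responses are all kept.

```python
from statistics import mean, median

def trimmed_rating(scores, threshold=3):
    """Average one survey item's ratings, dropping a single outlier.

    Rule from the text: on a 10-point scale, if exactly one respondent
    is `threshold` or more points from the majority (approximated here
    by the median), drop that response; if two or more responses differ
    that much, treat none of them as outliers and average everything.
    """
    center = median(scores)
    far = [s for s in scores if abs(s - center) >= threshold]
    if len(far) == 1:
        kept = list(scores)
        kept.remove(far[0])
        return mean(kept)
    return mean(scores)
```

For example, with seven respondents scoring [8, 8, 9, 8, 9, 8, 2], the lone 2 is dropped and the average rises from about 7.4 to about 8.3, exactly the effect on a high performer the text describes.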

As for the comment section, comments on the second attempt were certainly far fewer without the open-ended questions. Very few positive comments were received, but comments did focus more on performance issues rather than behavioral issues. In future training, it will be stressed that employees consider commenting on positive aspects of performance as well.

The self-evaluation was also successful. Many more employees took the time and effort to attach accomplishments in their comment section, giving supervisors even more information. The employee's self-evaluation responses were not averaged with all respondents on this attempt. The individual rating survey was allowed to stand on its own so that an employee might compare their own responses to those of their evaluation team. In this way they could decide for themselves whether they had any outlier responses in comparison to the team response. From a supervisor's perspective, it was also easier to ask an employee why they felt their response might differ from that of their team. This helped the employee focus on their own self-evaluation and paved the way for the goal setting portion of the evaluation.

Survey items were now written to reflect critical performance indicators in an employee's performance plan. This made the numerical results of the survey useful as feedback into the employee's overall performance evaluation. Employees were also much more at ease with this second attempt. There was far less concern about how the results would be interpreted, and less negative focus on the comments. From a supervisor's standpoint, the time involved in compiling these shorter surveys and calculating the numerical rating for each item was also far less intensive.

VI. Summary

The 360° feedback process can and does work even in a shift working environment. It is especially useful to a supervisor who has limited contact with shift working employees, by giving the supervisor additional knowledge and information. However, the information cannot be used as the sole method for performance evaluation. "Most commonly, 360° feedback serves as a supplement to, not a replacement for supervisory review." Employees also benefit from the feedback, and are provided with an opportunity to return feedback of their own. Edwards and Ewen relate that "employees are often more strongly motivated to change their work behaviors to attain the esteem of their co-workers, than to win the respect of their supervisors alone."

As the National Weather Service transitions to a two-tiered (pass/fail) performance evaluation system, this assessment tool could be even more important. It gives the supervisor a method to discern between high and low performers. It allows for goal setting, gives an employee a direct feedback mechanism and encourages communication. Survey questions or items can be developed that will reflect the performance indicators for each critical element.

There can also be many problems created in establishing a 360° feedback program. This paper has been written to help supervisors initiate the process and, hopefully, avoid some of the pitfalls associated with it. Setting up an initial survey can be time intensive, but the results will be well worth the time.

VII. Acknowledgments

Thanks to my National Weather Service colleagues, Steven Cooper, Doug Crowley and Richard Elder for their insight and review of this paper. Special thanks to my wife, Suzanne, who helps me get through everything in life.



Exhibit I

EVALUATION

OF

WFO AMA JOURNEYMAN METEOROLOGISTS

NAME OF METEOROLOGIST BEING EVALUATED:

PART A

Next to each performance statement, indicate:

1. DEMONSTRATES ABILITY TO PROVIDE BEST SERVICES POSSIBLE

2. ENCOURAGES OPEN, TWO-WAY COMMUNICATION

3. OPERATIONAL SKILLS

PART B

Your narrative responses to the following questions are critical to the success and credibility of the meteorologist evaluation process.

Please take the time to provide constructive, honest feedback.