Choosing 360: A Guide to Evaluating Multi-rater Feedback Instruments for Management Development

Center for Creative Leadership, 1997 - Business & Economics - 46 pages
This book presents a nontechnical, step-by-step process for evaluating any 360-degree-feedback instrument intended for management or leadership development. Such instruments collect information about a target manager's performance from multiple sources, offering multiple perspectives. The 16 evaluation steps are: (1) finding out what is available; (2) collecting a complete set of materials; (3) comparing the intended use to instrument characteristics; (4) examining the feedback scales; (5) becoming familiar with the instrument development process; (6) learning how items and feedback scales were developed; (7) finding out how consistent scores tend to be; (8) assessing basic aspects of validity (whether the instrument measures what it claims to measure); (9) thinking about face validity; (10) examining the response scale; (11) evaluating the feedback display; (12) understanding how the breakout of rater responses is handled; (13) learning what strategies are used to facilitate interpretation of scores; (14) looking for development and support materials; (15) comparing cost (value for the price); and (16) considering length (a minor issue). The report concludes with a checklist of the steps, a glossary of technical terms, and a list of suggested readings. (Contains 12 references.)


