Wednesday, September 19, 2007

360-degree Feedback Programs

Michelle Golden, whose own blog is always worth reading, writes in to ask about 360-degree reviews and upwards evaluations, especially in light of what she perceives to be a much needed shift from a labor force ("asset") mentality to a knowledge-worker mentality. She writes:

Personally, I find them to be a very effective tool when executed properly (which I believe I do and have done for small firms up to a Big 4) though I see them poorly executed sometimes with what can be morale-damaging consequences.

In a well-done 360, I appreciate the contrast that comes to light between the subject's views of him/herself, their managers' perspective, their peers' perspective and their direct reports' perspective. And sometimes, the clients' perspective.

Interestingly, but not surprisingly, the subject usually underestimates or overestimates (significantly and consistently) how they perform across a broad variety of management and leadership areas such as decision-making, crisis management, teaching, learning, follow-up/accountability, delegation, etc. Done well, I've seen 360s build confidence around strengths and indicate a clear path of important areas for people to work on.

My questions for you are:

1) What do you think of 360 evaluations for those who manage others or will be doing so?

2) Do you know of or use other tools that help establish measures of management characteristics such as those just listed?

3) If used, do you think they should be private to the subject for personal development or as a tool the organisation uses to evaluate the effectiveness of their people?

Michelle, I find this question to be on a par with the question: "Should we ask our clients for feedback on how we are doing?" It astounds me that, even in some very elite firms, that battle is still being fought, yet it rages on, as does the debate over 360-degree reviews for managers.

The simple truth is that, if you really want to be more effective at anything (sports, playing an instrument, romance, managing) you have to find a way to get constructive feedback, somehow. In life, the absence of complaints is not a dependable indicator of the absence of opportunities to improve.

So, it all starts with that big "IF" -- do you care enough to want to improve? If so, then we're just discussing mechanics. If you don't (and the vast majority of people do NOT want to do what it takes to improve unless they are absolutely compelled to) then no 360-degree program is going to prove effective: there are too many ways for such systems to be gamed, subordinates to be intimidated, feedback to be ignored and change to be made optional.

We have discussed getting feedback on this blog, particularly in the discussion Getting Good at Getting Feedback (16 people have joined in on that one so far).

I also reported in another blog post on a manager who asked his people to evaluate him and promised to resign if he did not improve by 20% (Teaching Guts), which tells you something about my view on your third question, Michelle.

In my experience, the overwhelming majority of 360-degree programs fail to deliver the desired benefits of actual improved managerial performance for one (or all) of the following reasons:

a) There is a lack of understanding of what the manager's role actually is, so it's hard to give feedback to, or evaluate, the manager if what he or she should be good at is ambiguous (or carries a high level of deniability: "That's not my job"; "That shouldn't matter if I deliver"; etc.)

b) Feedback is collected with highly structured, bureaucratic questionnaires which do not address the relevant behaviours and characteristics. They are too formal.

c) The feedback is delivered in such a way (e.g. without coaching) that the recipient is allowed to "misinterpret" what the information is really saying.

d) The feedback is kept "confidential" so there is no "embarrassment factor" if the manager fails to improve. The system relies on best intentions; it is not a strict accountability system, which it needs to be if it is to work. Managers exempt themselves from accountability whenever they can.

My quick summary is that a manager who really wanted to improve would not need the formality of a company-wide 360-degree program to get there, and managers who do not wish to be held accountable will not only not be helped by the system, they will ensure that it has no teeth!

There's more, Michelle, but let's see what you and others in the real world who have direct experience with 360-degree programs have to say.

11 comments:

Anonymous said...

I know I am in the minority here but in my start ups, I refuse to allow any 360-degree type format feedback process or system.

It all stems from the air force, I suppose, with its sophisticated rating systems.

These systems consumed officers, took far too much time and served, in the end, as gigantic UN-motivators.

They all seem so good on paper, and a few people actually are trained and do them well ... but, by and large, most managers do not do them well, and they end up de-motivating your work force.

Anonymous said...

I've found 360-degree reviews very valuable for myself; even though sometimes the feedback is hard to hear, it's worth having.

I miss it in my current environment, and I'm thinking about how to get it.

The parts that worked best in my firm were those where you got the groups together for a discussion with the HR people, and collected comments; the tick a box surveys didn't have the same power.

The key, though, is that comments are, by definition, less anonymous.

It's important for it to be safe for people to make comments about those above them, as well as for the feedback to be heard and acted upon.

Anonymous said...

Formal 360-degree reviews can work where the manager has a large number of reports, so that employees commenting upward can believe their unflattering assessments will be lost in the bulk.

If the number of persons surveyed is fewer than eight to 10, many employees will not be candid, fearing retaliation.

I have heard many colleagues curse and fudge upward reviews; nobody has ever given the slightest credence to assurances of anonymity and non-retaliation -- nor should they.

Anonymous said...

I have used multi-rater feedback in organisations and/or for clients since 1994.

It is generally an incredibly effective tool for providing managers with the most candid feedback they have ever received.

Some tips:

• Aim for feedback from 8-12 people;
• Train the respondents (if possible) in how to rate;
• Allow written comments;
• Do not cut the data by peer or report; keep it all together to lessen fears about confidentiality;
• Provide professional assistance and coaching to participants afterwards; and
• Make sure the rating items are highly behavioural and follow good principles of survey construction.

Robert Edward Cenek www.cenekreport.com

Anonymous said...

I don't understand why people make these things so complicated but then I suppose for every problem, there's an industry just waiting to be born.

To your IF point -- that's the biggy.

If and when I get asked these kinds of questions, my initial shot is always the same: "I am here to comfort the disturbed and disturb the comfortable".

The ones who squirm are in the comfortable camp.

Next question: "What do you know about yourselves?"

Usually there's a heavy silence or some claptrap about facts and figures.

Third question: "Do people come to you with issues or do you feel like you're pulling hen's teeth?"

By this stage, most meetings are giving off a distinct air of depression, tinged with suspicion.

It's at this point I know whether what I'm saying will fall on largely deaf ears.

Fortunately, there's nearly always at least one person who does want to learn something.

And as I'm sure you know Richard, change happens one person at a time.

This stuff is so common.

I describe it as professionally institutionalised arrogance born out of the entitlement nature of tenured partnership.

Anonymous said...

I recently put together some thoughts on 360-degree programs for my clients.

Here are some (very highly edited) excerpts:

Too many 360 processes use broad, voluminous, non-enterprise specific, leadership or competency frameworks that are unwieldy to work with and make it difficult to establish a sense of priority.

The most important question is, "What handful of behaviours, if enacted consistently by leaders and employees alike across the organisation, will result in improved outcomes for our stakeholders given the particular challenges and pressures we face?"

Those being rated should be substantially involved in the design of both the instrument and process with appropriate professional guidance.

Both those giving and receiving feedback should undergo training regarding these roles and the most effective ways of performing their function.

The instrument should have a significant qualitative feedback component (open ended questions), which ideally should appear before the quantitative questions (rated items) in the questionnaire layout.

Even quantitative items should allow for written comments to allow respondents to explain ratings given.

Rating scales should allow for more than the traditional 3- or 5-point ranges.

Seven or even ten-point ranges have been found to provide richer sources of feedback, especially when combined with substantial qualitative data and appropriate training.

External mediators can be useful in assisting ratees to analyse their results.

Research indicates most people miss or misinterpret vital messages from their data.

Particular care needs to be taken in using feedback from peers.

Research has shown that peer feedback is generally not as useful as either subordinate or boss' feedback.

It is not yet clear whether peers cannot, or will not, provide useful feedback.

Here again, external facilitators can be useful in mediating the process.

Consider whether or not comparing people's results to some group (whether organisational or industry), as is commonly the case, is useful in the context of development.

How does it help that someone knows they are above or below "average"?

When used for follow-up purposes, intervals of 12-18 months are suggested.

However, as the need for fast behaviour change and attendant performance results grows, re-assessment intervals as short as one month may be more useful, depending on individual circumstances.

Some practitioners maintain that if noticeable behaviour change does not occur within 90 days from beginning an intervention, it never will.

Personal development plans should focus on developing existing strengths to well above average levels, while weaknesses should be addressed as a priority only if they represent "fatal flaws".

Some fatal flaws include not showing initiative, little focus on results, resistance to new ideas and not learning from mistakes.

The ratee's boss must be involved in some way in the creation, resourcing and "sign-off" of the personal development plan.

Four broad outcomes are possible:


1) Behaviour changes in the desired direction, as does performance.

Great!

Use this outcome to strengthen the understanding of how behaviour links to performance.

Be careful though, also consider that perhaps an unrelated factor such as changed market conditions may be responsible for the positive results.


2) Behaviour changes, performance doesn't.

Here, check your hypotheses about what behaviours lead to performance.

Examine assumptions and supporting evidence.

Also consider as in point 1 above that unrelated factors (eg market conditions) have been responsible.

If that is the case, perhaps new or changed behaviours are required to address the external reality.


3) Behaviour doesn't change, performance does.

Once again, check your hypotheses about what behaviours lead to performance.

Examine assumptions and supporting evidence.

Also examine the interventions that were supposed to lead to behavioural change but didn't.

See if further advantage and leverage can be achieved through whatever factor/s led to increased performance.


4) Both behaviour and performance don't change.

The interventions haven't worked, probably due to misjudgement about the makeup and strength of the current organisation culture, and less likely to poor execution of the interventions themselves.

Anonymous said...

I like them, and I don't mind saying I think we do them the right way here at H&K.

Anonymous said...

OK, Leo -- I'll bite!

What is it about the way you do them at H&K that makes them work well?

Anything you can share with the rest of us?

Anonymous said...

Thank all of you for excellent contributions to the discussion!

I think Richard's point stands: if you're looking to improve, any way that you can get constructive feedback is of value.

It is quite challenging to institutionalise this!

Richard mentions failure to deliver "desired benefits of actual improved managerial performance".

So what can be done to boost this?

Sadly, "GL" above says that 360 results demotivated workers in the service.

Actually, there's a book out there called "Abolishing Performance Appraisals: Why They Backfire and What to Do Instead" that I've ordered ... haven't received it yet ... but, as the title suggests, it argues that the traditional appraisal model needs to go and, much as David suggests, recommends that people manage themselves, seeking out feedback and plotting their own self-improvement.

Still and all, I imagine most won't take the initiative to do or change anything, preferring to simply stagnate.

As a species, the majority sure seem pleased with the status quo. <sigh>

Back to the topic, for evaluation effectiveness, Robert Creed hits the nail on the head with his item #6: "making sure that the instrument evaluates behaviour" [versus intention] which is all part of the art of asking the right questions, the right way.

That is probably the core flaw with most surveys of any kind.

Peter Gwizdella's point about enterprise specificity is great, too.

And so is involvement of participants in the development of the process.

This does help assure their comfort, which is essential to whether they will take the results seriously and elect to use them constructively.

I don't think I would ever compare an individual to "average", but I do often look at group results, aggregating them by level and/or department.

They usually inspire incredible discussion about group dynamics, performance and effectiveness.

Facilitating this conversation in the direction of the future keeps it positive and from slipping into a "rehashing the past" session.

(This is another reason facilitation is essential in delivering results).

I really like Peter's point about focus on strengths versus flaws (unless fatal).

So often, we look only at what isn't great about us instead of appreciating what is.

Then we spend precious energy on developing weaker points to be average instead of honing good things to be exceptional things.

I also appreciate the suggestions about the ratee's boss becoming involved in the ratee's personal development plan.

The broad personal development plan outcomes listed are very thought-provoking.

These seem invaluable for ANY change initiative at all.

Great stuff.

Thank you all for responding.

Anonymous said...

Great post!

As a manager I've had the great pleasure of working with people and figuring out how to motivate them.

However, I've had mixed results with 360 Degree Feedback reviews.

If you're doing them in your company for the first time, I've found it's imperative that you explain well in advance to everyone what the process is, how it's conducted, who will see the results, and what they will be used for.

Without setting the stage like this well in advance, you'll get a lot of upset staff who really don't understand what the purpose is and how it could possibly help them.

Prepare in advance, and you'll see a better result in the end!

Anonymous said...

Interesting post & great comments.

Many Thanks