Easy Graders

Employee ratings "on a curve" are oft maligned. Sometimes it's a case of "all my children are above average." But is it always?
A recent article in Vanity Fair about management practices at Microsoft suggests stack ranking to a forced curve has been part of what has undermined the company. Is this sour grapes or a smoking gun? And has the company really been undermined, or do we expect and accept friction in the workplace and some failed initiatives as a healthy process?
Reading the article, I suspect stack ranking is a symptom of Microsoft's challenges, not the cause. I'm certainly not arguing the process at Microsoft is perfect.
Harvard Business School did some interesting studies a few years ago on *one* version of "ratings on a curve" called "rank and yank" (where the bottom ten percent are performance-managed up-or-out) and found it worked great -- for a few years -- and then had diminishing returns, arguably becoming a net negative over time. The results also varied by type of industry and type of workforce. But I don't think that is precisely what we are talking about.
The fact that *one* flavor of "rating on a curve" doesn't work as well as some would like, over time, doesn't mean that "rating on a curve" is inherently flawed. What is done with the data, and how the process is used, is key.
Some possible benefits are clear: In a company that is too large for everyone to know everyone else, "rating on a curve" helps prevent some managers or orgs from becoming "easy graders" (or, conversely, overly harsh ones). That, of course, is one reason "grading on a curve" has been, at times, popular in education: it helps take the vagaries of individual professors' assessments out of the process, creating a more "fair" system.
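To make that correction concrete, here is a minimal sketch in Python -- the managers, names, and scores are entirely made up for illustration -- of one common way to put different graders' raw ratings on the same curve: re-express each rating relative to that grader's own mean and spread before comparing people across orgs.

```python
from statistics import mean, stdev

# Hypothetical raw ratings, keyed by manager. The "easy grader" (Pat)
# clusters everyone near the top of the scale; the "hard grader" (Lee)
# uses the whole range.
raw_ratings = {
    "Pat": {"Ana": 4.8, "Ben": 4.6, "Cy": 4.9},
    "Lee": {"Dee": 3.1, "Eli": 4.7, "Fay": 2.5},
}

def curve(ratings):
    """Re-express each rating as a z-score within that manager's own pool."""
    mu, sigma = mean(ratings.values()), stdev(ratings.values())
    return {name: (score - mu) / sigma for name, score in ratings.items()}

for manager, ratings in raw_ratings.items():
    print(manager, {name: round(z, 2) for name, z in curve(ratings).items()})

# On raw scores, Pat's reports all look better than most of Lee's.
# On the curve, Eli (a 4.7 from a hard grader) stands out, while Pat's
# middle-of-the-pack report no longer looks like a star.
```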
But let's take a step back. One danger is when performance management becomes about the rating, and not about regular, positive and constructive feedback and discussions. The point of performance management is to improve performance, not to rank or rate people. The ideal is to take as much subjectivity out of it as possible (i.e., make it fair) without losing sight of 1) all the critical nuance and subtle factors that, in the end, we cherish and champion, and 2) of course, what the actual work results are.
There are other uses too, such as workforce analytics: "Wow, who knew we had a disproportionate number of top performers coming out of university X, where we would have never suspected to find such top talent?" Such insights let us focus our recruitment and partnership efforts on that institution. But if we cannot rely on our ratings being consistent and more or less normally distributed from one org to another, we have a problem. The correlation might have little to do with the quality of the graduates of that university, and a lot to do with the fact that they all got recruited into an org of "easy graders." So these people appear to be outperforming when, in fact, they are just average. This is just one example, almost at random.
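As a hypothetical illustration of that failure mode (the names, numbers, and the "University X" effect are all invented), here is a short Python sketch: graduates of University X happen to be concentrated under an easy-grading org, so their raw ratings look impressive, but once each manager's leniency is subtracted out, the apparent edge disappears.

```python
from statistics import mean

# Invented data: (employee, university, manager, raw rating on a 1-5 scale).
# The easy-grading org happens to have hired heavily from University X.
employees = [
    ("a", "X", "easy", 4.7), ("b", "X", "easy", 4.6), ("c", "X", "easy", 4.8),
    ("d", "Y", "strict", 3.4), ("e", "Y", "strict", 4.2), ("f", "Y", "strict", 3.0),
]

def avg_by_university(records):
    """Average the last field of each record per university."""
    by_uni = {}
    for _, uni, _, score in records:
        by_uni.setdefault(uni, []).append(score)
    return {uni: round(mean(vals), 2) for uni, vals in by_uni.items()}

# Naive analytics: University X looks like a goldmine of top talent.
print("raw averages:", avg_by_university(employees))

# Adjust each rating by its manager's average leniency before comparing.
manager_scores = {}
for _, _, mgr, score in employees:
    manager_scores.setdefault(mgr, []).append(score)
manager_means = {mgr: mean(vals) for mgr, vals in manager_scores.items()}

adjusted = [(n, u, m, s - manager_means[m]) for n, u, m, s in employees]
print("adjusted averages:", avg_by_university(adjusted))
# After adjusting for grader leniency, the University X "edge" vanishes.
```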
Here's a question: Would it be better if employees never knew their rating? That is, if the discussion were about the work and performance, and the ratings were just something determined and collected but not shared? I'm not suggesting this is a good or even possible solution, but I think it's worth putting any idea on the table to explore -- since I know from personal experience on the receiving end that the rating process seldom feels good or motivates me.