By Ben Dattner with Darren Dahl
When organizations focus too much on blame, they often miss the opportunity to reflect on whether there may be systemic factors encouraging people to act in ways that are counterproductive.
In 1975, veteran business executive Steven Kerr penned an article for the Academy of Management Journal called “On the Folly of Rewarding A, While Hoping for B,” which has become a classic in the years since. Many problems in organizations arise from faulty incentives and flawed reward systems that are set up to accomplish one thing but actually motivate people to do another, or even the opposite.
As Kerr puts it: “Managers who complain about lack of motivation in their workers might do well to consider the possibility that the reward systems they have installed are paying off for behavior other than what they are seeking.”
Kerr describes several examples of these mixed messages, ranging from doctors who diagnose healthy people as sick in order to avoid being blamed for missing an illness, to universities that claim to prioritize professors’ teaching ability yet evaluate and reward them only for publishing.
Another example might be a high school basketball player who excels at passing the ball, which makes his teammates better. But because his coach and the colleges that might give him a scholarship credit only a player’s ability to score, the player passes less and shoots more — which actually hurts his team’s chances of winning.
I once consulted to a company that quantified and measured everything its employees did using a “Six Sigma” approach, a methodology that prioritizes reducing error rates through rigorous statistical analysis and process control. Although the approach had some benefits, the employees I interviewed felt strongly that the Sigma scores they were incentivized to boost not only failed to capture their performance fairly, but also didn’t correlate closely enough with the organization’s true needs.
First, there was a temptation to “game” the system, for example by starting a project later so that the total time to complete it would appear shorter, or by forgoing certain kinds of transactions that would have been profitable but would have lowered their Sigma scores. Second, some aspects of their Sigma scores were entirely out of their control, determined instead by factors such as local market conditions.
Their concerns, however, fell on deaf ears. The leader of the department was a self-professed “evangelist” of Six Sigma and was not open to making any modifications to the evaluation process.
This kind of cultural problem was well described by Harvard researcher Ronald Heifetz, who along with his co-author and Kennedy School colleague Marty Linsky drew the helpful distinction between “technical” and “adaptive” approaches to organizational problems. The aforementioned company was trapped in a purely technical approach to the challenges it faced; it was a “closed system,” unable to adapt successfully as the world changed around it.
The demoralized employees felt punished by a system that didn’t reflect their true contributions, that incentivized them to work against the real interests of the organization, and that penalized them unfairly for trying to do the right thing.
On the other end of the technical-adaptive spectrum was a financial services organization that hired me to work with its human resource department to help it assess its performance appraisal system. The organization was committed to assessing all aspects of the system — from the way it was administered, to its frequency, to who participated, to the kind of criteria that were assessed.
My firm and I were asked to review the entire process from start to finish, and to collect information from participants about what could make it more efficient and effective. The project helped HR make substantial improvements to each step in the system, and the changes were much appreciated by the company’s employees.
I have found that the best way for an organization to encourage employees to reflect and evolve is to do the same itself. This financial services firm’s healthy organizational culture emphasized learning, and dysfunctional blame was almost entirely absent from the experience of working there.
Excerpted from The Blame Game: How the Hidden Rules of Credit and Blame Determine our Success or Failure, by Ben Dattner with Darren Dahl. Copyright 2011 by Ben Dattner. Reprinted with permission of Free Press, a division of Simon & Schuster, Inc.