Drift to Low Performance | Customers, Etc.
An exploration of why systems with unclear performance indicators naturally drift downwards
This is the 5th post in a series on support systems (though it’s evolved to be more about business systems), which began with Your First Support Model. The most recent post in this series was about Reinforcing Feedback Loops.
Imagine it’s the end of your junior year of high school and you get your first real summer job. Mowing lawns and babysitting is for the kids. You’re going to make some real money this summer.
You follow an ad for a local home goods wholesaler that pays $9/hour for a warehouse job. You’ll be picking items for orders, packing them into boxes, and staging them to be picked up or shipped to customers. Easy enough.
You’re incredibly motivated to get started. You quickly figure out the ins and outs of picking orders and by the end of the week, you’re pretty sure you’re picking more orders than anyone else, even the workers who have been there for years. Your boss tells you what a great job you’re doing. Everyone else sort of gives you the side eye.
A few weeks in, you start to get pretty bored. Your pace has slowed a bit—oddly, this seems to make the other workers happy—and you’re starting to wonder if you should have spent your summer playing video games, or at least finding a job that paid a little better. You decide to talk to your boss to see if there’s a way you can make any more money.
“Look, kid.” Your boss is patronizing, but he means well. “You’re a hard worker and you’re doing a great job, but this is a nine-dollar-an-hour job. I know you’re just here for the summer and are going to go on to do great things. You work hard, so feel free to clock out a bit early on Fridays and I’ll make sure you get paid for the full hour.”
You thank him for his time and get back to work. Your pace continues to slow throughout the summer and by the last week, the other workers are inviting you to join them when they go out for lunch. All in all, a fine summer at a fairly unremarkable job.
What you experienced in this imaginary summer was a drift to low performance. Without adequate performance standards, you found yourself matching the expectations being set by those around you. What causes drift to low performance? How do you prevent it?
Modeling drift to low performance
Drift to low performance is one of the “systems traps” covered in Thinking in Systems. Before we model what drift to low performance looks like, let’s return to the basic support ticket model we built a few weeks ago:
In this model, there’s a stock of tickets and a “Solving Tickets” function—presumably, a team of people working together—trying to get the queue down to zero. However, the valve you see in that drawing isn’t just a function on a spreadsheet. It’s composed of real humans, and those humans are themselves complex systems that affect the rate at which the team can solve tickets.
Let’s model the system with three support agents, except instead of just a stock of tickets, we’ll also introduce a stock of motivation for each individual agent.
You’ll notice in the system above that the balancing feedback loop is exactly the same. The team responsible for “Solving Tickets” is still trying to get the number of tickets in the queue down to the desired level of zero. What’s changed is that we’ve added “Work Motivation” for each individual agent, which sits in a reinforcing feedback loop with the perceived minimum output of the lowest-performing team members. Put another way, all things being equal—pay, benefits, etc.—people will naturally drift to producing the same work output as the lowest-performing member of the team.[1]
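To make that loop concrete, here’s a toy simulation (the adjustment rate and perception bias are invented for illustration, not taken from the book): each agent adjusts their output toward a slightly pessimistic read of the lowest performer’s output, so the floor keeps ratcheting down and the whole team converges on it.

```python
# Hypothetical sketch of the drift-to-low-performance trap: each agent's
# output drifts toward the *perceived* minimum output on the team, and
# perception skews low, so the standard erodes over time.

def simulate_drift(outputs, steps=20, adjustment=0.3, perception_bias=0.95):
    """outputs: initial tickets/day per agent (invented numbers)."""
    outputs = list(outputs)
    for _ in range(steps):
        # Everyone slightly underestimates the worst performer's output...
        perceived_floor = min(outputs) * perception_bias
        # ...and drifts part of the way toward that perceived floor.
        outputs = [o + adjustment * (perceived_floor - o) for o in outputs]
    return outputs

# Three agents: a fast new hire, a middling agent, and the slowest worker.
final = simulate_drift([30, 25, 20])
```

Running this, the fast agent’s output ends up below where the slowest agent started, and all three converge on nearly the same number—the summer-job story in miniature.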
Leaderboards considered helpful
In Thinking in Systems, the way out of this particular trap is to keep performance standards absolute, letting “standards be enhanced by the best actual performance instead of being discouraged by the worst.” Rather than team members relying on a perceived state of performance and drifting to the bottom, make the performance standard explicit and set the bar high.
One common example of highlighting explicit performance standards among team members is in sales. Sales teams often use a leaderboard to highlight top performers and demonstrate to the rest of the team what the best performers look like. At the end of a quarter, this can motivate team members to put in the extra work necessary to move ahead on the leaderboard.
When we look at how the system is modeled for an individual seller, we can see that their work motivation increases as they observe the best leaderboard performance. Where does the extra motivation come from? Their stock of energy. As they draw down their stock of energy to successfully close deals, each closed deal generates a commission for the sales rep, which in turn is very motivating, creating a reinforcing feedback loop as more deals are closed.[2] The rep might be exhausted at the end of the quarter, but if their commission check is large, they’ll probably be happy just the same.
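A minimal sketch of that reinforcing loop (all numbers invented): the rep spends energy to close deals, each deal pays commission, and commission feeds motivation, so effort compounds until the energy stock runs low.

```python
# Hypothetical sales-quarter model: motivation drives effort, effort drains
# energy and closes deals, deals pay commission, and commission reinforces
# motivation -- a reinforcing loop bounded by the energy stock.

def simulate_sales_quarter(weeks=13, energy=100.0, commission_per_deal=600.0):
    motivation = 1.0
    commission = 0.0
    for _ in range(weeks):
        effort = min(energy, 10 * motivation)  # can't spend energy you don't have
        energy -= effort
        deals = effort / 20                    # assumed conversion: effort -> deals
        commission += deals * commission_per_deal
        motivation += 0.1 * deals              # getting paid is motivating
        energy += 2.0                          # small weekly recovery
    return commission, energy

commission, energy_left = simulate_sales_quarter()
```

By the final weeks the rep’s energy is nearly gone—exhausted, but with a commission check to show for it, which matches the happy-but-tired quarter-end the post describes.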
Leaderboards considered harmful
What happens if we apply a pure leaderboard model to a post-sales balancing feedback loop, say, a customer support inbox? Will it be effective? In the short term, almost certainly. But in the long term, it will burn people out. Let’s model it out:
You’ll notice this model looks almost exactly like the sales model. “Closing Deals” has been replaced by “Solving Tickets”. We still have the “Best Leaderboard Performance” as a motivator to draw from our stock of energy to solve more tickets (and inch up the leaderboard). However, unlike sales, where a team member can draw down their energy level and be rewarded with a higher stock of paid commission, there’s not a similar reward on the support side. Yes, you can inch up the leaderboard, but it’s not the same as getting a commission.
Can you offer a performance bonus for closing out more tickets? Sure, but you’re naturally not going to pay as much of a bonus as you would on the sales side of the business. As we discussed last week, the retention side of the business is focused on minimizing cost to maintain a desired number of customers. It’s a balancing feedback loop, and you don’t want to over-invest in balancing feedback loops.[3] Sales, in contrast, is a reinforcing feedback loop. The more money you put in the sales machine, the more you’re going to get out on the other side. Therefore it makes sense to invest in sales, which includes paying commissions to your sales team for closing deals.
Another difference is that retention systems—especially customer support—tend to be relatively constant in terms of the demand on the system. Customers don’t care whether it’s the end of the quarter—if they have an issue, they’re going to email you right away and expect a response. Sales, in contrast, tends to be more cyclical. There’s a push at the end of the quarter to close deals, so you draw down extra energy to get the job done. Once you move into the next quarter, though, things relax a bit and you can rebuild your energy level. If you apply the leaderboard model to a support team and expect them to grind all the time, they’ll naturally get burned out because there’s no built-in time to recover.
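The demand difference can be sketched with a toy energy model (all numbers are invented for illustration): sales demand is cyclical, with a quarter-end push followed by a lull to rebuild energy, while a leaderboard-driven support team faces constant demand above its recovery rate and never gets the rebuild window.

```python
# Hypothetical comparison: the same energy stock under cyclical sales
# demand vs. constant support demand. Recovery is a fixed weekly trickle,
# so only the cyclical schedule leaves room to rebuild.

def simulate_energy(weekly_demand, energy=100.0, recovery=8.0):
    """Track an energy stock as each week's demand draws it down."""
    for demand in weekly_demand:
        effort = min(energy, demand)                # effort capped by energy left
        energy = min(100.0, energy - effort + recovery)
    return energy

sales_demand = ([5] * 9 + [20] * 4) * 2             # two quarters: lull, then push
support_demand = [12] * 26                          # constant grind, no off-season

sales_energy = simulate_energy(sales_demand)
support_energy = simulate_energy(support_demand)
```

In this sketch the support team ends the half-year pinned near empty—unable to meet the weekly demand at all—while the sales schedule, despite its crunches, keeps the stock partially replenished.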
While leaderboards in support are certainly better than nothing, they’re not my favorite way to set high performance standards and avoid drift to low performance.[4]
How do we avoid drift to low performance?
In looking at the leaderboard model, we need to remember that support tickets aren’t purely a function of agent throughput. If we only focus on churning through tickets and agent throughput, we’ll build a system that excels at agent throughput. But tickets come from somewhere. In addition to looking for high performance at solving tickets, we also want to look at where tickets come from and what feedback loops need to exist to prevent customers from needing to reach out in the first place.
Next week, we’ll look at strategies for setting high performance standards on a support team that don’t necessarily involve a leaderboard.
1. You might object that not every team naturally drifts to low performance, but that’s usually because there’s some mechanism that sets the performance bar higher, perhaps by filtering who’s allowed to join the team. In a completely open system with no standards, performance will tend to drift downward.
2. Many sales teams also use commission tiers, offering a higher commission percentage for deals closed above quota—e.g., deals might normally pay 3% commission, but once you hit your quota they pay 6%. Thus, even after you hit your quota, you’re still motivated by an even higher performance bar above the one you just cleared.
3. You don’t want to invest in balancing feedback loops in isolation, but lots of balancing feedback loops are also immediately tied to systems that have reinforcing feedback loops. For example, you want to spend as little as possible on customer retention (a balancing feedback loop), but you also want the customer experience to be so remarkable that customers tell their friends (a reinforcing feedback loop).
4. The types of leaderboards I’m against in customer support are the highly visible ones that everyone on the team is checking all the time. For managers, monitoring team members’ performance is a critical function of the job, but how you share the data matters. Instead of putting it on a leaderboard and making it the focus of attention, share the data with team members individually with a short note about their performance relative to the rest of the team. Put more focus on the behaviors that will lead to the output you want to see and less on the numbers themselves.