Measuring Customer Experience
This week we take a look at how measuring customer experience becomes part of the experience itself and why it’s important to be clear on your CX strategy. If you like what you read, I’d be grateful if you hit the thumbs up at the bottom.
If you were to add a customer experience management practice to your business—wait, why on earth are you adding something as boring as a “customer experience management practice”? And people are going to do actual work in that practice? And update their LinkedIn with titles such as “Customer Experience Manager” and all that? “Because customer experience is a differentiator for our brand”, you might say. Or “we want to be a business that people rave about”. Or “trust is critical in our industry—we have to get it right at every touch point in the customer journey.” Okay, I’ll play along.
If you were to add a customer experience management practice, one of the primary motivations would likely be because you intend to measure customer experience. This is reasonable. It’s not enough just to say, “we’d like to better align around customer experience.” You want to measure how things are going and have some way to report back that you’re actually improving things.
Before you began your CX management practice, you were likely already measuring certain operational metrics across various touch points of the customer journey. For example, you might measure how long it takes to get back to a customer by email when they first write in (first reply time), or you might measure how many days on average a prospect waits for their contract to be reviewed by legal (average contract turnaround time). You can come up with dozens of measurements to indicate the health of the customer experience for each touch point in the customer journey.
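To make the two example metrics concrete, here's a minimal sketch of how they might be computed from raw timestamps. The field names and sample data are invented for illustration, not taken from any particular tool:

```python
from datetime import datetime

# Hypothetical support tickets; "received" and "first_reply" are assumed field names.
tickets = [
    {"received": datetime(2023, 5, 1, 9, 0), "first_reply": datetime(2023, 5, 1, 10, 30)},
    {"received": datetime(2023, 5, 1, 14, 0), "first_reply": datetime(2023, 5, 2, 9, 0)},
]

# First reply time per ticket, in hours.
reply_hours = [
    (t["first_reply"] - t["received"]).total_seconds() / 3600 for t in tickets
]
avg_first_reply_hours = sum(reply_hours) / len(reply_hours)

# Average contract turnaround works the same way, just measured in days.
contract_days = [3, 7, 5, 10]  # days each prospect waited for legal review
avg_turnaround_days = sum(contract_days) / len(contract_days)
```

The point is that operational metrics like these are simple aggregates over data you already record; no customer ever has to be asked anything.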
At some point, you might ask yourself whether customers really value whatever it is your operational metrics point to. What if you think they value responsiveness, but actually they care a lot more about the quality of the reply so they don’t have to engage in a lot of back and forth? It’s at this point, if you decide to actually ask customers about their preferences and feelings and all that, you start wading into the waters of “experience data”. You’re not just going to measure the operational data and assume you know what customers value. You’re going to ask customers directly¹.
The Yardstick Becomes The Measure
Have you ever come off a particularly awful interaction with customer service, only to be treated to a follow-up text (a text!) asking how likely you are to recommend Brand™ to your friends? Like, it’s clear that the person who was supposed to be supporting you had no clue what was going on, but now they’ve seemingly doubled down on their incompetence by sending you a loyalty survey? And I don’t suppose that after you rate their brand a solid 2 out of 10 and leave some honest feedback about their product and service, someone from Brand™ will follow up with you to make things right. I mean, you can almost predict that the companies prone to delivering poor customer experiences are also going to be bad at how they go about measuring said experiences.
That’s the funny thing about experience metrics. Unlike operational metrics, how you approach measurement, that is, how you ask customers about their experience, becomes part of the experience itself and therefore affects the measurement. If you approach measuring CX in a clumsy way that is in itself a bad experience, you should expect customers to respond with frustration and perhaps even a rebuke of the measurement itself. The yardstick you use to measure customer experience necessarily becomes part of the measurement.
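For reference, the "how likely are you to recommend" survey in the anecdote above is a classic Net Promoter Score (NPS) survey. The conventional formula counts 9s and 10s as promoters, 0 through 6 as detractors, and takes the difference as a percentage; the sample responses below are made up:

```python
# Made-up 0-10 "how likely are you to recommend" responses.
responses = [10, 9, 9, 8, 7, 6, 2, 10, 5, 9]

promoters = sum(1 for r in responses if r >= 9)   # scores of 9-10
detractors = sum(1 for r in responses if r <= 6)  # scores of 0-6
# Passives (7-8) count in the denominator but not the numerator.
nps = 100 * (promoters - detractors) / len(responses)
```

Note that the arithmetic says nothing about how the question was delivered; a score of 20 computed from surveys that themselves annoyed customers is already contaminated by the yardstick.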
(Wouldn’t it be funny if operational metrics worked the same way? Your response back to the customer takes longer than they expect, and they respond by asking what Service Level Agreements (SLAs) you have in place, whether you’re measuring average response time, and if not, why not, because you take too long to respond. They’re probably a support wonk.)
The "Why" of CX Measurement
Keeping in mind that the way you go about measuring customer experience is part of the experience you’re delivering to customers, you’ll want to be very clear about the “why” of your measurements². This means being clear about your overall CX strategy and how measurement fits into that strategy. The book Outside In has a good take on this:
The strategy discipline is critical because it provides the blueprint for the experience you design, deliver, manage, and measure. Without it, you, your employees, and your partners won’t know whether to deliver an experience like the one at Costco, at Apple, or somewhere else entirely.
If you’re not clear on your strategy and you get a negative survey response (or lots of negative survey responses), how do you know whether you should change? If an Apple store manager gets a complaint that their aisles aren’t wide enough and it’d be nice to buy in bulk and could the shopping carts please be larger, I’m guessing that won’t weigh too heavily on the manager’s shoulders. But if they get a complaint that the associate at the Genius Bar was not only rude but wasn’t in fact a genius, that might be something they take rather seriously. The strategy provides context for the measures.
This applies to operational metrics as well, though there’s less of a risk of your customer directly observing that your measurements are misaligned with your CX strategy. This is something we explored a few weeks ago when we looked at sales commission plans and how incentives drive behavior. What are incentives if not intended operational metrics, and if so, shouldn’t those intended operational metrics align with an overarching CX strategy?
The key takeaway here is that as you measure customer experience, both directly by asking customers and indirectly by recording operational data, you want to be clear about the strategy those measurements serve. Only then will your metrics be able to tell a common story. And hopefully the way you ask customers about their experience will be more thoughtful as a result.
Etc.
Things I’ve read:
How Private Equity is Ruining American Healthcare. This could also be called “Incentives Matter: US Healthcare Edition”.
I guess I didn’t read a whole lot of customer stuff this week, but I did paint my office over the long weekend, so maybe that’s why.
¹ Qualtrics has a way of breaking this down between operational data, “O-data”, and experience data, “X-data” (X’s and O’s, get it?). Here’s a quick overview. This framing ended up fitting pretty nicely with their eventual acquisition by SAP, since SAP is supposed to provide the O-data to Qualtrics’s X-data.
² I suppose there’s some value to “just getting the data” as a starting point, so you might add a few different surveys to get a baseline before coming back to clarify your survey strategy. The risk is that you treat surveys like lots of other projects: you put them in place and never take the time to revisit their purpose.