While brainstorming topics to write about this week, I had an idea to write a multi-part series on surveying customers. Why should you send surveys? What type(s) of surveys should you use? If you use NPS, will people make fun of you on LinkedIn?
As I began writing, I had a flash of déjà vu: have I already written about this topic? I didn’t make an entire series about it already, did I?
Not quite, but I have written about surveys a lot more than I had remembered. Of the 133 posts on Customers, Etc., 10 mention the word “survey”. Part of the fun of writing something a while ago and forgetting about it is returning to it later and going huh, I wonder if that still holds up?
This week, rather than write an entirely new post (or series) on surveys, I’m going to revisit some older content that’s still relevant today.
Measuring Customer Experience
The 6th post on this newsletter, Measuring Customer Experience, provides an introduction to CX management. If you’ve ever had the thought, “we should be measuring our customer experience”, immediately followed by, “I wonder what survey we should use”, this post is for you.
The most important thing to take away from this post is that the yardstick becomes the measure:
That’s the funny thing about experience metrics. Unlike operational metrics, how you approach measurement, that is, how you ask customers about their experience, becomes part of the experience itself and therefore affects the measurement. If you approach measuring CX in a clumsy way that is in itself a bad experience, you should expect customers to respond with frustration and perhaps even a rebuke of the measurement itself. The yardstick you use to measure customer experience necessarily becomes part of the measurement itself.
Brand, NPS Surveys
In my first semester of business school, I wrote Brand, NPS Surveys, which provides a high-level understanding of NPS:
I’ll tell you the moment that NPS clicked for me as a concept that’s more than just a survey. I was listening to an episode of the Prof G podcast with Scott Galloway—I can’t recall which one—and he was talking about why businesses such as Apple and Amazon spend so much on original content. “It’s all about NPS”, I recall him saying. …
In this context, “NPS” doesn’t really have to do with the survey. It’s more of a shortcut for “how well do consumers trust your brand?” Surveys can help you measure that, but it’s what’s behind the score that’s important, not the survey itself.
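If you’ve only ever seen the score and not the mechanics behind it, the arithmetic is simple: NPS is the percentage of promoters (those answering 9 or 10 to the “how likely are you to recommend us?” question) minus the percentage of detractors (0 through 6), with passives (7 or 8) counted in the denominator but otherwise ignored. A quick sketch, using made-up responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but neither add nor subtract.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative responses, not real data:
# 5 promoters, 3 passives, 2 detractors out of 10 -> NPS of 30
responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(nps(responses))  # 30
```

Which is exactly why the score alone tells you so little: an NPS of 30 compresses promoters, passives, and detractors into a single number, while everything interesting about brand trust lives in the distribution behind it.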
In another podcast with Scott Galloway, he mentions that “brand is a shortcut for due diligence,” meaning, we’re willing to spend less time researching a product or solution the more we trust the brand. I already trust Apple with my phone and my computer—how much effort do I really need to spend researching new headphones when I already trust Apple? Hence $12 billion in AirPods revenue.
Hopeful Customer Service
Not all surveys are about measuring overall experience. In Hopeful Customer Service, we explore customer satisfaction (CSAT) surveys, which are associated with a specific transaction and often with the quality of a particular agent.
What do you think happens when we take our focus off the numbers and instead turn our gaze to the humans on both sides of the relationship? Customer support agents end up feeling more like people, rather than a collection of numbers, which in turn means they can deliver a more human experience to customers. This isn’t rocket science—and the numbers are still there if you want to look at them—but it really is as simple as choosing what to focus on.
Information Flow
Information Flow is all about the systems behind surveys. If you’ve ever suffered from the malady of thinking it’s okay to just send a survey into the ether, get a score, and hope for the best, this post is for you.
One of the traps we can get into when we focus heavily on process analysis and minimizing costs is that we can become blind to positive reinforcing feedback loops that may not be modeled clearly in our processes. For example, one of the positive reinforcing feedback loops associated with customer support is that in delivering a remarkable customer experience, customers can become so enamored with the service they receive that they’ll go out of their way to tell their friends, creating an additional pipeline of new customers, who then can potentially receive remarkable service, tell their friends, etc., etc., etc.
One of the ways we fail to account for positive reinforcing feedback loops is we simply don’t have information about their existence. As CX leaders, it’s our job to make sure others within the organization understand the reinforcing feedback loops of which we’re a critical part.
(Note to self: I need to revisit all my posts about support systems.)
Hard-to-measure Quality
Sometimes the most important things are also the hardest to measure. In Hard-to-measure Quality, we talk about the distinction between measuring quality and promoting behaviors that increase quality.
One thing we did differently at FullStory was implement peer review of support tickets. We still did manager-led review, but peer review put each agent in the shoes of a manager, making them responsible for evaluating their peers on key questions related to support quality. When you’re responsible for evaluating your colleagues, you pay a lot more attention to the qualities that go into a good support response. You want to be fair, so you intrinsically put in the effort.
We found this approach to be incredibly effective at driving behavior that produced higher-quality responses. Granted, it was hard to measure the improvement—CSAT scores were already very high—but we heard qualitatively from speaking with customers and peers across the business that they valued the work we did on the support team. I wrote about how we arrived at this process in the post Beware of Zombie Values.
WeWork Customer Experience
In WeWork Customer Experience, I dive into what it felt like to be a first-time paying customer at WeWork. My favorite takeaway:
Create a closed feedback loop with your customers. You can’t always get the experience perfect, but if someone takes the time to share about their experience, follow up with them and share specifically about how their feedback will be used. This is another one of those areas where it might seem really hard to implement, but you just need the right systems in place to get the physics right, after which everything just works.
Hint: if you’re rolling out a customer feedback/survey tool, make sure “create a closed feedback loop with customers” is a clear part of your goal. If the main goal of your rollout is, for example, to “get a [NPS|CSAT|CES] score from customers”, you’ll likely end up with a score, but responding to customers will seem like too much effort.
What else?
If you’re reading this post via email and you’ve made it this far, could you hit reply and tell me what else you’d like me to write about when it comes to surveys? And if you’re not yet a subscriber, click the button below to get Customers, Etc. in your inbox each week.