How to Make your Survey Better than Nordstrom, Lowe’s, and Wal-Mart

by Martha Brooke on June 22, 2017

I know you get asked to take surveys all the time, because I do. Even the shortest business trip results in at least 5 surveys: Delta wants to know about your flight; Hilton wants to know about your stay; Enterprise asks about your car rental, and on and on. But the most prevalent survey of all is the one at the bottom of your sales receipt, the request from Apple, Kohl’s, Nordstrom, Target, and virtually all retailers to “tell us how we did.”

So last fall, two of my analysts and I set out to measure the quality of those point-of-purchase surveys (Point-of-Purchase Survey Study). We thought it would be interesting to know what level of science and engagement the nation’s largest retailers bring to their surveys. The surveys say they want to know about our experiences as customers, but do they really want to know? Or is this just PR spin?

Well, friends, unfortunately… it’s PR spin. The nation’s largest retailers run tragically poor customer satisfaction surveys: they’re bad for customers, bad for companies, and a waste of time and money all the way around.

So what are these big retailers, like Amazon, Apple, Wal-Mart, Kohl’s, and Target, doing wrong? Are there lessons that can be learned from their mistakes? And how can you make your survey better than some of the biggest companies in the world?

Let’s look at the two main problems: First, the vast majority of the surveys were riddled with biases, so we can’t imagine they provide anything but highly skewed data. And second, most of the surveys failed to show that the companies care about their customers and the experiences they had.

First, the problem of bias. These surveys displayed five types of bias, each undermining data accuracy in a different way.

  1. Leading Questions— Known within psychology as priming, leading questions are designed to elicit a particular response. Ace Hardware asked: “How satisfied were you with the speed of our checkout?”

    This question is phrased in a way that assumes the customer is at least somewhat satisfied.

  2. Forced Wording—The Gap asked customers: “Rate your agreement with the following statement: The look and feel of the store environment was very appealing.”

    “Appealing” is a weird word. It’s probably not how customers think about their experience in a store like Gap. They’d be more likely to think “it’s a mess,” “that was fun,” or “it’s well-organized.” Furthermore, the question seems to have an agenda behind it, as if Gap executives want to hear that their store environment was very appealing.

  3. Faulty Scales—Wal-Mart asked its questions on a 1-10 scale. This scale introduces two problems: first, there is an even number of selections and therefore no true midpoint. The mathematical midpoint of a 1-10 scale is 5.5, which customers can’t select: choosing a 5 implies a lower-than-neutral score, while choosing a 6 implies a higher-than-neutral score.

    The second problem with Wal-Mart’s scale is that there is no zero, and some experiences are just that: zeroes. Not sort of poor, just plain bad.

  4. Double-Barreled Questions—This is where one question asks about multiple topics, usually two (or more) questions compressed into one. Lowe’s asked customers: “Were you consistently greeted and acknowledged in a genuine and friendly manner by Lowe’s associates throughout the store?”

    Here, we see four questions in one. Yikes! Does Lowe’s want to know if the customer was greeted OR acknowledged? And was that greeting/acknowledgement friendly OR genuine?

    Imagine Lowe’s finds that 85% of customers say “No,” they were not consistently greeted/acknowledged in a genuine/friendly manner. Obviously they need to make improvements—but what? Their greetings or their acknowledgements? How friendly they are or how genuine they are?

    The best survey questions provide clear and actionable insights. To improve, Lowe’s should instead divide this question into four, or even better, consider what they really want to know and devise a clearer way to ask it.

  5. Question Relevance—Ace, Gap, JC Penney, and O’Reilly Automotive all asked about their associates’ product knowledge (e.g. “Please rate your satisfaction with the employee’s knowledge of parts and products.”), and none of these retailers offered an N/A option. It’s likely that a large portion of shoppers never asked an associate a question and so had no way to answer accurately.

    There are two ways to ensure questions are relevant to the customer. One is to use survey logic and gating questions such as “Did you ask an associate for help?” Only customers who respond “Yes” are then asked about the associate’s product knowledge.

    Another way to do this is even simpler: offer the N/A option. That way, when the question is irrelevant, bogus responses won’t clog up your data.
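To make the gating approach concrete, here is a minimal sketch of survey branching logic in Python. The question wording and flow are hypothetical, invented for illustration rather than taken from any retailer’s actual survey:

```python
# Minimal sketch of a gated survey question (hypothetical wording).

def run_survey(ask):
    """ask() poses a question and returns the respondent's answer as a string."""
    responses = {}
    responses["asked_for_help"] = ask("Did you ask an associate for help? (yes/no) ")

    # Gating: only customers who actually spoke with an associate
    # see the product-knowledge question.
    if responses["asked_for_help"].strip().lower() == "yes":
        responses["product_knowledge"] = ask(
            "How satisfied were you with the associate's product knowledge? (0-10) ")
    else:
        # Everyone else skips it entirely, so irrelevant answers
        # never enter the data in the first place.
        responses["product_knowledge"] = None

    return responses

# Example: run interactively from the console.
# print(run_survey(input))
```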

On top of the myriad data accuracy issues, our Point-of-Purchase Survey Study showed that retailers have little regard for their customers.

For example, Wal-Mart asked 4 introductory questions irrelevant to the customer’s experience, and required the input of 2 receipt codes. Really? That’s a hassle.

But the biggest, most consistent engagement mistake? Many of the surveys were just too long—the average length was 23 questions. A survey should certainly never take longer than the interaction itself; in fact, it should take less time.

Family Dollar asked a whopping 69 questions in their survey—at 10 seconds a question, that’s more than 11 minutes spent reflecting on items that cost a buck.

Designing a quality customer satisfaction survey is a process, requiring multiple edits to reach the best version. Throwing in every question is how NOT to design a survey. Think about what you want to know, and carefully craft your questions.

It’s also important to set expectations at the outset, communicating how long the survey will take, and then meeting that expectation. Nordstrom advertised their survey as 2 minutes, but with 25 questions it took closer to 5 minutes.

Most retailers didn’t provide any estimate of survey length, and instead simply let their customers click into the abyss.

To execute a customer satisfaction survey that’s better than just about every major retailer, get serious about accuracy and engagement:

  • Ensure your survey collects accurate and actionable data. Eliminate biases such as leading questions, forced wording, and faulty scales.
  • Make every question clear and relevant to the customer.
  • Show the customer that you respect and value their time by designing a survey that only asks what’s necessary and that states at the outset how long it will take.

If you follow even a few of the guidelines we’ve provided here, your survey will be leagues ahead of the biggest companies in the world. For additional hints about how to improve the quality of your customer feedback, get our Genius Tips. And if you’re interested in learning more about our first-of-its-kind Point-of-Purchase Survey Study, check out the 2-minute video or ask us for the complete report.

Finally, as always, if you have questions about your own customer satisfaction survey design, say hello; we’re happy to help.

One Concept that Improves All Your Customer Service Interactions

by Martha Brooke on May 18, 2017

In the US market alone, there are hundreds of customer service consultants offering thousands of customer service improvement strategies, which raises the question: does anyone need yet another customer service improvement plan? I think, decidedly, yes, for the simple reason that most customer service remains lackluster and inconsistent—while executives routinely believe their customer service is better than it really is. (For more information on this, just ask; we’re happy to share.)

So why does customer service tend to be largely reactive, inefficient, and overly transactional?

From having evaluated tens of thousands of customer service interactions, I find that when customer service disappoints it’s almost always because it has been managed in an overly general, cookie-cutter way. The result is that customers are treated more similarly than they really are, as though they have the same needs, expectations, and perceptions. But of course, that’s not true. Each customer is unique, making their inquiries at least a little bit different. So when companies treat everyone the same, rarely are customers fully engaged or completely satisfied.

Antidote! What I outline here is a plan that actually improves customer service. I know this plan works because we’ve been using it for more than a decade to improve customer service for clients in a wide array of industries. And the reason it works is that the entire plan hinges on a single proven concept—one that’s paid huge dividends for our clients: specificity. That’s specific ways to add value, relative to specific scenarios, measured by specific scoring rules, summed with specific metrics, and, last but not least, coached with specific model answers to build the necessary customer service skills.

If your immediate reaction is, “…but that’s not scalable!”, I assure you it is. There’s a well-crafted process behind this plan, so it’s actually more scalable than the usual approaches to customer service that are less formally conceived.

Step 1: Decide What Specifics You Will Add

First, you need to decide what specific, extra value you can add to each customer service interaction. This “specific extra” becomes a way to involve your associates—and it’s a powerful way to create a lasting, positive impression in customers’ minds. Examples of “specific extras” include brief, meaningful educational content; or a policy that is clearly and frequently articulated like Zappos has with its easy-to-return shoes.

Adding value through “specific extras” is about consistently doing a little bit more, on top of addressing the question at hand or solving the problem.

Where to start? Gather your customer service improvement team and brainstorm. Then see how each of your good ideas can actually play out in real interactions. Sometimes those great ideas are clumsy when put into execution. So adding a specific extra is both imaginative and iterative, and requires a little bit of trial and error to land on what’s right for your brand and goals.

Step 2: Take a Complete (and Specific!) Inventory

In order to improve your customer service, you need a clear and specific picture of who contacts you and why. Don’t assume customers who ask the same question need the same answer. And don’t assume that your customer service reports or software analytics are picking up on unique scenarios, because at present, software is not sophisticated enough to tease out this level of nuance.

The solution is to observe a statistically-valid number of your customer service interactions (emails, chats, face-to-face, etc.) and classify them by touchpoint, inquiry type, customer state of mind and customer objective.

Once you’ve figured out the possible combinations of touchpoints and customer characteristics, you’ll have your list of specific customer scenarios.
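As a rough illustration of how touchpoints and customer characteristics combine into a scenario list, here is a minimal sketch; the categories are invented, and yours would come from observing your own interactions:

```python
from itertools import product

# Hypothetical classification categories -- yours come from observing
# your own customer service interactions, not from this list.
touchpoints = ["email", "chat", "phone"]
inquiry_types = ["order status", "product question", "complaint"]
states_of_mind = ["calm", "frustrated"]

# Each combination is one specific customer scenario.
scenarios = [
    {"touchpoint": t, "inquiry": i, "state_of_mind": s}
    for t, i, s in product(touchpoints, inquiry_types, states_of_mind)
]

print(len(scenarios))  # 3 * 3 * 2 = 18 distinct scenarios to score
```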

Step 3: Define Specific Criteria

You can’t manage what you don’t measure. So for each unique customer scenario, develop specific scoring rules. When figuring out what to measure for each customer scenario, start with the four dimensions common to all customer service interactions:

  • Timing: Was the customer’s time valued?
  • Information: Were the customer’s questions answered clearly, accurately, and proactively?
  • Connection: How engaged was the associate? Was the interaction tailored to the customer’s situation?
  • Differentiation: Did the associate demonstrate that your company is special in some way?

To make your scoring rules usable, break the four dimensions down into specific elements (usually there are between 8 and 20 elements) and weight these elements depending on what’s most relevant to the specific scenario.

For example, when a caller asks a retailer where their package is, connection and information will be most important. But when a caller asks about products they have not yet bought, providing persuasive information and differentiating your brand will matter most.

There is no doubt that developing scoring rules that measure each element is extremely time-consuming. But to be accurate (and truly useful), scoring rules must be specific and include explanations about how to apply each rule.
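As a bare-bones sketch of scenario-specific weighting (the four dimensions come from the list above; the weights and scores are invented for illustration, and real scoring rules would include written guidance on applying each one):

```python
# Sketch of scenario-weighted scoring. Weights are hypothetical.

WEIGHTS = {
    # "Where is my package?" call: connection and information dominate.
    "package_status": {"timing": 0.20, "information": 0.35,
                       "connection": 0.35, "differentiation": 0.10},
    # Pre-sales call: persuasive information and differentiation matter most.
    "pre_sales":      {"timing": 0.10, "information": 0.40,
                       "connection": 0.15, "differentiation": 0.35},
}

def score_interaction(scenario, element_scores):
    """element_scores: each dimension scored 0-100 per the scoring rules."""
    weights = WEIGHTS[scenario]
    return sum(weights[dim] * element_scores[dim] for dim in weights)

print(score_interaction("package_status",
      {"timing": 90, "information": 70, "connection": 80, "differentiation": 50}))
# -> 75.5
```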

Step 4: Track Specific Metrics

Measure often and keep track of progress using specific metrics based on the elements you’ve defined in your scoring rules. Metrics that are specific show you exactly where and how you need to improve. Metrics that lack specificity (read: Net Promoter Scores and CSAT scores) don’t give you exacting details about where your customer service is going wrong.

Manage and share these metrics with a dashboard that enables you to coordinate improvement efforts across teams. Dashboards are also a great way to engage associates with the customer service improvement process.

Step 5: Provide Specific Examples

Finally, provide specific examples that show associates exactly what you are looking for in how they handle each specific scenario. If you can’t show your associates model answers, you’re missing a vital tool, because while it’s possible that associates could build out these models, they probably don’t have the time.

And without clear models, while some of your associates may make great choices, the fact is, some could unknowingly tarnish your brand.

Specific examples may sound like you want rote answers to customers’ questions, but you don’t. To prevent that hollow, robotic quality that creeps into customer service, coach associates on the structure behind each model answer, giving them the customer service skills they need to improvise off those structures and develop their own unique responses.

Superior customer service is specific, and specificity is the key to customer service improvement. It’s about specific ways to add value, understanding specific customers and their specific situations, measuring with specific scoring rules, tracking specific metrics, and providing specific examples that give associates the skills they need to deliver the highest levels of customer service. When you follow these five steps and embrace this concept of specificity, you will be well on your way to improving your customer service.

Perhaps the best way to think about this ‘specificity concept’ is as a mindset that actively focuses on awareness of variation and difference. This is a decidedly different way of thinking and perceiving that social scientist Dr. Ellen Langer describes as mindfulness, in contrast with the usual mindless ways most of us tend to our experiences. Let me know how it goes!

May 2017: Dedicated to Improving Customer Service & Measuring CX ROI

by Martha Brooke on May 4, 2017

Martha will be speaking at two events this month. On May 8, you can find her in Los Angeles at the ASOA ASCRS Annual Congress, sharing the stage with Randy Baldwin, CareCredit VP Industry Marketing. They’ll be presenting “10 Laws You Need to ACE the Customer Experience.”

Attendees will learn how to improve customer service in their ophthalmology practices. Some of the specifics covered will include: how to meet different patients’ needs, building trust through specificity, and best practices for greeting customers. Attendees will walk away with clear next steps and concrete actions to improve customer service immediately at their own practice.

Back home on May 11, Martha will be speaking at the Portland CXPA Luncheon, along with Matthew Selbie of Opiniator, and Sherrie Austin of Maritz. The topic is ‘Showing ROI on Your CX Investment’, and each will share examples of what real clients have been able to achieve by improving the customer experience.

6 Steps to Improve Your Customer Satisfaction Surveys

by Martha Brooke on March 24, 2017

I will be moderating conversations about VoC next week at the Operations Summit. It got me thinking: it’s practically a given that every company will issue a customer satisfaction survey as part of their VoC program. But it’s NOT a given that every survey will improve customer satisfaction.

Think about your own satisfaction survey for a moment. Are you collecting accurate data? Is the data actionable? Are you able to identify clear gaps and opportunities?

Customer listening programs often suffer from a host of flaws and biases. In fact, in our recent study of point-of-purchase surveys we found that the largest US retailers pack their surveys with tired, biased, and often irrelevant questions.

And when clients come to us with their surveys, here are some of the common flaws:

Surveys so long they alienate customers.

Surveys that force customers to choose from irrelevant multiple-choice options.

Surveys whose customer comments never get properly analyzed.

Good surveys produce good data, and good data reflects the experiences your customers actually have with your company. Good data also shows where you need to improve.

This 6-step process will improve your VoC program by providing a customer satisfaction survey that gets to the heart of customers’ expectations, their perceptions and how they feel about their experiences with you.

  1. Evaluate your current survey(s) and map your unknowns.

    Work through your current survey(s) to identify irrelevant questions and biases. Check for:

    Neutrality: Are your questions impartial so you don’t force the answers you want?

    Engagement: Are questions conversational, so that customers want to respond?

    Relevance: Are you employing branching logic to ensure you’re maintaining relevance with customers throughout your survey?

    Sampling Biases: How well do your respondents actually represent your customer base?

    Actionability: Are you asking for information that can be put to use?

    Next, take a step back and consider what you don’t know—where might you have gaps in your understanding of customers’ journeys? Are there areas from your previous surveys that were inconclusive?

  2. Tailor your language.

    Think about your industry and customers. How would your customers describe their experiences with you? Ask your team:

    Who are our customers? How engaged are they?

    What words do they use?

    What’s most relevant to them?

    A classic example comes from the hospitality industry. Hotels often ask about the quality of “housekeeping” on their surveys—but when customers open their hotel room door, they aren’t looking for “housekeeping,” they’re looking for “clean.” Tailoring your survey’s language to match the customer’s is how you uncover the best data about how customers really feel.

  3. Develop branching logic.

    Consider your customers. Have you done a persona study? Does each persona interact with different touchpoints? For example, don’t force an end-user to click mindlessly through questions specific to distributors; it will result in junk data.

  4. Draft your questions. Iteratively.

    If you think a survey can be built in a day, you’re wrong. You’re asking customers to spend their valuable time taking your survey, so you’ll need to spend your valuable time building it.

    Questions should be put through detailed development and rigorous review processes. Return to step 1, and vet your newly drafted questions against the list of common problems. Then edit, and edit again. In fact, we recommend getting internal AND external feedback on your survey questions—before you edit one last time.

  5. Code and analyze the data.

    Once you’ve got your survey responses in, it’s time to find the signal in all that noise. Hopefully you have a large, statistically significant set of respondents, so your findings are predictive.

    As part of your survey analysis, it is critical to code the open-ended comments. And by code we don’t mean simply read or make a word cloud. You need to scientifically parse and categorize the comments, because this is how you bring that data to life in meaningful, actionable ways. (A bare-bones sketch of comment coding appears after this list.)

  6. Present your findings—graphically.

    To get your team on board with your VoC results, curate your metrics down to a simple few and incorporate infographics. Use a dashboard to get everyone involved with the data and next-step actions.

    Some great customer experience metrics that we advocate for include: Quality of Customer Interaction™, Customer Effort, Competitive Edge, and Persuasion Scores.
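Returning to the comment coding in step 5: here is a minimal sketch of keyword-based categorization in Python. The codebook is invented for illustration; in practice, categories are derived from the comments themselves, refined by human coders, and go well beyond simple keyword matching:

```python
from collections import Counter

# Hypothetical codebook -- real categories come from the comments
# themselves, not from a list like this.
CODEBOOK = {
    "wait_time":   ["wait", "line", "slow", "forever"],
    "staff":       ["associate", "employee", "rude", "helpful"],
    "cleanliness": ["dirty", "clean", "mess"],
}

def code_comment(comment):
    """Return every category whose keywords appear in the comment."""
    text = comment.lower()
    matches = [cat for cat, words in CODEBOOK.items()
               if any(word in text for word in words)]
    return matches or ["uncoded"]

comments = ["The line was slow but the associate was helpful.",
            "Store was a mess."]
tallies = Counter(cat for c in comments for cat in code_comment(c))
print(tallies)  # Counter({'wait_time': 1, 'staff': 1, 'cleanliness': 1})
```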

Not all surveys are created equal. In fact, many customer satisfaction surveys are disengaging and result in inaccurate data. But our 6-step process gives you the framework you need for a stellar survey—one that collects accurate customer feedback, motivates teams to improve in specific ways, and shows customers their voices are heard!

GUEST POST: Jerry Sokol on Customer Experience Metrics

by Martha Brooke on December 20, 2016

Jerry Sokol writes for Biz Meets Tech and he recently wrote about something that’s always top of mind at Interaction Metrics—good customer experience metrics.

Numbers are only useful insofar as they help you improve—simply tracking outcome metrics or just collecting data is never the answer. The right customer experience metrics don’t just tell you that customers are unhappy—they show you where, why, and what specific customer types are the most displeased.

If your numbers can tell you the specifics of the problem, then next steps become abundantly clear. Jerry explains the dilemma businesses face when it comes to customer experience metrics, and poses good questions about how to improve customer service.



Jerry Sokol: Business thrives on numbers. Because numbers remove ambiguity and allow for clinical decisions. Now, of course numbers can be misused (“lies, damned lies, and statistics”), but for now, let’s sidestep that and talk about numbers being used for good.

Finance does this best. Finance metrics all boil down to “how much did this cost and how much did I make?” Very simple. Since money is involved, you had better believe that the backup is there to not only support those numbers, but help point to how to improve them.

As we get away from finance, things are not so clear. We want metrics that unambiguously tell us what’s going on with our operation, much as profit and loss do for finance. In fact, we want metrics that can, in some way, eventually correlate to profit and loss. And I would submit that most of them aren’t very good.

This is particularly true in customer care. Don’t get me wrong – the standard call center metrics absolutely have their place, and I’ll talk about that later. But they correlate very poorly to customer satisfaction, real cost and true agent performance. What’s worse is that they are frequently misused to drive bad and costly practices.

The industry’s attempts to address this problem have been poor. The mantra is that “we are swimming in data”, and that through the magic of better organizational tools, the problem will be solved. We’ve had better charting of the same numbers, data warehousing, Business Intelligence tools, and now Big Data. Yet I still see the struggle for actionable information virtually everywhere I go.

From what I see, we’re going about this backwards. We are delivering metrics without first considering what QUESTIONS we want answered.

Given that the entire raison d’être of a call center is to address a caller’s concern, the #1 question to answer is “Did we solve the customer’s concern?” OK, First Call Resolution almost answers that, and is pretty well understood (if not used) by most call centers.

And the follow up is, “If not, why not?” Now it gets tougher.

Zooming out further, what about the most obvious question? How about “Why did they have to call in the first place?” This information – the tracking of call drivers – is rarely done well if at all.

With those three questions, I’m sure most of you could figure out what kind of information you would have to collect, and even come up with metrics that you could use. But, I’ll help more in future posts.
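To make those three questions concrete, here is a minimal sketch (the log fields and values are invented, not Jerry’s actual framework) of the information a call record would need to capture, and the metrics that fall out of it:

```python
from collections import Counter

# Hypothetical call log: each record answers the three questions directly.
calls = [
    {"driver": "billing error",  "resolved": True,  "why_not": None},
    {"driver": "billing error",  "resolved": False, "why_not": "needed escalation"},
    {"driver": "password reset", "resolved": True,  "why_not": None},
]

# 1. Did we solve the customer's concern?
fcr = sum(c["resolved"] for c in calls) / len(calls)
print(f"First Call Resolution: {fcr:.0%}")  # 67%

# 2. If not, why not?
print(Counter(c["why_not"] for c in calls if not c["resolved"]))

# 3. Why did they have to call in the first place?
print(Counter(c["driver"] for c in calls))
```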

As a final note, ACTIONABLE is the important word. Your metric can be useful, but if the backing data doesn’t point to how you can improve, well…



Jerry Sokol is an independent consultant who improves customer service by improving how a call center’s processes and technology work together and align to the business. Read more about him at bizmeetstech.net/

Voice of the Customer: It’s NOT About You!

by Martha Brooke on December 14, 2016

What’s the point of doing a customer satisfaction survey? Well, rather obviously, to gauge how customers perceive you, and where their expectations are being—and not being—met.

This requires walking in the customer’s shoes and designing your survey from the customer’s perspective. Sound simple? It is…sort of.

Unfortunately, organizations run a high risk of tunnel vision, becoming acutely interested in only the problems they think are most pressing. So they pack their questions into customer surveys, despite the fact that their interests have NOTHING to do with customers’ actual experiences.

Let’s look at an example:

Flawed Customer Satisfaction Survey

“Please rate the balance of graphics and text on *******.com.” (1-10, or Don’t Know).

The biggest problems with this question are:

  • It’s unclear: Does a bad rating indicate too much text, too many graphics, or both?
  • It’s out of touch with the customer experience: Customers don’t think, “Hmm, the graphics-to-text ratio has balance problems.” They DO think, “This webpage sucks, I can’t find a simple answer.”

The question of graphic balance is best saved for a web design or UX team. As I often say, your customers are not your analysts. Leave questions out of the survey that do not reflect the customer’s immediate experience.

Examples of questions that customers CAN answer and that provide actionable insight for your customer satisfaction survey are:

  • “Who else might you buy similar merchandise from in the future?”
  • “How would you rate your call with our support team?”
  • “Did you get an answer to your question that made sense?”

When you combine customer-centric survey questions with good customer feedback research methods, you’ll get to the heart of the customer experience. So take a hard look at your customer satisfaction survey and make sure it’s actually relevant to your customers. Start listening to your customers now!

GUEST POST: Customer Experience Measurement: Why it is Vitally Important but Badly Done! – by Ian Golding

by Martha Brooke on December 12, 2016

Today I’m pleased to share a guest post by Ian Golding. Ian drives home the critical point that customer experience metrics are essential to transforming experiences. It’s always great to share ideas about how to improve customer service from like-minded thinkers.


When it comes to the profession that Customer Experience has now become, one of the most important and significant competencies required by all organisations is that of measurement. The ability of a company to robustly and continuously capture a ‘fact based’ understanding of how capable they are of meeting customer needs and expectations is critical if they have a desire to achieve customer focused success.

However… there is usually one of those! Whilst many businesses may be confident that they are indeed ‘measuring’ the Customer Experience, my overwhelming concern is that too many are doing so badly. I recognise that this is a rather bold statement to be making, so allow me to explain the top 4 reasons why I think this:

1. Failure to measure sufficient ‘voices’!

I believe that the most robust Customer Experience measurement systems should contain a combination of three important ‘voices’: Voice of the Customer (VOC), Voice of the Employee (VOE), and Voice of the Process (VOP). Many organisations are (and have been) measuring VOC for a number of years. Although this is important, VOC in isolation from any other form of measurement may not enable those capturing it to understand the relationship between the things they do (processes that enable the customer journey) and the way they make the customer feel (customer perception). If this is the case, determining exactly WHAT needs to be ‘fixed’ to improve customer perception is challenging (and sometimes impossible).

Over the years, I have interacted with a number of organisations who capture VOC (and VOC alone) yet do not understand what they are supposed to do with it – what prioritised actions should be taken. If these businesses were complementing and correlating their VOC with VOP and VOE, they would be in a very different position.

2. Failure to measure the true ‘end to end’ customer journey

The theme of ‘not measuring sufficient voices’ continues in my next assessment of the state of Customer Experience measurement today. Whilst many companies are capturing at least VOC, too often the VOC they are capturing is NOT representative of the ‘end to end’ customer journey – and sometimes not of all types of customer. If you are only asking customers about a part or parts of the customer journey, the feedback being received will only be representative of the parts you are asking them about – it is not rocket science.

Despite this, I have observed too many scenarios where the measurement system is not representative and, as a result, an organisation in this position is likely to do one thing – JUMP TO THE WRONG CONCLUSION! It is so important to get to the TRUTH with Customer Experience measurement – nothing matters more than having a clear understanding of the things that need to be addressed to have the greatest effect on improving customer perception and, as a result, commercial goals. One thing I can say with certainty – if your customer metric looks too good to be true, it probably is!

3. Failure to align business process to the customer journey

The most robust Customer Experience measurement systems are structured by correlating business processes with the customer journey. If we think of all organisations as a combination of ‘layers’, whilst the top layer is the customer journey, the middle layer is made up of business processes. It is a business’s processes that enable the customer journey to happen. The third layer comprises the technology that enables business processes to deliver the customer journey…. Are you still with me?!

For a business to be ‘focused’ on knowing WHAT to address to improve the customer journey – in other words, to be able to narrow down to a small number of priorities – it must be able to measure ‘CAUSE and EFFECT’. If you measure how capable business processes are at doing what they need to do (the CAUSE), as they improve, you should see an improvement in the way customers feel about what you do (the EFFECT). This principle only works on two conditions – a) you measure all business processes that enable the delivery of the customer journey; and b) your business processes are ALIGNED to the customer journey.

In my experience, many companies have designed and implemented business processes (and technology, for that matter) without even knowing that the customer journey existed; too often, business processes do NOT align to the customer journey. As a result, it is difficult (and sometimes impossible) to correlate a change in the performance of processes with customer feedback. Getting this right – enabling the relationship between process and customer journey – is perhaps the most powerful way of understanding how to improve the customer experience (based on fact).
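A minimal sketch of the cause-and-effect check described above, with invented numbers: track a process measure (VOP) alongside customer perception (VOC) for the journey stage that process enables, and test whether they move together:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical monthly measurements for one journey stage.
on_time_shipping = [82, 85, 88, 91, 94, 96]        # the CAUSE (process capability, %)
delivery_rating  = [6.1, 6.4, 6.8, 7.2, 7.9, 8.1]  # the EFFECT (customer perception)

r = correlation(on_time_shipping, delivery_rating)
print(f"Process vs. perception correlation: {r:.2f}")
# A strong positive r supports the process-to-journey alignment;
# a weak r suggests the process isn't aligned to (or doesn't drive)
# this part of the customer journey.
```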

4. Failure to take ACTION

Failing to take ACTION on Customer Experience measurement is a BIG mistake to make. It is more common to see this being the case than you may imagine. All over the world, business leaders seem unsure as to whether or not they should be using measurement as the carrot or the stick! In reality, neither of these analogies is appropriate (in my opinion).

When it comes to Customer Experience Measurement, the number – an NPS score for example (which some may see as either the carrot or the stick) – is not the sole reason for measuring. Regularly I come across leaders who are focussed on the score and the score alone – almost obsessively. I do believe that a score is important – to link to business strategy and to act as a ‘talisman’ for everyone to follow, BUT unless you know what to do with the measurement being captured, it is POINTLESS.

There are more examples I could add of what I consider to be ‘bad’ examples of Customer Experience measurement – but rather than me rambling on, I would love for you to add your reasons by commenting on this article. As a profession, it is the responsibility of Customer Experience Professionals all over the world to educate businesses how to measure the Customer Experience well – I hope you will take some time to add your expertise for others to learn from!

About Ian Golding:

A highly influential freelance CX consultant, Ian advises leading companies on CX strategy, measurement, improvement and employee advocacy techniques and solutions. Ian has worked globally across multiple industries including retail, financial services, logistics, manufacturing, telecoms and pharmaceuticals deploying CX tools and methodologies.

Retailers Waste Time with Critically Flawed Surveys

by Martha Brooke on November 1, 2016

It seems that we’re asked to take a customer satisfaction survey with nearly every purchase. But do you ever wonder…do they really care what I have to say?

Our 2016 Customer Listening Study, the first of its kind, evaluated the customer satisfaction surveys of 51 top US retailers. The main finding: retailers like Lowe’s and Wal-Mart waste customers’ time—and their own—with critically flawed surveys. No company was completely scientific in its approach; nor did any company fully connect with customers in a thoughtful, compelling way.

Yet retailers issue millions of customer satisfaction surveys daily—which raises the question of whether these surveys are worth the paper they’re written on. To find out, we objectively evaluated 15 survey elements, in areas such as information quality, customer engagement, and branding cues.

The average survey quality score was 43—an F grade. We found two main problems with the surveys:

    • They collected largely inaccurate data.
    • They failed to demonstrate active customer listening.

We also found that:

    • With 23 questions on average, the surveys were excessively long.
    • 32% of all questions led customers to give answers that companies want to hear.
    • 7-Eleven had the best survey—it was 13 questions, none of which were leading or used biased wording.
    • Family Dollar had the worst survey—it had 69 questions, 29 of which were leading.
    • Nordstrom, the retailer best known for customer service, stated its survey would take 2 minutes—but with 25 questions, it took 4-5 minutes.

This study highlights how easy it is to produce a flawed survey. The findings should be considered by any company with a customer listening program.

To get more value from their customer satisfaction surveys, retailers should apply a scientific methodology, and be sure to connect with customers to show they’re listening.

The retailers selected for the 2016 Customer Listening Study were the National Retail Federation’s (NRF) top retailers, omitting supermarkets and membership stores. Surveys were collected between June 23 and July 27, 2016. Download the Study Report, or watch the 2-minute video.

Have a question about the Customer Listening Study, or want to learn about designing an intelligent customer satisfaction survey? Drop us a line.

3 Ways to Get More from Your Mystery Shops

by Martha Brooke on October 14, 2016

At conferences, I’m often asked about mystery shopping, and it got me thinking: there are a lot of misunderstandings about what mystery shopping can and can’t do.

Some see mystery shopping as a simple check-in; others see it as too artificial for an accurate customer service evaluation. Many executives assume they get the same insights from reviews, social media, and direct customer feedback.

Others think it’s all about consumer retail, such as clothes shopping at the mall. They are wrong. Mystery shopping has tremendous value for B2B as well as B2C, and is vital to a thoughtful and well-vetted customer service evaluation.

Obviously mystery shoppers are NOT real customers, and don’t always reflect your average customer—whether that’s a machine parts distributor or a fashion-forward teen. What mystery shopping offers is a high-precision tool to improve customer service in your most vulnerable areas. In addition, it’s highly customizable for different goals. And because it examines what actually happens, it enables you to track actionable customer experience metrics.

Here’s how to use mystery shopping to improve customer service:

1) Use mystery shopping to conduct a thorough, airtight customer service evaluation.

Designed and performed well, mystery shopping ensures that nothing slips through the cracks. It can test almost anything, plus track specific customer experience metrics, such as how associates:

+ Solve problems
+ Explain information
+ Represent your brand
+ Handle different personas (e.g. skeptical, confident, or angry customers)

Testing many things at once is an efficient way to target weak spots, focus on goals, and measure frontline performance against precise criteria.

2) Use mystery shopping for the most accurate apples-to-apples comparison with your competition.

You don’t exist in a vacuum; competitors are always part of the equation, but it’s hard to get an accurate comparison. Customer satisfaction surveys reveal how customers perceive you but they don’t measure concrete differences between you and your competition.

Mystery shopping looks at actual performance, using the same criteria to evaluate you and your competitors for an objective comparison. For example, for an investment strategies client, we used a high net worth persona to call nine of our client’s competitors asking similar questions about market volatility. This enabled us to show our client best practices from the field for handling this specific type of question.

3) Use mystery shopping to test your most difficult situations.

Most of your customer interactions are probably fairly cut-and-dried, with little risk involved. But for every fifty interactions, you might have one critical opportunity to keep or lose a customer. Mystery shopping is the best way to test how your frontline handles these high-risk interactions.

For example, for a client in healthcare, we designed a scenario in which a parent called in with her child having an asthma attack—a rare event, but one critical to our client’s brand when it did occur. If you simply listen to twenty-five calls, it’s unlikely you’ll run across high-risk situations like these. Mystery shopping homes in on the moments when your brand and customer loyalty are most vulnerable.

In short, mystery shopping takes the mystery out of customer service. It ensures the most comprehensive customer service evaluation, accurately compares you with competitors, and tests high-risk situations. Done well, it provides actionable customer experience metrics, and uncovers clear steps to improve customer service.

Mystery shopping incorporates Interaction Thinking™ because it recognizes that customer interactions are composed of many nuanced details and elements. When designed to capture this complexity, mystery shopping pinpoints where you need to make the greatest headway. So, to improve customer service efficiently, incorporate expert mystery shopping into your current customer service evaluation program. You’ll get clear, actionable insights on where to improve most, and the concrete next steps to get you there.

A Really BIG Customer Satisfaction Survey No-No

by Martha Brooke on September 28, 2016

Our company recently began working with a website optimization service. As the introductory phone call drew to a close, our new account manager asked me to take a customer satisfaction survey—and told me, “That’s how I get paid.”

This was useless, awkward, and inappropriate. It showed that what I had to say didn’t really matter. Plus, this was my account manager—there’s no way I’d give him a poor rating that might sour our weekly phone calls.

This entire approach to customer satisfaction surveys ran counter to Interaction Thinking™ because it overlooked how interactions can create value for the company and customer alike. The company could have gotten accurate data (we’ll get to that in a minute), and the customer could have had a great onboarding experience, unencumbered by feelings of obligation or guilt.

Furthermore, the survey was painfully generic. For example, one question was the ubiquitous Net Promoter Score (NPS): “How likely are you to recommend us to a friend?” Not only is NPS so overused that many customers are numb to it—it’s often just irrelevant. Who would I recommend my account manager to? Most of us don’t discuss niche web services with friends. The rest of the questions were equally trite and focused on broad outcomes, not specific nuances.

By the way, if this had been a tech support call, linking employee pay to survey ratings could favor quick fixes that might seem right at first—but don’t fully resolve the issue, and leave customers calling back a week later.

Now, about that data: unfortunately, this company’s survey only gathered selective, biased feedback and inaccurate customer experience metrics. It revealed no valuable insights about the actual quality of the customer experience, or how to improve.

If companies want to master the customer experience to build customer loyalty, they need satisfaction surveys that collect accurate, valuable customer experience metrics—while never sacrificing positive, worthwhile experiences for customers.