I recently had an experience that highlighted a common flaw among organisations that conduct customer satisfaction surveys.
It all started when I dropped my phone in some water and approached Optus for a replacement. The new phone arrived quickly, but I soon discovered it came with its own set of problems. I was told, rather unhelpfully, that because it was a replacement rather than a new phone, it would need to be sent off for repairs and I would be without a phone for up to two weeks. Fortunately, I chose to visit the Apple store and they replaced the faulty phone on the spot. Crisis averted!
Optus then called to conduct a survey about my satisfaction with their service, asking me a series of questions about the professionalism, timeliness and courtesy of the call centre staff and my interactions with them. There was just one problem: they didn’t ask whether I was satisfied with the outcome. While the service was commendable, they didn’t resolve my problem, and I wasn’t given an opportunity to share my feelings about the experience.
It’s a common problem among both public and private organisations. Mark Friedman, who developed the Results-Based Accountability model of evaluating programs and services, said the world’s shortest survey would comprise two questions:
- Did we treat you well?
- Did you get a satisfactory outcome?
Many organisations ask variations of the first question while ignoring the second. Of course, we all have an obligation to provide good customer service; without it, we may not be able to achieve the broader outcomes of the program or service. However, without knowing whether the customer achieved a satisfactory outcome, it’s difficult to measure results and improve future performance. A survey needs to balance both questions to yield insights that are genuinely useful to your organisation.
If you run programs or services where the customer’s satisfaction with the outcome is less relevant, such as advisory, regulatory or enforcement programs, there are other ways to frame the second question. In these cases, you should ask questions that ascertain whether the service has contributed to changes in behaviour. For example: Did you acquire any skills in this training program that you can apply in the workplace?
Framing the questions in terms of behaviour rather than reactions (‘would you recommend this service?’ instead of ‘how did you feel about this service?’) will also provide you with more useful and accurate information with which to evaluate your programs and services. This feedback will often highlight common problems, which can form the basis of your key performance indicators for monitoring organisational performance.
According to Optus, I had a great experience, but my feedback didn’t tell the whole story. By balancing their survey with questions about both satisfaction and outcomes, they’ll get a truer picture of my experience and can address problems in the delivery of their service.
Nexus has substantial experience working with organisations to develop and evaluate customer satisfaction surveys that improve organisational performance. Contact us to find out more.