Speech analytics has huge potential, but extracting the right data – the data that will genuinely boost the customer experience – is trickier than it looks.
Let’s take the example of a company with 8 million customer calls a year, which is typical of the customer services team in a large corporate. Usually there will be a quality assurance (QA) team that analyses calls for compliance reasons. Typically they will manage to listen to no more than 0.7% of all customer conversations – roughly 56,000 of those 8 million calls – which is tiny.
However, computational speech analytics can listen to all of them, and many large corporates already have such a system. Speech recognition can analyse every conversation (with about 70% accuracy) and identify words and phrases that indicate whether the customer is receiving the service they should be, or is becoming increasingly unhappy or dissatisfied.
But it is much more difficult to get right than many organisations realise. There are numerous pitfalls, and the ‘art’ of programming the speech analytics to seek out the conversational data that presents a true reflection of how the customer feels is genuinely complex.
It’s a long way from being an ‘out of the box’ solution, as a number of organisations have discovered to their cost. It takes a whole new set of skills to analyse the data in a manner that will actually drive up customer satisfaction levels. That means doing it without bias, and not in a way that encourages advisors to game the system.
When speech analytics goes wrong
The first encounter I had with computational speech analytics done badly was in a large contact centre operation around four years ago. My job as a conversational analyst was to listen to the calls coming in and determine which ones were driving a great customer experience, and which ones were not.
The organisation had put a speech analytics system in place and removed their QA teams at the same time in the belief that they were no longer needed because the computer was doing their job.
Yet the CSat results were falling steadily, repeat contact stood at about 16%, the level of complaints wasn’t shifting, and average handle time (AHT) wasn’t improving.
We did some deep analytics on their data and looked at 33,300 customer interactions.
It turned out there was zero correlation between the speech analytics insight and the actual level of customer satisfaction, as stated by the customer at the end of the call.
When ’sorry’ seems to be the wrong word
But deep within those conversations there were some really fascinating findings that have gone on to inform a lot of what we do now. For example, making an apology made no difference to the level of customer satisfaction. Neither did being polite and personable.
These were givens that might have impressed customers 10 or 15 years earlier, but now made very little impact. ‘Smile when you dial’ is no longer sufficient; it’s a hygiene factor.
The out-of-the-box speech analytics programme would give an advisor a green tick if it heard an apology, because it was coded to think apologies were good. If it didn’t hear an apology the advisor would be marked red.
Consequently advisors were apologising for everything! And it was just making customers more and more dissatisfied.
Computers can be biased too
Another measure where call advisors could earn a green tick was whether they finished a call with the phrase ‘is there anything else I can help you with?’ That too became an automatic way of playing the system, even when it was having zero effect.
Worse still, some of the coding had infused the analytics with a personal bias. We discovered that if an advisor used the common, colloquial term ‘to be honest’ they were immediately scored down. That was because one person in the IT department who coded the system personally hated that phrase and, without any input from anyone else, built it into the coding as something that should never be said.
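The scoring described above amounts to a handful of hardcoded phrase checks. A minimal sketch of that kind of rule set – the phrases come from this article, while the point values and function name are illustrative assumptions:

```python
# Sketch of naive keyword scoring of the kind described above.
# Phrase list taken from the article; the +/- point values are assumptions.
RULES = [
    ("sorry", +1),                                       # any apology => green tick
    ("is there anything else i can help you with", +1),  # scripted closer => green tick
    ("to be honest", -1),                                # one coder's personal dislike
]

def score_call(transcript: str) -> int:
    """Grade a transcript on phrase matching alone - no context, no outcome,
    and therefore trivially easy for advisors to game."""
    text = transcript.lower()
    return sum(points for phrase, points in RULES if phrase in text)

# An advisor who apologises for everything and recites the closer scores top marks...
print(score_call("Sorry! Sorry again. Is there anything else I can help you with?"))  # 2
# ...while one who actually fixes the problem is marked down for a harmless idiom.
print(score_call("To be honest, I've fixed the root cause so it won't happen again."))  # -1
```

Nothing in this scoring looks at what the customer actually experienced, which is exactly why advisors could game it and why the bias of whoever wrote the rules went straight into the results.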
Four years on, that organisation is rethinking its approach to speech analytics, has rehired its QA people, and is managing to get the customer satisfaction figures going back up again.
It’s time to rethink the role of quality assurance
And that brings me to a particular point about quality assurance teams and where QA should meet AI in the future.
Back in 1997 Garry Kasparov, the reigning world chess champion, was beaten by a computer for the first time. He’d played against IBM’s Deep Blue in 1996 and won, but the following year IBM upgraded the machine and it triumphed (though some say Kasparov played unusually badly on the day).
Today there are deep learning machines vastly more powerful than Deep Blue, so you would think they would beat humans effortlessly, every time. But that’s not the case. When you team an average-to-good chess player with a good chess computer, together they are quite often able to beat a chess supercomputer playing on its own. The human involved can direct the computer to follow a particular strategy or choose from options presented by several computers.
The point is, the combination of human and AI – that augmentation of human intelligence with artificial intelligence – can do more than AI can manage on its own (at least for the moment).
And that’s where I see computational speech analytics going: using humans to guide the ‘grandmaster’ programme to what it should really be looking for, and the outcomes that will actually boost customer experience.
I think there is huge potential in QA teams taking that function on; retraining and reskilling to bring what they already know about excellent customer experience to the AIs that go looking for it.
For many organisations, especially those that have taken on speech analytics quite recently, there is a major skills gap in this area. Consequently it’s one of the subjects we’ve looked at very closely at Capita, and I really do feel we’ve developed some exceptional capabilities as a result.
Yet there is a huge risk too that ‘tangibility bias’ creeps in, and the accountants – not the directors of customer experience – get to choose the new operational model. I know a number of utility companies with QA teams of over 110 people; cut those roles and that’s a £26 million saving over five years. But what we advocate is up-skilling these team members to deliver better business outcomes – just as the average chess player does when paired with an AI tool.
We’re breaking new ground in human-augmented speech analytics, and I think it’s going to be a hugely important element of customer management in the coming years.