If you follow the AB curiosity blog, you’ll know that I’ve been writing quite a bit recently about voice search, voice assistants and chatbots.
Some people may think that none of the above are relevant to their business because voice assistants are primarily used to set cooking timers, look up local business directions and play music. That’s mainly because the user experience in voice search actions is often a bit rubbish. How many times have you asked Alexa, Siri or Google Home a question and they come back with “here’s what I found online…” or some variation?
The problem is that as humans we type around 40 words per minute, but speak around 150. Speaking is so natural to most of us that it’s a far easier route to getting the information we seek than typing our question into a device. But the experience falls down when the AI can’t respond in a conversational manner with relevant information.
Part of it is to do with the data available. For instance, Google relies heavily on the Knowledge Graph to return answers to voice search queries, but understanding a voice query depends heavily on its context. Conversational search is something the big search engines have been working on for some time; in fact, Bing was way ahead of the game early on, but that no longer seems to be the case.
Whether or not voice search becomes truly mainstream will depend on the user experience and overall satisfaction. And this, in turn, depends on the preservation of context. Voice assistants can often return something relevant for a first, second and even third question, but refer back to a point earlier in the conversation, or query something from a previous answer, and the context may be lost. This is a measurable problem: a 2016 research paper, for instance, studied user satisfaction with intelligent assistants.
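To make the context problem concrete, here's a deliberately toy sketch (not how any real assistant works; the class, the tiny knowledge base and all names are hypothetical) of why follow-up questions need remembered state: a pronoun like "it" is only answerable if the assistant carried the entity over from the previous turn.

```python
import re

class DialogContext:
    """Toy multi-turn context tracker: remembers the last entity mentioned
    so a follow-up pronoun like "it" can be resolved. Purely illustrative."""

    def __init__(self):
        self.last_entity = None

    def resolve(self, query):
        # Substitute the remembered entity for the pronoun "it" (whole word only).
        if self.last_entity:
            query = re.sub(r"\bit\b", self.last_entity, query)
        return query

    def answer(self, query, kb):
        query = self.resolve(query)
        for entity, facts in kb.items():
            if entity in query:
                self.last_entity = entity  # carry context into the next turn
                for attribute, value in facts.items():
                    if attribute in query:
                        return value
        # Context lost or nothing matched: the familiar non-answer.
        return "here's what I found online..."

# Hypothetical mini knowledge base.
kb = {"Eiffel Tower": {"height": "330 metres", "city": "Paris"}}

ctx = DialogContext()
print(ctx.answer("what is the height of the Eiffel Tower", kb))  # 330 metres
print(ctx.answer("which city is it in", kb))  # Paris: "it" resolved from context
print(DialogContext().answer("which city is it in", kb))  # fallback: no context
```

The third call shows the failure mode in the paragraph above: the same follow-up question asked without any remembered context falls straight through to "here's what I found online…".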
There’s no doubt that the presence of virtual assistants in our daily lives is only going to increase, but whether or not voice search becomes mainstream will depend on user satisfaction and delivering positive outcomes.