3 Strategies for the Future of Voice-Enabled AI

From the modest beginnings of Audrey, a system built by Bell Labs in the early 1950s that could recognize only spoken digits, the business of voice has exploded. At CES 2020, Amazon announced there are “hundreds of millions of Alexa-enabled devices globally now.” At the heart of this disruption in voice-enabled technologies are two key trends: (1) the rapid adoption of the Internet of Things (IoT) and (2) advances in psycholinguistic data analytics and affective computing.

With the global penetration of smart devices, close to half of all online consumer searches are predicted to be voice-based by the end of 2020. Complementing the availability of smart devices is the rapid development of AI tools and data-modeling techniques for inferring emotion and intent from speech. For instance, neural-network language models are being combined with techniques from linguistics and experimental psychology to infer human intention in real time.

Consider the impact already realized: 200 million Microsoft Teams participants have interacted using the collaboration tool in a single day, call centers have reduced customer handling times by 40 percent, and voice shopping is predicted to become a $40 billion business within the next two years.

As companies around the globe embark on realizing the benefits of voice analytics, what strategic considerations are in play? Here are our three recommendations.