No matter how advanced computers and software get, our voices will always be the best communication tool we have. After all, we’re born with the innate ability to communicate and, for most of us, talking quickly becomes as effortless as thinking.
It’s no wonder then that the race to develop computers and AI systems capable of interpreting our voices has dominated the focus of large tech companies over the last decade.
But the endeavor to build machines that can listen as well as we can has been going on for far longer than that.
Believe it or not, the first computer able to recognize speech was developed back in 1952 by Bell Laboratories. Adorably nicknamed AUDREY, the machine was a breakthrough – even if it could only recognize single spoken digits from one individual!
Voice recognition technology as we know it only really started to gain traction amongst consumers nearly 60 years later, with the release of Siri on the iPhone 4S.
Though basic by today’s standards, the press and public alike were bowled over by Siri’s ability to answer basic questions, perform small tasks, and respond in a natural, conversational manner. For the first time, people felt comfortable talking to an AI.
Naturally, the competition wasn’t going to stand still and let Apple take all the glory. Google entered the fray in 2012, releasing Google Now for their Android operating system. Microsoft and Amazon jumped on the bandwagon soon after, unveiling Cortana and Alexa in 2014.
Just five years later, voice recognition technology has moved far beyond the smartphone. It’s now a standard feature of smartwatches, laptops, games consoles, smart speakers, and more. People aren’t just using it to search the web either. They’re controlling smart home devices, sending texts and emails, and even making purchases – all with just their voices.
With over 118 million smart speakers now in US households (a 78% year-on-year increase), it’s clear that voice-activated technology is here to stay.