Search takes flight: From Panda to Penguin to Hummingbird
Google has unveiled a brand new algorithm (code-named “Hummingbird”) to improve the interpretation of user search queries, and it’s the first algorithm rewrite in over 10 years! There’s something very exciting about how this new algorithm focuses on improving the way Google interacts with users, whereas previous upgrades were more about improving the way Google evaluates the content it indexes to generate search results. One of the main drivers of this rewrite is that Google anticipates more complex spoken queries in the years to come. I believe this represents a broader shift, not only in search engine technology but in how we will interact with computers and the internet in the near future.
It’s a Bird! It’s a Query! It’s still Google.
Hardly anyone noticed the algorithm change, which has already been running for a month, and, in truth, we shouldn’t expect any significant changes to the search results we’ll see in the near future: this is mostly because Panda and Penguin have already done a good job cleaning up search results, and Caffeine improved which results are surfaced. The long-term impact of Hummingbird will be felt later on, as our interactions with Google shift towards spoken requests that will naturally be longer and more complex than what we would normally type. In addition to signals from search history, location and language used, longer conversational search queries will provide more information for Google to derive your intent from, generating what should be better and more accurate search results: the so-called semantic search.
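As a loose illustration (this is in no way Google’s actual pipeline, just a hypothetical keyword heuristic), a longer conversational query simply carries more extractable signal than a terse typed one. A toy sketch of pulling intent out of a query might look like:

```python
# Toy illustration: more words in a conversational query means more
# explicit clues about what the user actually wants.
# Hypothetical heuristic only -- NOT how Hummingbird works internally.

def derive_intent(query):
    """Guess a user's intent from a search query using naive keyword cues."""
    q = query.lower()
    intent = {"action": "search", "place": None, "near_me": False}
    # Question words hint the user wants a location, not just documents.
    if any(word in q for word in ("where", "closest", "nearest")):
        intent["action"] = "find_place"
    # "near me" would be combined with a location signal in practice.
    if "near me" in q:
        intent["near_me"] = True
    # Tiny hypothetical vocabulary of things a user might look for.
    for place in ("pizza", "pharmacy", "gas station"):
        if place in q:
            intent["place"] = place
    return intent

# A terse typed query vs. a spoken, conversational one:
print(derive_intent("pizza"))
print(derive_intent("where is the closest place to get pizza near me"))
```

The terse query only tells us the topic; the conversational one also reveals that the user wants a nearby physical location, which is exactly the kind of extra context a semantic search engine can exploit.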
Notice the microphone icon in the search query box: give it a try!
What does Hummingbird really represent?
This new algorithm signals how three technologies have come together: mobile, AI and, most notably, speech recognition. These will define how humans interact with and through computers in the future, and Google is now laying the groundwork for it.
We’re fast approaching the point where mobile internet traffic overtakes fixed-line access. We can, and increasingly do, accomplish more through mobile devices thanks to their increased power, usability and the availability of broadband access on them. And, just like with anything else, we’re going to want more from them. We’re also seeing a broader business trend, with handset makers like Motorola and Nokia being snapped up by Google and Microsoft, and Apple reinventing itself as a mobile device maker.
The use of AI is nothing new at Google, and it will continue to grow in step with Moore’s law. It’s already being used to evaluate content and search results, and now, with Hummingbird, it will also interpret our interactions with the search engine via the search queries we launch.
When I first read about Hummingbird, I immediately thought about futurist Ray Kurzweil, who now happens to be a director of engineering at Google. Before Google, he had considerable success in text-to-speech and speech recognition technologies, amongst others. Ray’s book The Age of Spiritual Machines made many bold predictions in the late ’90s and, in addition to the implications of having increasingly intelligent computers, something that always stayed with me was the role speech recognition would play in the next technological leap.
Right now our interactions with computers are limited by how much we can enter through keyboards, mice and touch screens. Speech recognition technology has had some success, but most of us still prefer to poke our smartphone screens rather than talk to Siri. However, at some point speech recognition will work well enough for us to abandon those input devices that have enslaved* us so far, and the impact of this shift to speech has the potential to be revolutionary.
* Maybe I was a bit overly dramatic there but anyone that ever had to deal with a sticky keyboard or dirty trackball knows what I mean.
We’ve already seen where this is going, actually.
Star Trek. Yeah, Star Trek. There was the PADD in The Next Generation series, but for the most part there weren’t any smartphones or portable computers being lugged around. There were communicators that enabled person-to-person communication, but what’s notable is that interaction with the ship’s computer was by talking to it: you asked it a question, and it gave you an answer or displayed what you needed on the closest shared screen.
Are we headed the same way in our interaction with computers and services like Google? I think so. Instead of manually entering queries or other inputs, we’ll be able to talk to computers to ask them to find things out for us and even perform some tasks, like reserving a table at a restaurant or booking a flight. We may become less dependent on screens because the computer will just tell us what we need to know, and perhaps even narrate longer texts to us (and hopefully summarize long e-mails). We’ll be using our mobile devices as a way to communicate with a computer acting as our personal assistant (and best friend).
The speech recognition piece, combined with the ability to accurately interpret what’s said, is key to this new interaction model, and it seems that Google is already one step ahead. Why not give it a try?