Using the Nexus 5’s touch-free audio controls to initiate a search is cool. But there’s even more Google can do to turn up the volume on voice.
One of the Nexus 5 smartphone’s best new features is the always-listening, touchless control over Google Now, Android’s built-in personal assistant.
By saying the words “OK, Google” when the phone is unlocked, you can launch any of Google Now’s actions — like searching the Web or dialing a number — without having to touch the screen. Several Motorola phones did this prior to the Nexus 5’s Android 4.4 KitKat OS.
Voice-activated Google Now is a terrific little convenience that can save time or give you the freedom to go hands-free. It’s also another stepping stone for what Google, and other companies working on voice actions, can build out next.
For instance, as long as I’m going hands-free anyway, I’d like to be able to string together a series of voice commands, so I can keep both hands on the wheel, in a chicken I’m stuffing, or on a squirmy child or pet I’m wrangling.
What if, when my cell phone rings, I could vocally instruct Google to answer the call, and then to switch on the speakerphone, so I could keep doing what I’m doing uninterrupted?
Similarly, what if Google Now could interpret requests to adjust the phone’s volume or brightness, or open the Settings menu and then drill into a submenu while you decide on your next selection?
There’s a tremendous amount that Google’s voice actions can do, like call a business you search for by name — as long as there’s only one instance of the shop near you. Otherwise, the search assistant may present you with a list of choices that you can’t narrow down until you manage to free a hand.
Likewise, if you rattle off very specific instructions, your Android phone can set a reminder for a certain time, but you’ll still need to tap the screen to confirm the reminder. In my voice-actions future, you’ll be able to daisy-chain voice commands to set the time and approve the reminder, which the software will understand based on the context of the initial request.
In other words, as long as I’m still in the reminders app, Google Now should assume that commands relate to the reminders app, unless I completely switch tacks and request something else (“OK, Google. How long will it take to drive to Schenectady?”).
I imagine a Google Now that can juggle a handful of commands as adeptly as a human taking step-by-step dictation: “OK, Google. Search for ‘best restaurants in San Francisco.’ OK, Google. Scroll down. OK, Google. Pick the menu for Boulevard.” And so on.
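The context-carrying behavior I’m describing could be sketched, very roughly, as a dispatcher that remembers which app the last top-level request targeted and scopes follow-up commands to it until a new top-level request switches context. Everything below — the app names, trigger phrases, and return strings — is a hypothetical illustration of the idea, not Google’s actual voice API:

```python
# A toy, context-aware voice-command dispatcher (illustrative only).

class VoiceContext:
    """Tracks which app the last top-level command targeted, so
    follow-ups like "scroll down" or "confirm" stay scoped to it."""

    def __init__(self):
        self.active_app = None  # e.g. "search" or "reminders"

    def handle(self, utterance):
        words = utterance.lower().rstrip(".?!")
        # A fresh top-level request switches the active context.
        if words.startswith("search for "):
            self.active_app = "search"
            return "search: " + words[len("search for "):]
        if words.startswith("remind me"):
            self.active_app = "reminders"
            return "reminders: created (awaiting confirmation)"
        # Otherwise, interpret the command inside the current context.
        if self.active_app:
            return self.active_app + ": " + words
        return "no context: " + words

ctx = VoiceContext()
print(ctx.handle("Search for best restaurants in San Francisco"))
print(ctx.handle("Scroll down"))   # still scoped to the search context
print(ctx.handle("Pick the menu for Boulevard"))
```

A real assistant would, of course, need far richer intent parsing, but the point is the same: follow-up commands inherit the context of the request that came before them.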
Even if you do have access to your digits while using the phone, it would be great to have options to intersperse voice actions with typing, which I already do now when dictating short messages or notes.
Say you’ve just taken a photo or batch of photos you’d like to immediately send to a contact. I envision an even more intelligent assistant savvy enough that, once you’ve selected the photos in the gallery, it could execute the command “OK, Google. Send these photos to Jason.” It would also help, of course, to be able to vocally launch Google Voice Actions from the photo gallery app.
What I’m proposing would absolutely require a far deeper level of integration with the operating system’s many menus, submenus, and apps. Yet it’s a direction I think we’re headed in, and one that Google (and Apple, and Nuance, and others) are very capable of achieving.
I, probably like some of you, have in the past been skeptical about speaking commands into my phone, at least in public areas. Yet the practice is already becoming more commonplace (at least here in Silicon Valley).
As the architects of voice commands tap into deeper and deeper corners of our electronics, we will come to rely on complex chains of commands — both on the phone and, surely, on other electronic devices around the home.
“OK, TV. Channel 5.”