Tag Archives: google now

French hackers intercept Siri and Google Now to control phones

Researchers claim to have hijacked the digital assistants to control iPhone and Android devices, broadcasting silent commands from 16 feet away

French researchers claim to have remotely accessed iOS and Android digital assistants and silently delivered commands by using headphones with inbuilt microphones as antennas.

The team from the French government’s Network and Information Security Agency (ANSSI) claim to have discovered “a new silent remote voice command injection technique”, meaning they were able to trigger Siri and Google Now via radio from up to 16 feet away.

Plugging a pair of headphones with an inbuilt microphone – such as Apple’s standard earbud model – into an Android device or iPhone effectively turns the cord into an antenna, converting electromagnetic waves into electrical signals the phone perceives to be audio commands, without a word being spoken.
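The underlying signal trick is reportedly plain amplitude modulation: the attacker impresses a recorded voice command onto a radio carrier, and the phone’s audio front end recovers it as if it were microphone input. Below is a minimal numpy sketch of that modulation step only; the sample rate, modulation depth and stand-in tone are placeholder values, not parameters from the ANSSI research, and the hardware upconversion step is omitted.

    import numpy as np

    # Sketch of the amplitude-modulation step behind the attack, with
    # placeholder values throughout; not the parameters from the research.
    fs = 2_000_000                                 # baseband sample rate (assumed)
    t = np.arange(fs) / fs                         # one second of samples
    command = 0.5 * np.sin(2 * np.pi * 440 * t)    # stand-in for a recorded "OK Google" clip

    depth = 0.9                                      # AM modulation depth (assumed)
    envelope = (1 + depth * command) / (1 + depth)   # classic AM envelope, scaled to <= 1.0
    iq = envelope.astype(np.complex64)               # complex baseband samples for an SDR

    # A software-defined radio would upconvert these samples to the carrier
    # frequency that couples into the headphone cord; that hardware-specific
    # step is not shown here.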

In theory, this means the digital assistants could be hijacked into sending texts or emails, making searches or calls, or directing the handset to malicious websites, though the researchers required an amplifier, laptop, antenna and Universal Software Radio Peripheral (USRP) radio.

“The possibility of inducing parasitic signals on the audio front-end of voice-command-capable devices could raise critical security impacts,” researchers José Lopes Esteves and Chaouki Kasmi wrote, as spotted by Wired.

Last month a hacker claimed to have discovered a 30-second method of infiltrating a locked iPhone via Siri, which Apple fixed with the updated software iOS 9.0.1.

How to protect yourself

  • Attacks like this are extremely improbable, but in theory could happen. The researchers have suggested the companies improve the shielding on their headphone cords, or introduce personalised phrases to wake the digital assistants.
  • If you’re really worried, you could disable voice activation or turn the digital assistant on your phone off.

Google Now has opened up to third-party developers

Google’s Android search experience evolved from a plain text box into Google Now in 2012, and for the first time the company is allowing third parties to add app data directly to that interface. Whereas before all the cards in Google Now came directly from Google, the feed will now start showing data from apps you use. This has the potential to take Google Now to a whole new level of usefulness.

While it started on Android, Google Now cards have migrated to the iOS Google app and Chrome desktop browser since their first appearance in Android 4.1. Google Now is designed to bring together data on your location, search history, and preferences to present relevant information before you have to ask for it. For example, if your calendar has an appointment with a time and location, Google can reach out into its vast storehouse of data and figure out where you are, where you need to go, and what traffic is like on the way there. The result is a handy notification card that tells you when you need to leave and offers to load turn-by-turn directions.
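Once those inputs are gathered, the logic behind that card is simple arithmetic. A toy Python sketch of the idea, with an invented helper name and a made-up buffer value:

    from datetime import datetime, timedelta

    def when_to_leave(appointment, travel_minutes, buffer_minutes=10):
        """Toy 'time to leave' card: subtract the traffic-adjusted travel
        estimate (which Google pulls from its own maps data) plus a small
        buffer from the appointment time."""
        return appointment - timedelta(minutes=travel_minutes + buffer_minutes)

    meeting = datetime(2015, 1, 30, 14, 0)            # a 2:00 pm appointment
    leave_by = when_to_leave(meeting, travel_minutes=35)
    print(leave_by.strftime("Leave by %I:%M %p"))     # Leave by 01:15 PM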

There have always been cool experiences like that in Google Now, but they’ve been too few and far between for casual users to really get accustomed to checking the cards. You only use the parking location or news card every so often, but maybe you use Runtastic, Shazam, and Mint much more often. These are among the 30+ apps that now have access to Google Now in partnership with Google.

Google hasn’t created a fully open public API, so it’s not a free-for-all to cram your feed full of cards. Instead, select developers have been given tools to add contextual data from their apps to Google Now. So what kind of stuff will these cards do? If you’re a Duolingo user, Google Now might surface quick lesson links to help you brush up on your French. Shazam will have a card with the songs you’ve recently identified. The Coinbase app could also pop up a card when the value of your Bitcoin hoard rises or falls. The Google Now demo site has more examples of the new app-based cards.
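Since Google hasn’t documented the partner tools publicly, any concrete example has to be guesswork, but the contextual data an app exposes presumably boils down to a small structured payload. Something along these lines, with every field name invented purely for illustration:

    # Purely hypothetical card payload; Google has not published the partner
    # API, so every field name and value here is invented for illustration.
    shazam_card = {
        "card_type": "recently_identified_songs",
        "app": "Shazam",
        "items": [
            {"title": "Uptown Funk", "artist": "Mark Ronson ft. Bruno Mars"},
            {"title": "Take Me to Church", "artist": "Hozier"},
        ],
        "deep_link": "shazam://history",  # tapping the card would open the app
    }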

My gut feeling is that the expanded cards will be great for Android users. If developers are smart about the data they expose in Google Now, you could find yourself opening the actual app less often. You could just pop over to Now and check up on multiple bits of data from different apps. That’s great for Google because the search screen is the hub of Google’s Android experience. If you want to use all those fancy cards you need to have search history, location, voice, and a slew of other features enabled. App-based cards could also serve as a collection of non-urgent, but useful data that might otherwise clutter up the notification shade.

Because the search interface is so core to what Google does, it makes sense there isn’t a public API for apps to use right now. Google wants to control what content gets into the card stack, even if it’s coming from a third-party. If it gets too messy, no one will want to use it. Maybe some sort of limited access will be opened up to everyone later, but for the time being Google is moving cautiously. The new cards will arrive in an update of the Google app in the coming days.


The killer voice feature Google Now needs next

Using the Nexus 5’s touch-free audio controls to initiate a search is cool. But there’s even more Google can do to turn up the volume on voice.

Google Now on Nexus 5

One of the Nexus 5 smartphone’s best new features is the always-listening, touchless control over Google Now, the name by which the platform’s personal assistant is known.

By saying the words “OK, Google” when the phone is unlocked, you can launch any of Google Now’s actions — like searching the Web or dialing a number — without having to touch the screen. Several Motorola phones did this prior to the Nexus 5’s Android 4.4 KitKat OS.

Voice-activated Google Now is a terrific little convenience that can save time or give you the freedom to go hands-free. It’s also another stepping stone for what Google, and other companies working on voice actions, can build out next.

For instance, as long as I’m entirely hands-free, I’d like to be able to use secondary voice commands, or rather a series of commands, to keep both hands on the wheel, in a chicken I’m stuffing, or on a squirmy child or pet I’m wrangling.

What if, when my cell phone rings, I could vocally instruct Google to answer the call and then turn on the speakerphone, so I could keep doing what I’m doing uninterrupted?

Similarly, what if Google Now were able to interpret requests to adjust the phone’s volume or brightness, or open the Settings menu and then open another submenu while you decide on your next selection?

There’s a tremendous amount that Google’s voice actions can do, like call a business you search out by name — as long as there’s only one instance of the shop near you. Otherwise, the search assistant may present you with a list of choices that you won’t be able to narrow down until you manage to free a hand.

Likewise, if you rattle off very specific instructions, your Android phone can set a reminder for a certain time, but you’ll still need to tap the screen to confirm the reminder. In my voice actions future, you’ll be able to daisy-chain voice commands to set the time and approve the reminder, which the software will understand based on the context of the initial request.

In other words, as long as I’m still in the reminders app, Google Now should assume that commands relate to that app, unless I completely change tack and request something else (“OK, Google. How long will it take to drive to Schenectady?”).
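To make the idea concrete, here’s a toy dispatcher sketching that behavior: commands stay scoped to the current app’s context unless the utterance clearly opens a new task. Nothing here reflects a real Google Now API; the prefixes and handler names are invented.

    # Toy sketch of context-aware command chaining; not a real Google API.
    NEW_TASK_PREFIXES = ("search for", "how long", "navigate to", "call")

    def dispatch(utterance, context):
        """Return (handler, new_context) for a spoken command."""
        text = utterance.lower().removeprefix("ok, google.").strip()
        if text.startswith(NEW_TASK_PREFIXES):
            return "global:" + text, "global"     # the user changed tack
        return context + ":" + text, context      # stay in the current app

    print(dispatch("OK, Google. Confirm the reminder", context="reminders"))
    # ('reminders:confirm the reminder', 'reminders')
    print(dispatch("OK, Google. How long will it take to drive to Schenectady?",
                   context="reminders"))
    # ('global:how long will it take to drive to schenectady?', 'global')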

I imagine a Google Now that can juggle a handful of commands as adeptly as a human who hears step-by-step dictation: “OK, Google. Search for ‘best restaurants in San Francisco.’ OK, Google. Scroll down. OK, Google. Pick the menu for Boulevard.” And so on.

Even if you do have access to your digits while using the phone, it would be great to have options to intersperse voice actions with typing, which I already do now when dictating short messages or notes.

Say you’ve just taken a photo or batch of photos you’d like to immediately send to a contact. I envision an even more intelligent assistant savvy enough to execute the command “OK, Google. Send these photos to Jason” after you’ve selected them in the gallery. It would also help, of course, to be able to vocally launch Google Voice Actions from the photo gallery app.

What I’m proposing would absolutely require a far deeper level of integration with the operating system’s many menus, submenus, and apps. Yet it’s a direction I think we’re headed in, and one that Google (and Apple, and Nuance, and others) are very capable of achieving.

I, probably like some of you, have in the past been skeptical about speaking commands into my phone, at least in public areas. Yet the practice is already becoming more commonplace (at least here in Silicon Valley).

As the architects of voice commands tap into deeper and deeper corners of our electronics, we will come to rely on using a complex chain of commands — both on the phone and, surely, in other electronic devices around the home.

“OK, TV. Channel 5.”


We’ve heard about Siri, we’ve heard about Google Now, but Microsoft? Microsoft has… “Cortana”

Summary: Microsoft is working on its ‘Cortana’ rival to Apple’s Siri and Google Now, which will be integrated into all flavors of Windows in the future.

Back in June, screenshots of an early Windows Phone operating system build leaked (via a Lumia phone allegedly purchased on eBay). At that time, next to no attention was paid to an app, listed as “zCortana,” that was on the phone.


But that Cortana app (with the “z” indicating it was a test build) is central to what Microsoft is doing to compete with Apple’s Siri and Google Now. And Cortana is back in the news this week with passing mentions by those tracking what’s happening with Windows Phone as it moves toward the “Blue” release in the early part of 2014.

Cortana takes its codename from Cortana, an artificially intelligent character in Microsoft’s Halo series who can learn and adapt.

Cortana, Microsoft’s assistant technology, likewise will be able to learn and adapt, relying on machine-learning technology and the “Satori” knowledge repository powering Bing.

Cortana will be more than just an app that lets users interact with their phones more naturally using voice commands. Cortana is core to the makeover of the entire “shell” — the core services and experience — of future versions of Windows Phone, Windows and the Xbox One operating systems, from what I’ve heard from my contacts.

In Microsoft CEO Steve Ballmer’s strategy memo from July about Microsoft’s reorg, there were hints about Cortana. Ballmer mentioned that Microsoft will be working, going forward, on “a family of devices powered by a service-enabled shell.”

That “shell” is more than just the Metro/Modern/tiled interface. Ballmer continued:

“Our UI will be deeply personalized, based on the advanced, almost magical, intelligence in our cloud that learns more and more over time about people and the world. Our shell will natively support all of our essential services, and will be great at responding seamlessly to what people ask for, and even anticipating what they need before they ask for it.”

The coming shell won’t simply surface information stored on users’ phones, PCs and consoles like a search engine can do today. It also will “broker information among our services to bring them together on our devices in ways that will enable richer and deeper app experiences,” Ballmer said in his memo. (That “brokering” is handled by Bing’s Satori, which intelligently interconnects entities, i.e., information about people, places and things.)

Microsoft execs — especially Ballmer — have been talking up Microsoft’s plans to launch a new kind of personal assistant technology since 2011. At that time, Ballmer was touting publicly the idea that users would be able to tell their PCs to “print my boarding pass on Southwest” and have their systems automatically jump into action. The magic behind the scenes would be a combination of Microsoft Bing, Tellme speech technology and some natural-language-plus-social-graph concoction. (Microsoft moved its speech team into its Online Services unit, seemingly to facilitate work with the Bing team, at the very end of 2011.)
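Microsoft hasn’t detailed how Satori would model a request like that, but the general shape of an entity graph is easy to sketch. Here is a toy Python version of the boarding-pass scenario, with all nodes, fields and brokering logic invented for illustration:

    # Toy entity graph in the spirit of the boarding-pass example; the nodes,
    # fields and brokering logic are invented, not Microsoft's data model.
    graph = {
        "Southwest": {"type": "airline", "provides": "boarding pass"},
        "boarding pass": {"type": "document", "issued_by": "Southwest"},
        "printer": {"type": "device", "actions": ["print"]},
    }

    def broker(doc, device):
        """Fulfil 'print my boarding pass on Southwest' by walking the graph."""
        if graph[doc]["issued_by"] in graph and "print" in graph[device]["actions"]:
            return "printing '%s' on '%s'" % (doc, device)
        return "cannot fulfil request"

    print(broker("boarding pass", "printer"))   # printing 'boarding pass' on 'printer'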

But other Microsoft execs said that this kind of assistant would be unlikely to appear until somewhere between 2014 and 2016. Earlier this summer, Bing officials told CNET that Microsoft had decided to wait until it had something revolutionary, instead of evolutionary, to debut this kind of new assistant technology.

Cortana is yet another reason why Microsoft is unlikely to sell off Bing. Bing is more than a Web search engine; it’s also the indexing and graphing technology that will power Microsoft’s operating systems.
