
Wednesday, May 16, 2018

Hacking Alexa, Siri and Google Assistant With Hidden Voice Commands

A lot of people have written about how getting one of these voice-activated digital assistants amounts to voluntarily bugging yourself.  Reactions have varied widely, but for the software to recognize when you call it ("OK, Google"...), it has to be listening at all times.  That's a deliberate design decision.  Most people who read here, at least, would be aware they were installing a full-time listening device in their homes.  Some consider that an invasion of privacy and don't want these things; others ignore it for the perceived benefits of the digital assistant.

The New York Times tech blog reports that researchers at a couple of institutions have been able to secretly activate the systems on smartphones and smart speakers, simply by playing music or recorded speech with commands hidden in it that are inaudible to humans.
A group of students from University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon’s Echo speaker might hear an instruction to add something to your shopping list.

“We wanted to see if we could make it even more stealthy,” said Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley and one of the paper’s authors.
In a way, this isn't much of a surprise, right?  They're taking advantage of the "always-on, always listening" nature and seeing just what the algorithms can extract from other sounds.  I'd expect the designers themselves to test this.  Further, hijacking these things is nothing new.  Remember when Burger King grabbed headlines with an online ad that asked, "O.K., Google, what is the Whopper burger?"  It caused Android devices with voice-enabled search to read the Whopper's Wikipedia page aloud.  The ad was canceled after viewers started editing the Wikipedia page to make it more ... let's say comical.  Not long after that, South Park followed up with an entire episode built around voice commands that caused viewers' voice-recognition assistants to spew adolescent obscenities.
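At a high level, the hidden-command work the Berkeley group describes is an optimization problem: find a small perturbation to the audio such that the recording still sounds like music to a human, but the perturbed waveform transcribes as the attacker's command.  Here's a toy NumPy sketch of that idea.  Everything in it is a stand-in of my own: the "recognizer" is just a fixed random linear map instead of a real speech-to-text model, and the flat clipping budget is a crude substitute for the psychoacoustic masking the actual paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "recognizer": a fixed random linear map from a 1024-sample audio
# buffer to an 8-dimensional score vector.  The real attack differentiates
# through a full speech-recognition network instead.
W = rng.normal(size=(8, 1024))

def recognizer_loss(audio, target):
    """Squared error between the recognizer's output and the target command."""
    return np.sum((W @ audio - target) ** 2)

def hidden_command(music, target, steps=500, lr=2e-4, budget=0.05):
    """Gradient-descend a perturbation so music+delta 'decodes' as target,
    clipping delta to a small amplitude so the change stays quiet."""
    delta = np.zeros_like(music)
    for _ in range(steps):
        # Analytic gradient of the squared error with respect to delta.
        grad = 2 * W.T @ (W @ (music + delta) - target)
        delta -= lr * grad
        delta = np.clip(delta, -budget, budget)  # keep it inaudible-ish
    return music + delta

music = rng.normal(size=1024) * 0.3   # stand-in for a music clip
target = rng.normal(size=8)           # stand-in for the command's encoding
adv = hidden_command(music, target)
```

The point is only the shape of the attack: the perturbation stays tiny per sample, yet the "recognizer" ends up far closer to the target output than it was for the clean audio.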

A research firm has said that digital assistants like Alexa, Siri and Google Assistant will outnumber humans by 2021, and adds that more than half of American homes will have a smart speaker by then, just three years away. 

These security researchers aren't leaving bad enough alone. 
Last year, researchers at Princeton University and China’s Zhejiang University demonstrated that voice-recognition systems could be activated by using frequencies inaudible to the human ear. The attack first muted the phone so the owner wouldn’t hear the system’s responses, either.

The technique, which the Chinese researchers called DolphinAttack, can instruct smart devices to visit malicious websites, initiate phone calls, take a picture or send text messages. While DolphinAttack has its limitations — the transmitter must be close to the receiving device — experts warned that more powerful ultrasonic systems were possible.
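The core trick behind DolphinAttack, as reported, is amplitude-modulating a recorded voice command onto an ultrasonic carrier: everything ends up above human hearing, but the nonlinearity of the target's microphone demodulates it back down to the audible band inside the device.  A minimal NumPy sketch of that modulation step (the 192 kHz sample rate, 25 kHz carrier, and modulation depth here are my own illustrative choices, not the paper's):

```python
import numpy as np

def modulate_ultrasonic(command, fs=192_000, carrier_hz=25_000, depth=0.8):
    """Amplitude-modulate a baseband command onto an ultrasonic carrier.

    The output has energy only near carrier_hz, above human hearing.
    In the attack, the microphone's own nonlinearity acts as the
    demodulator that recovers the baseband command.
    """
    t = np.arange(len(command)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    scaled = command / np.max(np.abs(command))
    return (1 + depth * scaled) * carrier

# A 400 Hz tone stands in for one second of recorded voice command.
fs = 192_000
t = np.arange(fs) / fs
fake_command = np.sin(2 * np.pi * 400 * t)
signal = modulate_ultrasonic(fake_command, fs=fs)
```

Checking the spectrum of `signal` shows essentially no energy below 20 kHz; it all sits in a narrow band around the 25 kHz carrier, which is why a person standing next to the transmitter hears nothing.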

That warning was borne out in April, when researchers at the University of Illinois at Urbana-Champaign demonstrated ultrasound attacks from 25 feet away. While the commands couldn’t penetrate walls, they could control smart devices through open windows from outside a building.

This year, another group of Chinese and American researchers, from China’s Academy of Sciences and other institutions, demonstrated they could control voice-activated devices with commands embedded in songs that can be broadcast over the radio or played on services like YouTube.
Security researchers have a habit of saying that releasing information like this does no harm because the bad guys have either thought of it already or would think of it on their own.  Maybe, although sometimes just knowing something is possible can keep an experimenter going during the inevitable stretches when things just don't seem to be working.  The article does say these exploits haven't been found "in the wild," but as more people become aware of the possibility, I'd expect them to start showing up.

Hopefully, the research being revealed will push the companies selling this software to get ahead of the problem and make their devices more robust.  My own situation: I have an older iPhone (6s) with Siri.  It's possible to configure the phone to listen all the time, so that when you say, "Hey, Siri," it answers.  I have that turned off, and I've read that Siri doesn't actually send data back when it's disabled.

I'm going to close with one of the last paragraphs in the article, because it contains the very best phrase in the whole piece. 
“Companies have to ensure user-friendliness of their devices, because that’s their major selling point,” said Tavish Vaidya, a researcher at Georgetown. He wrote one of the first papers on audio attacks, which he titled “Cocaine Noodles” because devices interpreted the phrase “cocaine noodles” as “O.K., Google.”
For some reason, it reminds me of this meme:



Sunday, April 3, 2016

The Future of the Web

The not too distant future:
Inspired by this post at Bayou Renaissance Man.

Otherwise, I got nothing.  Busy day around the house today. 



Tuesday, March 15, 2016

Techy Tuesday - Your Car Will Soon Be Watching You

As self-driving cars approach, one of the hurdles that must be crossed is that under some versions of the concept, the car will hand control back over to the driver.  Sort of a robotic version of, "I don't know.  You take it".

So what if the driver is catching some Zs, reading a book, or otherwise distracted?  EE Times reports on one approach: optical recognition, a camera watching the driver and passengers to detect exactly those things. 
The U.S. Department of Transportation's National Highway Traffic Safety Administration (NHTSA) defines vehicle automation in five different levels. Level 3 implies “Limited Self-Driving Automation.”

Under Level 3, (Roger) Lanctot said, “there is an implied need to monitor the driver to ensure he or she is available to/capable of taking control of the car as it transitions from automated driving back to being driven.”
So here's a functional reason to monitor the driver's alertness.  But you gotta know it won't stop there.
But beyond safety, and insurance companies with a vested interest in knowing how drivers drive, what can carmakers gain from monitoring drivers?

FotoNation’s Benmokhtar rattled off a few factors motivating carmakers to install a driver monitoring system. They want, for example, to build a car that knows the personal preferences of driver and passengers, so that the car can tailor comfort issues such as seat positions and entertainment content. Auto companies are also eager to design a car for ride-sharing or as a service. In either case, a car, for security reasons, should be able to identify the driver and authorize him or her to drive, while allowing the driver to pre-load choices in navigation and entertainment.
In the last few weeks, I've heard the idea floated that at some point in the near future we'll stop buying cars and instead arrange for a car to pick us up by appointment.  Kind of a driverless Uber: you schedule a car and it drives itself to the pickup point.  While I find that hard to imagine, it seems this is the intended market for those features.  After all, if you own the car, the ability to "tailor comfort issues such as seat positions and entertainment content" just isn't that important; the seat will be where you left it.  It might matter a little for a couple that shares a car, but it's in the "driverless Uber" concept that I can see it being important.  It sounds like the kind of idea they'd pitch to millennials, who are famous for wanting every detail of their lives personalized.  I recall stories that millennials weren't buying cars in the numbers their parents and grandparents did, but I also see that's no longer the case, so that's increasingly not the market for this sort of system, either.

A major player in the optical recognition systems market is FotoNation.
But FotoNation believes eye-tracking/eye-gazing technologies come with limitations -- especially related to cost and implementation issues for OEMs and Tier Ones.

FotoNation claims a technology that can do away with high-cost cameras. “We can use a VGA camera – instead of a mega-pixel sensor – to track a driver’s head position/orientation,” which in turn helps determine eye location, said Benmokhtar.

Technically speaking, FotoNation’s system isn’t an eye-tracking system. Rather, it tracks heads. The technology tracks 50 points on the driver’s face, enabling the monitoring system to extrapolate eyebrows and even measure accurately how open or closed an eye is, according to the company.

Combined with other data points such as mouth opening (that indicates a driver is yawning) and head orientation (whether it’s nodding), the system can determine driver drowsiness, for example.
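The fusion the article describes can be sketched as a simple rule over per-frame measurements.  Everything below is hypothetical on my part: the field names, the thresholds, and the scoring rule are illustrative, not FotoNation's actual system, which presumably weights its 50 facial points far more cleverly.

```python
from dataclasses import dataclass

@dataclass
class FrameMetrics:
    """Per-frame outputs a head-tracking system might produce (hypothetical)."""
    eye_openness: float    # 0.0 = eyes closed, 1.0 = fully open
    mouth_openness: float  # 0.0 = closed, 1.0 = wide open (yawning)
    head_pitch_deg: float  # positive = head nodding forward

def drowsiness_score(frames, eye_thresh=0.3, yawn_thresh=0.7, nod_thresh=20.0):
    """Fraction of recent frames showing at least one fatigue cue.

    A real system would weight cues and track them over time; this just
    counts frames where any single cue fires.
    """
    if not frames:
        return 0.0
    flagged = sum(
        1 for f in frames
        if f.eye_openness < eye_thresh
        or f.mouth_openness > yawn_thresh
        or f.head_pitch_deg > nod_thresh
    )
    return flagged / len(frames)

# Ten frames of an alert driver, then ten with drooping eyes, yawning,
# and a nodding head.
alert = [FrameMetrics(0.9, 0.1, 2.0)] * 10
drowsy = [FrameMetrics(0.2, 0.8, 25.0)] * 10
```

With inputs like these, the alert window scores 0.0 and the drowsy window 1.0; a production system would trigger a handover warning somewhere in between.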
Beyond the simple monitoring of the driver, there's interest in 360 degree vision around the car, for collision avoidance and other situational awareness.  That's harder to disagree with than the car monitoring the driver. 

This monitoring raises lots of troubling questions.  Is there any expectation of privacy?  If you're in your personal car, I would think so, but what about in a driverless car you rent for a commute or trip home from the bars?  If there are other people in the car, do they get monitored as well?  What about the other people in the car's rights to privacy?  Is the data saved?  Is it uploaded to an insecure "cloud" server somewhere?  If it's your personal car, who owns the data?  The car maker, the driver, or someone else?  Does it become admissible in court?  

As I said the other day, it's a brave new world, isn't it?