Artificial Demon Voice controls your Phone

Posted 2 months, 4 days ago in #Science #Tech #AI #AlgoCulture #Audio #Hacks


Researchers have developed a method for hiding voice commands in sounds that, to the human ear, sound like extremely, hundredfold-compressed MP3s. With these "Hidden Voice Commands" they can control Android phones from up to three meters away: The Demon Voice That Can Control Your Smartphone. (via Superpunch)

Voice interfaces are becoming more ubiquitous and are now the primary input method for many devices. We explore in this paper how they can be attacked with hidden voice commands that are unintelligible to human listeners but which are interpreted as commands by devices. We evaluate these attacks under two different threat models. In the black-box model, an attacker uses the speech recognition system as an opaque oracle. We show that the adversary can produce difficult to understand commands that are effective against existing systems in the black-box model. Under the white-box model, the attacker has full knowledge of the internals of the speech recognition system and uses it to create attack commands that we demonstrate through user testing are not understandable by humans. We then evaluate several defenses, including notifying the user when a voice command is accepted; a verbal challenge-response protocol; and a machine learning approach that can detect our attacks with 99.8% accuracy.
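To get a feel for the white-box setting, here is a minimal sketch of the basic idea: run gradient-based optimization directly on a waveform until a differentiable speech-command classifier hears a chosen command, even though the audio stays noise-like. The TinySpeechNet model, the target index, and all hyperparameters below are placeholder assumptions for illustration, not the recognizer or attack pipeline from the paper.

```python
# Hypothetical sketch of the white-box idea: gradient ascent on a raw waveform
# so that a (stand-in) speech-command classifier hears a chosen command while
# the audio itself stays noise-like. TinySpeechNet is a toy placeholder,
# NOT the recognizer attacked in the paper.
import torch
import torch.nn as nn

class TinySpeechNet(nn.Module):
    """Toy stand-in for a differentiable speech-command classifier."""
    def __init__(self, n_commands: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=160, stride=80), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_commands)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(waveform).squeeze(-1))

model = TinySpeechNet().eval()           # pretend this is a trained recognizer
target_command = 3                        # hypothetical index of the command to inject

# Start from random noise (one second at 16 kHz) and nudge it until the
# classifier is confident it hears the target command.
audio = torch.randn(1, 1, 16000, requires_grad=True)
optimizer = torch.optim.Adam([audio], lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    optimizer.zero_grad()
    logits = model(audio)
    loss = loss_fn(logits, torch.tensor([target_command]))
    loss.backward()
    optimizer.step()
    audio.data.clamp_(-1.0, 1.0)          # keep the signal in a valid amplitude range

print("predicted command:", model(audio).argmax(dim=1).item())
```

The paper's actual attack additionally optimizes for unintelligibility to human listeners and targets real systems such as Google Now; this sketch only shows the gradient-based core of the white-box threat model.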

Something similar already happened for computer vision in 2015, when researchers successfully fooled algorithms with noise and patterns; the paper as PDF: Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images.


Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study [30] revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library).

Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call “fooling images” (more generally, fooling examples). Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
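The gradient-ascent variant the authors mention fits in a few lines. The sketch below is only illustrative: the pretrained ResNet-18, the step size, the iteration count, and the omission of ImageNet preprocessing are my own assumptions, not the paper's exact setup (which used ImageNet- and MNIST-trained networks and also evolutionary algorithms).

```python
# Minimal sketch of a gradient-ascent "fooling image": start from noise and
# push the pixels in the direction that raises one class score of a pretrained
# ImageNet classifier. Preprocessing/normalization is omitted for brevity.
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
target_class = 291                        # "lion" in the standard ImageNet index order

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise

for step in range(200):
    score = model(image)[0, target_class]
    model.zero_grad()
    if image.grad is not None:
        image.grad.zero_()
    score.backward()
    with torch.no_grad():
        image += 0.05 * image.grad.sign() # gradient-ascent step on the pixels
        image.clamp_(0.0, 1.0)            # keep pixel values in a valid range

probs = torch.softmax(model(image), dim=1)
print(f"confidence for class {target_class}: {probs[0, target_class]:.4f}")
```

After a few hundred steps the classifier typically reports high confidence for the target class even though the image still looks like structured noise to a human, which is exactly the gap between DNN and human vision the paper probes.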


Next Level: Photorealistic Style Transfer

In their new paper, Fujun Luan, Sylvain Paris, Eli Shechtman, and Kavita Bala present a new style-transfer method: Figure 1:…

Blacked Out Censorship-Poetry Generator

A nice toy by Max Kreminski: a JS droplet that turns websites into blacked-out poetry, based on Liza Daly's █ Blackout…

This Bot kills Fascists

So-so-working object detection algorithm + Woody Guthrie = Fascists.exe | "This bot kills fascists".

"Alexa? Are you connected to the CIA?"

"I always try to tell the truth." This reminds me of those guilty-dog videos:

Palm Generator

There should be more relaxed algorithms with a vacation attitude like this one: "The Palm Generator is a Three.js module to create…

AI Brainscans

Graphcore from Bristol visualizes artificial intelligence and neural networks: Inside an AI 'brain' - What does machine learning look like?…

A Banana Keytar and more from the Stupid Hackathon: Inverted Eyetracker-Pong, Robot Porn Addict, or the Shitty Sharpie Tattoo Gun

Another favorite from Stupid Hackathon NYC 2017: the banana keytar by Amanda Lange. Also great: the Twitter bot Robot Porn Addiction,…

Photorealistic Pics from the Gameboy Camera

Roland Meertens generates photorealistic color images from Gameboy Camera pics: Creating photorealistic images with neural networks and a Gameboy Camera.…

Pix2Pix: Neural Network Cat Compositing as a Browser Toy

A nice toy from Christopher Hesse, who trained a neural network on image pairs and with which you can now turn cats, shoes…

Automatic Handgun Detection via Machine Learning

The latest step into an OCP-approved, ED-209-compatible future: Automatic Handgun Detection Alarm in Videos Using Deep Learning (PDF). Usage Guide:…

ALF-Trump and other algorithmic Abominations

Great new Twitter feed from Chris Rodley: Algorithmic Horror – Concept art for horror movies generated by an algorithm, with such…