Google WaveNet: Neural Network-generated Text-2-Speech

Posted 4 months, 9 days ago in #Tech #AI #AlgoCulture #Language



Google has trained its neural networks on voice generation and come up with a new method for synthesizing speech. The results are clearly better than anything we've known from text-to-speech synthesis so far:

They feed their WaveNet with spoken audio, so a "real" text-to-speech application is still a little way off (and the thing is rather… slow anyway: "it takes 90 minutes to synthesize one second of audio.")

Interesting: since the algorithms are also trained on raw audio samples, they can simply start talking without any text input, and the results then include "human traces" in the artificial voice: breathing, lisping, and subtle mouth noises:

Discover Mag: Google DeepMind’s WaveNet AI Sounds Human, Rocks the Piano
Technology Review: Face of a Robot, Voice of an Angel? – DeepMind’s use of neural networks to synthesize speech could finally make computers sound more human.

Generating speech with computers — a process usually referred to as speech synthesis or text-to-speech (TTS) — is still largely based on so-called concatenative TTS, where a very large database of short speech fragments are recorded from a single speaker and then recombined to form complete utterances. This makes it difficult to modify the voice (for example switching to a different speaker, or altering the emphasis or emotion of their speech) without recording a whole new database.
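To make the concatenative approach tangible, here is a minimal toy sketch (my own illustration, not DeepMind's code): prerecorded waveform fragments come out of a database and get stitched together with a short crossfade. The fragment list and the crossfade length are invented for the example.

```python
# Toy concatenative TTS: recombine prerecorded speech fragments.
# Fragments and crossfade length are made up for illustration.
import numpy as np

def concatenate_units(units, crossfade=256):
    """Join recorded waveform fragments with a linear crossfade at each seam."""
    fade = np.linspace(0.0, 1.0, crossfade)
    out = units[0].astype(float)
    for unit in units[1:]:
        unit = unit.astype(float)
        # Blend the tail of the signal so far with the head of the next
        # fragment, so the seams between recordings are less audible.
        out[-crossfade:] = out[-crossfade:] * (1 - fade) + unit[:crossfade] * fade
        out = np.concatenate([out, unit[crossfade:]])
    return out

# Hypothetical "database": three random arrays standing in for recordings.
utterance = concatenate_units([np.random.randn(4000) for _ in range(3)])
```

The limitation from the quote is visible right in the data flow: the voice lives entirely in the recorded units, so a new speaker or a new emotional register means recording a new database.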

This has led to a great demand for parametric TTS, where all the information required to generate the data is stored in the parameters of the model, and the contents and characteristics of the speech can be controlled via the inputs to the model. So far, however, parametric TTS has tended to sound less natural than concatenative, at least for syllabic languages such as English. Existing parametric models typically generate audio signals by passing their outputs through signal processing algorithms known as vocoders.
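The parametric counterpart, equally sketchy: the model emits compact per-frame parameters and a signal-processing stage (the vocoder the quote mentions) renders them into audio. Here the "parameters" are just pitch and loudness, both invented for the example; real vocoders use far richer descriptions.

```python
# Toy parametric back end: frame-level parameters in, waveform out.
# Sample rate, hop size, and the two-parameter model output are assumptions.
import numpy as np

SR, HOP = 16000, 200  # 16 kHz audio, 80 parameter frames per second

def vocode(f0_hz, amp):
    """Render per-frame pitch/amplitude parameters into a sinusoidal waveform."""
    f0 = np.repeat(f0_hz, HOP)               # upsample frame rate to sample rate
    amp = np.repeat(amp, HOP)
    phase = 2 * np.pi * np.cumsum(f0) / SR   # integrate frequency into phase
    return amp * np.sin(phase)

# Half a second of a 120-to-180 Hz sweep with a smooth loudness envelope.
audio = vocode(np.linspace(120, 180, 40), np.hanning(40))
```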

WaveNet changes this paradigm by directly modelling the raw waveform of the audio signal, one sample at a time. As well as yielding more natural-sounding speech, using raw waveforms means that WaveNet can model any kind of audio, including music. […]
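What "directly modelling the raw waveform, one sample at a time" means mechanically: a stack of dilated causal convolutions (the dilation doubling per layer, so the receptive field grows exponentially) predicts a distribution over the next 8-bit sample; you draw a sample, append it, and repeat. Below is a deliberately tiny sketch with random weights and a toy output head, nothing like the real DeepMind network, just to show the two structural ideas and why generation was so slow.

```python
# Minimal sketch of WaveNet's two structural ideas: dilated causal
# convolutions and sample-by-sample autoregressive generation. Filter
# weights, layer count, and the output head are random/toy values,
# not the actual DeepMind model.
import numpy as np

rng = np.random.default_rng(0)
DILATIONS = [1, 2, 4, 8, 16, 32]                  # receptive field: 64 samples
WEIGHTS = [rng.normal(0, 0.5, size=2) for _ in DILATIONS]  # 2-tap filters
PROJ = rng.normal(0, 0.5, size=256)               # toy head over 256 8-bit bins

def next_sample_probs(history):
    """Predict a distribution over the next 8-bit sample from past samples."""
    x = history.astype(float) / 255.0
    for w, d in zip(WEIGHTS, DILATIONS):
        # Causal and dilated: position t sees only t and t-d, never the future.
        past = np.concatenate([np.zeros(d), x[:-d]])
        x = np.tanh(w[0] * x + w[1] * past)
    logits = x[-1] * PROJ
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Autoregressive loop: sample one value, append it, predict the next.
# This strictly sequential loop is why synthesis took "90 minutes per
# second of audio" at the time of the post.
signal = np.zeros(64, dtype=np.int64)             # silence as seed
for _ in range(800):                              # 800 samples = 50 ms at 16 kHz
    signal = np.append(signal, rng.choice(256, p=next_sample_probs(signal)))
```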

Here are some samples from all three systems so you can listen and compare yourself:

[…] If we train the network without the text sequence, it still generates speech, but now it has to make up what to say. As you can hear from the samples below, this results in a kind of babbling, where real words are interspersed with made-up word-like sounds:

[…] By changing the speaker identity, we can use WaveNet to say the same thing in different voices:
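The multi-speaker trick is what the WaveNet paper calls global conditioning: a learned per-speaker embedding enters every layer as an extra bias and steers the whole waveform toward that voice. Sketched below as an extension of the toy model above (it reuses DILATIONS, WEIGHTS, PROJ, and rng from there); the speaker table is invented for illustration.

```python
# Global conditioning sketch, reusing DILATIONS/WEIGHTS/PROJ/rng from the
# toy model above. The speaker embeddings here are made-up values.
SPEAKERS = {
    "voice_a": rng.normal(0, 0.5, size=len(DILATIONS)),
    "voice_b": rng.normal(0, 0.5, size=len(DILATIONS)),
}

def next_sample_probs_for(history, speaker):
    """Same toy network, but each layer gets a per-speaker bias term."""
    x = history.astype(float) / 255.0
    emb = SPEAKERS[speaker]
    for i, (w, d) in enumerate(zip(WEIGHTS, DILATIONS)):
        past = np.concatenate([np.zeros(d), x[:-d]])
        # The speaker embedding shifts every layer's pre-activation, so the
        # same input comes out sounding like a different voice.
        x = np.tanh(w[0] * x + w[1] * past + emb[i])
    logits = x[-1] * PROJ
    p = np.exp(logits - logits.max())
    return p / p.sum()
```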

[…] Since WaveNets can be used to model any audio signal, we thought it would also be fun to try to generate music. Unlike the TTS experiments, we didn’t condition the networks on an input sequence telling it what to play (such as a musical score); instead, we simply let it generate whatever it wanted to. When we trained it on a dataset of classical piano music, it produced fascinating samples like the ones below:

