A.I.s tell Stories from Photo Series, generate Images from Text and synthesize a Voice

Posted 1 year, 2 months ago in #Misc #Tech #AI #AlgoCulture


Three nice new AI bits, all via CreativeAI:

1. A neural network that generates stories from photo series (paper: Storytelling of Photo Stream with Bidirectional Multi-thread Recurrent Neural Network). As a reminder: two months ago we were still at (pretty good) captions for single images.


Visual storytelling aims to generate human-level narrative language (i.e., a natural paragraph with multiple sentences) from a photo stream. A typical photo story consists of a global timeline with multi-thread local storylines, where each storyline occurs in one different scene. Such complex structure leads to large content gaps at scene transitions between consecutive photos. Most existing image/video captioning methods can only achieve limited performance, because the units in traditional recurrent neural networks (RNN) tend to "forget" the previous state when the visual sequence is inconsistent. In this paper, we propose a novel visual storytelling approach with Bidirectional Multi-thread Recurrent Neural Network (BMRNN). First, based on the mined local storylines, a skip gated recurrent unit (sGRU) with delay control is proposed to maintain longer range visual information. Second, by using sGRU as basic units, the BMRNN is trained to align the local storylines into the global sequential timeline. Third, a new training scheme with a storyline-constrained objective function is proposed by jointly considering both global and local matches. Experiments on three standard storytelling datasets show that the BMRNN model outperforms the state-of-the-art methods.
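Rough sketch of how I read the sGRU trick, in PyTorch: on top of a normal GRU update, an extra gate re-injects a hidden state from a few steps back, so visual context survives scene transitions. The gate formula, the fixed delay and all sizes below are my own assumptions, not the paper's exact equations.

```python
import torch
import torch.nn as nn


class SkipGRU(nn.Module):
    """Toy "skip" GRU: a GRUCell plus a gated skip connection to a delayed state."""

    def __init__(self, input_size: int, hidden_size: int, delay: int = 3):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.skip_gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.delay = delay  # how many steps back the skip connection reaches

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: (seq_len, batch, input_size), e.g. one CNN feature per photo
        seq_len, batch, _ = inputs.shape
        h = inputs.new_zeros(batch, self.cell.hidden_size)
        history, outputs = [], []
        for t in range(seq_len):
            h_new = self.cell(inputs[t], h)
            if t >= self.delay:
                h_old = history[t - self.delay]
                # gate in [0, 1] decides how much old context gets re-injected
                g = torch.sigmoid(self.skip_gate(torch.cat([inputs[t], h_new], dim=-1)))
                h_new = (1 - g) * h_new + g * h_old
            history.append(h_new)
            outputs.append(h_new)
            h = h_new
        return torch.stack(outputs)  # (seq_len, batch, hidden_size)


# toy usage: 5 photos, batch of 2, 512-dim CNN features per photo
feats = torch.randn(5, 2, 512)
story_states = SkipGRU(512, 256)(feats)
print(story_states.shape)  # torch.Size([5, 2, 256])
```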

2. Roughly the opposite: a neural network that generates images from text descriptions (paper: Generative Adversarial Text to Image Synthesis). AI-generated "photos" are still pretty low-res, so it will probably take a while before genuinely practical results come out of this. It doesn't take much imagination, though, to picture what algorithms like these will be capable of in a few years, say "dictated" comics in the style of Jack Kirby, or "Hey Siri, generate me a Steppenwolf adaptation with Wolverine in the lead role."


Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories such as faces, album covers, room interiors etc. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.
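For the record, the basic setup as I understand it from the abstract, as a heavily simplified PyTorch sketch: a text embedding conditions both the generator (noise + text → image) and the discriminator (image + text → real/fake). The real model is a convolutional DCGAN-style net; the fully-connected layers, the 64×64 output size and all dimensions here are placeholders of mine.

```python
import torch
import torch.nn as nn

TXT, Z, IMG = 128, 100, 64 * 64 * 3  # text embedding, noise, flattened 64x64 RGB


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z + TXT, 512), nn.ReLU(),
            nn.Linear(512, IMG), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z, txt):
        # the image is generated *conditioned on* the text embedding
        return self.net(torch.cat([z, txt], dim=1))


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG + TXT, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1),  # real/fake logit, also conditioned on the text
        )

    def forward(self, img, txt):
        return self.net(torch.cat([img, txt], dim=1))


# toy step: "a small bird with a red head" would first be encoded to `txt`
txt = torch.randn(4, TXT)        # stand-in for the RNN text encoder output
fake = Generator()(torch.randn(4, Z), txt)
score = Discriminator()(fake, txt)
print(fake.shape, score.shape)   # torch.Size([4, 12288]) torch.Size([4, 1])
```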

3. A neural network generates a voice. Just picture a Chris Cunningham short film to go with it, ideally Rubber Johnny:

This is a recurrent neural network (LSTM type) with 680 neurons and 3 layers trying to find patterns in audio and reproduce them as well as it can. It's not a particularly big network considering the complexity and size of the data, mostly due to computing constraints, which makes me even more impressed with what it managed to do.

The audio that the network was learning from is voice actress Kanematsu Yuka voicing Hinata from Pure Pure. I used 11025 Hz, 8-bit audio because sound files get big quickly, at least compared to text files - 10 minutes already runs to 6.29MB, while that much plain text would take weeks or months for a human to read.

I was using the program "torch-rnn" (https://github.com/jcjohnson/torch-rnn/), which is actually designed to learn from and generate plain text. I wrote a program that converts any data into UTF-8 text and vice-versa, and to my excitement, torch-rnn happily processed that text as if there was nothing unusual.
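The converter itself doesn't seem to be published, but the idea is simple enough to sketch: map every raw byte of the 8-bit audio onto one Unicode character, feed the resulting "text" to torch-rnn, and map the generated characters back to bytes. The offset into U+0100 and upwards below is my own choice to keep the output printable and reversible; it is not necessarily the author's scheme.

```python
OFFSET = 0x100  # shift bytes out of the ASCII/control range

def bytes_to_text(data: bytes) -> str:
    """Encode raw 8-bit samples as one Unicode character per byte."""
    return "".join(chr(OFFSET + b) for b in data)

def text_to_bytes(text: str) -> bytes:
    """Invert bytes_to_text, skipping any characters outside the mapped range."""
    return bytes(ord(c) - OFFSET for c in text if OFFSET <= ord(c) < OFFSET + 256)

if __name__ == "__main__":
    # round-trip a fake "waveform"; with real data you would dump raw 8-bit PCM
    # to a .txt file and hand that to torch-rnn's preprocessing script
    samples = bytes(range(256))
    assert text_to_bytes(bytes_to_text(samples)) == samples
    # sanity check on the size claim in the quote: 10 minutes of 8-bit mono
    # audio at 11025 Hz is 11025 * 600 bytes, roughly 6.3 MiB
    print(11025 * 600 / 2**20, "MiB")
```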
