Pix2Pix: Neural Network Cat Compositing as a Browser Toy

A lovely toy from Christopher Hesse, who trained a neural network on image pairs; you can now use it to draw cats, shoes, and building facades: Image-to-Image Demo – Interactive Image Translation with pix2pix-tensorflow.

A bit like Adobe's neural network compositing or Andrew Brock's variant, just a notch smaller and running right in the browser.

Darius Kazemi then went and manipulated the canvas object in the website's code, feeding fractals, Garfield, noise, and cat pictures straight into it (a rough sketch of the trick follows after the quote below):

The pix2pix model works by training on pairs of images such as building facade labels to building facades, and then attempts to generate the corresponding output image from any input image you give it. […]

Trained on about 2k stock cat photos and edges automatically generated from those photos. Generates cat-colored objects, some with nightmare faces. The best one I've seen yet was a cat-beholder.
Some of the pictures look especially creepy, I think because it's easier to notice when an animal looks wrong, especially around the eyes. The auto-detected edges are not very good and in many cases didn't detect the cat's eyes, making it a bit worse for training the image translation model.
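Roughly how that canvas hack might work, for anyone who wants to poke at it themselves: a minimal sketch (not Kazemi's actual code) that grabs the demo's input canvas from the browser console and fills it with coarse black-and-white noise, which the model then happily "translates" into cat-colored nightmares. The selector '#sketch canvas' is an assumption about the page markup, so adjust it to whatever the DOM actually looks like.

```typescript
// Hypothetical sketch: overwrite the pix2pix demo's input canvas with noise.
// '#sketch canvas' is a guessed selector, not taken from the actual page.
const canvas = document.querySelector<HTMLCanvasElement>('#sketch canvas');
if (canvas) {
  const ctx = canvas.getContext('2d');
  if (ctx) {
    const { width, height } = canvas;
    const image = ctx.createImageData(width, height);
    for (let i = 0; i < image.data.length; i += 4) {
      // Coarse black/white noise instead of hand-drawn edges
      const v = Math.random() < 0.5 ? 0 : 255;
      image.data[i] = v;       // R
      image.data[i + 1] = v;   // G
      image.data[i + 2] = v;   // B
      image.data[i + 3] = 255; // alpha
    }
    ctx.putImageData(image, 0, 0);
  }
}
```

The same trick works with anything you can paint into a 2D context: draw a fractal, drop in a Garfield strip via drawImage, hit the translate button, and see what the edges2cats model makes of it.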

Here are a few of my own scribbles: