Neural Style Transfer app Prisma has video now, still can't dance

Prisma, the style transfer app, can now also handle videos of up to 15 seconds. You still won't get this out of it, though:

Video by lulu xXX (original video here), who already sent AI-Hendrix through the Neural Moebius and actually managed to make Poppy even weirder.

By the way, the current technical state of the art in neural network style transfer: real time (on a Pascal Titan X graphics card), including a webcam demo. Paper: Perceptual Losses for Real-Time Style Transfer and Super-Resolution.

We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks.

We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster.
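For the curious, here is a minimal sketch (not the paper's code) of the core idea: instead of a per-pixel loss, the feed-forward network is trained against a perceptual loss that compares feature activations of a frozen, pretrained VGG network. The layer cutoff and the use of plain MSE here are illustrative assumptions, not the paper's exact configuration.

# Minimal perceptual ("feature reconstruction") loss sketch in PyTorch.
# Assumption: VGG16 features up to an arbitrary layer index as the loss network.
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    def __init__(self, feature_layer=8):
        super().__init__()
        # Pretrained VGG16 acts as a fixed feature extractor (the "loss network")
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
        self.features = nn.Sequential(*list(vgg.children())[:feature_layer]).eval()
        for p in self.features.parameters():
            p.requires_grad = False  # the loss network is never updated

    def forward(self, output_img, target_img):
        # Compare high-level feature maps instead of raw pixels
        return nn.functional.mse_loss(self.features(output_img),
                                      self.features(target_img))

# Hypothetical usage: a feed-forward transformation network (transform_net)
# is trained against this loss, so at inference time a single forward pass
# suffices, which is what makes the real-time webcam demo possible.
# loss_fn = PerceptualLoss()
# loss = loss_fn(transform_net(content_batch), content_batch)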