Machine-generated TED-Talks

Even more lovely neural-algo experiments from a certain Mr. Samim (tweets):

TED-RNN — Machine generated TED-Talks: “I wrote a web-crawler in python that gathers the transcripts of all TED talks. […] A selection of the results then got fed to a text-to-speech synthesiser, creating three convincing TED-speakers which I named Jürgen TEDhuber, Ada LoveTED and Isaac TEDimov.” (A sketch of such a crawler follows after these links.)

Obama-RNN — Machine generated political speeches
Generating Captions – Describing Videos with Neural Networks
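Samim doesn't share the crawler itself, but the pipeline he describes (crawl transcripts, train a character-level RNN on them, synthesise speech from samples) is easy to sketch. A minimal Python version of the gathering step might look like this; the URL patterns and HTML selectors are my assumptions about ted.com's page structure, not Samim's code:

```python
# Hedged sketch of a TED transcript crawler, not Samim's actual code.
# URL patterns and selectors are assumptions about ted.com's structure.
import time
import requests
from bs4 import BeautifulSoup

BASE = "https://www.ted.com"

def talk_slugs(page):
    """Collect talk slugs linked from one paginated index page."""
    html = requests.get(f"{BASE}/talks?page={page}", timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    slugs = set()
    for a in soup.select('a[href^="/talks/"]'):
        parts = a["href"].split("/")
        if len(parts) > 2 and parts[2]:
            slugs.add(parts[2].split("?")[0])
    return slugs

def transcript(slug):
    """Fetch a talk's transcript page and join its paragraphs into text."""
    html = requests.get(f"{BASE}/talks/{slug}/transcript", timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return "\n".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))

if __name__ == "__main__":
    with open("ted_transcripts.txt", "w", encoding="utf-8") as out:
        for page in range(1, 3):            # widen the range for a full crawl
            for slug in sorted(talk_slugs(page)):
                out.write(transcript(slug) + "\n\n")
                time.sleep(1)               # be polite to the server
```

The resulting ted_transcripts.txt would then serve as the training corpus for a char-rnn-style language model, which in turn produces the text that gets fed to the speech synthesiser.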


Generating Comics – A Creative Human Machine Collaboration:

For this experiment, I collaborated with Kedamami, a talented Japanese manga creator living in Berlin. To get started, she created two custom mangas. We then processed them with the recently published Neural Algorithm of Artistic Style (StyleNet). StyleNet captures & transfers the stylistic information of images. Countless style guide images & parameters were tested. Finally, the StyleNet outputs were manually re-processed by Kedamami.
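For readers wondering what StyleNet actually does under the hood: the Neural Algorithm of Artistic Style optimises an image so that its deep-feature activations match a content image while its Gram-matrix statistics match a style image. Below is a minimal sketch in PyTorch, my framework choice rather than the tooling Samim and Kedamami used; file names, layer indices, and loss weights are illustrative:

```python
# Minimal sketch of the Gatys et al. style-transfer idea behind "StyleNet".
# File names are placeholders; layer choices and weights are illustrative.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms
from torchvision.utils import save_image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained VGG-19 serves as the fixed feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(3, 1, 1)

def load(path, size=256):
    """Load an image as a normalised 1x3xHxW tensor."""
    img = Image.open(path).convert("RGB").resize((size, size))
    x = transforms.functional.to_tensor(img).to(device)
    return ((x - MEAN) / STD).unsqueeze(0)

def features(x, layers=(1, 6, 11, 20, 29)):  # relu*_1 layers, roughly Gatys' choice
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

def gram(f):
    """Gram matrix: the style statistic the algorithm matches."""
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content = load("manga_panel.png")    # placeholder file names
style = load("style_guide.png")
target = content.clone().requires_grad_(True)
opt = torch.optim.Adam([target], lr=0.02)

c_feats = features(content)
s_grams = [gram(f) for f in features(style)]

for step in range(300):
    t_feats = features(target)
    content_loss = F.mse_loss(t_feats[-1], c_feats[-1])  # keep the drawing's content
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(t_feats, s_grams))
    loss = content_loss + 1e4 * style_loss               # weights are illustrative
    opt.zero_grad()
    loss.backward()
    opt.step()

# De-normalise and write the result, clamped into valid range.
save_image((target.detach().squeeze(0) * STD + MEAN).clamp(0, 1), "styled.png")
```

Gatys et al. used L-BFGS and a particular set of VGG layers; Adam and the layers above are a simplification that still shows the mechanics, and the "countless style guide images & parameters" Samim mentions correspond to varying the style input and these loss weights.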

One of the primary aims of this experiment was to explore StyleNet as a creative authoring tool. For this, we used the upcoming “DeepUI” — a powerful open-source user interface for StyleNet/DeepDream that enables interacting with visual CreativeAI systems in a usable and fast way. It is optimised for animations, enabling the creation of results similar to this wonderful Alice in Wonderland StyleNet video by Gene Kogan; the basic per-frame recipe is sketched below.

[Previously on NC: Picasso-Algorithm applied to Picasso]
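Such animation results are, at their core, single-image style transfer applied frame by frame. A hedged sketch of that pipeline follows; `stylize` is a stand-in for any single-image routine (e.g. the one above), the file names are placeholders, and ffmpeg is assumed to be installed. The stand-in merely copies frames so the pipeline runs end to end:

```python
# Hedged sketch of per-frame stylisation for animations; `stylize` is a
# placeholder for a single-image style-transfer routine, and the ffmpeg
# invocations assume input.mp4 exists alongside this script.
import shutil
import subprocess
from pathlib import Path

def stylize(src: Path, dst: Path) -> None:
    """Stand-in for single-image style transfer (see the sketch above);
    here it just copies the frame so the pipeline runs end to end."""
    shutil.copy(src, dst)

Path("frames").mkdir(exist_ok=True)
Path("styled").mkdir(exist_ok=True)

# 1. Split the video into numbered frames.
subprocess.run(["ffmpeg", "-i", "input.mp4", "frames/%05d.png"], check=True)

# 2. Stylise every frame individually.
for frame in sorted(Path("frames").glob("*.png")):
    stylize(frame, Path("styled") / frame.name)

# 3. Reassemble the stylised frames into a video.
subprocess.run(["ffmpeg", "-framerate", "24", "-i", "styled/%05d.png",
                "styled.mp4"], check=True)
```

Naive per-frame transfer tends to flicker between frames; managing that kind of temporal coherence is presumably part of what a dedicated animation interface like DeepUI has to handle.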