Google's Algorithmic Image Captions

[image: three automatically generated example captions]

Google is experimenting with automatically generated image captions: a Convolutional Neural Network (CNN) first identifies the objects in the image, then a Recurrent Neural Network (RNN) builds complete sentences from those keywords. I'm already looking forward to the glitches!
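For intuition, here is a minimal sketch of that encoder-decoder pipeline in PyTorch. A stand-in CNN compresses the image into a feature vector, which is fed as the first input to an LSTM that then predicts the caption word by word. All layer sizes, the toy vocabulary, and the tiny convolutional encoder are invented for this sketch; the actual paper uses a large pretrained classification CNN as the encoder.

```python
# Minimal sketch of the CNN-encoder / RNN-decoder idea behind neural image
# captioning. Not Google's actual model; all names and sizes are illustrative.
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Stand-in CNN encoder: in the real system this is a pretrained
        # image classifier whose final feature vector summarizes the image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, embed_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # RNN decoder: emits the caption one word at a time.
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # The image feature acts as the first "word" of the sequence,
        # followed by the embedded caption tokens (teacher forcing).
        feats = self.encoder(images).unsqueeze(1)   # (B, 1, E)
        tokens = self.embed(captions)               # (B, T, E)
        seq = torch.cat([feats, tokens], dim=1)     # (B, T+1, E)
        hidden, _ = self.rnn(seq)
        return self.out(hidden)                     # per-step word logits

# Toy usage: batch of 2 images, captions of length 5 over a 100-word vocab.
model = CaptionModel(vocab_size=100)
images = torch.randn(2, 3, 64, 64)
captions = torch.randint(0, 100, (2, 5))
logits = model(images, captions)
print(logits.shape)  # torch.Size([2, 6, 100])
```

At inference time, instead of feeding ground-truth caption tokens, the decoder's own most probable word at each step would be fed back in until an end-of-sentence token appears.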

"People can summarize a complex scene in a few words without thinking twice. It's much more difficult for computers. But we've just gotten a bit closer -- we've developed a machine-learning system that can automatically produce captions (like the three above) to accurately describe images the first time it sees them. This kind of system could eventually help visually impaired people understand pictures, provide alternate text for images in parts of the world where mobile connections are slow, and make it easier for everyone to search on Google for images."

Google Research: A picture is worth a thousand (coherent) words: building a natural description of images (via Algopop)
Paper: Show and Tell: A Neural Image Caption Generator

[update] The stock-photo outfit EyeEm has built itself an algorithm for rating the aesthetics of images: "EyeEm is 'training' its algorithms to identify which photos actually look good. By looking at things like which objects are in focus and blurred, what's located at each third of a photo, and other identifiers of 'beauty', the ranking algorithms determine an EyeRank of aesthetic quality for each photo and applies an aggregated score to each photographer." (Thanks, Fabian!)
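To make the aggregation step in that quote concrete, here is a toy sketch: per-photo "aesthetic" scores rolled up into one score per photographer. The feature names, weights, and score formula are invented for illustration; EyeEm's actual EyeRank model is not public.

```python
# Hypothetical illustration of per-photo scores aggregated per photographer.
from statistics import mean

photos = [
    {"photographer": "alice", "sharpness": 0.9, "thirds_alignment": 0.8},
    {"photographer": "alice", "sharpness": 0.6, "thirds_alignment": 0.4},
    {"photographer": "bob",   "sharpness": 0.7, "thirds_alignment": 0.9},
]

def photo_score(p):
    # Made-up linear blend of the cues mentioned in the quote:
    # focus/blur and rule-of-thirds composition.
    return 0.6 * p["sharpness"] + 0.4 * p["thirds_alignment"]

scores = {}
for p in photos:
    scores.setdefault(p["photographer"], []).append(photo_score(p))

eyerank = {name: mean(vals) for name, vals in scores.items()}
print(eyerank)  # roughly {'alice': 0.69, 'bob': 0.78}
```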