Neural Network Compositing

Adobe Research and UC Berkeley are working on retouching and compositing via machine learning. The results are impressive, but still far from practical; my guess is the Photoshop generation after next. From a post on Quartz: This digital brush paints with the memories of 275,000 landscapes; here's the paper as a PDF.


The software uses deep neural networks to learn the features of landscapes and architecture, like the appearance of grass or blue skies. As a human user moves the digital brush, the software automatically generates images inspired by the color and shape of the strokes. For example, when researchers testing the project made a blue brushstroke, the software painted a sky. It’s an example of the work being done in a field of AI research called “generative art,” discussed in a recent paper accepted by the 2016 European Conference on Computer Vision.
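Roughly how such a constrained generation step can work: treat the trained generator as a prior over plausible images and search its latent space for an image that agrees with the user's stroke. Below is a minimal sketch of that idea; the toy generator, function names, and loss weighting are stand-ins I made up for illustration, not the researchers' code.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained GAN generator (the actual system is trained on
# ~275,000 landscape photos); any frozen z -> image network fits the pattern.
class ToyGenerator(nn.Module):
    def __init__(self, z_dim=100, img_size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 3 * img_size * img_size),
            nn.Tanh(),  # pixel values in [-1, 1]
        )
        self.img_size = img_size

    def forward(self, z):
        return self.net(z).view(-1, 3, self.img_size, self.img_size)

def paint_with_gan(G, stroke_rgb, stroke_mask, z_dim=100, steps=200, lr=0.1):
    """Find a latent z whose generated image matches the brush stroke
    where the user painted (mask == 1) and is unconstrained elsewhere."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = G(z)
        # Penalize mismatch only under the stroke; the generator's prior
        # keeps the rest of the canvas a plausible landscape.
        loss = ((img - stroke_rgb) ** 2 * stroke_mask).mean()
        loss.backward()
        opt.step()
    return G(z).detach()

# Usage: a blue stroke across the top third of the canvas yields a "sky".
G = ToyGenerator().eval()
for p in G.parameters():
    p.requires_grad_(False)     # only the latent code is optimized
stroke = torch.zeros(1, 3, 64, 64)
stroke[:, 2, :21, :] = 1.0      # blue channel on in the painted region
mask = torch.zeros(1, 1, 64, 64)
mask[:, :, :21, :] = 1.0        # constrain only the painted pixels
result = paint_with_gan(G, stroke, mask)
```

The design point is that the user never paints pixels directly: the stroke only defines a constraint, and gradient descent on the latent code does the actual "painting".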

The same deep neural network used by the Berkeley-led team is also able to generate new kinds of shoes and handbags, using a reference image as a template. In one example, researchers were able to change the design of a shoe by drawing a different style and color on top of it.
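The shoe example plausibly amounts to the same optimization run twice: first project the reference photo into the latent space, then continue from that latent with the new strokes as constraints. A sketch under that assumption, reusing the frozen generator G from above (edit_image and its weighting term are hypothetical):

```python
import torch

def edit_image(G, reference, edit_rgb, edit_mask, z_dim=100, steps=200, lr=0.1):
    """Project a reference photo into latent space, then pull the latent
    toward the user's edit strokes while preserving the rest."""
    # Step 1: inversion. Find a z that reconstructs the reference image.
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - reference) ** 2).mean()
        loss.backward()
        opt.step()
    # Step 2: continue from the inverted latent under the edit constraint,
    # keeping unedited regions close to the reconstruction.
    base = G(z).detach()
    for _ in range(steps):
        opt.zero_grad()
        img = G(z)
        loss = ((img - edit_rgb) ** 2 * edit_mask).mean() \
             + 0.1 * ((img - base) ** 2 * (1 - edit_mask)).mean()
        loss.backward()
        opt.step()
    return G(z).detach()
```

Because the edit lives in the latent space rather than on the pixels, redrawing a shoe's outline in a new color changes its design globally and coherently instead of just recoloring the stroked area.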