Article Source
- Title: Image Editing with Generative Adversarial Networks
- Authors: Jun-Yan Zhu et al.
Overview
The paper “Generative Visual Manipulation on the Natural Image Manifold” is available here: https://people.eecs.berkeley.edu/~junyanz/projects/gvm/
The source code is available here: https://github.com/junyanz/iGAN
Abstract
Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to “fall off” the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output, keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for making one image look like another, as well as generating novel imagery from scratch based on a user’s scribbles.
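The constrained optimization mentioned in the abstract can be made concrete with a small sketch: before any edit, a real photo is projected onto the learned manifold by searching for a latent code whose generated image reconstructs it. The sketch below is a minimal illustration in PyTorch, not the paper's implementation (which uses a pretrained DCGAN in Theano); the randomly initialized linear generator `G` and the hyperparameters are stand-ins that just keep the example runnable.

```python
import torch

# Stand-in generator: in the paper this is a pretrained DCGAN mapping a 100-D
# latent code to an image on the natural image manifold; a random linear layer
# here only keeps the sketch runnable.
G = torch.nn.Sequential(torch.nn.Linear(100, 64 * 64 * 3), torch.nn.Tanh())

def project_onto_manifold(x, steps=200, lr=0.1):
    """Find a latent code z whose generated image G(z) approximates the photo x."""
    z = torch.zeros(1, 100, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((G(z) - x) ** 2)  # pixel-space reconstruction loss
        loss.backward()
        opt.step()
    return z.detach()

x = torch.rand(1, 64 * 64 * 3) * 2 - 1      # a flattened "photo" in [-1, 1]
z0 = project_onto_manifold(x)               # later edits operate on z0, not on raw pixels
```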
Generative Visual Manipulation on the Natural Image Manifold
In ECCV’16
Paper
ECCV 2016 paper, 6.2MB
Citation
Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman and Alexei A. Efros. “Generative Visual Manipulation on the Natural Image Manifold”, in European Conference on Computer Vision (ECCV). 2016.
Code and Data: GitHub
Interactive Image Generation
Our system can create new imagery based on a user’s scribbles. Let’s draw an outdoor scene.
Here we demonstrate how to achieve the above result.
We can use our drawing interface to design products as well.
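One way to read the scribble interface: each brush stroke fixes the desired color at a handful of pixels, and the system searches for a latent code whose generated image matches those pixels, letting the generator fill in everything else realistically. The following is a hedged sketch under that reading; the stand-in generator, mask layout, and stroke color are illustrative assumptions, not the paper's exact formulation.

```python
import torch

# Same kind of stand-in generator as above (in practice: a pretrained DCGAN).
G = torch.nn.Sequential(torch.nn.Linear(100, 64 * 64 * 3), torch.nn.Tanh())

# User scribbles: a binary mask over stroked pixels and the desired color there.
mask = torch.zeros(1, 64 * 64 * 3)
mask[:, :300] = 1.0                          # pretend the user painted a small region...
target = torch.zeros(1, 64 * 64 * 3)
target[:, :300] = 0.5                        # ...with one flat color

z = torch.randn(1, 100, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    img = G(z)
    # Only the scribbled pixels are constrained; the generator keeps the rest
    # of the image on the learned manifold.
    loss = torch.sum(mask * (img - target) ** 2) / mask.sum()
    loss.backward()
    opt.step()

result = G(z).detach()                       # a novel image consistent with the strokes
```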
Intelligent Image Editing
Our interactive system allows a user to manipulate an image in a natural and realistic way.
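Editing a real photo can then be read as two steps: project the photo onto the manifold (as sketched earlier), then re-optimize the latent code under the user's constraint while staying close to the original, so the image changes only where the edit demands it. A rough sketch under those assumptions; `z0` stands for the projected code and `mask`/`target` for a hypothetical color-brush stroke.

```python
import torch

# Stand-in generator again; `z0` would come from projecting the original photo.
G = torch.nn.Sequential(torch.nn.Linear(100, 64 * 64 * 3), torch.nn.Tanh())

def edit_on_manifold(z0, mask, target, weight=0.1, steps=200, lr=0.05):
    """Satisfy the user's edit while keeping the result close to the original image."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        edit_loss = torch.sum(mask * (G(z) - target) ** 2) / mask.sum()
        stay_close = torch.mean((z - z0) ** 2)   # don't drift far from the original
        (edit_loss + weight * stay_close).backward()
        opt.step()
    return G(z).detach()

z0 = torch.randn(1, 100)                      # hypothetical projection of the photo
mask = torch.zeros(1, 64 * 64 * 3); mask[:, :300] = 1.0
target = torch.full((1, 64 * 64 * 3), 0.8)    # desired color under the brush
edited = edit_on_manifold(z0, mask, target)
```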
Generative Image Transformation
Our system can automatically transform the shape and color of one image to look like another image.
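Because both images can be projected onto the same latent manifold, transforming one into the other amounts to walking between their latent codes and decoding each intermediate point. A minimal sketch of that idea, reusing the stand-in generator from the earlier snippets; in practice `z_src` and `z_dst` would be manifold projections of the two photos rather than random codes.

```python
import torch

# Stand-in generator; replace with a pretrained DCGAN in a real setup.
G = torch.nn.Sequential(torch.nn.Linear(100, 64 * 64 * 3), torch.nn.Tanh())

def transform(z_src, z_dst, n_frames=8):
    """Decode a straight walk in latent space from the source image to the target."""
    frames = []
    for t in torch.linspace(0.0, 1.0, n_frames):
        z = (1 - t) * z_src + t * z_dst       # stay on the learned manifold throughout
        frames.append(G(z).detach())
    return frames

z_src, z_dst = torch.randn(1, 100), torch.randn(1, 100)
sequence = transform(z_src, z_dst)            # shape and color morph gradually
```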
Related Work
- Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville and Yoshua Bengio. “Generative Adversarial Networks”, in NIPS 2014.
- Alec Radford, Luke Metz and Soumith Chintala. “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks”, in ICLR 2016.
- Jun-Yan Zhu, Yong Jae Lee and Alexei A. Efros. “AverageExplorer: Interactive Exploration and Alignment of Visual Data Collections”, in SIGGRAPH 2014.
- Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman and Alexei A. Efros. “Learning a Discriminative Model for the Perception of Realism in Composite Images”, in ICCV 2015.