2017 IEEE International Conference on Computer Vision (ICCV) (2017)
Oct. 22, 2017 to Oct. 29, 2017
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/ICCV.2017.168
We present an approach to synthesizing photographic images conditioned on semantic layouts. Given a semantic label map, our approach produces an image with photographic appearance that conforms to the input layout. The approach thus functions as a rendering engine that takes a two-dimensional semantic specification of the scene and produces a corresponding photographic image. Unlike recent and contemporaneous work, our approach does not rely on adversarial training. We show that photographic images can be synthesized from semantic layouts by a single feedforward network with appropriate structure, trained end-to-end with a direct regression objective. The presented approach scales seamlessly to high resolutions; we demonstrate this by synthesizing photographic images at 2-megapixel resolution, the full resolution of our training data. Extensive perceptual experiments on datasets of outdoor and indoor scenes demonstrate that images synthesized by the presented approach are considerably more realistic than those produced by alternative approaches.
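The coarse-to-fine, single-feedforward-pass idea the abstract describes can be sketched in a toy form as follows. This NumPy sketch is illustrative only: the function names, channel counts, random weights, and the pointwise linear "refinement" step are assumptions standing in for the paper's learned refinement modules, which this page does not specify.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def downsample_labels(labels, factor):
    # Subsample a one-hot semantic layout (L, H, W) by an integer factor.
    return labels[:, ::factor, ::factor]

def refine(features, layout, weight):
    # One toy refinement module: concatenate the layout (at the current
    # resolution) with the upsampled features, then apply a pointwise
    # linear map + ReLU as a stand-in for the module's learned layers.
    x = np.concatenate([features, layout], axis=0)   # (C_in, H, W)
    y = np.einsum('oc,chw->ohw', weight, x)          # 1x1 "convolution"
    return np.maximum(y, 0.0)

def cascaded_refinement(layout, base_res=4, channels=8, seed=0):
    # layout: one-hot semantic map of shape (L, H, W), H and W powers of 2.
    # The cascade starts at base_res x base_res and doubles resolution per
    # module until it reaches the full input resolution, then projects to RGB.
    rng = np.random.default_rng(seed)
    L, H, W = layout.shape
    n_modules = int(np.log2(H // base_res))
    factor = H // base_res
    feats = downsample_labels(layout, factor)        # coarsest layout
    for _ in range(n_modules):
        feats = upsample2x(feats)
        factor //= 2
        lay = downsample_labels(layout, factor)
        w = rng.normal(size=(channels, feats.shape[0] + L)) * 0.1
        feats = refine(feats, lay, w)
    w_rgb = rng.normal(size=(3, feats.shape[0])) * 0.1
    return np.einsum('oc,chw->ohw', w_rgb, feats)    # (3, H, W) image
```

In the actual approach such a network would be trained end-to-end with a direct regression objective against reference photographs, rather than run with random weights as here; the sketch only shows the single-pass, progressively-refined structure.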
Index Terms: feedforward neural nets, image resolution, regression analysis
Q. Chen and V. Koltun, "Photographic Image Synthesis with Cascaded Refinement Networks," 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 1520-1529.