Deformable Style Transfer

Sunnie S. Y. Kim¹   Nicholas Kolkin¹   Jason Salavon²   Gregory Shakhnarovich¹
¹Toyota Technological Institute at Chicago     ²University of Chicago


Each set of four images contains (from left to right) a content input, a style input, a standard style transfer output, and our proposed method's output.
 

Paper · Code · Demo · Talk · Bibtex

Abstract

Both geometry and texture are fundamental aspects of visual style. Existing style transfer methods, however, primarily focus on texture, almost entirely ignoring geometry. We propose deformable style transfer (DST), an optimization-based approach that jointly stylizes the texture and geometry of a content image to better match a style image. Unlike previous geometry-aware stylization methods, our approach is neither restricted to a particular domain (such as human faces), nor does it require training sets of matching style/content pairs. We demonstrate our method on a diverse set of content and style images including portraits, animals, objects, scenes, and paintings.

1-Minute Summary

Spatially Guided Style Transfer

The key idea of DST is to find a spatial deformation of the content image that brings it into alignment with the style image. This deformation is guided by a set of matching keypoints, chosen to maximize feature similarity between paired points in the two images. After roughly aligning the paired keypoints with a global rotation and scaling, a simple L2 loss encourages warping the output image so that the keypoints become spatially aligned. The deformation loss is regularized with a total variation penalty, which reduces artifacts caused by drastic deformations, and is combined with the more traditional content and style loss terms. DST's joint, regularized objective thus simultaneously encourages preserving content, matching style, and achieving the desired deformation, weighing these goals against one another. It can be minimized with standard iterative gradient-based techniques, as sketched below.
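
Schematically, DST minimizes a joint objective over the stylized image X and warp parameters θ:

L(X, θ) = L_content(X) + α L_style(X, θ) + β Σᵢ ‖fθ(pᵢ) − p′ᵢ‖² + γ TV(fθ),

where fθ is the deformation, pᵢ are the content keypoints, and p′ᵢ their aligned style matches. (This form and its weights are our schematic reading of the summary above, not a transcription from the paper.) Below is a minimal PyTorch-style sketch of this joint optimization, written under our own assumptions rather than taken from the released code: the content and style terms are trivial stand-ins for the perceptual losses used in practice (e.g., Gatys or STROTSS), the warp is a dense displacement field rather than the paper's keypoint-based parameterization, and all helper names and weights are hypothetical.

import torch
import torch.nn.functional as F

def warp(img, disp):
    # Backward-warp `img` (1,C,H,W) by a dense displacement field `disp`
    # (1,H,W,2), expressed in normalized [-1, 1] grid coordinates.
    _, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0)  # identity sampling grid
    return F.grid_sample(img, base + disp, align_corners=True)

def tv_penalty(disp):
    # Total variation of the displacement field: discourages drastic,
    # non-smooth deformations that would cause artifacts.
    return ((disp[:, 1:] - disp[:, :-1]).abs().mean()
            + (disp[:, :, 1:] - disp[:, :, :-1]).abs().mean())

def keypoint_loss(disp, src_pts, dst_pts):
    # With backward warping, the output pixel at a target keypoint samples the
    # input at (target + displacement). Encourage it to sample the matched
    # source keypoint, so content at src_pts lands at dst_pts after warping.
    grid = dst_pts.view(1, -1, 1, 2)                     # (1,N,1,2)
    d = F.grid_sample(disp.permute(0, 3, 1, 2), grid,
                      align_corners=True)                # (1,2,N,1)
    sampled_at = dst_pts + d.squeeze(-1).squeeze(0).t()  # (N,2)
    return ((sampled_at - src_pts) ** 2).sum(dim=1).mean()

# Placeholder inputs; a real run would load images and Neural Best-Buddies
# keypoints, with the style points pre-aligned by a rotation and scaling.
content = torch.rand(1, 3, 128, 128)
style = torch.rand(1, 3, 128, 128)
src_pts = torch.rand(5, 2) * 2 - 1              # content keypoints in [-1, 1]
dst_pts = src_pts + 0.05 * torch.randn(5, 2)    # aligned style keypoints

X = content.clone().requires_grad_(True)        # stylized image being optimized
disp = torch.zeros(1, 128, 128, 2, requires_grad=True)
opt = torch.optim.Adam([X, disp], lr=1e-2)
alpha, beta, gamma = 1.0, 0.5, 10.0             # hypothetical loss weights

for step in range(200):
    opt.zero_grad()
    warped = warp(X, disp)
    loss = (X - content).pow(2).mean()          # stand-in for a content loss
    loss = loss + alpha * (warped.mean(dim=(2, 3))
                           - style.mean(dim=(2, 3))).pow(2).mean()  # stand-in style loss
    loss = loss + beta * keypoint_loss(disp, src_pts, dst_pts)
    loss = loss + gamma * tv_penalty(disp)
    loss.backward()
    opt.step()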

Results

Each set of three images contains (from left to right) a content input, a style input, and our method's output.
 

Face Paintings

 

Animals and Objects

 

Face Caricatures

 

One Content to Multiple Styles

 

Multiple Contents to One Style

Related Work

Style Transfer by Relaxed Optimal Transport and Self-Similarity. Nicholas Kolkin, Jason Salavon and Gregory Shakhnarovich. CVPR 2019.
Comment: A one-shot, optimization-based style transfer method with which we demonstrate our DST framework.
 
Image Style Transfer Using Convolutional Neural Networks. Leon A. Gatys, Alexander S. Ecker and Matthias Bethge. CVPR 2016.
Comment: A one-shot, optimization-based style transfer method with which we demonstrate our DST framework.
 
Neural Best-Buddies: Sparse Cross-Domain Correspondence. Kfir Aberman, Jing Liao, Mingyi Shi, Dani Lischinski, Baoquan Chen and Daniel Cohen-Or. SIGGRAPH 2018.
Comment: A generic point matching method that we use to find correspondences between images and guide the spatial deformation.
 
The Face of Art: Landmark Detection and Geometric Style in Portraits. Jordan Yaniv, Yael Newman and Ariel Shamir. SIGGRAPH 2019.
Comment: A method for detecting facial landmarks and transferring geometric style of artistic portraits. We compare our DST results with results from this method.
 
WarpGAN: Automatic Caricature Generation. Yichun Shi, Debayan Deb and Anil K. Jain. CVPR 2019.
Comment: A method for generating caricatures with geometric deformations given an input face photo. We compare our DST results with results from this method.
 
Geometric Style Transfer. Xiao-Chang Liu, Xuan-Yi Li, Ming-Ming Cheng and Peter Hall. arXiv 2020.
Comment: A geometry-aware style transfer method that represents geometry with features extracted from pre-trained convolutional neural networks.

Contact

Sunnie S. Y. Kim (sunnie@ttic.edu)