Both geometry and texture are fundamental aspects of visual style. Existing style transfer methods,
however, primarily focus on texture, almost entirely ignoring geometry. We propose deformable style
transfer (DST), an optimization-based approach that jointly stylizes the texture and geometry of a
content image to better match a style image. Unlike previous geometry-aware stylization methods,
our approach is neither restricted to a particular domain (such as human faces), nor does it require
training sets of matching style/content pairs. We demonstrate our method on a diverse set of content
and style images including portraits, animals, objects, scenes, and paintings.
1 Minute Summary
Spatially Guided Style Transfer
The key idea of DST is to find a spatial deformation of the content image that
brings it into spatial alignment with the style image. This deformation is guided
by a set of matching keypoints, chosen to maximize the feature similarity between
paired keypoints of the two images. After roughly aligning the paired keypoints
with a global rotation and scaling, a simple L2 loss encourages warping our
output image so that the keypoints become spatially aligned. This deformation
loss is regularized with a total variation penalty to reduce artifacts due to drastic
deformations, and combined with the more traditional style and content loss terms.
DST's joint, regularized objective simultaneously encourages preserving content,
minimizing the style loss, and obtaining the desired deformation, weighing these
goals against each other. This objective can be minimized using standard iterative techniques.
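The combined objective described above can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch, not the paper's implementation: `deformation_loss` is the L2 keypoint-alignment term, `total_variation` is the regularizer on a dense displacement field, and `dst_objective` combines them with placeholder content and style losses. The function names and the weights `alpha`, `beta`, `gamma` are hypothetical, chosen for illustration only.

```python
import numpy as np

def deformation_loss(src_pts, dst_pts, kp_disp):
    # L2 loss between deformed source keypoints and their matched
    # style keypoints; src_pts, dst_pts, kp_disp have shape (K, 2).
    moved = src_pts + kp_disp
    return np.mean(np.sum((moved - dst_pts) ** 2, axis=1))

def total_variation(field):
    # Total variation penalty on a dense displacement field of shape
    # (H, W, 2): absolute differences between neighboring displacements,
    # discouraging drastic, artifact-prone deformations.
    dh = np.abs(np.diff(field, axis=0)).sum()
    dw = np.abs(np.diff(field, axis=1)).sum()
    return dh + dw

def dst_objective(content_loss, style_loss, src_pts, dst_pts,
                  kp_disp, field, alpha=1.0, beta=0.5, gamma=0.01):
    # Joint objective: content + style + keypoint alignment + TV
    # regularizer. The weights here are illustrative, not the paper's.
    return (content_loss
            + alpha * style_loss
            + beta * deformation_loss(src_pts, dst_pts, kp_disp)
            + gamma * total_variation(field))
```

In practice each term would be differentiable (e.g. in PyTorch) and the displacement field would be updated by gradient-based iterative optimization, jointly with the stylized image.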
Each set of three images contains (from left to right) a content input, a style input, and our method's output.