Generative adversarial networks are getting a lot of press as the general public raises fears about forgery techniques, but the data science community has been tracking their development, precision, benefits, and threats for years. Recently, though, the team of Yuval Nirkin, Yosi Keller (both of Bar-Ilan University), and Tal Hassner (the Open University of Israel) made major breakthroughs in face swapping, both reducing the effort it takes to create swapped images and improving the quality of the results.
[Related Article: Latest Developments in GANs]
In a recent paper, they discuss and compare their model to three other face-swapping methods: Face2Face, Nirkin et al. (an earlier paper with a similar approach), and DeepFakes. Despite the surface similarity of these methods, the paper distinguishes its model in three ways:
First, they developed subject-agnostic face swapping. By this, they mean they've removed subject-specific training from the equation. Instead of training a model on individual people or pairings (as older approaches did), they trained a single model that can be applied with ease to any pairing, without retraining; a rough sketch of the difference appears below.
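To make the distinction concrete, here is a minimal PyTorch-style sketch (the module and architecture are hypothetical, not the authors' code) contrasting per-pair training with a subject-agnostic generator that simply conditions on both faces at inference time:

```python
import torch
import torch.nn as nn

class SwapGenerator(nn.Module):
    """Toy subject-agnostic generator: takes a source face and a target
    face and produces a swapped result. Architecture is illustrative only."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, source, target):
        # Condition on both faces: no per-identity fine-tuning required.
        return self.net(torch.cat([source, target], dim=1))

# Older, subject-specific pipelines train one model per identity pair, e.g.:
#   model_alice_to_bob = train(pair_dataset_alice_bob)  # retrain for every new pair
# A subject-agnostic model is trained once on many identities and then
# applied directly to any previously unseen pairing:
generator = SwapGenerator()
source = torch.rand(1, 3, 256, 256)   # face to transfer
target = torch.rand(1, 3, 256, 256)   # face/context to transfer onto
swapped = generator(source, target)   # works for any pair, no retraining
print(swapped.shape)                  # torch.Size([1, 3, 256, 256])
```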
Second, they integrated a novel method of multiple-view interpolation: training their models to handle source and target images with different parts of the face showing. This is done by predicting the missing portions of the face, then inpainting and blending the source face onto the target. In previous GANs, faces could come out distorted or fuzzy in cases like this.
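As a rough illustration of the final blending step (not the paper's actual network), the composite can be thought of as a mask-weighted combination of the reenacted source face and the target frame, after an inpainting model has filled in any occluded regions. A minimal NumPy sketch, assuming a pre-computed segmentation mask:

```python
import numpy as np

def composite_face(source_face, target_frame, mask):
    """Soft-blend a (reenacted, inpainted) source face onto the target frame.

    source_face, target_frame: float arrays in [0, 1], shape (H, W, 3)
    mask: float array in [0, 1], shape (H, W, 1); 1 where the source
          face should appear, 0 where the target frame should be kept.
    """
    return mask * source_face + (1.0 - mask) * target_frame

# Toy example with random images and a centered circular mask.
h, w = 128, 128
source = np.random.rand(h, w, 3)
target = np.random.rand(h, w, 3)
yy, xx = np.mgrid[:h, :w]
mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2 < (h / 3) ** 2)
mask = mask.astype(np.float32)[..., None]

blended = composite_face(source, target, mask)
print(blended.shape)  # (128, 128, 3)
```

In practice the mask comes from a face segmentation network and the blend is driven by a learned blending model rather than a fixed alpha composite, but the masked-combination idea is the same.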
Third, they introduced new loss functions. These include "a stepwise consistency loss, for training face reenactment progressively in small steps, and a Poisson blending loss, to train the face blending network to seamlessly integrate the source face into its new context."
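To give a feel for the Poisson blending idea, here is a hedged PyTorch sketch of a gradient-domain loss in that spirit (an approximation of the concept, not the paper's exact formulation): inside the face mask the blended image is pushed to match the *gradients* of the source face, while outside the mask it is pushed to match the target pixels.

```python
import torch
import torch.nn.functional as F

def image_gradients(img):
    """Finite-difference gradients of a batch of images (B, C, H, W)."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def poisson_blend_loss(blended, source, target, mask, lam=1.0):
    """Illustrative Poisson-style blending loss (not the authors' exact loss)."""
    bdx, bdy = image_gradients(blended)
    sdx, sdy = image_gradients(source)
    mdx = mask[..., :, 1:]   # crop mask to match gradient shapes
    mdy = mask[..., 1:, :]
    # Match source gradients inside the mask...
    grad_term = F.l1_loss(mdx * bdx, mdx * sdx) + F.l1_loss(mdy * bdy, mdy * sdy)
    # ...and target pixel values outside it.
    fidelity_term = F.l1_loss((1 - mask) * blended, (1 - mask) * target)
    return grad_term + lam * fidelity_term

# Toy usage with random tensors standing in for network outputs.
blended = torch.rand(2, 3, 64, 64, requires_grad=True)
source = torch.rand(2, 3, 64, 64)
target = torch.rand(2, 3, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = poisson_blend_loss(blended, source, target, mask)
loss.backward()
print(loss.item())
```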
[Related Article: 6 Unique GANs Use Cases]
Overall, it’s an impressive new GAN-based method. Through both qualitative (as seen above) and quantitative (within the paper) results, they show how their approach improves on prior techniques and how quickly swaps can be generated. And, mindful of the public’s fear of these technologies, they close the paper by urging other research teams to develop counter-methods for detecting forgeries: “suppressing the publication of such methods would not stop their development, but rather make them available to select few and potentially blindside policy makers if it is misused.”