Latest Developments in GANs

Generative adversarial networks (GANs) are a compelling technology, widely considered one of the most interesting developments in AI and deep learning of the past decade. This article provides an overview of the ODSC West 2018 talk “Latest Developments in GANs,” presented by Seth Weidman of Facebook. The presentation is an excellent way to get up to speed with GANs quickly and get a sense of the state of the technology.

[Related Article: 6 Unique GANs Use Cases]

GANs were first conceived in the groundbreaking 2014 research paper “Generative Adversarial Networks” by Ian Goodfellow et al. Goodfellow was the Google intern who, in just a few months, took the massive Street View House Numbers dataset and developed a deep convolutional neural network (CNN) that could look at an image and read off the street number. Another seminal paper that exploded interest in GANs and got a lot of people’s attention was “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks” by Alec Radford et al. This 2016 paper combined batch normalization with a deep convolutional architecture, and it is renowned for its images of entirely synthetic bedrooms. Another 2016 paper Weidman mentions is “Improved Techniques for Training GANs” by Tim Salimans et al. Finally, Ian Goodfellow’s 2017 tutorial, “NIPS 2016 Tutorial: Generative Adversarial Networks,” can help you go further into this rapidly evolving area of deep learning.
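
To make the DCGAN recipe concrete, here is a minimal sketch of a Radford-style generator: a stack of transposed convolutions, each followed by batch normalization and a ReLU, that maps a random noise vector to a 64x64 image. This is an illustrative PyTorch reconstruction of the general architecture, not code from the paper or the talk, and the layer sizes and hyperparameters are assumptions.

    import torch
    import torch.nn as nn

    class DCGANGenerator(nn.Module):
        """DCGAN-style generator: noise vector -> 64x64 RGB image."""
        def __init__(self, noise_dim=100, feature_maps=64):
            super().__init__()
            self.net = nn.Sequential(
                # Project the noise vector to a 4x4 feature map.
                nn.ConvTranspose2d(noise_dim, feature_maps * 8, 4, 1, 0, bias=False),
                nn.BatchNorm2d(feature_maps * 8),
                nn.ReLU(inplace=True),
                # 4x4 -> 8x8
                nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
                nn.BatchNorm2d(feature_maps * 4),
                nn.ReLU(inplace=True),
                # 8x8 -> 16x16
                nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
                nn.BatchNorm2d(feature_maps * 2),
                nn.ReLU(inplace=True),
                # 16x16 -> 32x32
                nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
                nn.BatchNorm2d(feature_maps),
                nn.ReLU(inplace=True),
                # 32x32 -> 64x64; tanh squashes pixel values to [-1, 1]
                nn.ConvTranspose2d(feature_maps, 3, 4, 2, 1, bias=False),
                nn.Tanh(),
            )

        def forward(self, z):
            return self.net(z)

    # Sample a batch of 16 fake images from random noise.
    generator = DCGANGenerator()
    z = torch.randn(16, 100, 1, 1)
    fake_images = generator(z)  # shape: (16, 3, 64, 64)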

Since their inception, GANs have seen huge success as an exciting and rapidly changing field, delivering on the promise of generative models through their ability to generate realistic examples across a range of problem domains, most notably in image-to-image translation tasks. A GAN is a method of training neural networks to generate images similar to the data they were trained on, and the training is done with an adversarial process: a generator network produces candidate images while a discriminator network learns to tell them apart from real ones, with each network improving in response to the other.
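As a rough sketch of that adversarial process, a single training step alternates between updating the discriminator to separate real images from generated ones and updating the generator to fool it. The PyTorch code below illustrates the idea under the original GAN paper’s loss formulation; it is not Weidman’s code, and the network architectures and optimizers are left to the caller as assumptions.

    import torch
    import torch.nn as nn

    def gan_training_step(generator, discriminator, real_images,
                          g_optimizer, d_optimizer, noise_dim=100):
        """One adversarial step: the discriminator learns to separate real
        images from fakes, then the generator learns to fool it."""
        bce = nn.BCEWithLogitsLoss()
        batch_size = real_images.size(0)
        real_labels = torch.ones(batch_size, 1)
        fake_labels = torch.zeros(batch_size, 1)

        # --- Discriminator update ---
        noise = torch.randn(batch_size, noise_dim)
        fake_images = generator(noise).detach()  # no gradients into G here
        d_loss = (bce(discriminator(real_images), real_labels)
                  + bce(discriminator(fake_images), fake_labels))
        d_optimizer.zero_grad()
        d_loss.backward()
        d_optimizer.step()

        # --- Generator update ---
        noise = torch.randn(batch_size, noise_dim)
        # The generator is rewarded when the discriminator labels fakes "real".
        g_loss = bce(discriminator(generator(noise)), real_labels)
        g_optimizer.zero_grad()
        g_loss.backward()
        g_optimizer.step()
        return d_loss.item(), g_loss.item()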

Weidman starts his presentation by covering the fundamentals of how and why GANs work, reviewing basic neural network and deep learning terminology along the way. He then covers the latest applications of GANs, from generating art from drawings to advancing research areas such as semi-supervised learning, and even generating audio. He also examines progress on improving GANs themselves, showing the tricks researchers have used to increase the realism of the images GANs generate. Throughout, he touches on many related topics, such as different ways of scoring GANs and the deep learning techniques that have been found to improve training. Finally, Weidman closes with some speculation from leading minds in the field on where we are most likely to see GANs applied next.

Developments in GANs

If you’re a data scientist wanting to move into the deep learning area of GANs, this talk will leave you with a better understanding of the latest developments in this exciting area and the technical innovations that made those developments possible. The talk illuminates why the latest achievements work, not just what they are, so you can apply the same methods to problems you face in personal projects or at work with greater confidence. To take a deeper dive into GANs, check out Seth’s compelling talk (audio only) from ODSC West 2018.

[Related Article: Efficient, Simplistic Training Pipelines for GANs in the Cloud with Paperspace]

Key Takeaways:

  • GANs are a relatively new addition to deep learning with a fast-growing following in the AI research community, resulting in many recent papers that push the technology beyond its early boundaries.
  • GANs were invented by Ian Goodfellow and colleagues in 2014.
  • Given a training set, a GAN learns to generate new data with the same statistics as the training set, e.g. a GAN trained on photographs can generate new photographs that appear superficially authentic to a human observer.
  • GANs were originally proposed as a form of generative model for unsupervised learning, but they have also proven useful for semi-supervised learning (see the sketch below).
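
One way GANs support semi-supervised learning, described in the Salimans et al. paper above, is to turn the discriminator into a classifier over the K real classes plus an implicit “fake” class, so that a small labeled set and a large unlabeled set can be trained on together. The sketch below is a simplified, assumption-laden illustration of that loss in PyTorch, not a reproduction of the paper’s code; the classifier and generator networks are supplied by the caller.

    import torch
    import torch.nn.functional as F

    def semi_supervised_d_loss(classifier, generator, labeled_x, labels,
                               unlabeled_x, noise_dim=100):
        """Discriminator loss for a semi-supervised GAN: the classifier
        outputs K class logits, and the probability that an input is real
        is modelled against an implicit K+1-th "fake" class."""
        # Supervised term: ordinary cross-entropy on the small labeled set.
        supervised = F.cross_entropy(classifier(labeled_x), labels)

        def real_logit(x):
            # logsumexp over the K class logits acts as a real-vs-fake logit.
            return torch.logsumexp(classifier(x), dim=1)

        noise = torch.randn(unlabeled_x.size(0), noise_dim)
        fake_x = generator(noise).detach()
        unsup_real = F.softplus(-real_logit(unlabeled_x)).mean()  # -log D(x)
        unsup_fake = F.softplus(real_logit(fake_x)).mean()        # -log(1 - D(G(z)))
        return supervised + unsup_real + unsup_fake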
