shyam u Sept 14 2021 · 2 min read

GAN

In short, generative adversarial networks (GANs) are an exciting recent innovation in machine learning. GANs are generative models: they create new data instances that resemble your training data.

What are generative models?

In statistical classification there are two main approaches: the discriminative approach and the generative approach.

Discriminative models, in short, are classifiers: they take the features/attributes X as input and predict the class that the given input belongs to. These models give the probability of the class Y given the features X.

Mathematically it is given as : P(Y|X)

Generative models, on the other hand, take the class Y (often together with some random noise) as input and try to produce a set of features X as output. These models give the probability of X given the class Y.

Mathematically it is given as : P(X|Y)
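To make the two directions concrete, here is a toy sketch in NumPy. It uses a made-up model (one Gaussian per class, with parameters chosen purely for illustration) to show the generative direction, sampling X from P(X|Y), and how the discriminative quantity P(Y|X) can be recovered from it via Bayes' rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model: each class Y has its own Gaussian over the feature X.
# The means, std, and priors below are illustrative assumptions, not fitted values.
class_means = {0: -2.0, 1: 2.0}
class_std = 1.0
prior = {0: 0.5, 1: 0.5}  # P(Y)

def sample_x_given_y(y, n=5):
    """Generative direction: draw features X from P(X|Y)."""
    return rng.normal(class_means[y], class_std, size=n)

def p_x_given_y(x, y):
    """Density of the class-conditional Gaussian P(X|Y)."""
    return np.exp(-0.5 * ((x - class_means[y]) / class_std) ** 2) / (
        class_std * np.sqrt(2 * np.pi)
    )

def p_y_given_x(x, y):
    """Discriminative quantity P(Y|X), obtained from the generative model
    via Bayes' rule: P(Y|X) = P(X|Y) P(Y) / sum over y' of P(X|y') P(y')."""
    evidence = sum(p_x_given_y(x, k) * prior[k] for k in prior)
    return p_x_given_y(x, y) * prior[y] / evidence

samples = sample_x_given_y(1)    # new feature values generated for class 1
posterior = p_y_given_x(2.0, 1)  # P(Y=1 | X=2.0), close to 1 for this model
```

A discriminative model would learn P(Y|X) directly and could never produce `samples`; the generative model supports both uses.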

There are many types of generative models:

• Gaussian mixture model
• Hidden Markov model
• Probabilistic context-free grammar
• Bayesian network
• Averaged one-dependence estimators
• Latent Dirichlet allocation
• Boltzmann machine
• Variational autoencoder
• Flow-based generative model
• Energy based model
• Generative adversarial network (the topic of interest for the rest of this document)

A generative adversarial network is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014.

Basic principle :

GANs are a type of deep neural network framework that allows us to generate data.

The two neural networks that make up a GAN are referred to as the generator and the discriminator. For image data, the generator is typically a deconvolutional (transposed-convolution) network, while the discriminator is a convolutional network. The goal of the generator is to artificially manufacture outputs that could easily be mistaken for real data. The goal of the discriminator is to identify which of the outputs it receives have been artificially created (in other words, to classify between real and fake data).

Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution.
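The mapping described above can be sketched with two tiny PyTorch networks. The layer sizes and the use of simple fully connected layers are assumptions for readability; real GANs typically use (transposed) convolutions as noted earlier:

```python
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64  # illustrative sizes, not canonical choices

# Generator: maps a latent vector z to a fake data sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, DATA_DIM),
    nn.Tanh(),  # outputs scaled to [-1, 1], a common convention for images
)

# Discriminator: maps a data sample to the probability that it is real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128),
    nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
    nn.Sigmoid(),
)

z = torch.randn(8, LATENT_DIM)   # batch of latent noise
fake = generator(z)              # (8, DATA_DIM) generated samples
score = discriminator(fake)      # (8, 1) probabilities of being real
```

The key point is the shapes: the generator's input lives in the latent space, its output lives in the data space, and the discriminator collapses a data-space sample to a single real/fake probability.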

Generator :

The main objective of the generator is to fool the discriminator by creating realistic data. It takes the feedback from the discriminator and learns to create more realistic images.

Discriminator :

The discriminator in a GAN is simply a classifier. It tries to distinguish real data from the data created by the generator. It could use any network architecture appropriate to the type of data it's classifying.

The discriminator's training data comes from two sources:

• Real data instances, such as real pictures of people. The discriminator uses these instances as positive examples during training.
• Fake data instances created by the generator. The discriminator uses these instances as negative examples during training.
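Assembling one such training batch can be sketched as follows. The random tensors stand in for real data and generator output, and the batch size and feature dimension are arbitrary placeholders:

```python
import torch

batch, data_dim = 4, 64                    # illustrative sizes

real = torch.randn(batch, data_dim)        # stand-in for real data instances
fake = torch.randn(batch, data_dim)        # stand-in for generator output

# Positive examples (label 1) for real data, negative examples (label 0) for fake.
inputs = torch.cat([real, fake], dim=0)    # (2 * batch, data_dim)
labels = torch.cat([torch.ones(batch, 1),
                    torch.zeros(batch, 1)])
```

The discriminator is then trained on `inputs` against `labels` exactly like any binary classifier.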
Use cases :

The applications of GANs span many domains. GANs are used widely in image generation, video generation and voice generation, and they are not limited to these: they are also used in medical image synthesis. Listed below are a few use cases of GANs:

• Generate Examples for Image Datasets
• Generate Photographs of Human Faces
• Generate Realistic Photographs
• Generate Cartoon Characters
• Image-to-Image Translation
• Text-to-Image Translation
• Semantic-Image-to-Photo Translation
• Face Frontal View Generation
• Generate New Human Poses
• Photos to Emojis
• Photograph Editing
• Face Aging
• Photo Blending
• Super Resolution
• Photo Inpainting
• Clothing Translation
• Video Prediction
• 3D Object Generation
GAN Loss Function :

The GAN architecture consists of two models: a discriminator and a generator. The discriminator is trained directly on real and generated images and is responsible for classifying images as real or fake (generated). The generator is not trained directly and instead is trained via the discriminator model.

For the GAN, the generator and discriminator are the two players, and they take turns updating their model weights. The min and max refer to the same value function, which the generator seeks to minimize and the discriminator seeks to maximize.

As part of this game, the discriminator seeks to maximize the average of the log probability of real images and the log of the inverse probability for fake images.

discriminator: maximize log D(x) + log(1 – D(G(z)))

The generator seeks to minimize the log of the inverse probability predicted by the discriminator for fake images. This has the effect of encouraging the generator to generate samples that have a low probability of being fake.

generator: minimize log(1 – D(G(z)))
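The two objectives above can be implemented as one alternating training step. This is a minimal sketch with tiny assumed networks and batch sizes; note that `BCELoss` with labels 1 and 0 reproduces the log terms above, and that the generator step below uses the equivalent "non-saturating" form (maximizing log D(G(z)) via label 1), which is common in practice:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 32               # illustrative sizes
G = nn.Sequential(nn.Linear(latent_dim, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 1), nn.Sigmoid())
opt_d = torch.optim.SGD(D.parameters(), lr=0.01)
opt_g = torch.optim.SGD(G.parameters(), lr=0.01)
bce = nn.BCELoss()

real = torch.randn(16, data_dim)           # stand-in for real data x
z = torch.randn(16, latent_dim)            # latent noise z
ones, zeros = torch.ones(16, 1), torch.zeros(16, 1)

# Discriminator step: maximize log D(x) + log(1 - D(G(z))),
# i.e. minimize BCE with label 1 for real and label 0 for fake.
# detach() stops this step from updating the generator.
d_loss = bce(D(real), ones) + bce(D(G(z).detach()), zeros)
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator by pushing D(G(z)) toward 1,
# i.e. minimize BCE of D(G(z)) against label 1.
g_loss = bce(D(G(z)), ones)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a full training loop these two steps simply repeat over minibatches, with each network improving against the other.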