Transfer Learning

#vgg16 #lenet5 #alexnet #resnet #inception

surya gokul Feb 17 2021 · 3 min read

Transfer Learning: transferring the knowledge of a trained model to another task.

Transfer learning builds on advanced CNN architectures that have been pre-trained on millions of images. We reuse those pretrained weights in our own custom model, adapting only the input and output to the problem we want to solve. Here it helps to know about a competition called ImageNet (ILSVRC), which started in 2010. Its dataset contains millions of images spread across 1000 classes, and the competition was held every year; anyone could participate and had the freedom to design the architecture that gives the highest accuracy. Many state-of-the-art models were trained through this competition. Some of them are given below:

1. LeNet
2. AlexNet
3. VGG
4. ResNet
5. InceptionNet
6. MobileNet, etc.

These are said to be the `state-of-the-art CNN architectures`, because each was the best at its particular time; no other models gave that much accuracy. For all of these networks, we should look at `how many parameters there are`, i.e. the {weights, biases}.

To see all the available models and their parameters, refer to:

https://keras.io/api/applications/

Keras Available Models
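As a minimal sketch (assuming TensorFlow/Keras is installed), any of these models can be loaded with its ImageNet weights and inspected for its parameter count:

```python
from tensorflow.keras.applications import VGG16

model = VGG16(weights="imagenet")  # downloads the ImageNet-trained weights
model.summary()                    # layer-by-layer breakdown of weights and biases
print(model.count_params())        # total parameters (~138 million for VGG16)
```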

In real-world projects we don't take the entire architecture of the pretrained model, i.e. its input layer and output layer, because the pretrained model's inputs and outputs are different from our custom project's: we may use a different input size and a different set of output classes. For this reason we do not take the entire architecture.

The sketch below shows how pretrained models are actually used in practice.
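This is only an assumption-level sketch (TensorFlow/Keras, VGG16 as the base, a hypothetical 5-class dataset): keep the pretrained convolutional base, drop its original ImageNet classifier with `include_top=False`, freeze the pretrained weights, and attach a new output layer of our own.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Pretrained convolutional base without the original input/output (classifier) head
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained weights; only the new head is trained

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(5, activation="softmax"),  # hypothetical 5 output classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```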

How to use these models for any project?

  • Build a CNN from scratch.
  • Tune a CNN with Keras Tuner.
  • Use VGG16 (which gives approximately 88% accuracy or more).
  • If VGG16 doesn't work, then use ResNet50.
  • If ResNet50 also doesn't work, use InceptionV3 (see the sketch after this list).
  • Almost any classification problem can be solved using InceptionNet.
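Swapping the backbone in the earlier sketch is essentially a one-line change; for example (again an assumption-level sketch, not a fixed recipe):

```python
from tensorflow.keras.applications import ResNet50, InceptionV3

# Try ResNet50 if VGG16 underperforms...
base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
# ...and InceptionV3 as the next fallback (its canonical input size is 299x299)
# base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
```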

LeNet5 Architecture

It is the oldest of these networks, developed by Yann LeCun and Yoshua Bengio in 1998. If you want to read the paper, refer to http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf

(Figures: architecture of LeNet5; LeNet layers and their features)
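Since the figures are not reproduced here, below is a hedged Keras sketch of the LeNet-5 layout described in the paper: a 32x32 grayscale input, two conv + average-pooling stages with tanh activations, then fully connected layers of 120, 84, and 10 units.

```python
from tensorflow.keras import layers, models

lenet5 = models.Sequential([
    layers.Conv2D(6, (5, 5), activation="tanh", input_shape=(32, 32, 1)),  # C1
    layers.AveragePooling2D((2, 2)),                                       # S2
    layers.Conv2D(16, (5, 5), activation="tanh"),                          # C3
    layers.AveragePooling2D((2, 2)),                                       # S4
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),                                  # C5
    layers.Dense(84, activation="tanh"),                                   # F6
    layers.Dense(10, activation="softmax"),                                # output (10 digits)
])
lenet5.summary()
```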

How to calculate the total parameters?
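As a worked example, the usual formula for a convolutional layer is: parameters = (kernel_height × kernel_width × input_channels + 1) × number_of_filters, where the +1 is each filter's bias. (This assumes full connectivity between channels; the original LeNet-5 paper uses a sparser connection scheme for C3, so its published count is smaller.)

```python
def conv_params(kh, kw, in_ch, filters):
    # (weights per filter + 1 bias) * number of filters
    return (kh * kw * in_ch + 1) * filters

print(conv_params(5, 5, 1, 6))    # C1: (5*5*1 + 1) * 6  = 156 parameters
print(conv_params(5, 5, 6, 16))   # C3: (5*5*6 + 1) * 16 = 2416 (full connectivity assumed)
```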

Disadvantages of LeNet

  • The activation function used in LeNet is tanh, which suffers from the vanishing gradient problem.
  • Average pooling is used, which is a concern for LeNet because it doesn't focus on any particular feature; we don't get any clarity about the important features of an image.

But in the case of max pooling, we focus on the particular feature that matters most in each region.

Therefore we say that average pooling does not extract the most important information.
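A tiny illustration of the difference (a toy 4x4 feature map with 2x2 pooling windows, assuming NumPy):

```python
import numpy as np

fmap = np.array([[1, 3, 2, 0],
                 [5, 6, 1, 2],
                 [0, 1, 9, 4],
                 [2, 3, 4, 8]], dtype=float)

def pool(x, reduce_fn, size=2):
    # slide non-overlapping size x size windows and reduce each one
    return np.array([[reduce_fn(x[i:i + size, j:j + size])
                      for j in range(0, x.shape[1], size)]
                     for i in range(0, x.shape[0], size)])

print(pool(fmap, np.max))   # max pooling keeps the strongest activation per window
print(pool(fmap, np.mean))  # average pooling blurs strong and weak activations together
```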

That's all about the LeNet architecture. I will come back with AlexNet.
