Convolutional Autoencoder for Dummies

Grzesiek's post about how easy it is to implement a convolutional autoencoder in Lasagne!

Grzegorz Gwardys

Each day, I become a bigger fan of Lasagne. Recently, after seeing some cool results from a Variational Autoencoder trained on Blade Runner, I tried to implement a much simpler Convolutional Autoencoder, trained on a much simpler dataset – MNIST. The task turned out to be really easy, thanks to two layers that already exist in Lasagne: Deconv2DLayer and Upscale2DLayer. My Convolutional Autoencoder consists of two stages:

  1. Coding, which consists of convolutions and max-poolings;
  2. Decoding, which consists of upscalings and deconvolutions.

[Figure: outline of the Convolutional Autoencoder]
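
The two stages above can be sketched in a few lines of Lasagne. This is a minimal sketch of my own, not the original post's code – all filter counts and layer sizes are illustrative assumptions:

import lasagne

def build_conv_autoencoder(input_var=None):
    # encoder: convolutions + max-poolings, 28x28 -> 7x7
    net = lasagne.layers.InputLayer(shape=(None, 1, 28, 28), input_var=input_var)
    net = lasagne.layers.Conv2DLayer(net, num_filters=16, filter_size=(3, 3), pad='same')
    net = lasagne.layers.MaxPool2DLayer(net, pool_size=(2, 2))
    net = lasagne.layers.Conv2DLayer(net, num_filters=8, filter_size=(3, 3), pad='same')
    net = lasagne.layers.MaxPool2DLayer(net, pool_size=(2, 2))
    # decoder: upscalings + deconvolutions, 7x7 -> 28x28
    net = lasagne.layers.Upscale2DLayer(net, scale_factor=2)
    net = lasagne.layers.Deconv2DLayer(net, num_filters=8, filter_size=(3, 3), crop='same')
    net = lasagne.layers.Upscale2DLayer(net, scale_factor=2)
    net = lasagne.layers.Deconv2DLayer(net, num_filters=1, filter_size=(3, 3), crop='same',
                                       nonlinearity=lasagne.nonlinearities.sigmoid)
    return net

Training then just minimizes lasagne.objectives.squared_error between the reconstruction and the input batch.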

The one thought experiment you need to see how easy this is: deconvolutions are just convolutions! What is more, if you have read my post Convolutional Neural Networks backpropagation: from intuition to derivation, you have already seen this concept in the backpropagation phase!

Citing myself (I feel really embarrassed now about this didactic tone…):

Yeah, it is a bit different convolution than…
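
To make that equivalence concrete, here is a tiny NumPy check of my own (not from the original post): a stride-1 "deconvolution", i.e. a transposed convolution, of a 1-D signal is exactly a full convolution of that signal with the kernel.

import numpy as np

x = np.array([1.0, 2.0, 3.0])   # input signal
w = np.array([0.5, 1.0, 0.25])  # kernel

# Stride-1 "deconvolution" (transposed convolution): each input value
# stamps a scaled copy of the kernel onto the output.
deconv = np.zeros(len(x) + len(w) - 1)
for i, xi in enumerate(x):
    deconv[i:i + len(w)] += xi * w

# ...which is exactly a plain full convolution of x with w.
# (Deep-learning "convolutions" are really cross-correlations, so in
# framework terms this shows up as a full convolution with the
# 180-degree-flipped filter.)
assert np.allclose(deconv, np.convolve(x, w, mode='full'))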

View original post 421 more words

A simple example of Theano and Lasagne's super powers

Grzesiek's post about how easy it is to experiment in Lasagne!

Grzegorz Gwardys

I mentioned in my initial post "Deep Learning Frameworks Overview" that my Deep Learning library of choice is (at least for now) the Theano and Lasagne combination. However, in all my posts I have not yet used the most important word: experiment. So, let's assume that you have some idea and want to test it quickly. For example, what if we add to a standard CNN (max-pooling omitted for clarity):

[Figure: a standard MNIST CNN]

an extra "convolutional branch" that is concatenated with the last-but-one layer:

[Figure: the modified CNN with an extra convolutional branch]

This experiment is really easy to do in Lasagne (which is based on Theano). I just added a build_modified_cnn method to the mnist example (the bolded text refers to my "convolutional branch"; the rest is the same as the standard build_cnn method):

def build_modified_cnn(input_var=None):
    l_in = lasagne.layers.InputLayer(shape=(None, 1, 28, 28),
        input_var=input_var)

    l_conv1 =

View original post 310 more words
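
The excerpt cuts off mid-definition, so here is my own hedged sketch of the idea rather than Grzegorz's actual code: a second convolutional branch taken from the input and merged with the last-but-one layer using Lasagne's ConcatLayer. All filter counts and layer sizes below are illustrative assumptions.

import lasagne
from lasagne.layers import (InputLayer, Conv2DLayer, MaxPool2DLayer,
                            DenseLayer, ConcatLayer, dropout)

def build_modified_cnn_sketch(input_var=None):
    l_in = InputLayer(shape=(None, 1, 28, 28), input_var=input_var)

    # main branch: the standard mnist-example CNN
    net = Conv2DLayer(l_in, num_filters=32, filter_size=(5, 5))
    net = MaxPool2DLayer(net, pool_size=(2, 2))
    net = Conv2DLayer(net, num_filters=32, filter_size=(5, 5))
    net = MaxPool2DLayer(net, pool_size=(2, 2))
    l_hidden = DenseLayer(dropout(net, p=0.5), num_units=256)

    # extra "convolutional branch" off the input (assumed shape)
    branch = Conv2DLayer(l_in, num_filters=16, filter_size=(5, 5))
    branch = MaxPool2DLayer(branch, pool_size=(2, 2))
    branch = DenseLayer(branch, num_units=256)

    # concatenate with the last-but-one layer, then classify
    merged = ConcatLayer([l_hidden, branch], axis=1)
    return DenseLayer(dropout(merged, p=0.5), num_units=10,
                      nonlinearity=lasagne.nonlinearities.softmax)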

Convolutional Neural Networks backpropagation: from intuition to derivation

Another post from Grzesiek, this time about Convolutional Neural Networks!

Grzegorz Gwardys

Disclaimer: It is assumed that the reader is familiar with terms such as Multilayer Perceptron, delta errors or backpropagation. If not, it is recommended to read, for example, chapter 2 of the free online book 'Neural Networks and Deep Learning' by Michael Nielsen.

Convolutional Neural Networks (CNNs) are now a standard way of doing image classification – there are publicly accessible deep learning frameworks, trained models and services. It's more time-consuming to install stuff like caffe than to perform state-of-the-art object classification or detection. We also have many ways of acquiring knowledge – there is a large number of deep learning courses/MOOCs, free e-books, and even direct ways of accessing the strongest Deep/Machine Learning minds, such as Yoshua Bengio, Andrew Ng or Yann LeCun, via Quora, Facebook or G+.

Nevertheless, when I wanted to get a deeper insight into CNNs, I could not find a "CNN backpropagation for dummies". Notoriously…

View original post 785 more words

Deep Learning Frameworks Overview

Grzesiek Gwardys's post with a mini overview of Deep Learning libraries (mainly Theano).

Grzegorz Gwardys

I have some experience with caffe; it was my main research tool in the area of Music Information Retrieval. However, Deep Learning is not reducible to Convolutional Neural Networks, and caffe is not suitable for fast prototype implementations. So I was faced with the question: what is the best Deep Learning framework?

Before we google it, let's quora it. We can easily find a related question: "Which is the best deep learning framework: Theano, Torch7 or Caffe?" I recommend reading the whole thread, but here I copy-paste some interesting parts:

If one wants to code up the entire algorithm for a specific problem, Theano is the quickest to get started with. It gives a comprehensive control over Neural Network formation. The reason we use Theano at ParallelDots is that the Neural Networks we make had no standard implementations and hence Theano was the best way to prototype them.

View original post 885 more words