Grzesiek's post on how easy it is to implement a convolutional autoencoder in Lasagne!
Each day, I become a bigger fan of Lasagne. Recently, after seeing some cool stuff with a Variational Autoencoder trained on Blade Runner, I tried to implement a much simpler Convolutional Autoencoder, trained on a much simpler dataset: MNIST. The task turned out to be really easy, thanks to two layers that already exist in Lasagne: Deconv2DLayer and Upscale2DLayer. My convolutional autoencoder consists of two stages:
- Coding, which consists of convolutions and max-poolings
- Decoding, which consists of upscalings and deconvolutions
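As a sanity check on the shapes involved, here is a minimal NumPy sketch of the spatial bookkeeping in the two stages. The helpers `maxpool2` and `upscale2` are hypothetical stand-ins mimicking Lasagne's MaxPool2DLayer and Upscale2DLayer (the convolutions and deconvolutions, which preserve spatial size with 'same' padding, are omitted here):

```python
import numpy as np

def maxpool2(x):
    # 2x2 max-pooling, as in MaxPool2DLayer(pool_size=2): halves each spatial dim
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def upscale2(x):
    # Nearest-neighbour 2x upscaling, as in Upscale2DLayer(scale_factor=2)
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

img = np.random.rand(28, 28)           # one MNIST-sized image
code = maxpool2(maxpool2(img))         # coding: 28x28 -> 14x14 -> 7x7 bottleneck
recon = upscale2(upscale2(code))       # decoding: 7x7 -> 14x14 -> 28x28
print(code.shape, recon.shape)         # (7, 7) (28, 28)
```

Each pooling halves the spatial resolution, and each upscaling doubles it back, so the decoder mirrors the encoder and the output matches the input size.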
[Figure: Outline of the Convolutional Autoencoder]
The key realization that makes this so easy is that deconvolutions are just convolutions! What is more, if you have read my post Convolutional Neural Networks backpropagation: from intuition to derivation, you have already seen this concept in the backpropagation phase!
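To make the "deconvolutions are just convolutions" claim concrete, here is a small NumPy check (a sketch, not the post's code): a stride-1 transposed convolution, computed by scattering each input value times the kernel into the output, gives exactly the same result as an ordinary "full" convolution, i.e. cross-correlating the zero-padded input with the 180-degree-rotated kernel:

```python
import numpy as np

def transposed_conv(y, k):
    # Transposed convolution (stride 1): scatter y[i, j] * kernel into the output
    H, W = y.shape
    K = k.shape[0]
    out = np.zeros((H + K - 1, W + K - 1))
    for i in range(H):
        for j in range(W):
            out[i:i + K, j:j + K] += y[i, j] * k
    return out

def full_conv(y, k):
    # "Full" convolution: zero-pad by K-1 on every side,
    # then cross-correlate with the flipped (rotated) kernel
    K = k.shape[0]
    yp = np.pad(y, K - 1)
    kf = k[::-1, ::-1]
    out = np.zeros((yp.shape[0] - K + 1, yp.shape[1] - K + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(yp[i:i + K, j:j + K] * kf)
    return out

rng = np.random.default_rng(0)
y = rng.standard_normal((7, 7))
k = rng.standard_normal((3, 3))
print(np.allclose(transposed_conv(y, k), full_conv(y, k)))  # True
```

This is the same "full convolution with the flipped kernel" that propagates gradients back through a convolutional layer, which is why the decoder can reuse convolution machinery.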
Quoting myself (I feel really embarrassed now about the didactic tone…):
Yeah, it is a bit different convolution than…