Reduced bit-precision in Convolutional Neural Networks

Deep neural networks (DNNs) and convolutional neural networks (CNNs) offer great opportunities for many classification and recognition tasks in machine learning, such as handwritten digit recognition (on the MNIST dataset) or image classification (on the ImageNet dataset). The weight between two consecutive neurons within a deep network is typically stored at high resolution, i.e. 32 bits per weight. However, storage capacity and memory access are two limiting factors when it comes to implementing deep networks on small devices, since storage is scarce and each memory access consumes power. To overcome this bottleneck, research groups worldwide have tried to lower the bit-resolution of the synaptic weights while maintaining high classification performance.
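As a rough illustration of the idea (not the specific method of any of the works discussed in the talk), lowering the bit-resolution of the weights can be sketched as uniform symmetric quantization: each 32-bit float weight is mapped to a small signed integer plus a single shared scale factor.

```python
import numpy as np

def quantize_weights(w, n_bits):
    """Uniformly quantize float32 weights to n_bits signed integers
    (symmetric, one scale per tensor) -- a minimal sketch, not a
    faithful reproduction of any particular published scheme."""
    qmax = 2 ** (n_bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax             # map largest |w| onto qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    q = q.astype(np.int8 if n_bits <= 8 else np.int32)
    return q, scale                              # dequantize via q * scale

# Example: compress 32-bit weights to 8 bits (4x smaller storage)
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_weights(w, n_bits=8)
w_hat = q * scale                                # reconstructed weights
```

The reconstruction error per weight is bounded by half the quantization step (scale / 2), which is the price paid for the 4x reduction in storage and memory traffic.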

I would like to first discuss state-of-the-art methods for compressing the weights of a CNN and, secondly, the next steps: either how to compress the weights further, or how to counteract the loss in classification performance when weights are compressed.



Day Time Location
Fri, 29.04.2016 19:00 - 19:30 Bar


Moritz Milde


Alessandro Aimar
Adam Arany
Christopher Bennett
Lukas Cavigelli
Tobi Delbruck
Gabriel Andres Fonseca Guerra
Giacomo Indiveri
Shih-Chii Liu
Manu Nair
Guido Novati
Johannes Partzsch
Melika Payvand
Mihai Alexandru Petrovici
Jaak Simm
Evangelos Stromatias
André van Schaik
Nikolaos Vasileiadis
Borys Wrobel
Qi Xu