Reduced bit-precision in Convolutional Neural Networks
Deep neural networks (DNNs) and convolutional neural networks (CNNs) perform well on many classification and recognition tasks in machine learning, such as handwritten digit recognition (on the MNIST dataset) or image classification (on the ImageNet dataset). The weight between two consecutive neurons in a deep network is typically stored at high resolution, i.e. 32 bits per weight. However, storage capacity and memory access are two limiting factors for implementing deep networks on small devices: storage is scarce and each memory access consumes power. To overcome this bottleneck, research groups worldwide have tried to lower the bit-resolution of the synaptic weights while maintaining high classification performance.
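The bit-reduction idea can be sketched as uniform quantization of a weight tensor. This is a minimal illustration, not the method of any particular paper; the function name and parameters are my own:

```python
import numpy as np

def quantize_weights(w, bits=8):
    """Uniformly quantize float32 weights to 2**bits levels.

    Illustrative sketch: snap each weight to the nearest of
    2**bits evenly spaced values over the observed range,
    then map back to floats for use at inference time.
    """
    w = np.asarray(w, dtype=np.float32)
    levels = 2 ** bits - 1                      # number of quantization steps
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / levels            # step size between levels
    codes = np.round((w - w_min) / scale)       # integer codes in [0, levels]
    return codes * scale + w_min                # dequantized approximation

weights = np.random.randn(3, 3).astype(np.float32)
approx = quantize_weights(weights, bits=4)
# With 4 bits only 16 distinct values remain, an 8x storage
# reduction versus 32-bit floats (ignoring the two range scalars).
```

The rounding error per weight is bounded by half a quantization step, which is why moderate bit widths often cost little accuracy.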
I would like to first discuss state-of-the-art methods for compressing the weights of a CNN, and secondly the next steps: either compressing the weights further, or counteracting the loss in classification performance that compression incurs.
Day | Time | Location |
---|---|---|
Fri, 29.04.2016 | 19:00 - 19:30 | Bar |