Attention in Convolutional Neural Networks
Convolutional Neural Networks (CNNs) have shown remarkable classification performance on many different datasets. However, these networks can take several days to train, for example on ImageNet, and even during inference the per-image processing time of state-of-the-art networks is too long for real-time use, especially in the context of a driving-assistance system.
In this discussion group I would like to explore possible ways to either speed up inference or reduce the computational load of the network by introducing attention mechanisms.
The starting point is the 'communication-through-coherence' hypothesis proposed by Fries [Fries 2005, Fries 2015, Buschman and Kastner 2015].
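As a concrete starting point for the discussion, one simple form of attention is a hard spatial mask: rank spatial locations of a feature map by saliency and keep only the top k, so that downstream layers could skip computation on the suppressed positions. The sketch below is a minimal NumPy illustration of this idea (the saliency measure, the top-k selection, and all names are my own assumptions, not a method from the references above):

```python
import numpy as np

def saliency_map(feature_map):
    # Assumed saliency measure: channel-wise L2 norm at each spatial location
    return np.linalg.norm(feature_map, axis=0)

def attend_top_k(feature_map, k):
    """Keep only the k most salient spatial locations; zero out the rest.
    Downstream layers could then skip computation at the zeroed positions."""
    c, h, w = feature_map.shape
    sal = saliency_map(feature_map).ravel()
    keep = np.argsort(sal)[-k:]            # indices of the k most salient pixels
    mask = np.zeros(h * w, dtype=bool)
    mask[keep] = True
    return feature_map * mask.reshape(1, h, w)

# Example: random 8-channel 16x16 feature map, attend to ~10% of locations
rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 16, 16))
attended = attend_top_k(fmap, k=26)
active = np.count_nonzero(np.linalg.norm(attended, axis=0) > 0)
print(active)  # 26 of the 256 locations remain active
```

With k at 10% of the locations, roughly 90% of the spatial positions carry zeros; how (and whether) such sparsity translates into real inference speed-ups on actual hardware is exactly the kind of question this group could discuss.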
Timetable
| Day | Time | Location |
| --- | --- | --- |
| Tue, 03.05.2016 | 14:00 - 16:00 | Sala Panorama |
Leaders
Moritz Milde
Yulia Sandamirskaya
Members
Adam Arany
Enrico Calabrese
Lukas Cavigelli
Gabriel Andres Fonseca Guerra
Giacomo Indiveri
Alejandro Linares-Barranco
Shih-Chii Liu
Moritz Milde
Manu Nair
Guido Novati
Johannes Partzsch
Melika Payvand
Christian Pehle
Mihai Alexandru Petrovici
Gary Pineda-Garcia
Yulia Sandamirskaya
Jaak Simm
Saray Soldado Magraner
Alan Stokes
Evangelos Stromatias
Dora Sumislawska
André van Schaik
Nikolaos Vasileiadis
Bernhard Vogginger