Recurrent Models of Visual Attention | TDLS
Toronto Deep Learning Series, 4 September 2018
Paper Review: https://papers.nips.cc/paper/5542-recurrent-models-of-visual-attention.pdf
Speaker: https://www.linkedin.com/in/shazia-akbar-52164923/
Organizer: https://www.linkedin.com/in/amirfz/
Host: Loblaw Digital
Paper abstract:
Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.
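The mechanism the abstract describes is a loop: at each step the network crops a small high-resolution patch at its current location, updates a recurrent state, and samples the next location from a Gaussian policy. Because sampling is the non-differentiable step, the location policy is trained with REINFORCE while the classifier is trained with ordinary cross-entropy. Below is a minimal PyTorch sketch of that loop, not the authors' code: the single-scale extract_glimpse, the GRU core, and the constant batch-mean baseline are simplifying stand-ins for the paper's multi-resolution retina, its RNN/LSTM core, and its learned baseline, and every class and function name here is invented for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

HIDDEN = 128  # size of the recurrent state (an assumption; the paper varies this per task)

def extract_glimpse(images, loc, size=8):
    """Crop one size x size patch per image, centred at loc in [-1, 1] coordinates.
    Single-scale only; the paper's retina extracts several resolutions at once."""
    B, H, W = images.shape
    patches = torch.zeros(B, size, size)
    for b in range(B):
        cy = int((loc[b, 0].item() + 1) / 2 * (H - size))
        cx = int((loc[b, 1].item() + 1) / 2 * (W - size))
        patches[b] = images[b, cy:cy + size, cx:cx + size]
    return patches

class GlimpseNetwork(nn.Module):
    """Fuses the glimpse pixels with the location they were taken from."""
    def __init__(self, size=8):
        super().__init__()
        self.fc_img = nn.Linear(size * size, HIDDEN)
        self.fc_loc = nn.Linear(2, HIDDEN)

    def forward(self, patch, loc):
        return F.relu(self.fc_img(patch.flatten(1)) + self.fc_loc(loc))

class RecurrentAttentionModel(nn.Module):
    def __init__(self, n_classes=10, n_glimpses=6, std=0.2):
        super().__init__()
        self.glimpse_net = GlimpseNetwork()
        self.core = nn.GRUCell(HIDDEN, HIDDEN)  # GRU stands in for the paper's RNN core
        self.loc_net = nn.Linear(HIDDEN, 2)     # mean of the Gaussian location policy
        self.classifier = nn.Linear(HIDDEN, n_classes)
        self.n_glimpses, self.std = n_glimpses, std

    def forward(self, images):
        B = images.size(0)
        h = torch.zeros(B, HIDDEN)
        loc = torch.zeros(B, 2)  # take the first glimpse at the image centre
        log_probs = []
        for _ in range(self.n_glimpses):
            g = self.glimpse_net(extract_glimpse(images, loc), loc)
            h = self.core(g, h)
            mean = torch.tanh(self.loc_net(h))
            dist = torch.distributions.Normal(mean, self.std)
            loc = dist.sample().clamp(-1, 1)    # sampling: the non-differentiable step
            log_probs.append(dist.log_prob(loc).sum(-1))
        # Classify from the final state; return per-step log-probs for REINFORCE.
        return self.classifier(h), torch.stack(log_probs, 1)

def train_step(model, optimizer, images, labels):
    """Hybrid loss: cross-entropy for the classifier, REINFORCE for the location policy."""
    logits, log_probs = model(images)
    reward = (logits.argmax(-1) == labels).float().unsqueeze(1)  # 1 if classified correctly
    baseline = reward.mean()  # batch-mean baseline; the paper learns a baseline instead
    loss = F.cross_entropy(logits, labels) - (log_probs * (reward - baseline)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

For a task like the paper's cluttered translated MNIST, images would be a (B, 60, 60) grayscale batch and train_step would run once per minibatch. Six 8x8 glimpses touch only a few hundred of the 3,600 pixels, which is the property the abstract emphasizes: the amount of computation is set by the number and size of glimpses, not by the input image size.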