A bio-inspired bistable recurrent cell allows for long-lasting memory (Paper Explained)

Subscribers: 291,000
Video Link: https://www.youtube.com/watch?v=DLq1DUcMh1Q
Duration: 49:13
Views: 7,939
Likes: 416


Even though LSTMs and GRUs solve the vanishing and exploding gradient problems, they have trouble learning to remember things over very long time spans. Inspired by bistability, a property of biological neurons, this paper constructs a recurrent cell with an inherent memory property, requiring only a minimal modification to existing architectures.
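
To build intuition for the core idea, here is a tiny NumPy sketch (my own illustration, not taken from the paper or the video) of bistability in a single self-recurrent unit: with a recurrent gain above 1, the tanh recurrence has two stable fixed points and can latch a sign indefinitely, whereas with a gain below 1 any stored value decays back to zero.

import numpy as np

# h_{t+1} = tanh(a * h_t): for a < 1 the only stable fixed point is 0,
# so whatever the unit stored fades away; for a > 1 two symmetric stable
# fixed points appear (about +/-0.86 at a = 1.5), so the unit can remember
# which sign it was pushed towards for arbitrarily many steps.
def relax(a, h0, steps=200):
    h = h0
    for _ in range(steps):
        h = np.tanh(a * h)
    return h

for a in (0.8, 1.5):
    print(f"a={a}: from +0.1 -> {relax(a, +0.1):+.3f}, "
          f"from -0.1 -> {relax(a, -0.1):+.3f}")
# a=0.8: both runs end near 0.000 (no memory)
# a=1.5: roughly +0.858 and -0.858 (two stable states = one stored bit)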

OUTLINE:
0:00 - Intro & Overview
1:10 - Recurrent Neural Networks
6:00 - Gated Recurrent Unit
14:40 - Neuronal Bistability
22:50 - Bistable Recurrent Cell
31:00 - Neuromodulation
32:50 - Copy First Benchmark
37:35 - Denoising Benchmark
48:00 - Conclusion & Comments

Paper: https://arxiv.org/abs/2006.05252
Code: https://github.com/nvecoven/BRC

Abstract:
Recurrent neural networks (RNNs) provide state-of-the-art performance in a wide variety of tasks that require memory. This performance is often achieved thanks to gated recurrent cells such as the gated recurrent unit (GRU) and long short-term memory (LSTM). Standard gated cells share a layer-internal state to store information at the network level, and long-term memory is shaped by network-wide recurrent connection weights. Biological neurons, on the other hand, are capable of holding information at the cellular level for an arbitrarily long amount of time through a process called bistability. Through bistability, cells can stabilize to different stable states depending on their own past state and inputs, which permits the durable storage of past information in the neuron state. In this work, we take inspiration from biological neuron bistability to embed RNNs with long-lasting memory at the cellular level. This leads to the introduction of a new bistable, biologically inspired recurrent cell that is shown to strongly improve RNN performance on time series that require very long memory, despite using only cellular connections (all recurrent connections are from neurons to themselves, i.e. a neuron's state is not influenced by the state of other neurons). Furthermore, equipping this cell with recurrent neuromodulation permits linking it to standard GRU cells, taking a step towards the biological plausibility of GRUs.

Authors: Nicolas Vecoven, Damien Ernst, Guillaume Drion
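
For readers who want to see the mechanism concretely, below is a minimal NumPy sketch of the bistable recurrent cell as I understand it from the paper: a gate a_t in (0, 2) that makes a neuron bistable when it exceeds 1, a standard update gate c_t, and purely per-neuron (diagonal) recurrent weights so that each neuron's state depends only on its own past, as stated in the abstract. This is a paraphrase for illustration; the official reference implementation is in the repository linked above. The neuromodulated variant discussed around 31:00 in the video, as I understand it, replaces the per-neuron recurrent weights on the gates with full matrices, which makes the cell look structurally like a GRU.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BRCCell:
    """Sketch of a bistable recurrent cell (BRC), paraphrased from the paper.

    a_t = 1 + tanh(Ua x_t + wa * h_{t-1})   # in (0, 2); a_t > 1 => bistable
    c_t = sigmoid(Uc x_t + wc * h_{t-1})    # update gate
    h_t = c_t * h_{t-1} + (1 - c_t) * tanh(U x_t + a_t * h_{t-1})

    wa and wc are per-neuron scalars (diagonal recurrence), so a neuron's
    state is never influenced by the state of other neurons.
    """

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(input_size)
        self.Ua = rng.uniform(-s, s, (hidden_size, input_size))
        self.Uc = rng.uniform(-s, s, (hidden_size, input_size))
        self.U = rng.uniform(-s, s, (hidden_size, input_size))
        self.wa = rng.uniform(-s, s, hidden_size)
        self.wc = rng.uniform(-s, s, hidden_size)

    def step(self, x, h):
        a = 1.0 + np.tanh(self.Ua @ x + self.wa * h)
        c = sigmoid(self.Uc @ x + self.wc * h)
        return c * h + (1.0 - c) * np.tanh(self.U @ x + a * h)

# Usage: unroll the cell over a random input sequence.
cell = BRCCell(input_size=4, hidden_size=8)
h = np.zeros(8)
for x in np.random.default_rng(1).normal(size=(20, 4)):
    h = cell.step(x, h)
print(h.shape)  # (8,)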

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher




Other Videos By Yannic Kilcher


2020-06-25 Discovering Symbolic Models from Deep Learning with Inductive Biases (Paper Explained)
2020-06-24 How I Read a Paper: Facebook's DETR (Video Tutorial)
2020-06-23 RepNet: Counting Out Time - Class Agnostic Video Repetition Counting in the Wild (Paper Explained)
2020-06-22 [Drama] Yann LeCun against Twitter on Dataset Bias
2020-06-21 SIREN: Implicit Neural Representations with Periodic Activation Functions (Paper Explained)
2020-06-20 Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
2020-06-19 On the Measure of Intelligence by François Chollet - Part 2: Human Priors (Paper Explained)
2020-06-18 Image GPT: Generative Pretraining from Pixels (Paper Explained)
2020-06-17 BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)
2020-06-16 TUNIT: Rethinking the Truly Unsupervised Image-to-Image Translation (Paper Explained)
2020-06-15 A bio-inspired bistable recurrent cell allows for long-lasting memory (Paper Explained)
2020-06-14 SynFlow: Pruning neural networks without any data by iteratively conserving synaptic flow
2020-06-13 Deep Differential System Stability - Learning advanced computations from examples (Paper Explained)
2020-06-12 VirTex: Learning Visual Representations from Textual Annotations (Paper Explained)
2020-06-11 Linformer: Self-Attention with Linear Complexity (Paper Explained)
2020-06-10 End-to-End Adversarial Text-to-Speech (Paper Explained)
2020-06-09 TransCoder: Unsupervised Translation of Programming Languages (Paper Explained)
2020-06-08 JOIN ME for the NeurIPS 2020 Flatland Multi-Agent RL Challenge!
2020-06-07 BLEURT: Learning Robust Metrics for Text Generation (Paper Explained)
2020-06-06 Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search (Paper Explained)
2020-06-05 CornerNet: Detecting Objects as Paired Keypoints (Paper Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
gru
lstm
schmidhuber
bistable
bistability
neurons
biological
spiking
tanh
stable
attractor
fixed points
memory
memorize
sparse
long sequence
history
storage
remember
rnn
recurrent neural network
gated recurrent unit
forget
backpropagation
biologically inspired