Keynote: ReduNet: Deep (convolutional) networks from the principle of rate reduction

Video link: https://www.youtube.com/watch?v=1LjkGb9zczY



Duration: 27:49


Speaker: Yi Ma, Professor, University of California, Berkeley

In this talk, we will offer an entirely white-box interpretation of deep (convolutional) networks from the perspective of data compression and group invariance. We’ll show how modern deep-layered architectures, their linear (convolutional) operators and nonlinear activations, and even all of their parameters can be derived from the principle of maximizing rate reduction with group invariance. We’ll cover how all layers, operators, and parameters of the network are explicitly constructed through forward propagation rather than learned through back propagation. We’ll also explain how all components of the resulting network, called ReduNet, have precise optimization, geometric, and statistical interpretations. You’ll learn how this principled approach reveals a fundamental tradeoff between invariance and sparsity for class separability; how it reveals a fundamental connection between deep networks and the Fourier transform for group invariance, namely the computational advantage of working in the spectral domain; and how it clarifies the mathematical role of forward and backward propagation. Finally, you’ll discover how the resulting ReduNet is amenable to fine-tuning through both forward and backward propagation to optimize the same objective.
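To make the objective behind the talk concrete, here is a minimal NumPy sketch (not taken from the talk itself) of the coding-rate-reduction objective and of a single forward-constructed, ReduNet-style layer built from its gradient. The function names, the distortion parameter eps, the step size eta, and the use of ground-truth labels are illustrative simplifications; see the related lectures below for the full construction.

import numpy as np

def coding_rate(Z, eps=0.5):
    # R(Z): rate needed to encode the columns of Z (d x m features) up to distortion eps.
    d, m = Z.shape
    alpha = d / (m * eps ** 2)
    return 0.5 * np.linalg.slogdet(np.eye(d) + alpha * (Z @ Z.T))[1]

def rate_reduction(Z, labels, eps=0.5):
    # Delta R: rate of the whole feature set minus the weighted rates of the classes,
    # i.e. expand all features jointly while compressing features of the same class.
    m = Z.shape[1]
    whole = coding_rate(Z, eps)
    per_class = sum(
        (np.sum(labels == j) / m) * coding_rate(Z[:, labels == j], eps)
        for j in np.unique(labels)
    )
    return whole - per_class

def redunet_layer(Z, labels, eps=0.5, eta=0.5):
    # One forward-constructed layer: a gradient-ascent step on Delta R followed by
    # re-normalization of each feature to the unit sphere. Simplification: uses
    # ground-truth labels rather than the soft class assignment used at inference.
    d, m = Z.shape
    alpha = d / (m * eps ** 2)
    E = alpha * np.linalg.inv(np.eye(d) + alpha * (Z @ Z.T))   # expansion operator
    grad = E @ Z
    for j in np.unique(labels):
        mask = labels == j
        mj = int(mask.sum())
        alpha_j = d / (mj * eps ** 2)
        Cj = alpha_j * np.linalg.inv(np.eye(d) + alpha_j * (Z[:, mask] @ Z[:, mask].T))  # compression operator for class j
        grad[:, mask] -= (mj / m) * (Cj @ Z[:, mask])
    Z_next = Z + eta * grad
    return Z_next / np.linalg.norm(Z_next, axis=0, keepdims=True)

# Toy usage: twenty 3-D features in two classes; Delta R should typically grow after one layer.
rng = np.random.default_rng(0)
Z = rng.standard_normal((3, 20))
Z /= np.linalg.norm(Z, axis=0, keepdims=True)
labels = np.repeat([0, 1], 10)
print(rate_reduction(Z, labels), rate_reduction(redunet_layer(Z, labels), labels))

Stacking many such layers, each with its expansion and compression operators computed from the features entering that layer, is what the abstract means by constructing the network through forward propagation rather than learning it through back propagation; for shift-invariant data these operators become (multi-channel) circulant, which is where the convolutional structure and the spectral-domain advantage come from.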

Related resources:
Deep (Convolution) Networks from First Principles
■ English: Harvard University, Center of Mathematical Sciences and Applications, Math-Science Literature Lecture Series.
YouTube: https://www.youtube.com/watch?v=z2bQXO2mYPo
■ Chinese: Tsinghua University, Institute for AI Industry Research.
Tencent Video: https://v.qq.com/x/page/k3248slfvw8.html

Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit







Tags:
human-like visual learning
visual learning
visual reasoning
big data deep learning
visual tasks
real-world tasks
microsoft research summit