SIREN: Implicit Neural Representations with Periodic Activation Functions (Paper Explained)

Published: 2020-06-21 ● Video Link: https://www.youtube.com/watch?v=Q5g3p9Zwjrk



Duration: 56:05


Implicit neural representations (INRs) are created when a neural network is used to represent a signal as a function. SIRENs are a particular type of INR that can be applied to a variety of signals, such as images, sound, or 3D shapes. This is an interesting departure from regular machine learning and required me to think differently.
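
To make the idea concrete, here is a minimal toy sketch (my own illustration, not the authors' code) of an implicit neural representation: an ordinary MLP in PyTorch is fitted to map (x, y) pixel coordinates to RGB values, so the network weights themselves become the representation of the image.

# Toy implicit neural representation of a single image:
# the MLP maps a 2D coordinate to an RGB color.
import torch
import torch.nn as nn

class ImplicitImage(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),          # RGB value for one coordinate
        )

    def forward(self, coords):             # coords: (N, 2) in [-1, 1]
        return self.net(coords)

def fit(image, steps=2000):                # image: (H, W, 3) tensor in [0, 1]
    H, W, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    targets = image.reshape(-1, 3)
    model = ImplicitImage()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(steps):
        loss = ((model(coords) - targets) ** 2).mean()   # regress onto pixel colors
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

Once trained, the image can be queried at arbitrary, even fractional, coordinates, which is what makes the representation continuous rather than tied to a pixel grid.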

OUTLINE:
0:00 - Intro & Overview
2:15 - Implicit Neural Representations
9:40 - Representing Images
14:30 - SIRENs
18:05 - Initialization
20:15 - Derivatives of SIRENs
23:05 - Poisson Image Reconstruction
28:20 - Poisson Image Editing
31:35 - Shapes with Signed Distance Functions
45:55 - Paper Website
48:55 - Other Applications
50:45 - Hypernetworks over SIRENs
54:30 - Broader Impact

Paper: https://arxiv.org/abs/2006.09661
Website: https://vsitzmann.github.io/siren/

Abstract:
Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives. We analyze Siren activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine Sirens with hypernetworks to learn priors over the space of Siren functions.
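
The abstract mentions sine activations and a principled initialization scheme. Below is a minimal PyTorch sketch of one such sine layer, following the scheme the paper describes: weights are drawn uniformly with a fan-in-dependent bound, and a frequency factor omega_0 (the paper's default is 30) scales the pre-activations. Treat this as an illustration rather than the reference implementation.

# Sketch of a single SIREN layer: y = sin(omega_0 * (Wx + b)),
# with the fan-in-scaled uniform initialization from the paper.
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_features, out_features, is_first=False, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                # First layer: spread the input domain over one period of the sine.
                bound = 1.0 / in_features
            else:
                # Later layers: keep pre-activations roughly unit-variance.
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# A full SIREN stacks sine layers and ends with a plain linear readout:
siren = nn.Sequential(
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256),
    SineLayer(256, 256),
    nn.Linear(256, 1),   # e.g. grayscale intensity or a signed distance value
)

Because the derivative of sin is a shifted sin, the gradient of a SIREN is itself a SIREN-like network; this is what lets the method supervise derivatives directly, as in the Poisson, Eikonal, Helmholtz, and wave equation experiments discussed in the video.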

Authors: Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher




Other Videos By Yannic Kilcher


2020-07-01 GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding (Paper Explained)
2020-06-30 Object-Centric Learning with Slot Attention (Paper Explained)
2020-06-29 Set Distribution Networks: a Generative Model for Sets of Images (Paper Explained)
2020-06-28 Context R-CNN: Long Term Temporal Context for Per-Camera Object Detection (Paper Explained)
2020-06-27 Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures (Paper Explained)
2020-06-26 On the Measure of Intelligence by François Chollet - Part 3: The Math (Paper Explained)
2020-06-25 Discovering Symbolic Models from Deep Learning with Inductive Biases (Paper Explained)
2020-06-24 How I Read a Paper: Facebook's DETR (Video Tutorial)
2020-06-23 RepNet: Counting Out Time - Class Agnostic Video Repetition Counting in the Wild (Paper Explained)
2020-06-22 [Drama] Yann LeCun against Twitter on Dataset Bias
2020-06-21 SIREN: Implicit Neural Representations with Periodic Activation Functions (Paper Explained)
2020-06-20 Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
2020-06-19 On the Measure of Intelligence by François Chollet - Part 2: Human Priors (Paper Explained)
2020-06-18 Image GPT: Generative Pretraining from Pixels (Paper Explained)
2020-06-17 BYOL: Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning (Paper Explained)
2020-06-16 TUNIT: Rethinking the Truly Unsupervised Image-to-Image Translation (Paper Explained)
2020-06-15 A bio-inspired bistable recurrent cell allows for long-lasting memory (Paper Explained)
2020-06-14 SynFlow: Pruning neural networks without any data by iteratively conserving synaptic flow
2020-06-13 Deep Differential System Stability - Learning advanced computations from examples (Paper Explained)
2020-06-12 VirTex: Learning Visual Representations from Textual Annotations (Paper Explained)
2020-06-11 Linformer: Self-Attention with Linear Complexity (Paper Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
implicit
nerf
neural processes
optimization
curve fitting
audio
signal processing
surfaces
point clouds
oriented
signed distance function
mlp
layers
hypernetworks
representation
function
sin
sinus
sinusoid
fourier
initialization
relu
nonlinearity
derivative
gradient
laplacian
wave