What Could Be the Data-Structures of the Mind?

Video Link: https://www.youtube.com/watch?v=M_TnYgGiPqI
Duration: 34:00
A Google TechTalk, presented by Rina Panigrahy, 2021/12/01
ABSTRACT: What is a reasonable architecture for an algorithmic view of the mind? Is it akin to a single giant deep network, or is it more like several small modules connected by some graph? How is memory captured -- is it some lookup table? Take a simple event like meeting someone over coffee: how would your mind remember who the person was and what was discussed? Such information needs to be organized and indexed so that it can be quickly accessed in the future if, say, I met the same person again.

We propose that information related to such events and inputs is stored as a sketch -- a compact representation from which the original input can be approximately reconstructed. The sketching mechanism is based on random subspace embedding and recovers the original input and its basic statistics up to some level of accuracy. It implicitly gives rise to high-level object-oriented abstractions -- classes, attributes, references, type information, and modules -- organized into a knowledge graph, without such ideas being explicitly built into the mechanism's operations. We will see how ideas based on sketching lead to an initial version of a very simplified architecture for an algorithmic view of the mind. We will also see how a simplified implementation of Neural Memory, which stores sketches using locality-sensitive hashing, can almost double the capacity of BERT with a small amount of Neural Memory while adding less than 1% FLOPs.
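
To make the sketching idea concrete, here is a minimal NumPy illustration; the dimensions, sparsity, and decoding rule are assumptions for the demo, not the mechanism from the talk. A sparse input is embedded into a random subspace, and both its norm and its dominant coordinates can be approximately recovered from the sketch alone.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, s = 1024, 512, 8                 # input dim, sketch dim, sparsity (assumed)

    R = rng.normal(0.0, 1.0 / np.sqrt(k), (k, d))   # random subspace embedding

    x = np.zeros(d)                                 # input with a few significant
    support = rng.choice(d, size=s, replace=False)  # "attribute" coordinates
    x[support] = rng.normal(3.0, 0.5, size=s)

    sketch = R @ x                                  # compact representation, k < d

    # Basic statistics survive: the sketch approximately preserves the norm.
    print(np.linalg.norm(x)**2, np.linalg.norm(sketch)**2)

    # Approximate reconstruction: E[R.T @ R] = I, so R.T @ sketch ~ x, and the
    # largest entries of the decoding recover most of the original support.
    x_hat = R.T @ sketch
    recovered = set(np.argsort(np.abs(x_hat))[-s:].tolist())
    print(len(recovered & set(support.tolist())), "of", s, "attributes recovered")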
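
A toy version of a locality-sensitive-hash memory over such sketches can be sketched the same way (SimHash with random hyperplanes; all names and parameters here are hypothetical, and the actual BERT integration from the talk is far more involved):

    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(1)
    k, n_bits = 512, 16                    # sketch dim and hash bits (assumed)
    H = rng.normal(size=(n_bits, k))       # random hyperplanes for SimHash

    def lsh_key(v):
        # The sign pattern of v against the hyperplanes is the bucket id;
        # nearby vectors tend to land in the same bucket.
        return tuple((H @ v > 0).astype(int).tolist())

    memory = defaultdict(list)             # bucket id -> list of (sketch, payload)

    def write(sketch, payload):
        memory[lsh_key(sketch)].append((sketch, payload))

    def read(query):
        # Cheap lookup: scan only the query's bucket for the best match.
        bucket = memory.get(lsh_key(query), [])
        best = max(bucket, key=lambda item: query @ item[0], default=(None, None))
        return best[1]

    event = rng.normal(size=k)               # e.g., the sketch of the coffee meeting
    write(event, "met this person over coffee, discussed sketches")
    cue = event + 0.01 * rng.normal(size=k)  # a later, slightly noisy cue
    print(read(cue))                         # likely retrieves the stored event

Practical LSH schemes keep several independent hash tables, so a cue that flips a bit in one table is still caught in another; the single table above is the simplest possible illustration.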

References:
1. Related to the panel discussion "Is there a mathematical model of the Mind?"
https://www.youtube.com/watch?v=g5DGBWjiULQ&t=6496s
2. Recursive Sketches for Modular Deep Learning
https://arxiv.org/abs/1905.12730
3. Sketch based Memory for Neural Networks
http://proceedings.mlr.press/v130/panigrahy21a/panigrahy21a.pdf
4. How does the Mind store Information?
https://arxiv.org/abs/1910.06718
5. Provable Hierarchical Lifelong Learning with a Sketch-based Modular Architecture
https://arxiv.org/abs/2112.10919

About the Speaker: Rina Panigrahy, Research Scientist.
https://theory.stanford.edu/~rinap



