Author Interview - Transformer Memory as a Differentiable Search Index

Subscribers: 284,000
Published on: 2022-04-17 ● Video Link: https://www.youtube.com/watch?v=C7mUYocWdG0
Duration: 43:04
Views: 6,860
Likes: 201

#neuralsearch #interview #google

This is an interview with two of the paper's authors, Yi Tay and Don Metzler.
Paper Review Video: https://youtu.be/qlB0TPBQ7YY

Search engines work by building an index and then looking things up in it. Usually, that index is a separate data structure: keyword search builds and stores inverted indices, while neural search builds nearest-neighbor indices. This paper does something different: it trains a Transformer to directly return the ID of the most relevant document. No similarity search over embeddings is performed, and no external data structure is needed, as the entire index is essentially captured by the model's weights. The paper experiments with various ways of representing documents and training the system, and the approach works surprisingly well!
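To make the mechanism concrete, here is a minimal sketch of the idea (my own illustration, not the authors' code): fine-tune an off-the-shelf seq2seq Transformer on "document text → docid" pairs for indexing and "query → docid" pairs for retrieval. The t5-small checkpoint, the task prefixes, and the two-example toy corpus are all assumptions.

```python
# Minimal DSI-style sketch (illustrative, not the paper's implementation).
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Two kinds of training examples, following the paper's setup:
#   indexing:  document text -> its docid (the model memorizes the corpus)
#   retrieval: query         -> docid of the relevant document
examples = [
    ("indexing: transformers process tokens with self-attention", "doc_17"),
    ("retrieval: how does self-attention work?", "doc_17"),
]

opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for text, docid in examples:
    batch = tok(text, return_tensors="pt")
    labels = tok(docid, return_tensors="pt").input_ids
    loss = model(**batch, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()
    opt.step()
    opt.zero_grad()

# Retrieval needs no external index: the model itself decodes a docid string.
model.eval()
query = tok("retrieval: how does self-attention work?", return_tensors="pt")
print(tok.decode(model.generate(**query)[0], skip_special_tokens=True))
```

In the paper this is done at much larger scale, and how the docids themselves are represented (atomic tokens vs. structured strings) turns out to matter a lot, which the interview digs into.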

OUTLINE:
0:00 - Intro
0:50 - Start of Interview
1:30 - How did this idea start?
4:30 - How does memorization play into this?
5:50 - Why did you not compare to cross-encoders?
7:50 - Instead of the ID, could one reproduce the document itself?
10:50 - Passages vs documents
12:00 - Where can this model be applied?
14:25 - Can we make this work on large collections?
19:20 - What's up with the NQ100K dataset?
23:55 - What is going on inside these models?
28:30 - What's the smallest scale to obtain meaningful results?
30:15 - Investigating the document identifiers
34:45 - What's the end goal?
38:40 - What are the hardest problems currently?
40:40 - Final comments & how to get started

Paper: https://arxiv.org/abs/2202.06991

Abstract:
In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup.
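Since a DSI model answers queries by generating an identifier string, decoding is typically constrained so that only real docids can come out. Below is a hedged sketch of one way to do that, using Hugging Face's prefix_allowed_tokens_fn with a trie over valid identifiers; the identifier set is made up, and this is not necessarily the exact decoding setup used in the paper.

```python
# Constrained decoding sketch: restrict generation to a trie of valid docids.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

valid_docids = ["doc_17", "doc_42", "doc_99"]  # toy identifier set (assumed)

# Build a trie over the token sequences of the valid docids. Decoding starts
# from the decoder start token, so the trie is rooted there.
trie = {}
for docid in valid_docids:
    node = trie
    for t in [model.config.decoder_start_token_id] + tok(docid).input_ids:
        node = node.setdefault(t, {})

def allowed_tokens(batch_id, prefix):
    # Walk the trie along the tokens decoded so far; only children are legal.
    node = trie
    for t in prefix.tolist():
        node = node.get(t, {})
    return list(node.keys()) or [tok.eos_token_id]

out = model.generate(
    **tok("retrieval: how does self-attention work?", return_tensors="pt"),
    num_beams=3,
    prefix_allowed_tokens_fn=allowed_tokens,
)
print(tok.decode(out[0], skip_special_tokens=True))
```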

Authors: Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, Donald Metzler

Links:
Merch: http://store.ykilcher.com
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ykilcher.com/discord
BitChute: https://www.bitchute.com/channel/yannic-kilcher
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher

2022-06-01 Did I crash the NFT market?
2022-05-13 [ML News] DeepMind's Flamingo Image-Text model | Locked-Image Tuning | Jurassic X & MRKL
2022-05-10 [ML News] Meta's OPT 175B language model | DALL-E Mega is training | TorToiSe TTS fakes my voice
2022-05-05 This A.I. creates infinite NFTs
2022-05-02 Author Interview: SayCan - Do As I Can, Not As I Say: Grounding Language in Robotic Affordances
2022-04-30 Do As I Can, Not As I Say: Grounding Language in Robotic Affordances (SayCan - Paper Explained)
2022-04-26 Author Interview - ACCEL: Evolving Curricula with Regret-Based Environment Design
2022-04-25 ACCEL: Evolving Curricula with Regret-Based Environment Design (Paper Review)
2022-04-22 LAION-5B: 5 billion image-text-pairs dataset (with the authors)
2022-04-21 Sparse Expert Models (Switch Transformers, GLAM, and more... w/ the Authors)
2022-04-17 Author Interview - Transformer Memory as a Differentiable Search Index
2022-04-16 Transformer Memory as a Differentiable Search Index (Machine Learning Research Paper Explained)
2022-04-10 [ML News] Google's 540B PaLM Language Model & OpenAI's DALL-E 2 Text-to-Image Revolution
2022-04-06 DALL-E 2 by OpenAI is out! Live Reaction
2022-04-04 The Weird and Wonderful World of AI Art (w/ Author Jack Morris)
2022-04-02 Author Interview - Improving Intrinsic Exploration with Language Abstractions
2022-04-01 Improving Intrinsic Exploration with Language Abstractions (Machine Learning Paper Explained)
2022-03-30 [ML News] GPT-3 learns to edit | Google Pathways | Make-A-Scene | CLIP meets GamePhysics | DouBlind
2022-03-29 Author Interview - Memory-assisted prompt editing to improve GPT-3 after deployment
2022-03-28 Memory-assisted prompt editing to improve GPT-3 after deployment (Machine Learning Paper Explained)
2022-03-26 Author Interview - Typical Decoding for Natural Language Generation