How to Compress Garbled Circuit Input Labels, Efficiently

Video Link: https://www.youtube.com/watch?v=csQKXSAhHr8





Speaker: Hanjun Li, University of Washington

Garbled circuits are a foundational primitive in both the theory and practice of cryptography. Given (\hat{C}, K[x]), where \hat{C} is the garbling of a circuit C and K[x] = {K[i, x_i]} are the input labels for an input x, anyone can recover C(x), but nothing else about x. Most research efforts have focused on minimizing the size of the garbling \hat{C}. In contrast, the work of Applebaum, Ishai, Kushilevitz, and Waters (CRYPTO '13) initiated the study of minimizing the cost of transferring the input labels K[x]. Improved in a follow-up by Applebaum et al. (STOC '23), the state-of-the-art techniques compress the input labels to the optimal rate of 1 + o(1): each input label can be transferred by sending essentially one bit. However, existing solutions are computationally expensive, requiring a large number of public-key operations (such as RSA exponentiations).

In this talk, we present an efficient input-label compression technique based on Ring-LWE. It achieves the same optimal rate of 1 + o(1) by using additional communication in an offline stage (before the input x becomes known). The offline communication is either reusable or compressible using a random oracle, leading to a small amortized offline cost of o(|x|). We further demonstrate concrete efficiency through an implementation whose online latency outperforms the naive baseline (which sends all of K[x] in the online phase) on a realistic network with bandwidth of up to 45 Mbps. This break-even point can be pushed further by exploiting the large potential for parallelizing the computation.

Finally, we present an application of our techniques: a maliciously secure two-party computation protocol with succinct online communication. The online phase starts once the circuit C becomes known, and the parties exchange poly(\lambda) bits (independent of |C|). Then, when the inputs x_A, x_B become known, the parties exchange an additional |x_A| + |x_B| + poly(\lambda) bits.
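To make the communication accounting above concrete, the sketch below illustrates the naive baseline interface (the garbler samples a pair of labels per input wire and sends the selected label K[i, x_i] for each input bit) and compares its online cost of \lambda bits per input against the 1 + o(1) bits per input achieved by the compression technique described in the talk. This is a minimal illustrative sketch: the function names, the choice of \lambda = 128, and the lower-order overhead term are assumptions for illustration, not the talk's implementation.

```python
import secrets

LABEL_BITS = 128  # security parameter lambda (assumed value for illustration)


def sample_input_labels(n):
    """For each input wire i, sample a pair of labels (K[i,0], K[i,1])."""
    return [(secrets.token_bytes(LABEL_BITS // 8),
             secrets.token_bytes(LABEL_BITS // 8)) for _ in range(n)]


def select_labels(labels, x):
    """Naive baseline online message: K[x] = {K[i, x_i]}, one full label per input bit."""
    return [labels[i][bit] for i, bit in enumerate(x)]


def online_cost_bits(n, poly_lambda_overhead=0):
    """Compare online communication: naive sends n * lambda bits;
    compressed labels cost ~n * (1 + o(1)) bits plus a lower-order term."""
    naive = n * LABEL_BITS
    compressed = n + poly_lambda_overhead
    return naive, compressed


if __name__ == "__main__":
    # Small demo of label selection for an 8-bit input.
    labels = sample_input_labels(8)
    print(select_labels(labels, [0, 1, 1, 0, 1, 0, 0, 1]))

    # Online communication for one million input bits.
    naive, compressed = online_cost_bits(1_000_000)
    print(f"naive online: {naive} bits, compressed online: ~{compressed} bits")
```

At the assumed \lambda = 128, the baseline sends 128 bits per input label, so compressing to roughly one bit per input cuts online communication by about two orders of magnitude; the trade-off is the offline communication and the Ring-LWE computation discussed in the talk.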




Other Videos By Microsoft Research


6 days ago  Microsoft as Customer Zero: Empowering Research Teams with AI
6 days ago  A Fever Dream of Machine Learning Framework Composability
2025-04-29  AI for Africa’s Future: Innovation, Equity, and Impact
2025-04-22  Hamming Quasi-Cyclic
2025-04-22  Towards Safer Augmented Reality: Identifying, Evaluating, and Mitigating Security & Privacy Threats
2025-04-22  Shining light on the learning brain: Estimating mental workload in a simulated flight task using opt
2025-03-24  How to Compress Garbled Circuit Input Labels, Efficiently
2025-03-24  Differentially Private Synthetic Data without Training
2025-03-21  Celebrating Susan Dumais: Reflections on a Legacy of Research and Collaboration | Plenary Session
2025-03-21  The Assistant: Situated Interaction Project (2012)
2025-03-20  The AI Revolution in Medicine, Revisited: An Introduction
2025-03-10  AI and Europe's history of reinvention
2025-03-03  World and Human Action Models towards gameplay ideation (Supplementary Video 1)
2025-03-03  LLMs vs. Torch 1.5: Why Your Code Assistant Can't Keep Up
2025-02-25  Using LLMs for safe low-level programming | Microsoft Research Forum
2025-02-25  AutoGen v0.4: Reimagining the foundation of agentic AI for scale and more | Microsoft Research Forum
2025-02-25  Belief state transformers | Microsoft Research Forum
2025-02-25  Magma: A foundation model for multimodal AI Agents | Microsoft Research Forum
2025-02-25  Chimera: Accurate synthesis prediction by ensembling models with... | Microsoft Research Forum
2025-02-25  AI for Precision Health: Learning the language of nature and patients | Microsoft Research Forum
2025-02-25  Keynote: Multimodal Generative AI for Precision Health | Microsoft Research Forum