Demo: Generating formally proven low-level parsers with EverParse

Channel subscribers: 344,000
Video link: https://www.youtube.com/watch?v=Xma9_6J9a7o
Duration: 18:27
Views: 84

Speaker: Aseem Rastogi, Principal Researcher, Microsoft Research India

DARPA and MITRE estimate that 80 percent of software security vulnerabilities have incorrect input validation as their root cause. In such scenarios, attackers provide malformed input that, when not properly rejected, causes misbehaviors such as buffer overruns or integer overflows, ultimately handing the attacker full control of the system. Thus, hardening critical software systems by systematically replacing their input validation code with formally proven message parsers can make a radical difference. This talk is the third of three research talks presenting ongoing and future research and engineering efforts to this end, demonstrating how projects such as Microsoft Research EverParse and DARPA SafeDocs harden input validation for various applications, ranging from network communication protocols to document formats. See the talks by Sergey Bratus (DARPA) and Tahina Ramananandro (Microsoft Research Redmond) for more information.

Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit




Other Videos By Microsoft Research


2022-02-08  Demo: Enabling end-to-end causal inference at scale
2022-02-08  Research Talk: Enhancing the robustness of massive language models via invariant risk minimization
2022-02-08  Research talk: Post-contextual-bandit inference
2022-02-08  Research talk: Causal ML and fairness
2022-02-08  Research talk: Causal learning: Discovering causal relations for out-of-distribution generalization
2022-02-08  Research talk: Can causal learning improve the privacy of ML models?
2022-02-08  Research talk: Causal ML and business
2022-02-08  Research talk: Challenges and opportunities in causal machine learning
2022-02-08  Opening remarks: Causal Machine Learning
2022-02-08  Closing remarks: The Future of Privacy and Security
2022-02-08  Demo: Generating formally proven low-level parsers with EverParse
2022-02-08  Demo: EverParse: Automatic generation of formally verified secure parsers for cloud integrity
2022-02-08  Research talk: DARPA SafeDocs: an approach to secure parsing and information interchange formats
2022-02-08  Research talk: Privacy in machine learning research at Microsoft
2022-02-08  Research talk: Towards bridging between legal and technical approaches to data protection
2022-02-08  Research talk: Building towards a responsible data economy
2022-02-08  Keynote: Unlocking exabytes of training data through privacy preserving machine learning
2022-02-08  Closing remarks: Responsible AI
2022-02-08  Opening remarks: The Future of Privacy and Security
2022-02-08  Tutorial: Create human-centered AI with the Human-AI eXperience (HAX) Toolkit
2022-02-08  Panel: Maximizing benefits and minimizing harms with language technologies



Tags:
security
user privacy
future of security
future of privacy
trust in technology
system integrity
privacy preserving machine learning
election integrity
secure parsing technology
communication protocols for systems
microsoft research summit