Perception Deception Physical Adversarial Attack Challenges

Video Link: https://www.youtube.com/watch?v=7VPyHkZudgM



Duration: 54:33


DNNs have been very successful at object detection, which is critical to the perception stack of autonomous driving, yet they have also been found vulnerable to adversarial examples. There has been an ongoing debate about whether perturbations to the sensor input, such as the video stream from the camera, are practically achievable. Instead of tampering with the input stream, we add perturbations to the target object itself, which is more practical. The goal of this talk is to shed light on the challenges of physical adversarial attacks against computer vision-based object detection systems and the tactics we applied to achieve success. At the same time, we would like to raise security concerns about AI-powered perception systems and urge research efforts to harden DNN models.
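As a rough, generic illustration of the digital starting point (not the presenters' actual code), the sketch below applies a PGD-style, bounded perturbation that pushes down a detector's box confidences. The model callable and the shape of its confidence output are assumptions made for brevity.

import torch

# PGD-style, L-infinity-bounded perturbation that suppresses a detector's
# confidence scores. `model` is assumed to map an image tensor to per-box
# confidence scores; eps, alpha, and steps are illustrative values.
def digital_attack(model, image, eps=8 / 255, alpha=1 / 255, steps=40):
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        confidences = model(x_adv)          # assumed: per-box detection confidences
        loss = confidences.sum()            # total confidence across candidate boxes
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # step to reduce confidence
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # stay within the eps ball
            x_adv = x_adv.clamp(0.0, 1.0)                     # keep valid pixel values
    return x_adv.detach()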

The presentation starts with an overview of YOLOv3 to introduce the fundamentals of this state-of-the-art object detection method, which takes camera input and produces accurate detections. It is followed by the threat models we designed to achieve the physical attack by applying carefully crafted perturbations to actual physical objects. We then present our attack algorithms and attack strategies. Throughout the presentation, we show examples of our initial digital attack and how we adapted it to a physical attack under environmental constraints, for example that an object is seen from various distances and angles.
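To make the distance-and-angle constraint concrete, here is a minimal Expectation-over-Transformation-style sketch in PyTorch: a patch is optimized while being randomly rotated and rescaled each step so that it keeps suppressing detections across simulated viewpoints. The model callable, the fixed paste location, and the transform ranges are illustrative assumptions and do not reproduce the presenters' pipeline.

import torch
import torchvision.transforms.functional as TF

# Expectation-over-Transformation-style patch optimization: each step the patch
# is randomly rotated and rescaled to stand in for different viewing angles and
# distances before being pasted into a training image.
def train_patch(model, images, patch_size=64, steps=500, lr=0.01):
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        img = images[torch.randint(len(images), (1,))].clone()
        angle = float(torch.empty(1).uniform_(-20, 20))   # simulated viewing angle
        scale = float(torch.empty(1).uniform_(0.5, 1.5))  # simulated distance
        warped = TF.affine(patch.unsqueeze(0), angle=angle, translate=[0, 0],
                           scale=scale, shear=[0.0, 0.0])
        img[:, :, :patch_size, :patch_size] = warped      # paste at a fixed spot for simplicity
        loss = model(img).sum()                           # assumed per-box confidences
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)                        # keep printable pixel values
    return patch.detach()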

Finally, we wrap up the presentation with a demo showing that, with a careful setup, computer vision-based object detection can be deceived. A robust, adversarial-example-resistant model is required in safety-critical systems such as autonomous driving.
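On the defensive side, one hardening direction related to the presenters' own work is Feature Squeezing (mentioned in Weilin Xu's bio below): compare a model's prediction on the raw input against its predictions on squeezed copies and flag inputs where they disagree. The sketch below illustrates the idea on a classifier for brevity; the function names and the threshold are illustrative assumptions rather than the reference implementation.

import torch
import torch.nn.functional as F

def reduce_bit_depth(x, bits=4):
    # Quantize pixel values in [0, 1] to 2**bits levels.
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def median_smooth(x, kernel=3):
    # Simple median filter via unfold; x has shape (N, C, H, W).
    pad = kernel // 2
    patches = F.unfold(F.pad(x, [pad] * 4, mode="reflect"), kernel_size=kernel)
    patches = patches.view(x.size(0), x.size(1), kernel * kernel, x.size(2), x.size(3))
    return patches.median(dim=2).values

def is_adversarial(model, x, threshold=1.0):
    # Flag inputs whose predictions change too much after squeezing.
    with torch.no_grad():
        p = F.softmax(model(x), dim=-1)
        p_bits = F.softmax(model(reduce_bit_depth(x)), dim=-1)
        p_smooth = F.softmax(model(median_smooth(x)), dim=-1)
        score = torch.maximum((p - p_bits).abs().sum(dim=-1),
                              (p - p_smooth).abs().sum(dim=-1))
    return score > threshold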


Presenters:
Tao Wei - Chief Security Scientist, X-Lab, Baidu USA
Tao Wei is Chief Security Scientist at Baidu X-Lab.
Yunhan Jia - Senior Security Scientist, Baidu X-Lab
Yunhan Jia is a senior security scientist at Baidu X-Lab. He obtained his PhD from the University of Michigan with a research focus on smartphone, IoT, and autonomous vehicle security. His past research revealed open-port vulnerabilities in apps that exposed millions of Android devices to remote exploits. He is currently working on memory safety and deep learning model security issues in autonomous vehicle platforms.
Weilin Xu - PhD candidate, Department of Computer Science at the University of Virginia
Weilin Xu is a PhD candidate in the Department of Computer Science at the University of Virginia, co-advised by Prof. David Evans and Prof. Yanjun Qi. He is interested in creating robust machine learning-based classifiers. His research has developed a generic method for generating adversarial examples using genetic programming, and a general technique named Feature Squeezing to harden deep learning models by eliminating unnecessary features.
Zhenyu Zhong - Staff Security Scientist, X-Lab, Baidu USA
Zhenyu Zhong's current research focuses on adversarial machine learning, particularly for deep learning. He explores physical attack tactics against autonomous perception models, as well as defensive approaches to harden deep learning models. Previously, Dr. Zhong worked for Microsoft and McAfee, mainly applying large-scale machine learning solutions to security problems such as malware classification, intrusion detection, malicious URL detection, and spam filtering.

Black Hat - Europe - 2018
Hacking conference
#hacking, #hackers, #infosec, #opsec, #IT, #security




Other Videos By All Hacking Cons


2021-12-21Attacking and Defending Blockchains From Horror Stories to Secure Wallets
2021-12-21Straight Outta VMware Modern Exploitation of the SVGA Device for Guest to Host Escapes
2021-12-21Network Defender Archeology An NSM Case Study in Lateral Movement with DCOM
2021-12-21Attacking Hardware Systems Using Resonance and the Laws of Physics
2021-12-21The Last Line of Defense Understanding and Attacking Apple File System on iOS
2021-12-21Eternal War in XNU Kernel Objects Black Hat - Europe - 2018
2021-12-21Evolving Security Experts Among Teenagers Black Hat - Europe - 2018
2021-12-21No Free Charge Theorem 2 0 How to Steal Private Information from a Mobile Device Using a Powerbank
2021-12-21Off Path Attacks Against PKI Black Hat - Europe - 2018
2021-12-21How to Build Synthetic Persons in Cyberspace
2021-12-21Perception Deception Physical Adversarial Attack Challenges
2021-12-21BLEEDINGBIT Your APs Belong to Us Black Hat - Europe - 2018
2021-12-21Perfectly Deniable Steganographic Disk Encryption
2021-12-21DIFUZE Android Kernel Driver Fuzzing Black Hat - Europe - 2017
2021-12-21Becoming You A Glimpse Into Credential Abuse
2021-12-21How to Rob a Bank over the Phone Lessons Learned from an Actual Social Engineering Engagement
2021-12-21Wi Fi Direct To Hell Attacking Wi Fi Direct Protocol Implementations
2021-12-21Breaking Out HSTS and HPKP On Firefox, IE Edge and Possibly Chrome
2021-12-21Enraptured Minds Strategic Gaming of Cognitive Mindhacks
2021-12-21Zero Days, Thousands of Nights The Life & Times of Zero Day Vulns and Their Exploits
2021-12-21I Trust My Zombies A Trust Enabled Botnet



Tags:
data
hacker
security
computer
cyber
internet
technology
hacking
attack
digital
virus
information
hack
online
crime
password
code
web
concept
thief
protection
network
scam
fraud
malware
secure
identity
criminal
phishing
software
access
safety
theft
system
firewall
communication
business
privacy
binary
account
spy
programmer
program
spyware
hacked
hacking conference
conference
learn
how to
2022
2021
cybersecurity
owned
break in
google
securing
exploit
exploitation
recon
social engineering