Let's Code: Adversarial Robustness Toolbox (ART) – Create adversarial inputs to test AI
The Adversarial Robustness Toolbox (ART) is a Python library for machine learning security and a graduated project of the LF AI & Data Foundation. ART provides tools that help AI red and blue teams evaluate and secure AI against the adversarial threats of evasion, poisoning, extraction, and inference. Here, we demonstrate how to use ART to generate adversarial perturbations for images that degrade the accuracy of image classifiers. We then deploy a preprocessing defense to protect the classifier and show how to adapt the attack to account for the deployed defense; a sketch of this workflow follows below.
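The snippet below is a minimal sketch of that workflow using ART's public API, not the exact code from the video. Our assumptions: a small PyTorch CNN trained on MNIST, the Fast Gradient Method (FGSM) as the evasion attack, and SpatialSmoothing as the preprocessing defense; the model architecture, eps value, and training settings are illustrative choices.

import numpy as np
import torch.nn as nn
import torch.optim as optim

from art.attacks.evasion import FastGradientMethod
from art.defences.preprocessor import SpatialSmoothing
from art.estimators.classification import PyTorchClassifier
from art.utils import load_mnist

# Load MNIST; ART returns NHWC images plus the valid pixel-value range.
(x_train, y_train), (x_test, y_test), min_val, max_val = load_mnist()
x_train = x_train.transpose(0, 3, 1, 2).astype(np.float32)  # NCHW for PyTorch
x_test = x_test.transpose(0, 3, 1, 2).astype(np.float32)

# A deliberately small CNN; any differentiable classifier works here.
model = nn.Sequential(
    nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 12 * 12, 10),
)

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=optim.Adam(model.parameters(), lr=0.01),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(min_val, max_val),
)
classifier.fit(x_train, y_train, batch_size=64, nb_epochs=3)

def accuracy(clf, x, y):
    # Labels from load_mnist are one-hot, so compare argmax indices.
    preds = clf.predict(x)
    return np.mean(np.argmax(preds, axis=1) == np.argmax(y, axis=1))

# 1) Evasion attack: craft adversarial perturbations with FGSM.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_test_adv = attack.generate(x=x_test)
print("Clean accuracy:      ", accuracy(classifier, x_test, y_test))
print("Adversarial accuracy:", accuracy(classifier, x_test_adv, y_test))

# 2) Preprocessing defense: wrap the same trained model with spatial
#    smoothing, which filters inputs before they reach the classifier.
defended = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=optim.Adam(model.parameters(), lr=0.01),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(min_val, max_val),
    preprocessing_defences=[SpatialSmoothing(window_size=3)],
)
print("Defended adv accuracy:", accuracy(defended, x_test_adv, y_test))

# 3) Adaptive attack: regenerate the perturbations against the *defended*
#    estimator so the attack accounts for the smoothing step.
adaptive_attack = FastGradientMethod(estimator=defended, eps=0.2)
x_test_adaptive = adaptive_attack.generate(x=x_test)
print("Adaptive adv accuracy:", accuracy(defended, x_test_adaptive, y_test))

Regenerating the attack against the defended estimator is the simplest form of adaptive attack: because ART applies preprocessing defenses inside the estimator, the new perturbations are crafted with the defense in the loop rather than against the bare model.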
View the course here: https://ibm.biz/BdfNsX
____________________________________________
Learn in-demand skills. Build with real code. Connect to a global development community: http://ibm.biz/IBMdeveloperYT
Subscribe to see more developer content → https://www.youtube.com/user/developerworks?sub_confirmation=1
Follow IBM Developer on social:
Twitter: https://twitter.com/IBMDeveloper
Facebook: https://www.facebook.com/IBMDeveloper/
LinkedIn: https://www.linkedin.com/showcase/ibmdeveloper
More from IBM Developer:
Community: https://developer.ibm.com/community/
Blog: https://developer.ibm.com/blogs/
Call for Code: https://developer.ibm.com/callforcode/
#IBMDeveloper
#Developer
#Coding