Guardrails for Innovation: Navigating Security Standards in Generative AI and LLMs

Video Link: https://www.youtube.com/watch?v=5TTkTFs-Y6w

As generative AI and large language models (LLMs) gain momentum, solid security standards are more critical than ever. In this talk, SANS Faculty Fellow and SEC511 author Seth Misenar dives into the key frameworks and models shaping the security landscape for AI: the EU AI Act, the NIST AI Risk Management Framework (AI RMF), the OWASP Top 10 for LLM Applications, and MITRE ATLAS. Seth shows how these frameworks act as essential guardrails, guiding us through the risks while fostering innovation. Whether you're building, deploying, or managing AI systems, you'll leave with actionable insights to better secure your AI initiatives and stay ahead in this rapidly evolving field.

Learn more about SEC511 Cybersecurity Engineering: Advanced Threat Detection and Monitoring: https://www.sans.org/u/1vzW

About the Speaker
Seth Misenar is a cybersecurity expert who serves as a SANS Faculty Fellow and Principal Consultant at Context Security, LLC. He is one of the few security experts worldwide to have earned the GIAC Security Expert (GSE #28) credential. His background includes network and web application penetration testing, vulnerability assessment, regulatory compliance, security architecture design, and general security consulting. Seth teaches a variety of cybersecurity courses for the SANS Institute, including two popular courses he co-authored: the bestselling SEC511: Cybersecurity Engineering: Advanced Threat Detection and Monitoring and LDR414: SANS Training Program for CISSP® Certification. He also co-authored the Syngress CISSP® Study Guide, now in its 3rd Edition.