Book | The Developer's Playbook for Large Language Model Security: Building Secure AI Application...

Channel:
Subscribers:
4,540
Published on ● Video Link: https://www.youtube.com/watch?v=BgXnDrR0m9A



Duration: 0:00
14 views
0


Guest: Steve Wilson, Chief Product Officer, Exabeam [@exabeam (https://x.com/exabeam) ] & Project Lead,  OWASP Top 10 for Larage Language Model Applications [@owasp (https://x.com/owasp) ]


On LinkedIn |   / wilsonsd  


On Twitter | https://x.com/virtualsteve


____________________________


Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine (  / itspmagazine  ) ] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber (  / redefiningcyber  ) ]


On ITSPmagazine | https://www.itspmagazine.com/sean-martin


View This Show's Sponsors (https://www.itspmagazine.com/sean-martin)


___________________________


Episode Notes


In this episode of Redefining CyberSecurity, host Sean Martin sat down with Steve Wilson, chief product officer at Exabeam, to discuss the critical topic of secure AI development. The conversation revolved around the nuances of developing and deploying large language models (LLMs) in the field of cybersecurity.


Steve Wilson's expertise lies at the intersection of AI and cybersecurity, a point he emphasized while sharing his journey from founding the Top 10 group for large language models to authoring his new book, "The Developer's Playbook for Large Language Model Security." In this insightful discussion, Wilson and Martin explore the roles of developers and product managers in ensuring the safety and security of AI systems.


One of the key themes in the conversation is the categorization of AI applications into chatbots, co-pilots, and autonomous agents. Wilson explains that while chatbots are open-ended, interacting with users on various topics, co-pilots focus on enhancing productivity within specific domains by interacting with user data. Autonomous agents are more independent, executing tasks with minimal human intervention.


Wilson brings attention to the concept of overreliance on AI models and the associated risks. Highlighting that large language models can hallucinate or produce unreliable outputs, he stresses the importance of designing systems that account for these limitations. Product managers play a crucial role here, ensuring that AI applications are built to mitigate risks and communicate their reliability to users effectively.


The discussion also touches on the importance of security guardrails and continuous monitoring. Wilson introduces the idea of using tools akin to web app firewalls (WAF) or runtime application self-protection (RASP) to keep AI models within safe operational parameters. He mentions frameworks like Nvidia's open-source project, Nemo Guardrails, which aid developers in implementing these defenses.


Moreover, the conversation highlights the significance of testing and evaluation in AI development. Wilson parallels the education and evaluation of LLMs to training and testing a human-like system, underscoring that traditional unit tests may not suffice. Instead, flexible test cases and advanced evaluation tools are necessary. Another critical aspect Wilson discusses is the need for red teaming in AI security. By rigorously testing AI systems and exploring their vulnerabilities, organizations can better prepare for real-world threats. This proactive approach is essential for maintaining robust AI applications.


Finally, Wilson shares insights from his book, including the Responsible AI Software Engineering (RAISE) framework. This comprehensive guide offers developers and product managers practical steps to integrate secure AI practices into their workflows. With an emphasis on continuous improvement and risk management, the RAISE framework serves as a valuable resource for anyone involved in AI development.


About the Book


Large language models (LLMs) are not just shaping the trajectory of AI, they're also unveiling a new era of security challenges. This practical book takes you straight to the heart of these threats. Author Steve Wilson, chief product officer at Exabeam, focuses exclusively on LLMs, eschewing generalized AI security to delve into the unique characteristics and vulnerabilities inherent in these models.


Complete with collective wisdom gained from the creation of the OWASP Top 10 for LLMs list—a feat accomplished by more than 400 industry experts—this guide delivers real-world guidance and practical strategies to help developers and security teams grapple with the realities of LLM applications. Whether you're architecting a new application or adding AI features to an existing one, this book is your go-to resource for mastering the security landscape of the next frontier in AI.


___________________________


Sponsors


Imperva: https://itspm.ag/imperva277117988


LevelBlue: https://itspm.ag/attcybersecurity-3jdk3


___________________________


Watch this and other videos on ITSPmagazine's YouTube Channel


Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:


📺    • Redefining CyberSecurity Podcast | To...  

ITSPmag...




Other Videos By ITSPmagazine


2024-09-25Building Resilient Applications and APIs: The Importance of Security by Design to Ensure Data Pro...
2024-09-25Building Resilient Applications and APIs: The Importance of Security by Design to Ensure Data Pro...
2024-09-25The Importance of Security by Design to Ensure Data Protection | An Imperva Brand Story
2024-09-24Research is the Key - Shrey Modi and Rahul Vishwakarma's Innovation Journey at California State U...
2024-09-24Research is the Key - Shrey Modi and Rahul Vishwakarma's Innovation Journey at California State U...
2024-09-24Ep 16 - Research is the Key - Shrey and Rahul's Innovation Journey at California State University
2024-09-24Hello From the Dumpster Fire: Real Examples of Artificially Generated Malware, Disinformation and...
2024-09-24Book | The Developer's Playbook for Large Language Model Security: Building Secure AI Applications
2024-09-24Hello From the Dumpster Fire: Real Examples of Artificially Generated Malware, Disinformation and...
2024-09-24Real Examples of Artificially Generated Malware, Disinformation and Scam Campaigns | SecTor Toronto
2024-09-23Book | The Developer's Playbook for Large Language Model Security: Building Secure AI Application...
2024-09-20Rising Stars: Rocket Lab | A Conversation with Sir Peter Beck | Stories From Space Podcast With M...
2024-09-18$17M Series B Will Accelerate Growth As BlackCloak Further Strengthens Its Personal Cybersecurity...
2024-09-18$17M Series B Will Accelerate Growth As BlackCloak Further Strengthens Its Personal Cybersecurity...
2024-09-18$17M Series B Will Accelerate Growth As BlackCloak Further Strengthens Its Personal Cybersecurity...
2024-09-17$17M Series B Will Accelerate Growth As BlackCloak Further Strengthens... | Short Brand Story
2024-09-17Indigenous Astronomy: The Legacy of the Aztecs | Stories From Space Podcast With Matthew S Williams
2024-09-16The Critical Role of Identity in Creating Effective Ransomware Attack Defense and Broader Busines...
2024-09-16The Critical Role of Identity in Creating Effective Ransomware Attack Defense and Broader Busines...
2024-09-16The Critical Role of Identity in Creating Effective Ransomware Attack Defense and Broader Busines...
2024-09-16The Critical Role of Identity in Creating Effective Ransomware Attack Defense... | Semperis