Safeguarding Against Malicious Use of Large Language Models: A Review of the OWASP Top 10 for LLMs

Channel:
Subscribers:
4,540
Published on ● Video Link: https://www.youtube.com/watch?v=zELdBD-ZT6Q



Category:
Review
Duration: 49:43
37 views
0


Guest: Jason Haddix, CISO and Hacker in Charge at BuddoBot Inc [@BuddoBot]

On LinkedIn | https://www.linkedin.com/in/jhaddix/

On Twitter | https://twitter.com/Jhaddix

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin
____________________________

This Episode’s Sponsors

Imperva | https://itspm.ag/imperva277117988

Pentera |Β https://itspm.ag/penteri67a

___________________________

In this Redefining CyberSecurity Podcast, we provide an in-depth exploration of the potential implications of large language models (LLMs) and artificial intelligence in the cybersecurity landscape. Jason Haddix, a renowned expert in offensive security, shares his perspective on the evolving risks and opportunities that these new technologies bring to businesses and individuals alike. Sean and Jason explore the potential risks of using LLMs:

πŸš€ Prompt Injections
πŸ’§ Data Leakage
πŸ–οΈ Inadequate Sandboxing
πŸ“œ Unauthorized Code Execution
🌐 SSRF Vulnerabilities
βš–οΈ Overreliance on LLM-generated Content
🧭 Inadequate AI Alignment
🚫 Insufficient Access Controls
⚠️ Improper Error Handling
πŸ’€ Training Data Poisoning

From the standpoint of offensive security, Haddix emphasizes the potential for LLMs to create an entirely new world of capabilities, even for non-expert users. He envisages a near future where AI, trained on diverse datasets like OCR and image recognition data, can answer private queries about individuals based on their public social media activity. This potential, however, isn't limited to individuals - businesses are equally at risk.

According to Haddix, businesses worldwide are rushing to leverage proprietary data they've collected in order to generate profits. They envision using LLMs, such as GPT, to ask intelligent questions of their data that could inform decisions and fuel growth. This has given rise to the development of numerous APIs, many of which are integrated with LLMs to produce their output.

However, Haddix warns of the vulnerabilities this widespread use of LLMs might present. With each integration and layer of connectivity, opportunities for prompt injection attacks increase, with attackers aiming to exploit these interfaces to steal data. He also points out that the very data a company uses to train its LLM might be subject to theft, with hackers potentially able to smuggle out sensitive data through natural language interactions.

Another concern Haddix raises is the interconnected nature of these systems, as companies link their LLMs to applications like Slack and Salesforce. The connections intended for data ingestion or query could also be exploited for nefarious ends. Data leakage, a potential issue when implementing LLMs, opens multiple avenues for attacks.

Sean Martin, the podcast's host, echoes Haddix's concerns, imagining scenarios where private data could be leveraged and manipulated. He notes that even benign-seeming interactions, such as conversing with a bot on a site like Etsy about jacket preferences, could potentially expose a wealth of private data.

Haddix also warns of the potential to game these systems, using the Etsy example to illustrate potential data extraction, including earnings of sellers or even their private location information. He likens the data leakage possibilities in the world of LLMs to the potential dangers of SQL injection in the web world. In conclusion, Haddix emphasizes the need to understand and safeguard against these risks, lest organizations inadvertently expose themselves to attack via their own LLMs.

All OWASP Top 10 items are reviewed, along with a few other valuable resources (listed below).

____________________________

Resources

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

πŸ“Ί https://www.youtube.com/playlist?list=PLnYu0psdcllQZ9kSG7X7grrP_PsH3q3T3ITSPmagazine YouTube Channel:

πŸ“Ί https://www.youtube.com/@itspmagazine

The inspiring Tweet: https://twitter.com/Jhaddix/status/1661477215194816513

OWASP Top 10 List for Large Language Models Descriptions: https://owasp.org/www-project-top-10-for-large-language-model-applications/descriptions/

Daniel Miessler Blog: The AI attack Surface Map 1.0: https://danielmiessler.com/p/the-ai-attack-surface-map-v1-0/

Gandalf AI Playground: https://gandalf.lakera.ai/

____________________________

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring an ITSPmagazine Channel?

πŸ‘‰ https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network




Other Videos By ITSPmagazine


2023-06-26Book | Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy | With Tom Kemp
2023-06-23ITSPmagazine On-Location at Infosecurity Europe 2023 | Goodbye London and the InfoSec EU Community!
2023-06-22ITSPmagazine On-Location at Infosecurity Europe 2023, London | Day 3 Catch-Up with Frankie Thomas
2023-06-22ITSPmagazine On-Location at Infosecurity Europe 2023, London | Day 2 Catch-Up on Day 3 Morning 😬
2023-06-22Securing Bridges | A Live Stream Podcast With Alyssa Miller | Guest: Kevin Johnson | Episode 43
2023-06-21How to Navigate the Policy Playground for a Privacy-First Future with Debbie Reynolds
2023-06-20ITSPmagazine On-Location at Infosecurity Europe 2023, London | Day One Evening Catch-Up
2023-06-19ITSPmagazine On-Location at Infosecurity Europe, London | Pre-Event Kick-Off
2023-06-15Visualizing and Prioritizing Risk Management in Cybersecurity: A Data-Driven Approach | Brinqa Story
2023-06-15Protecting Active Directory On-Premises and Azure AD in the Cloud | A Semperis Story
2023-06-15Safeguarding Against Malicious Use of Large Language Models: A Review of the OWASP Top 10 for LLMs
2023-06-15Securing Bridges | A Live Stream Podcast With Alyssa Miller | Guest: Jason Haddix | Episode 42
2023-06-14Introducing 'Cyber Cognition Podcast' | A Conversation with Podcast Host Hutch
2023-06-13Managing Current Demands of a Cyber Workforce Whilst Looking to Secure the Workforce of the Future
2023-06-13Why Security Culture Eats Strategy for Breakfast | A Conversation with Robin Bylenga
2023-06-13Power to the People: Climate Change, Technology, and Our Collective Role" |Guest: Will Wiseman
2023-06-13Inspiring Journeys of Two Tenacious Women Who Conquered Hardships, Graduated, and Triumphed
2023-06-13Introducing 'Hacking Your Potential Podcast' | A Conversation with Podcast Host Frankie Thomas
2023-06-12A Conversation with Mentor Valerie Fridland | The Mentor Project Podcast
2023-06-12Supercomputing Analysis for NASA Missions | A Conversation with Dr. Olaf Storaasli
2023-06-12Resistance is Futile | Cyber Cognition Podcast with Hutch



Tags:
natural language
proprietary data
risks
ciso
unauthorized access
sean martin
jason haddix
machine-generated content
exploit
cybersecurity
llms
vulnerabilities
chatbots
pii
biases
attack surfaces
data leakage
disinformation
sandboxing
security
language and learning models
owasp
ai
ml
privacy
code interpreters