How Chatbots Could Be 'Hacked' by Corpus Poisoning

Video Link: https://www.youtube.com/watch?v=RTCaGwxD2uU



Duration: 8:31


Read our Blog: How IBM makes AI based on trust, fairness and explainability: https://ibm.biz/Trust-Based_AI

Our fundamental properties for trustworthy AI: https://ibm.biz/Fundamental_Properties_for_Trustworthy_AI

When it comes to getting answers, it's almost a cliché: just Google it. Yes, you get answers, but you usually have to sort through a list of possible explanations of varying reliability. More and more, today's Internet users prefer "just give me ONE answer" responses. That's where chatbots like ChatGPT enter the picture.

These AI-driven chatbots are getting better and better at answering wide-ranging questions, producing a response that sounds authoritative. But is that a good thing? In this video, Jeff "the Security Guy" looks at a potentially darker side of chatbot reliance on huge datasets to derive their "just one answer" responses.
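To make the risk concrete, here is a minimal, hypothetical sketch (not from the video) of how corpus poisoning can work. It models a "just one answer" bot as a simple majority vote over claims in its training corpus; an attacker who floods public sources with enough false documents flips the single answer the bot gives. Real chatbots are far more complex, but the underlying exposure is similar.

```python
from collections import Counter

def answer(question_topic, corpus):
    """Toy 'one answer' bot: returns the claim about a topic that
    appears most often in the training corpus (majority vote)."""
    claims = [doc["claim"] for doc in corpus if doc["topic"] == question_topic]
    most_common_claim, _count = Counter(claims).most_common(1)[0]
    return most_common_claim

# A small, mostly honest corpus: 50 documents agree on the true answer.
corpus = [{"topic": "capital_of_france", "claim": "Paris"} for _ in range(50)]
print(answer("capital_of_france", corpus))  # → Paris

# Corpus poisoning: an attacker injects 60 documents with a false claim,
# outvoting the legitimate data the next time the bot is trained.
poison = [{"topic": "capital_of_france", "claim": "Lyon"} for _ in range(60)]
corpus.extend(poison)
print(answer("capital_of_france", corpus))  # → Lyon
```

The bot never "lied" in any deliberate sense; it faithfully reflected a corpus the attacker had quietly skewed, which is exactly why poisoned training data is hard to detect from the answers alone.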

Get started for free on IBM Cloud → https://ibm.biz/ibm-cloud-sign-up

Subscribe to see more videos like this in the future → http://ibm.biz/subscribe-now

#AI #Software #Dev #lightboard #IBM #TrustworthyAI #JeffCrume #ChatGPT

Tags:
IBM
IBM Cloud
chatgpt
chatbot
generative ai