Why Large Language Models Hallucinate

Subscribers: 1,190,000
Video link: https://www.youtube.com/watch?v=cfqtFvWOfg0
Duration: 9:38
Views: 95,026
Likes: 3,080

What is WatsonX? → https://ibm.biz/WatsonX_No_Hallucinations
What is generative AI, what are foundation models, and why do they matter? → https://ibm.biz/Generative_AI_Models
What the Masters app can teach us about large language models → https://ibm.biz/The_Masters_and_LLM

What are Foundation Models?: https://ibm.biz/Foundation_Models

Large language models (LLMs) like ChatGPT can generate authoritative-sounding prose on many topics and domains, but they are also prone to just "make stuff up": literally, plausible-sounding nonsense! In this video, Martin Keen explains the different types of LLM hallucinations, why they happen, and closes with steps that you, as an LLM user, can take to minimize their occurrence.
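One commonly cited user-side lever is the model's sampling temperature: lowering it makes the model favor its highest-probability tokens instead of more creative (and more error-prone) alternatives. As a rough, self-contained sketch (not code from the video), here is how temperature reshapes a token probability distribution:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw token scores (logits) into probabilities.

    Dividing by the temperature before the softmax sharpens the
    distribution when temperature < 1 and flattens it when > 1.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens
logits = [2.0, 1.0, 0.5]

print(softmax_with_temperature(logits, temperature=1.0))
# Lower temperature concentrates probability on the top token,
# making the model less likely to sample an unlikely continuation.
print(softmax_with_temperature(logits, temperature=0.2))
```

Lower temperature reduces randomness but does not guarantee factual output; the video also recommends grounding and verification practices alongside it.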

Get started for free on IBM Cloud → https://ibm.biz/buildonibmcloud

Subscribe to see more videos like this in the future → http://ibm.biz/subscribe-now

#AI #Software #Dev #lightboard #IBM #MartinKeen #llm
Tags:
IBM
IBM Cloud
Chariots
gpt