How do you Force an LLM to Keep Track of the Assumptions a Document Makes?

Video Link: https://www.youtube.com/watch?v=Apxsmuv5F74



Duration: 1:27


Check out my essays: https://aisc.substack.com/
OR book me to talk: https://calendly.com/amirfzpr

AF: I'm going to give you a use case and ask what I can do with it. I have a storybook, and on page one of this storybook it says: every time we refer to a cat, we really mean a dog. This is a fantastical world, and that's what's happening.

My task is to build a RAG system on top of this that answers questions about the animals in this story, and I don't have a lot of resources to fine-tune models or come up with my own embeddings. What's your top tip for how to go about building a RAG system for this?

SP: Yeah, that's a very interesting thing.

So, it depends on whether you want this to be a general system, or whether only specific documents are going to carry this kind of caveat. The easy way would be to use heuristics. But if you want a more general system, you can find the spans that act as premises, preludes, or forewords, that is, text that sets the stage for everything else, and then keep that premise text in the context window permanently whenever you're answering questions about this particular document. A minimal sketch of the pinning idea is shown below.
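As a rough illustration of that "pin the premise" idea, here is a minimal sketch. It assumes a plain-Python RAG loop; `retrieve_chunks` is a hypothetical stand-in for whatever retriever you use, and the premise text is assumed to have already been identified (e.g., from page one of the storybook).

```python
# Hypothetical sketch: pin the document's premise text to every prompt,
# regardless of which chunks the retriever returns for the question.

PREMISE = (
    "Note from page one: every time this story refers to a cat, "
    "it really means a dog."
)

def retrieve_chunks(question: str, k: int = 3) -> list[str]:
    """Placeholder retriever; in practice this would be a vector search."""
    corpus = [
        "The cat chased the ball across the yard.",
        "On page one the narrator explains the world's naming rules.",
        "The cat barked at the mail carrier every morning.",
    ]
    return corpus[:k]

def build_prompt(question: str) -> str:
    chunks = retrieve_chunks(question)
    # The premise is always prepended, so the model sees the document's
    # standing assumption even when no retrieved chunk mentions it.
    context = "\n".join([PREMISE, *chunks])
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("What animals appear in the story?"))
```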

The full-fledged way would be the topic spans and themes that I talked about. But if you know some of these patterns, it's easy to build a classifier, or to use heuristics, to find the rules of discourse.
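For the heuristic route, a toy sketch might look like the following. The section titles and the "refer to X, really mean Y" pattern are made-up assumptions for illustration, not rules taken from the conversation; a real system would tune these to its own documents or replace them with a small classifier.

```python
import re

# Assumed patterns: common "stage-setting" section titles and
# redefinition phrasing such as "every time we refer to X, we really mean Y".
PREMISE_TITLES = re.compile(
    r"^\s*(foreword|preface|prologue|preamble|author'?s note)\b", re.I
)
REDEFINITION = re.compile(r"\b(refer to|call|say)\b.+\b(really\s+)?mean", re.I)

def looks_like_premise(paragraph: str, position: int, max_position: int = 5) -> bool:
    """Flag paragraphs near the start that set rules for the rest of the text."""
    if position >= max_position:
        return False
    return bool(PREMISE_TITLES.search(paragraph) or REDEFINITION.search(paragraph))

paragraphs = [
    "Foreword: every time that we refer to a cat, we really mean a dog.",
    "Chapter 1. The cat woke up before sunrise.",
]
premises = [p for i, p in enumerate(paragraphs) if looks_like_premise(p, i)]
print(premises)  # only the foreword is flagged as premise text
```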







Tags:
deep learning
machine learning