Troubleshoot Retrieval with Vector Stores
Vector stores enable teams to connect their own data to LLMs. A common application is a chatbot that searches a company's knowledge base to answer specific questions.
Here's an example of what retrieval looks like for a chatbot application: a user asks a specific question, an embedding is generated for the query, the relevant context in the knowledge base is retrieved, and that context is added to the prompt sent to the LLM.
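The retrieval flow above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the `embed` function below is a stand-in (it hashes text into a random vector) for whatever embedding model you actually use, and the knowledge base is a plain dict of document-to-embedding.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: in practice, call your embedding model here
    # (e.g., a sentence-transformers or hosted embedding API).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=8)

def retrieve(query: str, knowledge_base: dict, k: int = 2) -> list:
    """Return the k documents whose embeddings are closest to the query."""
    q = embed(query)
    ranked = sorted(
        knowledge_base,
        key=lambda doc: np.linalg.norm(knowledge_base[doc] - q),
    )
    return ranked[:k]

# Toy knowledge base: document text mapped to its embedding.
kb = {doc: embed(doc) for doc in ["video quality docs", "billing docs", "login docs"]}

question = "Why is my video blurry?"
context = retrieve(question, kb)

# The retrieved context is added into the prompt sent to the LLM.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

If retrieval returns documents that are far from the query in embedding space, the prompt ends up with weak context, which is exactly the failure mode discussed next.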
If there isn't enough relevant context to retrieve, the prompt lacks the information needed to answer the question.
Here's an example of bad retrieval: the knowledge base didn't contain enough information about video data quality to answer the user's question, and the chatbot hallucinated.
If users' questions overlap well with the available context, that indicates the knowledge base has enough information to answer them. If there isn't overlap, then either:
- 1. Users are asking questions that aren't covered in the knowledge store (the bigger issue), or
- 2. There are documents in the knowledge store that queries never hit.
To measure this overlap, compare the distance between query and context embeddings: for each query embedding, compute the Euclidean distance to its nearest context embedding.
If query and context embeddings don't line up, identifying regions with a high density of queries but little nearby context helps narrow down what's missing.
Identify patterns where there are many queries with little supporting context
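One simple way to surface those patterns, sketched below with synthetic data: compute each query's nearest-context distance, then flag the queries above a chosen cutoff. The percentile threshold here is an arbitrary illustration; clusters among the flagged queries point at topics the knowledge base is missing.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy embeddings: 50 queries and 20 context chunks, 2-D for illustration.
# The contexts are deliberately offset so some queries have no close context.
queries = rng.normal(size=(50, 2))
contexts = rng.normal(loc=2.0, size=(20, 2))

# Nearest-context distance for every query.
dists = np.linalg.norm(queries[:, None, :] - contexts[None, :, :], axis=-1)
nearest = dists.min(axis=1)

# Flag queries whose closest context exceeds a threshold (75th percentile
# here, purely as an example cutoff). A dense cluster of flagged queries
# marks a topic area the knowledge base doesn't cover.
threshold = np.percentile(nearest, 75)
underserved = queries[nearest > threshold]
print(f"{len(underserved)} of {len(queries)} queries lack nearby context")
```

In a real workload you would cluster or inspect the flagged queries to see which topics they share, then add documents for those topics.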
In the example above, there wasn't enough context on video quality to correctly answer the user's questions. Adding more context on those topics can help.