What do you mean by automatically curated content chunks? RAG with embedding search is the process of deciding which chunks go into the bot's context so that it can reference them when answering a user's question.
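To make that concrete, here's a minimal sketch of what "embedding search for RAG" usually boils down to: score every chunk against the query vector, keep the top few, and paste them into the prompt. The function names are just for illustration, and the random vectors stand in for whatever embedding model and chunking you'd actually use.

```python
import numpy as np

def top_k_chunks(query_vec, chunk_vecs, chunks, k=3):
    # Rank chunks by cosine similarity to the query and return the best k.
    q = query_vec / np.linalg.norm(query_vec)
    c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    scores = c @ q
    best = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in best]

def build_prompt(question, retrieved):
    # Stuff the retrieved chunks into the context the bot sees.
    context = "\n\n".join(retrieved)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Toy usage: random vectors stand in for real embeddings of pre-chunked documents.
rng = np.random.default_rng(0)
chunks = ["chunk about billing", "chunk about login", "chunk about refunds"]
chunk_vecs = rng.normal(size=(3, 8))
query_vec = chunk_vecs[2] + 0.05 * rng.normal(size=8)  # query "close to" the refunds chunk
print(build_prompt("How do refunds work?", top_k_chunks(query_vec, chunk_vecs, chunks, k=1)))
```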
I guess I'm saying that over the past 30 years there have been a number of systems developed that take input from a user and find relevant bits of content from some corpus...aka 'search'.
Searches using vector embeddings are likely better at matching relevant semantics than most other systems, so they are an excellent candidate for RAG. However, if there's a system that's already working quite well at finding relevant content based on user input, then there wouldn't necessarily be any value in adding a vectorized search to the RAG pipeline. Just use the existing system to populate relevant content into the context.
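As a sketch of that point (all names hypothetical, and the "existing search" is deliberately dumbed down to a word-overlap score): the RAG pipeline only needs something that returns relevant content for a query. Whether that something is a vector index or the search system you already run is an implementation detail.

```python
def existing_search(query, corpus, k=3):
    # Stand-in for whatever retrieval you already have (keyword index, BM25,
    # Elasticsearch, an internal API...). Crude word-overlap scoring here,
    # just to make the shape of the pipeline concrete.
    q_terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda doc: -len(q_terms & set(doc.lower().split())))
    return ranked[:k]

def rag_prompt(question, retriever, corpus):
    # RAG without a vector DB: the retriever is whatever already works.
    context = "\n\n".join(retriever(question, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

corpus = [
    "Refunds are processed within 5 business days.",
    "Reset your password from the login page.",
    "Invoices are emailed on the first of the month.",
]
print(rag_prompt("How long do refunds take?", existing_search, corpus))
```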
Then the other half of my wondering is why the primary use case for vector databases appears (?) to be RAG and not just a general-purpose search engine.