Hacker News

jahooma · 11/07/2024

Ah yeah, that's what I mean! I thought RAG is synonymous with this vector search approach.

Either way, we do the search step a little differently and it works well.


Replies

cratermoon · 11/08/2024

Any kind of search performed before generation, to retrieve content to provide as context in the LLM prompt, is RAG. The goal is to leverage traditional information retrieval as a source of context. https://cloud.google.com/use-cases/retrieval-augmented-gener...
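A minimal sketch of that pattern, with a toy keyword-based `search` standing in for whatever retrieval backend you use (all names here are hypothetical, not from any particular library):

```python
# Minimal RAG sketch: retrieve context with *any* search step, then
# prepend it to the LLM prompt. The retrieval here is a toy keyword
# scorer; in practice it could be vector search, BM25, hybrid, etc.
def search(query: str, k: int = 3) -> list[str]:
    corpus = {
        "RAG combines retrieval with generation.": ["rag", "retrieval"],
        "Embeddings map text to vectors.": ["embeddings", "vectors"],
    }
    terms = [t.strip("?.,!").lower() for t in query.split()]
    scored = [(sum(t in kws or t in doc.lower() for t in terms), doc)
              for doc, kws in corpus.items()]
    # Keep only docs that matched at least one term, best first.
    return [doc for score, doc in sorted(scored, reverse=True)[:k] if score]

def build_prompt(question: str) -> str:
    context = "\n".join(search(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("What is RAG?"))
```

The point is that nothing in the "R" step requires embeddings: any function from a query to relevant text slots in.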

I'm currently working on a demonstration/POC system that uses Elasticsearch as my content source, generating embeddings from that content and passing them to my local LLM.
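The embed-and-retrieve step of a pipeline like that can be sketched in pure Python. The bag-of-words "embedding" below is a stand-in for a real embedding model (and Elasticsearch can also store and search dense vectors itself); the retrieval math is the same either way:

```python
# Toy sketch of embedding-based retrieval: embed documents and query,
# rank by cosine similarity. embed() is a stand-in for a real model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": a sparse bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "retrieval augmented generation uses search results as context",
    "elasticsearch stores and indexes documents",
]

def retrieve(query: str) -> str:
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

print(retrieve("what is retrieval augmented generation"))
```

Swapping the stand-in for a real embedding model and the list for an index is what turns this sketch into the actual system.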

petesergeant · 11/07/2024

I didn't mean to be down on it, and I'm really glad it's working well! If you start to reach the limits of what you can achieve with your current approach, there are lots of cute tricks you can steal from RAG. For example, on larger codebases there's nothing stopping you from doing a fuzzy keyword search for interesting-looking identifiers rather than giving the LLM the whole thing in-prompt.
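One way to sketch that fuzzy-identifier trick with only the standard library (the sample source and function names are made up for illustration):

```python
# Sketch of the fuzzy-identifier trick: instead of putting a whole
# codebase in the prompt, fuzzy-match the user's query against the
# identifiers in the code and include only the lines that mention them.
import re
from difflib import get_close_matches

SOURCE = """
def load_user_profile(user_id):
    return db.fetch("profiles", user_id)

def save_user_profile(profile):
    db.store("profiles", profile)
"""

def find_relevant_lines(query_ident: str, code: str, cutoff: float = 0.6):
    # Pull every identifier-shaped token out of the code...
    idents = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", code))
    # ...fuzzy-match the query against them (difflib, stdlib)...
    hits = get_close_matches(query_ident, idents, n=5, cutoff=cutoff)
    # ...and keep only the source lines that mention a hit.
    return [line for line in code.splitlines()
            if any(h in line for h in hits)]

for line in find_relevant_lines("load_profile", SOURCE):
    print(line)
```

The returned lines (rather than the whole file) then become the context you hand to the LLM.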