Hacker News

jcon321 — yesterday at 10:18 PM

I thought this entire premise was obvious? Does it really take an article and a Venn diagram to say you should only provide the relevant content to your LLM when asking a question?


Replies

simonw — yesterday at 10:22 PM

"Relevant content to your LLM when asking a question" is last year's RAG.

If you look at how sophisticated current LLM systems work, there is so much more to this.

Just one example: Microsoft open-sourced VS Code Copilot Chat today (MIT license). Their prompts are dynamically assembled, with instructions for various tools included based on whether or not those tools are enabled: https://github.com/microsoft/vscode-copilot-chat/blob/v0.29....
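The pattern being described — building the system prompt from per-tool instruction fragments, including only the fragments for tools that are currently enabled — can be sketched roughly like this (the tool names and instruction strings here are invented for illustration, not Copilot's actual ones):

```python
# Sketch of dynamic prompt assembly: only instructions for enabled
# tools end up in the system prompt. All names and text are
# hypothetical, not taken from the Copilot source.

BASE_PROMPT = "You are a coding assistant."

TOOL_INSTRUCTIONS = {
    "read_file": "Use read_file to inspect files before editing them.",
    "run_tests": "Use run_tests to verify changes; report failures verbatim.",
    "web_search": "Use web_search only when local context is insufficient.",
}

def build_system_prompt(enabled_tools):
    """Assemble the system prompt from fragments for enabled tools only."""
    parts = [BASE_PROMPT]
    for name in enabled_tools:
        instruction = TOOL_INSTRUCTIONS.get(name)
        if instruction:
            parts.append(f"## Tool: {name}\n{instruction}")
    return "\n\n".join(parts)

prompt = build_system_prompt(["read_file", "run_tests"])
print(prompt)
```

Disabling a tool then means its instructions never reach the model at all, which keeps the prompt short and avoids the model calling tools that don't exist in the current session.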

And the autocomplete stuff has a wealth of contextual information included: https://github.com/microsoft/vscode-copilot-chat/blob/v0.29....

  You have access to the following information to help you make
  informed suggestions:

  - recently_viewed_code_snippets: These are code snippets that
  the developer has recently looked at, which might provide
  context or examples relevant to the current task. They are
  listed from oldest to newest, with line numbers in the form
  #| to help you understand the edit diff history. It's
  possible these are entirely irrelevant to the developer's
  change.
  - current_file_content: The content of the file the developer
  is currently working on, providing the broader context of the
  code. Line numbers in the form #| are included to help you
  understand the edit diff history.
  - edit_diff_history: A record of changes made to the code,
  helping you understand the evolution of the code and the
  developer's intentions. These changes are listed from oldest
  to latest. It's possible a lot of old edit diff history is
  entirely irrelevant to the developer's change.
  - area_around_code_to_edit: The context showing the code
  surrounding the section to be edited.
  - cursor position marked as ${CURSOR_TAG}: Indicates where
  the developer's cursor is currently located, which can be
  crucial for understanding what part of the code they are
  focusing on.
alfalfasprout — yesterday at 11:07 PM

The industry has attracted grifters pushing "<word of the day> engineering" and fancy diagrams for, frankly, pretty obvious ideas.

I mean yes, duh, relevant context matters. This is why so much effort has gone into things like RAG, vector DBs, prompt synthesis, etc. over the years. LLMs still have pretty limited context windows, so being efficient with them matters.
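A toy illustration of why that efficiency matters: rank candidate chunks against the query and keep only the most relevant ones that fit a small context budget. Here a crude bag-of-words overlap stands in for real embedding similarity, and all the sample data is made up:

```python
# Toy retrieval sketch: score document chunks against a query by
# shared-word count (a stand-in for embedding similarity) and keep
# only relevant chunks that fit a character budget.

def score(query, chunk):
    """Crude relevance: number of lowercase words shared with the query."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def select_context(query, chunks, budget=200):
    """Greedily pick the highest-scoring chunks that fit the budget."""
    ranked = sorted(chunks, key=lambda c: score(query, c), reverse=True)
    picked, used = [], 0
    for chunk in ranked:
        if score(query, chunk) > 0 and used + len(chunk) <= budget:
            picked.append(chunk)
            used += len(chunk)
    return picked

chunks = [
    "The cache layer stores responses keyed by request hash.",
    "Employee onboarding checklist and HR contacts.",
    "Cache invalidation runs nightly via a cron job.",
]
selected = select_context("how does the cache invalidation work", chunks)
print(selected)
```

Real systems replace the scoring function with vector similarity over embeddings, but the budgeting step — spending scarce context tokens only on material likely to help — is the same idea.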