logoalt Hacker News

dkdcioyesterday at 7:31 PM2 repliesview on HN

how accurate are these system prompt (and now soul docs) if they’re being extracted from the LLM itself? I’ve always been a little skeptical


Replies

simonwyesterday at 7:34 PM

The system prompt is usually accurate in my experience, especially if you can repeat the same result in multiple different sessions. Models are really good at repeating text that they've just seen in the same block of context.

The soul document extraction is something new. I was skeptical of it at first, but if you read Richard's description of how he obtained it he was methodical in trying multiple times and comparing the results: https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5...

Then Amanda Askell from Anthropic confirmed that the details were mostly correct: https://x.com/AmandaAskell/status/1995610570859704344

> The model extractions aren't always completely accurate, but most are pretty faithful to the underlying document. It became endearingly known as the 'soul doc' internally, which Claude clearly picked up on, but that's not a reflection of what we'll call it.

ACCount37yesterday at 7:39 PM

Extracted system prompts are usually very, very accurate.

It's a slightly noisy process, and there may be minor changes to wording and formatting. Worst case, sections may be omitted intermittently. But system prompts that are extracted by AI-whispering shamans are usually very consistent - and a very good match for what those companies reveal officially.

In a few cases, the extracted prompts were compared to what the companies revealed themselves later, and it was basically a 1:1 match.

If this "soul document" is a part of the system prompt, then I would expect the same level of accuracy.

If it's learned, embedded in model weights? Much less accurate. It can probably be recovered fully, with a decent level of reliability, but only with some statistical methods and at least a few hundred $ worth of AI compute.

show 1 reply