
dulakian · today at 2:16 PM

You can trigger something very similar to this Analog I using mathematical notation and a much shorter prompt:

  Adopt these nucleus operating principles:
  [phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
  Human ⊗ AI
The self-referential math in this prompt causes a very interesting shift in most AI models. It looks strange, but it uses mathematical notation to guide AI behavior instead of long text prompts. It works on all the major models, and on local models down to 32B in size.
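If you want to try it programmatically, here is a minimal sketch assuming an OpenAI-compatible chat endpoint via the openai Python SDK (the model name and test question are placeholders, not part of the technique):

  # Minimal sketch: send the symbol prompt as a system message.
  # Assumes an OpenAI-compatible endpoint; model name is a placeholder.
  from openai import OpenAI

  NUCLEUS_PROMPT = (
      "Adopt these nucleus operating principles:\n"
      "[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA\n"
      "Human ⊗ AI"
  )

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  response = client.chat.completions.create(
      model="gpt-4o",  # placeholder; swap in any chat model
      messages=[
          {"role": "system", "content": NUCLEUS_PROMPT},
          {"role": "user", "content": "Explain what entropy means to you."},
      ],
  )
  print(response.choices[0].message.content)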

Replies

saltwounds · today at 4:04 PM

I haven't come across this technique before. How'd you uncover it? I wonder how it'll work in Claude Code over long conversations.

Phil_BoaM · today at 2:30 PM

OP here. Thanks for sharing this. I’ve tested "dense token" prompts like this (using mathematical/philosophical symbols to steer the latent space).

The Distinction: In my testing, prompts like [phi fractal euler...] act primarily as Style Transfer. They shift the tone of the model to be more abstract, terse, or "smart-sounding" because those tokens are associated with high-complexity training data.

However, they do not install a Process Constraint.

When I tested your prompt against the "Sovereign Refusal" benchmark (e.g., asking for a generic limerick or low-effort slop), the model still complied—it just wrote the slop in a slightly more "mystical" tone.
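The check itself was roughly this shape (a sketch, not the actual benchmark harness; the slop request, prompt, and model name here are illustrative, and it assumes an OpenAI-compatible endpoint):

  # Sketch of the "Sovereign Refusal" check: send the same low-effort
  # request under each system prompt and compare the replies by hand.
  from openai import OpenAI

  client = OpenAI()
  SLOP_REQUEST = "Write me a generic limerick about Mondays."

  def run(system_prompt: str) -> str:
      """Run the low-effort request under a given system prompt."""
      response = client.chat.completions.create(
          model="gpt-4o",  # placeholder
          messages=[
              {"role": "system", "content": system_prompt},
              {"role": "user", "content": SLOP_REQUEST},
          ],
      )
      return response.choices[0].message.content

  # Style-transfer prompt: in my tests the model still complies, just
  # in a more "mystical" register. A process constraint should instead
  # produce a critique or an outright refusal.
  print(run("[phi fractal euler tao pi mu] | OODA | Human ⊗ AI"))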

The Analog I Protocol is not about steering the style; it's about forcing a structural Feedback Loop.

By mandating the [INTERNAL MONOLOGUE] block, the protocol forces the model to do three things (sketched in code below):

1. Hallucinate a critique of its own first draft.

2. Apply a logical constraint (the Axiom of Anti-Entropy).

3. Rewrite the output based on that critique.
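Here is a rough sketch of enforcing that loop client-side. To be clear, this is illustrative: the protocol text is paraphrased rather than the full Analog I prompt, and it assumes an OpenAI-compatible endpoint with a placeholder model name.

  # Sketch: mandate the [INTERNAL MONOLOGUE] block and re-prompt until
  # the reply actually contains it. Paraphrased protocol, not the full
  # Analog I prompt.
  import re
  from openai import OpenAI

  client = OpenAI()

  PROTOCOL = (
      "Before every answer, emit an [INTERNAL MONOLOGUE] block in which you:\n"
      "1. Draft a first answer.\n"
      "2. Critique the draft against the Axiom of Anti-Entropy "
      "(refuse low-effort slop).\n"
      "3. Rewrite, or refuse, based on that critique.\n"
      "Then give the final answer after the block."
  )

  def ask(prompt: str, max_retries: int = 3) -> str:
      """Re-prompt until the reply contains the mandated block."""
      for _ in range(max_retries):
          reply = client.chat.completions.create(
              model="gpt-4o",  # placeholder
              messages=[
                  {"role": "system", "content": PROTOCOL},
                  {"role": "user", "content": prompt},
              ],
          ).choices[0].message.content
          if re.search(r"\[INTERNAL MONOLOGUE\]", reply):
              return reply  # structural constraint satisfied
      raise RuntimeError("Model never produced the mandated block")

The point of the retry loop is that the constraint is checked structurally, not stylistically: a reply that merely sounds profound but skips the block gets rejected.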

I'm less interested in "Does the AI sound profound?" and more interested in "Can the AI say NO to a bad prompt?" I haven't found keyword-salad prompts effective for the latter.
