Hacker News

mbtrilla, today at 11:49 AM

I run a free comparison site in a different vertical, and the "publishes its losses" line is what made me click. A methodology page on a niche aggregator earns its keep in two ways: readers can check you didn't just rank things by vibes, and it's about the only piece of content that holds up once AI summaries start chewing through your other pages. The question I keep coming back to, though: how often do you actually update the methodology versus quietly nudge it? The discipline of versioning a formula is harder than writing it, because when a result comes out wrong, the temptation to move the threshold instead is huge.


Replies

irldexter, today at 12:43 PM

Hi, Donal from JSS here. We're on R27, revision 27 of the signal weights and features. Each revision gets snapshotted as a "golden" version in config, run through a full backtest, and the results pages pull dynamically from that snapshot, so the numbers are always anchored to a specific revision.
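A minimal sketch of the pinning idea, in Python. All names, weights, and thresholds here are hypothetical (JSS internals aren't public); the point is only that results are scored against an explicit revision key rather than a mutable "current" config:

```python
# Hypothetical "golden snapshot" store: each revision of the signal weights
# is frozen under a version key. Weights and thresholds are made-up examples.
GOLDEN = {
    "R26": {"weights": {"momentum": 0.4, "liquidity": 0.6}, "threshold": 0.55},
    "R27": {"weights": {"momentum": 0.5, "liquidity": 0.5}, "threshold": 0.60},
}

def score(features: dict, revision: str) -> float:
    """Score a candidate against a pinned revision of the weights."""
    cfg = GOLDEN[revision]  # unknown revision raises KeyError, deliberately
    return sum(cfg["weights"][name] * features[name] for name in cfg["weights"])

def passes(features: dict, revision: str) -> bool:
    """Every published result carries the revision it was computed under."""
    return score(features, revision) >= GOLDEN[revision]["threshold"]
```

Because results pages pass the revision explicitly, re-rendering an old page with today's weights is impossible by construction.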

The round numbering is partly for exactly the reason you named: it forces a name onto every change. When a result comes out wrong, the temptation to quietly shift a threshold is real, and having to call it R28 and re-run the full validation raises the cost of doing that on a whim.

A changelog would probably close the loop, though. Right now, R27 is visible in config and referenced in the metrics, but there's no page that says "R27 changed X because Y, here's what the backtest/walk-forward showed before and after." That's the missing accountability layer, and probably more useful to a skeptical reader than any amount of methodology prose.
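One way to sketch that changelog: a structured entry per revision rather than prose, so the before/after validation numbers are machine-readable. The field names and every number below are invented for illustration, not real JSS results:

```python
from dataclasses import dataclass

@dataclass
class RevisionEntry:
    """One public changelog entry per revision. All fields illustrative."""
    revision: str
    changed: str           # what moved, e.g. a weight or a threshold
    rationale: str         # why it moved, stated before seeing new results
    backtest_before: float  # validation metric under the prior revision
    backtest_after: float   # same metric re-run under this revision

CHANGELOG = [
    RevisionEntry(
        revision="R27",
        changed="liquidity weight 0.6 -> 0.5",  # made-up example change
        rationale="overweighted thin markets in walk-forward",
        backtest_before=0.71,  # placeholder numbers
        backtest_after=0.74,
    ),
]
```

Forcing the rationale and both metric values into one record is what makes the quiet-nudge move expensive: an entry with no rationale, or a metric that got worse, is visible to any reader.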