Hacker News

ossa-ma · yesterday at 7:20 PM

How is it a “fascinating learning exercise” when the intention is to run the model in a closed loop with zero transparency? Running a black box inside a black box to learn? What signals are you even listening to in order to determine whether your context engineering is good or whether the quality has improved, aside from a brief glimpse at the final product? So essentially every time I want to test a prompt, I waste $100 on Claude and have it build an entire project for me?

I’m all for AI, and it’s evident that the future of AI is more transparency (MLOps, tracing, mech interp, AI safety), not less.


Replies

dhorthy · yesterday at 10:43 PM

There is the theoretical “how the world should be” and there is the practical “what’s working today.” Decry the latter and wait around for the former at your peril.

alansaber · yesterday at 7:40 PM

Current transparency is rubbish, but people will continue to put up with it if they’re getting decent output quality.