I miss the days when humans submitted things they had done to this site, instead of generating long slop articles in 5 minutes (‘LLM‑based code synthesis—while mind-numbingly effective—’) about slop code they generated in 5 minutes (or worse, in hours) with foolish prompts: ‘Produce mathematics at the level of Vladimir Voevodsky, Fields Medal-winning, foundation-shaking work’.
Should we even read this, or should we get an LLM to summarise it into a few bullet points again?
This bit was interesting in illuminating the human authors’ credulity (assuming they believe in their own article):
‘The central move was elegant: stop asking only “is the system safe?”, start asking “how far is it from safety?”’
This ersatz profundity, couched in a false opposition, is common in generated text. Does it have anything at all to do with the generated code, or is it all just convincing bullshit?