But they are abstractions made to cater to human weaknesses.
So you want LLMs to write a bunch of black box code that humans won’t be able to read and reason about easily? That will definitely end well.