This is exactly the wrong approach. LLMs are good at writing programming languages they already know, ones well represented in the training data, not languages they have never seen before, where you have to include the entire language manual and lots of example code in every prompt.
This is not my experience. I've been experimenting with something very similar to vera, except that my language transpiles into multiple targets (Java, TypeScript, Common Lisp, Rust, C++, Python, C#, and Swift). The transpiler is written in the language itself, with a separate bootstrap transpiler written in Common Lisp. But my point is that Claude, at least, is extremely capable of writing decent code in my new language with barely any prompting: just minimal guidance on the language itself and no examples.