LLM coding isn't a new level of abstraction. An abstraction is a (semi-)reliable way to manage complexity: it packages complex behavior into building blocks that are useful for reasoning about outcomes.
Because model output can vary widely from invocation to invocation, let alone from model to model, prompts aren't reliable abstractions. You can't send someone all of the prompts for a vibecoded program and know they'll get a binary with roughly the same behavior. An effective programmer in the LLM age won't save mental energy by reasoning about the prompts; they'll fiddle with the prompts, cross their fingers that the output is workable code, then go back to reasoning about the code to ensure it meets their specification.
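To make the contrast concrete, here's a minimal sketch. The deterministic function behaves like a real abstraction: same input, same output, every time. The `generate_code` function is a hypothetical stand-in for LLM code generation (not any real API), with sampling simulated by a seeded RNG; the point is that the prompt alone doesn't pin down which program you get.

```python
import random

# A traditional abstraction: deterministic, so callers can reason about outcomes.
def total_cents(prices):
    """Sum a list of prices in cents. Same input, same output, every time."""
    return sum(prices)

# Hypothetical stand-in for LLM codegen: sampling means the artifact can vary
# between invocations, even though the "prompt" is identical.
def generate_code(prompt, seed):
    rng = random.Random(seed)
    # Different runs plausibly emit different implementations of "sum the prices".
    variants = [
        "def total_cents(prices): return sum(prices)",
        "def total_cents(prices):\n    t = 0\n    for p in prices:\n        t += p\n    return t",
        "def total_cents(prices): return sum(p for p in prices if p)",  # subtly different behavior!
    ]
    return rng.choice(variants)

# The abstraction is stable across calls...
assert total_cents([100, 250]) == total_cents([100, 250]) == 350

# ...but the prompt is not: two invocations may yield different programs,
# so the reasoning has to happen at the level of the emitted code.
run1 = generate_code("sum the prices", seed=1)
run2 = generate_code("sum the prices", seed=2)
```

The third variant is the quiet failure mode: it looks like the others but silently drops zero-valued entries, which is exactly the kind of divergence you only catch by reading the generated code, not the prompt.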
What I think the discipline will find after the dust settles is that traditional computer code is the "easiest" way to reason about computer behavior. It has a learning curve, yes, but it remains the highest level of real "abstraction", with LLMs being more of a slot machine for saving the typing of some boilerplate.