You need this skill if you're the engineer who's designing and implementing that preprocessing step.
Over a short time horizon I think you're right. But over a longer horizon, we should expect model providers to internalize these mechanisms, similar to how chain of thought has been effectively "internalized" - which, as models have gotten better, has reduced the gains that prompt engineering used to provide.
Non-rhetorical question: is this different enough from data engineering that it needs its own name?
Not at all, just ask the LLM to design and implement it.
AI turtles all the way down.
The skill amounts to determining "what information is required for System A to achieve Outcome X." We already have a term for this: critical thinking.