I suspect your background is dominated by imperative languages, because these often bleed low-level, machine concepts into the domain of discourse, causing a conceptual muddle.
From a functional perspective, you see things properly as a matter of language. When you describe a solution to someone in English, do you worry about a "computational system"? When writing proofs or solving mathematical problems in some formal notation, are you thinking of registers? Of course not. You are using the language of the domain, with its own rules.
Computer science is firmly rooted in the formal language tradition, but for historical reasons, the machine has assumed a central role it does not rightly possess. The reason students are confused is that they come into the course still beholden to the machine, which breeds a compulsion to refer to it to know "what's really going on". Instead of thinking about the problem, they are worrying about distracting nonsense.
The reason your students might feel comforted after you explain the machine model is that they already tacitly expect the machine to play a conceptual role in what they're doing. They stare and think, "Okay, but what does this have to do with computers?" The problem is caused by the prior idolization of the machine in the first place.
But machine code and a machine model are not the "really real", with so-called "high-level languages" hovering above them like some illusory phantom, a bit of theater put on by 1s and 0s. Languages exist in our heads; machines are just instruments for simulating them. And assembly language itself is just another language. Its domain just is, loosely, the machine architecture.
So my view is that introductory classes should beat the machine out of students' heads. There is no computer, no machine. The first few classes in programming should omit the computer and begin with paper and pencil and a small toy language (a pure, lispy language tends to be very good here). Students should gain facility in this small language first. The aim should be to make it clear that the language is for talking about the domain, and that it stands on its own, as it were; the computer is to programming as the calculator is to mathematical calculation. Only once this has been achieved are computers permitted, because large programs are impractical to deal with using pen and paper.
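To make the paper-and-pencil idea concrete, here is a minimal sketch of the kind of toy lispy language meant. This is my own illustration, not something prescribed above: Python is the host, s-expressions are encoded as tuples, and the particular operators and forms (`let`, `if`, `+`, `*`, `<`) are arbitrary choices. The point is that evaluation is nothing but rewrite rules a student can apply by hand, with no machine model in sight.

```python
# A toy "lispy" evaluator (hypothetical illustration): expressions are
# nested tuples, and evaluation is just the application of rewrite rules --
# the same rules a student can carry out on paper.

def evaluate(expr, env):
    if isinstance(expr, (int, float)):       # numbers evaluate to themselves
        return expr
    if isinstance(expr, str):                # a name means whatever the environment says
        return env[expr]
    op, *args = expr
    if op == "if":                           # (if test then else)
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    if op == "let":                          # (let name value body)
        name, value, body = args
        return evaluate(body, {**env, name: evaluate(value, env)})
    vals = [evaluate(a, env) for a in args]  # built-in operators
    return {"+": sum,
            "*": lambda xs: xs[0] * xs[1],
            "<": lambda xs: xs[0] < xs[1]}[op](vals)

# The same reduction a student would do on paper:
#   (let x 3 (* x (+ x 1)))  ->  (* 3 (+ 3 1))  ->  (* 3 4)  ->  12
print(evaluate(("let", "x", 3, ("*", "x", ("+", "x", 1))), {}))  # prints 12
```

Nothing here appeals to registers or memory; the interpreter is merely a mechanical stand-in for the student's pencil, which is exactly the calculator analogy.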
This intuition is the foundational difference between a bona fide computer science curriculum and dilettante tinkering.
Not everything that isn't "bona fide computer science" should be considered "dilettante tinkering". In the real world, code is run on physically existing machines, and not in some abstract mathematical universe of pure functions and infinite-length tapes.
I think that this is completely backwards. As James Mickens put it: pointers are real; you can't just put a LISP book on top of an x86 chip and hope it learns the lambda calculus by osmosis. Computer science is, to be honest, not interesting or useful without a machine to use it on. Therefore trying to teach it to people without reference to a machine is a grave error.