FWIW, I think the memory snapshotting idea isn't going to work for most stacks, for a few different reasons, but to speak more broadly on API design for durable execution systems, I agree completely. One of the issues with both Temporal and Hatchet in their current state is that they abstract away concepts the developer needs to understand while building the system, like what it actually means for a workflow to be durable. So you end up hitting a bunch of weird behaviors, like the "non-determinism error", when you start testing these systems without a good grasp of the fundamentals.
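To make that failure mode concrete, here's a minimal TypeScript sketch of the kind of code that trips a non-determinism error on replay. The `WorkflowContext` interface and `ctx.task()` helper are hypothetical stand-ins, not Temporal's or Hatchet's actual API; the point is just the pattern.

```typescript
// Hypothetical context: task() runs fn once, records the result in the event
// history, and returns the recorded result on every subsequent replay.
interface WorkflowContext {
  task<T>(name: string, fn: () => Promise<T>): Promise<T>;
}

// Breaks on replay: Date.now() returns a different value each time the
// workflow function is re-executed, so the code path no longer matches the
// recorded event history.
async function badWorkflow(ctx: WorkflowContext): Promise<string> {
  const startedAt = Date.now();
  return `started at ${startedAt}`;
}

// Survives replay: the timestamp is captured inside a task, so the engine
// records it once and feeds the same value back when the workflow is replayed.
async function goodWorkflow(ctx: WorkflowContext): Promise<string> {
  const startedAt = await ctx.task("get-start-time", async () => Date.now());
  return `started at ${startedAt}`;
}
```

If you don't already have the "workflow code is replayed against an event history" mental model, the difference between those two functions is invisible until the error shows up in testing.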
We're investing heavily in separating out the primitives that come together in a DE system but are useful on their own: tasks, idempotency keys, and workflow state (i.e. event history). I'm not sure exactly what this API will look like in its end state, but idempotency keys, durable tasks, and event-based histories are each independently useful. That said, this only applies to the durable execution side of the Hatchet platform; I think our other primitives (task queues, streaming, concurrency, rate limiting, retries) are more widely used than our `durableTasks` feature precisely because of the problem you're describing.
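As a rough sketch of what "independently useful" means here, the two toy in-memory classes below (purely illustrative, not what our API will look like) show that idempotency keys and event history solve different problems and don't have to be bundled behind a single workflow abstraction:

```typescript
type Event = { step: string; result: unknown };

// Idempotency keys: deduplicate task submissions, with no workflow involved.
class TaskQueue {
  private seen = new Map<string, Promise<unknown>>();

  enqueue<T>(idempotencyKey: string, run: () => Promise<T>): Promise<T> {
    const existing = this.seen.get(idempotencyKey);
    if (existing) return existing as Promise<T>; // duplicate submission: reuse the original result
    const result = run();
    this.seen.set(idempotencyKey, result);
    return result;
  }
}

// Event history: record each step's result so a re-run replays instead of
// re-executing side effects — the core of durable execution, usable alone.
class EventHistory {
  constructor(private events: Event[] = []) {}

  async step<T>(name: string, run: () => Promise<T>): Promise<T> {
    const recorded = this.events.find((e) => e.step === name);
    if (recorded) return recorded.result as T; // replay: skip the side effect
    const result = await run();
    this.events.push({ step: name, result });
    return result;
  }
}
```

Exposing pieces like these separately means you can reach for an idempotency key or a replayable history without first buying into the whole durable-workflow model.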