The simulation isn't an operating brain. It's a description of one. What it "means" is imposed by us; what it actually is is a shitload of graphite marks on paper, or relays flipping around, or rocks on sand, or (pick your medium).
An arbitrarily-perfect simulation of a burning candle will never, ever melt wax.
An LLM is always a description. An LLM running on a computer is identical to the same description worked through on paper (just much faster).
I believe that the important part of a brain is the computation it's carrying out. I would call this computation thinking and say it's responsible for consciousness. I think we agree that this computation would be identical if it were simulated on a computer or on paper. If you pushed me on what exactly it means for a computation to physically happen and create consciousness, I would have to move to statements I'd call dubious conjectures rather than beliefs; your points in other threads about relying on interpretation have made me think more carefully about this.
Thanks for stating your views clearly. I have some questions to try and understand them better:
Would you say you're sure that you aren't in a simulation while acknowledging that a simulated version of you would say the same?
What do you think happens to someone whose neurons get replaced by small computers one by one (if you're happy to assume for the sake of argument that such a thing is possible without changing the person's behavior)?
It seems to me that the distinction becomes irrelevant as soon as you connect inputs and outputs to the real world. You wouldn't say that a 737 autopilot can never, ever fly a real jet, and yet it behaves exactly the same whether it's up in the sky or hooked up to recorded/simulated signals on a test bench.
Here is a thought experiment:
Build a simulation of creatures that evolve from simple structures (think RNA, DNA).
Now, if in this simulation, after many many iterations, the creatures start talking about consciousness, what does that tell us?
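To make the thought experiment concrete, here's a minimal sketch of the kind of loop it imagines: toy bit-string genomes under mutation and selection. Everything here (genome length, fitness function, mutation rate) is an arbitrary placeholder; nothing this simple produces creatures that talk, but the structure is the same.

```python
import random

random.seed(0)

GENOME_LEN = 32   # arbitrary toy parameters
POP_SIZE = 50
GENERATIONS = 100

def fitness(genome):
    # Toy fitness: count of 1-bits, standing in for any survival criterion.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Flip each bit with small probability.
    return [b ^ 1 if random.random() < rate else b for b in genome]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    best_history = []
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        best_history.append(fitness(pop[0]))
        # Top half survives unchanged; bottom half is replaced by
        # mutated copies of the survivors.
        parents = pop[: POP_SIZE // 2]
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(POP_SIZE - len(parents))]
    return best_history

if __name__ == "__main__":
    history = evolve()
    print(history[0], history[-1])
```

Because the survivors are carried over unchanged, the best fitness never decreases; the interesting question is what happens when the fitness landscape is rich enough that "talking about consciousness" becomes a winning strategy.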
> An arbitrarily-perfect simulation of a burning candle will never, ever melt wax.
It might if the simulation includes humans observing the candle.
What makes the simulation we live in special compared to the simulation of a burning candle that you or I might be running?
That simulated candle is perfectly melting wax in its own simulation. Duh, it won't melt any in ours, because our arbitrary notions of "real" wax are disconnected between the two simulations.