If you want the AI to do anything useful, you need to be able to trust it with access to useful things. Sandboxing doesn't solve this.
Full isolation hasn't been taken seriously because it's expensive, both in resources and in complexity. It's the same reason microkernels lost to monolithic kernels back in the day, and why very few people run Qubes as a daily driver. Even if you're ready to pay the cost, you still need to design everything from the ground up, or at least introduce low-attack-surface interfaces, which still means pretty major changes to existing ecosystems.
> If you want the AI to do anything useful, you need to be able to trust it with access to useful things. Sandboxing doesn't solve this.
By default, AI cannot be trusted because it is not deterministic. You can't audit ahead of time what the output of any given prompt is going to be, to make sure it's not going to `rm -rf /`.
We need some form of behavioral verification/auditing with a guarantee that no input can produce any of a set of specific forbidden outputs.
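To make "specific forbidden outputs" concrete, here's a deliberately naive sketch (all names hypothetical) of the version that *doesn't* carry that guarantee: a substring deny-list checked before a proposed command runs. Trivial rewrites slip straight past it, which is why the ask is a proof over behavior rather than pattern matching on output:

```c
/* Hypothetical pre-execution gate: block a proposed shell command if it
 * contains any deny-listed substring. This is filtering, not a guarantee:
 * `cd / && rm -rf .` sails straight through. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static const char *denylist[] = { "rm -rf /", "mkfs.", "> /dev/sda" };

static bool command_allowed(const char *cmd) {
    for (size_t i = 0; i < sizeof denylist / sizeof denylist[0]; i++)
        if (strstr(cmd, denylist[i]) != NULL)
            return false;              /* forbidden pattern found */
    return true;
}

int main(void) {
    const char *proposals[] = { "ls -la", "rm -rf /", "cd / && rm -rf ." };
    for (size_t i = 0; i < 3; i++)
        printf("%-20s -> %s\n", proposals[i],
               command_allowed(proposals[i]) ? "run" : "BLOCKED");
    return 0;
}
```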
Crazy how all the rules about privacy and security go out of the window as soon as it's AI.
Microkernels lost "back in the day" because of how expensive syscalls were, and how many of them a microkernel needs to do basic things. That is mostly solved now, both by making syscalls faster and by eliminating them outright with things like queues in shared memory.
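To make the shared-memory point concrete, here's a minimal sketch (names hypothetical) of a single-producer/single-consumer message queue living in a shared mapping, e.g. one created once with `shm_open` and `mmap`. After setup, the two sides exchange messages with atomic loads and stores; there's no syscall on the hot path:

```c
/* Sketch of an SPSC message queue in shared memory. Producer and consumer
 * live in different processes; the struct ring sits in a shared mapping. */
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define SLOTS 64u                      /* capacity; must be a power of two */

struct slot {
    uint32_t len;
    char     data[60];
};

struct ring {
    _Atomic uint32_t head;             /* advanced by the consumer */
    _Atomic uint32_t tail;             /* advanced by the producer */
    struct slot slots[SLOTS];
};

/* Producer: copy a message in. Returns 0, or -1 if full or too large. */
int ring_push(struct ring *r, const void *msg, uint32_t len)
{
    if (len > sizeof r->slots[0].data)
        return -1;
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == SLOTS)
        return -1;                     /* ring is full */
    struct slot *s = &r->slots[tail & (SLOTS - 1)];
    s->len = len;
    memcpy(s->data, msg, len);
    /* release: publish slot contents before the new tail becomes visible */
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 0;
}

/* Consumer: copy a message out. Returns its length, or -1 if empty. */
int ring_pop(struct ring *r, void *out)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail)
        return -1;                     /* ring is empty */
    struct slot *s = &r->slots[head & (SLOTS - 1)];
    uint32_t len = s->len;
    memcpy(out, s->data, len);
    /* release: hand the slot back before the new head becomes visible */
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return (int)len;
}
```

Setting up the mapping costs a few syscalls once; after that, crossing the protection boundary is a couple of atomic operations and a memcpy per message, not a mode switch.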
> you still need to design everything from the ground up
This just isn't true. The components in use now are already well designed, meaning they separate concerns well and can be easily pulled apart. This is true of both kernel code and userspace code. We just witnessed a filesystem enter and exit the Linux kernel within the span of a year. No "ground up" redesign needed.