I don’t know if any of the engines are fully tested yet.
For new LLMs I’ve gotten into the habit of building llama.cpp from upstream head and checking for updated quantizations right before I start using the model. You can also download llama.cpp CI builds from their releases page, but on Linux it’s easy to set up a local build (see the sketch below).
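As a rough illustration, here’s a minimal Python sketch of that “pull head and rebuild” routine. The repo URL and checkout path are assumptions for your own setup, and it only does a plain CPU release build; adjust the CMake flags for whatever backend you actually use.

```python
#!/usr/bin/env python3
"""Sketch: update a local llama.cpp checkout to upstream head and rebuild it."""
import subprocess
from pathlib import Path

REPO_URL = "https://github.com/ggml-org/llama.cpp"  # upstream repo
SRC_DIR = Path.home() / "src" / "llama.cpp"         # assumed checkout location


def run(cmd, cwd=None):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)


if SRC_DIR.exists():
    # Fast-forward to upstream head so new model architectures are picked up
    run(["git", "pull", "--ff-only"], cwd=SRC_DIR)
else:
    run(["git", "clone", REPO_URL, str(SRC_DIR)])

# Standard CMake release build (CPU-only; add backend flags such as
# -DGGML_CUDA=ON if you want GPU offload)
run(["cmake", "-B", "build", "-DCMAKE_BUILD_TYPE=Release"], cwd=SRC_DIR)
run(["cmake", "--build", "build", "-j"], cwd=SRC_DIR)
```

After that, grabbing a freshly re-uploaded quantization of the model is just a matter of checking its Hugging Face page for GGUF files newer than the llama.cpp fix you care about.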
If you don’t want to be a guinea pig for untested work, the safe option is to wait 2-3 weeks.