I've done some preliminary testing with Z-Image Turbo in the past week.
Thoughts:
- It's fast (~3 seconds on my RTX 4090)
- Surprisingly capable of maintaining image integrity even at high resolutions (1536x1024, sometimes 2048x2048)
- Prompt adherence is impressive for a 6B parameter model
Some tests (2 / 4 passed):
Personally, I find it works better as a refiner downstream of Qwen-Image 20B, which has significantly better prompt understanding but gives its generated images an unnatural "smoothness".
China really is keeping the open-weight/open-source AI scene alive. If a consumer GPU market still exists in five years, it will be because of them.
If that’s your website, please check the GitHub link - it has a typo ("gitub") and goes to a malicious site.
On fal, it often takes less than a second:
https://fal.ai/models/fal-ai/z-image/turbo/api
Couple that with a LoRA, and you can generate completely personalized images in about 3 seconds.
The speed alone is a big factor, but put the model side by side with Seedream, Nano Banana, and the rest and it's definitely in the top 5 - that's a killer combo imho.
So does this finally replace SDXL?
Is Flux 1/2/Kontext left in the dust by the Z Image and Qwen combo?
> It's fast (~3 seconds on my RTX 4090)
It is amazing how far behind Apple Silicon is when it comes to running non-language models.
Using the reference code from Z-Image on my M1 Ultra, it takes 8 seconds per step - over a minute for the default 9 steps.