Hi! I'm lead researcher on Krea-1. FLUX.1 Krea is a 12B rectified flow model distilled from Krea-1, designed to be compatible with FLUX architecture. Happy to answer any technical questions :)
Coming from a traditional media production background: media is produced in separate layers, which are then composited together into a final deliverable still image, motion clip, and/or audio clip. Producing media as elements that are later combined is essential for expense management and quality control. Current AI image, video, and audio generation methods don't support any of that. ForgeUI did briefly, but that went away; I suspect it's because few people understand large-scale media production requirements.
I guess my point is: do you have any (real) experienced media production people working with you? People who have worked on actual feature film VFX, animated commercials, and multi-million dollar budget productions?
If you really want to make your efforts a wild success, simply support traditional media production. None of the other AI image/video/audio providers seem to understand this, and it is gargantuan: if your tools plugged into traditional media production, they would be adopted immediately. Currently, they see only tentative adoption at best, because they do not integrate with production tools or expectations at all.
I recently ran a training experiment using the same dataset, number of steps, and epochs on both Flux Dev and Flux Krea models.
What stood out to me was that Flux Dev followed the text prompts more accurately, whereas Krea's generations were more loosely aligned, or "off" in terms of prompt fidelity, with deformations in body types and architecture.
Does this suggest that Flux Krea requires more training to achieve strong text-to-image alignment compared to Flux Dev? Or is it possible that Krea is optimized differently (e.g. for style, detail, or artistic variation rather than strict prompt adherence)?
Curious if anyone else has experienced this or has any insight into the differences between these two. Would love to hear your thoughts
thanks for doing this!
what does " designed to be compatible with FLUX architecture" mean and why is that important?
The model looks incredible!
Regarding this part: > Since flux-dev-raw is a guidance distilled model, we devise a custom loss to finetune the model directly on a classifier-free guided distribution.
Could you go into more detail on the specific loss used for this, and share any other tips you might have for finetuning it? I remember the general open source AI art community had a hard time finetuning the original distilled flux-dev, so I'm very curious about that.
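To make the question concrete: here's a minimal sketch of what I *imagine* "finetuning directly on a classifier-free guided distribution" could look like for a rectified flow model, where the regression target is the CFG-combined velocity of a frozen teacher rather than the plain conditional velocity. All names, signatures, and the guidance scale here are my own illustrative guesses, not the actual Krea training code; I'd love to hear how close (or far off) this is.

```python
# Hypothetical sketch, NOT the actual Krea loss: finetune a student rectified
# flow model so its velocity matches the classifier-free guided velocity of a
# frozen teacher. The teacher/student call signatures are assumptions.
import torch

def cfg_distill_loss(student, teacher, x0, text_emb, null_emb, guidance=3.5):
    # Rectified-flow interpolation between data and noise: x_t = (1-t)*x0 + t*eps
    t = torch.rand(x0.shape[0], device=x0.device).view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = (1 - t) * x0 + t * noise

    with torch.no_grad():
        # Teacher's CFG-guided velocity: v_uncond + g * (v_cond - v_uncond)
        v_cond = teacher(x_t, t, text_emb)
        v_uncond = teacher(x_t, t, null_emb)
        v_target = v_uncond + guidance * (v_cond - v_uncond)

    # Student regresses the guided velocity directly (simple MSE here)
    v_pred = student(x_t, t, text_emb)
    return torch.mean((v_pred - v_target) ** 2)
```

Is the real loss something along these lines, or does it work in a different space (e.g. on samples or scores rather than velocities)?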