Hacker News

Show HN: Text-to-video model from scratch (2 brothers, 2 years, 2B params)

78 points by schopra909 yesterday at 4:31 PM | 15 comments

Writeup (includes good/bad sample generations): https://www.linum.ai/field-notes/launch-linum-v2

We're Sahil and Manu, two brothers who spent the last 2 years training text-to-video models from scratch. Today we're releasing them under Apache 2.0.

These are 2B param models capable of generating 2-5 seconds of footage at either 360p or 720p. In terms of model size, the closest comparison is Alibaba's Wan 2.1 1.3B. From our testing, we get significantly better motion capture and aesthetics.

We're not claiming to have reached the frontier. For us, this is a stepping stone towards SOTA - proof we can train these models end-to-end ourselves.

Why train a model from scratch?

We shipped our first model in January 2024 (pre-Sora) as a 180p, 1-second GIF bot, bootstrapped off Stable Diffusion XL. Image VAEs don't understand temporal coherence, and without the original training data, you can't smoothly transition between image and video distributions. At some point you're better off starting over.
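
To make the mismatch concrete, here's a minimal sketch of frame-wise image-VAE encoding versus a temporal VAE. The `image_vae` and `video_vae` modules and all shapes are illustrative assumptions, not our actual code:

    import torch

    # Illustrative clip: [batch, channels, frames, height, width]
    video = torch.randn(1, 3, 16, 360, 640)

    # Image VAE: each frame is encoded independently, so the latent for
    # frame t carries no information about frame t-1; temporal coherence
    # has to be learned entirely by the generator on top of it.
    frame_latents = torch.stack(
        [image_vae.encode(video[:, :, t]) for t in range(video.shape[2])],
        dim=2,
    )

    # Temporal (3D) VAE: the encoder also convolves across time, so the
    # latent sequence is shorter and already captures motion between frames.
    video_latents = video_vae.encode(video)  # e.g. [B, c, T/4, H/8, W/8]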

For v2, we use T5 for text encoding, Wan 2.1 VAE for compression, and a DiT-variant backbone trained with flow matching. We built our own temporal VAE but Wan's was smaller with equivalent performance, so we used it to save on embedding costs. (We'll open-source our VAE shortly.)
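
For readers curious what "a DiT-variant trained with flow matching" amounts to in practice, here's a minimal sketch of a single rectified-flow training step. The `dit`, `vae`, and `t5` modules, and all shapes, are illustrative assumptions rather than our actual implementation:

    import torch
    import torch.nn.functional as F

    def training_step(dit, vae, t5, video, prompt_ids, optimizer):
        with torch.no_grad():
            # Frozen encoders: compress pixels and embed the prompt.
            latents = vae.encode(video)    # e.g. [B, c, t, h, w]
            text_emb = t5(prompt_ids)      # e.g. [B, L, d]

        noise = torch.randn_like(latents)
        # Per-example timestep in (0, 1).
        t = torch.rand(latents.shape[0], device=latents.device)
        t_ = t.view(-1, 1, 1, 1, 1)

        # Linear interpolation between data and noise (rectified-flow path).
        x_t = (1.0 - t_) * latents + t_ * noise
        # The flow-matching target is the constant velocity along that path.
        target = noise - latents

        pred = dit(x_t, t, text_emb)       # model predicts velocity
        loss = F.mse_loss(pred, target)

        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        return loss.item()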

The bulk of development time went into building curation pipelines that actually work (e.g., hand-labeling aesthetic properties and fine-tuning VLMs to filter at scale).
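
As a rough illustration of that kind of filter (a hypothetical `vlm.score_clip` call with made-up fields and thresholds, not our actual pipeline):

    from dataclasses import dataclass

    @dataclass
    class ClipScore:
        aesthetic: float   # 0-1, trained against hand-labeled examples
        motion: float      # 0-1, real motion vs. near-static footage
        watermark: bool    # drop clips with burned-in text or logos

    def keep(clip_path: str, vlm) -> bool:
        # Hypothetical fine-tuned VLM scoring a single clip.
        s: ClipScore = vlm.score_clip(clip_path)
        return s.aesthetic >= 0.6 and s.motion >= 0.4 and not s.watermark

    def curate(clip_paths, vlm):
        # Filter the candidate list down to the training set.
        return [p for p in clip_paths if keep(p, vlm)]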

What works: Cartoon/animated styles, food and nature scenes, simple character motion. What doesn't: Complex physics, fast motion (e.g., gymnastics, dancing), consistent text.

Why build this when Veo/Sora exist? Products are extensions of the underlying model's capabilities. If users want a feature the model doesn't support (character consistency, camera controls, editing, style mapping, etc.), you're stuck. To build the product we want, we need to update the model itself. That means owning the development process. It's a bet that will take time (and a lot of GPU compute) to pay off, but we think it's the right one.

What’s next?

- Post-training for physics/deformations
- Distillation for speed
- Audio capabilities
- Model scaling

We kept a “lab notebook” of all our experiments in Notion. Happy to answer questions about building a model from 0 → 1. Comments and feedback welcome!


Comments

taherchhabra today at 8:25 AM

I want to build my own video model, just for learning purposes. Is there any course that teaches this end to end?

convivialdingo today at 6:24 AM

That’s amazing effort - I am impressed.

Awesome to see more small teams making impressive leaps.

WhitneyLand yesterday at 11:22 PM

Great work. How many GPU hours to train?

popalchemist today at 5:49 AM

Incredibly impressive, dudes. Well done.

E-Reverance yesterday at 9:16 PM

Post it on r/StableDiffusion

throwaway314155 today at 4:55 AM

How much compute was ultimately required to get this done?

streamer45 yesterday at 5:13 PM

Rad! The Hugging Face link gives a 404 on my side though.
