Hey HN - we're Saksham and Ishan, and we’re building Cardboard (https://www.usecardboard.com). It lets you go from raw footage to an edited video by describing what you want in natural language. There’s a demo video at https://www.usecardboard.com/share/fUN2i9ft8B46, and you can try the product out at https://demo.usecardboard.com (no login required!)
People sit on mountains of raw assets - product walkthroughs, customer interviews, travel videos, screen recordings, changelogs, etc. - that could become testimonials, ads, vlogs, launch videos, etc.
Instead, they sit in cloud storage or on hard drives, because getting to a first cut takes hours: scrubbing through raw footage manually, arranging clips in the right sequence, syncing music, exporting, uploading to cloud storage to share, collecting feedback on WhatsApp/iMessage/Slack, and then redoing it all until everyone is happy.
We grew up together and have been friends for 15 years. Saksham creates content on socials with ~250K views/month and kept hitting the wall where editing took longer than creating. Ishan was producing launch videos for HackerRank's all-hands demo days and spent most of his time on cuts and sequencing rather than storytelling. We both felt that while tools like Premiere Pro and DaVinci are powerful, they have a steep learning curve and involve lots of manual labor.
So we built Cardboard. You tell it to "make a 60s recap from this raw footage" or "cut this into a 20s ad" or "beat-sync this to the music I just added" and it proposes a first draft on the timeline that you can refine further.
We built a custom hardware-accelerated renderer on WebCodecs/WebGL2: no server-side rendering, no plugins, everything runs client-side in your browser. Video understanding tasks go through a series of cloud VLMs and traditional ML models, and we use third-party foundation models for agent orchestration, with a model dropdown exposed to the end user.
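At its core, client-side playback means resolving, for each playhead position, which source frame each clip should show, then decoding it with WebCodecs and compositing with WebGL2. The resolution step can be sketched like this (the `Clip` shape and function name are our simplified illustration, not Cardboard's actual internals):

```typescript
// Map a timeline playhead position (seconds) to the source timestamp a
// clip should display. This is a simplified single-track model.
interface Clip {
  timelineStart: number; // where the clip begins on the timeline
  sourceIn: number;      // trim-in point inside the source file
  duration: number;      // clip length in seconds
}

function resolveSourceTime(clips: Clip[], playhead: number): number | null {
  for (const clip of clips) {
    const offset = playhead - clip.timelineStart;
    if (offset >= 0 && offset < clip.duration) {
      // Playhead falls inside this clip: offset into the trimmed source.
      return clip.sourceIn + offset;
    }
  }
  return null; // playhead is in a gap on the timeline
}
```

The resolved timestamp would then drive a `VideoDecoder` seek/decode and a WebGL2 texture upload per frame.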
We've shipped 13 releases since November (https://www.usecardboard.com/changelog). The editor handles multi-track timelines with keyframe animations, shot detection, beat sync via percussion detection, voiceover generation, voice cloning, background removal, multilingual captions that are spatially aware of subjects in frame, and Premiere Pro/DaVinci/FCP XML exports so you can move projects into your existing tools if you want.
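Beat sync, for example, reduces to snapping proposed cut points onto beat timestamps produced by an upstream percussion detector. A minimal sketch of the snapping step (the function name and tolerance value are our illustration, not Cardboard's implementation):

```typescript
// Snap each proposed cut point (seconds) to the nearest detected beat,
// but only when a beat lies within `tolerance` seconds of the cut;
// otherwise leave the cut where the editor placed it.
function snapToBeats(
  cuts: number[],
  beats: number[],
  tolerance = 0.15,
): number[] {
  return cuts.map((cut) => {
    let best = cut;
    let bestDist = Infinity;
    for (const beat of beats) {
      const d = Math.abs(beat - cut);
      if (d < bestDist) {
        bestDist = d;
        best = beat;
      }
    }
    return bestDist <= tolerance ? best : cut;
  });
}
```

For example, with beats at 0s/1s/2s and a 0.25s tolerance, a cut at 0.9s snaps to 1s, while a cut at 5s is left alone.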
Where we're headed next: real-time collaboration (video git) to avoid inefficient feedback loops, and eventually a prediction engine that learns your editing patterns and suggests the next low-entropy actions - similar to how Cursor's tab completion works, but for timeline actions.
We believe that video creation tools today are stuck where developer tools were in the early 2000s: local-first, zero collaboration, and really slow feedback loops.
Here are some videos that we made with Cardboard:

- https://www.usecardboard.com/share/YYsstWeWE9KI
- https://www.usecardboard.com/share/nyT9oj93sm1e
- https://www.usecardboard.com/share/xK9mP2vR7nQ4
We would love to hear your thoughts/feedback.
We'll be in the comments all day :)
Love this idea! I built something similar last year https://www.usecrossfade.com and know how difficult this is to get right - I'm rooting for you guys!
Really impressive work, guys! It seems like YC has funded a few companies attacking this, but I think you all might have the best approach so far. Behind the scenes, is the agent just editing using text/annotated timelines? I feel like the move is probably text for the rough cut/narrative, then a VLM for digesting the initial rough cut, then adding B-roll and fixing timing issues. Feel free to steal my FCP XML generator. https://github.com/barefootford/buttercut
Funnily enough, this was an issue for me too, so I built an open-source AI video editor - https://github.com/waylonkenning/aidirector
Cardboard looks really well polished, well done!
This is amazing (I'll add you on LinkedIn).
I recently started making videos for a loved one who lives far away. I started using CapCut, and this is exactly the kind of thing that made me think, "I wish it did that."
I'll definitely try it out. Congrats!
This seems like a great idea. Tools like video editors (and CAD) often impose a big learning curve - there is a big differential between "I want to do X" and actually knowing all the right buttons to press to do X. Good luck.
Excited to see AI integrations in more non-text applications (coding, spreadsheets, proofreading, etc.). As someone who only occasionally needs to edit videos for product/feature reels, I'd happily ask an AI to "sync the narration to the video, cut away irrelevant footage, and add transitions". The convenience of automating simple, repeatable tasks in creative software via AI gets overshadowed a lot by the agentic-coding discussion. I can only imagine the nightmare it would be for a tool like Premiere to integrate effective AI features, so new tools designed with AI in mind really feel like a necessity.
Great website and good luck!
Impressive UI. I assume you must be doing some kind of RAG plus audio/video transcription on all the media. What RAG architecture did you go with?
Very cool idea. If your product is about video, please fix your video players. I cannot even seek on my touch screen.
Who do you think your target customer is? Curious to know if you think the money is in short form, traditional YouTube videos, or even movie studios one day.
Great website btw. The onboarding was very pleasing
The 10GB file-size limit is going to be restrictive for anyone shooting ProRes or RAW.
We use Cardboard at Vulnetic and it is an incredible product. The founders are easily accessible, and it has definitely made it easier to film feature update videos. I can't recommend them enough.
> We built a custom hardware-accelerated renderer on WebCodecs / WebGL2, there’s no server-side rendering, no plugins, everything runs in your browser (client-side).
Aight imma head out. Holy moly.
$60...eh
Wow! Congrats on the launch, guys. Client-side rendering is incredible, really. I saw your product somewhere and have had it as an open tab in my Chrome for ~2 weeks :D
I also saw another YC company, Mosaic, doing something similar, but your approach of chat-based editing is a lot closer to what I'm building. Shameless plug: I'm also working on a chat-based media processor. https://chatoctopus.com
But you guys are way ahead! will be looking at you for inspiration.