Hacker News

dagmx · today at 3:11 PM

I don’t understand the spread of thoughts in your post.

The reason to create image sequences is not because you need to send it to other apps, it’s because you preserve quality and safeguard from crashes.

A crash mid video write out can corrupt a lengthy render. With image sequences you only lose the current frame.
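As a rough sketch of that failure mode (using ffmpeg with a synthetic test source standing in for a real render, so the filenames here are hypothetical): each frame of a still sequence is its own file on disk, so an interrupted job only loses the frame being written.

```shell
# Hedged sketch: synthesize a short clip (stand-in for a real render),
# then write it out as a numbered PNG sequence instead of one video file.
ffmpeg -y -f lavfi -i testsrc=duration=2:rate=24 -c:v mpeg4 master.mov

# Each frame lands on disk as its own file. If the job dies at frame 30,
# frames 0001-0029 are still valid; only the in-flight frame is lost.
ffmpeg -y -i master.mov -start_number 1 frame_%04d.png
```

By contrast, a single-file container that writes its index at the end can leave the whole render unreadable after a crash.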

People aren’t going to stop using image sequences even if they stayed in the same app.

And I’m not sure the claim that “this goes beyond” what Apple has applies, because they do have hardware support for decoding several compressed codecs (and I’ll note that ProRes is itself compressed). Other than streaming, when would you need that kind of encode performance? And what other codecs do you expect will suddenly pop up once ASICs aren’t required?

Also, how does this remove degradation when moving between apps? Are you envisioning that this lets Blender stream to an NLE without first writing a file to disk?


Replies

pandaforce · today at 3:43 PM

> A crash mid video write out can corrupt a lengthy render. With image sequences you only lose the current frame.

You wouldn't put FFV1 in MP4, the only container incompetent enough for that kind of corruption.

Apple has an interest in discouraging codecs it collects no fees from, and Apple doesn't have a lossless codec of its own. So it doesn't offer hardware acceleration for lossless compressed video.

The idea is that when you're working as part of a team and get handed a CG render, you can avoid receiving either a huge .tar or .zip full of TIFFs that you then have to unpack, or a ProRes file that loses quality, particularly in a linear colorspace like ACEScg.
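A hedged sketch of that hand-off with ffmpeg (the filenames and synthetic source are stand-ins, not a real pipeline): an image sequence can be packed into a single lossless FFV1 stream in a Matroska container, so the recipient gets one file with no generational loss instead of a tarball of TIFFs.

```shell
# Hedged sketch: generate a stand-in TIFF sequence, then pack it into
# lossless FFV1 in Matroska rather than shipping a tarball of TIFFs
# or a lossy ProRes.
ffmpeg -y -f lavfi -i testsrc=duration=1:rate=24 -start_number 1 in_%04d.tiff
ffmpeg -y -framerate 24 -i in_%04d.tiff -c:v ffv1 -level 3 render.mkv

# The recipient decodes back to stills with no generational loss:
ffmpeg -y -i render.mkv out_%04d.tiff
```

FFV1 is mathematically lossless, and Matroska tolerates truncation far more gracefully than MP4, which connects back to the crash-resilience point above.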
