Hacker News

It Took Me 30 Years to Solve This VFX Problem – Green Screen Problem [video]

238 points by yincrash, last Friday at 7:14 PM · 96 comments

Comments

Springtime · yesterday at 8:06 PM

In an earlier video they made a couple of years back about Disney's sodium-vapor technique, Paul Debevec suggested he was considering creating a dataset on a similar premise: filming enough perfectly masked references to train models to achieve better keying. So it was interesting to see Corridor tackle this with synthetic data instead.

diacritical · yesterday at 8:16 PM

From ~04:10 to 05:00 they talk about sodium-vapor lights and how Disney has exclusive rights to use them. From what I've read, the knowledge of how to make them is a trade secret, so it isn't patented. It seems weird that it would be hard to recreate something from the 1950s.

I also wonder how many hours were wasted by people who had to use inferior technology because Disney kept it secret. Cutting animals and objects out of the background one frame at a time seems mind-numbingly boring.

jayd16 · yesterday at 11:22 PM

As for alternatives, I wonder if anyone has tried a screen that cycles through colors in a known sequence. With a modulating-color screen it might actually be easier to separate the subject, because you get around the "green shirt over green screen" problem. You might even be able to use temporal sampling to correct the light the screen casts on the subject, since you'd have a full spectrum of response.

I could also imagine using polarized light as the backdrop as well.
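To make the color-cycling idea concrete, here is a toy sketch (hypothetical, not a tested pipeline): the backdrop shows a known color sequence, one color per frame, and a pixel is classified as backdrop when it tracks that sequence. The sequence, threshold, and test image are all made up for illustration.

```python
import numpy as np

# Known backdrop cycle: green, blue, red, one color per frame.
sequence = np.array([[0, 255, 0], [0, 0, 255], [255, 0, 0]], dtype=float)

def background_mask(frames, sequence, threshold=60.0):
    """frames: (T, H, W, 3) float array, one frame per backdrop color.
    Returns a boolean (H, W) mask, True where the pixel is backdrop."""
    # Per-frame distance from each pixel to the color it should show
    # if it were pure backdrop, averaged over the cycle. A subject pixel
    # stays roughly constant and accumulates a large error.
    diff = frames - sequence[:, None, None, :]
    err = np.linalg.norm(diff, axis=-1).mean(axis=0)
    return err < threshold

# 4x4 test image: left two columns are backdrop, the rest a gray subject.
frames = np.full((3, 4, 4, 3), 128.0)
for t in range(3):
    frames[t, :, :2, :] = sequence[t]

mask = background_mask(frames, sequence)
```

A gray "shirt" pixel never matches the cycle, so it survives the key even if it happens to match one of the backdrop colors in a single frame.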

lynnharry · today at 6:19 AM

It's fascinating to see the bridge between academic research and industry application here. While image matting is a massive research area in computer vision, academia often focuses on perfect benchmarks. Corridor Crew effectively took that foundational research, like neural unmixing and synthetic training data, and adapted it to the messy reality of production: tracking markers, motion blur, and so on. It's a great example of using open-source deep learning resources to build a tool that prioritizes workflow over a raw accuracy score.
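The math that matting research solves is the compositing equation, I = alpha*F + (1 - alpha)*B. As a toy illustration only (real neural matting estimates F and alpha jointly; the flat backdrop and single foreground color here are assumptions), alpha falls out per pixel when B and F are known:

```python
import numpy as np

def estimate_alpha(image, backdrop, fg_color):
    """image, backdrop: (H, W, 3) floats in [0, 1]; fg_color: (3,).
    Solves I = alpha*F + (1-alpha)*B by projecting (I - B) onto (F - B),
    then clips the result to the valid alpha range [0, 1]."""
    num = ((image - backdrop) * (fg_color - backdrop)).sum(axis=-1)
    den = ((fg_color - backdrop) ** 2).sum(axis=-1)
    return np.clip(num / np.maximum(den, 1e-8), 0.0, 1.0)

backdrop = np.full((2, 2, 3), [0.0, 1.0, 0.0])  # flat green screen
fg = np.array([0.5, 0.5, 0.5])                  # assumed gray subject color
image = 0.5 * fg + 0.5 * backdrop               # 50%-transparent pixels
alpha = estimate_alpha(image, backdrop, fg)
```

The hard part in production is exactly what this toy skips: F and B are unknown and spatially varying, which is where the learned models come in.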

swframe2 · today at 1:53 AM

The model in this repo seems pretty good: https://github.com/xuebinqin/DIS

vsviridov · yesterday at 8:07 PM

The community has managed to drastically lower hardware requirements, but so far I think only Nvidia cards are supported, so as an AMD owner I'm still missing out :(

superjan · yesterday at 8:02 PM

Watched this a few days ago. The video is light on technical details, except maybe that they used CGI to generate training data.

dylan604 · yesterday at 9:50 PM

The sad thing about this is that the problems encountered in post come from the production team saying "fix it in post" during the shoot. I've been on set for green-screen shoots where the lighting was not done properly. I watched the gaffer walk across the set taking readings from his meter before declaring the lighting good. I flipped on the waveform and told him it was not even (which never goes down well when the camera department tells the gaffer it's not right). He put up an argument, went back, and took measurements again before repeating that it was good. I flipped the screen around and showed him where it was obviously not even. After a third set of meter readings he started adjusting lights. Once the footage was in post, the FX team commented on how easy the keys were because of the even lighting.

The problem is that the vast majority of people on set have no clue what goes on in post. To the point that, when the budget is big enough, a post supervisor is present on production days to give input so that "fixing it in post" is minimized. When there is no budget, you see situations just like the first 30 seconds of TFA's video: a single lamp lighting the background, so you can easily see the light falling off, and shadows from wrinkles because the screen was pulled out of the bag 10 minutes before shooting. People just don't realize how much light a green screen takes. They also fail to allow enough space to pull the talent far enough off the wall to avoid green reflecting back onto the talent's skin.

TL;DR They solved something to make post less expensive because they cut corners during production.
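The waveform check described above can be sketched roughly like this: quantify how evenly a screen region is lit by the spread of its luma values. The 5% threshold and test images are assumptions for illustration, not an industry standard.

```python
import numpy as np

def luma(rgb):
    # Rec. 709 luma weights.
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

def is_evenly_lit(screen_region, max_spread=0.05):
    """screen_region: (H, W, 3) float array in [0, 1] covering the screen.
    Returns True when max-minus-min luma stays within max_spread."""
    y = luma(screen_region)
    return float(y.max() - y.min()) <= max_spread

even = np.full((10, 10, 3), [0.1, 0.6, 0.1])   # uniformly lit green
uneven = even.copy()
uneven[:, 5:] *= 0.7                           # light falls off over half
```

A spot meter averages over its measurement angle, which is why it can read "good" while the waveform, which sees every pixel, shows the falloff.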

comex · yesterday at 8:17 PM

See also this video comparing Corridor Key to traditional keyers:

https://www.youtube.com/watch?v=abNygtFqYR8

qingcharles · today at 2:52 AM

I use the Adobe version of this in Photoshop every day, and I assumed Adobe solved it the same way, except using professionals to cut subjects out of backgrounds and feeding both versions into their AI.

Since they added it a year or so ago it has been game-changing. I'm cutting out portraits every day, and having a magical tool that extracts the subject, perfect hair and all, with a single click is sci-fi.

Here's a demo of Photoshop's tool:

https://www.youtube.com/watch?v=SNVJN6PKeGQ

(The other magical Photoshop tool is the one that removes reflections from windows, which is even more insane when you reverse it and tell it you want only the reflection and not what's on the other side of the glass.)

IshKebab · yesterday at 8:09 PM

Pretty impressive results! Seems like someone has even made a GUI for it: https://github.com/edenaion/EZ-CorridorKey

Still Python, unfortunately.

amelius · yesterday at 10:34 PM

Is it a coincidence that the result is stable between subsequent frames?

ralusek · yesterday at 8:20 PM

I'm a software engineer who, like the vast majority of you, uses AI agents in my workflow every day. That said, I have to admit it feels a little weird to hear someone who does not write code say they built something, without even mentioning that they had an agent build it (unless I missed that).

amelius · yesterday at 9:18 PM

There's still a bug: the glass of water does not distort the checker pattern in the background at 24:12.

MrVitaliy · yesterday at 11:34 PM

Has anyone tried using lidar and just keying by measured distance to the object?
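The core of the depth-keying idea is a one-liner (a hypothetical sketch; in practice lidar resolution and edge noise make hair and motion blur hard, which is part of why matting is still needed): given a per-pixel depth map, keep everything nearer than a cutoff distance.

```python
import numpy as np

def depth_key(depth_m, cutoff_m=3.0):
    """depth_m: (H, W) array of distances in meters.
    Returns a boolean matte, True where the pixel is foreground."""
    return depth_m < cutoff_m

depth = np.full((4, 4), 5.0)   # background wall at 5 m
depth[1:3, 1:3] = 1.5          # subject at 1.5 m
matte = depth_key(depth)
```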

Computer0 · yesterday at 8:08 PM

Looking forward to trying it out; 8 GB of VRAM or unified memory required!
