Hacker News

qingcharles · today at 1:23 AM · 3 replies

I started my career writing software 3D renderers before switching to Direct3D in the late '90s. What I wonder is whether all of this is going to get completely washed away and made redundant by the incoming flood of hallucinated game rendering.

Will it be possible to hallucinate a game frame at a speed comparable to rendering it with meshes and textures?

We're already seeing the hybrid version of this, where you render a lower-res mesh and hallucinate the upscaled, more detailed, more realistic-looking skin over the top.
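Roughly the shape of that hybrid loop, as a toy sketch; the enhance() stage here is plain nearest-neighbour upscaling standing in for a DLSS-style learned model, and the names and resolutions are made up:

```cpp
#include <cstdio>
#include <vector>

// Toy sketch of the hybrid pipeline: rasterize cheaply at low resolution,
// then hand the frame to an "enhancer" stage. Here the enhancer is plain
// nearest-neighbour upscaling; in the real thing it would be a learned
// model that invents the extra detail.
struct Image {
    int w, h;
    std::vector<float> px; // grayscale, for brevity
};

Image render_low_res(int w, int h) {
    Image img{w, h, std::vector<float>(w * h)};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            img.px[y * w + x] = (x + y) % 2 ? 1.0f : 0.0f; // stand-in "scene"
    return img;
}

Image enhance(const Image& in, int w, int h) {
    Image out{w, h, std::vector<float>(w * h)};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            out.px[y * w + x] = in.px[(y * in.h / h) * in.w + (x * in.w / w)];
    return out;
}

int main() {
    Image low  = render_low_res(8, 8);   // cheap traditional pass
    Image high = enhance(low, 32, 32);   // the "model" fills in the rest
    std::printf("upscaled %dx%d -> %dx%d\n", low.w, low.h, high.w, high.h);
}
```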

I wouldn't want to be in the game engine business right now :/


Replies

jsheard · today at 1:36 AM

You can't really do a whole lot of inference in 16 ms on consumer hardware. That's not to say inference isn't useful in realtime graphics; DLSS has proven itself well enough. But that's a very small model laser-targeted at one specific problem, and even it takes a few milliseconds to do its thing. Fitting behemoth generative models into those time constraints seems like an uphill battle.
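To put rough numbers on that budget (the per-pass costs below are assumptions for illustration, not benchmarks):

```cpp
#include <cstdio>

// Back-of-the-envelope frame budget at 60 fps. The raster and upscaler
// costs are assumed figures, just to make the arithmetic concrete.
int main() {
    const double frame_budget_ms = 1000.0 / 60.0; // ~16.7 ms per frame
    const double raster_ms       = 10.0;          // assumed traditional render work
    const double upscaler_ms     = 2.5;           // assumed DLSS-style pass
    const double leftover_ms     = frame_budget_ms - raster_ms - upscaler_ms;
    std::printf("budget %.1f ms, left over: %.1f ms\n",
                frame_budget_ms, leftover_ms);
    // A large generative model that needs hundreds of milliseconds per image
    // simply doesn't fit in that leftover.
}
```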

8n4vidtmkvmk · today at 1:41 AM

I just assumed hallucinated rendering was a stepping stone to training AGIs or something. No one is actually seriously trying to build games that way, are they? Seems horribly inefficient at best, and incoherent at worst.

webdevver · today at 1:13 PM

Reminds me of this remark Carmack made about hidden surface removal:

https://www.youtube.com/watch?v=P6UKhR0T6cs&t=2315s

> "research from the 70s especially, there was tons of work going on on hidden surface removal, these clever different algorithmic ways - today we just kill it with a depth buffer. We just throw megabytes and megabytes of memory and the problem gets solved much much easier."

Of course, "megabytes" of memory was unthinkable in the 70s; for us, it's real-time frame inferencing that's unthinkable. I can't help but draw the parallel between our current-day "clever algorithmic ways" of drawing pixels to the screen and those old hidden-surface tricks.
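For anyone who hasn't seen it, the depth-buffer approach he's describing really is about this small; a toy per-fragment sketch, not real rasterizer code:

```cpp
#include <cstdio>
#include <limits>
#include <vector>

// "Kill it with a depth buffer": keep a depth value per pixel and let one
// comparison decide which surface survives, instead of running any clever
// hidden-surface-removal algorithm.
struct DepthBuffer {
    int w, h;
    std::vector<float> depth;
    std::vector<unsigned> color;
    DepthBuffer(int w_, int h_)
        : w(w_), h(h_),
          depth(w_ * h_, std::numeric_limits<float>::infinity()),
          color(w_ * h_, 0) {}

    // Write a fragment only if it is nearer than what is already stored.
    void write(int x, int y, float z, unsigned rgba) {
        int i = y * w + x;
        if (z < depth[i]) { // the entire "algorithm": one compare per fragment
            depth[i] = z;
            color[i] = rgba;
        }
    }
};

int main() {
    DepthBuffer db(4, 4);
    db.write(1, 1, 0.8f, 0xff0000ffu); // far fragment drawn first
    db.write(1, 1, 0.3f, 0x00ff00ffu); // nearer fragment overwrites it
    db.write(1, 1, 0.9f, 0x0000ffffu); // even farther fragment is rejected
    std::printf("pixel (1,1): color=%08x depth=%.2f\n",
                db.color[1 * 4 + 1], db.depth[1 * 4 + 1]);
}
```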

I definitely agree with the take that, in the grand scheme of things, all this pixel-rasterizing business will be a transient moment, washed away by a much simpler petaflop/exaflop local TPU that runs at 60 W under load and simply 'dreams' frames and textures for you.
