Don't you think a sufficiently advanced model will end up emulating what normal 3D engines already do mathematically? At least for the rendering part, I don't see how you can "compress" the meaning behind light interaction without ending up with a somewhat latent representation of the rendering equation.
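(For reference, and not quoted from the thread: the rendering equation being alluded to, in Kajiya's standard form.)

```latex
% Kajiya's rendering equation: outgoing radiance = emitted radiance plus
% the hemisphere integral of BRDF-weighted incoming radiance.
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \,
    (\omega_i \cdot n) \, d\omega_i
```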
If I am not mistaken, we are already past that. The pixel, or token, gets probability-predicted in real time. The complete, shaded pixel, if you will, gets computed ‘at once’ instead of through layers of simulation. That’s the LLM’s core mechanism.
If the mechanism allows for predicting what the next pixel will look like, lighting equation included, then there is no longer any need for a light simulation.
Would also like to know how Genie works. Maybe some parts really are already simulated, in a hybrid approach.
The model has multiple layers which are basically a giant non-linear equation to predict the final shaded pixel; I don't see how that's inherently different from a shader outputting a pixel "at once".
Correct me if I'm wrong, but I don't see how you can simulate a PBR pixel without doing ANY PBR computation whatsoever.
For example, one could imagine a very simple program computing sin(x), or a giant multi-layered model that does the same; wouldn't the latter just be a latent, more-or-less compressed version of sin(x)?
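A minimal sketch of that point, assuming nothing beyond numpy (the layer sizes, learning rate, and training range are arbitrary illustrative choices, not anything from the thread): a tiny MLP fit to samples of sin(x) ends up encoding the function in its weights, without ever calling sin at inference time.

```python
# Hypothetical sketch: a one-hidden-layer MLP as a "latent, compressed
# version of sin(x)". Hyperparameters are arbitrary, not from the thread.
import numpy as np

rng = np.random.default_rng(0)

# Training data: x in [-pi, pi], target sin(x).
x = rng.uniform(-np.pi, np.pi, size=(1024, 1))
y = np.sin(x)

# 64 tanh units -- "a giant non-linear equation" in miniature.
W1 = rng.normal(0, 0.5, (1, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (64, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                      # gradient of 0.5*MSE w.r.t. pred

    # Backward pass (plain gradient descent on mean squared error).
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)      # tanh derivative
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# The network never calls sin, yet reproduces it within the training
# range: true values in the left column, model output in the right.
test = np.linspace(-np.pi, np.pi, 5).reshape(-1, 1)
print(np.hstack([np.sin(test), np.tanh(test @ W1 + b1) @ W2 + b2]))
```

In that sense the trained weights are exactly the "latent, more-or-less compressed version" of sin(x) the parent describes: the function lives implicitly in W1/W2, the same way a world model's weights would latently encode light transport rather than running it as an explicit simulation.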
In this case I assume it would be taken from motion/pixels of actual trees blowing in the wind. Which does raise the challenge: how would dust blow on a hypothetical game-world alien planet?