
That's the only way to get reliably usable output.

There's a lot of "80% there but not quite" in the current version, which makes it more of a novelty than a useful content generator.

The problem with moving to 3D is that there are almost no 3D data sources that combine textures, poses (where relevant), lighting, 3D geometry and (ideally) physics.

They can be inferred to some extent from 2D sources. But not reliably.

Humans operate effortlessly in 3D and creative humans have no issues with using 3D perceptions creatively.

But as far as most content is concerned it's a 2D world. Which is why AI art bots know the texture of everything and the geometry of nothing.

AI generation is going to be stuck at nearly-but-not-quite until that changes.



While not fully there, there are a lot of freely available 3D models that could be used as a starting point. I'd love a DALL·E 2 for 3D model generation, even if no texture, lighting, or physics was there.
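The "geometry only" point can be made concrete: a Wavefront OBJ file, the common interchange format for those freely available models, stores vertex positions and faces but says nothing about lighting or physics. A minimal sketch of reading one (the sample mesh here is made up for illustration):

```python
# A toy Wavefront OBJ mesh: geometry only -- no lighting, materials,
# or physics, which is exactly the data gap discussed above.
# (Sample data is hypothetical.)
obj_text = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""

def parse_obj(text):
    """Parse vertex positions and triangle faces from OBJ text."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":    # vertex position: x y z
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":  # face: 1-based vertex indices (may carry /uv/normal)
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

vertices, faces = parse_obj(obj_text)
print(vertices)  # [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(faces)     # [(0, 1, 2)]
```

Everything a generator could learn from such a file is the mesh itself; texture, lighting, and physical behavior would still have to come from somewhere else.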



