Actually, I think this is a really practical and interesting way to apply text-to-image and image-to-3D models to user-generated games, as an alternative to the heavy frame-by-frame "uber world model" approach, where every interaction and every rendered frame passes through the model and the scale of the world is capped by what that model can manage at once.
One can imagine different ways to integrate this kind of decomposed generation with existing game engines, to parallelize it, or to generate assets lazily. It's also very accessible to programmers like me who don't have the resources to train and host giant world models but are interested in AI world generation.
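To make the lazy-generation idea concrete, here's a minimal sketch of how it might look: assets are generated on first request and cached, so the engine only pays the model cost for things the player actually encounters. Everything here is hypothetical — `generate_asset_3d` stands in for whatever text-to-image / image-to-3D pipeline you'd actually call; it is not a real API.

```python
from functools import lru_cache

def generate_asset_3d(prompt: str) -> bytes:
    # Placeholder for the expensive model call; a real version would
    # invoke a text-to-image / image-to-3D pipeline and return mesh data.
    return f"mesh-for:{prompt}".encode()

@lru_cache(maxsize=256)
def get_asset(prompt: str) -> bytes:
    # First request for a prompt pays the generation cost; later
    # requests (e.g. the player revisiting an area) hit the cache.
    return generate_asset_3d(prompt)

# The engine only asks for assets entering the visible scene, so
# generation is deferred until each asset is actually needed.
visible = ["oak tree", "stone well", "oak tree"]
meshes = [get_asset(p) for p in visible]
```

The same shape also parallelizes naturally: because each asset is an independent prompt, a real implementation could farm the cache misses out to a pool of workers instead of generating them inline.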
I assume something like this will end up in Unity, Unreal, and other engines within a matter of months.
And people will say we already have enough crappy Unity-asset games in the near-monopoly Steam store, but I think that misses the point. It's about opening up game creation and world generation as a creative outlet or tool for more people, not an attempt to create more refined games.