Always important to bear in mind that the examples they show are likely the best examples they were able to produce.
Many times over the past few years a new AI release has "wowed" me, but none of them resulted in any sudden overnight changes to the world as we know it.
VFX artists: You can sleep well tonight, just keep an eye on things!
Yes, and like pretty much every AI release I've seen, even these cherry-picked examples mostly do not quite match the given prompt. The outputs are genuinely incredible, but if you imagine actually trying to use this for work, it would be very frustrating. A few examples from this page:
Pumpkin patch - Not sitting on the grass, not wearing a scarf, no rows of pumpkins the way most people would imagine.
Sloth - that's not really a tropical drink, and we can't see enough of the background to call it a "tropical world".
Fire spinner - not wearing a green cloth around his waist
Ghost - Not facing the mirror, obviously not reflected the way the prompter intended. No old beams, no cloth-covered furniture, not what I would call "cool and natural light". This is probably the most impressively realistic-looking example, but it almost certainly doesn't come close to matching what the prompter was imagining.
Monkey - boat doesn't have a rudder, no trees or lush greenery
Science lab - no rainbow wallpaper
This seems like nitpicking, and again I can't overstate how unbelievable the technology is, but the process of making any kind of video or movie involves translating a very specific vision from your brain to reality. I can't think of many applications where "anything that looks good and vaguely matches the assignment" is the goal. I guess stock footage videographers should be concerned.
This all matches my experience using any kind of AI tool. Once I get past my astonishment at the quality of the results, I find it's almost always impossible to get the output I'm looking for. The details matter, and in most cases they are the only thing that matters.
The thing that immediately stood out to me in the ghost example was the "wobbly geometry" of the ghost's face, which didn't appear physically coupled to the sheet. That, along with the way the fruit in the sloth's drink magically rested on top of the glass without being wedged onto the rim as it would need to be, gave me some of the more immediate "this isn't real" moments.
The ghost is insanely impressive; it's the example that gave me the "wow" effect. The cloth physics look stunning. I never thought we would reach such a level of temporal coherence so fast.
I think those types of visual glitches can probably be fixed with more or better training, and I have no doubt that future versions of this type of system will produce outputs that are indistinguishable from real videos.
But better training can't fix the more general problem that I'm describing. Perfect-looking videos aren't useful if you can't get the system to follow your instructions.
I am not trying to be negative, but the reality is that ML/LLMs have already eliminated entire industries. Medical transcription, for example, is essentially gone.
The thread you linked doesn't seem to align with your claims at all, though. The majority of comments do not claim to be using any GenAI elements.
As someone who's worked in the industry and is still quite involved, very few studios are using it, because of the lack of direction it can take and the copyright quagmire. There are lots of uses of ML in VFX, but those aren't necessarily GenAI.
GenAI hasn't had an effect on the industry yet, and it's unlikely it will for a while longer. Bad business moves by clients are the bigger drain, including refusing to negotiate with unions and leaning on a marked decline in streaming to cover lost profits.
I don't see it as that much of a problem. It's like washing machines taking away people's jobs washing clothes: what are they going to do with their time now? Maybe something more productive.
We really have a problem once there are no more jobs left for us humans, and only the people who own capital (stocks, real estate etc) will be able to earn money from dividends.
The amount required to pay rent on their continued survival, which in a capitalist society, and excluding members of the capitalist class, will never be zero.
Tbf, the biggest private infrastructure project in the history of humanity is now underway (Microsoft GPU centers), the fastest app to reach #1 on the App Store was released (ChatGPT), and it’s dominating online discourse. Many companies have used LLMs to justify layoffs, and /r/writers and many, many fanart subreddits already talk of significant changes to their niches. All of this was basically at 0 in 2022, and 100 by early 2023. It’s not normal.
Everyone should sleep well tonight, but only because we’ll look out for each other and fight for just distribution of resources, not because the current job market is stable. IMO :)
On the contrary, I don't think there's anything special about their examples. They probably represent most outputs well. Think of image generation and the insane stuff people can produce with it; there's no "oh yeah, this is just cherry-picked" there.