Wow, I'm an idiot. It took me how many days (since my "Youtube borked?" thread, not this one) to figure out I could use a post preview to turn a link into an embed and then just edit it out again? Derp.
Anyway, yes, the video was a new one - lots more characters featured, and it didn't have the hand-colorized-B&W-film look the other managed in some clips at all. I generally concur with Labster's assessment of the details in their own right -- but the AI is definitely getting there. If it could be applied as part of a render pipeline to give 3D sets and animated models a final glow-up, it might even be ready for early-stage production, like the progression of 3D effects in movies ranging from Tron through Terminator 2, Jurassic Park, and Species, to the two Final Fantasy movies (Spirits Within & FF7: Advent Children, in case I've forgotten others) -- though looking at that list, it seems more like we're halfway through rather than at the earliest stages. At least up to Species, and with any given frame looking as good as the best from Spirits Within save for facial expressions. Perhaps those could be captured via video of the voice actors, whether directly recording the lines or as part of the training data for AI voices.
At this pace, it might only be another five years before you can just drop a yarhar batch of whatever show and a fanfic script into the AI blender and pour out something of at least crunch-time-animated-by-a-sub-sub-sub-contractor-studio-in-North-Korea quality video on command... though I fully expect industry groups like the RIAA and MPAA to get generative AI banned outside some kind of licensing scheme that effectively means they can use it but not you, specifically to prevent that, while actor and off-camera crew guilds try to put the genie back in the bottle entirely, with all the success that phrase usually implies.
--
‎noli esse culus