No clue if anyone's commented on this one before, but Super Panavision 70 Sailor Moon (done by AI).
Sailor Moon in the 50's...
Something on YouTube's pages (but not embedded videos, so it's probably another ploy in the adblock war) is still making my browser allergic so I didn't check your link, but at least something like that was posted in one of the video madness threads recently. Does this one have a live-action-looking Usagi with AI blank-stare-face being character appropriate for once by giving her maximum empty-head energy in a classroom near the end?
--
noli esse culus
Some of the visuals here are pretty good. Some, like Chibiusa, are absolutely awful. It's kinda funny because the training data is so different. Queen Beryl looks like a guest on Bewitched, while the Shitennou look like a K-Pop band. Makoto seems to appear in the early Showa era for some reason which is just classy -- but then gets classical Greek architecture when showing off the lightning powers. Rei has a pretty good Japanese look with her arrow, but then starts looking SE Asian when she unlocks her fire powers.
Impressions of the Moon Kingdom ruins range from "I made this model in the 1950s for 480p B&W screens" to "obviously trained on official art". It's simultaneously impressively pretty and lacking artistic coherence.
"Kitto daijoubu da yo." - Sakura Kinomoto
Wow, I'm an idiot. It took me how many days (since my "Youtube borked?" thread, not this one) to figure out I could use a post preview to turn a link into an embed and then just edit it out again? Derp.
Anyway, yes, the video was a new one - lots more characters featured, and it didn't have the hand-colorized-B&W-film look the other managed in some clips at all. I generally concur with Labster's assessment of the details in their own right -- but the AI is definitely getting there. If it could be applied as part of a render pipeline to give 3D sets and animated models a final glow-up, it might even be ready for early-stage production, like the progression of 3D effects in movies ranging from Tron through Terminator 2, Jurassic Park, and Species, to the two Final Fantasy movies (Spirits Within & FF7: Advent Children, in case I've forgotten others) -- though looking at that list, it seems more like we're halfway through rather than at the earliest stages. At least up to Species, and with any given frame looking as good as the best from Spirits Within save for facial expressions. Perhaps those could be captured via video of the voice actors, whether directly recording the lines or as part of the training data for AI voices.

At this pace, it might only be another five years before you can just drop a yarhar batch of whatever show and a fanfic script into the AI blender and pour out video of at least crunch-time-animated-by-a-sub-sub-sub-contractor-studio-in-North-Korea quality on command... though I fully expect industry groups like the RIAA and MPAA to get generative AI banned outside some kind of licensing scheme that effectively means they can use it but not you, specifically to prevent that, while actor and off-camera crew guilds try to put the genie back in the bottle entirely, with all the success that phrase usually implies.
--
noli esse culus