I've been using Ovi for about a week and it's a blast. Like all AI gen, it's a slot machine and even putting in good inputs might lead to bad outputs, but if you run it enough you'll get something good or usable.
I've definitely made many things that look and sound real with both I2V and T2V, though T2V tends to look more like '90s TV quality at times, which in a way makes it seem more real. If you use Flux SPRO as the image source you can get some pretty realistic-looking videos.
I do have a 5090, so it takes about 4 to 5 minutes to make a 5 second clip.
What is your setup? It took 2 hours for me on a 9950X3D with a 5090. Any idea what I could be missing? Or maybe some other variable is off; I was using the default .yml values.
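One thing worth ruling out for a 2-hour run on a 5090 is the pipeline silently falling back to CPU (e.g. the GPU not being visible to the process). A minimal, hedged sketch of that sanity check, assuming only that `nvidia-smi` is on PATH on a CUDA box; the function name is mine, not from the Ovi repo:

```python
# Check whether the process can see an NVIDIA GPU at all. A GPU job that
# runs ~25x slower than expected is often just running on CPU.
import shutil
import subprocess

def gpu_visible() -> bool:
    """Return True if nvidia-smi exists and lists at least one GPU."""
    if shutil.which("nvidia-smi") is None:
        return False
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return out.returncode == 0 and bool(out.stdout.strip())

print("GPU visible:", gpu_visible())
```

If this prints `False` inside the environment you launch the model from (a venv, a container), the defaults in the .yml were never the problem.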
Lots of activity around Wan lately. It’s nice to see flexible open models make a strong showing against the massively funded closed competitors like OpenAI and Runway.
Wan – Open-source alternative to VEO 3 - https://news.ycombinator.com/item?id=44928997 - Aug 2025 (38 comments)
Kling still has the best proprietary video model, but Sora 2 is so smart that you don't need to edit anything if your target is social.
I don't see how Runway, Pika, or the rest of the purely foundation video model startups survive against the giants and the incredible open source Chinese models. They've got to be sweating bullets right now.
Everyone's also sleeping on xAI's high quality and insanely fast video model (10 second generations) that they're giving away completely for free without watermarks.
I was in a few of the early meetings on the Helsinki site where I overheard some executives expressing their intention to go after Google. These people had some balls. No clue whatsoever unfortunately. But it was the right kind of ballsy move that Nokia could have pulled off with a bit more vision.
The name was more or less a LOL WHUT?! kind of thing and it flopped horribly with consumers. But still there was some nice stuff in there that wasn't half bad. It's just that the whole branding and rudderless direction doomed it. And of course it was all tied to a failing device software strategy. So when that failed the rest failed as well. I'm not even sure when they pulled the plug on OVI exactly. It was such a non event in the grand scheme of things (mass layoffs, sale of the phone division to MS and subsequent closure, etc.) Must have been around 2013ish I would say. I was gone by then.
1. Go to a friend's place
2. Usual drinks, whatever activity gets you going
3. Each person writes a prompt
4. Chain them together
5. Watch the resulting movie together
That sounds hilarious, and I can't wait to try it.
I have fond memories of laughing until I was in tears when playing with a group of friends over drinks during the lockdowns in 2020. Something about the process just naturally results in hilarity (especially if you're in a group where you can be offensive).
It's like exquisite corpse for t-shirts. Or, in your case, shorts.
https://www.youtube.com/watch?v=DME86-QucsA
Great work all around though.
Easier than ever now, as AI-assisted coding tools will build you that generic landing page and basic UI.
But I also suspect that most of these are indeed SEO scammers, that there's no actual service, and that all payments are pocketed. It might take a few days for the scam to be reported and the site taken down, but it's likely enough to get a few hundred bucks out of it. They'll never be pursued because of where they live, and they can have many of these up in no time, thanks to AI, as you say.
What a sad state of affairs that no "AI" company or government is taking seriously.
Also, this model seems to benefit noticeably from having both CUDA >= 12.8 and Torch >= 2.8, and separately from SageAttention over Flash 2. But I have yet to see any cache threshold with Easy or Tea that doesn't get a bit postmodern.
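For thresholds like these, a plain string compare is a classic footgun: `"12.10" < "12.8"` lexicographically even though 12.10 is the newer CUDA. A small sketch of a numeric check (the helper names are mine, not from any package; in practice you'd feed it `torch.__version__` and `torch.version.cuda`):

```python
# Compare version strings numerically, so "12.10" correctly beats "12.8".

def version_tuple(v: str) -> tuple:
    """'2.8.0+cu128' -> (2, 8, 0); stops at any non-numeric component."""
    parts = []
    for piece in v.split("+")[0].split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break
    return tuple(parts)

def meets(installed: str, required: str) -> bool:
    """True if the installed version is at or above the required one."""
    return version_tuple(installed) >= version_tuple(required)

assert meets("2.8.0+cu128", "2.8")
assert meets("12.10", "12.8")    # numeric, not lexicographic
assert not meets("12.4", "12.8")
```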
and here you are, clutching pearls about AI girlfriends. lol. lmao.
The Dandadan intro, with its low frame rate and lack of sharp lines: https://www.youtube.com/watch?v=a4na2opArGY
Ironically and counterintuitively, animation doesn't feel fast if the frame rate is too high or too steady. You can't do everything on the ones and twos.
To your point about Dandadan’s intro, it’s jam packed with references, which is another kind of skill in and of itself:
https://www.youtube.com/watch?v=5sUaK0xahBU
Chainsaw Man is in that same vein, and is another Science Saru production. I’m looking forward to seeing what they will do with the Ghost in the Shell franchise next year.
https://en.wikipedia.org/wiki/Science_Saru
I get what you mean though regarding Dandadan’s animation style; it has a very hand drawn manga vibe, and the detail is minimal yet finely balanced against the overwhelming amount of noise and visuals. It’s like a slapdash superflat.
https://en.wikipedia.org/wiki/Superflat
On a side note, as an anime fan, MBS is doing great work lately. I liked Witch Watch much more than I expected to, and that’s a much better show than the genres involved would lead one to expect.
that and the guitar player behind the singer in the concert example has three arms :)
(Of course, excluding the obvious "that guy just knocked down a building!" CGI)
The only catch is that I'd need to get 32 people who want VMs like this since I would have to do it for the entire box of compute.
Wan2.2 runs just fine on AMD.
Probably never. If AI gets good enough to cover all the skills needed to make what would currently be a blockbuster movie for less than $1000, the demand for movies will be so small relative to supply that there will be no such thing as a “blockbuster movie.”
Edit: perhaps 12 Angry Men was good enough at the time.
On the other hand, I think the quality of movies and expectations will be a lot higher.
Younger generations who grow up with AI will just think it's normal, like we think being connected to the internet via a rectangle you keep in your pocket is normal.
Most people would use these tools for personal use, if nothing else. Seeing a celebrity, themselves, their friends, etc., act out any scenario they can think of is quite an appealing proposition. And porn, of course, for better or worse.
In the long-term, this has the potential to significantly change how media is created and consumed. Feature films produced by large studios will undoubtedly continue to exist, and they will also leverage the technology, but it's not difficult to imagine a new branch of personalized media becoming popular. The tools are practically already there; they just need to become more accessible, and slightly better.
https://reddit.com/r/singularity/comments/1lq299r/postscarci...
https://reddit.com/r/midjourney/comments/1o6ickx/dreaming_on...
https://reddit.com/r/midjourney/comments/1n6mzig/how_to_buil...
https://reddit.com/r/aivideo/comments/1nwdjdn/the_perfect_bo...
https://reddit.com/r/aivideo/comments/1m8a9wz/pinkington_rop...
https://reddit.com/r/aivideo/comments/1n52kut/derek_the_agin...
https://reddit.com/r/midjourney/comments/1muwyah/still_here_...
https://reddit.com/r/DefendingAIArt/comments/1mttoi4/my_not_...
AI is but a tool; if there is an artist using it, real art can be created, as with any other tool.
Before this level of quality and higher becomes accessible to enthusiasts, we'll see these tools adopted by mainstream studios, which is already starting to happen.
I'm a firm "AI" skeptic, but if this technology has revolutionized anything, it has been image generation. A few years ago it was science fiction to have the quality of upscaling we take for granted today. I reckon the same will happen with video generation as well a few years from now. Unlike "ASI" and "AGI", these improvements are achievable with better engineering, and don't necessarily require a breakthrough.
Three years ago we had a live-streaming auto-generated Seinfeld Twitch stream; some kind of coherent storytelling via AI doesn't seem beyond reach today. The tools just haven't fully matured yet.
https://m.youtube.com/watch?v=bS5P_LAqiVg
I'm sure more will follow.
(loud music warning)
I think we'll see AGI first.