And just like auto-tuned voices, it will come off as janky and fake.
It's similar to special effects. People complain about how bad and fake they look, but only about the effects that are bad enough to be noticeable. People don't realise the sheer number of effects used in scenes where nobody ever notices them at all.
Stylus noise, fret noise and mp3 compression artefacts are other "mistakes" deliberately introduced.
If done properly, it's not really janky or fake sounding at all.
It can't really help with a live performance, and for a good singer, recording multiple takes is going to be faster and more economical. Punching in/out is so easy, and modern digital DAWs like Pro Tools (which does this by default) keep all your takes for you anyway; no need to waste another track on your 24-track tape or tape over the previous one.
Here's another viewpoint:
https://www.quora.com/Do-all-most-singers-use-pitch-correcti...
Fantastic vocal performances were captured all throughout the last century without the parachute of pitch correction/autotune. I'd rather listen to an imperfect take with flaws than to a machine assisted correction any day. Each to their own I guess.
And in the same way some song makers have used autotune to adjust synthetic voices like Hatsune Miku's, would this have any use as an external filter to smooth out synthesized videos?
(I guess it might take a few years for the performance to get there)
Is any tech that is published on arXiv just fair game immediately? Seems unfair to the researchers.
The real consequence of this is that video footage is no longer a reliable source.
I'm a bit disappointed though that they didn't also include results for a synthetic source video with "impossible" poses (e.g. joints bending backwards, stretching, separating from the body or performing full rotations). That would have been pretty interesting (though perhaps a bit unsettling) to see.
Using AI to transform anyone into a professional dancer might include using AI to process live video (webcam) of someone dancing and then giving them some feedback for improvement. In a word: coaching.
However this is using AI to produce composite videos of people dancing.
It's not good enough to produce professional dancers, but it has definitely improved my dancing as someone who just dances for fun.
Meanwhile composite videos really blend in with all the augmented reality phone apps that teenagers use nowadays.
I'm half surprised there isn't already something like this for smartphones (with inferior quality).
Maybe I'm in the minority, but I think if we take this idea and walk with it, it has the potential to trivialize actual accomplishment. Maybe I'm overthinking it.
We're not going to see The Running Man-style/quality fake videos any time soon, and the media kind of runs with this and exaggerates, making people wonder if camera footage may one day no longer be considered evidence.
We're far from that. At most, the quality of the transfers here is about the same as what you'd see with deepfakes (celebrities superimposed onto pornographic performers using computer vision and AI algorithms).
But, with that said, this work is dependent on having a "source" that the user is using as an input for pose detection. The actual accomplishment must still be performed and recorded, though I suppose this opens the door to the dance equivalent of "lip syncing" even beyond what might be done today with a body double.
People said the same thing about every new artistic medium: digital cameras, Photoshop, Instagram filters, MIDI instruments, etc.
- Mimic a target's body motions (this link)
- Mimic a target's facial expressions (deepfakes)
- Mimic a target's voice (lyrebird AI, etc)
related video, digital animation puppeteering
https://www.youtube.com/watch?v=YiOByO8J7xg&t=2s&list=LLI462...
It's not perfect by any means, but we're seeing a new age of CGI. Once perfected, I wonder how the entertainment industry will change as a result (faster rendering times, less time to make scenes, puppeteering, not needing expensive famous actors or stunt doubles, digital identity copyrights, etc.)
We're heading into a world where it would not be very hard to bombard the public with a large number of long-form, highly convincing videos of anyone in the world ranting on any topic and acting out anything at all, and we would have borderline no idea whether they were legitimate.
Combining that with our media climate and already runaway problem with monetary and political incentives for fabricated stories seems really dangerous.
You could make a video of Neil Armstrong and NASA execs talking about how they faked the moon landing, or even much more nefarious fake content confirming conspiracy theories for political ends.
What will we use as a scalable filter to know what is actually going on, and how will we keep that content from manipulating public discussion?
I appreciate that the detected poses and motions create clear pictures of what different parts of the body are doing. Particularly for ballet, if I had access to this technology (in a way that was user friendly), I'd love to see the difference between ballet styles (Vaganova, Cecchetti, ABT, etc.). I think it would be much clearer from a student's perspective to see the stylistic differences in lines, shapes and movement.
This AI reminds me of Happy Feet, where they took Savion Glover's movement and choreography and applied it to the animated penguin. It doesn't seem too far-fetched. And lastly, for those who say this seems unnatural--dancing is unnatural to the body, hence the training and years put into it. So having an AI applied to it will only make it look more unnatural.
Artistically, this can be debated (as it has been), but in search for 'real life application,' I'd love to get my hands on this as a teaching tool.
Sorry for the long post--this is my first time on this site--my boyfriend sent this to me & warned me that if I blabbed too long, this post would not be successful.
"(...) allows anyone to portray themselves as a world-class ballerina (...)"
Moreover, after AlphaGo took Go away from us, I started to wonder "what is left" for humans, and I believe that we are centuries away from having machines that achieve world-class dancing. My reasoning is that in things like Go, image recognition or speech recognition, it is easier to "encode" the information for the ML to actually learn. On the other hand, encoding the movements of professional dancers is already quite difficult. Consider that in the video linked here, the whole human body is mapped to ~20 points. Sure, this may be enough to portray someone as a dancer, but good luck making a dancing robot.
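To make the "~20 points" concrete, here's a minimal sketch of what a detected pose actually looks like as data. The joint names follow the OpenPose/COCO 18-keypoint convention, which is an assumption on my part; the exact format used in the linked work may differ.

```python
# Illustrative only: a whole dancing human, reduced to ~18 (x, y, confidence)
# triples, one per joint. Names follow the COCO convention (an assumption).
COCO_KEYPOINTS = [
    "nose", "neck", "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist", "right_hip", "right_knee",
    "right_ankle", "left_hip", "left_knee", "left_ankle", "right_eye",
    "left_eye", "right_ear", "left_ear",
]

def visible_joints(keypoints, threshold=0.5):
    """keypoints: list of (x, y, confidence) tuples in keypoint order.
    Returns a dict of joint name -> (x, y) for confidently detected joints."""
    return {
        name: (x, y)
        for name, (x, y, conf) in zip(COCO_KEYPOINTS, keypoints)
        if conf > threshold
    }

# A dummy detection with all joints confidently visible:
dummy = [(0.5, 0.1 + 0.05 * i, 0.9) for i in range(18)]
pose = visible_joints(dummy)
print(len(pose))  # 18
```

The point stands either way: this handful of numbers is plenty for re-rendering pixels, but it throws away the dynamics (balance, momentum, muscle control) a robot would actually need.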
So maybe I should quit my programming career to become a dancer; it's less likely to be a job that the machines will take away ;-)
edit: grammar
Yeah, it doesn't matter if machines can't dance if I can't either. Still no job for me. :)
Like competitive sports, art is all about the display of human ability under constraints. This is why, even in the age of photographs, we still value hand-painted canvases. Such techniques are simply going to make people more discerning between real effort and automated means of generating the same outcome.
Rather than thinking AI-assisted style transfers are the end of art, we should think that these are new tools for artists to do even more interesting stuff. See this upcoming tool for example: https://runwayml.com/
And more recently with AlphaGo. Now that humans have no chance of ever beating AI again in the game of go, what will change?
I'm a go player so I'm more interested in this question. Professional go players said that AlphaGo is positive for go, that they will be able to learn from it and reach new levels of play.
Although of course their livelihood depends on the popularity of go, it would be bad press for them to say the opposite.
Maybe AI isn't able to copy human technique well enough yet, but whether it succeeds or fails will have little to do with whether it creates work that resonates emotionally like classic art, because that's no longer the purpose of the vast majority of art that people encounter.
And I would argue that human beings, for the most part, copy other human beings anyway. Working within a "genre" and using cultural references and even recognizable techniques are all essentially copying or at least adapting what came before.
https://www.theguardian.com/politics/2018/aug/30/theresa-may...
I think the opposite. I believe that this will kill blackmail. Why care if someone has a leaked sex tape featuring you in an age where anyone can fake one? Simply say it's fake. In a few years, I bet there will simply be apps where you can point at a person's social network accounts and have the app generate whatever you want. Blackmail will die once everyone has access to those videos with a few clicks.
This idea dates back to way before bitcoin.
I wonder if seeing yourself dance like this might speed up learning to actually dance like this...
plus this is just a very complicated thing, in that it’s gluing together multiple new techniques to do various things.
some of the pieces that went into this work (like GANs) have lots of tutorials online and might be a more manageable, and budget-friendly, place to start. you could do something interesting on Google CoLab with free GPU time.
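to show it really is approachable: here's a toy GAN in PyTorch, small enough to train on a free CoLab GPU (or a laptop CPU) in seconds. it learns a 1-D Gaussian rather than images or poses, so it's a sketch of the idea only, nothing like the pix2pixHD-scale model in the paper.

```python
# Toy GAN sketch (assumes PyTorch; learns a 1-D Gaussian, not video).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator maps 8-D noise to a 1-D sample; discriminator outputs a
# real-vs-fake logit for each 1-D sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data ~ N(3, 0.5)
    fake = G(torch.randn(64, 8))                   # generator samples

    # Discriminator update: push real toward 1, fake toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator (fake toward 1).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print(samples.shape)  # torch.Size([1000, 1])
```

the adversarial loop here (alternate discriminator and generator updates) is the same basic recipe the big models use, just scaled up enormously and conditioned on pose stick figures instead of raw noise.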
I mean, who cares what you look like in some video? When you actually meet people, they'll know that it's bullshit.
Now, if you could manage it in meatspace, that would be cool!
Everyone who watches television, movies, YouTube, etc. I know that's only a few people, but hey, it's a start.
And the focus here is "anyone", not professionals.
> the team based their algorithm on the pix2pixHD architecture developed by NVIDIA researchers
Is it me, or is NVIDIA trying very hard to take credit for this UC Berkeley paper? (They're almost taking credit for PyTorch as well.) Sure, this kind of work wouldn't be possible without their hardware, but by that logic Intel could take credit for most of science in the last few decades.
It also seems that UC Berkeley and NVIDIA collaborated on pix2pixHD, judging by the paper.
They are, see the quote above. They're also going out of their way to mention that PyTorch uses cuDNN, which is true but off-topic.
> This work was supported, in part, by NSF grant IIS-1633310 and research gifts from Adobe, eBay, and Google
The fact that people are thinking "it's on the nvidia site, they must have participated somehow" is precisely the reason I wanted to bring this up.