Indeed. Another criticism whose logic I can somewhat see is that the barrier to entry is very different from, say, drawing. To draw, you need a pen and paper, and you can basically start. To get started with Stable Diffusion et al., you need either A) paid access to a service, B) money to purchase moderately powerful hardware, or C) money to rent moderately powerful hardware. One way or another, if you want to practice AI-generated art, you need more money than a pen and paper cost.
With a super-cheap T4 GPU (free in Google Colab), PyTorch 2.0, and the latest diffusers package, you can now generate batches of 9-10 images in about the same time it took to generate 4 images when Stable Diffusion was first released. This drastically speeds up the cherry-picking and iteration process: https://pytorch.org/blog/accelerated-diffusers-pt-20/
Google Cloud Platform also now has preview access to L4 GPUs, which cost 1.5x as much as a T4 but offer 3x the throughput for Stable Diffusion workflows (maybe more, given the PyTorch 2.0 improvements for newer architectures), although I haven't tested it: https://cloud.google.com/blog/products/compute/introducing-g...
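Taking those quoted figures at face value, here's a quick back-of-the-envelope sketch (normalized units, not measured benchmarks; the 1.5x cost and 3x throughput ratios are simply the claims above): the L4 would come out about twice as cost-efficient per image, and the batch numbers imply roughly a 2.25x per-image speedup.

```python
# Back-of-the-envelope check of the claims above; normalized units, not benchmarks.

# PyTorch 2.0 + diffusers claim: ~9 images in the wall time that used to yield 4.
pt2_speedup = 9 / 4  # ~2.25x more images per unit of time

# L4 vs T4 claim: 1.5x the cost, 3x the throughput (both normalized to T4 = 1.0).
t4_cost, t4_throughput = 1.0, 1.0
l4_cost, l4_throughput = 1.5, 3.0

# Cost per image = (cost per hour) / (images per hour).
t4_cost_per_image = t4_cost / t4_throughput
l4_cost_per_image = l4_cost / l4_throughput

print(pt2_speedup)                            # 2.25
print(l4_cost_per_image / t4_cost_per_image)  # 0.5 -> L4 is ~2x cheaper per image
```

So even at the higher hourly rate, the L4 would halve the cost of each generated image if the throughput claim holds.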
That got me thinking. I agree, but from another perspective: the skillset is different. Traditionally, the approach to art was very bottom-up. You start with a pen and basic contouring techniques; understanding more advanced techniques (perspective, shadows, etc.) requires a lot of work.
"AI" art generally does away with basic techniques. The emphasis is more on composition and styling, a top-down approach. "AI" artists may be able to iterate more quickly by seeing "almost-finished" versions early (though a skilled artist can most likely imagine their work pretty well).
But most of all, the tools and required skills are very different. You don't need to know a lot about machine learning, but it certainly helps, and that's probably pretty far from the skillset of most current artists. People generally fear what they don't understand. And if I were an artist, I'd be at least a bit concerned about (i) it undercutting the value of my art, and (ii) having to learn this alien way of doing things to remain competitive (by way of selection, artists probably enjoy their current tools).
Anyway, I imagine photography was similarly upsetting in a lot of ways, and it also didn't happen overnight. I suspect we are going to see improvements in output quality similar to those in the early days of photography.
Another similarity is with digital music (and recording/remixing before that). I wonder if we're going to see new genres emerge as a result (the equivalent of techno/electro).
This complaint in particular strikes me as coming from someone who enjoyed the process more than the result. Very specialized crafting skills are now made, not useless, but no longer required to obtain a similar result. And, if we reduce things to a market, they now compete with a very orthogonal set of skills.
There are plenty of free online tools for using all kinds of AI image generation techniques, and they don't require powerful hardware, just something that can browse websites or run Discord.
It’s like with dreams. They can be terribly intricate and detailed, but ask me to draw something creative and I’m out.
Our DM, being someone who has released creative works, was reticent and less gung-ho on AI for a while, until he decided to start playing with the AI tools (Midjourney in his case) for a new custom campaign he's running. He's suddenly able to develop novel NPC tokens for every important character we meet, the monsters are high resolution, and the convenience of using Midjourney in Discord (which he already uses to coordinate our online D&D games) has been a huge boon, making our games more fun and immersive. A year ago this would have cost literally tens of thousands of dollars. He's a published fantasy author, so prompting, aka describing a scene, comes naturally to him. It's been a lot of fun seeing what he's coming up with.
I'm really loving the spark of creativity I've been finding in myself where the turnaround on the old tools was too long for me to not get frustrated and give up, and to see it amongst my friends, even the ones who were initially skeptical.
A 4GB NVidia GPU (sufficient to run Stable Diffusion with the A1111 UI) is hardly “moderately powerful hardware”, and, beyond that, Stable Horde (AI Horde) exists.
OTOH, a computer and internet connection are more expensive than a pencil, even if nearly ubiquitous.
None of this is true: you can easily use a Colab notebook and load the models from Google Drive. I have always done it this way and it works perfectly.
Also, if you just want to get started, there are plenty of free interfaces online to use, and Colab if you actually want to dive in.