To be honest, I’d struggle to find any human content that is not recycled from the past. I guess we are still better than AI at remixing sufficiently novel combinations of concepts but I think much of this AI content aversion comes from some kind of phobia, or an unwillingness to admit humans are shockingly unoriginal.
In other words, if there is truly no such thing as an "original idea", then how did the ideas that we are pulling from, deriving from and combining come into existence?
If all we are saying is that existing ideas inspire new ones, or that most human generated content is derivative, then I completely agree with you. I don't see how that proposition could be controversial at all.
But it seems to me that some people, at some points in time, somewhere have and will continue to contribute something original at least on occasion. Even if the "original" idea is 1% of the "intellectual product" and 99% is reusing existing concepts. To insist otherwise is to insist that we hit peak human innovation somewhere along the line and there's nowhere left to go.
Of course there are original ideas. "C is not an original idea, because it's just mixing A and B..." ok, so there are only two possibilities:
1. Someone else mixed A and B before. Okay, then we can recursively find the first person who mixed A and B.
2. No one else mixed A and B before. Then "mixing A and B" is a new idea.
For some incomprehensible reason, people seem to accept "mixing A and B is not a new idea because A and B already exist." Like... why? Since the four basic nucleobases have existed, there haven't been any new DNA-based creatures?
The real reason it feels like there are no new ideas is simple: the "mixing" process happens gradually. Even if you come up with a really good "X + Z" idea, people (read: the market) might not accept it, because what they're familiar with is "X + Y".
So you put a bit of Z to make "X + 0.9Y + 0.1Z", and another person who is as smart as you makes "X + 0.8Y + 0.2Z"... then when "X + Z" is officially a thing, people outside think you smart-ass guys were all just copying & pasting each other.
In copyright law (and I think patents?), there is the threshold of originality. Basically you evaluate how much work and change went into something to distinguish it from other works. Which, IMHO, comes down to how much of A and B you can still see in the mix, and how much mixing them changed them in the end product. If the change is small, people are more likely to consider it not innovative.
Or how Homer Simpson once said: “People are afraid of new things. You should have taken an existing product and put a clock in it or something.”
But now that we have software that can generate massive amounts of "Mixing A and B" ideas automatically... I dunno, seems like we're heading toward a sort of "pollution of the idea atmosphere" where we get a ton of new/remixed ideas dumped into the world that haven't passed through that initial filter of a human vetting them as particularly useful or valuable or interesting in the first place.
By such a definition, GPT-4 can create new ideas.
I often take it a step further and use it as a reminder that new ideas are rarely born from one person but rather the contributions of many, each too small to be noticed as an “original idea”, but when viewed collectively, can be revolutionary.
"A fax machine is just a telephone and a waffle iron!" —Abraham Simpson
This is the root of the objection to AI. To say that humans are similar would imply that a person's thoughts are a function of their sensory experiences, and that is not a popular view.
Similarly, people like to think that a person's behavior is not a mere function of their experience, that they are capable of "choosing" how to react to their circumstances. They will accept no explanation that appeals to material conditions or biology. A man steals bread not because he is poor and hungry, nor because of some chemical reaction in his brain, but because he "chose" to steal. The part that chooses would be his "soul", which cannot be explained in terms of cause and effect.
And the obvious answer of course is that recycling / mixing can be innovative and creative. The dichotomy only exists in our heads.
I believe this is because of how our culture chooses to reward intellectual output. Your output needs to appear sufficiently original to be worthy of reward and protection.
For example, a musician who creates a new song is not rewarded, and is in fact punished, for naming all the influences that went into said new song (both morally, by people who would think less of them, and legally, by lawsuits asking for a share of their profit).
But I think the way you have put it here is more clear than anything I've been able to manage. I think I put it in a more extreme form, which is to say that the earth had no life on it for ~3 billion years, and still no humans until ~100k years ago. At some point the first music was made. And if it could be original then, whatever it was that made that possible, should presumably make it possible now, too.
If you listen to the stories of how things like major inventions came into being, it's almost always some domain expert visited some domain in which they were not an expert and inspiration struck on how to combine the domains.
Otherwise, the rest of invention is either human imitations or improvements to things found in nature or not inventions at all but discoveries stumbled upon by accident/experimentation.
Nothing is just willed into existence by pure "thought". Every thought you've had is a product of the things you've been surrounded by.
There's a million things today that didn't exist back then, and many things that came to exist culturally over time, or were discovered. Do you think they were sitting around the fire talking about how the molecular motion registers as a conscious heat sensation in animals or what the proper interpretation of QM is?
But when you start to scrutinize everything they did, there are many clear examples of things and people that did those things first and directly influenced them. However, they may have been the first to do it in their specific flavor, which is undoubtedly original.
The most original thing I can think of that The Beatles did was add sound to the runoff track at the end of Sgt. Pepper’s Lonely Hearts Club Band, so that it keeps playing the sound infinitely until the needle is removed on the turntable. I’m sure someone with more insight can point to another musician doing that first, but at The Beatles’ scale, we collectively come to believe they were the first to do it.
Somewhere around 2016, when the golden Age of Internet-Content ended, as the last undiscovered part was added to the mix...
But jokes aside, does innovation really need to be new? Or is it still innovation if you don't know of prior art when you create something that is new to you? Take for example all the retro waves that regularly come back, where people discover something that is, to them, completely new, and make up things that are, to them, completely new, which earlier generations already did long ago.
> In other words, if there is truly no such thing as an "original idea"
Does "original" mean 100% new? And would it demand that all parts are new, or would it still be original if we just arrange old parts in a new way?
There is a huge amount of information flowing around us in realtime, and linking some thought/thing to something else, which then might have implications (subsequent discoveries), is statistics. We do have people with better wetware, so they have an increased %chance to recognize something in front of them, as well as to link it in a "novel way", but this is not really necessary; it only increases the %chances.
Even pretty radical things can be explained this exact way, up to having some sort of brain "damage" (or drugs like LSD) that changes mental pathways and has a %chance to be actually right and lead to discoveries, though the %chance here is very low (not every trip is a breakthrough in art/science, right?)... a bit like, e.g., people who can smell sounds, or who link some natural phenomenon with an emotional situation between some people and write about it.
> peak human innovation
that might simply be a nonsense question if you follow my previous argument. there simply is no real "innovation" - only "discoveries" of stuff. and we meatbags with very limited/unreliable wetware at some point can't "progress" any more, but non-biologically-flawed systems might overcome our limitations and do just that for us.
with our naked eyes, we cannot see things like infrared light, but it exists, and some tools helped us understand everything around it somewhat. now we're building more advanced tooling to get through the next plateaus - just like we have to build very sophisticated tooling to see further into space or more closely into fractions of atoms, which in turn leads to more discoveries, and ultimately "innovation" we can feel in our lives.
the only thing I will 100% agree to is that a real AGI will be the _last_ discovery/innovation humanity will ever make - and that is a good thing - it means we finally overcome our biological limits somewhat.
Nothin' you can sing that can't be sung
Nothin' you can say, but you can learn how to play the game
It's easy
Nothin' you can make that can't be made
No one you can save that can't be saved
Nothin' you can do, but you can learn how to be you in time
It's easy
All you need is love
There will be a big leap forward when we get to teaching these things from video streams so they can get the concept of 3D properly.
Read How to Ride a Bicycle. Human creativity is just a lot of work, often connecting novel ideas together while doing so.
Your question contains the answer. Remixing begets innovation / creativity begets remixing.
Its only words and concepts and we keep inventing new ones constantly. We also imbue new meaning to words and concepts over time.
By observing nature, and then iterating on that over thousands of years of culture. Of course the question is whether and to what degree the process of iteration and transformation matters.
How would you define an "original idea?" An idea with no origin or source, that spontaneously manifests from the aether? That isn't possible, everything must be extrapolated from something prior.
Or did it ever stop?
Art that isn't recycled is almost always an illusion caused by your not being familiar with the things it was remixing. Nobody pulls things out of thin air. Even whacked-out acid-trip visions were molded by our cultural experiences and aesthetics. There are a few people who make large shifts in their art form (e.g. Jackson Pollock, Jimi Hendrix, and Antoine Carême), but (a) they were still making incremental progress, and (b) without all that remixing, you'd have a pretty limited set of things to choose from for your entertainment.
PS Edit: Right now AI-generated art is novel, but it has far less potential to meaningfully advance art as time progresses. It will mimic what it sees, but it won't see anything that a creative cultural avant-garde doesn't produce first, and as long as real people are doing it first, there will be eyes that want to see it before an AI algorithm waters it down into some conceptual amalgam of its real form. That probably includes commercial entities who are deliberately trying to make themselves stand out.
1) The output, while impressive on the surface, is bland and recycled. It will drag down the general level in the same manner that CGI has destroyed movies.
2) People don't want to consume AI generated content in the same manner that they generally don't watch Stockfish vs. Stockfish.
3) It is not phobia, it is disgust at humans being dehumanized.
The phobia seems to be on the side of AI corporations, who quickly step in any discussion that questions their business model.
The problem people have with AI is that it is cheapening the things that make us "special". If anything I can draw, paint, write, create, and pour my heart and soul into can be acceptably approximated by a "bot" in five seconds or less, then what do I have to offer to the world beyond my "work" output? The things that creatively inspire me have been commoditized in a way where it's exponentially harder for me to generate value doing them.
Artists and musicians have famously struggled for centuries. A technology that raises the bar on an already exceedingly difficult path (making money doing creative things) makes that particular situation much worse.
It's clear that amidst a deluge of AI-generated content, audience urges for authenticity will rise. Attention is in more finite supply than content, has been even before generative AI; only so much will stick out, and it will perhaps be the most authentic or analog content and goods.
The real question is whether AI/AGI can make it past the "authenticity threshold" and xenophobia to where we also accept AI storytellers and brands as eligible.
As far as I can tell, there's just two interesting things to watch out for:
- Personalized AI-generated content. What if the movie were just for you? What if it expired like a Snapchat, never to be seen again by anyone else? Would this tickle us? Would we succumb to new filter bubbles, increasing the distance between us?
- Can AI make the movie that's so good that it sticks out and does become the one we watch collectively?
So what? There are a handful of truly revolutionary artists each generation. Faulting AI for producing merely good and interesting art is missing the point.
What fills me with dread is the obvious glee over removing human work from sellable products, when there is no viable alternative to working for most people. We've created a world where a lot of jobs can be automated, and that is somehow a bad thing, and it's going to be used to crush people. It's a labor issue not a philosophical one.
Labor will shift. Bemoaning new efficiencies and new empowerment because they reduce the need for meaningless labor consumption is backwards. We should be sensitive to those whose lives will take an unexpected turn (I may be one of them!), but we should do so in the context of celebrating greater individual leverage to create and produce.
We biologically have a desire to live. Part of living is confronting threats to our survival, and hopefully defeating them. Our minds are what provided us with the ability to survive despite there being stronger, more vicious competition out there. Why would we want to surrender our one competitive advantage?
People aren’t stupid. They know that AI will continue to progress (“technology must progress”, says the technologist) and that it threatens their way of life. Truckers know that AIs will automate their jobs. Artists know it will automate theirs. Everyone knows that AI is coming for them, sooner or later. If not in their career, then maybe in their social life, like video games and social media have decimated in-person communication.
Those are things we need in order to survive. Our jobs provide us the money we need to meet our needs, and give some of us meaning in life (no, UBI proponents, receiving a check doesn’t solve all the problems). Our social lives are paramount to our health. What will humans be doing all day when AI has taken all our jobs and we are all talking to chatbots all day which cater to our every proclivity? Maybe that is some folks’ ideal worlds, but certainly not mine.
Given the mixed results we’ve seen with technological innovations in the past, rather than giving a negative connotation to the people who are cautious or concerned about AI, why not listen to them? Rather than having a phobia, maybe they have a valid point.
I could make a similarly disrespectful and unsupported claim about the mental state of people who believe most human creative activity is simply recycling learned ideas, and we could spend some time flinging insults at each other, but why?
It is to believe the mind to be non-functionally-pure[1] in the programming sense: output based on something other than purely its input.
And if that's the case, the question is where does that irrationality come from? A random number generator? Any developer knows there's no such thing as actual random: it's all either list-based or based on environmental aspects.
I don't see room for an argument to be made that the human mind isn't just mixing ideas/the environment that doesn't rely on some form of magical thinking.
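The pseudo-randomness point can be sketched concretely. A minimal Python illustration (this only covers software PRNGs; hardware entropy sources are the "environmental aspects" mentioned above): a seeded generator is a pure function of its seed, so identical inputs always yield identical "random" outputs.

```python
import random

# Two generators given the same seed produce the same "random" stream:
# the output is purely a function of the input, like any pure function.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 99) for _ in range(5)]
seq_b = [b.randint(0, 99) for _ in range(5)]

print(seq_a == seq_b)  # True: deterministic despite appearing random
```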
No, it believes that the mind has an internal state. We have many physical cases of state which is in practice undecidable. Moreover, in many formulations of quantum physics one can have a state which is unreachable even in theory - not just in hidden-variable-style QM; there are ways to get unreachable state even in 'regular' QM[0].
>And if that's the case, the question is where does that irrationality come from?
The universe itself. The linked PDF - you might be familiar with the author; he features often on HN - provides one way this is possible. Though I'm not saying this is true; I suspect it may not be.
I like it.
> The Not By AI badge is created to encourage more humans to produce original content and help users identify human-generated content.
How does this encourage humans to produce more original content? It may help users identify human-generated content - if they care. But perhaps more usefully it helps AI identify human-generated content to avoid training on its own garbage.
The word "recycled" attempts to carry a lot of weight here, and not successfully.
In 1905 Einstein published his famous four papers that could not have existed without recent work done by Planck, Michelson, Morley and Maxwell.
It would be ridiculous to describe those papers as recycling.
A small culture war that was smouldering long before the appearance of generative models.
But perhaps the process doesn't matter as much as the end result.
Maybe another interesting aspect to consider is: humans create new things (even if by remixing and deriving from old things) all the time, because they want to, because they choose to do so.
Thus far, these AI creations have been made at the behest of humans demanding them.
Perhaps that will, that desire to make something new in the first place, will remain a differentiating factor.
It’s probably not completely accurate to say there’s nothing truly original anymore, but it is probably true that the rate of discovery of truly novel and original concepts and ideas has slowed to the point that it may seem that way. Possibly because all the low-hanging fruit has been taken, and only the more difficult discoveries remain.
You can retrospectively say we've always had a device that creates light from heat, like this 'new' lightbulb and metaphysically you'd be correct.
But the properties and design of a lightbulb are different to an oil lamp. That's new enough to be called original and Thomas Edison (and whoever else helped him) are the originators of the lightbulb design and they made it real.
An original thought? Probably not, an original object, yes. Very much so.
The argument should be about capacity for creation, and to me AI generation just doesn't seem like true creation, but more of a cheap magic trick.
That means AI is not capable of producing meaningful _new_ content like discovering new mathematical theorems, because AI does not understand maths, whereas humans can come up with something meaningful based on _understanding_ of the content they have learned from.
This is why when you ask e.g. ChatGPT about something it has not been trained on, it can only come up with garbage, whereas a human would likely be able to provide a meaningful answer based on looking at the same training data, if that makes sense.
Language models can generate novel functioning protein structures that adhere to a specified purpose. Structures that didn't exist before, never mind in the dataset. The idea that there's some special distinction between the reasoning LLMs do and what humans do is unfounded nonsense.
A distinction you can't test for (this so called "true understanding" ) is not a distinction
"Hold the newsreader's nose squarely, waiter, or friendly milk will countermand my trousers."
A Bit of Fry & Laurie Concerning Language: https://www.youtube.com/watch?v=3MWpHQQ-wQg
make up a list of fictitious creatures and describe them. make sure the names aren't reused.
can you be extraordinarily creative and unusual?
Sure, here are five highly creative and unusual fictional creatures:
Crystalline Leviathan: The Crystalline Leviathan is a massive creature that lives deep in the ocean. Its body is made entirely of iridescent crystals that refract light in all directions, creating a dazzling display. The Crystalline Leviathan feeds on deep-sea creatures and can grow up to hundreds of meters long.
Plasmic Shapeshifter: The Plasmic Shapeshifter is a shape-shifting creature that can manipulate its own plasma to take on different forms. It has no fixed appearance, but usually appears as a mass of writhing, translucent tendrils. The Plasmic Shapeshifter is highly intelligent and can mimic the voices and mannerisms of other creatures.
Quantum Butterfly: The Quantum Butterfly is a tiny, delicate creature that flits through the fabric of space-time. Its wings are made of pure energy, and it can move at incredible speeds. The Quantum Butterfly is believed to be responsible for some of the most unusual quantum phenomena, such as entanglement and superposition.
Cosmic Serpent: The Cosmic Serpent is a serpentine creature that lives in the void of space. Its body is covered in shimmering, rainbow-colored scales that reflect the light of distant stars. The Cosmic Serpent feeds on cosmic radiation and can survive in the most inhospitable environments.
Dream Weaver: The Dream Weaver is a mysterious creature that lives in the realm of dreams. Its body is made of ephemeral, mist-like substance that can change shape at will. The Dream Weaver is said to be able to influence the dreams of mortals, and can even enter their dreams to communicate with them. It is rumored that if you catch a Dream Weaver and release it, it will grant you a wish as a reward for sparing its life.
But they mostly read like someone churning out monster manual entries by pairing animal names with adjectives then putting in the obvious two sentences of description.
Nothing about it engages my brain or strikes me as brilliant, insightful, or what I'd usually mean when I say "creative".
This, I think, is the key bit of your comment:
> I'm not a fictional guy so I don't really know
HN (and wider social media) is filled with people who aren't familiar with [field] declaring that AI has surpassed humans in [field]; it's not a display of AI's new brilliance but the old[1] display of a speaker's ignorance.
Most of them don't display the same humility as you.
"A man with a new horse visited the hardware store in Dagwood last week to try and find a new hitching post for his stable."
That simple sentence has enough entropy that I can say, with confidence, that it has never been uttered by anyone in the history of human civilization.
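A rough back-of-envelope sketch of why that claim is plausible (the vocabulary size and sentence length below are illustrative assumptions, not measurements):

```python
import math

# Assume a working vocabulary of ~20,000 words and a ~25-word sentence.
# Raw word-sequence choices: 20000**25, i.e. roughly 357 bits of entropy.
vocab, words = 20_000, 25
sentence_bits = words * math.log2(vocab)

# Generous upper bound on sentences ever uttered: ~10^11 humans ever,
# ~10^9 sentences each, is 10^20 sentences, only ~66 bits' worth.
uttered_bits = math.log2(1e11 * 1e9)

print(round(sentence_bits), round(uttered_bits))  # 357 66
```

Grammar prunes the vast majority of those raw sequences, but the gap between ~357 bits of possible sentences and ~66 bits of sentences ever spoken is wide enough that the confidence seems well placed.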
https://www.biblegateway.com/passage/?search=Ecclesiastes%20...
For me, I actually really like that human content is recycled. The discussions around AI, art, and humans always talk about art as a separate product produced by the human. I feel like many pieces of art (not all; for example, zombie realism) have a piece of the human inside of them.
An example I used in a comment in this post is Raymond Carver. His short stories are about blue-collar men in the mid-west. Carver was a blue-collar man from the mid-west. I find that interesting, and I liked that he pulled from his life experience to write stories.
I get very excited when I realize that the work I'm engaging with is recycled in some way. Like a song's chorus sharing lines from a separate artist, or a style that seems similar to a different writer I know. I love that. It makes me feel more connected to the creator and makes me like them more because we share similar interests.
An example that comes to mind for me are the manga Hunter x Hunter and Jujutsu Kaisen. The creator of Jujutsu Kaisen loved HxH and it 100% shows. The crazy powers, the complicated fights, the walls of text explaining what happened in a fight, etc. All of those come from HxH, and I love that the creator was inspired by it. Sometimes reading Jujutsu Kaisen makes me feel like I'm bonding with the creator over our mutual love of HxH.
With AI created work, I feel like we're missing the human touch. And as I pointed out in another comment here - I don't think it's hard to bring it back. Show me the prompts and the chat log. Why did you choose those prompts? Why did you ask the AI to write in X style? How does that X style make you feel?
On the other hand, maybe the AI prevents some of the human touch from coming through. I imagine much of the recycling that comes from humans is unintentional. The song you wrote on guitar has that riff not because you're copying a band, but because you've listened to so much 90s indie rock that you just make what you love - without realizing it.
Maybe in the future this changes. I'd love to see someone spend some time with an AI and mold it to such a way that what it produces can emulate the user's love and interests. Don't forget to show and share the chat log!
[EDIT]: To add onto this, I've actually done the above. I've played some text adventure games in the past, and I got ChatGPT to play with me. I told it that I had stat points (Health Points, Magic Points), that I had two types of attacks, and that there are three types of monsters. That experience invigorated me to write paragraphs on world building - just so I can feed it into ChatGPT and live in that world.
I've seen some instances similar to this - but people just end up not sharing the prompts they feed into ChatGPT. I'd love to see those prompts. I'd love to see what someone did to get ChatGPT to act a certain way.
"Soon, asking a writer if they use AI will be like asking a photographer if they use Photoshop" – e.g., it's a foregone conclusion, and the best artists will generally adapt to using the best tools available
It's similar to music. DAWs and samples didn't kill off music; instead, they made it easier than ever for a teenager with a computer and a passing interest in music to create a song and share it with the world. As a consequence, though, the standards for mixing and mastering have gone up massively; people don't really tolerate bedroom recordings with $10 mics any more. I imagine most amateur musicians in the 90s didn't know what a compressor actually did (I certainly didn't).
Seeing the results of talented artists who are experimenting with AI[0] makes you realize that there's still going to be a massive gulf between skilled artists using SD etc as a tool, versus those who think they can be artists just by putting keywords into an image generation AI and calling it a day.
[0] https://twitter.com/jamm3rd/status/1619896080619159553 https://twitter.com/jamm3rd/status/1633758455952703488/photo... (moderately nsfw I guess)
the fear from the current professionals is that the new ai-raised floor is good enough to be usable commercially, and will thus cut a chunk of economic value away from the professionals.
There is no fear that AI will stop people with an intrinsic interest in creation from creating - their ceiling and productivity would skyrocket as a result, in fact. And it would be _these_ productions that wow an audience.
Look at the history of woodworking for inspiration.
It used to take human sweat to make flat straight surfaces. Something like a walmart flatpack bookshelf would have been VERY expensive to make 1000 years ago. Industrial tech made that cheap and ... now it's seen as tacky and out of style and only for poors.
The result culturally this century is an avoidance of "plastic wood" finish, faux finishes and faux surface prep that look old and worn and hand scuffed, and a strange hipster fetish for "live edge" raw bark on finished product. Sure, in 1875 one might have predicted based on mass production trends that furniture in the 2020s would be all right angles, smooth as glass mirror, photographic "ideal" woodgrain instead of real woodgrain, plastic-y lifetime thick film finish. But that's only for poor people who shop at walmart; nobody with money buys that stuff.
I suspect what AI can cheaply produce will rapidly come to be considered lower class and trashy. Oh, sure, completely AI-generated movies look very nice and sound very nice and are ever so trendy and well written, but special effects and yesterday's memes on today's big screen generated entirely by computer algorithm is only for poors. Now, us rich cognitive elites only attend live theater plays. Aspirational middle class types watch recordings of theater plays (admittedly maybe some AI generated). You wouldn't show off your 'wealth' and 'class' by taking a date to an AI-generated movie, that would be insulting; you might take her to a live theater play... I'm sure there will be some AI incursion, maybe the lights will be run by AI or AI-painted backdrops or some actors will lipsync their lines to AI, or roughly every 5 years we have to tediously suffer thru a new generation of rewritten modernized classics and it'll be worse in the future with AI... but in general human will be the status symbol because it'll be expensive.
I suspect being able to ask an AI to generate non-copyrighted commercial quality music will mean massive noise pollution (much like every room required 15 digital clocks in the 80s and every room required 15 super bright blue power LEDs in the 00s) followed by the only people making money in music will be live humans playing physical instruments.
Poor people try to get by and don't spend money and when they do it'll be on infinitely cheap AI commodities. Rich people spend money on unique stuff, not mass produced AI shovelware. Aspirational people spend money trying to look rich so they will spend all their money on non-AI stuff.
So I ask the AI to rephrase it for me. Now I can pick and choose an option from those the AI offered, or I can just keep my version if it turns out it's good enough. I feel this is an acceptable use, and you can still maintain authorship, just like book authors maintain authorship after an editor has overhauled their writing.
Your DAW example would work if DAWs could generate samples, beats, and whole songs by the click of a button. But are you truly the author, then?
They absolutely can, at least with 15-20 clicks rather than 1 at the moment. Drag in drum loop, drag in melodic loop, use Scaler2/Cthulhu/Captain Chords to come up with a chord progression, use a synthesizer preset to play the chords, drag in vocal sample, put on Ozone AI Mastering on the master bus, done. If you sent me back in time to 1990 with that track, people would be really impressed.
As to whether you can claim you're the author, this gets debated a lot. I'll just paste the famous goat farming quote.
> I thought using loops was cheating, so I programmed my own using samples. I then thought using samples was cheating, so I recorded real drums. I then thought that programming it was cheating, so I learned to play drums for real. I then thought using bought drums was cheating, so I learned to make my own. I then thought using premade skins was cheating, so I killed a goat and skinned it. I then thought that that was cheating too, so I grew my own goat from a baby goat. I also think that is cheating, but I’m not sure where to go from here. I haven’t made any music lately, what with the goat farming and all.
It’s a bit more like a hand crank that builds a lopsided toolshed at this point. Which is absolutely incredible, to be honest, and couldn’t have been anticipated by many people five years ago, but doesn’t necessarily entail that the cathedral-button will arrive any time soon.
We kinda have to be patient.
Whenever a customer asks me to develop an app, I don't consider that they did the job. I did it.
I don’t find this holds true for hip hop, the most streamed genre, whatsoever. There are countless examples of people making it big recording on a cheap mic in their mom’s bed room. One example is Chief Keef. A much more prominent example is Kanye who had terrible mixing on The Life of Pablo to the point where he released patches[0] for the album. He also recorded parts of Jesus is King on an iPhone[1]. To further the point, bootleg recordings and demo tapes of unreleased songs in artists’ back catalog continue to be popular with devoted fans. Average people like good music, even if the quality of the mixing and mastering is not stellar.
Even The Beatles released songs with minimal takes from a live rooftop performance on Let It Be; hardly an ideal recording set up. Although they surely had better mixing.
[0] https://archive.org/details/2016-the-life-of-pablo-updates
[1] https://genius.com/a/kanye-west-says-20-percent-of-jesus-is-...
It's called bedroom punk now.
Asking a photographer if they use Photoshop creates a framing where the artist still went out and took a picture to create the work, and Photoshop only modifies their original creation.
But you could just as easily say "Soon, asking a writer if they use AI will be like asking a photographer if they have a camera". That sounds ludicrous, but that is exactly what generative AI offers: the ability to create content essentially ex nihilo.
AI doesn’t “do everything for you” any more than word processors, music sequencers, or CAD do. AI allows people to operate at higher levels of abstraction, is all. Yes, fewer people will type repetitive copy into word processors. But that’s fine.
You are mistaking activity for productivity. Few people go into writing because they enjoy typing. AI offers leverage, which is the most amazing thing for anyone interested in producing work.
Ask any professional writer: revisions are a chore. Authors often say a novel is 10% writing, 90% rewriting, and they don’t enjoy the rewriting part.
So, IMO, contrary to your doomer take, I think we will see more people producing better work because they can focus on the parts they love and not do the grind that was only ever a necessary evil.
(The exception being those who love the grind and not the product)
Netflix also infamously used AI to generate background images for an anime, which is work that previously would have put food on the table for illustrators.[0]
It's hard to look at all of this and think that all the jobs are going to stick around. I know at least 5 copywriters who have made their entire living as adults writing banal junk for small companies. All 5 of those careers will vanish in the coming years.
[0] https://www.cined.com/netflix-uses-ai-to-generate-anime-shor...
The end has come.
For example: sometimes I will pass a paragraph through an LLM and ask it to return it in a more "active voice". Then I _manually_ edit my original paragraph, if I wish, to make it seem more suitable for my current goals.
This is sort of like Photoshop for photographers.
The problem is that it automates the best parts and transforms the user into a curator/manager.
Some people want that, but others enjoy the creative part more.
To bring it back to photography, that's the problem with digital, you blast through hundreds of photos and spend most of your time selecting and editing at the computer.
With analog, and especially wet plate or direct positive paper, it takes a while to compose your picture and you only have one chance to take it and develop it. It's very easy to mess something up, which is what makes the process more meaningful. And you're left with a unique copy, not an artificially limited single copy.
I almost never use the prose word-for-word, but goodness is it helpful.
Of course, it's not my place to say who should or shouldn't call themselves a writer, but I'll simply personally respect someone more if their stories aren't ghostwritten by the AI.
I don't think "AI" is the way most people would complete that sentence. Perhaps "Word"? Or "LibreOffice" if "Photoshop" were replaced with "Gimp"? Personally I use Emacs or Vim. You could incorporate AI into any of those tools, but how exactly? And would it be a core part of the functionality? And would it be something that the best artists make much use of?
If you used AI to make the spelling/grammar checker more reliable, probably most people would use that (assuming no privacy problems), but if there's an AI-driven autocomplete for sentences probably a lot of people, including the best artists, would turn it off, just like I turn off the autocorrect on a word processor today.
If an AI can complete the sentence, then perhaps the sentence isn't worth completing, at least if we assume that the reader is as clever as the writer and has similarly sophisticated tools. Perhaps the AI-driven tool should instead put some kind of wavy line under the second half of the sentence to indicate that it's boring and obvious and doesn't need to be there.
It’s tremendously useful technology for many domains already and you can see the stepwise refinements that will permeate many parts of our lives. Big money corporations are productizing those elements already.
But to be as fair to the pessimists as the optimists, it's actually still a very big leap from Midjourney and GPT-4 to something that becomes the next camera or typewriter in terms of ubiquity. Because we saw a huge leap recently, we feel close and excited — and we might prove to be right in hindsight — but we also might see that there are some hard conceptual limits that we won’t see anyone break through for another fifty or hundred years.
For image-making, there is a decision in the prompt and in selecting images, but that is very different from making a decision about each color and brushstroke, and working to finish a painting. The latter is orders of magnitude more difficult, which is why great masters are celebrated. Creating stuff with AI will suit certain people; I definitely don't think the 'best' will necessarily use it. It does seem to take away a lot of the fun of making images, and truly original work will always push quite far outside the training set.
For literature, it could be interesting how authors use new tools, where and in what sense. Maybe have more conclusive plots? Fewer inconsistencies? Have AI imitate dead poets' styles?
We will value the classics more I guess, since they were done the old fashioned way.
New tools open doors, not shut them.
Another argument I keep hearing (most recently from pg), is that we'll always need non-AI training data. That, too, doesn't follow. Training new models on synthetic data does not mean we get stuck in a particular mode or style. We'll continue moving, improving, and trying wildly new things. Bootstrapping with synthetic data doesn't block evolution - it enables faster evolution, even. (I'm using synthetic data to train new models to great effect.)
People are angry that we've lowered the opportunity cost barriers and so they're expressing their frustration.
It's a good thing that life's choices no longer fence us in as much. Everyone should get a chance to learn how to express themselves through art with the new regime of tools. Despite changing economics, there will still be a top 1% that do better than the rest of us.
Artists, by and large, don't do art because they enjoy having art--they do art because they like /the process/ of producing art. If that process can be done faster and better by AI, then yeah, sure, they /might/ be able to still do art for a living (some artists will be able to leverage their experience to maintain an advantage; other, less flexible ones will lose work)--but the work they do will likely not be commensurate to the work they were doing before, and will likely be less enjoyable to them.
The thing that worries people about AI is that it'll make all creatives into middle-managers.
Then I spend 95% of my hobby time tediously sanding boards and applying finish and this is NOT the fun part of my hobby.
I will say the only time I've gotten compensation directly has been for the latter, because nobody likes doing that stuff.
I'd say AI will make creatives into micromanaging middle managers. The type that gets way too involved in critiquing your every decision is still going to micromanage; they'll just be harassing an AI instead of humans.
Fundamentally the creative act of making book cover art has always been specification. You can say to an artist "make me a book cover looking like ..." or you can hire a guy and give him no direction and trust him to decide what to do. But the creative act was the specifying, not the mechanics of turning that idea into printing ready art.
Another problem you run into is that AI, as currently produced, is a groupthink enforcement technology, and most people will be turned off by that. The blandest, least offensive corporate dining experience is likely McDonalds. How much do hipster foodies prefer going to McDonalds over ... anywhere else on the planet? AI can only create the ideal Big Mac, and you can make money shipping Big Macs, but only selling to poor people. Everyone else wants something else, even if in a strict corporate inhuman sense it's inferior in every way to a Big Mac.
I think it's a great thing.
Beyond that, if you think AI algorithms will replace art, you don't understand art. It replaces much of the commercial utility of many artists and creating many types of images, but the idea that it threatens art, generally, is preposterous. People who think art school is only about learning to physically make art are like people who think computer science degrees are only about coding. Most of what you learn in art school is conceptual thinking, communication, really deeply seeing things in a way that most people don't, and stuff like that.
Factories didn't replace potters when they started making pots and dishes or sculptors when they started making cheap home decorations-- it just meant the artisans no longer made money creating a commodity because most people were satisfied buying chintzy shit cheap. Many potters who lived through that change probably died in poverty because of it. Talk big numbers about the way industry shifts all you want, but not everybody can switch to an entirely different category of employment just because someone taught a computer to do it.
There was never anything stopping anyone from picking up a pencil and learning to draw. The idea there's some inborn "talent" for art is absurd. There is only people who practice and people who don't.
What you are actually "democratizing" is the ability to produce finished products without ever putting in any kind of work.
Which means you are absolutely devaluing the people who actually did put in the work.
It is a terrible thing. A miserable race to the bottom of human expression.
Note: I don't really care if people use AI to generate anything but they should be forthright about doing so.
This is all explained well by "Drawing on the Right Side of the Brain". People apply their childhood drawing symbols to realistic art in their teenage years and get frustrated. They never learn to see with an artistic eye and quit.
It’s about expressing and finding yourself and developing a soul through creative action, not just squirting out a picture that looks “correct”.
You only shortchange your own soul’s development by trying to skip the hard steps that it takes to develop a skill like that.
Because you can make an image on a computer that looks like it was drawn does not mean you can draw. Hell, I’m not even sure it means you can express yourself. The inputs you contribute when using an AI prompt are so minimal compared to the informational input required to create something in the physical space.
Don’t let anyone convince you that it’s an exclusive act. Find a pencil and get at it.
Drawing was for anybody who put in the work.
You may as well say football wasn't democratized until the release of FIFA International Soccer in '94.
Hence I disagree. Even if the training data curators are perfect, there is a risk of "model inbreeding."
And the curators are not perfect. In fact, they seem to prioritize (by necessity) volume and speed over quality and curation when training these LLMs and diffusion models.
Like imagine a label on a thing that says "Made without CNC machinery". So instead of finishing the thing on a CNC mill, they instead stuck it on a Bridgeport manual mill, and finished by hand. Or somebody finished the cast with a file.
Okay, and so? In the end, a hammer is a hammer. If it hammers well, what difference does it make how it got that precise shape? It's not like some inherent goodness is being imparted by a hand file.
Now I get some methods have flaws to them, and some form of associated harm. The problem I see with blaming specific tools is that it's simplifying the problem too much. Eg, if the problem is taking jobs, then picking up one particular tool to blame for that allows manufacturers to use a different tool and cause the same sort of issue. If the problem is say, pollution, then it's not at all a given that the replacement method will be ecologically friendly.
IMO the better thing to do is to target the underlying problem. If say, your issue is ecology, then you want to certify that the manufacture is as environmentally friendly as it can be, not that the thing isn't made from plastic, or there exists the chance that the non-plastic thing will even be worse for the environment.
And if someone is going to send me a sales pitch or email that GPT wrote, I'd rather they don't insult me and just send me the prompt.
I think TFA is a gimmick, but I see value in knowing whether I'm relating to a human brain or to a statistical model.
Or manga artists use things like screentones: https://www.youtube.com/watch?v=Q2U4EfKCfjI
I think AI will go down that road. Yeah, lots of people will mess around with prompts, but artists able to retouch the output, and to mix and match stuff will get much better results.
A hammer, a pair of shoes, a leather sofa, etc. They all have an element of craftsmanship and it seems like people who do that work usually take pride in it from an artistic and quality standpoint.
and the prompt makes no sense to you, because you are not able to interpret it the way you would interpret the generated output of said prompt. The information isn't in the prompt, and never was.
"Hand made" has been a selling point since things were first made by things other than hands. Example: "The factory, known as the Atelier, [...] it is the place where our craftspeople assemble each BUGATTI by hand."
In fact it's probably something completely incidental to the entire thing -- they're not assembled by hand because it's better, but because not enough of the cars are made to make automation make sense. It's just a pragmatic business decision, which then some marketer figured out how to spin.
Possibly but not necessarily. Automation limits what you can do as you are restricted to what the machines can do. Hand-made components do not have the same limits. Additionally, materials are often not perfect or uniform and a machine will work with different tolerances than a craftsman. Lastly, in certain cases the idea is the craftsman can discern the better materials and choose accordingly.
So for instance when making acoustic instruments, being "hand-made" means that the craftsman could tap-test the wood and use discretion on which pieces to use for each instrument. They would have examined the wood for both structural and form considerations. There would be special attention to assembly and anything custom can be done. Independent decisions would be made on a case by case basis instead of using a general rule.
Handmade also implies superior materials, because when automation is used, the inputs are often the most expensive part. When you pay for a handmade job, the labor generally far surpasses the cost of materials, so using premium materials is of little consequence to the final price.
But yes, marketing can be a big part of it in many cases.
This is basically the selling point of the entire luxury watches industry.
and fwiw, a lot of the assumptions in here are akin to saying at the dawn of photography that nobody would ever paint any more. Photography of course replaced certain entire categories of painting, but didn’t erase painting from the face of the earth altogether. Of course there are far more photographs than there are paintings, but volume alone is not the totality of meaning.
Generated content is strictly culturally regressive anyway. After a hundred years of ChatGPT will people still be writing prompts with “in the style of [person who produced all of their work before the year 2000]”? That would be a sad and boring future.
I have things I have written that took me two years for 8-10 pages. I wrote them to help me think through certain things.
Could ChatGPT have written it for me? Maybe. Probably not, though - I kind of had to discover what the content was supposed to be.
Could it have written it better than I did in two years? Probably not. Two years leaves you a lot of time to polish the phrases.
Could ChatGPT have taught me what I learned in those two years? No way.
When photography replaced many of the uses of painting, some painting jobs were lost, but photography ones were gained. Now, many creative jobs will be done by computers, leaving the world less enjoyable for people who enjoy creating.
Sure, some people will continue to be creative, but look at what happened to art after photography: we got modern art, where people just piss on a canvas and sell it for millions based on the name of the artist alone, not the value of the art.
AI is progress in the same way nukes are progress: you have to get it or you'll be left in the dust. But is humanity better for it?
> Art is not what I think when I’m painting. It’s what you feel when you’re looking
Yeah, because the Netflix catalog, for example, is not repetitive and stagnant.
These current advances will enable anybody with a unique idea to produce content. We are right before an immense explosion of human creativity.
I think we’re going to see an explosion of waste.
We are information processors. The input makes the output. What happens when you close that loop?
Examples: let ChatGPT (1) write blog posts, (2) generate scripts for YouTube videos, (3) create short stories so you can sell them as a book.
Painful to see. I would like to think nobody in his right mind is willing to consume these delightful pieces of work. Dead internet theory on the next level.
People tend to overestimate their talent, and get overly attached to their own ideas. So when a tool like midjourney/stable diffusion generates beautiful artwork based on a low effort prompt from the user, it's comfy to think that they had a big part in it.
The reality is that almost everything those tools make is equally gorgeous/amazing regardless of who uses it, so they're effectively interchangeable. As a tool for personal expression, it has a lot of value because of the impact it can have on the individual using it, but as art it's pretty worthless. A generated AI image could potentially have some cultural impact or value, but not on the merits of "it looks pretty", since they all look like that.
My experience browsing midjourney supports this. It's amazing how quickly you go from "wow" to "whatever". But that's probably cold comfort for a huge number of jobbing illustrators. Paying clients typically don't want art.
And I would argue that the current (mostly recent productions) Netflix catalog is indeed repetitive and stagnant. Originality in tv production is currently in a race to the bottom.
Those purists are mostly dead. Everyone uses technology in music today. And music is better for it.
I just want to say that some answers here are like when artists talk about engineering, we simply don't understand the topic and it shows.
There's a sadistic glee in proclaiming people nothing more than biological machines and acting like a scolding parent whenever any emotion is brought up in any form. I always smile whenever I see comments from one of these types of people explaining how fulfilling they find their lives. Sure, buddy. :)
2. The same argument could be made for "Not with a Computer" and invalidate this website based on identical principles.
3. The future AI brings is a huge unknown, but as we've seen with every major technological advance so far, it's never been nearly as bad as the most fearful and skeptical thought it would be at the time this was unknown.
I generally agree that it will become increasingly harder to distinguish non-AI generated content and authenticity will suffer a lot. Maybe the solution to this is get connected with other humans directly and provide authenticity as first hand experience.
This is the “disruption” that we in tech and VC have been saying XR/crypto/5G/the metaverse would bring. Except it came with a large body count for those who are directly impacted by SD/Llama in the wild.
More seriously, even if we are willing to assume that everyone will be honest about how much they use AI, how do you define whether an image is less than 10% AI generated? Number of pixels? Number of objects?
And what about writing? Does getting an LLM to check your grammar afterwards count as cheating?
This is subjective as all get out. If you write a novel and use Stable Diffusion to draw the cover, does that make the whole novel less than 100% human-generated?[0] What if they had used, say, lots of prompt engineering and inpainting runs instead of just typing in what they wanted and grabbing the first thing they saw? If, say, painting programs start using image generators as brush engines, does that lower the percentage more or less than the inpainting case?
[0] Keep in mind that most covers are designed and drawn by publishers, not the original writers. Writers making their own cover art is very much a self-publishing thing.
I have never considered the cover art for a novel (or most books) to be a part of the novel. The thought that it might be never occurred to me. It's the packaging. So, in my view, such a novel would be 100% human generated.
AI art created by talented artists and writers is the future.
But I’m confident that no amount of AI will make the expert obsolete. Like Photoshop, it’s going to be a complex tool you have to learn to master.
Second is art itself. We have high quality art prints that a large portion of people are happy to buy. Artists themselves are happy to sell prints of their own work. That product is entirely machine made except for the initial knowledge work by the artist. I'm not making a art, not art argument. I'm saying, people are happy to consume manufactured content, companies are happy to generate it for them.
The argument that 'humans will stop producing and things will stagnate' is totally non-serious and doesn't even deserve a retort, really.
is a precursor to a more existential revelation our culture is going to have,
the extent to which the entire precept of the rational self-aware agent that we carry as our model of ourselves, is a confabulated falsehood maintained by our own minds.
Most of what most of us do most of the time is habitual, instinctual, autonomic, pre-conscious, whatever—very few of us are present very much of the time.
Even highly-analytic complex logical reasoning can and does often transpire in something akin to a "fugue state," indeed that is a much-noted aspect of technical work like most here engage in.
That doesn't mean that we are not capable of genuine self-awareness, introspection, and methodical reasoned thought—but I don't think it's controversial (except in our lay conception) whether or not those things are primary or common modes of being in the world.
Record yourself for 24 hours and compare honestly the sequence of your utterances and behaviors with your internal record of agency, and you'll find you are on autopilot in some sense most of the time.
From this I infer that the first "real"-seeming AGI will, like us, be not a monolithic capable system but a relatively loosely coupled aggregate, with many components serving as analogs to aspects of mind and embodiment, largely distinct from and only loosely coupled to the "executive" function.
However, the items I added to my works have a stronger claim than 90% AI-free: I used "100% AI-free organic content."
I think I'll keep this stronger claim because I can; I have not tried any of these tools. Not even once. I refuse to.
I am betting that if I don't, I'll start to have more influence, like Paul Graham said a few days ago.
If, by some chance, this actually caught on and was some kind of indicator of quality, it would be in a predictive model's best interest to integrate the logo into any kind of web design it produces. Furthermore, no human-curated content farm would hesitate to include this symbol, regardless of the content's human 'purity'.
Anyway, nice idea. I like it. Kudos to you.
Yours, KodingKitty
I also liked the idea behind the label, but on second thoughts the whole idea behind a label sounds counter intuitive to me. Like there's some kind of truth in sticking a label on something. So, should I trust a label then? Is there some authority in a label? Do I even need authority to point out what's real and what's not? What's reality then? Does it even matter? Anyway, you get the point. A lot of questions.
And then I read further on that site, and it tells me how to use the label. Don't change the label. Don't change the color, don't change the text. I think it's human to break rules, in one way or another. Or at least to push the boundaries. So - to me - it's more human to change this label, and not use it as intended by the creator of the label.
Maybe that's the difference between AI and humans. AI will follow rules (set by its masters); whether it allows for randomization or not, it will follow rules. Human beings - sometimes - break rules.
If the goal is to highlight creation by humans, I’m struggling to see how slapping a big fat sticker that says “AI” in large type on it would be a good idea?
All it does is give AI more attention and associates the work with it.
Of course it does. If you make a purchase based on this badge and you later find out it's false, that's fraud. You don't need to have a certification program before false advertising becomes illegal.
There's a sort of insecurity about this. And I get it, people are freaking out. I feel unsettled in many ways and I truly believe this technology is profound. Wait until it's realtime and embedded in everything. But I don't believe it's a cause for concern. I don't think we need to legislate transformers trained on public data as something illegal, for example.
I think it's a tool, a fantastic tool and a powerful tool, but a tool none the less. It will democratize many things and it will lead to increased productivity and creativity. But we, humans, will find more things to do; ways we can't see right now because it wasn't possible and still probably isn't.
So made by human or w/e is a similar concept.
Soon, many digital image editors will incorporate some simple AI-powered features to detect edges, faces, various objects, etc. Are content-aware plug-ins to Photoshop AI?
This happened years ago. Agreed with the first part though - there is almost literally no possible route to avoid AI on consumer phones.
Gives a bit of info.
This is an absurd thought. AI content is still created by humans using AI, just like photos are created by humans using cameras.
Also who says that AI can't be as creative as humans or better? Why would humans be better at being creative "manually" instead of with machines?
I say AI creativity will be better than mere humans at creating stuff. Humanity will overall benefit from it and the world will be less dull.
I doubt if this concern is valid for humanity. Creating is an innate desire of human beings. We are going to have lots of AI-generated content, but I have confidence that people will continue to generate high-quality content too. Of course, how to find such quality content will be a different matter in the future. Maybe an industry-standard watermark or metadata for generated content is a cheap enough solution.
Well, to me, these anti-AI things are much more like NFT/crypto. It's just trying to create novelty out of nowhere.
To avoid AI content means intentionally avoiding sites that are designed for cheap monetization and nothing more. That includes most blogs today, at least the ones that aren't personal websites. Most 'review' websites are exactly this; they offer a generic description and add an Amazon affiliate link.
We won't know for sure what sites are using it and which aren't, but knowing the nature of the site (commercial/non-commercial) is a pretty good proxy.
The kicker is the "90% of content will be generated by AI by 2025" quote by an "expert" which links to a Yahoo-TV clips of some rando influencer spouting nonsense. Better luck next time, ChatGPT!
It does a lot more than that. It is capable of novelty and striking out in novel directions per the AI artist’s direction. Maybe there’s some soul-grounded capital-t truth forever out of reach, but that’s metaphysics.
People have always wondered if even people are creating anything new ("nothing new under the sun"), so it's quite silly to say that a model which produces data based on its training set and prompt is doing anything other than mix.
Some day AI will be able to create by itself, but the current state of the art isn't doing it.
Perhaps somewhat ironically, if we start identifying all human-created works as such, it creates a new set of pristine training data for future generative AI/LLMs.
I thought I’d have an open mind and read the “why” section.
> it helps your audience recognize your human-first approach
So, no. Refraining from using AI is not a human-first approach, when using AI can actually help significantly with human problems.
The badge is a false and outmoded signal which says more about the user being a virtue signaling type personality.
Other than that, it doesn't seem like a bad idea for people to be able to mark their work as "created by a real person" with a cool little badge like this. It's simple but it works.
Isn’t this already a “made by AI” artwork?
And how much is 10%? If a traditional painter that uses photo reference starts using AI generated photorealistic imagery instead, is that more or less than 10%?
* If there is any kind of real value to having this badge, then people will just use it even if they're using AI
* Rejecting technological advancement is like begging for extinction
No need to make websites or manifestos, do not worry humans will still human!
Anyway, I like the initiative. We'll need much more to make a digital divide that can guarantee human autonomy.
- For text prompting, a good starting point is reading the whitepapers in this repo [0].
- For AI art generation, a good starting point is browsing AI prompt libraries to see the results of good prompts.
Fine-tuning (and creating the examples for fine-tuning) is also an interesting challenge.
- The Stanford Alpaca repo [1] made it really easy to fine-tune LLaMa based on your custom dataset.
[0] - https://github.com/promptslab/Awesome-Prompt-Engineering [1] - https://github.com/tatsu-lab/stanford_alpaca
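To make the fine-tuning point concrete: the Stanford Alpaca repo trains on a JSON file of instruction-following examples, each with `instruction`, `input`, and `output` fields. A minimal sketch of preparing such a custom dataset (the example records are invented for illustration, and the prompt template is adapted from the one in the Alpaca repo):

```python
import json

# Alpaca-style training records: each example has an instruction,
# optional input context, and the desired model output.
examples = [
    {
        "instruction": "Summarize the following text in one sentence.",
        "input": "The quick brown fox jumps over the lazy dog.",
        "output": "A fox jumps over a dog.",
    },
    {
        "instruction": "Name three primary colors.",
        "input": "",
        "output": "Red, yellow, and blue.",
    },
]

def build_prompt(example: dict) -> str:
    """Render one record into a training prompt. Examples with input
    context get a '### Input:' section; examples without one omit it."""
    if example["input"]:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n### Response:"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n### Response:"
    )

# Write the dataset to disk as a single JSON list, the shape the
# Alpaca fine-tuning script consumes.
with open("my_dataset.json", "w") as f:
    json.dump(examples, f, indent=2)
```

The interesting challenge mentioned above is mostly in authoring good `output` completions; the plumbing itself is just this.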
[ Art Too Bad To Have Been Done By AI ]
/S but only a bit.
If there is such a way, someone please let me know.
In the 2000s, if you were talking on the internet, you were pretty sure that you were talking to other humans.
This will quickly reverse: soon you can be fairly sure you are talking to AI most of the time in anything remote (internet, phone, email, ...).
Which in turn will massively strengthen personal (like non-remote) face-to-face interactions.
phew. need GPT to show me how to properly use the badge.
Of course, a juxtaposition between this and Apple is very silly.
There is a lot of evidence that language drives cognition, and there have already been instances of ANIs creating their own efficient languages. Imagine that the year is 2040: elliptic-curve encryption is augmented by transcoding into AI languages to safeguard against quantum computing, or replaced with quantum-resistant algorithms altogether. A Turing-complete quantum processor was realized a few years earlier, or we finally figured out that ML and the probabilistic nature of quantum computing are a perfect fit. People tend to vastly underestimate the rate of progress over 10+ years.
I can't help but think that social networks and search engines will start to negatively rank AI generated text. In my opinion, this would impact many more people than we think.
I'm not quite convinced humans create original ideas as often as some think.
Any text I write is vaguely influenced by everything I've experienced (other media, real-life experiences), even if it's not a conscious process.
Depending on where you draw the line, I'd say most artistic work wouldn't qualify as actually new.
> displaying the badge on any asset does not guarantee the content is not majorly created by AI.
What a farce.
On the bright side, this is likely how it will play out, and the market for prompt "artists" will be nonexistent unless they add real value (which they rarely will).
We are meat-based; the new minds are silicon-based.
They have different strengths and limitations than we do, and they will soon surpass us in all cognitive abilities.
Our hardware evolved to help us survive and reproduce, but evolution is a slow process compared to the intelligent design we practice.
AI will surpass us and build itself beyond what we can reason about.
We will do art and work because we like to, just as we play video games; no other reason will survive.
What is your take on that, what do you think?