If we could develop literal eyeballs that looked at these images and translated the information the way humans do, the resulting capability would still be no more human-like (in the sense that it should be afforded some human-like status) than any other program, IMO.
If we achieved AGI tomorrow, we'd still need to have a conversation about what it is allowed to "see", because our current notions about humans seeing things are all based on the constraints of human capability. Most people understand that a surveillance camera seeing something and a human seeing something have very different implications.
In the short term, it's a conflation that I'd argue keeps us from seeing clearly what these systems are and are not, and leads to some questionable conclusions.
In the long term, it's a whole other ball of wax that will still require either new regulations or new ways of thinking.
You said a lot of words, but I believe your argument comes down to “computers are super powered compared to humans doing the same thing”? Is that accurate? Because magnitude of ability, to me, makes no difference at all. It’s perfectly acceptable for a human to study the artwork of a specific person and then create their own works based on that style. Why wouldn’t it be the same for an automated process?
> I believe your argument comes down to “computers are super powered compared to humans doing the same thing”? Is that accurate?
No, that doesn't really touch it. The speed/power disparity between humans and computers at certain tasks is certainly a factor to consider, but the more fundamental point I was trying to make is much simpler: "computers and humans are fundamentally different, so let's stop building arguments on the mistaken belief that they are the same".
> Because magnitude of ability, to me, makes no difference at all.
What is your position on autonomous AI weapons? Does that position change when there's a human in the loop? If such weapons were suddenly available to everyone, would that be functionally no different than allowing people to own firearms or baseball bats?
> It’s perfectly acceptable for a human to study the artwork of a specific person and then create their own works based on that style. Why wouldn’t it be the same for an automated process?
I'd turn that question around: why would it be the same for an automated process?
It is perfectly acceptable for a human to shoot an intruder entering their home in most states if they believe their life is in danger. An AI-controlled gun would be far more effective (I wouldn't even have to wake up!), but is clearly in a different category.
Is a human sitting on a neighborhood bench in view of your house the same thing as a surveillance camera on a nearby telephone pole? I think the answers to this question are useful when looking at the emerging issues of AI, at least to orient our basic instincts about what feels ok vs. what doesn't.
The AI software has only "learned" in the sense that it has operated on the input data such that it can now produce outputs convincing enough to make it appear to "know" what it is doing.
Whatever the similarities, such learning lacks the vast majority of the context and content of what a human learns by viewing the same image, such that the word "learn" means something fundamentally different in each situation.
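To make "operated on the input data" concrete, here's a deliberately toy sketch of the kind of optimization loop underneath such systems (the model, data, and loss are all stand-ins, and real systems are vastly larger, but the "learning" is the same in kind: weights nudged repeatedly to reduce prediction error on the inputs):

```python
# Illustrative only: a toy model fit to random data with plain SGD.
import torch

model = torch.nn.Linear(10, 1)                       # stand-in for the network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
inputs = torch.randn(100, 10)                        # stand-in training data
targets = torch.randn(100, 1)

for epoch in range(100):
    predictions = model(inputs)
    loss = torch.nn.functional.mse_loss(predictions, targets)
    optimizer.zero_grad()
    loss.backward()                                  # compute error gradients
    optimizer.step()                                 # nudge weights downhill
```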
It's perfectly acceptable for a human being to drive a car, but driving one drunk is completely unacceptable. Conversely, there is no rule against creating or consuming art while intoxicated.
So to answer your question: because it is not a matter of life and death. Take your argument and apply it to mass-produced goods that were once the realm of only skilled craftsmen.
> Is a human sitting on a neighborhood bench in view of your house the same thing as a surveillance camera on a nearby telephone pole?
If a person never leaves and keeps notes, yes, it is exactly the same. I'd call the police for stalking. The issue here is privacy, which is tangential to AI reproducing the styles of known individuals.
> The AI software has only "learned" in the sense that it has operated on the input data such that it can now provide outputs that are of convincingly high quality to make it appear to "know" what it is doing.
Completely disagree with you about the nature of learning here. If a person produces art in the style of an individual, they have no idea of the internal machinations of the original artist; they just "appear to 'know' what they are doing".
You've lost me here. Are you saying that the most important factor when judging whether or not something is appropriate is based on whether or not the activity is dangerous enough to be fatal?
There are plenty of laws and cultural/ethical norms that restrict behavior for many other reasons.
> If a person never leaves and keeps notes, yes, it is exactly the same. I'd call the police for stalking.
You're arguing that a person taking notes with a pen and paper is the same as a video camera recording the same scene?
> The issue here is privacy, which is tangential to AI reproducing the styles of known individuals.
The point is that two forms of "seeing", one mechanical and one biological, have very different implications. If you don't believe that, ask the hypothetical person with a notebook to provide you with a 4K rendering of the scene over the last 30 days.
The AI reproducing art is just a single use case. The point of concern has little to do with how innocuous it is to produce images, and everything to do with whether it is acceptable to use arguments about humans when judging what is or is not acceptable in an AI program.
> Completely disagree with you about the nature of learning here. If a person produces art in the style of an individual, they have no idea the internal machinations of the original artist, they just "appear to 'know' what they are doing".
Frankly, this is nonsense. We may not understand all of the underlying processes involved in learning, but we certainly know a lot more than nothing. Even if we knew literally nothing at all about the human brain, there would be no grounds to conclude that this lack of knowledge implies humans run some internal denoising algorithm when imagining what they will draw next.
We know enough to know that human processing of information is subjective, contextual, cultural, and emotional, with a myriad of other factors involved.
We know enough to know that what software like Stable Diffusion is doing looks very little like the human process for achieving a similar outcome, even if there are biologically inspired components inside.
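For reference, here's a stripped-down sketch of the denoising loop at the heart of a diffusion model like Stable Diffusion (the real system runs a text-conditioned U-Net in a learned latent space with a proper noise schedule; the toy model and linear step here are purely illustrative, just to show the shape of the process):

```python
# Illustrative only: start from pure noise, repeatedly subtract the
# network's estimate of that noise, and an image-like tensor emerges.
import torch

def denoise(model, steps: int = 50, shape=(1, 4, 64, 64)) -> torch.Tensor:
    x = torch.randn(shape)               # begin with pure Gaussian noise
    for t in reversed(range(steps)):
        eps = model(x, t)                # network's guess at the noise in x
        x = x - (1.0 / steps) * eps      # remove a fraction of it each step
    return x                             # decoded to pixels in the real system

toy_model = lambda x, t: 0.1 * x         # stand-in for the trained network
sample = denoise(toy_model)
```

Whatever one thinks of the analogy to human imagination, that loop is the "learning put to use": arithmetic on tensors, not deliberation.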
No one knows why we are conscious. We have sliced the brain up a thousand ways, and we will slice it up a million more, and we will never find consciousness, because it is an emergent property of a healthy brain, just as light is an emergent property of a working light bulb. No matter how you disassemble a light bulb, you will never find the light. I grant that you'll eventually figure out how light is produced, but the assumption that a light bulb contains light is wrongheaded. It's just a metaphor.
There is no worse slander than the truth: Strong AI cannot be achieved, not with digital computers and programming and machine learning, and most likely not by any other method either. Please, please grow up, and set aside your childish beliefs, because we need you now more than ever, here, in the real world.
If you place a human and a computer in front of a painting, the human seeing it is a consequence of biology, while the computer seeing it is a consequence of design.
There's always a distinction between happenstance and premeditation.