> Computers can't learn abstract concepts
Goalposts can be moved on whether it has "truly learned" the abstract concept, but at the very least neural networks can work with concepts to the extent that you can ask one to make an image more "chaotic", "mysterious", "peaceful", "stylized", etc. and get meaningfully different results.
When a model like Stable Diffusion has 4.1GB of weights and was trained on 5 billion images, the primary impact of one particular training image may be very slightly adjusting what the model associates with "dramatic".
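As a rough back-of-the-envelope check (taking those two figures at face value), the weights simply don't have room to store the training images; the storage works out to under one byte per image:

```python
# Back-of-the-envelope arithmetic using the figures above:
# a ~4.1 GB checkpoint trained on ~5 billion images.
weights_bytes = 4.1 * 1024**3          # checkpoint size in bytes
training_images = 5_000_000_000        # approximate training-set size

bytes_per_image = weights_bytes / training_images
print(f"{bytes_per_image:.2f} bytes of weights per training image")
# → 0.88 bytes of weights per training image
```

Less than a byte per image can't hold a copy of anything; it's consistent with each image making tiny adjustments to shared associations rather than being stored.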
> If the person directly copied another work, that's a derivative work and requires a license
Not if it falls under Fair Use. Here's a fairly extreme example of just how much you can get away with while still (eventually) being ruled Fair Use: https://www.artnews.com/art-in-america/features/landmark-cop... - though I wouldn't recommend copying as much as Richard Prince did.
> The inputs are directly used in the outputs
Not "directly" - during generation, a typical prompt-to-image model has no access to existing images and cannot search the Internet.