Right, but that equally fits a biological neural network if you zoom in that close. You'll need more than Wikipedia to appreciate what deep neural networks are doing here; the key is dimensional space. What DNNs do that is similar to the human brain is order "concepts" in high-dimensional space. Colors, textures, shapes, and hierarchies of all of these are organized and cross-referenced with text in an incredibly complex connectome. It would be useless to memorize images alongside their textual descriptions, as that would be horrendously inefficient and ineffective at inference time. Instead, the model must do what we do and understand what makes an image a "landscape," a "portrait," or a "cartoon." It needs to understand what an artist's style is and how to apply it to a work never before created.
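To make that concrete, here's a toy sketch of the geometry involved. Nothing below is a real model: the 4-d vectors and the axis meanings are made up stand-ins for what a trained text-image model actually learns in hundreds of dimensions. The point is only that "cross-referencing" text with images means both get mapped into one shared space, where closeness (here, cosine similarity) stands in for conceptual relatedness:

```python
import math

def cosine(a, b):
    """Cosine similarity: in embedding spaces, direction carries the
    meaning, not magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical 4-d embeddings; axes loosely read as
# [natural scenery, human face, flat shading, warm palette].
embeddings = {
    "image: mountain photo": [0.9, 0.1, 0.1, 0.3],
    "text: 'a landscape'":   [0.8, 0.0, 0.2, 0.2],
    "text: 'a portrait'":    [0.1, 0.9, 0.1, 0.4],
    "text: 'a cartoon'":     [0.2, 0.3, 0.9, 0.6],
}

# Score the image against each caption: no memorized (image, caption)
# pair is looked up; similarity falls out of where things sit in the space.
img = embeddings["image: mountain photo"]
scores = {k: cosine(img, v) for k, v in embeddings.items()
          if not k.startswith("image:")}
best = max(scores, key=scores.get)
print(best)  # → text: 'a landscape'
```

This is why rote memorization would be useless at inference: a never-before-seen mountain photo still lands near "landscape" simply because its features place it there, which is the sense in which the model "understands" what a landscape is.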
"Understanding" can only mean ordering meaningless letters and pixels in multidimensional space so that they line up with human understanding (and human "understanding," in turn, can only mean ordering meaningless sensory perceptions in the brain's multidimensional connectome such that reality can be approximately predicted and controlled). The only systems known to do this efficiently are neural networks, biological and artificial.