This does not mean anything more than that the AI has a greater breadth of training background, which is likely.
We get the output most likely expected from any of (or the average of) the humans whose writing/drawing/whatever was included in the input set.
What we will not be getting from the AIs is any creative output based on unique understanding, as we would from an intelligent, creative human. Many of the humans in the input set would see the same prompt and produce actual novel and meaningful output, not simply a cut-and-paste from prior works. (& yes, some novel output may come from some randomizing algo, but if it is correct, it is no more correct than the broken clock that is right twice a day.)
Or, another example: I was involved in a legal deposition where an "AI" transcription system was used instead of a skilled court reporter. The output LOOKED fantastic, until I actually read it, and it was absolute garbage. The standard errata sheet has room for the deponent to put in about a dozen corrections, and most deponents need less than a handful. My errata list ran to multiple pages. These errors often reversed the meaning of sentences: substituting "I have ..." for "You have ...", dropping or adding "not", or swapping in common names for unusual names (e.g., "Jack Kennedy" for "John Kemeny"; note that human transcribers always ask for the correct spellings of names at the next break, while this crap just inserted its guess like it had a clue).
So, even though the total "experience" or training set of the AI may go beyond the experience of the reader, so that some of the output is surprising, it is no more surprising than what a search engine produces. In fact, I think this is the best use of the AIs: train them on an enormous data set and have them provide possibly better results, defined as more on-point, but likely less thorough.