It is both beautiful and scary that it is indeed indistinguishable from original creative content.
Any content generated by ML is very unlikely to be anything other than original.
Imagine a linear regression. Now pick a random point on the fitted line. How likely is it that this point coincides with an actual data point? Very unlikely.
ML is simply a multidimensional version of this: some 1,000-dimensional surface, where the result you see is simply a point on that surface. The likelihood of that point coinciding with an actual data point is astronomically low.
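The regression analogy above can be sketched in a few lines. This is a toy illustration with made-up data, not anything from the article: fit a line to noisy points, pick a random point on the fitted line, and check whether it exactly matches any training point.

```python
import random

# Hypothetical toy data: noisy points scattered around y = 2x + 1
random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 + random.gauss(0, 0.5) for x in xs]

# Ordinary least-squares fit of a line y = a*x + b
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# Pick a random point on the fitted line and check whether it
# exactly coincides with any actual data point
x0 = random.uniform(0, 10)
y0 = a * x0 + b
coincides = any(abs(x - x0) < 1e-9 and abs(y - y0) < 1e-9
                for x, y in zip(xs, ys))
print(coincides)  # almost surely False
```

The generated point lies on the learned curve, not in the training set, which is the sense in which the output is "original".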
Nobody is being carried away here. What you are seeing is raw creativity by an AI. It is a highly simplified version of human creativity, but the deep fundamentals are identical.
In fact, all intelligence is simply a huge surface of thousands of dimensions or more: a giant curve-fitting exercise. The only gaps between ML and human intelligence as of now are the number of neurons, the training algorithm, and the actual template for the equation describing the model. But in essence we have the fundamentals down at a high level.
That doesn't mean AIs can't have robotic bodies or access to complex simulations; they could also have a society of their own or be integrated into ours, and evolutionary techniques could be part of that process. AlphaGo, for example, was given a good enough environment and evolutionary selection of agents, and it surpassed human ability very quickly. It's only a matter of time until they have all our advantages.
I mean, look at this sentence.
> But overall, I think the point of the article is that the AI was able to produce something fairly coherent on its own.
The AI didn't just combine words. It figured out the context of the conversation and the intention of the comment it's responding to, and made an appropriate response in the English language.
This isn't true. It's been explained numerous times on HN how mistaken this view is.
Language models do not work like this. They can copy content, but that's usually for something like the GPL license text.
Generally they work on a character-by-character basis, predicting the most likely character to appear next. This very rarely results in copied text, and almost never in rare text being copied.
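The prediction mechanism described above can be sketched with a toy bigram model. This is a minimal illustration only: real language models use neural networks trained on vast corpora, and the corpus and seed character here are made up.

```python
from collections import Counter, defaultdict

# Toy corpus (illustrative); real models train on far more text
corpus = "the cat sat on the mat. the cat ate the rat."

# Count, for each character, which character tends to follow it
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(ch):
    # "predict the most likely character to appear next"
    return follows[ch].most_common(1)[0][0]

# Generate text one character at a time from a seed character
out = "t"
for _ in range(10):
    out += most_likely_next(out[-1])
print(out)  # prints "the the the"
```

Even this crude model produces fluent-looking output that is not a verbatim copy of any sentence in its training data, which is the point being made about copying.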
So what is left, in your opinion? Do you really think we are more than complex remix machines? What is fundamentally different about us compared to current AI?