Pattern matching over large amounts of data can solve many things; eventually we can probably get fully generated movies, music compositions, and novels. The problem is that all of the content of those works will have had to be formalized into rules before it is produced, since computers can only work with formalized data. None of those productions will ever contain an original thought, and I think that's why GPT-3's fiction feels so shallow.
So it boils down to a philosophical question: can human thought be formalized and written as rules? If it can, no human has ever had an original thought either, and the point is moot.
The fact that feelings of love and closeness could be prompted by a mere chemical was deeply saddening to me. It wrecked my worldview.
"Love is just the result of some chemical? Then it's not even real!" I thought to myself.
Fast-forward about 20 years, and that has proven to be an obvious, massive, and useless oversimplification.
Of course love isn't "just a reaction caused by a chemical." It's a fantastically complex emergent property of our biological system that we still absolutely do not understand.
It's the same with thinking: are parts of it analogous to pattern matching? Sure! Is this the whole story? Not even close.
Now, to counter the contrarian view: many of us live in echo chambers and go with the popular opinion instead of thinking critically, so maybe that's a bar too high even for humans.
And how do you do that? By pattern-matching on "high-quality sources."
LLMs do not have that capability, fundamentally.
Making totally new innovations in art, particularly ones that people end up liking, is a whole different ball game.
Look at something like [Luncheon on the Grass](https://en.wikipedia.org/wiki/Le_D%C3%A9jeuner_sur_l%27herbe)
This painting was revolutionary. When it was first exhibited in Paris, people were shocked. It was rejected from the Salon (the most prominent art exhibition of the time). Yet, 10 years later, every painting in the Salon resembled it. And you can draw a line from this painting to Monet, from Monet to Picasso, from Picasso to Pollock....
Obviously, none of these are totally new innovations; they all came from somewhere. Pattern matching.
The only difference between this and these language models is that Manet and artists like him used the rich sensory experience they obtained outside of painting to make new paintings. But it's all fundamentally pattern matching in the end. As long as you can obtain the patterns, there's no difference between a human and a machine in this regard.
I was thinking the same: can a (future) model be like Leonardo or Beethoven, and actually innovate?
Assuming that what Beethoven did is not "just" making music similar to pre-existing music.
And yes, I'm aware the bar was raised from "average human" to Beethoven.
It seems to me that making art that people like is a combination of pattern matching, luck, the zeitgeist, and other factors. However, it doesn't seem like there's some kind of unknowable gap between "making similar art" and "making innovations in art that people like." I'm of the opinion that all art is in some sense derivative: the human mind integrates everything it has seen and produces something based on those inputs.
All art is derivative.
Do you have evidence that human brains are not just super sophisticated pattern matching engines?
Humans read novels, listen to compositions, watch movies, and make new ones similar in some ways and different in other ways. What is fundamentally different about the process used for LLMs? Not the current generation necessarily, but what's likely to emerge as they continue to improve.
The strongest evidence I have is that people are notoriously difficult to predict, individually.
If so, it means the union of all human expertise is a few gigabytes. Having seen both a) what we can do in a kilobyte of code, and b) a broad range of human behavior, this doesn't seem impossible. The more interesting question is: what are humans going to do with this remarkable object, a svelte pocket brain, not quite alive, a capable coder in ALL languages, a shared human artifact that can ace all tests? "May you live in interesting times," indeed.
Clearly the key takeaway from GPT is that, given enough unstructured data, an LLM can produce impressive results.
From my point of view, the flaw in most discussion surrounding AI is not that people underestimate computers, but that they overestimate how special humans are. At the end of the day, every thought is a bunch of chemical potentials changing in a small blob of flesh.
It is probably true that at any given point, many, many people have had the same or very similar ideas.
Those who execute, or who are in the right place at the right time to declare themselves the originator, are the ones we think of as innovators.
It isn't true, or is rarely true. History is written by the victors (and their simps).
No, and I think it's because human thought is based on continuous inferencing of experience, which gives rise to the current emotional state and the feeling of it. For a machine to do this, it will need a body and the ability to direct attention, at will, to the things it is inferencing.
Right now it's possible to simulate memory with additional context (e.g. a system prompt), but that doesn't represent existence as experienced by the model. If we want to go deeper, the models need to actually learn from their interactions, update their internal networks, and have some capability for self-reflection (i.e. "talking to themselves").
I'm sure that's a highly researched topic, but it would demand extraordinary computational power and would cause a lot of issues if such an AI were let loose in the wild.
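The "simulated memory" the comment above describes can be sketched in a few lines. This is a toy illustration, not any real library's API: `fake_model`, the message format, and the system prompt are all made-up stand-ins for an actual chat model call. The point it shows is that the only "memory" the model ever sees is whatever we choose to re-send in the context each turn.

```python
# Toy sketch of memory-as-context. `fake_model` is a hypothetical
# stand-in for a real LLM call; nothing persists inside it between calls.

SYSTEM_PROMPT = "You are a helpful assistant. Known fact: the user's name is Ada."

def build_prompt(history, user_msg):
    # Re-send the whole conversation every turn; the "model" itself
    # retains nothing, so all memory lives in this re-sent text.
    lines = [f"System: {SYSTEM_PROMPT}"]
    for role, text in history:
        lines.append(f"{role}: {text}")
    lines.append(f"User: {user_msg}")
    return "\n".join(lines)

def fake_model(prompt):
    # Placeholder: "remembers" the name only if it appears in the context.
    return "Hello, Ada!" if "Ada" in prompt else "Hello, stranger!"

history = []
for user_msg in ["Hi there", "Do you remember my name?"]:
    prompt = build_prompt(history, user_msg)
    reply = fake_model(prompt)
    history.append(("User", user_msg))
    history.append(("Assistant", reply))

print(history[-1][1])  # the model only "remembers" what we stuffed back in
```

Drop the system prompt (or truncate the history to fit a context window) and the "memory" vanishes, which is the commenter's point: this is context plumbing, not experience or learning.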