Simple statistical models that aren't "AI" so much as just generic ML were and are quite useful: like recommendation and newsfeed engines ("the Algorithm" as we call it so often today). Love 'em or hate 'em, they can be quite good at predicting interest/engagement.
The resurgence in deep learning in the 2010s has shown us new magic tricks, but they're still just that: parlor tricks. At least they're more convincing tricks than what we had 40 years ago!
That's what ultimately depresses me about AI. It's still just a parlor trick. We haven't actually taught computers to think, to reason, to be innovative. Deep learning is definitely having its day, but I suppose this too will pass unless we can unlock certain ways to make AI reliable and responsible. Or... to just start understanding it in general.
And what do you feel when we make these parlor tricks more capable than us at the majority of tasks?
And what do you feel when we understand it well enough to realize we're the same type of parlor tricks?
To me it seems like you're most interested in a magic 'aha' moment and will miss or not be prepared for how the road in front of us likely unfolds.
That’s called positivism and it has a lot of philosophical issues. I wouldn’t be so quick to assume that sensory appearance is equivalent to reality.
This seems like the logical fallacy of "begging the question" since it is far from apparent to me that they are "more capable than us at the majority of tasks."
I know nothing about AI but it seems like we're approaching it from the other end - the human mind seems like magic and when we approximate it using technology it feels like we'll reach a moment of "that's all it is?" and refuse to believe we actually did it because we doubt ourselves.
Along the same lines, if achieving equal human rights for all humans were a trip to the corner store, the fight for AI rights is going to be like Mount Everest.
I find this to be quite comforting. It means we haven't completely uprooted all of society overnight and have time to stop and think about what this new technology can do for us and what it means for the future.
AI is a branch of statistics. Nobody said that statistics must limit itself to quasi-linear models of numerical data. It was just a limitation of computational resources (initially "AI" was developed by human computers).
The trick is to get people not to associate the dictum "lies, damn lies and statistics" with "hallucinations, damn hallucinations and AI".
It isn't that much different than human behaviors.
People tend to repeat stuff they have seen done by their parents, siblings, friends, and the media. Listen to people making small talk in the street, repeating the same things over and over every day. The easy success of marketing, politicians, dictators. The power of marketing and the success of consumer society. Racism, bigotry, religion, addiction. All of these are easily explained because people barely think. They just respond to internal and external stimuli with recipes they have been taught to follow, without giving them a second thought.
Isn't this just semantics, and the expectations that go with them, really?
If the marketing language surrounding ML wasn't so hyperbolic and sci-fi-y ("artificial intelligence"? "neural network"? give me a break!) I think we all could agree that what we can achieve now is really interesting and impressive in its own right.
Even if these models aren't on a path to some kind of "thinking computer" as you envision it, their "parlor tricks" are doing things I would've relegated to the realm of sci-fi even a decade ago, much less 25 years ago.
We don't necessarily have to replicate the way the human brain works, but as long as the machine is capable of performing quasi-cognitive tasks, there will be immense value (and disruption) to society.
Disappointing for sure, if you're still waiting for a sentient robot, but they do something we thought was limited to humans back in the day: going to a bookstore, and having the owner tell you "hey, I know you enjoy mystery novels taking place in England, I might have something new for you".
If you think about it, we've gone the other way. We're teaching/conditioning humans to think less and react more. This has only gotten worse over the last few years, and I don't see any shift coming soon. Humanity unfortunately seems just as simple as the algorithm: use the right inputs in the right context and you can make most of us act just the way you want.
In terms of actual sentience and understanding:
I had ChatGPT answer questions in the following order:
1. Describe to me what happens when you throw sugar in coffee
2. Describe the same thing in terms of atoms.
3. Is this an example of entropy decreasing?
4. What if the arrow of time was reversed. Describe what happens again at the atomic level.
5. What if there were two regions in space one with increasing entropy and another with decreasing entropy. I place the coffee cup such that one half is in one region the other half is in the other region. What happens?
Each answer displayed an astonishing understanding of what occurs. This was the final answer to the fifth question:
-----------------------------------------------
The scenario you've proposed is purely hypothetical and quite speculative because it defies our current understanding of the laws of physics, particularly the Second Law of Thermodynamics, which stipulates that entropy in an isolated system tends to increase. Nevertheless, it's a fascinating thought experiment. Here's what might happen if you had a cup of coffee straddling two such regions:
The Increasing Entropy Region:
Dissolution and Distribution: On this side, sugar would behave as it does in our observable universe. It would dissolve in the coffee and disperse randomly throughout the half of the liquid.
Mixing and Thermal Energy: Molecules in this region would continue to collide and interact based on thermal motion, contributing to faster dissolution and dispersion of sugar.
Sweetening: The coffee in this half would become sweet as the sugar molecules interact with the liquid and dissolve.
The Decreasing Entropy Region:
Reforming Crystals: On this side, dissolved sugar would spontaneously start forming sugar crystals, leading to a decreasing level of entropy.
Unmixing: The sugar would start to separate from the coffee and move to form a distinct layer or solid chunks, further decreasing entropy in that half.
Unsweetening: Any sweetness initially imparted to this side would disappear as sugar molecules revert to a more ordered state.
Challenges:
Interface: The most mysterious part would be at the interface where the two regions meet. Would a sugar molecule crossing from the increasing to the decreasing entropy region suddenly reverse its course? Would there be some form of "entropy gradient" or "entropy tension" at the boundary? This part is extremely speculative and likely paradoxical.
Energy Conservation: Energy conservation laws might become complicated here. The energy required for sugar to crystallize in the decreasing entropy zone would have to come from somewhere, and how this balances with the increasing entropy side would pose a significant scientific puzzle.
Thermodynamic Anomalies: The coffee cup itself would be under very peculiar thermodynamic conditions, subject to two entirely different sets of rules. The temperature might behave inconsistently across the boundary, and heat exchange would likely be disrupted in perplexing ways.
Taste Experience: If you were to taste the coffee, each half of the cup would offer a radically different experience, one sweet and the other likely reverting back to its original unsweet state.
Perceptual Paradox: From a human perception standpoint, witnessing sugar both dissolve and crystallize in the same cup of coffee would be a profound and likely disorienting experience.
Time's Arrow: This set-up implicitly questions the nature of time itself. Time seems to behave differently on each side, challenging our deepest notions about causality, sequence, and even reality.
Again, it's worth noting that this scenario conflicts with our current understanding of physics, but it's a fascinating avenue for speculative thought. Such a universe, if it could exist, would fundamentally challenge our understanding of physical laws and reality as we know it.
-------------------------------------------------
This answer is anything but a magic trick. It shows a deep understanding of conceptual reality by answering complex questions that are unlikely to exist anywhere else on the internet. The AI literally talked about the mysterious interface without me prompting it at all. It knows the conceptual consequence of placing two regions of space with differing levels of entropy side by side. The answer is not simply a trick of the next-best language token. Is it a practical answer? No, because the question itself isn't practical. But an impractical answer does not make this answer a parlor trick.
The entire internet is dismissing this thing as a parlor trick because LLMs fail to add large numbers. I mean come on man. You don't need to be able to do math like a calculator in order to "understand" things.
AI is not yet completely practical. That much is true. However, it is clearly no longer a parlor trick, and it is getting closer and closer to becoming practical. When that day comes... good luck to us all.
It is a bit like technobabble.
The responses are based on the accumulated knowledge of humans, not machines. The systems have not thought through anything and understand nothing. A process of analysing or pattern-matching the input question against the stored data retrieves an answer. But that stored data is human knowledge and human effort, not machine.
If you look very carefully at the results obtained, they either contain "interesting errors" (which an intelligent human would pick up on) or are a summation of human knowledge.
The answers still have to be tested and confirmed for rationality and applicability by humans. In other words, this is a tool like all tools created by humans.
I have seen too many examples of what are supposed to be correct answers that contained subtle and not so subtle errors.
Like every system we have ever made: garbage in gets us garbage out. We are the ones responsible for checking those answers and making sure that they make sense in the real world.
That said, I agree with your arguments and conclusions. Humans are not able to do this kind of reasoning until they have had some education on the topic, and when they reason they get feedback that reinforces their conclusions: others agree, or they get a more rigorous proof. We could do the exact same thing with an LLM. Would that potentially make a reasoning AI? For me the jury is still out, though I think we need better algorithms for it to be feasible.
https://chat.openai.com/share/71224df4-5c6c-45f7-88fd-eec316...
Is ChatGPT playing a trick on us by mimicking the sentience of the humans whose writings it ingested, or is the trick that by doing so it began to actually think like us and so simulates a conscious mind within?
I lean towards the former; but we don't know what sentience even is yet, so we can't prove it.
It's a season.
Like the seasons, the cycle repeats.
Examples: the graphical user interface. The iPhone cemented mobile computing as a permanent fixture, it and Android bringing the internet and computing to billions of humans for the first time, not just the wealthy industrialized world. Or IBM DB2 for SQL. Or Ford's Model T. Or the Gutenberg press. None of these were the first of their kind. Maybe only on the n-th iteration, two decades after coming out of DARPA or global university research, is something ready for commercialized prime time.
Computing was solved by Ada Lovelace and Babbage. Everything after that is electrical engineering, software, fabrication, productization, manufacturing, displays, sensors, etc.
The semiconductor was a big deal in making it happen. But that's almost beside the point of the theory of computation, if it can be realized in other novel ways using alternative materials science.
100m people used this latest iteration. It’s hardly a winter.
Do the intrinsic properties of the system really matter at the end of the day if it performs as well as we do at some task? Heck they’ve been doing many things better for decades, but those are the types of tasks we take it for granted that a machine should be able to do. Solve differential equations. Play chess. But now computers are doing “human” tasks competently. Writing creative fiction. Generating graphical art.
We don’t have a good working definition or metric for intelligence. Surely it is not a monolithic property. Animals exhibit many traits we associate with intelligence. Some of the stuff GPT-3+ generates sounds pretty intelligent. It is the type of thing an intelligence may have produced, because it was trained to do just that. If we look at intelligence as a cluster of traits, or behaviors, I think we are surrounded by intelligence: human, artificial, or otherwise. It doesn’t have to be an AGI to fall in that category. It doesn’t even have to be particularly impressive.
I have a definition of intelligence. [...]
Intelligence is prediction.
In the case of intelligent living processes ranging from single celled organisms to complex multicellular life, intelligence arises from the need to predict the future to survive and reproduce. More intelligent organisms build more elaborate models of the world using better developed senses in order to do so.
Humans model the world primarily with language, which allows us to share our models with each other, across both space and time! Without language, it is extraordinarily more difficult to communicate complex abstract thoughts. As a side effect of our high level of intelligence, our wetware is capable of modeling things outside of language, such as mathematics. [...]
In general, I think we need to stop equating intelligence with consciousness, agency, moral value, and living things. These are entirely different concepts, and, as a species, we are blinded by our egotistical identity of being smarter than the other animals.
According to my definition, intelligence is actually all around us. We are blind to it because we focus only on how intelligence manifests in humans (defined by our specific social organization and biological senses), and then use that as a benchmark to judge every other thing in the world. A less socially charged definition of intelligence would make it easier to compare intelligence across living and non-living processes, though it would not be "popular science" useful for ranking humans.
I can't think of any areas where Amazon does worse today than these two.
Reviews are so untrustworthy they are just noise.
Those coffee beans I ordered a month ago? I have to go back to my orders page, search (and for some reason their search is dirt slow), then wade through unrelated products to finally find them.
But oh, hey, you just bought a vacuum cleaner? I know what you would really like! MORE VACUUMS!
I guess they innovated on those two areas in 1998, and then since then the only thing they did was remove the `review-count-rank` sorting option so their AI can suggest Amazon Brand products.
This is a common refrain, but I would be amazed if it weren't backed up by data, and I anecdotally fit the mold. If I buy a product that I really like, then I'm likely to buy it again to give as a gift to friends. If I buy something like a vacuum and I don't like it, then I'm likely to buy a different model that better fits my needs. It seems silly when you aren't interested in buying another one of whatever item you bought, but there are multiple legitimate reasons people would want to.
Ask the user if they liked the product and would be likely to gift it in the future, and add the item to a "Gift ideas" list.
If the user returned it, immediately start suggesting an alternative.
Blindly recommending the same thing just because there's a correlation seems stupid. It would be nice to at least have a clearly visible button to stop recommending an item, since I'm done purchasing anything similar for the next couple of years.
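The policy being suggested could be sketched roughly like this. This is a toy illustration only, not Amazon's actual logic; the field names (`returned`, `liked`, `would_gift`) and the two-year cooldown are made-up assumptions:

```python
from datetime import datetime, timedelta

# Hypothetical post-purchase recommendation policy (illustration only).
COOLDOWN = timedelta(days=730)  # assume ~2 years before re-suggesting the category

def next_action(purchase, now):
    """Decide what to recommend after a purchase event (made-up fields)."""
    if purchase.get("returned"):
        return "suggest_alternative"   # user disliked it: offer a different model
    if purchase.get("liked") and purchase.get("would_gift"):
        return "add_to_gift_ideas"     # user might buy it again as a gift
    if now - purchase["date"] < COOLDOWN:
        return "suppress_category"     # done buying vacuums for a while
    return "normal_recommendations"

purchase = {"returned": False, "liked": True, "would_gift": True,
            "date": datetime(2023, 1, 1)}
print(next_action(purchase, datetime(2023, 6, 1)))  # add_to_gift_ideas
```

The point of the sketch is that a single explicit signal from the user (liked it / returned it) dominates any correlation the recommender could mine from purchase history alone.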
Amazon did literally this to me just a couple days ago.
I'd bought a vacuum cleaner on Amazon a week earlier, and, when doing an Amazon checkout of a cart with one item, it threw up a list of consumable items that I'd bought in the past, to possibly add to the order... including another one of those $120 upright vacuum cleaners.
(Maybe they have data that says this makes them more money, even though I'd guess it might hurt customer confidence in the site.)
Amazon: Sellers! Do you want to advertise to customers interested in vacuums? Seller: YES! Take my money!
(Amazon proceeds to uprank vacuums for customers with prior vacuum sales.)
I have worked on these same recommendation systems. It's also the most frequent source of on-call issues. The problem is mostly due to lag in event processing (especially orders).
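One common mitigation for that event-processing lag is to filter candidates against the freshest purchase signal available, not just the pipeline's view. A minimal sketch, with hypothetical inputs (the function and field names are made up, not any real system's API):

```python
def filter_recommendations(candidates, purchased_ids, pending_order_ids):
    """Drop items the user already bought.

    `purchased_ids` comes from the (possibly lagging) event pipeline;
    `pending_order_ids` is a fresher signal, e.g. the current session's
    own checkout, used to cover the lag window. Both are assumptions.
    """
    seen = set(purchased_ids) | set(pending_order_ids)
    return [item for item in candidates if item["id"] not in seen]

candidates = [{"id": "vacuum-120"}, {"id": "coffee-beans"}, {"id": "filter-bags"}]
recent = filter_recommendations(
    candidates,
    purchased_ids=["coffee-beans"],    # from the lagging pipeline
    pending_order_ids=["vacuum-120"],  # just ordered this session
)
print([i["id"] for i in recent])  # ['filter-bags']
```

If only the lagging `purchased_ids` were consulted, the vacuum just bought this session would slip through and get recommended again, which is exactly the failure mode described above.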
https://developers.cloudflare.com/stream/viewing-videos/usin...
Open to any recommendations for an alternative, as I too am quite displeased with the state of such things. But I still prefer it to YouTube.
I believe the media server is set to require the Referer header to prevent embedding on alternative origins.
[1]: https://w3c.github.io/webappsec-csp/#frame-ancestors-navigat...
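For reference, the `frame-ancestors` directive from [1] is set in a response header and tells the browser which origins may embed the page. A hypothetical example (the allowed origin is a placeholder, not the actual site's policy):

```
Content-Security-Policy: frame-ancestors 'self' https://videos.example.com
```

Note this is a different mechanism from a Referer check: `frame-ancestors` is enforced by the viewer's browser, while a Referer check is the server inspecting the incoming request.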