At the same time, for many of us who do find it useful, a lot of the initial use was experimentation to find the right use cases. I still use it, but only for the things where I know it works for me, so I use it less than at the peak, when use was dominated by learning.
Others will have found areas where alternatives like self-hosted LLMs are good enough.
Use will likely grow again over time as people get more experience and build useful things on top of it. But it won't be the same rush.
We used to just call that "a search engine". Remember that? Somebody would put up a website about their favorite pet topic and when the search engine unearthed it, that's what you got.
It was the ad-ification of search engines that killed that. So, AI allows us to go back to the Internet circa 2000? That's a big innovation? I mean, I'll take it, but that's a pretty low bar ...
I actually use it, and I find that 99% figure ridiculously high.
> The amount of times it's wrong or misleading is not significant enough to even be annoying during day to day use
There may be use cases where it isn't wrong a lot, or you may have a high tolerance before annoyance hits, or you may be failing to detect when it's wrong, but that certainly doesn't match my experience.
well then they wouldn’t be people now, would they?
your reaction is fair. and rare.
ChatGPT and its ilk don't enable me to do something today that I couldn't do yesterday. Nor do they enable me to do something an order of magnitude faster than I could do yesterday.
Contrast this to when microprocessors hit. Suddenly, things like industrial control went from the size of multiple refrigerators to a PC board. When the price dropped (things like the 6502), engineers went absolutely bonkers building amazing things.
"AI is the exact opposite of a solution in search of a problem. It's the solution to far more problems than its developers even knew existed."
And that's why I personally don't have much faith in the LLM approach. Natural language is too full of ambiguity to pretend it delivers meaning. It makes oblique references to meaning, based on awful assumptions about the audience and presumed mental context.
I see enough absurd miscommunication and word salad between native speakers. I really don't want a magic box to vaguely emulate this.
What I think he is trying to say is that it is a solution that addresses a broad range of problems that are clear now but weren't envisioned by its creators. That is both impossible given the early claims (its creators already billed it as a general solution with essentially no limits) and, even if you ignore those claims and reduce it to a question of it being broadly applicable as a solution, generally premature (most of the more specific things it is billed as a solution for, it hasn't solved, though some people might have ideas, not yet proven in practice, of how it can be a component of a solution).
It’s like nobody wants to talk about reasonable solutions that incrementally make things better, everything has to be some meme “revolution”
The cap table is moot in the near-term because of the profit precedence structure.
what he actually said: [The exact opposite of (a solution in search of a problem)] => naive translation (a problem in search of a solution)
but what he meant is: a solution in search of a problem that ended up finding far more problems than anyone suspected.
"problem in search of solution" => bad. exact opposite of bad => good.
what it really is? I think closer to the first one.
why do they talk in such gobbledygook
Anyone have examples of how GPT can help there?
Good examples are a typical courier service, or FedEx. Companies like Amazon also run their own delivery services.
They usually have standard boxes/envelopes with standard barcode markings for correspondence.
All these boxes/envelopes are gathered from the sending offices and concentrated in a sort facility, where machine vision can sort them into a separate container for each destination branch office; for some places, a big central office has its own container.
Then the containers are loaded onto a big truck and driven to the destination city, where they are unloaded at the destination branch office.
At the destination branch office, the next round of sorting distributes packages into containers for the small offices.
Then smaller trucks deliver the containers to the small offices.
A similar thing happens on returns, except at the target office a new mark is created and placed over the old one.
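The happy path described above (standard barcodes routed into per-destination containers) can be sketched roughly like this. This is a minimal illustration only; the barcode format and all names are made up, not any real carrier's scheme:

```python
from collections import defaultdict

def destination_from_barcode(barcode: str) -> str:
    """Assume (for illustration) the first three characters of the
    barcode encode the destination branch office code."""
    return barcode[:3]

def sort_parcels(barcodes):
    """Group scanned parcels into one container per destination branch,
    mimicking the sort-facility step described above."""
    containers = defaultdict(list)
    for code in barcodes:
        containers[destination_from_barcode(code)].append(code)
    return dict(containers)

scanned = ["NYC0001", "LAX0002", "NYC0003"]
print(sort_parcels(scanned))
# {'NYC': ['NYC0001', 'NYC0003'], 'LAX': ['LAX0002']}
```

The hard part the comment goes on to describe is everything before this step: recognizing an odd-shaped item and reading its barcode at all, which is where machine vision struggles.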
What could go wrong? Some delivery companies allow free-form containers, not just rectangular boxes or envelopes, limiting only the weight and the sum of the dimensions, so you could, for example, ship a car body kit or skis without a rectangular container.
For a human, handling such things is a trivial task, but for a machine it is a nightmare. But if the machine sees them with something like GPT-4, which is claimed to be multimodal, it could possibly detect them correctly and understand how to handle them.
As for his other words, they go against Capitalism, because this is its typical way: Pareto's principle. Use the new tool on the 20% of cases where you can make 80% of the profit, and don't wait until the tool can handle 100% of cases.
If mankind had gone against Capitalism, we would not have cars and planes; we would still be using steam engines.