At the same time, for many of us who do find it useful, a lot of the initial use was experimentation to find the right use cases. I still use it, but only for the things where I know it works for me, so I use it less than at the peak, when use was dominated by learning.
Others will have found areas where alternatives like self-hosted LLMs are good enough.
Use will likely grow again over time as people get more experience and build useful things on top of it. But it won't be the same rush.
We used to just call that "a search engine". Remember that? Somebody would put up a website about their favorite pet topic and when the search engine unearthed it, that's what you got.
It was the ad-ification of search engines that killed that. So, AI allows us to go back to the Internet circa 2000? That's a big innovation? I mean, I'll take it, but that's a pretty low bar ...
The difference is that reading someone's blog post forces you to take their trajectory through the material and it might not go over the exact points you're curious or confused about. With a forum like StackOverflow you often have to settle for problems that are merely close enough to your own that the solutions apply to it.
Models like ChatGPT allow you to ask for blog posts on any topic on demand and then ask for follow-up blog posts elaborating on whatever aspect of the previous one you want.
Isn't this a bit of an oxymoron?
I feel like every comment on HN of mine now lately is just defending ChatGPT, but I don't think that's a reason to self-censor my comments... yet.
I was watching a car chase on youtube yesterday and used an LLM to tell me the city location based on a description of the news station logo. So that was a "niche" I guess. I also got it to teach me how to use the https://gene.iobio.io/ software as I was using it, and I'm pretty good at it now! I asked it to help me to understand the connotations of several similar sentences in another spoken language I'm learning. It's helped me with understanding property tax appraisal records. We use it at work daily to analyze code, it shortens research by 30% or so. Calling it "niche" is 100% correct: yes it's very good at its "niches" which happen to be... almost everything except math? Have it craft a choose-your-own-adventure detective story and get back to me, because that niche is surprisingly fun.
If you can predict the implications of this technology or make an insightful assessment beyond "we don't have flying cars yet" ... let me know.
I actually use it and I find that that 99% is ridiculously high.
> The amount of times it's wrong or misleading is not significant enough to even be annoying during day to day use
There may be use cases where it isn’t wrong a lot, or you may have a high tolerance before annoyance hits, or you may be failing to detect it being wrong, but that certainly doesn’t match my experience.
well then they wouldn’t be people now, would they?
your reaction is fair. and rare.
He read something he didn’t like and then attacked the person who said it by basically calling them an unthinking robot. It’s not fair and it’s incredibly common.
1. Data reformatting, data parsing, anything that requires transforming data in any way. E.g. give me all blah in this massive raw content of whatever. Give me this data as JSON, that as CSV.
2. Writing small scripts to do various other data related actions, automations.
3. Summaries. Extracting key points. Asking questions about content I don't understand well. Asking clarification questions.
4. Bulletpoints, outlines, asking for feedback for my own content, what could I be missing, brainstorming.
5. Coding obviously, but with Copilot mainly rather than ChatGPT specifically.
6. Asking it to review/criticise whatever work or output I do.
7. Anything learning/research related, I use it avidly, whenever I want to learn about any subject. I haven't found hallucinations to be an issue at all, because I view the content with a grain of salt in the first place and I double-check if I think something is out of place. It's really, really good for learning, because I can actively question it, unlike a course or whatever, and being able to debate with it is amazing for learning. And compared to a person, I never have to feel that my question may be stupid; I can just spew at it whatever immediately comes to mind and get clarifications. If I had a person as a mentor, I might be more careful, but this removes the social-anxiety aspect completely. I can keep asking clarifying questions infinitely.
8. Decisions. Trying to understand pros and cons of various decisions. Allowing it to brainstorm those pros and cons. I combine with my own pros and cons of course.
To me it's like this dream thing you can bounce your thoughts back and forth with, without having to worry about judgment etc.
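Use case 1 above (extract structured data, "give this as JSON") can be sketched as a small wrapper: build a prompt that asks for JSON only, then parse the reply defensively. This is a minimal sketch, not any particular product's API; `extraction_prompt` and `parse_reply` are hypothetical helper names, and the fenced reply format is just what models commonly return.

```python
import json

def extraction_prompt(raw_text: str, fields: list[str]) -> str:
    """Build a prompt asking the model to return ONLY JSON with the given fields."""
    return (
        "Extract the following fields from the text below and reply with "
        f"ONLY a JSON array of objects with keys {fields}, no prose:\n\n{raw_text}"
    )

def parse_reply(reply: str):
    """Parse the model's reply, tolerating an optional ```json fenced block around it."""
    text = reply.strip()
    if text.startswith("```"):
        # Drop the opening ```json line and the closing ``` fence.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(text)

# A reply shaped like what a model typically returns (hypothetical example):
fake_reply = '```json\n[{"name": "Ada", "email": "ada@example.com"}]\n```'
rows = parse_reply(fake_reply)
print(rows[0]["name"])  # -> Ada
```

The parsing step is the part worth keeping: models sometimes wrap JSON in a fence or add prose, so validating the reply before using it downstream is what makes this reliable day to day.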
ChatGPT and its ilk don't enable me to do something today that I couldn't do yesterday. Nor do they enable me to do something an order of magnitude faster than I could do yesterday.
Contrast this to when microprocessors hit. Suddenly, things like industrial control went from the size of multiple refrigerators to a PC board. When the price dropped (things like the 6502), engineers went absolutely bonkers building amazing things.
"AI is the exact opposite of a solution in search of a problem. It's the solution to far more problems than its developers even knew existed."
And that's why I personally don't have much faith in the LLM approach. Natural language is too full of ambiguity to pretend it delivers meaning. It makes oblique references to meaning, based on awful assumptions about the audience and presumed mental context.
I see enough absurd miscommunication and word salad between native speakers. I really don't want a magic box to vaguely emulate this.
What I think he is trying to say is that it is a solution that addresses a broad range of problems that are clear now but weren’t envisioned by its creators, which is both impossible given the early claims (its creators already billed it as a general solution with essentially no limits) and, even if you ignore those claims and reduce it to a question of it being broadly applicable as a solution, generally premature (most of the more specific things it is billed as a solution for it hasn’t solved, though some people might have ideas, not yet proven in practice, of how it can be a component of a solution).
It’s like nobody wants to talk about reasonable solutions that incrementally make things better, everything has to be some meme “revolution”
The cap table is moot in the near-term because of the profit precedence structure.
what he actually said: [The exact opposite of (a solution in search of a problem)] => naive translation (a problem in search of a solution)
but what he meant is: a solution in search of a problem that ended up finding far more problems than anyone suspected.
"problem in search of solution" => bad. exact opposite of bad => good.
what it really is? I think closer to the first one.
why do they talk in such gobbledygook
Anyone have examples of how GPT can help there?
Good examples are typical courier services like FedEx, or companies like Amazon that build their own delivery services.
They usually use standard boxes/envelopes with standard barcode markings for correspondence.
All these boxes/envelopes are gathered from sending offices and concentrated in a sort facility, where machine vision can sort them into a separate container for each destination branch office; in some places, a big central office has its own container.
Then the containers are loaded onto a big truck and driven to the destination city, where they are unloaded at the destination branch office.
At the destination branch office, the next round of sorting distributes packages into containers for the small offices.
Then smaller trucks deliver the containers to the small offices.
A similar thing happens with returns, except a new label is created at the target office and placed over the old one.
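The barcode-driven sorting step described above can be sketched as a simple routing table: scan each parcel, look up its destination branch, drop it into that branch's container. A minimal sketch; the parcel codes and branch names are made up.

```python
from collections import defaultdict

def sort_into_containers(parcels):
    """Route each scanned parcel (barcode, destination_branch) into the
    container for its destination branch office."""
    containers = defaultdict(list)
    for barcode, branch in parcels:
        containers[branch].append(barcode)
    return dict(containers)

# Hypothetical scanner output: (barcode, destination branch) pairs.
scanned = [
    ("PKG001", "Austin"),
    ("PKG002", "Boston"),
    ("PKG003", "Austin"),
]

print(sort_into_containers(scanned))
# -> {'Austin': ['PKG001', 'PKG003'], 'Boston': ['PKG002']}
```

The same routing runs again at the destination branch, just with smaller offices as the keys; the hard part the comment points at is not this lookup but recognizing free-form items that don't carry a clean barcode.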
What could go wrong? Some delivery companies allow free-form containers, not just rectangular boxes or envelopes, limiting only the weight and the sum of dimensions, so you could, for example, send a car body kit or skis without a rectangular container.
For a human, handling such items is a trivial task, but for a machine it is a nightmare. But if such a system incorporated something like GPT-4, which is claimed to be multimodal, it could possibly detect these items correctly and understand how to handle them.
As for his other words, they go against capitalism, because capitalism's typical way is Pareto's principle: use the new tool on the 20% of cases where you can make 80% of the profit, and don't wait until the tool can handle 100% of cases.
If mankind had gone against capitalism, we would not have cars and planes; we would still be using steam engines.