You hear about this new programming language called "Frob", and you assume it must have a website. So you google "Frob language". You hear that there was a plane crash in DC, and assume (CNN/AP/your_favorite_news_site) has almost certainly written an article about it. You google "DC plane crash."
LLMs aren't ever going to replace search for that use case, simply because they're never going to be as convenient.
Where LLMs will take over from search is when it comes to open-ended research - where you don't know in advance where you're going or what you're going to find. I don't really have frequent use cases of this sort, but depending on your occupation it might revolutionize your daily work.
Just yesterday I was trying to remember the name of a vague concept I’d forgotten, with my overall question being:
“Is there a technical term in biology for the equilibrium that occurs between plant species producing defensive toxins, and toxin resistance in the insect species that feed on those plants, whereby the plant species never has enough evolutionary pressure to increase its toxin load enough to kill off the insect that is adapting to it”
After fruitless searching around because I didn’t have the right things to look for, putting the above in ChatGPT gave an instant reply of exactly what I was looking for:
“Yes, the phenomenon you're describing is often referred to as evolutionary arms race or coevolutionary arms race.”
Evolutionary arms race is somewhat tautological; an arms race is the description of the selective pressure applied by other species on evolution of the species in question. (There are other, abiotic sources of selective pressures, e.g. climate change on evolutionary timescales, so while 'evolution' at least carries a broader meaning, 'arms race' adds nothing that wasn't already there.)
That said, when I put your exact query to deepseek r1 and claude sonnet 3.7, both included "red queen" in their answers, along with other related concepts like tit-for-tat escalation.
Firstly, "Evolutionary Arms Race" is not tautological, it is a specific term of art in evolutionary biology.
Secondly, "evolutionary arms race" is a correct answer: it is the general case of which the Red Queen hypothesis is a special case. I do agree with you that OP described a Red Queen case, though I would hesitate to say it was because of "equilibrium"; many species in Red Queen situations have in fact gone extinct.
GP was looking for a specific term that they had heard before. It was co-/evolutionary arms race, and ChatGPT guessed it correctly.
Also GPT-4o elaborated the answer (for me at least) with things like:
> However, the specific kind of equilibrium you're referring to—where neither side ever fully "wins", and both are locked in a continuous cycle of adaptation and counter-adaptation—is also captured in the idea of a “Red Queen dynamic”.
> You could refer to this as:
* Red Queen dynamics in plant-insect coevolution
* A coevolutionary arms race reaching a dynamic equilibrium
* Or even evolutionarily stable strategies (ESS) applied to plant-herbivore interactions, though ESS is more game-theory focused.

Or, updated for the LLM age: "the best way to get the right answer from an LLM is not to ask it a question and use its answer; it's to post its response on a site of well-educated and easily nerdsniped people."
Time to pop some popcorn and hit refresh.
They’re very useful for helping me ask more refined questions by getting the terminology correct.
I think of AI as an intelligent search engine / assistant and, outside of simple questions with one very specific answer, it just crushes search engines.
I use the LLMs to find the right search terms, and that combination makes search engines much more useful.
LLM by themselves give me very superficial explanations that don't answer what I want, but they are also a great starting point that will eventually guide me to the answers.
Even with supposedly authoritative peer-reviewed research papers, it is extremely common to find errors whenever the authors claim to quote earlier work, because the reality is that most of them do not bother to carefully read the works in their own bibliographies.
When you get an answer from an AI, the chances greatly increase that the answer regurgitates some errors present in the publications used for training. At least when you get the answer from a real book or research paper, it lists its sources and you can search them to find whether they have been reproduced rightly or wrongly. With an AI-generated answer it becomes much more difficult to check it for truthfulness.
I will give an example of what I mean, on which I happened to stumble today. I have read a chemistry article published in 2022 in a Springer journal. While the article also contained various useful information, it happened to contain a claim that seemed suspicious.
In 1782, the French chemist Guyton de Morveau coined the word "alumine" (French) = "alumina" (Latin and English) to name what is now called oxide of aluminum, which at that time was called earth of alum ("terra aluminis" in Latin).
The article from 2022 claimed that the word "alumina" had already been used earlier in the same sense, by Andreas Libavius in 1597, who would thus have been the creator of the word.
I found this hard to believe, because the need for such a word arose only during the 18th century, when European chemists, starting with the Swedes, finally went beyond the level of chemical classification inherited from the Arabs and began to classify all known chemical substances as combinations of a restricted set of primitive substances.
Fortunately, the 2022 article had a detailed bibliography, and using it I was able to find the original 1597 work and the exact paragraph being referred to. The claim of the 2022 article was entirely false. While the paragraph did contain a word "alumina", it was not a singular feminine adjective (i.e. agreeing with "terra") referring to the "earth of alum". It was not a new word at all, but just the plural of the neuter noun "alumen" (= English alum), in the sentence "alums or salts or other similar sour substances can be mixed in", where "alums" meant "various kinds of alum", just as "salts" meant "various kinds of salt". Nowhere in the work of Libavius is there any mention of an earth that is a component of alum and could be extracted from it (in older chemistry, "earth" was the term for any non-metallic solid substance that neither dissolves in water nor burns in air).
I have given this example in detail to illustrate the kinds of errors I very frequently encounter whenever authors claim to quote other works. While this concerned an ancient quotation, plenty of similar errors appear when quoting more recent publications, e.g. when quoting Einstein, Dirac or the like.
I am pretty sure that if I asked an AI assistant something, the number of errors in its answers would be no lower than in publications written by humans, but the answers would be harder to verify.
Whoever thinks they can get a quick answer to any important question in a few seconds and be done with it is naive, because the answer to any serious question must be verified thoroughly; otherwise, there is a great chance that those who trust such answers will just spread more disinformation, like the sources the AI was trained on.
Google 55%, as GPT is not a local search engine.
GPT 45%, but I use it for more intelligent learning/conversations/knowledgebase.
If I had a GPT phone ... sorta like the movie "Her", I would rarely leave my phone's lockscreen. My AI device / super AI human friend would do everything for me, including getting me to the best lighting to take the best selfies...
For example: Take the ingredient list of a cosmetic or other product that could be 30-40 different molecules and ask ChatGPT to list out what each of them is and if any have potential issues.
You can then verify what it returns via search.
The reason is pretty simple. If the result you want is in the first few search hits, it's always better. Your query is shorter so there is less typing, the search engine is always faster, and the results are far better because you sidestep the LLM hallucinating as it regurgitates the results it remembers from the page you would have read if you had searched.
If you aren't confident of the search terms, it can take half an hour of dicking around with different terms, clicking through a couple of pages of search results for each set of terms, until you finally figure out the lingo to use. Figuring out what you are really after from a wordy description is the inner magic of LLMs.
Most often not true in the kind of searches I do. Say, I search for how to do something in the Linux terminal (not just the command, but the specific options to achieve a certain thing). Google will often take me to pages that do have the answer, but are full of ads and fluff, and I have to browse through several options until I find the ones I want. ChatGPT just gives me the answer.
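A hypothetical example of the kind of answer I mean (assuming GNU find, which supports the -delete action): the entire useful content of one of those ad-stuffed pages is often a single line with the right flags.

```shell
# Remove every empty directory under the current path.
# -delete implies depth-first traversal, so a directory that becomes
# empty once its empty children are removed is also deleted.
find . -type d -empty -delete
```

That one line is the whole answer; everything else on the typical result page is fluff around it.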
And with any halfway decent model, hallucination only seems to be a problem in difficult or very specialized questions. Which I agree shouldn't be asked to LLMs (or not without verifying sources, at least). But over 90% of what I search aren't difficult or specialized questions, they're just things I have forgotten, or things that are easy but I don't know just because they're not in my area of expertise. For example as a learner of Chinese, I often ask it to explain sentences to me (translate the sentence, the individual words, and explain what a given word is doing in the sentence) and for that kind of thing it's basically flawless, there's no reason why it would hallucinate as such questions are trivial for a model having tons of Chinese text in its training set.
I asked Claude to give me a recipe that uses mushrooms and freezes well, and it gave me a decent-looking soup recipe. It might not be the best soup ever, but it's soup, kinda hard to mess up. The alternative would be to get a recipe from the web with a couple dozen paragraphs about how this is the bestest soup ever and it comes from their grandma and reminds them of summer and whatnot.
It didn't suggest adding glue? I imagine it would freeze real well if you did that. /s
Interesting, I just type random words. The LLM doesn't care whether it's a sentence.
But what I'm talking about is when I want to read the page for myself. Waste of time to have to wait for an LLM to chew on it.
Really, for many “page searches”, a good search engine should just be able to take you immediately to the page. When I search “Tom Hanks IMDB”, there’s no need to see a list of links - there’s obviously one specific page I want to visit.
Are you feeling lucky?
Unfortunately you can’t really show ads if you take someone directly to the destination without any interstitial content like a list of links…
Grok is great for finding details and background info about recent news, and of course it's great for deep-diving on general knowledge topics.
I also use Grok for quick coding help. I prefer to use AI for help with isolated coding aspects such as functions and methods, as a conversational reference manual. I'm not ready to sit there pretending to be the "pilot" while AI takes over my code!
For the record, I do not like Google's AI generated results it spams at me when I search for things. I want AI when I choose to use AI, not when I don't choose it. Google needs a way to switch that off on the web (without being logged in).
I know what I'm looking for. I just need exact URL.
Perplexity miserably fails at this.
Traditional search is dead, semantic search through AI is alive and well.
I can't recall a single time the AI misunderstood the meaning of my search, while Google loves to make assumptions, rewrite my search query, and deliver whichever results pay it best and have the best ads (in my opinion as a lifetime user).
Let's not even mention how they willingly accept misleading ads atop the results, which trick the majority of ordinary users into downloading malware and adware on the regular.
The reason Google is still seeing growth (in revenue etc.) is that a lot of 'commercial' search still ends with this kind of action.
Take purchasing a power drill for example, you might use an LLM for some research on what drills are best, but when you're actually looking to purchase you probably just want to find the product on Home Depot/Lowe's etc.
Ad-sponsored models are going to be dead as soon as people realize they can't trust output.
And because the entire benefit to LLM search is the convenience of removing a human-in-the-loop step (scanning the search results), there won't be a clear insertion/distinction point for ads without poisoning the entire output.
What? On Planet Earth, this is already a thing.
Kind of like a manual, with an index.
RTFM people.
Sounds trivial to integrate an LLM front end with a search engine backend (probably already done): type "frob language" and get a curated clickable list of the top resources (language website, official tutorial, reference guide, etc.), discarding spam and irrelevant search engine results in the process.
https://news.ycombinator.com/item?id=9224
The LLM could "intelligently" pick from the top several pages of results, discard search engine crap results and spam, summarize each link for you, and so on.
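A minimal sketch of that curation layer, assuming you already have raw hits from a search backend. The `llm_score` function below is a hypothetical stand-in: a real front end would send the query and snippets to a model and parse its verdicts, but here a crude spam filter plus keyword overlap plays that role so the glue logic is runnable.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    title: str
    url: str
    snippet: str

def llm_score(query: str, hit: Hit) -> float:
    """Hypothetical placeholder for an LLM relevance judgment."""
    spam_markers = ("best deals", "download now", "top 10")
    if any(m in hit.snippet.lower() for m in spam_markers):
        return 0.0  # discard obvious search-engine spam
    # Crude keyword overlap standing in for model judgment.
    terms = set(query.lower().split())
    text = (hit.title + " " + hit.snippet).lower()
    return sum(t in text for t in terms) / len(terms)

def curate(query: str, hits: list[Hit], k: int = 3) -> list[Hit]:
    """Return the top-k hits the 'model' considers relevant and non-spam."""
    scored = [(llm_score(query, h), h) for h in hits]
    return [h for s, h in sorted(scored, key=lambda p: -p[0]) if s > 0][:k]
```

The interesting design choice is that the LLM only filters and ranks; the clickable URLs still come straight from the search index, so there is nothing for the model to hallucinate.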
We don't have that now (nor did we for the past 30 years - I should know, I was there, using Yahoo!, Altavista, Lycos and such back in the day).
Or any other LLM that’s continuously trained on trending news?
Instead of the core of the answer coming from the LLM, it could piece together a few relevant contexts and just provide the glue.
How do you know the media isn't lying to you? It's happened many times before (think pre-war propaganda).
Odds are pretty good that, at least for less popular projects, the homepages themselves would soon be produced by some LLM and left at that, warts and all...
> In responding to user queries, Grok has a unique feature that allows it to decide whether or not to search X public posts and conduct a real-time web search on the Internet. Grok’s access to real-time public X posts allows Grok to respond to user queries with up-to-date information and insights on a wide range of topics.
Other considerations:
- Visiting the actual website, you’ll see the programming language’s logo. That may be a useful memory aid when learning.
- The real website may have diagrams and other things that may not be available in your LLM tool of choice (grok).
- The ACT of browsing to a different web page may help some learners better “compartmentalize” their new knowledge. The human brain works in funny ways.
- I have 0 concerns of a hallucination when reading docs directly from the author/source. Unless they also jumped on the LLM bandwagon lol.
Just because you have a hammer in your hand doesn’t mean you should start trying to hammer everything around you, friend. Every tool has its place.
For some cases I absolutely prefer an LLM, like discoverability of certain language features or toolkits. But for the details, I'll just google the documentation site (for the new terms that the LLM just taught me about) and then read the actual docs.
I'm hard pressed to construct an argument where, with widely-accessible LLM/LAM technology, search still looks like:
1. User types in query
2. Search returns hits
3. User selects a hit
4. User looks for information in hit
5. User has information
Summarization and deep-indexing are too powerful and remove the necessity of steps 2-4.

F.ex. with the API example, why doesn't your future IDE directly surface the API (from its documentation)? Or your future search directly summarize exactly the part of the API spec you need?
I recently configured Chrome to only use google if I prefix my search with a "g ".