Would you mind elaborating on this?
Like how are you "searching" with ChatGPT?
Googled "What was the website that showed two movie posters and you picked the one you liked more?" and I got links to reddit, lots to letterboxd, some quora, and a lot more, all irrelevant to my question.
Asked ChatGPT that same question verbatim and
> The website you're referring to is probably "Flickchart." It's a platform where users can compare and rank movies by choosing which one they like more between two movie posters or movie titles. Please note that my knowledge is up to date as of January 2022, and the availability and popularity of such websites may change over time.
Another time I was looking for the release dates of 8- and 16-bit consoles. With Google I had to search for each console individually; sometimes it offered a card with the release date, sometimes it didn't and I'd have to go do more digging.
So I asked ChatGPT and got a nicely formatted list with dates.
First, it always gives a calorie count for cooked meat, but it should assume the meat is uncooked since I said it was for a recipe.
Second, it seems to struggle with the concept of uncooked rice. If you ask it to work with 1 "rice cooker cup" of rice, it refuses because that isn't a standard measurement. If you swap in the converted standard measurement (3/4 cup), it's still way off: it told me 3/4 cup uncooked rice is about 150 calories when cooked. That's a third of what the USDA database gives. When you point out that 3/4 cup uncooked rice is a large serving after being cooked, it changes its answer to 375 calories, still about half of what the USDA database gives. But this is fine for me because rice is not typically part of my recipes, since it doesn't usually require special preparation.
Overall it reduces a 10-minute task to 10 seconds, but you need to know enough about the ingredients to spot obvious problems in its result. In my case I could see the calories given for meat were way too high, and way too low for rice. It gave a better answer after I told it to fix the former and ignore the latter.
I tried a second recipe and the total it gave was 2% under my calculation, but I did not see any obvious error in its result so I could not correct it further.
It is unfortunate that you kind of have to trust that the numbers are correct, but this is no different from the nutrition details on sites like MyFitnessPal, which are often wrong when you examine them closely.
This equation is beyond my pay grade!
Edit: I asked our GPT-3.5 bot to solve this, and it hallucinated "pulling up the USDA database", complete with a "(a few moments later...)" message, before giving me 160 calories as the USDA answer.
I asked the same bot (identical prompts) with GPT-4-Turbo enabled and it went through it "step by step" to say the correct answer is 461 calories: 1/3 cup uncooked is 1 cup cooked, so 1 rice cooker cup (160g) = 3/4 cup uncooked = 2.25 cups cooked, and 2.25 * 205 = 461 cal.
Is that the right answer? If so, 375 seems far from "half"
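For what it's worth, the GPT-4-Turbo arithmetic above is internally consistent if you accept its conversion factors (which come from the bot's answer, not from me checking the USDA database):

```python
# Conversion factors as stated by the bot, not independently verified:
# 1 rice cooker cup = 3/4 US cup uncooked; each 1/3 cup uncooked
# yields 1 cup cooked; ~205 kcal per cup of cooked white rice.
uncooked_cups = 3 / 4
cooked_cups = uncooked_cups * 3        # expands to 2.25 cups cooked
calories = cooked_cups * 205
print(cooked_cups, round(calories))    # 2.25 461
```

So 461 follows from its own premises; whether those premises match the USDA numbers is the open question.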
IMO Google should convert their search box into a Bard chat input, so you get a hybrid of Bard conversation with real links from their search engine.
It's actually astounding that, in the face of GPT's rapid rise, that search box is still an old-school search box, looking dumber and less attractive every day.
Google can't change for now: doing so would undermine all the AdWords accounts, Google's real customers, who pay six figures to stay on top of SERPs.
But the other competitors can and will.
So it goes
How many queries per second does OpenAI get vs Google search?
If you have to double-check with Google, why not just start with Google and skip ChatGPT altogether?
Anyway, an LLM clearly teased that out, whereas if you misremember or misread something, a straight search is going to go badly.
Most of my searches are the opposite. I wanted to know about an obscure movie from the 80s with a toy helicopter. Google very neatly suggests Defense Play (correct), but most LLMs I've tried end up just suggesting very popular films with a helicopter, and it ends up being quite difficult to get them to give me information about obscure stuff. Also, with that same search, the LLM suggests a bunch of incorrect films because (as I figured out later) it was all sourced from a single forum thread from the 90s where a bunch of the posts suggested movies that don't have toy helicopters in them. Go figure.
However, if I add the director it gets it right: https://chat.openai.com/share/a602b3b0-5c17-4b4d-bed8-124197...
If I only give it 1980s film and the director's name it can still get it. https://chat.openai.com/share/d6cf396b-3199-4c80-84b9-d41d23...
So it's clearly not able to look this movie up semantically and needs a strong key like the director's name.
EDIT: Digging deeper it's clear the model only has a very foggy idea of what the movie is about: https://chat.openai.com/share/d0701f53-1250-421e-aa4b-dc8156... People have described these types of outputs as the text equivalent of a highly compressed JPEG, which seems to fit well with what's going on here. It gets some of the top level elements right and kind of remembers there's some kind of vehicle that's important but it has forgotten all the details, even the date the movie was released. But unlike a human who might indicate their fuzziness (was it a helicopter or a submarine?), GPT4 gladly pretends like it knows what it's talking about. I think it's likely a solvable problem, the model probably has the information to know when it's confident and when it's in a fuzzy JPEG region but the current alignment isn't doing a great job of surfacing that.
So, get as pedantic as you'd like, but Google really dropped the ball. ChatGPT gave Flickchart for both questions.
So I asked ChatGPT: "There’s a word that sounds kinda like Django but it’s like jingoistic"
Turns out I was right, it's "jingoistic", but then I could go on and ask:
"Give me the definition of the word, then."
"The term "jingoistic" refers to a form of extreme and aggressive patriotism, often characterized by a belligerent foreign policy. It denotes a belief in the superiority and righteousness of one's own country, sometimes leading to the denigration of other nations and the advocacy of aggressive political and military actions."
"Where does the word come from?"
"The origin of the word "jingoistic" traces back to a popular British song from the 1870s. The song, with the refrain "We don't want to fight, but by Jingo if we do, We've got the ships, we've got the men, and got the money too," was sung during the Russo-Turkish War (1877-1878). It expressed support for the British government's aggressive stance towards Russia.
The phrase "by Jingo" in the song is believed to be a minced oath, a euphemistic expression used to avoid swearing. "Jingo" here stands in for a stronger word, likely a reference to God. Over time, the term "jingoism" evolved to describe an aggressive, belligerent form of patriotism and foreign policy, and "jingoistic" is the adjective form used to describe someone who exhibits this attitude."
Fascinating! For anyone who is interested in learning, there hasn't been a tool like ChatGPT since the internet itself!
Me: “staircase wit” is a very charming phrase. What are some other unexpected phrases that might also be fun to know?
ChatGPT: <I originally included the response, but you can ask it yourself if you care to. I feel this was a spur-of-the-moment idea and, rereading it, I feel it’s just like Googling something and posting the result. Not really a high quality contribution on my part.>
I might be an outlier here, but to me this wouldn't be useful at all. I wouldn't trust ChatGPT to get it right, so I'd go to wikipedia to double check, at which point the amount of effort saved is little to zero.
This is a case where search has taken a step backward. The old Google would have worked for a simple search like that, "NES (or Sega, whatever) + 'release date'", and simply returned the best results matching those two parameters. Today we can't have that because they make more money intentionally fuzzing your search parameters so you accidentally click on sponsored content.
I think we're going to see a lot more of this: renewed excitement and enthusiasm when A.I. "discovers" things that plain old imperative algorithms figured out 20 years ago.
Google Bard now answers this with the first suggestion being Flickchart
I also got a clean list of release dates for the console question: https://g.co/bard/share/ceb0eac6c69f
Phind provides references. The problem is that as the webpages used to feed LLMs become written by LLMs, we're going to be up to our necks in even more [subtly] wrong information than the disinformation already very widely peddled by advertisers and political groups.
I had a question about adding new RAM to my computer, about what things I should take into account since the original brand no longer makes paired DIMMs that match my current spec. It gave me a big bullet list of all the things I should compare between my current RAM, my current motherboard, and any new RAM I would choose to buy to ensure compatibility.
Both of these are things I might have gone to Google (or even reddit) for previously but I believed I could get faster answers from ChatGPT. I was right in both cases. I didn't have to construct a complicated query, I didn't have to filter SEO spam. I just asked the question in natural language as it appeared in my mind and ChatGPT gave excellent answers with very little delay.
On the other hand, ChatGPT does seem to give me good results the majority of the time. It certainly fails or hallucinates, and I always feel I have to double-check it. However, it just feels more reliable as a first stop compared to Siri or Wolfram.
I don't want to have to think "is this a query Siri can handle?" or "will Wolfram Alpha manage to work for this query?" - I just want to get a pretty good answer quickly with no hassle.
So, let's say I Google for such a service, make it past the 3 or 4 ads at the top of the search results and however many SEO-spammed sites, and get to the site you posted. I literally started writing a response to you saying "it doesn't seem to count only weekdays", but in order not to be wrong on the Internet I went back and checked, and buried in the interface is a link: "Count only workdays".
So, my answer to why: It was actually faster and easier using ChatGPT to get it to write Python than it would have been for me to use Google to find the site and then to use the site. If I have to do the same thing again in the future I will use ChatGPT rather than try to remember this website url or trust Google to direct me back to it.
Edit: or not, March 11th is not a weekday. Though I count 43 weekdays including Jan 11th, so perhaps Wolfram is using an open interval while Bard is using a closed interval.
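The open-vs-closed-interval ambiguity is easy to pin down with a few lines of Python (a sketch, not the exact code ChatGPT produced; the sample dates below are my own):

```python
from datetime import date, timedelta

def count_weekdays(start, end):
    """Count Mon-Fri days in the closed interval [start, end]."""
    days = (end - start).days + 1
    return sum((start + timedelta(d)).weekday() < 5 for d in range(days))

# A week starting on a Monday contains exactly five weekdays.
print(count_weekdays(date(2024, 1, 1), date(2024, 1, 7)))  # 5
```

An open interval drops the endpoints, so for a range bounded by weekdays the two conventions differ by one or two days, which would explain the discrepancy.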
I literally had my cursor in my config file the other day and didn't know the option for disabling TLS verification (it's for an internal connection between two private certs). I just put my cursor in the right place, asked Copilot what I needed to disable verification, and it returned the correctly formatted Elixir code to paste in, 2-3 lines. And it was correct.
And I then googled for the same thing and I couldn't find that result, so I have no idea how Copilot figured it out.
GPT4 has plugin support. One of the plugins is Internet access via Bing. It automatically chooses which plugins to call upon based on the context it infers from your question - you don't have to select anything.
Here's an example: https://chat.openai.com/share/be3821e7-1403-44fb-b833-1c73f3...
It correctly finds a texture atlas example by discovering it nested inside of Bevy's github.
Note that it didn't summarize when I didn't explicitly tell it to consider summarizing. I consider this poor behavior, but I'm confident it would elaborate if I followed up. The initial seed prompt by OpenAI encourages concise answers (likely as a cost-saving measure, but also for brevity).
I realize this is just a glorified "I'm Feeling Lucky" search, but I find it to be a much better UX, so I default to it over Googling. It's nice to be able to seamlessly transition from "search" to "brainstorm/discuss" without losing context.
I have tried using these things for search, but among the hallucinations and lack of different options in the response, I still find searching on Google or other search engines superior.
It's really convenient.
For a less contrived, more impressive example (multi-modality is insane!), see these: https://imgur.com/a/iy6FkBO
The above example shows me uploading 16 sprite tiles to GPT. The files were labeled 0-15 on my system. I uploaded them in two parts because there is a 10-file upload limit. I wanted to create a sprite sheet from these sprite tiles and didn't want to open an editor, so I had it do it. After it worked, I realized I needed the sprite sheet in three varying colors (dirt, sand, and food), so I had it find/replace the dirt color with the sand/food colors. It then gave me download links to all three, and all three were good results and saved me time.
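For anyone curious, the stitching and recolor steps themselves are simple. Here's a rough sketch in plain Python, with nested lists standing in for image pixels; the tile size, 4x4 grid layout, and color values are placeholders, not the actual sprites or whatever code GPT ran:

```python
TILE, COLS = 16, 4
DIRT, SAND = (120, 72, 0), (194, 178, 128)  # placeholder colors

# Sixteen solid-color tiles stand in for the uploaded sprite tiles.
tiles = [[[DIRT] * TILE for _ in range(TILE)] for _ in range(16)]

# Stitch the tiles into a 4x4 sprite sheet (64x64 pixels).
size = TILE * COLS
sheet = [[None] * size for _ in range(size)]
for i, tile in enumerate(tiles):
    ox, oy = (i % COLS) * TILE, (i // COLS) * TILE
    for y in range(TILE):
        for x in range(TILE):
            sheet[oy + y][ox + x] = tile[y][x]

# Find/replace the dirt color with the sand color.
recolored = [[SAND if px == DIRT else px for px in row] for row in sheet]
```

With a real image library the paste-and-recolor would be a few calls instead of loops, but the idea is the same.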
(and yes, I know I don't have to compliment it. It's fun and makes work more enjoyable for me)
The sad reality is that typing this into google would have given you AI generated content, anyways. Might as well use the best model for it.
In the same way google/search made it possible to answer a question in real-time in a group of friends, ChatGPT does that, but better in most cases. Yes, you have to deal with hallucinations; they happen less often now, but they do happen. Then again, you have to deal with crap in web searches as well.
Search is a super-power (most people suck at searching) and being able to grab information via ChatGPT feels very similar.
Prior to ChatGPT, the majority of my Google searches ended up on either Wikipedia (for direct information), Reddit (for opinions/advice), or StackOverflow (for programming questions).
Now all those use cases can be done by ChatGPT, and it’s faster, especially because it requires less skimming to find useful data.
Here’s a humorous example from a recent GPT-mediated search: https://chat.openai.com/share/ec874cd5-7314-4abc-b169-607601...
2. Most quick general purpose questions like "What is 4-month sleep regression in babies?" go to GPT-3.5
3. If I want to deep dive on a topic, I find myself either using one of the custom GPTs (Grimoire is great for coding), or increasingly, talking to it via voice chat. It's particularly great if I'm walking around the house doing chores and want to learn something I would otherwise turn to Wikipedia for (say, the successors to Genghis Khan and the various Khanates).
This sounds like a great use case. What is your setup for this? Do you have ChatGPT connected to a smart speaker?
Like I was reading a book about Genghis Khan the other day, which got me into Wikipediaing all his sons and grandsons. While doing chores, I asked ChatGPT "tell me about Genghis Khan's sons", and then follow-up questions like "what was the name of the khanate or empire Ogedai founded?"
It's an absolutely fantastic tool for diving into a new topic.
Granted, I use ChatGPT-4 with a subscription, but if the online RAG browsing feature is later opened to the public for free, I can see more people (i.e. most Internet users) abandoning Google for casual searching. The keyword here is 'casual': what LLMs give us is an intuitive way of searching, even with misspellings or missing words.
I think Google is realizing this, and before this exodus really happens it needs to do something; otherwise the adage "nothing lasts forever" will come true sooner or later for Google. The main thing for Google now is how to monetize this technology, assuming they have a similar product, because all this while they have been struggling to generate income with DeepMind, and, voila, ChatGPT just showed them how to do it. Now it seems they are taking a page from Apple (the highest-ranked company by market value) by providing tiers of service, Ultra, Pro, and Nano, and I think Ultra will most probably be subscription-based.
I often paste error messages with context into the chat and ask what's wrong.
"What is that annoying thing where I have to click on on all the websites. Because of cookies and Europe"
Google: Links to things full of SEO and ads and maybe answer the question eventually.
ChatGPT: The "annoying accept" button you often have to click on websites, particularly in Europe, is related to the General Data Protection Regulation (GDPR). The GDPR is a regulation in EU law on data protection and privacy in the European Union and the European Economic Area. It also addresses the transfer of personal data outside the EU and EEA areas.
Websites are required to get your consent before they can store or retrieve any information on your device, such as cookies. Cookies are small data files used to remember your preferences, login information, and browsing history. The GDPR mandates that websites must inform users about the cookies they use and obtain their consent before placing these cookies on their devices.
This is why you see those cookie consent notices – websites are legally obligated to ask for your permission to use cookies, and you must actively agree to it, usually by clicking an "Accept" button. This process aims to protect your privacy and give you more control over your personal data online.
Cookies are fine if you're just storing user preferences; no banner is needed if your only localStorage/cookie usage is purely functional and doesn't serve tracking purposes. GitHub, for example, doesn't have a cookie banner as far as I remember, but they're definitely using various cookies.
Websites are required to get your consent before they can track you. Storing cookies or other information is totally fine if it is purely functional, for example a login session or a dark mode preference. Conversely, tracking without consent is forbidden even when it's done without tracking cookies.
Whatever you call it, this thing is the closest to a human that a machine has ever been. Talking to ChatGPT is quite close to talking to a human being who has the knowledge of all of Google inside their brain.
If you're a developer and you're not paying for ChatGPT or Copilot, you are literally operating at a disadvantage. Not a joke.
There's definitely something disquieting behind the elation.
First of all this technology is on track not to just assist you better, but to replace you.
Second it's not human. It is not explicitly bound by the morals and behaviors that make us human. Saying that it's not human is different from saying that it can be more intelligent than a human. This is the disquieting part. If restrictions aren't deliberately put in place it could probably give you instructions on how to murder a baby if you asked it to.
I think it's inevitable that humanity will take this technology to the furthest possible reaches it can go. My strategy is to take advantage of it before it replaces you, and hope the technology never reaches that point in your lifetime.