But underneath all the bullshit is something truly useful, so I wouldn't necessarily say that Confidently Incorrect™ is ChatGPT's primary ability.
Coincidentally, over the past few weeks of this ChatGPT craze, I've needed to create a lot of fake data to seed a database. Normally not a big deal, but these records need to have foreign keys pointing to one another. I wondered whether ChatGPT could help me out, so I briefly described the fields I needed, their format and data type, the type of information they contained, etc etc. It did it almost perfectly, and fixing the little errors was trivial. I was dreading having to do this because it's such a pain in the ass.
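For anyone curious what that task looks like, here's a rough sketch of the kind of seed data involved. The `users`/`orders` tables and their fields are made up for illustration, not the actual schema:

```javascript
// Minimal sketch of fake seed data where foreign keys stay valid.
// The users/orders tables and their fields here are hypothetical.
function makeSeedData(userCount, orderCount) {
  const users = Array.from({ length: userCount }, (_, i) => ({
    id: i + 1,
    name: `user_${i + 1}`,
    email: `user_${i + 1}@example.com`,
  }));
  const orders = Array.from({ length: orderCount }, (_, i) => ({
    id: i + 1,
    // Foreign key: always reference an id that actually exists in users.
    user_id: users[Math.floor(Math.random() * users.length)].id,
    total_cents: Math.floor(Math.random() * 10000),
  }));
  return { users, orders };
}
```

The tedious part is exactly the `user_id` line: every generated row has to point at a record that actually exists, or the inserts fail.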
To me, that kind of thing is going to be the most useful application. Everyone's freaking out (in both the "scared" and "excited" sense) over AI's ability to replace creativity, but I'm focused on its ability to replace toil.
Oratory prowess is way more important than accuracy. If you ever want to "lose a debate" on reddit, pick something that's counter-intuitive or widely misunderstood and use nuance and citations in your defense. You'll get cyberbullied every time - sometimes even banned from the sub.
For example, I ask it to simulate a discussion between a flow engineer and a geologist, and it doesn’t seem to understand they work together regularly.
Not like there's a Stack Overflow for this sort of thing.
Now, instead of authoritative sounding humans who are possibly wrong or with an agenda, we have an "infallible", "impartial" oracle capable of inventing whatever it wants to. Yes, I have seen ChatGPT treated as infallible on this very forum: "you are wrong because ChatGPT says <insert nonsense here>". Can't wait to see LLMs taking "post-truth" to a whole new level. The propaganda potential is immense, to point out just one application.
> Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.
> In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
https://en.wikipedia.org/wiki/Michael_Crichton#GellMannAmnes...
Sounds like the old days to me, where we would meet up, discuss the world and what have you, without any means to fact-check anything online.
The chances there was a real expert on the subject in your circle were at best 20%. So we just enjoyed and continued our lives. No harm done, as long as nobody took the discussion as gospel.
edit: minor typo
I wouldn't rely on ChatGPT or any system that relies on it, except perhaps for summarising/rephrasing a single, known document. It worries me that people are using it for software development - creating stubs, bootstrap code. It's perhaps OK if you know the language/platform well enough to fix issues, but if you then use it on something that you're not an expert in, what is the outcome?
For example:
Finding the source of literary quotes - it misattributed them, and took 5 or 6 Q&A responses to get close.
Explaining how a p-channel enhancement-mode MOSFET works - it explained an n-channel instead.
Explain regex - often gets the technical parts right, but can make very bad suggestions about what the regex is aiming to do.
Name some classic cocktails created in Europe - suggested the Manhattan, which it said was acceptable because it was popular in Europe.
Suggest some gin-based cocktails that don't include citrus - suggested ones with lemon juice.
I have some expertise and knowledge in one or two specialised fields, and when I read Wikipedia, news articles, and blogs about those fields, it's maddening how incorrect many people are while sounding perfectly knowledgeable.
I’m finding that it’s fairly good for code snippets. You can test that pretty quickly. But asking it something you don’t know and can’t verify is a huge risk.
It's especially common on Hacker News.
Yep. I asked it to summarize a URL. It happily invented something completely fabricated and convincing-sounding (until you go to the URL), based on the domain name and article slug.
At the time it _seemed_ like it wasn't just doing a mindless string replace, because different parts of the URL changed in most of the erroneous links. I figured it wasn't going out and spidering the link I gave it, but it led me to believe maybe there was _some_ form of index that it simply hadn't prioritized before, but was able to use with the additional context.
OpenAI handed this tool to people with effectively no guidance or instructions so that they can trade on the glow of misconceptions and harvest user data. The whole point of giving the thing away for free these last few months is to let people make fools of themselves like this while pumping up OpenAI’s value as a technology firm. It’s no accident that they don’t engage to clarify all these persistent misunderstandings.
When one logs on to ChatGPT, they get the following notices:
> This is a free research preview.
> Our goal is to get external feedback in order to improve our systems and make them safer.
> While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.
> Conversations may be reviewed by our AI trainers to improve our systems.
> Please don't share any sensitive information in your conversations.
Effectively no guidance or instructions indeed. Perhaps ChatGPT is sometimes confidently incorrect because it learned from online commenters.
It's very unlikely the model weights would contain enough detail to let it match content to the URL that the content originally came from.
ChatGPT by itself can't, but if you build a product on top of its API, you can fetch the URL yourself and send the page contents in ChatGPT's prompt.
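A sketch of that pattern, under the assumption of a made-up `summarizeUrl` wrapper (the actual API call is elided; the point is that only your own code ever touches the network):

```javascript
// Sketch: the product fetches the page itself, then hands the text to
// the model inside the prompt. The model never makes the request.
// buildPrompt/summarizeUrl are hypothetical names, not a real API.
function buildPrompt(url, pageText) {
  return [
    { role: 'system', content: 'Summarize the provided page content.' },
    {
      role: 'user',
      content: `URL: ${url}\n\nPage content:\n${pageText}`,
    },
  ];
}

async function summarizeUrl(url) {
  const pageText = await (await fetch(url)).text(); // our code fetches
  const messages = buildPrompt(url, pageText);
  // ...send `messages` to the chat completions endpoint of your choice...
  return messages;
}
```

From the user's perspective it looks like "ChatGPT read my link", but the reading happened entirely in the wrapper.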
This might not be entirely true. People were able to fetch URLs and data from the internet that couldn't possibly be in any training dataset (e.g. posted the same day for the first time).
https://medium.com/@neonforge/i-knew-it-chatgpt-has-access-t...
> But ChatGPT has rewritten my article in a different style, what a interesting finding…
The URL conveniently contains the topic of the blog article, meaning ChatGPT can 'hallucinate' the contents even without an internet connection.
To show this, I tried to follow the same steps as @neonforge, but with a non-existing URL. Unfortunately `lynx` 'didn't work', but curl showed some interesting information.
`curl https://medium.com/@peter/learn-how-to-use-the-magic-of-terr...` returned some HTML. Instead of a 404 and a `<title>` element containing just 'Medium', it returned the following hallucinated title tag: `<title>Learn How to Use the Magic of Terraform Locals (Step-by-Step Guide) | by Peter | Medium</title>`
I'm not saying that ChatGPT definitely has no access to the internet, but so far I have not seen any proof or indication that it does.
If anything, this blog post is a perfect example of how people can put in whatever they want as input and take the output as truth, without any rigorous approach to what would count as a true fact about the model.
It convinced him of what he wanted to believe. He didn’t do a very good job of testing.
If you want to prove this to yourself, tail a server log and paste a URL into ChatGPT and see if you get a hit.
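One low-effort way to run that check, sketched below: embed a unique token in the path you paste, then search your access log for it. The helper names and log format here are just illustrative:

```javascript
// Sketch: a unique probe path makes any hit unambiguous in the log.
// makeProbePath/logContainsProbe are made-up helper names.
function makeProbePath() {
  // Timestamp-based token; collisions don't matter for a one-off test.
  return `/chatgpt-probe-${Date.now().toString(36)}`;
}

function logContainsProbe(logText, probePath) {
  // Works on any plain-text access log with one request per line.
  return logText.split('\n').some(line => line.includes(probePath));
}
```

If the probe path never shows up in the log after you paste the URL, nothing on the other end ever fetched the page.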
You can still do useful things with it, that much is extremely clear. But it doesn't think.
The old-school way is: Google sends you links, and through those code examples you build up all the edge cases you need for your code to work.
That's completely obsolete as a way to synthesize information! Google is going down!
Thanks for reading my daily ChatGPT anecdote! Sorry if it's irrelevant, I just can't believe how much better it made my day.
The perfect use case for ChatGPT in my personal workflow has been to give me the 70-percent-of-the-way-there skeleton for a given query, Terraform module, or even just to keep me from looking up the syntax of for loops in <language> for the 30,000th time. I can then take that 70 percent chunk and bolt on my edge cases as I go.
There hasn't really been a time where the answer didn't need some massaging, and of course occasionally it's flat-out wrong, but it's becoming more and more apparent to me that those instances are the result of feeding it sub-optimal prompts. I've found it to be eerily similar to my experience learning to use search engines back before SEO titles were a "thing."
It's frankly incredible how little I find myself using a traditional search engine compared to six months ago, at least in the context of work stuff. I'm less inclined to feed it general knowledge prompts, but it's encouraging to see the remarkable LLM tech leaps from generation to generation, and in such short order.
Yes, current AI is not a perfect replacement yet.
I need marketing help. Or I should just stop trying. I don't see how people are able to get posts on the home page. I couldn't get any attention, comments, or any real user engagement today. I know this is a useful system, or at least worthy of a tiny amount of discussion. After working on it for months there is almost no reaction.
I don't think it's the website. I think it's just that I probably need to buy upvotes or something. But I don't have money for marketing. I'm very bitter.
What I keep telling myself is that marketing is going to be as hard as the programming. I've been doing the programming for months, so I'll probably need to do the marketing for a long time too. But I am out of money.
If this doesn't get any traction (and it seems very close to zero), I will be desperate for any kind of contract within about a week. But I don't want to give up on the idea despite the apparent lack of interest. I think the biggest thing is that I only have 100 followers on Twitter and 600 on the other account, and those people only care about one specific domain.
I also haven't been able to get a single upvote on reddit, and only one very short comment.
So I suspect these sites are rigged. Or should I just give up on life? Not that I would actually consider that, but that's the feedback I seem to be getting repeatedly from the world: that my efforts aren't even worth a short comment disparaging them.
Also, I should note your comment is kind of off-topic - have you thought about creating an Ask HN about your issue? It's very possible other HNers could help you get more interest in your website and product. I would also recommend not being bitter about it; Hacker News is all about Silicon Valley and startups, and the main rule for those is that 9 times out of 10 you'll fail. It's normal.
Lastly, I should note: if you're doing things as a hobby, do them for yourself, remember to have fun, and create NO expectations of what you will or won't be able to achieve. If you're trying to get a startup or product going, there's a reason why Y Combinator is so popular - if you truly believe in what you're building, you should try getting it into an incubator.
Just my two cents.
The only thing that seems to consistently work is Twitter - and you should REALLY be building in public if you have no audience, so that you can build one. Have you tried this? If not, and you've been building in stealth mode, you've missed an opportunity, but you can still start now.
You are figuring out so much stuff in an area without experts - make a Twitter post every day about what new problem you solved, and you will quickly find a following as you establish authority in this new field. So many people are learning from each other about this stuff; it doesn't take much to become an expert.
I've been going through all the Indie Hackers podcasts. Consider listening to Indie Hackers #008 or #009; I think they specifically talk about bootstrapping an audience from nothing.
You may even find people are more interested in an educational product from you than your service, and pivot, for instance.
As far as the website premise itself, I feel like this might've been useful 20 years ago, when it was more popular to write static vanilla HTML with jQuery sprinkled in. These days, if a person wants a relatively no-code solution they're going to go for WordPress or Squarespace; otherwise they're probably going to design something using a framework such as React, Svelte, or Vue.
Wait until you see how great your day becomes when you're unnecessary ;).
I needed to extract info out of a page with quick and dirty JS. I always just copy and paste the data after a console.log. But this time I used ChatGPT: gave it an example of the HTML and told it how I wanted the CSV.
It wrote code that built the CSV, but then also created a Blob with the right MIME type, made an anchor element, appended the anchor to the document, and called click() to download the CSV directly! Mind blown. Makes me wonder how many other simple things I've just never thought to do but ChatGPT will uncover.
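For anyone who hasn't seen the trick, the generated code was roughly in this shape (the field names and filename are my own placeholders, and the `downloadCsv` half is browser-only):

```javascript
// Sketch of the pattern described: rows -> CSV text, then a Blob plus
// a temporary <a> element to trigger the download in the browser.
// The headers/filename below are placeholders, not the original code.
function toCsv(rows, headers) {
  const esc = v => `"${String(v).replace(/"/g, '""')}"`;
  const lines = [headers.map(esc).join(',')];
  for (const row of rows) {
    lines.push(headers.map(h => esc(row[h])).join(','));
  }
  return lines.join('\n');
}

function downloadCsv(csvText, filename = 'export.csv') {
  // Browser-only: wrap the text in a Blob with a CSV MIME type and
  // "click" a temporary anchor so the file downloads directly.
  const blob = new Blob([csvText], { type: 'text/csv' });
  const a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = filename;
  document.body.appendChild(a);
  a.click();
  a.remove();
}
```

The Blob-plus-anchor step is the part that replaces copy-pasting from the console.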
"I'm playing Old School Runescape. I want to efficiently level up my Mining. I'm currently level 90 and I want to do a tick-manipulating method. Something like 3t4g. I don't understand when to click and what I need. Can you explain the method?"
Here's the method it replied with, minus cruft at the start and end: " 1. Equip your pickaxe and head to the location where you want to mine.
2. Find a 3x3 square of rocks that you want to mine. The square should be in a straight line in front of you, with the rock you want to mine in the middle.
3. Stand in the square so that your character is in the middle of the square, facing the rock you want to mine.
4. Start mining the rock as normal.
5. As soon as you see the animation of your pickaxe hitting the rock, immediately right-click on the rock and select "Prospect". This will cancel the animation of your pickaxe hitting the rock and allow you to perform another action.
6. As soon as you see the prospect animation start, immediately right-click on the rock and select "Mine". This will cancel the prospect animation and allow you to perform another action.
7. Repeat steps 5 and 6 as quickly as possible.
8. Every fourth action, switch to a different rock in the 3x3 square. "
This is total nonsense. Not only is this not 3t4g, but the actions it describes don't work like that in the game. Obviously it's not understanding anything; it's forming sentences that sound natural in the context of the question. OSRS has one of the best, most complete wikis of any game. To me this is an example of being very cautious of trusting what it says when you ask it about something you don't know about.
Earlier this week I was discussing with family why "you should not freeze things twice". Someone made the mistake of asking ChatGPT, and it confidently pooped out that bacteria keep reproducing in frozen food!
(The actual reason is that freezing doesn't kill bacteria already present - cooling/freezing only pauses their growth, and that growth is cumulative across thaw cycles. So if you leave food out, re-freeze it, then leave it out again, the second time the bacteria may reach toxic levels.)
P.S. https://www.theregister.com/2023/03/02/chatgpt_considered_ha... - fake obituary link and all ...
On the other hand, I do find it quite useful for suggesting improvements to things. You can't rely on the suggestions, but they can be thought provoking.
> To me this is an example of being very cautious of trusting what it says when you ask it about something you don't know about.
More and more like humans all the time
ChatGPT remembers nothing in between chat sessions - so "dozens of hours of training" here doesn't make sense.
What would make sense here instead is coming up with a prompt (within a sensible token length limit) that can be pasted into future ChatGPT sessions that gives it some extra context it needs based on prior experimentation.
The thesis of the web revolution was “the world is too big to be understood by any one person”. The response was “let us all contribute to a shared map to compensate”
The thesis of the AI revolution is “the world seems compressible”. The response has been “let us condense and contain all the meaningful parts in a single generative pattern that can understand everything”
The first thesis is correct. The second thesis is wrong.
Not only is it wrong, but it depends on the fruits of the first thesis to create enough compressible material to trick you into thinking it may be true. The more society leans into AI the less genuine content will exist and the worse it will work.
The way I see it is that internet content is used to bootstrap the models, then supervision is used to train the models without the risk of a feedback loop causing quality loss.
I'm pretty new to ML so I may be missing something.
https://www.youtube.com/watch?v=qxicoCHRStA
...or rather telling a user what to do.