Note: As I'm shy about my writing style, GPT helped me refine the above.
I don't really know TypeScript, so I've been using it a lot to supplement my learning, but I find it really hard to accept any of its answers that aren't straight code examples I can test.
I find that ChatGPT is good at helping me with "unknown unknown" questions, where I don't know how to properly phrase my question for a search engine, so I explain to ChatGPT in vague terms how I am feeling about a certain thing.
ChatGPT helps me understand what to search for, and then I take it from there by looking for a reputable answer on a search engine.
But other than that, it makes me nervous when people say they're "learning with ChatGPT": any serious conversation with ChatGPT about a subject I know about quickly shows just how much nonsense and bullshit it conjures out of thin air. ChatGPT is extremely good at sounding convincing and authoritative, and you'll feel like you're learning a lot, when in fact you could be learning 100% made-up facts, and the only way to tell is if you already understand the subject.
>I don't really know TypeScript, so I've been using it a lot to supplement my learning, but I find it really hard to accept any of its answers that aren't straight code examples I can test.
- How are you using it?
- What are the questions you're asking it?
- What are your thoughts about the answers and how are you cross checking them?
Edit:
>If you don't know the subject, how can you be sure what it's telling you is true? Do you vet what ChatGPT tells you with other sources?
I can't, but I can take a look at books I have or search Google to find additional sources.
To me, the biggest power of it is to help me understand and build mental models of something new.
For more open-ended questions I tend to treat it more like a random comment in a forum. For example, I often notice that TypeScript code examples don't use the `function` keyword very often; they tend to use arrow functions like `const func = () => blah`. I asked ChatGPT why this is and it gave a plausible answer. I have no idea if what it's saying is true, but it seemed true enough. I give the answer the same amount of trust as I would some random comment on Stack Overflow. The benefit of Stack Overflow, though, is that at least you know the reputation of the person you're talking to.
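For what it's worth, the `function` vs. arrow-function difference is one of the things you *can* verify yourself with a testable snippet (all names below are my own invention; this shows general TypeScript behavior, not necessarily whatever explanation ChatGPT gave):

```typescript
// A function declaration is hoisted, so it can be called anywhere in its scope.
function add(a: number, b: number): number {
  return a + b;
}

// An arrow function assigned to a const is not hoisted, and it captures
// `this` from the enclosing scope instead of getting its own.
const multiply = (a: number, b: number): number => a * b;

class Counter {
  count = 0;
  // Arrow-function property: `this` stays bound to the instance even when
  // the method is detached and passed around as a callback.
  increment = () => {
    this.count += 1;
  };
}

const counter = new Counter();
const detached = counter.increment; // detached from the instance
detached();                         // still increments the right object
console.log(counter.count);         // 1
```

With a regular method instead of the arrow property, the detached call would lose `this` and throw at runtime, which is one commonly cited reason people default to arrows for callbacks.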
People are reading too much into the comment. You wouldn't use ChatGPT to become as knowledgeable as obtaining a PhD. The idea is "If I wanted to ask an expert something, I have easy access to one now."
The real questions are:
1. For a given domain, how much more/less accurate is ChatGPT?
2. How available are the PhDs?
It makes sense to accept a somewhat lower accuracy if the source is 10 times more available than a real PhD - you'll still learn a lot more, even though you also learn more wrong things. I'll take a ChatGPT that is accurate 80% of the time and is available all day and night over a PhD who is accurate 90% of the time but whom I only get 30 minutes with per week.
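The arithmetic behind that trade-off is easy to sketch (every number here is invented for illustration, including the accuracy figures):

```typescript
// Back-of-the-envelope comparison: invented numbers only.
// "Correct answers per week" = sessions × questions per session × accuracy.

const questionsPerSession = 10;

// ChatGPT: hypothetically one session every day of the week, 80% accurate.
const chatgptCorrectPerWeek = (7 * questionsPerSession * 80) / 100; // 56

// PhD: one 30-minute session per week, 90% accurate.
const phdCorrectPerWeek = (1 * questionsPerSession * 90) / 100; // 9
```

So the less accurate source can still deliver far more correct answers overall, at the cost of also delivering more wrong ones (7 × 10 × 20% = 14 wrong answers per week versus 1 for the PhD).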
That applies to any article, book, or verbal communication with any human being, not only to LLMs.
I can pick up a college textbook on integral calculus and be reasonably assured of its veracity because it's been checked over by a proofreader, other mathematicians, and the publisher, and has previously been used in a classroom environment by experts in the field.
The same question could be asked when we're learning through books or an expert. There's no guarantee that books or experts are always spitting out the truth.
Unlike the PhD, the AI model has benchmark scores on truthfulness. Right now, they're looking pretty good.
Seriously, you're veering into sophistry.
People have reputations. They cite sources. Unless they're compulsive liars, they don't tend to just make stuff up on the spot based on what will be probabilistically pleasing to you.
There are countless examples of ChatGPT not just making mistakes but making up "facts" entirely from whole cloth, not based on misunderstanding or bias or anything else, but simply because the math says it's the best way to complete a sentence.
Let's not use vacuous arguments to dismiss that very real concern.
Edit: As an aside, it only just now occurred to me that LLM bullshit generation may actually be more insidious than the human-generated variety: LLMs are specifically trained to produce language that's pleasing, which means they try to make sure it sounds right, so the misinformation may turn out to be more subtle and convincing...
Edit: Please stop playing devil's advocate and pay attention to the words "in the way that LLMs do". I really thought it would not be necessary to clarify that I know humans lie! LLMs lie in a different way. (When was the last time a person gave you a made-up URL as a source?) Also, I am replying to a conversation about a PhD talking about their preferred subject matter, not a regular person. An expert human in their preferred field is much more reliable than the LLMs we have today.
For example, on Stack Overflow you'll see questions like how do I accomplish this thing, but the best answer is not directly solving that question. The expert was able to intuit that you don't actually want to do the thing you're trying to do. You should instead take some alternative approach.
Is there any chance that models like these are able to course correct a human in this way?
I understand it has no sense of knowledge-of-knowledge, so (apparently) no ability to determine how confident it ought to be about what it's saying — it never qualifies with "I'm not entirely sure about this, but..."
I think this is something that needs to be worked in ASAP. It's a fundamental aspect of how people actually interact. Establishing oneself as factually reliable is fundamental for communication and social cohesion, so we're constantly hedging what we say in various ways to signify our confidence in its truthfulness. The absence of those qualifiers in otherwise human-seeming and authoritative-sounding communication is a recipe for trouble.
It is scary in the sense that people love following confident-sounding authoritarians, so maybe AI will be our next world leader.
I don't mind it giving me a wrong answer. What's really bad is confidently giving the wrong answer. If a human replied, they'd say something like "I'm not sure, but if I remember correctly..." or "I would guess that..."
I think the problem is they've trained ChatGPT to respond confidently as long as it has a rough idea about what the answer could be. The AI doesn't get "rewarded" for saying "I don't know".
I'm sure the data about the confidence is there somewhere in the neural net, so they probably just need to somehow train it to present that data in its response.
- ChatGPT
Four years later, the second doctor asked me, "I wonder why my colleague decided not to take a tissue sample from [some place in the stomach]." I said out loud, "I didn't even know what that is, let alone ask him why he didn't."
No, that's not the same way that anyone lacking knowledge gains confidence in the things that others tell them.
A technique one can use instead of blindly trusting what one person may tell us is seeking out second opinions to corroborate new info. This works for many things you might not have personal experience with: automobiles, construction, finance, medicine, &c.
Some random redditor ended up figuring it out. Then every physician from that point forward agreed with the diagnosis.
License-based medicine :(
I am sure if you always wished to have a personal PhD in a particular subject, you could find shady universities out there who could provide one without much effort.
[I may be exaggerating, but the point still stands because the previous user also didn't mean a literal PhD]
I live near UCI and yes, I can find one, but at a sizable cost. I'm not opposed to that, but it's still a good chunk of money.
...without going anywhere.
Wikipedia isn't great compared to a degree from a top university, but it's also readily available and is often a first reference for many of us.
I can ask it about the certification process, what certified pilots can and can’t do, various levels of certification, etc.
.... maybe.