I don't really know TypeScript, so I've been using ChatGPT a lot to supplement my learning, but I find it really hard to accept any of its answers that aren't straight code examples I can test.
I find that ChatGPT is good at helping me with "unknown unknown" questions, where I don't know how to properly phrase my question for a search engine, so I explain to ChatGPT in vague terms how I am feeling about a certain thing.
ChatGPT helps me understand what to search for, and then I take it from there by looking for a reputable answer on a search engine.
But other than that, it makes me nervous when people say they're "learning with ChatGPT": any serious conversation with ChatGPT about a subject I know well quickly shows just how much nonsense and bullshit it conjures out of thin air. ChatGPT is extremely good at sounding convincing and authoritative, so you'll feel like you're learning a lot, when in fact you could be learning 100% made-up facts, and the only way to tell is if you already understand the subject.
Some of these people are just learning about the relationship between temperature and pressure, or current and voltage, etc. That's something well within the bounds of LLMs, and it's enriching their lives dramatically.
I asked it a question once to clarify a fact from a book I was reading that temporarily baffled my barely awake 2am mind.
“Why is humid air less dense than dry air? Isn’t water heavier than air?”
It went on to explain the composition of air, the molecular weights of the most common air molecules, and how the molecular weight of water (H2O) is lower than that of nitrogen (N2) and oxygen (O2).
My fallacy was in comparing air to liquid water, which people are more familiar with, rather than to water vapor, which is what you'd actually find in humid air.
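For what it's worth, that explanation holds up to a quick back-of-envelope check. Here's a minimal sketch in TypeScript (the molar masses are standard approximations; the 78/21 dry-air split and the 2% vapor fraction are just illustrative assumptions):

```typescript
// Approximate molar masses in g/mol.
const M_N2 = 28;  // nitrogen
const M_O2 = 32;  // oxygen
const M_H2O = 18; // water vapor: 2*1 + 16

// Dry air is roughly 78% N2 and 21% O2 (ignoring argon etc.).
const dryAir = 0.78 * M_N2 + 0.21 * M_O2; // ≈ 28.6 g/mol

// By Avogadro's law, a fixed volume at fixed temperature and pressure
// holds a fixed number of molecules, so water vapor displaces N2/O2
// rather than adding to them. Assume 2% of molecules are H2O:
const humidAir = 0.98 * dryAir + 0.02 * M_H2O; // ≈ 28.3 g/mol

console.log(dryAir > humidAir); // true: humid air is the lighter mix
```

The key step is that displacement assumption: humid air isn't dry air plus water, it's dry air with some of the heavier molecules swapped for lighter ones.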
>I don't really know TypeScript, so I've been using ChatGPT a lot to supplement my learning, but I find it really hard to accept any of its answers that aren't straight code examples I can test.
- How are you using it?
- What are the questions you're asking it?
- What are your thoughts on the answers, and how are you cross-checking them?
Edit:
>If you don't know the subject, how can you be sure what it's telling you is true? Do you vet what ChatGPT tells you with other sources?
I can't, but I can take a look at books I have or search Google to find additional sources.
To me, its biggest power is helping me understand and build mental models of something new.
For more open-ended questions I tend to treat it more like a random comment in a forum. For example, I often notice that TypeScript code examples rarely use the `function` keyword; they tend to use arrow functions like `const func = () => blah`. I asked ChatGPT why this is and it gave a plausible answer. I have no idea if what it's saying is true, but it seemed true enough, so I give the answer the same amount of trust as I would a random comment on Stack Overflow. The benefit of Stack Overflow, though, is that at least you know the reputation of the person you're talking to.
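For anyone who hasn't seen the two styles side by side, here's what that comparison looks like (a minimal sketch; whether the reason ChatGPT gave for the stylistic preference is accurate, I can't vouch for):

```typescript
// Function declaration: hoisted, with `this` determined by the call site.
function add(a: number, b: number): number {
  return a + b;
}

// Arrow function on a const: not hoisted, captures the surrounding
// lexical `this`, and the binding can't be accidentally reassigned.
const addArrow = (a: number, b: number): number => a + b;

console.log(add(1, 2), addArrow(1, 2)); // 3 3
```

The hoisting and `this` differences are real; whether they're actually why the convention caught on is exactly the kind of claim I'd want a second source for.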
People are reading too much into the comment. You wouldn't use ChatGPT to become as knowledgeable as someone with a PhD. The idea is: "If I want to ask an expert something, I have easy access to one now."
The real questions are:
1. For a given domain, how much more or less accurate is ChatGPT than the expert?
2. How available are the PhDs?
It makes sense to accept somewhat lower accuracy if ChatGPT is 10 times more available than a real PhD: you'll still learn a lot more, even though you'll also learn more wrong things. I'll take a ChatGPT that is accurate 80% of the time and available all day and night over a PhD who is accurate 90% of the time but whom I only get for 30 minutes per week.
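To put rough numbers on that trade-off (purely illustrative; the question rates below are made up to match the comment's accuracies):

```typescript
// Hypothetical weekly tallies for the availability-vs-accuracy trade-off.
interface Source { questionsPerWeek: number; accuracy: number }

const chatgpt: Source = { questionsPerWeek: 100, accuracy: 0.8 };
const phd: Source     = { questionsPerWeek: 10,  accuracy: 0.9 };

const correct = (s: Source) => s.questionsPerWeek * s.accuracy;
const wrong = (s: Source) => s.questionsPerWeek - correct(s);

console.log(correct(chatgpt), wrong(chatgpt)); // 80 right, 20 wrong per week
console.log(correct(phd), wrong(phd));         // 9 right, 1 wrong per week
```

Whether 80 right answers are worth 20 wrong ones is, of course, the whole debate in this thread.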
That applies to any article, book, or verbal communication with any human being, not only to LLMs.
I can pick up a college textbook on integral calculus and be reasonably assured of its veracity because it's been checked over by a proofreader, other mathematicians, and the publisher, and has previously been used in classrooms by experts in the field.
Of course, it's not a trivial task to find reputable sources and great books about a subject you don't know. But there are many ways to do that, for example by checking the curricula of respected universities to see which textbooks they use.
Well, even a very popular scientific theory, supported by the full consensus of the academic community of its time, can be proved wrong decades later.
Oddly enough, that's usually only the case for big theories, not for everything. You'd be hard-pressed to prove wrong our understanding of how to build bridges, for example.
Would you live in a skyscraper designed by ChatGPT?
The same question could be asked when we're learning from books or from an expert. There's no guarantee that books or experts are always telling the truth.
Unlike the PhD, the AI model has benchmark scores on truthfulness. Right now, they're looking pretty good.
Seriously, you're veering into sophistry.
People have reputations. They cite sources. Unless they're compulsive liars, they don't tend to just make stuff up on the spot based on what will be probabilistically pleasing to you.
There are countless examples of ChatGPT not just making mistakes but making up "facts" entirely from whole cloth, not based on misunderstanding or bias or anything else, but simply because the math says it's the best way to complete a sentence.
Let's not use vacuous arguments to dismiss that very real concern.
Edit: As an aside, it only now occurred to me that LLM bullshit generation may actually be more insidious than the human-generated variety: LLMs are specifically trained to produce language that's pleasing, which means the output is optimized to sound right, and therefore the misinformation may turn out to be more subtle and convincing...
The only real difference is that you’re imputing a particular kind of intention to the AI, whereas the human’s intention can be assumed good in the above scenario. The distinction between BS and unknowing falsehood is purely intention-based; it's a category error to attribute either to an LLM.
That's not even remotely true, and if you'd worked with these technologies at all you'd know that. For example, as I previously mentioned, humans don't typically make up complete fiction out of whole cloth and present it as fact unless they possess some sort of mental illness.
> The only real difference is that you’re imputing a particular kind of intention to the ai
No, in fact I'm imputing the precise opposite. These AIs have no intention because they have no comprehension or intelligence.
The result is that when they generate false information, it can be unexpected and unpredictable.
If I'm talking to a human I can make some reasonable inferences about what they might get wrong, where their biases lie, etc.
Machines fail in surprising, unexpected, and often subtle ways that make them difficult for humans to predict.
Edit: Please stop playing devil's advocate and pay attention to the words “in the way that LLMs do”. I really thought it would not be necessary to clarify that I know humans lie! LLMs lie in a different way. (When was the last time a person gave you a made-up URL as a source?) Also, I am replying to a conversation about a PhD talking about their preferred subject matter, not a regular person. An expert human in their preferred field is much more reliable than the LLMs we have today.
This applies to PhDs as well, and I don't agree that an expert human is automatically more reliable.
For example, on Stack Overflow you'll see questions like "How do I accomplish this thing?", where the best answer doesn't directly solve that question. The expert was able to intuit that you don't actually want to do the thing you're trying to do, and that you should instead take some alternative approach.
Is there any chance that models like these are able to course correct a human in this way?