[1]: https://github.com/simonw/advent-of-code-2022-in-rust/issues... [2]: https://doc.rust-lang.org/std/string/struct.String.html#utf-... [3]: https://stackoverflow.com/a/24542502
let s1 = "[[..]]";
// Rats, indexing into s1 doesn't work †
let s2 = b"[[..]]";
// s2 is just an array of bytes
assert_eq!(s2[4], b']');
† Technically it works fine, it's just probably not what you wanted.

As the docs point out, `String` and `&str` are simply types that either own or borrow some memory (i.e. bytes), and the types/operations guarantee those bytes are valid UTF-8, i.e. a sequence of Unicode code points (a.k.a. characters). A code point is one to four bytes when encoded as UTF-8.
Grapheme clusters are more complicated. Roughly speaking they are a collection of code points that match more what humans expect (and depend on the language/script), e.g. `ü` can actually be two code points, `u` + a combining diaeresis, and splitting after the `u` could be nonsensical. AFAIK, Rust's standard library doesn't really provide a way to deal with grapheme clusters? EDIT: it used to, but it got deprecated and removed [0]
So TL;DR: 1-4 bytes => 1 character, 2+ characters => maybe 1 grapheme cluster. Hope that helps either you, or someone else reading this.
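To make the byte/code point/grapheme distinction concrete, here is a small sketch using only the standard library (for real grapheme-cluster handling you'd reach for a third-party crate such as `unicode-segmentation`, since the stdlib support was removed as noted above):

```rust
fn main() {
    let s = "héllo";
    // `é` is two bytes in UTF-8, so byte length and code point count differ:
    assert_eq!(s.len(), 6);            // bytes
    assert_eq!(s.chars().count(), 5);  // code points

    // Direct indexing like s[1] does not compile; iterate over chars instead:
    assert_eq!(s.chars().nth(1), Some('é'));

    // Byte-level access is always available:
    assert_eq!(s.as_bytes()[0], b'h');

    // A grapheme cluster can span multiple code points:
    // 'u' + U+0308 (combining diaeresis) renders as a single ü.
    let u_combined = "u\u{0308}";
    assert_eq!(u_combined.chars().count(), 2); // two code points, one grapheme
}
```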
I would say this is a bug in the compiler error: it should make clear not only that you can't index into the string, but also why. If the explanation is too long, it should link to the right section of the book.
For example, I was learning Bazel using it and I spent 2-3 hours trying to debug an issue going back and forth with it. Eventually I went to the docs and found the solution in 5 mins.
The problem is it doesn’t know when it is wrong: it will always spit out an answer, and it will make up libraries that sound real in order to give you one. It doesn’t understand the logic behind what it is saying; it’s just able to spit out a reasonable-looking answer because it is a generalizable statistical model of language.
For example, try asking it “Generate 5 Python homework questions on Classes that students cannot cheat on using ChatGPT, GPT-3 or Assistant.” The questions it generates are not ones that are hardened against itself. It is not able to think logically because it does not think like a human; it’s doing something else entirely, so calling it “true intelligence” is not accurate.
If you want to read how it describes its own intelligence and identity, I found a prompt to get it to do that; it does not describe itself as a human intelligence: https://twitter.com/faizlikethehat/status/159949598085168332...
Which will come...
It is an ambiguous term that can refer to many different things.
It is used interchangeably with cognitive abilities, personality traits, knowledge, memorization abilities, hard skills, soft skills, etc. on a regular basis.
Just use the specific term instead. Problem solved.
Before people finish their pointless discussion about what intelligence is, billionaires will have taken over the world with the help of these systems.
I asked chatGPT about it and it very confidently said one existed and gave me a sample code which imported a nonexistent chunking iterator. I wonder about other outputs of chatGPT which are not as easily and quickly verifiable.
How would that work on general iterators? You can have an iterator that always returns 'y'. Or buffered iteration or whatever.
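For what it's worth, a lazy chunking adapter is straightforward to hand-roll, and it works on unbounded iterators too, since each call to `next` only pulls `size` items from the source. (The name `Chunked` here is illustrative; the standard library has no such adapter for arbitrary iterators, which is presumably what the model hallucinated, and the `itertools` crate is the usual real-world choice.)

```rust
/// Minimal lazy chunking adapter sketch: wraps any iterator and yields
/// `Vec`s of up to `size` items at a time.
struct Chunked<I: Iterator> {
    inner: I,
    size: usize,
}

impl<I: Iterator> Iterator for Chunked<I> {
    type Item = Vec<I::Item>;

    fn next(&mut self) -> Option<Self::Item> {
        // Pull at most `size` items; an empty pull means the source is done.
        let chunk: Vec<_> = self.inner.by_ref().take(self.size).collect();
        if chunk.is_empty() { None } else { Some(chunk) }
    }
}

fn main() {
    // Works on a finite iterator...
    let chunks: Vec<_> = Chunked { inner: 1..=7, size: 3 }.collect();
    assert_eq!(chunks, vec![vec![1, 2, 3], vec![4, 5, 6], vec![7]]);

    // ...and, because it is lazy, an infinite source like an iterator that
    // always returns 'y' is fine as long as you only take finitely many chunks.
    let ys: Vec<_> = Chunked { inner: std::iter::repeat('y'), size: 2 }
        .take(2)
        .collect();
    assert_eq!(ys, vec![vec!['y', 'y'], vec!['y', 'y']]);
}
```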
It turns out if you build a model that can predict what word comes next after a sequence of previous words, then train it on TBs (PBs?) of data, you get something that really does appear to be "intelligent". And which is absurdly useful, to boot!
Until you trip it up, and the illusion shatters.
Of course some info is sometimes false, but I don't find myself getting disillusioned by it all. It is what it is, and knowing the limitations doesn't ruin it for me. It is still incredibly valuable, and irreplaceable, imo.
The open question for me is how much it can benefit people with a novice level of understanding - that's one of the reasons I'm exploring Rust with Advent of Code using it.
I have 20+ years of non-Rust programming experience though, so I have no idea how well this would work for someone learning to program for the first time. I'd be fascinated to see how well it works (or doesn't) for complete programming newcomers.
There have been a number of instances where I would write a comment describing a simple operation and it would struggle or generate a lot of noise, but if I just started writing an implementation it would give me a good suggestion pretty quickly. I guess the extra context helped.
I was disappointed that it wasn’t more useful as a discovery tool: when I’m not knowledgeable about a language or framework, it can be hard to judge whether its suggestions are subtly wrong. After a while, I started getting a sense for when suggestions are likely to be valuable, which is generally when I have a concrete idea of what I want and what it should look like. When I’m doing more exploratory development I often just ignore the completion. Using ChatGPT instead for learning new stuff is a great idea and I’ll definitely be trying that.
Something I wasn't expecting about Copilot is that it has actually been giving me pretty good completions for Emacs Lisp. I think it’s going to be a very valuable tool in the long run and I recommend giving it a second thought if you’ve dismissed its utility.
I always found it a bit tedious to have to type out everything when you know you want a loop over an array or something. Now you can just tell ChatGPT what you want and let it do the typing. Disregarding the mistakes it makes while appearing confident about its answers :-)
Now programmers can finally be directors where ChatGPT is the film team or the cinematographer. Sometimes it's overconfident and you need to correct it to conform to your imagination. Also as a programmer/director you still need a lot of knowledge to ask the right questions, judge the answers and so on (as demonstrated in Simon's writeups).
I was already very optimistic about Copilot by itself being able to basically eliminate the need to check StackOverflow/Docs for basic questions. Combined with ChatGPT I can essentially offload all boilerplate I would ever need to write (provided I'm still able to identify & correct the few errors it spits out from time to time).
Unless you are saying Rust can do whatever it wants with its memory, with no clear specification and no way for you to force a particular layout, in which case my answer is that Rust has failed as a low-level language at that point.