Reed–Solomon came up about two thirds of the way through the semester, and the gist is that it's based on polynomials: with enough points you can pin down a polynomial exactly, so you include some extra points, and if some get lost along the way you can recreate the original.
The rest of it is how to apply that to binary data (finite fields), which is mathematically beautiful but where things get somewhat complex.
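To make that concrete, here's a minimal sketch of the erasure-recovery idea over a small prime field. (Real Reed–Solomon codes for binary data work in GF(256); a prime modulus just keeps the arithmetic readable. All names here are my own, not from any particular library.)

```python
# Sketch: a degree-2 polynomial is fixed by any 3 of its points, so
# evaluating it at 5 points gives 2 symbols of redundancy.
P = 929  # a prime, so every nonzero element has a modular inverse

def poly_eval(coeffs, x):
    """Evaluate the polynomial with the given coefficients at x, mod P (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def lagrange_interpolate(points, x):
    """Recover the polynomial's value at x from enough (xi, yi) samples."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Message = 3 symbols, treated as coefficients of a degree-2 polynomial.
message = [17, 42, 99]
# Encode: evaluate at 5 points.
codeword = [(x, poly_eval(message, x)) for x in range(1, 6)]
# Lose any two points; any 3 survivors still determine the polynomial.
survivors = [codeword[0], codeword[2], codeword[4]]
recovered = [lagrange_interpolate(survivors, x) for x, _ in codeword]
assert recovered == [y for _, y in codeword]
```

This only handles *erasures* (known-missing points); correcting symbols that arrive *wrong* is where the heavier machinery comes in.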
https://www.thonky.com/qr-code-tutorial/error-correction-cod...
https://dev.to/maxart2501/let-s-develop-a-qr-code-generator-...
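For anyone who wants the concrete GF(256) step those tutorials build up to, here's a hedged sketch of generating error-correction codewords the way QR codes do (reducing polynomial 0x11D, generator roots at consecutive powers of α = 2). The function names and the example data bytes are my own illustration, not from either tutorial.

```python
# Sketch of QR-style Reed-Solomon encoding in GF(256), using the
# QR spec's reducing polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D).

EXP = [0] * 512  # anti-log table, doubled so lookups skip a mod-255
LOG = [0] * 256
value = 1
for i in range(255):
    EXP[i] = value
    LOG[value] = i
    value <<= 1
    if value & 0x100:
        value ^= 0x11D
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """Multiply two GF(256) elements via the log/anti-log tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def rs_generator(n_ec):
    """Generator polynomial: product of (x + alpha^i) for i = 0 .. n_ec-1."""
    gen = [1]
    for i in range(n_ec):
        nxt = [0] * (len(gen) + 1)
        for j, coef in enumerate(gen):
            nxt[j] ^= coef                      # coef * x term
            nxt[j + 1] ^= gf_mul(coef, EXP[i])  # coef * alpha^i term
        gen = nxt
    return gen

def rs_ec_codewords(data, n_ec):
    """EC codewords = remainder of data * x^n_ec divided by the generator."""
    gen = rs_generator(n_ec)
    rem = list(data) + [0] * n_ec
    for i in range(len(data)):
        factor = rem[i]
        if factor:
            for j, g in enumerate(gen):
                rem[i + j] ^= gf_mul(g, factor)
    return rem[-n_ec:]

# A valid codeword (data followed by its EC bytes) is divisible by the
# generator, so re-encoding the whole thing yields an all-zero remainder.
data = [17, 42, 99, 180, 7]  # arbitrary example bytes
ec = rs_ec_codewords(data, 10)
assert rs_ec_codewords(data + ec, 10) == [0] * 10
```

The final assertion is the same self-check a decoder performs: if the received codeword doesn't divide the generator evenly, some symbol got corrupted.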
The blog owner exudes elitist vibes in the commentary. A quick skim of the blog reveals a request for Bitcoin donations suggesting $3 as the amount, without considering that a large portion of such a donation would be eaten up by transaction fees. </rant>
I can see why these racists come to the conclusion that all Indians speak a certain way. If they see something written with a few quirks common to Indian English they confirm their bias that all Indians speak and write that way. If they see text without that tell, their bias is still confirmed because they conclude this person must have grown up elsewhere.
For the racists at the back - language diverges over time. That’s perfectly normal. As the reader/listener it’s easier for us to make the effort to understand than it is for someone to change how they speak. If you’re ok with making an effort to understand unusual words and phrases used by Australian, Scottish, Irish, Kiwi people but you won’t do the same for Indian people, reflect on why you do that.
People from NZ change most “e” sounds to “i”, so they’d eat pincakes for breakfast for example. I find that quirk endearing. Or Australians using words like ute, jaffle etc. But somehow only white English speakers are given the benefit of the doubt when they do this? Why can’t Indians get the same thing when they’re speaking their second language?
The characterization of some of these senders as lazy is simply not true: I once engaged with a student who wrote like some of those examples and was trying to contribute to a FOSS project I ran; he turned out to be an excellent contributor who nobody could reasonably say was lacking in either skill or effort. It is usually just a combination of shyness and excessive respect that produces these 'lazy' requests. And, frankly, using words like 'ur' and 'thx' is how some 100% native speakers of English write. (The ever-relevant XKCD strikes again, #1414 this time.)
I consider myself to be extremely lucky that my native language happens to be the lingua franca of the computer industry if not the world, and even luckier that I don't have any impediment such as dyslexia that would hinder me capitalizing on that good fortune to the full.
And finally, yes, most of what one receives online is spam. Lots of spam, in fact, but when someone makes at least some effort to contact me individually, I try to make at least that much effort in return.
In some cases, Nayuki is not even happy with people pointing out legit errors in their blog. For example:

> I want to use the parameters [...] and
>
> invNTT(NTT(invec)) != invec ?
And then there's "that shameless country", "that needy country", "that unspeakable country", as others have pointed out. ... really? Yeah, we've all gotten spammy emails from Indian senders; joking about it is one thing, but that's just gross.
I was looking for algorithms to extract out a QR code from an image.
I am looking for a guide that walks you through implementing all the algorithms necessary after you have the decoded raw image data.
https://greggman.github.io/qr-code/
I might add more options, but in truth I don't think most users need them.
The part I'd also like to know about is error correction, if you have anything useful related to QR codes for that.
[1]: https://pdf.ahaprintables.com/pdf/preview/aha/zebra-puzzles-... (PDF)