Because there are other countries that use languages other than English?
I fucking hate you ascii-centric ignorant morons sometimes, you know, for example
- display the welcome message character by character, from left to right
- extract the first character, because it's always the surname
- take a couple of non-ASCII keywords and find their index in a string
In the first example, should I just output byte by byte, which displays as garbage until three bytes suddenly combine into a recognizable character?
> I fucking hate you ascii-centric ignorant morons
Nice.
> You ignorant, arrogant fuck.
This is why I quit posting under an alias, so I wouldn't be tempted to say such things.
> display welcome message character by character fro left to right
UTF-16/UCS-4/UCS-2 doesn't solve anything here. Counting characters doesn't help. For example, imagine trying to print Korean character by character. You might get garbage like this:
ᄋ
아
안
안ᄂ
안녀
안녕
안녕ᄒ
안녕하
안녕하ᄉ
안녕하세
안녕하세ᄋ
안녕하세요
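The staircase above is easy to reproduce. Here is a minimal Python 3 sketch of my own (the use of `unicodedata.normalize` is an assumption about how the poster produced the output, not from the thread itself) that decomposes the syllables into conjoining Jamo and prints one code point at a time:

```python
import unicodedata

# Decompose precomposed Hangul syllables into conjoining Jamo (NFD).
decomposed = unicodedata.normalize('NFD', '안녕하세요')

# Each prefix is a valid sequence of code points, but the trailing
# Jamo renders as a half-formed syllable until its partner arrives.
for i in range(1, len(decomposed) + 1):
    print(decomposed[:i])
```

Note that this has nothing to do with the byte encoding: the loop advances one whole code point at a time, and the output is still garbage mid-syllable.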
Fixed-width encodings do not solve this problem, and UTF-8 does not make this problem more difficult. I am honestly curious why you would need to count characters -- at all -- except for posting to Twitter. Splitting on characters is garbage. (This example was done in Python 3, so everything is properly encoded and there is no need for the 'u' prefix. The 'u' prefix is a no-op in Python 3; it is only there for Python 2.x compatibility.)
>>> x
'안녕하세요'
>>> x[2:4]
'ᆫᄂ'
I tried in the Google Chrome console, too:
> '안녕하세요'.substr(2,2)
"하세"
> '안녕하세요'.substr(2,2)
"ᆫᄂ"
I'm not even leaving the BMP and it's broken! You seem to be blaming encoding issues, but I don't have any issues with encoding. It doesn't matter whether Chrome uses UCS-2 or Python uses UCS-4 or UCS-2; what's happening here is entirely expected, and it has everything to do with Jamo and nothing to do with encodings.
>>> a = '안녕하세요'
>>> b = '안녕하세요'
# They only look the same
>>> len(a)
5
>>> len(b)
12
>>> def p(x):
...     return ' '.join('U+{:04X}'.format(ord(c)) for c in x)
...
>>> print(p(a))
U+C548 U+B155 U+D558 U+C138 U+C694
>>> print(p(b))
U+110B U+1161 U+11AB U+1102 U+1167 U+11BC U+1112 U+1161 U+1109 U+1166 U+110B U+116D
See? This is the expected -- and broken-looking -- behavior you get when splitting on character boundaries. If you think you can split on character boundaries, you are living in an ASCII world. Unicode does not work that way. Don't think that normalization will solve anything either. (Okay, normalization solves some problems. But it is not a panacea. Some languages have grapheme clusters that cannot be precomposed.)
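To make the "not a panacea" point concrete, here is a small Python 3 sketch (my own example, not from the thread): A plus combining diaeresis composes to a single code point under NFC, but g plus combining tilde (used in Guaraní) has no precomposed form, so it stays a multi-code-point grapheme cluster no matter how you normalize:

```python
import unicodedata

a_umlaut = 'A\u0308'  # A + COMBINING DIAERESIS: composes to U+00C4
g_tilde = 'g\u0303'   # g + COMBINING TILDE: no precomposed form exists

# NFC collapses the first pair into one code point...
print(len(unicodedata.normalize('NFC', a_umlaut)))  # 1

# ...but the second stays two code points: one grapheme, two code points,
# and code-point slicing can still cut straight through the middle of it.
print(len(unicodedata.normalize('NFC', g_tilde)))   # 2
```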
Fixed-width may be faster for splitting on character boundaries, but splitting on character boundaries only works in the ASCII world.
Why? If you can count characters (code points), then it's natural that you can split or take substrings by character.
Try this in javascript:
'안녕하세요'.substr(2,2)
Internally, fixed-length encoding is much faster than variable-length encoding.
> Unicode does not work that way.
It DOES.
> Splitting on characters is garbage.
You messed up Unicode in Python on so many levels. Those characters you see in the Python console are actually not Unicode; they are just bytes on sys.stdout that happen to be correctly decoded and properly displayed. You should always use the u'' prefix for any kind of characters. '안녕하세요' is WRONG and may lead to unspecified behavior: it depends on your source file encoding, interpreter encoding, and sys default encoding; if you display it in a console it depends on the console encoding, and if it's a GUI or HTML widget it depends on the GUI widget or Content-Type encoding.
> I'm not even leaving the BMP and it's broken!
Your unicode-fu is broken. It looks like your example used identical Korean strings, which the ICU module in Chrome might have auto-normalized for you.
> You can't split decomposed Korean on character boundaries.
In a broken Unicode implementation, like the Chrome browser's V8 JS engine.
> I happen to be using Python 3. It is internally using UCS-4.
For the love of BDFL read this
dietrichepp is talking about Normalization Form D, which is a valid form of Unicode and cannot be counted using code points the way you're doing.
Maybe you can try:
'𠀋'.substr(0,1)
>>> u'𡘓'[0:1]
u'\U00021613'
>>> u'Hi, Mr𡘓'[-1]
u'\U00021613'
>>> u'𠀋'[0:1]
u'\U0002000b'
JavaScript won't work because of UCS-2 in the JS engine, duh. Actually, JavaScript is messed up with Unicode strings and binary strings; that's why Node.js invented Buffer.
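The contrast behind those console transcripts can be sketched in Python 3 (my own illustration): Python indexes by code point even outside the BMP, while the same character occupies two UTF-16 code units, which is exactly what trips up JavaScript's `substr`:

```python
s = '\U0002000B'  # 𠀋, a CJK ideograph outside the BMP

print(len(s))                      # 1 code point in Python 3
print(len(s.encode('utf-16-le')))  # 4 bytes = 2 UTF-16 code units (a surrogate pair)
print(s[0:1] == s)                 # True: code-point slicing keeps the character whole
```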
Consider the following sequence of code points: U+0041 U+0308 [edit: corrected sequence]
That equals this european letter: Ä
Two code points, one letter. MAGIC! You can also get the same-looking letter with a single code point using U+00C4 (unicode likes redundancy).
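In Python 3 terms, a sketch of my own using the code points named above:

```python
import unicodedata

combining = 'A\u0308'   # U+0041 + U+0308 (COMBINING DIAERESIS)
precomposed = '\u00C4'  # U+00C4, LATIN CAPITAL LETTER A WITH DIAERESIS

print(len(combining), len(precomposed))  # 2 1 -- same letter, different counts
print(combining == precomposed)          # False until you normalize
print(unicodedata.normalize('NFC', combining) == precomposed)  # True
```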
Not all languages have letters. Not all languages that have letters represent each one with a single code point. Please think twice before calling people "morons."
Yes, I understand there are a million ways to display the same shape using various Unicode sequences. But how does that make code-point counting impossible?
AND if you explicitly use COMBINING DIAERESIS instead of the single U+00C4, how is counting the diaeresis separately wrong?
Why don't we make a law stating that both ae and æ are a single letter?
Yeah, like your Jamo trick is complex for a native CJK speaker.
Think Jamo is hard? Check out Ideographic Description Sequences. We have like millions of 偏旁部首笔画 (radicals, components, and strokes) that you can freestyle-combine.
And the fun part is the relative length of glyphs: 土 and 士 are different only because one line is longer than the other. How would you distinguish that?
But you know what your problem is?
It's like arguing with you that you think ส็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็็ is only one character.
IMPOSSIBU?!!!???
And because U+202E exists on the lolternet, we should give up the ability to count the 99% of normal CJK characters???!??!111!
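The Thai example above is just a base consonant with the same combining mark stacked many times over. A hypothetical reconstruction in Python 3 (the exact mark count in the original post may differ):

```python
# One base character plus 30 copies of U+0E47 (THAI CHARACTER MAITAIKHU).
stacked = '\u0E2A' + '\u0E47' * 30

# 31 code points, yet renderers treat it as a single grapheme cluster.
print(len(stacked))
```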
Combining characters are normalized to a single character in most cases, and should be countable and indexable separately.
If you type combining characters EXPLICITLY, they will naturally be counted per combination; what's wrong with that?
Or else why don't we abandon Unicode and let every country deal with its own weird glyph-composition shit?
It seems to me you're deriding us for being native speakers of languages with alphabets, and also deriding us for wanting APIs that prevent developers from alphabet-language backgrounds from making the mistakes our assumptions would incline us towards. You're going to have to decide if you're angry because you like the "simplicity" of UTF-16, because we don't speak a CJK language as well as you do (maybe Dietrich or Colin does; I have no idea) or because you're just angry and this is where you've come to blow off steam. If it's the third, I hope you'll try Reddit first next time, since this kind of behavior seems to be a lot more acceptable there than here.
Fixed-width encodings can count code points (I worded it as "characters") faster than variable-length ones.
Then dietrichepp tries to educate me that two combined code points should be treated equally with another single code point. WTF y u no normalization?
Downvote me as you like, but you can't change the fact that UCS-4 is used internally in Unicode systems.
Any reason other than for faster code point counting?
-----------------
dietrichepp also offended me by saying Unicode characters should not be counted or offset. QFT:
> Why do you want to count Unicode characters? Why do you care if it is fast to do so? Why would you ever need to use character-based string indexing?