If not:
UTF-16 was born of UCS-2 being a very poor encoding: it was limited to the Unicode BMP, which means 2^16 codepoints, but Unicode has many more codepoints than that, so users couldn't have the pile-of-poo emoji. Something had to be done, and that something was to create a variable-length (in terms of 16-bit code units) encoding using a few then-unassigned codepoints in the BMP as surrogates. The result yields only a sad, pathetic, measly ~2^21 codepoints, and that's just not that much.

Moreover, while many encodings play well with ASCII, UTF-16 doesn't. Also, decomposed forms of Unicode glyphs necessarily involve multiple codepoints, thus multiple code units...

Many programmers hate variable-length text encodings because they can't do simple array indexing operations to find the nth character in a string, but with UTF-8, UTF-16, and just plain decomposition, that's a fact of life anyway. If you're going to have a variable-length encoding, you might as well use UTF-8 and get all its plays-nice-with-ASCII benefits. For Latin-mostly text UTF-8 is also more efficient than UTF-16, so there is a slight benefit there.
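A quick sketch of the surrogate mechanism in Python (the pile-of-poo emoji, U+1F4A9, is just a convenient example of a codepoint outside the BMP):

```python
# U+1F4A9 lies outside the BMP, so UTF-16 must spend two 16-bit
# code units on it: a surrogate pair from the D800-DFFF range.
poo = "\U0001F4A9"

utf16 = poo.encode("utf-16-be")
units = [int.from_bytes(utf16[i:i + 2], "big") for i in range(0, len(utf16), 2)]
print([hex(u) for u in units])   # ['0xd83d', '0xdca9'] -- high and low surrogate

# UTF-8 spends four bytes on the same codepoint, while leaving
# every ASCII byte meaning exactly what it means in ASCII.
print(len(poo.encode("utf-8")))  # 4
```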
Much of the rest of the non-Windows, non-ECMAScript world has settled on UTF-8, and that's a very very good thing.
UTF-8 uses a variable-length encoding that allows for more characters: if restricted to four bytes, it allows for 2^21 total code points; it's designed to eventually allow for 2^31 code points, which works out to about 2 billion code points that can be expressed.
(Granted, this is all hypothetical -- Unicode isn't even close to filling all of the space that UTF-16 allows; there aren't enough known writing systems yet to be encoded to fill all of the remaining Unicode planes (planes 3-13 of the 17 are still unassigned). But UTF-16's still nonstandard (most of the world's standardized on UTF-8) and kind of ugly, so the sooner it goes away, the better.)
* Your timeline is backwards. UTF-8 was designed for a 31-bit code space. Far from that being its future, that is its past. In the 21st century it was explicitly reduced from 31-bit capable to 21 bits.
* UTF-16 is just as standard as UTF-8 is, it being standardized by the same people in the same places.
* 17 planes is 21 bits; it is 16 planes that is 20 bits.
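The plane arithmetic is easy to check, e.g. in Python:

```python
# 17 planes: codepoints run from 0 to 0x10FFFF, which needs 21 bits.
print((17 * 0x10000 - 1).bit_length())  # 21
# 16 planes: 0 to 0xFFFFF fits in exactly 20 bits.
print((16 * 0x10000 - 1).bit_length())  # 20
```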
https://www.joelonsoftware.com/2003/10/08/the-absolute-minim...
https://en.wikipedia.org/wiki/Comparison_of_Unicode_encoding...
I was confused about this for years, too. But it turns out it's just a problem of bad naming. Happens more in this industry than we'd like to admit.
As others explained, it boils down to UTF-16 being 16-bit, and UTF-8 being anything from 8- to 32-bit. It should have been named UTF-V (from "variable") or something, but here we are.
UTF-8 is a variable-length encoding using up to 4 code units (though it used to be up to 6, and could again be up to 6) each of which are 8-bits wide.
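A sketch of how a 4-code-unit UTF-8 sequence packs its 21 payload bits, using U+1F4A9 as an arbitrary example:

```python
b = "\U0001F4A9".encode("utf-8")
print([hex(x) for x in b])  # ['0xf0', '0x9f', '0x92', '0xa9']

# The lead byte 11110xxx contributes 3 bits; each 10xxxxxx continuation
# byte contributes 6 more: 3 + 6 + 6 + 6 = 21 bits of codepoint.
cp = ((b[0] & 0x07) << 18) | ((b[1] & 0x3F) << 12) | ((b[2] & 0x3F) << 6) | (b[3] & 0x3F)
print(hex(cp))  # 0x1f4a9
```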
Both UTF-16 and UTF-8 are variable-length encodings!
UTF-32 is not variable-length, but even so, the way Unicode works, a character like á can be written in two different ways, one of which requires one codepoint and one of which requires two (regardless of encoding), while ṻ (LATIN SMALL LETTER U WITH MACRON AND DIAERESIS) can be written in three different ways requiring from one to three codepoints (regardless of encoding).
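The á case can be seen directly with Python's unicodedata module:

```python
import unicodedata

precomposed = "\u00E1"  # á as a single codepoint
decomposed = "a\u0301"  # a + COMBINING ACUTE ACCENT

# Different codepoint sequences, so naive comparison says they differ...
print(precomposed == decomposed)  # False
# ...but they are canonically equivalent, which normalization reveals.
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
print(len(precomposed), len(decomposed))  # 1 2
```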
Not every character has a one-codepoint representation in Unicode, or at least not every character has a canonically precomposed one-codepoint representation in Unicode.
Therefore, many characters in Unicode can be expected to be written in multiple codepoints regardless of encoding, and all programmers dealing with text need to be prepared for being unable to do an O(1) array index operation to get at the nth character of a string.
(In UTF-32 you can do an O(1) array index operation to get to the nth codepoint, not character, but one is usually only ever interested in getting the nth character.)
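For example, in Python, whose strings index by codepoint (UTF-32-style semantics):

```python
s = "e\u0301tude"  # "étude" with a decomposed é

# Codepoint indexing gives the base letter only; the accent is s[1].
print(s[0])   # 'e'
# 6 codepoints, though a reader sees 5 characters.
print(len(s)) # 6
```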