If we still need to target es5 4 years later, and transpilation is standard practice, why bother? Is the evolution of JS not directed in practice by the authors of Babel and Typescript? If no one can confidently ship this stuff for years after, what’s the incentive to even bother thinking about what is official vs a Babel-supported proposal?
I like the idea of idiomatic JS with powerful modern features, but in practice every project I’ve seen seems to use a pretty arbitrary subset of the language, with different ideas about best practices and what the good parts are.
If you never release new standards you'll never be able to use them. I realize that in JS world 4 years is basically an eternity so it's hard to project that far but I'm sure that if you still have to write javascript code 10 years from now you'll be happy to be able to use the features of ES2019.
Regarding transpilation surely it's not as standard as you make it out to be? It's popular to be sure but handwritten javascript is not that rare nowadays, is it?
The impression I got from working on various projects over the past couple of years was that if the build doesn't include Babel/Webpack/Parcel/etc then you're not doing a 'professional' job.
> ... handwritten javascript is not that rare nowadays, is it?
I love handwriting vanilla javascript, though I only really get the opportunity to do it in my personal projects.
When I made the decision earlier this year to rewrite my canvas library from scratch, I made a deliberate choice to drop support for IE/Edge/legacy browsers. (I can do this because nobody, as far as I know, uses my library for production sites). Being able to use ES6+ features in the code - promises, fat arrows, const/let, etc - has been a liberation and a joy and made me fall in love with Javascript all over again. Especially as the library has zero dependencies so when I'm working on it, it really does feel like working in uncharted territory where reinventing a wheel means getting to reinvent it better than before.
I wish paid work could be such fun.
Long time front end developer here. In the last couple of years I can't recall seeing even a single project without a build pipeline (not that they don't exist, I just haven't encountered them at my day job, first or third party).
Whereas in JS-land, the support for upgrades is even more trailing because it's the end-users who need to upgrade, not just the individual/organization doing the packaging.
On Windows VCRUNTIME installs are mostly painless, easily backgrounded, and people don't realize they sometimes have as many as 100s of different versions installed side-by-side because every game installed wants a slightly different version.
But on Linux, it's a large part of the reason that code needs to be rebuilt so often for every different Linux distribution: most distributions lock to a single libc and prefer every app dynamically link that specific libc. Apps get held back by distributions slow to adopt libc updates all the time in Linux, and app devs try to stick to common libc versions based on distribution popularity (user preference), which isn't dissimilar to the lagging browser problem.
(Then there are arguments about statically bundling libc / VCRUNTIME, etc.)
It's not as bad as JS land on average, but that doesn't mean that C is immune to the same problem. As soon as you are dealing with shared libraries / platforms / runtimes, you run into having to deal with what users are willing to install (and practically no platform is immune, depending on trade-offs one is willing to take).
Once a transpile step is required, it doesn't make as much of a difference what is on the other side.
FYI, this doesn't happen in C: all C versions are backwards compatible - you can compile C89 code on any compiler supporting C99, C11 or C18.
I sometimes wonder what's the point of new versions of C, too.
Also, a new feature I want to use needs only be supported by one C compiler: the one I'm using. With JS, I need all of them to support it.
At this point I'm mostly fine with the feature set of C99 so I can live without the newer standards. Actually there's stuff in newer standards that I find questionable, but that's a different discussion.
C99 on the other hand was sorely needed, if only for standardizing some basic features that up until then were only available as vendor-specific extensions. Things like inline, stdint.h, bools, variadic macros, restrict, compound literals and more[1].
Writing code without these features is often severely limiting. Or rather, you probably won't be limited but you'll have to rely on vendor extensions and write non-portable code. Or maybe you'll use a thousand-line long configure script to make the code portable.
>Also, a new feature I want to use needs only be supported by one C compiler: the one I'm using. With JS, I need all of them to support it.
If you're making proprietary software that makes sense, if you're developing open source code you very much care about portability and compatibility. I care about the compiler I use, the compiler OpenSuse uses, whatever garbage Visual Studio uses, the compiler FreeBSD uses etc...
Besides I have basic code snippets I wrote over a decade ago that I still use today, regardless of the environment. That's valuable too.
I've always had the impression that C programmers also care about standards compliance, and aren't typically willing to marry their project to a particular compiler.
At least, it's the language community where you see "language lawyers". I'm sure there are "language lawyers" in other language communities, but I've never seen discussions about what causes "undefined behavior" or what's "implementation dependent" or discuss interpretations of particular passages of the standard like I do with the C and C++ communities.
Using modern JS features does not require universal support.
Even if you’re only targeting evergreen browsers, the popular build tools also perform minification and dependency resolution/linking. There’s so much a tool like webpack can do for you that I imagine it will remain hugely popular even as the need for transpilation wanes.
If you are building a modern web "app", not a one off set of web pages, then yes, it is the standard. It would be very weird to not see a compile (transpilation) step.
> Variable Length Arrays are not supported (although these are now officially optional)
> restrict qualifier is not supported, __restrict is supported instead, but it is not exactly the same
> Top-level qualifiers in array declarations in function parameters are not supported (e.g. void foo(int a[const])) as well as keyword static in the same context
They only started seriously working on actual C99 support (aside from the bits of C99 which were part of C++) for VS2013 or so.
Though to be fair it seems both Clang and GCC are still missing bits and bobs:
> The support for standard C in clang is feature-complete except for the C99 floating-point pragmas.
For GCC I found https://gcc.gnu.org/c99status.html; it's unclear how up-to-date it is, or whether GCC is still missing any required feature.
I'd say for the last 5 years 90% of my browser JS projects used Webpack and Babel.
Sure, but if C came out with a new standard every year, you'd essentially never be on the latest version. Isn't there at least a valid argument to slowing down a bit to give the implementations a chance to catch up instead of having a new ES 20XX every year?
Even if you support IE, you can still reap the benefits of modern JS if you're willing to do some differential serving. You can use the module/nomodule pattern to serve ES2017 without making changes to the server.
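The module/nomodule pattern mentioned above can be sketched like this (bundle names are placeholders):

```html
<!-- Modern browsers load the ES2017 bundle; they also understand
     the nomodule attribute and skip the legacy script entirely. -->
<script type="module" src="app.es2017.js"></script>

<!-- Old browsers (e.g. IE11) ignore type="module" and fall back here. -->
<script nomodule src="app.es5.js"></script>
```

Each browser downloads and executes only one of the two bundles, so modern users never pay for the transpiled code.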
It is the part of C99 I was most excited about, and I was very disappointed that it is so poorly supported.
For one, not everyone works on the client. I can write for Node and use everything v8 supports without ever touching Babel and Typescript.
>I like the idea of idiomatic JS with powerful modern features, but in practice every project I’ve seen seems to use a pretty arbitrary subset of the language, with different ideas about best practices and what the good parts are.
Good parts/best practices are orthogonal to native features and libs, which is what we're discussing here.
You totally can do that--but you probably shouldn't, because writing TypeScript is better for you and for future you. ;)
- Experimental language proposals can be tested in the wild
- Real non-ivory-tower feedback is raised to TC39
- Everything feeds into the canonical ES spec (~no splintering)
- Us regular folk are able to harness new syntax immediately
- Users continue to have their old runtimes supported
JS is a notoriously quirky and inconsistent programming language. Clearly it's sufficiently usable for writing complex, powerful and reliable programs, but it's error-prone for non-experts and encourages programming patterns that make importing accidental complexity the norm.
For many programming situations it'd be easy to just pick a different language, but obviously this isn't the case for writing browser-based programs.
The best possible scenario for me would somehow involve deprecation and removal of the nasty parts of JS, and a path towards a smaller, simpler, more consistent language. Right now it feels like the cost of forever backwards compatibility is paid every day, in every project, and it's completely wasteful, given that transpile-and-polyfill is widely considered best practice.
Whether this should be the job of TC39 or some other institution could go either way.
I've recently been working in Electron, and I find having app logic in both browser JS and Node to be more of a frustrating uncanny valley than a help. I suspect I'm in the minority on this one though, at least among people with a workaday skill set in client-side JS.
I agree, this is super important. My inclination whenever I dig into a JS project has been to use lodash/underscore everywhere for everything, assuming that it is popular enough that someone will be able to maintain it without much headache, and I can actually get stuff done without breaking my brain over JavaScript's notorious quirks. I'm curious at what point this stops being a good practice. It certainly was 4 years ago.
It’s similar to all the new tweaks and elements in HTML or DOM. If you’re working on Wikipedia, you will likely never get to use them. But if you work on a more niche app, they become quite useful. Over time, old browsers die out, and the amount of people who can use new features expands; early adopters do the testing for the late majority.
This is a dramatic improvement over the 10 year lifespan of IE6 with es3. Once the need to target es5 drops, the next gap gets even shorter - I don't think we'll get below 2 years as a practical matter, but even at a 2 year delay, that's still regular progress.
> a pretty arbitrary subset of the language
Best practices don't come from a mathematical model - programming is about communication and being compatible with user/business demands (which keep changing). Thus, best practices come about from a lot of experimentation and retrospective. That's ongoing. This doesn't mean it won't settle down. Heck, as it is, a lot of dynamic best practices influence the direction of non-dynamic languages - all of which arises from time and experimentation.
Your “we” isn’t everyone else’s. Some places need to support very old browsers but even in places like that usually not every app does. Those people are pushing the state of the art forward since they do real work outside of the standards process and that provides useful feedback to both the standards committees and browser developers.
In practice? I suppose in practice, JavaScript is driven by everyone in aggregate and what people consider to be "JavaScript" rather than "something that can be made to run in traditional JavaScript environments like browsers." I'm not sure if you mean that, or simply who directs actual changes to the traditional JavaScript environments (like browsers) themselves.
But yeah, if transpilation tools are reliable and continue to be well-maintained, you can ask "why bother updating the 'official' language and implementation in browsers?" But I don't understand how this is a bad thing. You're getting the best of both worlds: browsers will implement new JavaScript features and optimizations, and some dev teams can also use build tools to use those new features, and other potential new features, and still make their work available to older browsers.
It doesn't seem like a problem to me, unless you're thinking about all the language development effort in the JavaScript community as a fixed pie such that "non-official" language development like Babel and TypeScript take away effort that otherwise would be allocated to official language development. And I certainly don't think that is the case.
How about simply compiling IE5 into WebAssembly, wouldn't that solve the problem? ;)
I'm sure there are plenty of build systems out there that still indiscriminately turn things into ES5, because "that's how we wrote it years ago and it still works", but anyone who actually cares about performance will think twice before using babel today to turn nice, clean, concise modern code into incredibly verbose and shimmed legacy code, and will certainly think twice before serving ES5 code to users on a modern browser.
1. Node.js greatly benefits from these features. Transpiling is not as prevalent there
2. Some people do exclusively target "evergreen" browsers and don't care about IE support
3. Those that do differential bundling (different bundles per browsers) can see quite a performance boost by not transpiling on newer browsers
This is only true if you're writing JavaScript for the client and you have to support IE11. For many companies, the usage for IE11 is so low now that it can be safely dropped, for instance my SaaS products all just target evergreen browsers.
The primary benefit is that I don't have to rely on hacks.
This is how I build the backend to my CMS and my clients are happy to keep their browsers updated. (nearly trivial to do these days).
This also means that as soon as I see a new JS/CSS feature that will eventually become mainstream, I can use it in my admin as soon as both major browsers (Firefox/Chrome) support it. And even sooner if it's not a critical feature. (eg, I can skip adding a feature like "lazy loading" because browsers will have it built in eventually, etc...)
On the public facing front end, that is a different story though. It's motivation to keep things simple.
Right now, for example, the apps I'm working on require at least async/await support as a minimum test.
Also about the “subset” thing: the last 10 years or so I have been moving away from OOP, and more towards FP, so stuff like class support or Typescript have been a big yawn, anyway.
We had “C with classes” (I.e. C++ as it was known at the time) shoved down our throats in Uni back in the 80s, since garbage collection was impractical. That “wisdom” turned out to be short lived, and thus my migration towards FP (beyond the Lisp I had in an AI class back in the 80s)
If you only need to support a subset of browsers you can turn off compilation/polyfills for specific features, which sometimes leads to better performance and smaller bundles.
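With Babel's preset-env, for example, this is driven by the `targets` option (a sketch; the exact option set is documented in the preset-env docs):

```javascript
// babel.config.js (sketch): transpile and polyfill only what the
// listed target browsers are actually missing.
module.exports = {
  presets: [
    ["@babel/preset-env", {
      targets: "last 2 Chrome versions, last 2 Firefox versions",
      useBuiltIns: "usage", // inject only the polyfills the code uses
      corejs: 3,
    }],
  ],
};
```

Narrower targets mean fewer transforms and fewer injected polyfills, which is where the smaller-bundle win comes from.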
I think of it like TypeScript and Babel running ahead experimenting with new ideas, ES following along turning the good ideas into a spec, and browsers taking up the rear implementing the spec. It’s a pretty decent system.
IMO, anything below that is effectively a custom macro anyway (for which you may want to consider sweet.js or babel.macro to make it clear that this may change and help you find places you use the feature). Real-world feedback may change anything from syntax to behavior (`flatMap -> flat`, `Object.observe -> Proxy`, `EventEmitter -> Observable -> Emitter?`, and the it-feels-like-dozens-of-options pipeline syntax)
What percentage of your users is on IE11? Are you making money from them? Can you serve only them a compiled bundle?
You don't. Unless you care about IE11 (for most consumer products and mobile apps isn't necessary) you can use many of the features up through ES2016 or later without transpilation. My business uses JS classes, arrow and async functions, and new prototype methods without issue.
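For instance, all of the following runs natively in current evergreen browsers and Node with no build step (the `User` class and names here are just an illustration):

```javascript
// ES2015+ features running natively: classes, template literals,
// arrow functions, async functions -- no transpilation needed.
class User {
  constructor(name) { this.name = name; }
  greet() { return `Hello, ${this.name}`; }
}

const makeUsers = async () => ["Ada", "Grace"].map(n => new User(n));

makeUsers().then(users => console.log(users.map(u => u.greet())));
// → [ 'Hello, Ada', 'Hello, Grace' ]
```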
I would much, much rather that type annotation syntax gets standardised first, because it is comparatively easy to build pattern matching when that's in place but going the opposite direction is difficult. What is a type if not a pattern?
Plus it's a stage 1 proposal, meaning it's far from settled.
Could you link the type annotation proposal, I can't find it.
How do you match against a `Buffer` pattern/type?
EDIT: here is the confirmation TS 3.7.0 got tagged: https://github.com/microsoft/TypeScript/issues/16#issuecomme...
EDIT 2: wow just noticed, that the issue ID is "16" and it has been open since Jul 15, 2014 (I guess: good things take time ... ;) )
I understand it's based on destructuring; the syntax still just doesn't work for me.
https://dev.to/kayis/pattern-match-your-javascript-with-z-nf...
See Scala[1] or Swifts[2]'s implementations.
[1] https://docs.scala-lang.org/tour/pattern-matching.html
[2] https://docs.swift.org/swift-book/ReferenceManual/Patterns.h...
My idea is that `let`, `var` and `const` return the value(s) being assigned. Basically I miss being able to declare variables in the assertion part of `if` blocks that are scoped only during the `if()` block existence (including `else` blocks).
Something along these lines:
    if( let row = await db.findOne() ) {
      // row available here
    }
    // row does not exist here

The current alternative is to declare the variable outside the `if()` block, but I believe that is inelegant and harder to read, and also requires you to start renaming variables (ie. row1, row2...) due to them going over their intended scope. As prior art, Golang's:

    if x := foo(); x > 50 {
      // x is here
    } else {
      // x is here too
    }
    // x is not scoped here

And Perl's:

    if( ( my $x = foo() ) > 50 ) {
      print $x
    }

    {
      let row;
      if (row = await db.findOne()) {
        //
      } else {
        //
      }
    }

Also having "phantom" scope blocks gets very nasty to read once you have more involved logic, as the block itself has no implied meaning and the programmer has to walk a few lines into it to get what's going on.

    const user = await db.findOne()
    if (user) ... else ...

Typescript can even narrow the type to null vs. User in each branch block. Other than saving a few characters (the variable name), I don't see any benefit of this, while it makes code harder to read.
Too bad the `with` [0] keyword has been reserved for crap, it sounds nice (not for this, but maybe for something else).
>> and also requires you to start renaming variables (ie. row1, row2...) due them going over their intended scope
Variable shadowing [1] is a really bad practice that makes it hard for people to collaborate and keep the code sane. Bad habits are not a reason for language changes.
[0] - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
    int foo(int);

    int bar(int x) {
      if (int y = foo(x)) return 0;
      return x;
    }

    if (foo = bar()) { // syntax error!
    }

    if (let foo = bar()) { // works fine
    }

    if (const foo = bar()) { // also works fine
    }

    if (var foo = bar()) { // also also works fine
    }

But I'm still very nervous about some of the stuff mentioned here with regard to mutation. Taking Rust and Clojure as references, you always know for sure whether or not a call to e.g. `flat` will result in a mutation.
In JS, because of past experience, I'd never be completely confident that I wasn't mutating something by mistake. I don't know if you could retrofit features like const or mut. But, speaking personally, it might create enough safety-net to consider JS again.
(Maybe I'm missing an obvious feature?)
Proper immutable support (or a stronger concept of const) would also help with this.
Is that just

    arr.map( e => b )

?

    let arr = [{a: 1, b: ["a", "b"]}, {a: 9, b: ["a","c"]}];
    let b = "!"

    for(let e of arr) e = b;
    console.log(arr)
    // [{a: 1, b: ["a", "b"]}, {a: 9, b: ["a","c"]}] (unmodified)

    for(let e of arr) e.a = 2;
    console.log(arr)
    // [{a: 2, b: ["a", "b"]}, {a: 2, b: ["a","c"]}] (modified)

    for(let e of arr) { let copy = {...e}; copy.a = 4; }
    console.log(arr)
    // [{a: 2, b: ["a", "b"]}, {a: 2, b: ["a","c"]}] (unmodified)

    for(let e of arr) { let copy = {...e}; copy.b[0] = "!"; }
    console.log(arr)
    // [{a: 2, b: ["!", "b"]}, {a: 2, b: ["!","c"]}] (modified)

The most frustrating thing about all of this is that the best way to make a deep copy to avoid all unwanted modification is JSON.parse(JSON.stringify(arr))
The arr.map, provided you had a variable on the left side, would output the result of each iteration into said array (so an array containing all b values) - however I guess you meant arr.map(e => e)
    const a = [1,2,3]
    a.push(4)

    const b: readonly number[] = [1,2,3]
    b.push(4) // Property 'push' does not exist on type 'readonly number[]'.

    > Object.freeze([1,2,3]).push(4)
    TypeError: can't define array index property past the end of an array with non-writable length (firefox)
    Uncaught TypeError: Cannot add property 3, object is not extensible (chrome)

Of course, it will only blow up at runtime. But better than not blowing up at all, creating heisenbugs and such.

I often find myself writing classes where the last step of a constructor is to Object.freeze (or at least Object.seal) itself.
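A sketch of that freeze-in-constructor pattern (the class and field names here are made up):

```javascript
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    Object.freeze(this); // instances are shallowly immutable from here on
  }
}

const p = new Point(1, 2);
try {
  p.x = 99; // silently ignored in sloppy mode, TypeError in strict mode
} catch (e) {
  console.log(e instanceof TypeError); // true (strict mode only)
}
console.log(p.x); // 1 either way
```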
I think this is the kind of thing you just have to learn when you use any language. But when you're switching between half a dozen, being able to rely on consistent founding design principles really makes things easier. And when there aren't any, this kind of guide helps.
The only real downside is that the lack of return values means you can't chain mutations, but personally that never bothered me.
Mutates: push, pop, shift, unshift, splice, reverse, sort, copyWithin
Does Not Mutate: everything else
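So, for example:

```javascript
const a = [3, 1, 2];

const sliced = a.slice(0, 2); // does not mutate
console.log(a);               // [ 3, 1, 2 ]

a.splice(0, 2);               // mutates: removes the first two elements
console.log(a);               // [ 2 ]

a.push(5);                    // mutates
console.log(a);               // [ 2, 5 ]
```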
In my side project, which is a high performance web app, I was able to get an extra ~20fps by virtually removing all garbage created each frame. And there's a lot of ways to accidentally create garbage.
Prime example is the Iterator protocol, which creates a new object with two keys for every step of the iteration. Changing one for loop from for...of back to old-style made GC pauses happen about half as much. But you can't iterate over a Map or Set without the Iterator protocol, so now all my data structures are hand-built, or simply Arrays.
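A minimal illustration of the trade-off (whether the engine actually allocates the iterator result objects depends on JIT optimizations, so this is the conceptual cost model, not a guarantee):

```javascript
const arr = [1, 2, 3, 4, 5];

// for...of drives the Iterator protocol: conceptually each step yields
// a fresh { value, done } result object that immediately becomes garbage
// (modern JITs can sometimes elide these allocations, but not always).
let sum1 = 0;
for (const x of arr) sum1 += x;

// An old-style indexed loop allocates nothing per step.
let sum2 = 0;
for (let i = 0; i < arr.length; i++) sum2 += arr[i];

console.log(sum1, sum2); // 15 15
```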
I would like to see new language features be designed with a GC cost model that isn't "GC is free!" But I doubt that JavaScript is designed for me and my sensibilities....
Array.flat() => flatten
Array.flatMap() => mapcat
String.trimLeft() => triml, trimr
Symbols are great but they’re much more useful when you can write them as (optionally namespaced) literals, which are much faster to work with:

    (= :my-key :your-key) ;; false
    (= :my-key :my-key) ;; true

Object.entries() and Object.fromEntries() are both covered by (into). You can use (map) and other collection-oriented functions directly with a hashmap, it will be converted to a vector of [k v] pairs for you. (into {} your-vector) will turn it back into a new hashmap.

And... all of these things were already in ClojureScript when it was launched back in 2013! Plus efficient immutability by default, it’ll run on IE6, and the syntax is now way more uniform than JS. I’m itching to use it professionally.
In javascript you kind of have to reason backwards and declare your variables as immutable (const). Though there are still some bugaboos; object fields can still be overwritten even if the object was declared with const.
Personally I just use TypeScript which can enforce not mutating at compile time (for the most part).
Part of the immutable value proposition is being able to work with the objects. Based on [0] Freezing feels more like constant than immutable. And the 'frozenness' isn't communicated through the language - I could be passed a frozen or unfrozen object and I wouldn't know without inspecting it.
And freeze isn't recursive against the entire object graph, meaning the nature of freezing is entirely dependent on the implementation of that object.
I really like the language-level expression and type checking of Rust. But it does require intentional language design.
I'm not criticising JS (though I think there are plenty of far better languages). Just saying that calling `freeze` 'immutable' isn't the full story.
[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Yes, although the new object is not frozen by default. Adding is quite straightforward, especially with the spread syntax:

    let x = { a: 1, b: 2 };
    Object.freeze(x)
    let y = {...x, b: 3}
    // y == { a: 1, b: 3 }

Removing is less intuitive:

    let x = { a: 1, b: 2 };
    Object.freeze(x)
    let { b, ...y } = x;
    // y == { a: 1 }

> And the 'frozenness' isn't communicated through the language

Yes, but given that JS is a dynamic language I wouldn't expect anything different (everything must be inspected at runtime).
> And freeze isn't recursive against the entire object graph
You're right, although one could quickly implement a recursive version.
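A minimal recursive version might look like this (a sketch; it assumes an acyclic object graph and skips edge cases like Maps, Sets, and functions holding mutable state):

```javascript
// Recursively freeze an object and everything reachable from it.
function deepFreeze(obj) {
  for (const key of Object.keys(obj)) {
    const value = obj[key];
    if (typeof value === "object" && value !== null && !Object.isFrozen(value)) {
      deepFreeze(value); // freeze children first
    }
  }
  return Object.freeze(obj);
}

const cfg = deepFreeze({ a: { b: [1, 2] } });
try {
  cfg.a.b[0] = 99; // ignored in sloppy mode, TypeError in strict mode
} catch (e) { /* strict mode */ }
console.log(cfg.a.b[0]); // 1
```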
In any case I find Object.freeze not very useful since trying to mutate a frozen object will simply ignore the operation; I think that most of the time trying to do that should be considered an error and I would prefer to have an exception raised.
    const foo = Object.freeze({ a: 1, b: 2 })
    const fooCopy = { ...foo }

And you are right that Object.freeze doesn't work recursively (although making it work recursively is fairly easy to implement yourself if you use it a lot).

But like it or not JS isn't a language with a powerful type system, and it doesn't pretend to have one, so knocking it for that is like knocking Python for using whitespace, or knocking Rust for needing a compiler.
Luckily, Typescript and Flow have most of what you are asking for, and they work pretty damn well across the entire ecosystem.
Off the top of my head, I know typescript has the ability to mark things as read-only even at the individual property level. [1] And they have tons of the type checking nice-ness that you can expect from other "well typed" languages like Rust.
[1] https://basarat.gitbooks.io/typescript/docs/types/readonly.h...
In my experience most "immutability" in JS is enforced by convention or, at best, static type systems. It's not ideal, but it works.
If so then yeah, that can be annoying and/or confusing.
It becomes especially important in React where you share objects up and down an immutable structure of objects.
    2.4.1 :001 > a = [4,3,5,1,2]
     => [4, 3, 5, 1, 2]
    2.4.1 :002 > a.sort
     => [1, 2, 3, 4, 5]
    2.4.1 :003 > a
     => [4, 3, 5, 1, 2]
    2.4.1 :004 > a.sort!
     => [1, 2, 3, 4, 5]
    2.4.1 :005 > a
     => [1, 2, 3, 4, 5]

It makes chaining things while debugging so much harder:

    let a = a.project();
    let a = debug(a);
    let a = a.eject();

vs

    let a1 = a.project();
    let a1d = debug(a1);
    let a2 = a1d.eject();

Given that JS doesn't restrict the type of a declaration you can just assign a new value to it, place it in a small scope or use a chain.
I don't know - it's never confusing to me. I just use the IDE that allows me to view the types of the variables whenever I need to see them.
IDE also highlights the definitions and then the usages of the variable, including the syntax scope where it's used.
You're definitely using the wrong tools for the job if you get confused with that little detail.
>> Given that JS doesn't restrict the type of a declaration you can just assign a new value to it, place it in a small scope or use a chain.
Yeah, but I don't want to semantically assign a new value to the variable. I want this to be a new variable, because it is a new variable.
So the point of var is slightly exaggerated, because they could have gone the Python way and simply allowed any assignment to also act as a declaration.
    let a = a.project();
    a = debug(a)
    a = a.eject();

This is perfectly legal.

Any var/let statement of the form var a = 1; is interpreted as 2 statements: (1) the declaration of the variable, which is hoisted to the beginning of the variable scope, and (2) the setting of the value, which is done at the location the var statement is at.
Having multiple let statements would mean the same variable is declared and hoisted to the same location multiple times. So it's basically unnecessary and breaks hoisting semantics.
In addition, the downside risk of accidentally redefining a variable is probably far greater than the semantic benefits of making the redefinition clear to a reader (esp since I think that benefit is extremely limited in a loosely typed language like JS anyways).
Think of the closure as an object. It contains variables like `this`, `arguments`, a pointer to the parent closure, all your variables, etc.
The interpreter needs to create this closure object BEFORE it runs the function. Before the function can run, it has to be parsed. It looks for any parameters, `var` statements, and function statements. These are all added to the list of properties in the object with a value of `undefined`. If you have `var foo` twice, it only creates one property with that name.
Now when it runs, it just ignores any `var` statements and instead, it looks up the value in the object. If it's not there, then it looks in the parent closure and throws an error if it reaches the top closure and doesn't find a property with that name. Since all the variables were assigned `undefined` beforehand, a lookup always returns the correct value.
`let` wrecks this simple strategy. When you're creating the closure, you have to specify if a variable belongs in the `var` group or in the `let` group. If it is in the `let` group, it isn't given a default value of `undefined`. Because of the TDZ (temporal dead zone), it is instead given a pseudo "really undefined" placeholder value.
When your function runs and comes across a variable in the let group, it must do a couple checks.
Case 1: we have a `let` statement. Re-assign all the given variables to their assigned value or to `undefined` if no value is given.
Case 2: we have an assignment statement. Check if we are "really undefined" or if we have an actual value. If "really undefined", then we must throw an error that we used before assignment. Otherwise, assign the variable the given value;
Case 3: We are accessing a variable. Check if we are "really undefined" and throw if we are. Otherwise, return the value.
To my knowledge, there's no technical reason for implementing the rule of only one declaration aside from forcing some idea of purity. The biggest general downside of `let` IMO is that you must do extra checks and branches every time you access a variable else have the JIT generate another code path (both of which are less efficient).
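The var/let difference described in the cases above is easy to observe directly:

```javascript
// var is hoisted and pre-initialised to undefined:
console.log(a); // undefined
var a = 1;

// let is hoisted too, but accessing it in the temporal dead zone throws:
let sawTdz = false;
try {
  console.log(b);
} catch (e) {
  sawTdz = e instanceof ReferenceError;
}
console.log(sawTdz); // true
let b = 2;
```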
When you're refactoring, you then have to be much more careful when moving lines of code around. With unique names, you get more of a safety net (including compile time errors if you're using something like TypeScript).
If you want a variable you can assign successive different values to, it's an entirely different thing, and there have always been var and the assignment operator for that.
That's pure BS. This is only true for primitives; the value can change (under let, var and const), as we can easily see with Array.push, for example.
    var a = ...;
    a = a.project();
    a = debug(a);
    a = a.eject();

Rust allows this, and it really clears code up. I don’t have to make up different identifiers for same data but different representations. (e.g. I would do the above code in JS as... someKindOfDataAsArray = [...someKindOfObjectAsNodeList])

    let projected = a.project();
    let debugged = debug(projected);
    let ejected = debugged.eject();

And by the way - if you paid attention in the first place, my post actually has exactly what you've just written.
Lodash wasn't necessary.
In general the spread operator should only be used for forwarding arguments not for array operations.
Much rather have the magic word “flat”
It's also easy to get confused here: `arr.flatMap(f)` is equivalent to `arr.map(f).flat()` (i.e. a flatten of depth 1), not to `arr.map(f).flat(Infinity)`:
    x = [[[1, 2]], [[2, 3]], [[3, 4]]]

    x.flatMap(x => x)
    // output: [[1, 2], [2, 3], [3, 4]]

    x.map(x => x).flat()
    // output: [[1, 2], [2, 3], [3, 4]]

    x.map(x => x).flat(Infinity)
    // output: [1, 2, 2, 3, 3, 4]
[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
> It is identical to a map() followed by a flat() of depth 1, but flatMap() is often quite useful, as merging both into one method is slightly more efficient

"I think Array#flatten should be shallow by default because it makes sense to do less work by default, it aligns with existing APIs like the DOM Node#cloneNode which is shallow by default, and it would align with the existing ES3 pattern of using Array#concat for a shallow flatten. Shallow by default would also align with flatMap too."
However, generally you don't want to operate on a list of lists and are trying to process each value one by one -- the nesting doesn't add anything. In this case, we use flatMap, which "flattens" or concatenates the interior lists so we can operate on them like it's just a big stream of values.
This is also the case for another type like `Optional`, which represents either a value `T` or the absence of a value. An optional can be "mapped" so that a function is applied only if there is a value `T` present. flatMap works the same way here, where if you want to call another method that also produces an `Optional`, flatMap will "unwrap" the optional since you never really want to work with the type `Optional<Optional<T>>`.
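A tiny, hypothetical Maybe/Optional sketch (not a real library; all names are made up) shows why flatMap avoids ending up with `Optional<Optional<T>>`:

```javascript
// Just(v) holds a value; Nothing is the absent case. map re-wraps the
// result, while flatMap lets the callback supply the wrapper itself.
const Just = (v) => ({
  map: (f) => Just(f(v)),
  flatMap: (f) => f(v), // f already returns a Maybe, so don't re-wrap
  value: v,
});
const Nothing = { map: () => Nothing, flatMap: () => Nothing, value: undefined };

// A function that itself returns a Maybe:
const parseNum = (s) => (isNaN(Number(s)) ? Nothing : Just(Number(s)));

console.log(Just("4").flatMap(parseNum).value); // 4, not Just(Just(4))
console.log(Just("nope").flatMap(parseNum) === Nothing); // true
```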
'map' as a function name isn't great either, since we have the same name for a data structure. What it has in its favor is being short and traditional.
JS flatMap = C# SelectMany
And also here is a good recap of ES 6/7/8/9 (just in case you missed something) (also not mine): https://medium.com/@madasamy/javascript-brief-history-and-ec...
The last step is pretty annoying without it.
The entire thing is very common in Python, where Object.entries() is spelled `.items()` and `Object.fromEntries(…)` is spelled `dict(…)`
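For illustration, the JS round-trip looks like this (the object and names are made up):

```javascript
// Object.entries/Object.fromEntries make "transform an object via its
// key-value pairs" a one-liner, much like dict(d.items()) in Python.
const prices = { apple: 1, banana: 2 };
const doubled = Object.fromEntries(
  Object.entries(prices).map(([name, price]) => [name, price * 2])
);
console.log(doubled); // { apple: 2, banana: 4 }
```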
If you're familiar with C#'s linq and it's reliance on SelectMany it's somewhat easier to see the significance.
In C#'s linq you might write something like:
from host in sources
from value in fetch_data_from(host)
select create_record(value, host)
with flatMap (and some abuse of notation) you can implement this as:

    sources.flatMap(host =>
      fetch_data_from(host).map(value => create_record(value, host)))
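A concrete, runnable rendering of that sketch, with hypothetical stand-ins for fetch_data_from and create_record:

```javascript
// flatMap keeps `host` in scope for the inner map, mirroring how the
// LINQ query keeps both range variables visible in the select clause.
const sources = ["a", "b"];
const fetchDataFrom = (host) => [1, 2]; // assumed: returns a list per host
const createRecord = (value, host) => `${host}:${value}`;

const records = sources.flatMap((host) =>
  fetchDataFrom(host).map((value) => createRecord(value, host))
);
console.log(records); // ["a:1", "a:2", "b:1", "b:2"]
```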
If you dig even further you'll find that what makes this powerful is that flatMap, together with the function x => [x], turns arrays into a Monad. The separate functions map and flat also work, but this adds more conditions. Haskell folks tend to prefer flatMap because most of the conditions for a Monad can be encoded in its type signature (except [x].flatMap(x => x) == x, but that one is easy enough to check).

You'll find equivalents in all the JS utility libraries and most functional programming language standard libraries (and languages like Ruby with functional-ish subsets), so there's a lot of evidence that people who write code in that style like to have such a function available.
I personally feel flatMap is a much more used method than flat, so if you want to remove one, I would remove flat.
Flat can flatten any level of nesting (it just defaults to 1), so would be difficult to implement in terms of flatMap.
    function flatten(x, n = 1) {
      return n > 0
        ? x.flatMap(y => Array.isArray(y) ? flatten(y, n - 1) : y)
        : x;
    }
I really hope there could be syntactic sugar for flatMap - like do notation in Haskell, for comprehensions in Scala, and LINQ in C# - instead of a type-limited version like async/await.
Another thing: the pipe operator seems to be very welcome among the proposals. There would be no more awkward .pipe(map(f), tap(g)) in RxJS then.
1. it all but requires that ES-defined functions stringify to their source code. Pre-ES2019 that's implementation-defined
2. it standardises the placeholder for the case where toString can't or won't produce ECMAScript code (e.g. host functions). This could otherwise be an issue: with implementation-defined placeholders, subsequent updates to the standard might make a placeholder unexpectedly syntactically valid. By standardising the placeholder, future proposals can easily avoid making it valid
3. the stringification should be cross-platform as the algorithm is standardised
https://tc39.es/Function-prototype-toString-revision/
https://github.com/tc39/Function-prototype-toString-revision...
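A quick illustration of point 1 - post-revision, the stringification is the exact source text, comments included (the function here is illustrative):

```javascript
// Since the toString revision, an ES-defined function must stringify to
// its original source text, so even the inline comment survives.
function greet(/* who */ name) { return "hi " + name; }
console.log(greet.toString().includes("/* who */")); // true
```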
Personally, I am not a fan of languages growing. I think C is awesome because everyone can understand the code and doesn't have to be a language lawyer, as with C++ and its concepts, lambdas, crazy template metaprogramming, and more. The team can just work, pick up any module, and read it without magic.
In C++ I am not even sure whether a copy constructor or an overloaded = operator would run without looking it up.
How much of ES5.5+ was guided by jQuery?
[1,2,,3]
> (str.match(regexWithGroup) || [, null])[1]
I.e. if the regex matches, then give me the first group (1st index) otherwise give me null.
var arr = [];
arr[0] = 1;
arr[1] = 2;
arr[3] = 3;
so there's not really much downside to also allowing a literal syntax for the same thing.

I really love languages that force you to handle errors up to the top level.
In those cases, forcing the extra parameter in the catch, even though you are not using it, is slightly annoying. I mean, it's literally 3 characters, but in this age of linters encouraging you not to declare arguments you don't use, it just feels unnatural.
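For reference, ES2019's optional catch binding drops exactly those characters (the helper name here is made up):

```javascript
// The classic use case: we only care that parsing failed, not why.
function isJson(text) {
  try {
    JSON.parse(text);
    return true;
  } catch { // no "(e)" needed since ES2019
    return false;
  }
}

console.log(isJson('{"a": 1}')); // true
console.log(isJson("not json")); // false
```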
I really hate when a system tells me "Unknown error occurred" or "Either this or that happened" because the software doesn't care to be specific with its errors.
You should at least log the error message, not ignore it.
In Java it led to lots of exception wrapping and leaky abstractions.
Not sure what the answer is - although my golang experience was better.
Java programmers need to be comfortable letting exceptions have the default behavior until they're sure they have a better idea. Declaring throws is usually enough.
I've always really liked checked exceptions in my own designs. Though I'm not crazy about the syntax.
stream.map(f).collect(...)
should be able to throw anything f can throw, but instead f has to wrap everything, which makes people give up and stop declaring checked exceptions.

Are you implying that no other languages ever add features that other languages have? All languages except JS are feature complete? Come on..
And it has brackets.
IMO the three features that would make a much more significant impact in front end work are:
- optional static types
- reactivity
- some way to solve data binding with the DOM at the native level
It seems like an unnecessary change - if the source needs to be accessed then get the source file.
Usually it doesn't matter, because these methods are used on functions that luckily only use well known names, mainly properties of window/global.
But it's a risk, and I've seen subtle bugs caused by the assumption that the function's .toString() can be run through text substitutions and eval to get back a variant of the original code.
Contrived example:
let x = "wrong variable";
function f() { let x = "I am x"; return function g() { return x; } }
f()()
=> 'I am x'
eval(f().toString())
g()
=> 'wrong variable'

I know this because I once wanted to use fn.toString() as part of some meta-programming many years ago (but couldn't, because comments were not stored).
I am sure a lot of effort went in to making a good decision, aiming for a good outcome, but this smells like a bad one.
const test = Symbol("Desc");
testSymbol.description; // "Desc"
---------
Should testSymbol be replaced with test?
What looks out of place to you in that example?
Would it make more sense to you with a very slightly less arbitrary example, perhaps arr = ['Value for 0', 'Value for 1', , 'Value for 3', 'Value for 4']; instead of simple mapping ints to ints?
Because array contents are mutable [even if the array variable itself is declared const] that third index may be populated at a later point in the code.
Most languages don't have sparse arrays so it's really weird.
> Would it make more sense to you with a very slightly less arbitrary example, perhaps arr = ['Value for 0', 'Value for 1', , 'Value for 3', 'Value for 4']; instead of simple mapping ints to ints?
You'd usually put an explicit `null` there, especially as HOFs skip "empty" array cells so
['Value for 0', 'Value for 1', , 'Value for 3', 'Value for 4'].map(_=>1)
returns [1, 1, , 1, 1]
which is rarely expected or desirable.

> The flatMap() method first maps each element using a mapping function, then flattens the result into a new array. It is identical to a map() followed by a flat() of depth 1, but flatMap() is often quite useful, as merging both into one method is slightly more efficient.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Even the trim operations they added fall short of the target. In Python (and tcl, by the way) you can specify which characters to trim.
So close, yet, so far.
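JS has no built-in equivalent of Python's str.strip(chars) even after ES2019; a common workaround is a character-class regex. A sketch (the function name and escaping choices are illustrative):

```javascript
// Python-style strip(chars): remove any of the given characters from
// both ends. Characters that are special inside a character class
// (backslash, ], ^, -) are escaped first.
function trimChars(s, chars) {
  const cls = "[" + chars.replace(/[\\\]^-]/g, "\\$&") + "]";
  return s.replace(new RegExp("^" + cls + "+|" + cls + "+$", "g"), "");
}

console.log(trimChars("--hello--", "-")); // "hello"
console.log(trimChars("xxhello worldyy", "xy")); // "hello world"
```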
All major engines have also implemented corresponding trimLeft and trimRight functions - without any standard specification.
So ES2019 implements trimStart() and trimEnd(), which are symmetrical to padStart() and padEnd(), but the trimLeft() and trimRight() aliases are maintained so as not to break working code.

We go on adding fancy new syntax for little or no gain. The whole arrow-function notation, for example, buys nothing new compared to the old notation of writing "function(....){}" other than appearing to keep up with the functional fashion of the times.
Similarly, python which was resistant to the idea of 20 ways to do the same thing, also seems to be going in the direction of crazy things like the "walrus" operator which seems to be increasing the cognitive load by being a little more terse while not solving any fundamental issues.
Nothing wrong with the functional paradigm, but extra syntax should only be added when it brings something substantially valuable to the table.
Also, features should be removed just as aggressively as they are added; otherwise you end up with C++, where you need less of a programmer to be able to tell what a given expression will do and more of a compiler grammar lawyer who can unmangle the legalese.
Incorrect - the main advantage is that fat-arrow syntax keeps the lexical `this` of the enclosing scope, so you don't need the `that = this` antipattern.
The current scene is that most people don't know the real difference between arrow and function notation, and this leads to many more bugs than if the two didn't overlap. Overall, my point is that this just leads to poor ergonomics and a larger number of avoidable bugs.
That's hard to believe unless you're working on the most amateur of teams.
There's a point where you have to expect people to understand the most basic concepts of the language/tools they're hired to use. This shouldn't require more than a simple 5min pull-aside of the junior developer.
Also, you can't change function(){}'s dynamic scope without breaking the web, which is a major downside for your suggested upside of developers not having to learn the distinction. function(){} was always confusing from day one. ()=>{} is a move back toward intuitiveness.
Arrow functions bind this to the lexical scope, which is useful. (In a regular function the value of this depends on how it's called.)
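A small sketch of the difference (the object and names are made up):

```javascript
// Pre-ES6, you'd capture `this` manually; an arrow function closes over
// the enclosing `this`, so no `that = this` is needed.
const counter = {
  count: 0,
  startOld: function () {
    const that = this; // the old workaround
    return function () { that.count++; };
  },
  startNew: function () {
    return () => { this.count++; }; // arrow: `this` is still `counter`
  },
};

counter.startNew()();
console.log(counter.count); // 1
```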
> python which was resistant to the idea of 20 ways to do the same thing
This was in comparison to Perl which intentionally has an unusual excess of different ways to do things.
> "walrus" operator
Simplifies a very common pattern:

    m = re.match(r"my_key = (.*)", text)
    if m:
        print(m.group(1))

becomes

    if m := re.match(r"my_key = (.*)", text):
        print(m.group(1))

It allows devs to do the following:
onClick={() => doSomething()}
Without having to worry about binding the function to the correct context.
It's one of the best JS improvements of the last 10 years.
I'm a JS fan but had to admit I chuckled at the implementation of is-even: https://github.com/jonschlinkert/is-even/blob/master/index.j...
What's not hilarious is that, after removing the essentially useless error-checking, is-even is literally just `(n % 2) === 1`. On one hand, JS desperately needs a standard library, on the other hand, JS devs can be so infuriatingly lazy and obtuse.
Pretty sure that tells you if a number is odd. I guess maybe there is a reason these libraries exist.
Function.toString being more accurate is helpful.
But real progress would be removing dangerous backtracking regular expressions in favor of RE2: https://github.com/google/re2/wiki/Syntax