1) OSS critiques should be more polite.
2) People are stupid for not doing due diligence.
First, I agree criticism should be constructive. Yet there is nothing wrong with 100 people saying here that they don't like React Router; in fact, such feedback is crucial for the authors and the community. Maybe you agree; but ironically, the tone of your post seemed to have a bit of the scorn and derision that you are asking people to avoid.

Secondly, it's already obvious that people are responsible for their own projects. The motivation for stating the obvious seems to be to emphasize that people should seek to blame themselves before criticizing others. Wrong. These things are orthogonal: regardless of one's due diligence, it's perfectly acceptable, indeed beneficial, to critique OSS when done in a productive way.
Check out https://formidable.com/blog/2016/07/11/let-the-url-do-the-ta... for more on how we differentiate from the RR philosophy.
In general I think a little NIH is a good thing. Even if there exists a library that does what you want, it might also include much more that you don't need, and perhaps the kernel of what you want fits into a small function you can write and vet yourself.
https://en.wikipedia.org/wiki/Wisdom_of_the_crowd
https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
I'm dealing with a bunch of people now who did something like that. At the time they made these decisions they might have had good reasons, but now they're doing other stuff and the custom things they wrote are a huge liability.
Don't write it unless you intend to own it.
There used to be a feeling in OSS that forking was a solution of last resort. Now, with the process of writing software becoming much more focused on micro-libraries (and the tools for using those micro-libs getting good enough to make it not so painful), the barrier to entry on writing a new library to solve a specific problem is often very low. Routing is not a huge problem; a single developer with some experience can build a reasonably complete router in a week. So...here we are. There's, what, a dozen popular routers? All mostly the same. Maybe one or two use promises, and maybe that seems much more modern, so they get some uptake. But with one developer behind them, and maybe a couple of occasional other contributors, you have little feedback pushing for stability. The same desire that led to writing a new router (to use the latest technology and ideas) is the same desire that leads to breaking changes.
I'm feeling particularly overwhelmed by the size and...um...inconsistency in quality, of the npm ecosystem. I really have very little of the NIH drive. I'm perfectly happy to put together Lego projects from off-the-shelf components, when possible (my business partner brings enough NIH to the table for both of us). But, I barely know where to start in node. NIH seems to have been elevated to a religion.
I found a package on npm that sounded like it did everything I wanted (plus a few extra things, but I figured I could ignore those). It took longer than I expected to install...so, I did a little digging. It had installed over 53,000 files, and the resulting directory was 110 MB in size!
I was absolutely flabbergasted. I couldn't believe installing one package, for something seemingly simple, could balloon up that large. I won't name names, as I did a little more poking around, and realized that most npm installations pull in thousands of files via automatic dependency resolution, though this one was a particularly egregious example. I've gotten to where I only install stuff via npm when I'm on a free connection; I normally work on mobile broadband, which is very expensive (and adds up to almost $300/month even before I started playing with npm).
Now, to be fair, it was pulling in a web framework...maybe Express or Hapi, I don't remember which, and all of its dependencies, so it was actually a lot more than just the login module. The kind of annoying bit was I already had a global installation of both of those frameworks from following tutorials, but it still seemed to insist on pulling in its own preferred versions of stuff, and putting them into the project directory.
I come from the Perl world, where if you don't spend at least half your time looking for and evaluating libraries before you start writing code, you're not being very productive. I'm, frankly, overwhelmed by how big and unfiltered the npm ecosystem is. I've found myself relieved to start tinkering with more "all in one" libraries and frameworks, because I don't have the time or knowledge to evaluate libs on my own. I ordinarily prefer a more à la carte approach, where you just pull in what you need, so big libraries and frameworks don't fit that. But I can't make sense of the ecosystem without some guidance. There are over 70,000 npm packages! Curation really has turned out to be one of the big problems in computer science.
I'd love it if npm would list total package size, including dependencies.
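In the meantime, you can at least measure the footprint yourself after an install. A rough sketch (the `node_modules/` tree here is fabricated so the snippet runs anywhere; against a real project you'd just point the `find` and `du` lines at your actual `node_modules/`):

```shell
# Fabricate a tiny node_modules so the measurement commands below
# can run anywhere; in a real project, skip this setup block.
proj="$(mktemp -d)"
mkdir -p "$proj/node_modules/dep-a" "$proj/node_modules/dep-b"
head -c 4096 /dev/zero > "$proj/node_modules/dep-a/index.js"
head -c 4096 /dev/zero > "$proj/node_modules/dep-b/index.js"

# The two numbers the registry page doesn't show you:
nfiles="$(find "$proj/node_modules" -type f | wc -l | tr -d ' ')"  # file count
kbytes="$(du -sk "$proj/node_modules" | cut -f1)"                  # size in KB
echo "$nfiles files, ${kbytes}KB on disk"
rm -rf "$proj"
```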
Really recommend it. As you can see, react-router's deps are actually extremely conservative. Webpack, another awesome project, gives a good example of a big dep tree: http://npm.anvaka.com/#/view/2d/webpack
You could always use this:
https://www.npmjs.com/package/npm-proxy-cache
It caches the package listings and the packages that you download. It acts as a pass-through with a limited TTL on the cache, but there is an option to fall back to the cache if you can't connect to upstream.
Granted, you have to have already installed something for it to work as an offline cache.
Also, part of the problem with all those files is that npm allows packages to install pinned dependency versions. If package-a requires lodash 2.x and package-b requires lodash 3.x, then both will be installed within the respective package's directory. For example, let's dive into the node_modules/ in one of my projects.
$ ls node_modules/**/lodash.js
node_modules/cordova-lib/node_modules/lodash/chain/lodash.js
node_modules/findup-sync/node_modules/lodash/dist/lodash.js
node_modules/findup-sync/node_modules/lodash/lodash.js
node_modules/globule/node_modules/lodash/dist/lodash.js
node_modules/grunt-contrib-less/node_modules/lodash/dist/lodash.js
node_modules/grunt-contrib-less/node_modules/lodash/lodash.js
node_modules/grunt-contrib-watch/node_modules/lodash/dist/lodash.js
node_modules/grunt-contrib-watch/node_modules/lodash/lodash.js
node_modules/grunt-curl/node_modules/lodash/dist/lodash.js
node_modules/grunt-curl/node_modules/lodash/lodash.js
node_modules/grunt-legacy-log-utils/node_modules/lodash/dist/lodash.js
node_modules/grunt-legacy-log-utils/node_modules/lodash/lodash.js
node_modules/grunt-legacy-log/node_modules/lodash/dist/lodash.js
node_modules/grunt-legacy-log/node_modules/lodash/lodash.js
node_modules/grunt-legacy-util/node_modules/lodash/lodash.js
node_modules/grunt-ng-constant/node_modules/lodash/dist/lodash.js
node_modules/grunt-ng-constant/node_modules/lodash/lodash.js
node_modules/grunt-protractor-runner/node_modules/lodash/lodash.js
node_modules/grunt/node_modules/lodash/lodash.js
node_modules/jshint/node_modules/lodash/chain/lodash.js
node_modules/lodash/chain/lodash.js
node_modules/phantomjs-prebuilt/node_modules/lodash/lodash.js
node_modules/preprocess/node_modules/lodash/lodash.js
node_modules/protractor/node_modules/lodash/lodash.js
That's 24 copies of lodash.js installed, each of which could be a unique version of lodash used only by the module that pulled it in.

I'm new to this ecosystem, so I'm definitely not an expert, but it's certainly been an intimidating point for me; maybe the most difficult thing to wrap my head around. I'm used to being able to spelunk into my project and read everything I'm depending on, or at least skim it and kinda grok where things happen. How would one even do that with 53,000 files? How can anyone trust any application they build with these tools? I mean, the security implications alone are breathtaking, to me.
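For what it's worth, the duplication is easy to quantify. A self-contained sketch (it fabricates a miniature nested `node_modules/`, mimicking npm 2's per-package layout, so it runs anywhere; against a real project you'd just run the `find` line):

```shell
# Build a miniature nested node_modules with duplicated lodash
# copies: one top-level copy, plus one vendored copy per package.
root="$(mktemp -d)"
for pkg in grunt jshint protractor; do
  mkdir -p "$root/node_modules/$pkg/node_modules/lodash"
  touch "$root/node_modules/$pkg/node_modules/lodash/lodash.js"
done
mkdir -p "$root/node_modules/lodash"
touch "$root/node_modules/lodash/lodash.js"

# Count every separate copy of lodash.js anywhere in the tree.
copies="$(find "$root/node_modules" -name lodash.js | wc -l | tr -d ' ')"
echo "copies of lodash.js: $copies"
rm -rf "$root"
```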
Basically, if lib1 and lib2 each use lib3, I don't want to upgrade anything until both lib1 and lib2 agree that a newer version of lib3 works.
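That's roughly what freezing the resolved tree (e.g. with `npm shrinkwrap`) buys you: nothing moves until you deliberately regenerate the lock. A toy sketch of the "don't upgrade until everyone agrees" check itself, with fabricated manifests and deliberately naive sed-based parsing, purely for illustration:

```shell
# Fabricated example: lib1 and lib2 each declare the lib3 range
# they support. Only bump lib3 when both name the same major.
work="$(mktemp -d)"
cat > "$work/lib1.json" <<'EOF'
{ "dependencies": { "lib3": "^2.0.0" } }
EOF
cat > "$work/lib2.json" <<'EOF'
{ "dependencies": { "lib3": "^1.4.0" } }
EOF

# Naive check: extract the major version each library asks for.
major() { sed -n 's/.*"lib3": *"\^\([0-9]*\).*/\1/p' "$1"; }
if [ "$(major "$work/lib1.json")" = "$(major "$work/lib2.json")" ]; then
  verdict="safe to upgrade lib3"
else
  verdict="hold: lib1 and lib2 disagree on lib3"
fi
echo "$verdict"
rm -rf "$work"
```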
My view is that dependencies are someone else's solution to my problem of technical debt.
I'd be a straight-up liar if I claimed to be proud of every line of code I've written, either for an employer or for myself. Sometimes you just have to hammer a square peg into a round hole and be done with it, because deadlines. Or laziness. Or boredom. Or, whatever, this project is going nowhere anyway, so wtf? Hack the shit out of it.
I always tell myself I'm going to get back to that later and clean it up, but I often don't because, well, moar deadlines.
Dependency updates--particularly breaking ones--are things I love to hear about. Dependency updates give me an excuse, both professionally and for my side projects, to revisit stuff that I knew was janky and crappy and broke when I wrote it, but have since come to accept.
Security updates are absolute gold in this game of not wanting to suck but still having to meet deadlines.
"Sorry boss, but there's a vulnerability in lib x. We have to update. But it's breaking. So now we have to refactor. Two weeks, at least. Maybe more."
I just got rid of a crap-ton of bad code while I was updating for that dependency. Oops.
Those two habits and https://blot.im made blogging nigh-effortless. Like dandruff, I get it for free.
In this case, the perfect is the enemy of no one. A blog engine is the one side-project that I don't just hack. It's my one and only place for writing pure, elegant code.
And I won't let myself write at length again on someone else's platform until I get this exactly right.
Everyone wins: I don't clutter the internet with inane crap; you aren't tempted to read it.
If I ever decide to start publishing my writing, the best part of it will be the code that presents it.
Got a React project running for a year now. Hundreds of deps. I just don't know anymore...
Last year Flummox got deprecated and we had to replace it with Redux, good times...
Recognizing the responsibility for your own actions does not preclude you from being frustrated and at times outright angry at the outcomes of those actions. In this case I (wrongly) assumed that after a 1.0 release the library would be relatively stable and have been repeatedly duped into believing that this time the major version release would be the stable one. I don't really see anywhere that people were issuing an outcry for V4, so I am still confused as to why it was so urgent to release it. It wasn't perfect but it was fine. Unfortunately in my frustration and confusion I chose to write a very strongly worded comment that apparently some people did not like.
> To be clear: developers that release libraries and then iterate the API in public do not deserve personal scorn for doing so
I have never gotten the impression that react-router was someone's personal library; rather, it is a community project maintained by notable members of the JavaScript community, and it is my belief that delivering half-baked stuff to the people who counted on them, and who they led to believe could count on them, is not a fair thing to do. I don't believe it is unreasonable to be frustrated with fickle leadership from people who stepped up to lead the project. If they can't deal with the criticism, or don't have the time/effort/inclination/whatever to lead in a way that is agreeable to most of the community, then perhaps someone else could lead. When projects have thousands of stars on GitHub, making rapid successions of breaking changes throws all of those people for a loop.
If you do in fact view the repo as your personal project and want to make huge changes all the time, like every 6 months, I don't think that the version with thousands of stars is really the place to do that. Why not just do it on your own, where it isn't going to affect so many people?
Note: Neither this nor my original comment should be taken as a personal attack on the maintainers, although I am aware that both are probably overstated. I'm sure they are all extremely talented developers and kind people.
What I wrote wasn't entirely in response to the HN thread. Similar thoughts about the JS community have been brewing for a while and the discussion prompted me to speak my mind.
My thoughts on the upgrade are pretty much:
- Do we actually need to upgrade? Old react router works fine.
- If we do need to upgrade - how long will it take? If it's just a few hours to shift some code around, maybe it's not that bad. Especially if it's moving to a cleaner more "react" API.
At the end of the day, we could write our own routing class, or another. We'll probably stick with react-router though - but save the upgrade for a day where we've got nothing else to do, or if an engineer has some free time.
Or when you stumble on a show-stopping bug right before a deadline, that is only fixed in supported newer versions.
The only mention I can find is "But the API smelled funny to me, and had not settled, so I continued to wait" that hints that the API is changing and looks weird. But other than that, there is no constructive criticism here.
I'm all for being able to write freely about software we find bad, but without any concrete examples or even pointers to what is bad, I don't think this article has any merit (except that it's good to make sure you understand the dependencies you have).
EDIT: Additionally, I'm not saying React Router is bad per se, just that it's not fully baked. I'm glad to see people working on the problem. I look forward to leveraging the fruits of their labor in the future (in fact, I already do - I use the history library on which RR is built).
Not that I am defending or even using React Router myself, I'm just saying that it's usually better for everyone if the feedback is better explained than just "It's bad" or "It's risky to use".
Is anyone aware of an easier way to assess these longer-term risks of a piece of technology? Things like API stability, community strength and responsiveness, backwards compatibility, upgrade paths, etc.
Humans are very poor at reasoning about low-probability events that have serious consequences, e.g. large earthquakes with very long return periods.
For this reason, it's difficult to assess the 'risk' of external code because risk management processes don't work very well when high uncertainty is involved.
If you decide to use external code that has a high degree of uncertainty, just make sure to have a generous allowance of time/money/effort for future problems.
You say this off-handedly, like it's not a big deal. No single person or company understands all the dependencies from the app all the way down to the electricity. That's the real reason we use dependencies: for things that aren't core to our business, that we don't want to have to understand.
With that being said, they claim you'll be able to migrate slowly to v4, so hopefully it's not so bad. If they just broke backwards compatibility without a migration strategy, I'd certainly feel much more frustrated. I went with RR largely because it's widely used. I also don't think it's an unreasonable expectation for a widely used library to provide a migration strategy. Not that they're under any obligation to do so, of course.
My personal bar for adding dependencies is asking myself: would I feel comfortable debugging and fixing this? I recognize that it's not a free ride.