Not sure how much of the Rails team browses HN comments, but an issue will certainly get their attention.
Also, Phoenix (a spiritual successor to Rails) took this approach from the start.
Shouldn't this have shipped in 5.1? It may very well be a bit too late.
ES6 love + webpack + Yarn in the asset pipeline.
If you're still in Rails land, this seems pretty cool.
If you're not already using Yarn, go check it out right now. Migrating took our team literally less than five minutes, and we haven't had a single problem so far. Its default behavior is much more sensible than npm's.
Writing a SPA that handles every edge-case is really challenging, so it's actually incredibly refreshing to write a fully server-side rendered app. Especially after spending a lot of time working on SPAs. It's pretty mind-bending how easily you can wire up a meaningful prototype, just by sticking to rails' guidelines. Even though my love for ruby has diminished after having tried other languages, I'll still happily reach for rails in many situations.
I think for many apps you'd strike the best balance in complexity by mixing server-side rendering with some pages that are small javascript applications, instead of going for a full-blown SPA. Depending on what the app does, somewhere between 5k and 10k SLOC seems to be a sweet spot for an app to be useful while still being trivially easy to change and keep track of. Once you pass ~20k SLOC, I've found it starts to get harder to follow everything, and changes start to require a bit more effort. It's worth noting that those numbers are pretty arbitrary and are probably wrong :). Again, it depends on the application.
In my job we had a period where everything was handled by a SPA, and it was pretty painful. If you have a password-gated app, consider doing all the auth views with server-side rendering, and only load the SPA for logged-in users. This lets you drastically simplify how you handle things like resource caching and routing. Leverage the server more, stop trying to handle everything on the client! That's gotta be one of the most important lessons I've learned.
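As a sketch of that split (all controller and route names here are hypothetical, just for illustration), the routing might look like this: server-rendered controllers handle everything auth-related, a JSON API serves the SPA, and a catch-all route serves the SPA shell only once there's a session.

```ruby
# config/routes.rb -- a hypothetical split between server-rendered auth
# views and an SPA shell for logged-in users.
Rails.application.routes.draw do
  # Classic server-rendered views for everything auth-related.
  get    "/login",  to: "sessions#new"
  post   "/login",  to: "sessions#create"
  delete "/logout", to: "sessions#destroy"
  resources :password_resets, only: [:new, :create, :edit, :update]

  # JSON API consumed by the SPA once the user is logged in.
  namespace :api do
    resources :projects, only: [:index, :show, :create, :update]
  end

  # Catch-all that serves the SPA shell; the controller redirects to
  # /login when there is no session, so routing stays on the server.
  root to: "spa#index"
  get "*path", to: "spa#index"
end
```

With this shape, resource caching and client-side routing only ever have to consider the logged-in case.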
My really big complaint with rails is that it works so well until it grows enough, and then it gradually starts to fight you. How do you transition beyond rails MVC? I've looked at stuff like Trailblazer [0], but I'm uncertain if that's really the direction a growing rails app should take.
I, too, agree that the asset pipeline is clunky. But at my startup we went the other way: use npm/webpack but compile the resulting file _into_ the assets directory, so we can still use the asset pipeline. It sort of double-processes the assets, but it was the only way I could think of to take advantage of modern JS with imports and exports while still getting the cache-busting goodness and easy Rails view interoperability.
Here's a blog post [1] explaining how to use webpack with rails.
[0] https://github.com/kossnocorp/assets-webpack-plugin
[1] http://clarkdave.net/2015/01/how-to-use-webpack-with-rails/#...
Webpack has built-in support for this, as it generates chunk hashes and can output filenames with them included; then you just take that data and do the same as above.
How you wire that into your Rails app is up to you; it could be a simple helper along the lines of javascript_include_tag, which would be easy to write.
Rails never really had any magic in the asset pipeline in that regard. All it does is:

* compute a content hash (digest) of each compiled asset;
* add that hash to the filename(s) of your generated assets;
* print out that hash to a file;
* fetch the hash from that file for use in your helpers/views, to call the correct assets.
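The last step can be sketched in a few lines of Ruby. This assumes webpack (e.g. via assets-webpack-plugin) wrote the hashed filenames to a JSON manifest; the manifest path, format, and helper name here are all assumptions for illustration:

```ruby
require "json"

# Hypothetical default location for the manifest webpack writes out,
# mapping logical names to hashed filenames:
#   { "application.js": "application-abc123.js" }
MANIFEST_PATH = "public/assets/webpack-manifest.json"

# Look up the hashed filename for a logical entry and build the public
# path to it, so views always reference the current fingerprinted file.
def webpack_asset_path(entry, manifest_path: MANIFEST_PATH)
  manifest = JSON.parse(File.read(manifest_path))
  hashed = manifest.fetch(entry) { raise "No manifest entry for #{entry}" }
  "/assets/#{hashed}"
end

# In a Rails view helper you might then emit a script tag with it:
#   javascript_include_tag webpack_asset_path("application.js")
```

Re-reading the manifest on every call is fine in development; in production you'd cache the parsed manifest in a constant or memoized method.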
The following blog post is a good place to get started.
http://pixelatedworks.com/articles/replacing-the-rails-asset...
Whether this is better or worse than your current setup, I'm not sure; it's probably equal levels of duct tape either way. Printing out the hash and rolling your own solution does give you a bit more power and flexibility, though.
They should look at Django for some inspiration. Here's their latest beta release notes [1]. It's concise, everything links to relevant documentation with examples, and the entire project's changes are handled on the one page. There's even a list of things to check if you're upgrading.
I also can't find fault with giving some appreciation to the relevant people who contributed by actually naming them.
My main problem isn't the fluff, it's the lack of real documentation for new features. Or that the link doesn't go to the documentation but to the merged PR on GitHub. And that it's very incomplete (see "Everything else").
This is not excellent: it could keep the things you like and still be much better.
IMO it's worth rationalizing and explaining this to their community.
https://github.com/mipearson/webpack-rails
It would be awesome if someone could write a migration guide for this switch. I'm sure a lot of Rails teams are using the webpack-rails gem and are planning to switch to the official gem.
That said, I love CoffeeScript and continue to use it to this day. It gets a lot of hate and I've been pushed to write a lot of ES6 these days, but CoffeeScript is still much more succinct.
Nowadays developers prefer an SPA-based approach, where you have an API (RoR without the JS stuff) and a front end (whatever the front-end people decide to choose, which will be outdated in a year).
Why can't RoR just focus on the back-end stuff instead of adding bloat that will be obsolete in 1 year?
With Capybara, I'm not keen on the default transactional rollbacks. The reason is that it's handy to keep the data in the db after a test, to see what its final state was. I always drop the data before a test runs instead; then again, I prefer factories over fixtures.
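One way to get that behavior (a sketch, assuming RSpec and the database_cleaner gem) is to truncate at the start of each test instead of wrapping it in a transaction:

```ruby
# spec/support/database_cleaner.rb -- clean *before* each test rather
# than rolling back after it, so the data a test created is still in
# the database afterwards for post-mortem inspection.
require "database_cleaner"

RSpec.configure do |config|
  # Disable the default transaction-per-test behavior.
  config.use_transactional_fixtures = false

  config.before(:each) do
    # Truncation wipes whatever the *previous* test left behind; this
    # test's own data survives its run.
    DatabaseCleaner.clean_with(:truncation)
  end
end
```

Truncation is slower than a rollback, but it also sidesteps the shared-transaction problems Capybara has when the app server runs in a separate thread.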
Can someone please explain the benefits of this encrypted-secrets business? I'm struggling to see the benefit. I understand that you need access to both the code and the env var, but in all practicality, if someone has access to your env vars, is it really that much more of a reach for them to figure out what the key in the code is?
That said, there are other good reasons not to commit configuration like this to your repo (configuration and code don't always change in unison; sometimes you need to use older code with more recent configuration, for example) but it's at least better than the current situation where careless developers wind up with thousands of dollars of charges against their AWS accounts after accidentally committing AWS keys.
I was surprised that they've chosen to use the 'shared database connection across two threads' approach. While this was, for a while, a popular approach among people trying to figure out the best way to use Capybara-based tests with Rails -- mostly because you get to keep transaction-based database cleanup -- it was largely determined through experience to be too flaky. ActiveRecord's architecture just wasn't designed for a Connection object (representing a single actual network connection) to be shared between two threads; it's not a thread-safe object.
José Valim originally proposed the approach in a gist, and you can see a long discussion among people having hard-to-debug problems with it here: https://gist.github.com/josevalim/470808
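For reference, the shared-connection monkeypatch from that gist looks roughly like this (paraphrased; it goes in test setup and assumes a loaded Rails/ActiveRecord test environment):

```ruby
# Force every thread -- the test thread and the app server thread that
# Capybara spins up -- to reuse one ActiveRecord connection, so the
# test's wrapping transaction is visible to the app under test.
class ActiveRecord::Base
  mattr_accessor :shared_connection
  @@shared_connection = nil

  def self.connection
    @@shared_connection || retrieve_connection
  end
end

ActiveRecord::Base.shared_connection = ActiveRecord::Base.connection
```

The race condition lives exactly here: two threads can issue queries on that single connection object concurrently, and the adapter does nothing to guard against it.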
There are other such threads in various places from over the years.
Here's one blogger explaining the race condition issue, with the ActiveRecord architecture not actually being concurrency-safe in this use case: http://influitive.github.io/news/2015/09/02/dont-use-shared-...
I thought I remembered Valim actually posting somewhere himself "Yeah, turns out, that wasn't a great idea", but now I can't find it. (It may be that José deleted most of his tweets when he left twitter?)
So I'm surprised Rails took this approach. From https://github.com/rails/rails/pull/28083 I can't tell for sure whether what's now in Rails basically just takes Valim's approach (violating AR's concurrency contracts and expectations), or actually takes a new approach that is truly concurrency-safe while somehow still sharing the connection (another lock somewhere? Are AR connections in general now safe to share between threads? Because that would be huge if it were true in the general case). Can anyone else tell?
Either way, reliable Capybara testing has been a real challenge for a lot of devs due to race conditions; it would be shocking if Rails got it right on the first try, when many devs haven't managed to figure out a right way to do it in years of trying! I expect lots and lots of frustrated devs running into race conditions and filing issues. It will be interesting to see whether this ends up a maintained feature or another piece of Rails-internal abandonware. If dhh and his team use it and run into the race conditions, they will certainly be addressed; otherwise, some 'difficult' Rails features seem to sometimes end up basically abandoned.
They are not surprising, because the concurrency guarantees of AR do not include sharing a connection object between threads; it is not meant to be thread-safe.
With postgres, the race conditions on sharing a connection between two concurrent threads sometimes look like "PG::Error: connection is closed" or "PG::UnableToSend: another command is already in progress", with mysql sometimes like "Mysql2::Error: This connection is in use by: #<Thread:...>"
While integration tests may ideally not touch the database directly (although in practice I find you often need or want to do some 'manual' setup first), even if yours don't, the _other_ tests do db setup and teardown, and the problems start happening when the Rails app is still busy doing something triggered by a test that the test thread considers 'over'. It's easy to say "Well, don't do that", but in practice it is very difficult to diagnose, debug, and stop. Using fancy new front-end frameworks like React or Angular tends to make it orders of magnitude worse.
If you haven't run into problems, consider yourself lucky and I don't hold it against you, but many have.