In Elixir you really get the full power of multi-core and distributed computing out of the box.
Code that would have been beyond my pay grade, or that I wouldn't even have imagined writing in Ruby or JavaScript, is now easy to reason about and maintain. I can write succinct code that is easy to read, fast, less error-prone, able to take advantage of multiple cores, and easy to scale across machines.
The Erlang scheduler is so damn powerful, and it feels amazing to be able to execute your code on multiple machines with a simple distributed task, which is built in as standard functionality of the language.
I'll end this note by saying: look at the problem you are trying to solve. If you need multi-core and distributed features (which is more common than you might think), Elixir is truly your friend.
I can say without a shadow of a doubt that the project I'm building right now would not be progressing as fast as it is if I had picked anything other than Elixir. You get a lot of bang for your buck when it comes to productivity in the domain that Elixir solves for.
Is it though? At least in my line of work I don't think I've ever run into this. I feel like I've always been able to distribute just fine with workers/queues. If I even suspected it would I'd look into it more, but generally I find distributing across systems to be a software architecture-level and not language-level work; perhaps I'm missing something, however.
The other part is that you can build more efficient systems by relying on this. If you have a machine with 8 cores, it is more efficient to start a single process that can leverage all 8 cores and multiplex on both IO and CPU accordingly. This impacts everything from database utilization, to metrics, third-party APIs, and so on.
The Phoenix web framework also has great examples of using distribution to provide features like distributed pubsub for messaging and presence without external dependencies.
However, when it comes to building systems, I agree with you and I would probably use a queue, because you get other properties from queues, such as persistence and keeping the systems language-agnostic.
I hope this clarifies it a bit!
Workers/queues in languages like Ruby have problems like:
* They require very specific ergonomics (for example, don't hand the model over; hand over the ID so you can pull the freshest version and not overwrite it).
* They require a separate storage system, like your DB, Redis, etc. This doesn't sound big, but when doing complex things it can turn into hell.
* They have to be run in a separate process, which makes deployment more difficult.
* They're slow. Almost all of them work on polling the receiving tables for work, which means you've got a lag time of 1-5 seconds per job. Furthermore, the worse your system load, the slower they go.
* You can't reliably "resume" from going multi-process. Let's say you're fine with the user waiting 2-3 seconds for a request to finish. With workers/queues, you either have to poll to figure out when something finished (which is not only very slow, but error-prone), or you have to just go slow and not multi-process, making it an 8-10 second request even though you've got the processing power to go faster.
So, you've got all that. Or, in Elixir, for a simple case, you replace `Enum` (your generic collection functions) with `Flow` and suddenly the whole thing is parallel. I mean that pretty literally too: when I need free performance on collections, that's usually what I do. It works 95% of the time, and the other 5% is where you need really specific functionality anyway, and for those cases Elixir still has the best solutions I've ever seen.
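The Enum-to-Flow swap described above needs the separate `flow` hex package (plus a `Flow.from_enumerable/1` step), so here is a stdlib-only sketch of the same swap-one-module idea, using `Task.async_stream/2`:

```elixir
# Sequential: every element is processed on the calling process.
sequential = Enum.map(1..10, fn n -> n * n end)

# Parallel: the same map step, fanned out across scheduler cores.
# Task.async_stream preserves input order by default, so results match.
parallel =
  1..10
  |> Task.async_stream(fn n -> n * n end)
  |> Enum.map(fn {:ok, result} -> result end)
```

`Flow` builds on this same idea, adding partitioning and back-pressure for larger pipelines.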
As for distribution, again, this isn't necessarily exclusive, but the right primitives are there and work well to start with. You could have good reasons for a bigger separation between nodes as well.
Erlang has some warts too, of course. For me, the warts are usually about scale, oddly enough. BEAM itself scales very well, but some parts of OTP don't, often because of the difference in expectations between a telecom environment and a large-scale internet service. Two examples:
A) The (OTP) TLS session cache is serviced by a single process, and in earlier versions the schema and queries were poorly designed: you could store multiple entries for a single destination, and the query would retrieve all of them and then discard all but the first. When you were making many connections to a host that issued sessions but didn't resume them, all of the extra data could overwhelm that one process, resulting in timeouts when attempting to connect to any TLS host. This was fixed in a release after R18, I believe, to store only one session per cache key, and the cache was pluggable before then, but it wasn't fun to find this out in production.
B) reloading /etc/hosts and querying the table it loads into weren't done in an atomic way. I believe this is fixed in upstream as well, but queries satisfied by /etc/hosts were actually two queries on the same table, and reloading the table was done by clearing and then loading, so the second query could fail unexpectedly. This led to the bundled http client getting stuck, despite timeouts.
One of the alternative languages you mention is single-threaded, and the other has a global interpreter lock (in its most common implementation). That Elixir is superior to them for parallel programming doesn't really say much.
> Memory efficiency is much better than most other languages (with the exception of Rust, but Elixir is miles better at error handling than Rust, which is a more practical feature IMO)
How exactly are arbitrary runtime exceptions better? Any Elixir function you call has the potential to crash. Meanwhile, with Rust, your function returns a `Result` if it can error, and callers are then forced by the compiler to handle those, either via pattern matching or ergonomic error propagation.
Rust has runtime panics, but those are for rare unrecoverable errors and are not at all used for conventional error handling, reserved usually for C FFI, graphics code, etc.
As you said, in Rust you are forced to handle errors by the compiler. In Elixir, you actually aren't. In fact, we even encourage you to [write assertive code](http://blog.plataformatec.com.br/2014/09/writing-assertive-c...). This is also commonly referred to as "let it crash". In a nutshell, if there is an unexpected scenario in your code, you let it crash and let that part of the system restart itself.
This works because we write code in tiny isolated processes, in a way that, if one of those processes crashes, it won't affect other parts of the system. This means you are encouraged to crash and let supervisors restart the failed processes. I have written more about this in another comment: https://news.ycombinator.com/item?id=18840401
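A minimal, runnable sketch of that restart behavior, using a hypothetical `Cache` process under a `one_for_one` supervisor:

```elixir
defmodule Cache do
  use GenServer
  def start_link(_), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  def init(state), do: {:ok, state}
end

# One supervised child; if it crashes, the supervisor restarts it.
{:ok, _sup} = Supervisor.start_link([Cache], strategy: :one_for_one)

pid_before = Process.whereis(Cache)
Process.exit(pid_before, :kill)  # simulate an unexpected crash
Process.sleep(100)               # give the supervisor a moment to restart it

# A fresh Cache process is back up; the rest of the system never noticed.
pid_after = Process.whereis(Cache)
```

The crashed process loses its in-memory state, which is why the idiom pairs supervisors with small, rebuildable state per process.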
I also think looking at Erlang's history can be really interesting and educational. The Erlang VM was designed for building concurrent, distributed, fault-tolerant systems. When designing the system, the only certainty was that there would be failures (hello, network!), so instead of trying to catch all failures upfront, they decided to focus on a system that can self-heal.
I personally think that Erlang and Elixir could benefit from static types. However, this is much easier said than done. The systems built with those languages tend to be very dynamic, even providing things such as hot code swapping, and only somewhat recently have we started to really explore the concepts required to type processes. A more humble type system could start with the functional parts of the language, especially because I think other techniques, such as model checking, can be more interesting than type systems for the process part.
I did not write the post for general consumption, more as a reply to the question from the person as indicated in the first paragraph of the thread ... I really did not expect it to end up on HN. ¯\_(ツ)_/¯
100% Agree that there is a lack of "critical evaluation" and it borders on "fanboy" ... It's not a scientific or statistical analysis because I did not find any data I could use to make an argument either way.
My experience with Elixir, JavaScript, Ruby, Java, PHP, etc. is based on doing the work in several companies big and small and I don't consider myself an "expert" in any of these languages. I have felt the pain of having to maintain/debug several large codebases with incomprehensible/impenetrable and untested code over the years and I find Elixir to be the most approachable of the languages I am fluent with.
I wish there was an objective way of assessing the day-to-day experience of living with a language ... have you come across such a measure that isn't based on the opinions of, as you say, "fanboy" users?
You appear to have superior knowledge/experience of Rust. Have you written any tutorials or blog posts sharing that knowledge? I would love to read your work. Is this you: https://github.com/chmln ? If it is, https://github.com/chmln/asciimath-rs looks cool! (nice work! :-)
I didn't mean the "fanboy" remark to be personal on any level. I just thought that some particular comparisons were unfair.
There are numerous valid points in the piece and I don't see much wrong in sharing the joy of working with a language, even if it's a little biased.
> I wish there was an objective way of assessing the day-to-day experience of living with a language ... have you come across such a measure that isn't based on the opinions of, as you say, "fanboy" users?
At least the "scientific" comparisons of programming languages I've come across have been questionable at best. Each language has its strengths and weaknesses, big or small, so wholesale comparisons are complicated further. Thus people have to rely a lot on opinions and real-world experiences of themselves and others.
> You appear to have superior knowledge/experience of Rust. Have you written any tutorials or blog posts sharing that knowledge?
Thanks for the compliments, and that's indeed my profile. Unfortunately I haven't had the time to blog at all, but perhaps I will someday get around to it.
fileString = checkFile("sample.txt")
if (fileString == null) {
    // handle error
}
If I showed the above pattern to typical JavaScript, Python, Ruby, or Elixir programmers at any company, 99% of them won't be able to identify why this pattern is bad; they see it as a necessity and rely on the programmer's skill to catch that potential null (or exception, depending on the implementation). In fact, you, dear reader, might be one of those programmers. You might be reading this post and not understanding why the above code is unsafe and bad style. To you, I say that there are actually compilers that automatically prove and force you to handle that potential null, not as a logic error but more as if it were a syntax error.
That guy who advocates unit tests at your company doesn't understand that unit tests only verify your program is correct for a test case. These compilers provide PROOF that your program is correct and can eliminate the majority of the tests you typically write.
The code above is unsafe not because of the developer, it is unsafe because of the nature of the (made up) programming language.
In Elixir, Python, and JavaScript you will inevitably have to follow this unsafe pattern.
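For concreteness, the Elixir shape of that same pattern is a tagged tuple rather than a null check; the compiler still won't force you to write the `{:error, _}` clause, which is the poster's point. A runnable sketch (the file is created just so this example is self-contained):

```elixir
# Set up a file so the happy path succeeds in this sketch.
File.write!("sample.txt", "hello")

# The Elixir version of the pattern: a tagged tuple instead of null.
# Omit the {:error, _} clause and a read failure crashes at runtime
# with a CaseClauseError -- handling is by convention, not compiler-checked.
result =
  case File.read("sample.txt") do
    {:ok, contents} -> {:ok, String.upcase(contents)}
    {:error, reason} -> {:error, reason}
  end

File.rm!("sample.txt")
```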
It has been extremely stable, scaling has been a non-issue. Error reporting has become easier and easier, now that companies like Sentry and AppSignal have integrations for Elixir.
Elixir is VERY fault-tolerant. DB connection crashing? Ah well, reconnects immediately, while still serving the static parts of the application. PDF generation wonky? Same thing. Incredibly fast on static assets, still very fast for anything else.
I've had nothing but fun with the language and the platform. And the Phoenix Framework is just icing on the cake. I've been fortunate to have been to many community events, and meeting (among so many others) José and Chris at conferences has made me very confident that this piece of software has a bright future. The Elixir slack is also VERY helpful, with maintainers of most important libraries being super responsive.
I would not start another (side or production) project with anything else than Elixir.
I still don't understand this.
I don't think I've ever built a web server in any language where this wasn't true unless I specifically wanted hard failure.
The amount of fault tolerance would be a per-app design goal rather than a language feature. I've worked on apps, in all sorts of languages, that range from any failure being a hard failure to being impossible to crash, and this was due to business requirements.
For example, regarding your examples, just about every web server I can think of will automatically turn uncaught exceptions into 500 responses unless you opt otherwise.
In most languages, you achieve this behaviour by rescuing/catching exceptions. In Erlang/Elixir, we don't like to do that, because exceptions are a mechanism to signal that something went wrong, and telling the system to continue despite failures is not a good practice.
Instead, in Erlang/Elixir, you organize your software using separate entities (called processes), which are completely isolated. Therefore, by definition, if something fails, it won't affect other parts of your system. This also leads to other features like supervision trees, which allows you to restart part of your application, exactly because you know all of those entities are isolated.
When you have shared mutable state, it is much harder to have something like built-in supervisors, because you have no guarantee that a crashed entity did not also corrupt the shared state.
In a nutshell, I would say Erlang/Elixir makes you think more about failures and how things go wrong.
I know this sounds a bit handwavy but it is not that trivial to explain those details on text. I have also given talks on this called Idioms for Building Fault-Tolerant and Distributed Applications in case you are interested: https://www.youtube.com/watch?v=B4rOG9Bc65Q
The magic is in the supervisor pattern, explained here for erlang: http://erlang.org/documentation/doc-4.9.1/doc/design_princip...
It is hard to describe why this "feels different" in Elixir than it does in Express.js or a Tomcat running a Java application. It's all experiential for me, but maybe I can put the sentiment in words: I always KNOW that whatever part of my application may break, however much and for whatever duration, the scheduler and the supervisors will make sure that the rest of the system runs exactly as intended, and the broken part of the system will be back up eventually. I did not have this feeling (as strongly) prior to working with Elixir.
But I will admit this is a very subjective position. And I am not sure you'd experience it the same way were you in a similar situation.
In most runtimes, initialization like that is linear (think bash’s execfail switch); if something fails to initialize, the whole HTTP app daemon will crash out, get restarted by its init(8) process, and then try again.
In Erlang, you’ve got something more like “services” in the OS sense: components of the program that each try to initialize on their own, independently, in parallel, with client interfaces that can return “sorry, not up yet” kinds of errors as well as the regular kind—or can just block their clients’ requests until they do come up (which is fine, because the clients are per-request green threads anyway.) In Erlang, the convention is that these services will just keep retrying their init steps when they hit transient internal errors, with the clients of the component being completely unaware that anything is failing, merely thinking it isn’t available yet.
Certainly, Erlang still has a linear+synchronous init phase for its services—just like OSes have a linear+synchronous early-init phase at boot. But the only things that should be trying to happen in that phase involve acquiring local resources like memory or file handles which, if unavailable, reflect a persistent runtime configuration error (i.e., the dev or the ops person screwed up), rather than a transient resource error.
Indeed, any language runtime could adopt a component initialization framework like this; but no language other than Erlang, AFAIK, has this as its universal “all ecosystem libraries are built this way” standard. If you want this kind of fault-tolerance from random libraries in other languages, you tend to have to wrap them yourself to achieve it.
(You could say that things like independent COM apartments or CLR application domains which load into a single process are similar to this, but those approaches bring with them the overhead of serialization, cross-domain IPC security policy enforcement, etc., making them closer to the approach of just building your program as a network of small daemon processes with numerous OS IPC connections. Erlang is the “in the small, for cheap” equivalent to these, for when everything is part of the same application and nothing needs to be security-sandboxed from anything else, merely fault-isolated.)
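A minimal GenServer sketch of that "keep retrying, tell clients you're not up yet" convention; `try_connect/0` here is a stand-in for a real network dependency:

```elixir
defmodule Upstream do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  # init/1 returns immediately; the real connection work happens after
  # startup, so a slow dependency never blocks the supervision tree.
  def init(:ok), do: {:ok, :not_ready, {:continue, :connect}}

  def handle_continue(:connect, state), do: attempt_connect(state)
  def handle_info(:retry, state), do: attempt_connect(state)

  # Clients get a "sorry, not up yet" reply instead of an exception.
  def handle_call(:fetch, _from, :not_ready), do: {:reply, {:error, :not_ready}, :not_ready}
  def handle_call(:fetch, _from, {:ready, conn} = state), do: {:reply, {:ok, conn}, state}

  defp attempt_connect(state) do
    case try_connect() do
      {:ok, conn} ->
        {:noreply, {:ready, conn}}

      {:error, _} ->
        # Keep retrying transient errors; clients just see :not_ready.
        Process.send_after(self(), :retry, 1_000)
        {:noreply, state}
    end
  end

  # Stand-in for a real network dependency (an assumption of this sketch).
  defp try_connect, do: {:ok, :fake_conn}
end

{:ok, _pid} = Upstream.start_link([])
reply = GenServer.call(Upstream, :fetch)
```

Because the `:continue` step runs before any client call is processed, the `:fetch` above already sees the connected state.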
The fault tolerance that I love about Erlang/Elixir is the actor model. Everything is (or can be) an actor, which is like a living and breathing instance of a class. So they can live and do their own stuff, and then if they fail at that and need to be recreated, they get recreated by something that supervises them.
Contrast this to for instance a Django or Rails app... if a vital service in the system dies the entire Ruby or Python runtime will (potentially) die and then respawn. It's cheap and we don't care, right? It will get restarted. The net result is similar, you don't get woken up in the middle of the night and customers are happy. But in systems where you want or NEED an entire system to remain on 24x7x365 it changes the game.
I've written large applications in Clojure/Clojurescript and I've seen/reviewed reasonably large code bases in Elixir, and while I would agree that Elixir is a very good solution for many problems, it is not a tool for everything.
I've seen third party dependencies churn on Elixir as well (packages that are no longer maintained or alternatives that are better) - I think it's an inherent problem with using dependencies and has nothing to do with the programming language in which those dependencies are written.
> As a developer I just want to get on with my work, not have to read another Hackernoon post on how everything from last week is obsolete because XYZ framework
My recommendation is that you don't read Hackernoon. This seems like a very ineffective way to level up your developer skills.
Edit: I agree that Elixir is very nice and would pick it over JavaScript for backend heavy applications without thinking. I just don't think this argument makes any sense in that context.
It's not completely true IMO, for two reasons: 1) the Node.js standard library is quite poor compared to, say, Java's, Scala's, or Python's, so you generally need quite a lot of modules to do anything; 2) the npm ecosystem is much more amateur. To do anything you have a ton of modules that are poorly supported by hobbyists, or not supported at all. This can force you to change modules/libs regularly. Compare this to the Java ecosystem, where more people are working together to build well-supported, high-quality libraries (the Apache libraries, for example).
The other issue is that things move fast and break; you can't be sure that a 3-month-old tutorial will still work today.
Edit: I know I can grab node and npm outside the repositories, and I did, but you don't see this issue with other languages, where I'm not forced to install the latest stuff to get most libraries working.
Whereas in something like the Python or Rust communities (where I have had more experience), I have found that even when there are packages that do the same thing, there usually aren't nearly as many duplicates, and oftentimes the community has done a better job communicating the value of each package. There is just less confusion around the whole thing.
I have also found there to be relatively little overlap between the big packages, in my experience
I would never work on a backend with JavaScript or any other interpreted language, due to error-proneness.
There is no connection, at all, between a language being interpreted and it being error-prone to write or run. You either mean something else or are mistaken.
* the syntax is well thought out (`with`, destructuring, and `|>` are powerful)
* message passing has great use-cases
And then it has problems that are not necessarily "elixir-y", but are there nonetheless:
* it's hard to model an application around the Actor model. It's very easy to abuse it.
* it's hard to maintain / refactor a large application without help from the compiler before run-time
* it's hard to maintain an application in a language with a young ecosystem and no "seamless" integration with a better established one (ports are not seamless.)
Quite frankly, I'm looking forward to writing a backend in Rust, to have a point of comparison.
As for Rust, do try it out. Haskell-esque type checking, the “anti-OO” interpretation of C-style conventions, and memory safety without garbage collection are a seriously potent set of features, but it can be frustrating when you find out yet again that your whole day of R&D leads somewhere incompatible with its philosophy, and is therefore a dead end. I’m building a Rust webservice framework as a hobby/learning project, but it wouldn’t be my first choice for a production API under active development. On the other hand I’m not aware of a better choice for an embedded daemon process or a stable microservice.
I realize that the tone of this GitHub post has been a bit fanboy-ish and biased but you have to understand that your comment here is biased as well. It's non-objective to dismiss a technology because somebody couldn't articulate it as well as Mark Twain would. Most people simply aren't that good at articulation -- me included. Doesn't mean that what they are trying to articulate is invalid, wouldn't you say?
As for "fundamental problems" -- it's a case of "pick your poison" as usual. There is no universally good language. If you frequented the official Elixir Forum you would know that most of us use other technologies every day. Many people in the forum have 10+ years of experience and are well-aware of the big picture. We are very realistic about when Elixir is a good fit and when it isn't. There's a plethora of posts where we straight up advise somebody not to use Elixir.
IMO practice critical thinking and don't judge by the tone of isolated articles.
As a final point, you should also consider why the language has so many fanboy-like articles. Maybe it is doing something genuinely good? Objective thinking demands consideration of all major possibilities.
The issue I see is carryover from other ecosystems taking paradigms that aren't necessary and don't fit into libraries and patterns. It feels like there are still conventions to settle on.
The programming world is moving strongly toward statically typed languages, because today there's pretty much zero reason to use a dynamically typed language.
How? Python and Javascript (not Typescript) are two of the most popular languages in the world and still growing very fast by many accounts. Which strongly typed languages are taking over?
def handle_data(%{
  customer: %{
    date_of_birth: %NaiveDateTime{} = dob,
    account_balance: %Decimal{} = balance,
    name: name,
    count_purchases: purchases
  }
}) when is_binary(name) and is_integer(purchases) do
  # work with the data here
end
^ This both asserts on a particular data structure (a map with a "customer" key containing at least those four attributes) and asserts on the types of some of the attributes. I find it pretty handy and practical.
But I concede that strong+static typing eliminates a class of bugs preliminarily. That is unequivocally true.
And this is anecdotal but with good pattern-matching and guards in Elixir, I can’t remember the last time I created a bug that would have been made impossible by static typing.
It's hard to pick one big draw, but I'd say the biggest for me is that everything I wanted to do in rails has been possible in Elixir and then additional functionality not easily possible in rails is trivial in Elixir. I often consider the distribution techniques as "enhancers" as you could work around them with global locks and data stores, but you don't need to.
I'm very bullish on Elixir and I'm curious to see where it will go. Looking forward to giving my talk about bringing Elixir into production (from a human and technical standpoint) at Lonestar Elixir conference.
I also noticed that every piece of functionality I write in both Ruby and Elixir is more concise (less code) in Elixir, as well as 5-10x faster :)
This is the third time Elixir has crossed my path - so I will check it out for sure! - but this article didn't seem to actually say anything (it seemed more like a PR piece that was trying not to be technical, and the main argument appeared to be "well, it's not JavaScript!").
The part that actually talked about Elixir listed some Pros that didn't seem that unique. What's the "killer feature" of Elixir - or is it just a combination of "good features"?
Elixir "threads" are called "processes". It's a bit confusing at first, but there's a good reason for it. So from hereon in, when I say "process" think "thread".
Elixir processes, like OS processes, are fully isolated from each other. If Process A has data that Process B wants, Process B has to ask for it (send A a message) and can only get a copy of the data (like real OS processes, hence why the name makes sense). The advantage is that data ownership is explicit, race conditions are eliminated, and code is less coupled (A can't hold a reference to an object that points to B that points to C..., which anyone anywhere in the code could mutate).
At a high level, this allows you to get many of the benefits of microservices, without the high cost (deployment complexity, service discovery, lower performance).
We run an analytics system on live sports which does various transformations on a stream of play-by-play data. There are very distinct features: boxscores, lineups, scores, time alignment, broadcast information... For each live game we start a process for each of these features. When a play-by-play event comes in, we pass it to each worker of the appropriate game, and each worker does its own thing.
The workers are isolated. Not just the physical code, but at runtime. This makes them easier to test, refactor and grasp.
There's some interaction between the workers. For example our boxscore worker needs to know the score at a given time. So it sends a message to the score worker for that game: {:get, time}. The score worker replies with the score. There's no need for explicit locking. A process handles a message at a time. There's no chance that the boxscore worker changes the score worker's data (it can change the copy of the data that it got, but that change is only on the copy).
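A stripped-down sketch of that exchange, with a hypothetical in-memory score table (the real workers would be GenServers, but plain spawn/send/receive shows the mechanics):

```elixir
# A tiny score "worker": owns its data, replies to {:get, time, caller}.
score_worker =
  spawn(fn ->
    scores = %{10 => {3, 1}, 20 => {3, 2}}  # time => {home, away}

    loop = fn loop ->
      receive do
        {:get, time, caller} ->
          # The caller receives a *copy*; it can never mutate our map.
          send(caller, {:score, Map.get(scores, time)})
          loop.(loop)
      end
    end

    loop.(loop)
  end)

# The "boxscore worker" side: ask for the score at time 20.
send(score_worker, {:get, 20, self()})

score =
  receive do
    {:score, s} -> s
  after
    1_000 -> :timeout
  end
```

The worker handles one message at a time, which is exactly why no explicit locking is needed.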
Really, it's most of the benefits of microservices (and I mean, being able to have true MICROservices, not just rebranded SOA) with few of the downsides.
OTP (Open Telecom Platform) is a set of tools, libraries, and middleware that was built around Erlang. This has been in development since the 90s, and was originally developed by Ericsson for the telecom industry to handle massive numbers of concurrent connections without a single point of failure (I may be butchering this story, but this was the end result). OTP includes things like extremely lightweight processes, process supervision to quickly restart based on your desired behavior, multiple native real-time datastore options (both in-memory and/or persistent), hot deployments with no downtime, an extensive standard library, and other cool things that have stood the test of time. All of this just comes with the base Erlang/OTP installation.
Elixir essentially introduces a modern ecosystem around the Erlang/OTP system.
Elixir looks a little like Ruby, but that's where I'd end the comparison. There's almost no similarity in how they can be used well.
The same could be said for things like data processing with Flow [2], or things like Ecto (the semi-official database wrapper), or even third-party libraries like ex_money [3].
Then you start looking at the packages and the language, and see that there are rarely thousands of open bugs, that the infrastructure (mix, hex, docs, etc.) is really nice to use, and that the language is really stable yet still provides you with useful, clear abstractions. Or that you can spin off processes and tasks inline without too much worry; that you can use 20+ years of Erlang libs transparently; that it's immutable and has the best concurrency primitives of any system available; that it allows you to supervise processes and let them crash if needed without bringing down your app; that you transparently get multi-machine support out of the box; and that message passing is built in as the default way to scale the system. Or pattern matching, or |>, or the amazing community.
[1] https://phoenixframework.org/blog/the-road-to-2-million-webs... and https://dockyard.com/blog/2016/03/25/what-makes-phoenix-pres...
Elixir makes the Erlang VM much more pleasant to use in my opinion (plenty of Erlangers will disagree with this, ymmv). It provides a developer-friendly, modern ecosystem, with doc support, testing support, a great macro language for building DSLs, package management, etc, but underlying it all is a battle-tested VM, that has been under active development for 30 years.
It's a nice ecosystem with a good culture in a language that promotes pretty good programming practices.
The only major downsides (if they even are downsides for you) are that it's dynamic--not good for number crunching, though you can connect to compiled binaries--and not statically typed--which can lead to runtime bugs.
That said, part of the philosophy is to enable fast failure without taking down the whole application. In communications, it's not considered the end of the world to drop or fail on one connection, so it works well for web services, chat, etc.
Bonus: parallelism almost for free.
If only GenServers had a sensible interface instead of semi-random handle_* functions that obfuscate what a given GenServer is implementing.
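For readers who haven't hit this, a hypothetical `Counter` shows the indirection being complained about: the public function and the `handle_call` clause that serves it are linked only by the message atom:

```elixir
defmodule Counter do
  use GenServer

  # Public interface: this is what callers actually use.
  def start_link(n), do: GenServer.start_link(__MODULE__, n)
  def increment(pid), do: GenServer.call(pid, :increment)

  # Callbacks: the handle_* plumbing the comment is complaining about --
  # the mapping from :increment to this clause exists only by convention.
  def init(n), do: {:ok, n}
  def handle_call(:increment, _from, n), do: {:reply, n + 1, n + 1}
end

{:ok, pid} = Counter.start_link(0)
first = Counter.increment(pid)
second = Counter.increment(pid)
```

Reading the module top-to-bottom, you have to cross-reference each `GenServer.call` with its matching `handle_call` clause to know what the server really does.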
I am building a quite involved video learning platform as we speak with Elixir and Phoenix. No regrets so far, and if anything, as time goes on, I'm becoming more and more happy with the decision.
The community is really great and there's a lot of quality libraries available. Not just libraries, but entire production systems too. For example https://changelog.com/ is written with Elixir / Phoenix and their platform is open source'd at https://github.com/thechangelog/changelog.com. There's so much good stuff in that repo to learn from.
Also the Elixir Slack channel has 20,000+ people in it and the official forums at https://elixirforum.com/ are very active.
- Ship faster
- Write simple, readable, reliable and fast code
- Scale easier and with less resources
- Onboard and train new hires into the code base quicker
I know I'm making it out to be a panacea, which, to be clear, it isn't: the deployment story still has some final pieces for the core team to work through. But I will say I'll continue to build with it in the future.
I've been programming intensively in Elixir for the past two years and it's a wonderfully productive language, which allows one to write elegant systems that leverage the multi-core architecture of today's machines effectively.
In addition, the networking capabilities and fault tolerance of the VM make writing systems which spawn machines and services a breeze; not to mention the ecosystem only gets better by the day.
So yeah, Elixir is one of my main tools when I want to get things done elegantly and productively. And if for some reason I need to speed things up a bit here and there, I just add a little rust into the mix. [1]
But now I have a contract that would really benefit from the runtime. That being said, the existing environment has a lot of Python expertise, and I don't have enough production Elixir experience to have confidence in myself to deliver something of the right caliber.
It’s a damn shame. This system has to process hundreds of thousands of API calls for workloads against a half dozen third parties that all have different rate limits and failure modes. It’s the perfect job for Elixir. It needs to be as fast as possible while isolating failures to the smallest unit possible.
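As an illustrative sketch (not the parent's actual system): in Elixir, a per-provider concurrency cap with isolated failures is only a few lines with Task.async_stream. `Fanout` and `do_call` here are hypothetical names standing in for real HTTP calls:

```elixir
defmodule Fanout do
  # Hypothetical sketch: cap concurrency per third party and keep
  # each call's failure isolated to that one call.
  def call_all(requests, max_concurrency) do
    requests
    |> Task.async_stream(
      fn req -> {:ok, do_call(req)} end,
      max_concurrency: max_concurrency,  # per-provider rate cap
      timeout: 5_000,
      on_timeout: :kill_task             # a slow call can't block the rest
    )
    |> Enum.map(fn
      {:ok, result} -> result
      {:exit, reason} -> {:error, reason}  # one failure stays one failure
    end)
  end

  # Stand-in for a real, rate-limited HTTP call.
  defp do_call(req), do: req
end
```

Each request runs in its own BEAM process, so a crash or timeout in one is converted into an `{:error, reason}` tuple instead of taking down the batch.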
This isn't to say you'll write great Elixir from the beginning. I'm on a codebase now that dates from before the semantics of good Elixir were really well known (2016). It's not uncommon for me, every week or two, to rewrite a portion of it to look cleaner and be more performant.
The crazy thing though? Holy shit did it scale. We're doing event processing for an application that handles nearly 100M events per week. At times, it needs to process 1,500 per second. These events need to check the DB multiple times, fan out to multiple services, and make discrete HTTP calls of their own to external servers.
We're still on one box. We still have plenty of the old, harder-to-read, less-performant code. And it still takes under 10 minutes to understand the deepest inner workings of any one feature in the system.
I think you'd be pleasantly surprised.
Frameworks make a huge difference here rather than language. Phoenix and Ecto have done a really great job with performance.
Ruby will deliver the same performance on a similarly light framework like Sinatra/Roda + Sequel but definitely not Rails.
Once you get a high performance service running on Phoenix + Ecto or Sinatra + Sequel, the gains from moving to compiled languages are a lot smaller unless you invest a huge amount of time in optimisation.
It's the first language I genuinely enjoy reading and writing even in my private life.
If I wonder about the internals of a library I use, I can just look into the code and kind of understand what's happening. Never had that with JS or anything.
I'm just a genuine fanboy.
The only drawback I feel is: some libraries that are quite mature in JS are not that well developed in Elixir. Some libraries are quite dead and it's hard to find alternatives (mostly obscure stuff).
But on the other hand, it often seems manageable to just write it yourself, or fork it and move on.
Really, Go "is the choice if you need to 'sell it' to a 'Boss'" and the imperative programming style leads to more complexity? And Python/Django can only be used if you "don't need anything 'real time' and just want RESTful 'CRUD'".
I get it, you guys like Elixir, but painting the world using such broad strokes doesn't really sound like "kaizen learning culture" to me, but more like "Negative Nancy".
I'd say Elixir's killer feature in today's day & age is concurrency. I'd argue that using concurrency is appropriate in most programming situations IF your language's concurrency model isn't a pain in the ass to use. You can write completely non-blocking, async code in Elixir (and Erlang) without losing your mind. The preemptive scheduling is nice, too.
I love a lot of other stuff about Elixir, too. Pattern matching, process supervision, tooling, documentation, etc.
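To make the "async without losing your mind" point concrete, here is a minimal sketch: two slow calls run concurrently, yet the code reads top to bottom with no callbacks or futures chaining:

```elixir
# Each Task.async spawns a lightweight BEAM process; both sleeps
# overlap, so total wall time is ~100 ms rather than ~200 ms.
t1 = Task.async(fn -> Process.sleep(100); :users end)
t2 = Task.async(fn -> Process.sleep(100); :orders end)

{users, orders} = {Task.await(t1), Task.await(t2)}
```

The preemptive scheduler means neither task can starve the other, even if one were CPU-bound.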
https://github.com/aws-samples/aws-lambda-elixir-runtime
The only downside is that out-of-the-box performance is subpar for HTTP services, but it is still acceptable.
The setup for the test:
- provision node A as the server
- provision node B as the client
- open X (16..16000) connections from node B to node A and use HTTP pipelining to send requests
I used wrk2 as the test client; it is pretty amazing, and the latency distribution graphs are great to look at.
Tools that were clear winners on performance:
- https://github.com/valyala/fasthttp
Elixir/Cowboy/Plug is in the middle range, kind of like what Techempower[1] guys saw during their tests.
[1] https://www.techempower.com/blog/2018/10/30/framework-benchm...
The dynamic typing in Elixir/Erlang is a trade-off for Actor-model message passing. You get a state-of-the-art runtime for fault tolerance and concurrency, but the messaging aspect makes typing problem-prone. A co-dependency on a custom type is coupling you want to avoid when sending messages around: you don't want a long-running process that only knows about Type_v1 being sent a message by a newer process speaking Type_v2.
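One common way to live with this is to tag messages with an explicit version and pattern-match on the tag, so both shapes can coexist during a rolling upgrade. A hypothetical sketch (the Events module and tuple shapes are made up):

```elixir
defmodule Events do
  # Accept both message shapes while nodes are mid-upgrade,
  # normalizing to one internal representation.
  def normalize({:user_v1, name}), do: %{name: name, email: nil}
  def normalize({:user_v2, name, email}), do: %{name: name, email: email}
end
```

A process upgraded to v2 can keep receiving v1 tuples from older senders; unmatched shapes fail fast with a FunctionClauseError instead of silently corrupting state.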
The Aeternity team is building blockchain systems with Erlang for nodes and infrastructure. However since smart-contracts necessitate so much type safety and formal verification - they're designing an ML flavor functional language just for that.
Anyone know what's going on there?
1. https://www.techempower.com/benchmarks/#section=data-r17&hw=...
This would be a more apt comparison:
https://www.techempower.com/benchmarks/#section=data-r17&hw=...
- better configuration (tweaked to this benchmark / hardware). Bigger communities will try harder to tweak their benchmark config. The Phoenix one is probably just the default one or slightly tweaked. Is the SQL connection pool size ideal? Are all default Phoenix-added Plugs ("middlewares") needed here? Is the BEAM virtual machine config / flags ideal? Would using OTP releases be beneficial?
- Elixir ecosystem is constantly improving the performance. The VM, language and packages versions should be updated. Elixir 1.6.5, Phoenix 1.3.2, and Ecto 2.1 are not recent.
- Abstractions used (and amount of them, so amount of work done at run-time). Compare the Fortunes "handler" implementation in vertx-postgres [1] to the Phoenix one [2]. Raw SQL vs generated query, almost bare DB connection pool vs Repository abstraction, and it's just the "handler". But you wouldn't want to maintain a lot of low level code.
- Typical functional language overhead. Copying (copying the conn when modifying it, Enum.sort on a list etc) is more expensive than modifying in-place. Again, functional code will be easier to maintain.
But overall I think in the particular benchmark you linked, Phoenix isn't that bad. Latency is pretty low and consistent (0.5 ms min and 9.9 ms max (!)) thanks to the BEAM pre-emptive scheduler. And BEAM will scale well vertically.
1. https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast... 2. https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...
At the same time, Elixir (and Erlang) are not meant for raw speed. They are best used for real-time communication, lots of users, and handling errors. At least that is what I have read.
Yup, many things.
It's worth reading this very long thread about exactly this, from 2016. Look for Sasa Juric and Chris McCord's comments in particular. [1]
tl;dr benchmarks are hard, not all benchmarks are implemented well, Elixir folks haven't heard back from Tech Empower re: details of errors rates etc. It's an unfair analysis, and not just for Elixir.
I think performance is better compared to Ruby and Python, but then again my experience with web applications is that the domains are best modeled using classes.
For writing networking code and protocols, the binary pattern matching is amazing, though. The Plug libraries are a pleasure to use also.
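To show what that binary pattern matching looks like, here's a sketch parsing a made-up wire format (one version byte, a 16-bit length, then the payload). `Packet` is a hypothetical module, not any real protocol:

```elixir
defmodule Packet do
  # Wire format (invented for illustration):
  #   <<version::8, payload_length::16, payload::bytes, rest...>>
  def parse(<<version::8, len::16, payload::binary-size(len), rest::binary>>) do
    {:ok, %{version: version, payload: payload}, rest}
  end

  # Anything that doesn't match the full frame is incomplete.
  def parse(_), do: {:error, :incomplete}
end
```

The length field bound earlier in the pattern (`len`) is used to size the payload segment in the same match, which is what makes hand-written protocol parsers so short in Elixir.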
One thing that's treacherous is that Rubyists can bring whatever they believe to be the right way to do things and assume everything should be exactly the same, especially with regards to Ecto vs Active Record. Elixir isn't Ruby. Ecto isn't Rails' Active Record, not anywhere close. It just happens to look like Ruby, and there are some influences in the design, but Ecto tells you not to implement STI the way Rails does, for example, so don't assume you're going to do it like you would in Rails.

I'd argue Ruby is more like Scala than it is like Elixir, as it has multiple paradigms. Elixir is squarely functional, just a very, very simple functional language. The skill ceiling is pretty low, and it should take very little time for someone to get up to speed. That's important, because you won't find a big pool of rockstars using it in the job market, so you'll have to hire good people without experience and hope they will be okay using Elixir and not jump ship to go work with strongly typed languages.
Release less broken code to production? I am not sure about it either.
Elixir/Erlang/OTP:

+ Very mature, very well thought out. While the newer stuff may still feel under construction (string handling, date handling), all the concurrency primitives are rock solid, and by rock I mean diamond.
+ Elixir is simply a great language, and you can get very productive in it quickly once you grasp the actor/process model of BEAM.
+ Has one very big advantage over Akka in that actors can receive messages in a different order than they were sent. That can of course cause some headaches if not handled carefully, but 99% of the time it straight up leads to nicer and simpler programs. Really, a lot of Akka code is written just to deal with the order in which messages may arrive.
+ Truly resilient, with a very good error-recovery design once you know how to work with it. I still don't know a more graceful and productive way of recovering from failures in a running system.

- For doing any expensive computations it's slow, and that's a fact. Not much can be done about it.
- Library coverage is 7/10 or 8/10, but those few missing points can sometimes make a big difference.

Scala/Akka:

+/- I love the Scala language and static typing, but one must be honest that it's much more complex. You can learn Elixir (without macros) in an afternoon. One really needs to take some time and think it through to utilize Scala properly, and true mastery lies even further out. To be fair, proficiency with Elixir macros also requires considerable effort, but one can go very far with Elixir without writing macros, while with Scala the upfront cost is already pretty high.
+ To the best of my knowledge, Akka Streams is a completely unique and completely amazing library that is gaining support throughout the library ecosystem. This is one point where Scala/Akka completely outshines everything else. Streams are such a great abstraction and gave a huge productivity boost to most of the projects I was working on. Compared to that, Elixir's GenStage feels much less robust and polished.
+ The speed of the JVM should be enough for 99% of applications, and in that respect it wins over BEAM.
+ The Java/Scala library ecosystem is very deep and simply much more comprehensive than the Elixir/Erlang one.

- There are places around Akka that still feel a bit immature/ad hoc, but the library is steadily improving. It just is not as mature as BEAM/OTP.
- Over small and mid-size projects I think I was more productive with Elixir. Meaning, given the same amount of time, I could implement more features using Elixir. But it could just be a personal thing.
Overall I think both are fantastic platforms and I'm happy to have both of them to choose for each project. If I were to chose what to select today:
- Choose Elixir/OTP for a system where we need to do a lot of IO but not much computation, and we're sure existing libraries cover our needs. Very big plus if we need it to be resilient.
- Choose Scala/Akka if we need speed or to call any existing JVM libraries. Very big plus if your project could use Akka Streams.
I think we'll see more Akka features built into Elixir through things like GenStage and Flow, but it's hard to argue with the mountain of existing developer man-hours in the JVM.
But from a talent and recruiting perspective, I'm less enthusiastic. Elixir, yes, there's growing talent. But Ember, boy it seems like nobody is doing it, and I've had to convince potential candidates that it'll be worth their time for future employability to learn Ember.
Node.js brought JavaScript to the backend, and the language itself wasn't meant for it. Since then, ES5 and later standards have tried to fix these shortcomings. But you can't expect me to love JavaScript's weak typing versus Elixir's or Python's strong typing (strong, not static, as in it doesn't implicitly convert types the way JavaScript does). It's a nightmare, and the concurrency model in Node.js is, in my opinion, subpar compared to Elixir's.
Only to find out they do not scale very well after your application is mature and used by a growing number of clients.
I guess the better question would be why is there not an easy, standard lib for doing this in any language in 2019?
%{key: "val", key2: [1, 2, 3]}
but the semantics here are actually something like this in JS: {[Symbol("key")]: new Int8Array(/* utf-8 encoded */ "val"), [Symbol("key2")]: new LinkedList([1, 2, 3])}
you can get rid of the `Symbol()` part in the translation, but then the literal becomes: %{"key" => "val", ...}
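A quick illustration of the key-type gap described above, using only the standard library: atom keys and string keys are different keys, and JSON decoders typically hand you the string-keyed form.

```elixir
m = %{key: "val"}           # atom key (the %{key: ...} sugar)
Map.get(m, :key)            # "val"
Map.get(m, "key")           # nil -- the string "key" is a different key

decoded = %{"key" => "val"} # the shape a JSON decoder typically returns
Map.get(decoded, "key")     # "val"
```

This is why round-tripping JSON in Elixir forces an explicit choice between atom-keyed and string-keyed maps, a choice JS object literals never surface.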
so, basically, the gap between JSON and Elixir is wider, both syntactically and semantically, than it is in some other popular languages.

Most of my current work is in Go, which is a fairly strict language, and I value perhaps more than anything the ability to verify my program at compile time -- for one, I can do large-scale refactorings, safe in the knowledge that my program won't run until everything is again sound.
Go still leaves a lot to be desired, so I've been exploring options. I've started picking up Rust. I love the idea of zero-cost abstractions, though at the moment I find the mental overhead of a lot of the constructs (lifetime annotation, implicit operations that happen due to what traits you implement, the many baroque syntax choices, etc.) a little annoying. It brings to mind modern C++, which also has a lot of rules that you have to remember, from copy constructors to what the order of "const" in var/arg decls mean, to the awkward split between functional and imperative styles.
Modern C++ looks interesting, and I've used it for a few projects. What bugs me the most is the warts still not fixed by the "modern" iterations: Include files (leading to long compilation times), lack of modules, unsafe pointers, etc. While I appreciate and understand template mechanics, I'm not overly impressed with some developments -- Rust traits and Haskell typeclasses just seem so much less messy than the current situation with type traits and concepts. There's a tendency in "modern" C++ to offer multiple syntaxes for the same thing, none of which are very intuitive.
I've occasionally written small things in Haskell and OCaml, and I've considered doing a future project in OCaml now that multicore support is getting close. I looked at F# for a bit, too, but it comes across as having too much .NET/Microsoft flavour for me. Same with C#. I've looked at Nim, but it's too niche -- for the projects I'm going to work on, I'd have to write libraries for functionality that just isn't there yet (e.g. gRPC).
Back to Elixir, though; the problem is of course that none of these other languages offer anything like OTP. The closest may be Haskell, with its Distributed Haskell project. But I'm not sure it's anywhere close to being as mature. Maybe Pony is comparable, but it also seems quite niche at this point.
Mind elaborating? F# seems like a decent fit from what you've said. I'm hoping the language will grow less stagnant as .NET Core matures.
However, it comes with the baggage of .NET Core, which is a rather big thing. And it's growing, as Microsoft is apparently porting over everything from the older, non-cross-platform .NET stuff. For one, .NET Core includes the CLR/CIL, i.e. the JIT VM and cross-language integration, which I'm not interested in at all; I just want an AOT compiler. The AOT support seems like a fairly recent addition, and it's unclear to me how optimized it is or how well-supported it is compared to the older CLR-based toolchain. As a standard library, CoreFX seems rather large, and contains things like GUI and SQL Server support, for some reason.
In short, .NET Core seems like something you'd love only if you were already heavily invested in Microsoft's tech stack. I'm not interested in it myself.
Why would a SW company invest in niche languages where the resources (software developers) are really expensive and really hard to get? Technologically it's all great but economically that's a nightmare.
We have found a few people who knew it already and were looking for a job, but that is fairly rare. Instead, we know we can bring people up to speed on it quickly and also it signals to people that we're willing to give them some language options (Ruby or Elixir) within some boundaries. Having these options is good for ownership of an area.
There are also companies that were really successful in hiring by reaching out to functional communities in general. And of course, there are also companies struggling to hire Elixir developers compared to other techs. YMMV.
If your status quo is working programmers as long as you can without a raise until they switch jobs, a niche language is a threat. If you re-evaluate your staff based on their experience gained, you can head off the churn.
As a counter-argument: I've experienced several times that "niche tech" companies offered non-competitive salary packages and perks, because they offered the cool tech instead; "sure, we can't match that other offer, but we built our stack on that language/tech that is so hot right now".
Not arguing about the quality of Elixir, just about the gatekeeping that happens in this thread.
The difference is, in Ruby, after 2-3 months, you'll be rewriting large portions of what you did to get there. And in another year or two, you'll find yourself being blocked by earlier designs not for a day or two, but by a month of work or better.
In Elixir, you simply will not experience that. Even if written poorly, it's easy to rework/change, and you probably won't need to anyway because it's just easier to get things done without painting yourself into a corner.
See http://www.paulgraham.com/avg.html for one answer.
I've dabbled with both - but really fell in love with both of them! I come from a C# background, so the static typing of F# is a big pull. OTOH, the simplicity of Elixir was an absolute delight - after just an afternoon, I felt like I had a decent grasp of it.
I'm conflicted, and would value some other opinions?
- multimedia streaming
- multiplayer game servers
Generally soft real-time systems
Mostly good old web stuff.
Now for someone starting new, maybe the Erlang eco-system might be a good bet, and Elixir an entry point.
Still, not everyone has Ericsson-scale problems to solve.
Anyway, what I wanted to say is that Erlang is first and foremost a fault-tolerant language and system, of which both distribution and concurrency are by-products. As an example of a "fault" that the creators of Erlang had in mind, Joe Armstrong often cites "being hit by lightning": the only way to ensure the system will still function after that is to have its copy running somewhere else, hence distribution. Another type of fault I think explicitly mentioned in "Programming Erlang" is dealing with hardware failures, sensors and outputs getting disconnected and reconnected, etc. - hence concurrency and per-process error isolation. Finally, "programmer errors" are also a kind of a fault (as impossible to completely avoid as lightning or flood), hence immutability, versioned rolling upgrades and rollbacks and live introspection into any node from anywhere in the system (among other things).
That is not to say that the by-products aren't important or nice to have, just that many of the design decisions in Erlang start making a bit more sense if you look at them from this angle.

It also helps you decide whether Erlang is the right tool for you. It's going to save you many, many years of effort if you need a nine-nines guarantee for a system you'd otherwise have to write a few million lines of C for. It can still give you a bit of an edge if you are able to make use of its unique features, like a built-in distributed data store, or if the Actor model with preemptive scheduling fits your app very well.

Outside of these pretty specific use-cases (although, to be fair, I'm just giving examples -- Erlang/OTP is a large system in terms of built-in functionality, and Elixir adds even more stuff, so there are many more good use-cases for it) you may struggle to realize any positive outcome with Erlang: unfamiliar everything, no libraries, a runtime system always ready for connecting to remote nodes even if you're writing a command-line script, immutability that has a performance cost, overall unimpressive performance, and so on. Each of these things could potentially bring down your project if not carefully considered.
Elixir has totally spoiled me. The meta-programming ALONE is something I miss constantly when I have to use other languages.
Then one day, you'll need to store and mutate data in process. And then you'll learn about GenServers and Supervisors.
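For illustration, the simplest version of "store and mutate data in a process" doesn't even need a hand-rolled GenServer; Agent (a thin GenServer wrapper in the standard library) already does it:

```elixir
# State lives inside a separate process; updates are serialized
# through its mailbox, so there are no data races to reason about.
{:ok, agent} = Agent.start_link(fn -> %{} end)

Agent.update(agent, &Map.put(&1, :hits, 1))
Agent.get(agent, & &1[:hits])
```

Once you need custom messages, timeouts, or supervision-aware lifecycle hooks, you graduate to writing the GenServer yourself.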
Then one day, you'll want to have some base functionality but for whatever reason, composition isn't a good fit, so you'll start to dig into macros.
Fundamentally, Go is much more Ruby-like than Elixir (Ruby and Go have shared heap, global GC, array-based data structures, same evaluation strategy, mutability, ...). Elixir is very different. But it's discoverable.
Rails has revolutionized web application development on Ruby, with Sinatra as the minimalist version and a lot of "me too" frameworks have been developed, and somehow I like them all.
* On Python, Django and Flask
* On Elixir, Phoenix
* On Crystal, Amber
* On JavaScript, Express.js for Sinatra. But on JS we didn't get a successful Rails clone; instead, a storm of front-end frameworks, finally Vue/React and endless others.
I wouldn't pick Elixir because the world is elsewhere: my choice is JavaScript ES6, Vue, and a simple Express.js, Sinatra, or Flask for most projects.
The world is everywhere. Other people have pointed out Elixir is very good at taking advantage of multiple cores and writing distributed applications which are easier to reason about, less error prone and very efficient. I wouldn't say the same things about javascript.
People said the same thing about PHP when Rails was first on the scene and there are still way more PHP web apps out there. You could say the world still runs on PHP but that's not a good enough justification to choose it.
I never had any debugging issues in particular, but the dependency hell drives me nuts, too.
- Pipelines are hard to debug. You can’t just throw a debugger just before the line with the issue.
- Phoenix is very bad at serving static files. It was a nightmare to import a new CSS template requiring to convert everything to work with bower first, or dump the files in the /priv directory to make it work.
You absolutely can. Just change,
```elixir
users
|> send_email_with_money()
|> do_complex_thing_that_crashes()
```
into
```elixir
users = users
|> send_email_with_money()

require IEx; IEx.pry()
```
I turned the `require IEx; IEx.pry()` into a snippet just to make life as easy as it is in Ruby land.
> - Phoenix is very bad at serving static files. It was a nightmare to import a new CSS template requiring to convert everything to work with bower first, or dump the files in the /priv directory to make it work.
Well, two sides to this. For one, Phoenix uses Webpack now (since the war to see which app bundler would win is finally over).
But even when you did use Bower, you should've been able to just delete `phoenix.css`, copy your template in, and in `app.css` put `@import "template_name_here";`.
> Well, two sides to this. For one, Phoenix uses Webpack now (since the war to see which app bundler would win is finally over).
We shouldn't force devs onto Bower or Webpack. If I want to try out a new theme I just bought on ThemeForest, it shouldn't take hours to make it compatible, or force the /priv/ hack, which seems inelegant.
Then debugging a pipeline is as simple as dropping an IO.inspect between stages, since it prints the value and also returns it unchanged.
```elixir
thing
|> stage1
|> IO.inspect
|> stage2
```

Not that difficult!

It takes forever if your assets are large. Just serving random static files shouldn't take long.
> IO.inspect
It's nothing like a real debugger.
You do this for legacy systems. New systems should not use Bootstrap, there is much better out there.
But every other word being emphasised in this article was tiring to read.
This is a design flaw on the part of the team who is using Node.js incorrectly and not a flaw of Node.js itself. There are many ways to implement error handling properly in Node.js so that a user cannot crash a whole server/process and there are a lot of frameworks which implement this by default.
Elixir is over-marketed and over-hyped. It's obvious that there is a big money machine behind it. The entire community is obsessed with evangelizing; they're not getting organic growth; they have very aggressive marketing but it's mostly founded on exaggerations and flat out lies.
In addition to what I've pointed out above, to say that someone can learn Elixir in just 1 week is another example of a lie. It takes years to fully understand the nuances of a language to the point that you can be good at it; there are always a lot of patterns to learn; especially for functional programming languages.
The Elixir ecosystem will never be as significant as that of Node.js because Elixir's ecosystem is founded on hype. Part of the greatness of Node.js is that reality tends to exceed expectations; so-called 'thought leaders' and 'bloggers' have been working very hard to discredit Node.js from the beginning but they failed (see https://news.ycombinator.com/item?id=3062271).
I'm not going to consider using Elixir while it's so clearly over-marketed and over-hyped.
In their Slack channel they orchestrate organized upvotes of posts like this one; they collectively downvote people like the parent and post fanboyism through several accounts.
Elixir is a solution without a problem.
Web developers seem to follow trends: Perl -> Django/RoR -> Node.js -> Scala -> Go -> Elixir -> something. Or something like that. To me, it's like buying a $500 pencil and expecting to write a better book.
If you get in bed with that crowd, don't expect that your program and 3rd party dependencies are going to be stable in 2 years.
Pretty unfounded comments regarding the long-term stability of packages. As with any community that makes it easy to publish packages, there will certainly be package churn over time. However, core libraries show zero signs of this, and Phoenix in particular has taken a very mature stance on new features.
I don't think anything negative of the language or core libraries.