I've used OCaml a bit and found various issues with it:
* Terrible Windows support. With OCaml 5 it's upgraded to "pretty bad".
* The syntax is hard to parse for humans. Often it turns into a word soup, without any helpful punctuation to tell you what things are. It's like reading a book with no paragraphs, capitalisation or punctuation.
* The syntax isn't recoverable. Sometimes you can add a single character and the error message is essentially "syntax error in these 1000 lines".
* ocamlformat is pretty bad. It thinks it is writing prose. It will even put complex `match`es on one line if they fit. Really hurts readability.
* The documentation is super terse. Very few examples.
* OPAM. In theory... I feel like it should be great. But in practice I find it to be incomprehensible, full of surprising behaviours, and also surprisingly buggy. I still can't believe the bug where it can't find `curl` if you're in more than 32 Unix groups.
* Optional type annotation for function signatures throws away a significant benefit of static typing - documentation/understanding and nice error messages.
* Tiny ecosystem. Rust gets flak for its small standard library, but OCaml doesn't even have a built in function to copy files.
* Like all FP languages it has a weird obsession with singly linked lists, which are actually a pretty awful data structure.
It's not all bad though, and I'd definitely take it over C and Python. Definitely wouldn't pick it over Rust though, unless I was really worried about compile times.
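On the file-copy point above: the gap is real, but the workaround is short. A minimal sketch, assuming OCaml >= 4.14 for the `In_channel`/`Out_channel` API; `copy_file` is a name invented here, not a standard function:

```ocaml
(* There's no Stdlib file-copy, but one is short to write with
   In_channel/Out_channel (OCaml >= 4.14). [copy_file] is a name
   invented for this sketch. *)
let copy_file src dst =
  In_channel.with_open_bin src (fun ic ->
    Out_channel.with_open_bin dst (fun oc ->
      let buf = Bytes.create 65536 in
      let rec loop () =
        (* [input] returns how many bytes were actually read; 0 means EOF. *)
        let n = In_channel.input ic buf 0 (Bytes.length buf) in
        if n > 0 then begin
          Out_channel.output oc buf 0 n;
          loop ()
        end
      in
      loop ()))
```

Of course, having to write (or depend on a package for) even this much is exactly the friction being complained about.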
I couldn't agree more with the parent commenter about OCaml documentation. Functional programmers appear to love terseness to an almost extreme degree. Things like `first` are abbreviated to `fst`, which is just odd. Especially now that good IntelliSense means there is no real functional (heh) difference between typing `.fi` and pressing Tab, and typing `.fs` and pressing Tab.
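For the unfamiliar, the terseness goes right down to the standard pair accessors, which really are spelled `fst` and `snd`:

```ocaml
(* Stdlib ships [fst] and [snd] rather than [first]/[second]. *)
let pair = (1, "one")
let first_elt = fst pair    (* = 1 *)
let second_elt = snd pair   (* = "one" *)
```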
The F# documentation is comparatively very spiffy and detailed, with plenty of examples[1][2][3].
[1]: https://learn.microsoft.com/en-gb/dotnet/fsharp/language-ref...
[2]: https://fsharp.github.io/fsharp-core-docs/
[3]: https://fsprojects.github.io/fsharp-cheatsheet/fsharp-cheats...
Language   Score   Time
C++        100     19.57s
Rust        96     20.40s
F#         95     20.52s
Nim        75     26.04s
Julia      64     30.40s
OCaml      48     41.07s
Haskell    41     47.64s
Chez       39     49.53s
Swift      33     58.46s
Lean        7    278.88s

Tarjan, n = 10
Nyx - Apple M4 Max - 12 performance and 4 efficiency cores
n! * 2^n = 3,715,891,200 signed permutations
score = gops normalized so best language averages 100
time = running time in seconds
This had me briefly smitten with F#, till I realized the extent to which rusty .NET bed springs were poking through. Same as the JVM and Clojure, or Erlang and Elixir. The F# JIT compiler is nevertheless pretty amazing. I nearly settled on OCaml. After AI warned me that proper work-stealing parallelism is a massive, sophisticated project to code properly, the 40 lines of OCaml code I wrote that beat the available libraries is my favorite code file in years.
Nevertheless, once one understands lazy evaluation in Haskell, it's hard to use any other language. The log slowdown for conventional use of a functional data structure becomes a linear speedup once one exploits persistence.
I couldn’t believe this was an actual bug in opam, and I found it: https://github.com/ocaml/opam/issues/5373
I don’t think that’s an opam bug, it’s an issue with musl, and they just happened to build their binaries with it.
> * ocamlformat is pretty bad. It thinks it is writing prose. It will even put complex `match`es on one line if they fit. Really hurts readability.
I suggest configuring ocamlformat to use the janestreet profile for better defaults.
> * Optional type annotation for function signatures throws away a significant benefit of static typing - documentation/understanding and nice error messages.
People should be providing .mli files, but don't. That said, an IDE with type hints helps this enormously. The VS Code plugin for OCaml is the best experience for noobs, hands down.
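To illustrate why annotations matter even when inference succeeds (function and parameter names here are made up for the example):

```ocaml
(* Inference alone: the float-ness is only visible from the [*.]
   operator buried in the body. *)
let area w h = w *. h

(* Fully annotated: the signature doubles as documentation, and a
   caller passing an int gets a clear error at the call site. *)
let area' (width : float) (height : float) : float = width *. height
```

An `.mli` file gives you the same benefit at module granularity, which is why the advice above is to write them.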
> OPAM
yes
This made me chuckle. I've had that thought before: shouldn't the default be a vector on modern devices? Of course, other collection types are available.
But you can have a data structure that is more like vector under the hood while still supporting efficient copy-with-modifications. Clojure vectors, for example.
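For contrast, the naive way to get "copy-with-modification" from a plain contiguous array is a full copy per update. A minimal sketch (`set_copy` is a name invented here); tree-based persistent vectors like Clojure's bring that O(n) copy down to O(log n) while keeping reads cheap:

```ocaml
(* A naive persistent "update" on a contiguous array: copy, then
   modify the copy. O(n) per update, but reads stay O(1) and
   cache-friendly, and the original is untouched. *)
let set_copy a i v =
  let a' = Array.copy a in
  a'.(i) <- v;
  a'
```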
Just to give an idea how bad it was: until recently, you could not just go to ocaml.org and download OCaml for Windows; you had to either download a mingw build or use WSL.
So for many it was just not installable, i.e. for many of us there effectively was no OCaml for Windows until very, very recently.
On the other hand, you could get ocaml for Windows from Microsoft ever since 2005.
One of the things people often neglect to mention in their love letters to the language (except for Anil Madhavapeddy) is that it actually feels UNIXy. It feels like home.
* Compile time is only ok. On par with C++.
* Async has a surprising number of footguns and ergonomic issues.
* There's no good solution to self-borrowing or partial borrows.
* While using macros is fine, writing them is pretty awful. Fortunately you rarely need to do that. Relatedly it is missing introspection support.
* Sometimes the types and lifetimes get very complex.
But overall I still much prefer it to OCaml. The syntax is much nicer, it's easier to read, the ecosystem and tooling are much better, the documentation is much better, and it actively hates linked lists!
Just taking the first example I can find of some auto-formatted OCaml code
https://github.com/janestreet/core/blob/master/command/src/c...
It doesn't look any more like a soup of words than any other language. Not sure what's hard to parse for humans.
This was my problem as well; the object-oriented syntax is just too much. ML, of which Caml is a dialect, has really tight syntax. The “o” in OCaml ruins it imo.
Considering it only impacts a fairly small subset of the language, could you explain how it supposedly ruins everything?
I agree. OCaml is a complex language with very beginner-unfriendly documentation. In fact, I would even say it's unfriendly to engineers (as developers). The OCaml community prefers to see this language as an academic affair and doesn't see the need to attract the masses. E.g. Rust is an example of the opposite. It's a complex language, but it's pushing hard to become mainstream.
Funny how tastes differ. I'm glad it has a syntax that eschews all the noise that the blub languages add.
* Windows support has improved to the point where you can just download opam, and it will configure and set up a working compiler and language tools for you[^1]. The compiler team treat Windows as a first-tier target. opam repository maintainers ensure new libraries and library versions added to the opam repository are compiled and tested for Windows compatibility, and authors are encouraged to fix any breakage before making a release if it's reasonably straightforward
* debugger support with gdb (and lldb) is slowly being improved thanks to efforts at Tarides
* opam is relatively stable (I've never found it "buggy and surprising"), but there are aspects (like switches that behave more like python venvs) which don't provide the most modern behaviour. dune package management (which is still in the works) will simplify this considerably, but opam continues to see active development and improvement from release to release.
* the platform team (again) are working on improving documentation with worked recipes and examples for popular use cases (outside of the usual compiler and code generation cases) with the OCaml Cookbook: https://ocaml.org/cookbook
There are other things I find frustrating or that I work around, or are more misperceptions:
* there isn't a builtin way to copy files because the standard library is deliberately very small (like Rust), but there is a significant ecosystem of packages (this is different to other languages which cram a lot into their standard library). The result is a lot of friction for newcomers who have to install something to get what they need done, but that's valued by more experienced developers who don't want the whole kitchen sink in their binary and all its supply chain issues.[^2]
* the type inference can be a bit of a love/hate thing. Many people find it frustrating because of the way it works, and start annotating everything to short-circuit it. I've personally found it requires a bit of work to understand what it is doing, and when to rely on it, and when not to (essentially not trying to make it do things it simply will never be able to do).[^3]
* most people use singly-linked lists because they work reasonably well for their use cases and don't get in their way. There are other data structures, they work well and have better performance (for where it is needed). The language is pragmatic enough to offer mutable and immutable versions.
* ocamlformat is designed to work out of the box with default settings (but some of the defaults I find annoying and reconfigure)
Please don't take this as an apology for its shortcomings - any language used in the wild has its frustrations, and more "niche" languages like OCaml have more than a few. But for me it's amazing how much the language has been modernised (effects-based runtime, multicore, etc) without breaking compatibility or adding reams of complexity to the language. Many of these things have taken a long time, but the result is usually much cleaner and better thought out than if they were rushed.
[^1] This in itself is not enough, and still "too slow". It will improve with efforts like relocatable OCaml (enabling binary distribution instead of compiling from source everywhere) and disentangling the build system from Unixisms that require Cygwin.
[^2] I particularly appreciate that the opam repository is actively tested (all new package releases are tested in a CI for dependency compatibility and working tests), curated (if it's too small to be a library, it will probably be rejected) and pruned (unmaintained packages are now being archived)
[^3] OCaml sets expectations around its type inference ("no annotations!") very high, but the reality is that it relies on a very tightly designed and internally coherent set of language constructs in order to achieve a high level of type inference / low level of annotation, but these are very different to how type inference works in other languages. For example, I try and avoid using the same field name in a module because of the "flat namespace" of field names used to infer record types, but this isn't always possible (e.g. generated code), so I find myself compensating by moving things into separate modules (which are relatively cheap and don't pollute the scope as much).
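A minimal illustration of the field-name issue described in [^3] (module, type, and function names invented for the example):

```ocaml
module M = struct
  type a = { name : string }
  type b = { name : string; id : int }

  (* An unannotated [fun r -> r.name] now infers [b -> string],
     because [b]'s [name] field shadows [a]'s in the flat namespace.
     Getting [a]'s field back needs an annotation, or moving [a]
     into its own module. *)
  let make_a name : a = { name }
  let a_name (r : a) = r.name
end
```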
You'd be right.
"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. – Rob Pike"
"It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical. – Rob Pike"
Talking as someone who wrote OCaml at work for a while, the benefits of functional programming and the type guarantees that its ilk provide cannot be overstated; you only start to reap them, however, once most developers have shifted their way of thinking rather extremely, which is a time cost that the designers of Go did not want new Googlers to pay.
>have shifted their way of thinking rather extremely
What could I read to shift my way of thinking?
The Signals & Threads episode about OCaml strongly piqued my interest, and not because I have any JS delusions (they would never, lol).
This, at least for me, brings the act of writing a specific piece of code more inline with how I think about the system as a whole. I spend less energy worrying about the current state of the world and more about composing small, predictable operations on relationships.
As for what you can read, I find it's just best to get going with something like OCaml or F# and write something that can take advantage of that paradigm in a relatively straightforward way, like a compiler or something else with a lot of graph operations. You'll learn pretty quickly what the language wants you to do.
I know there are a great many Polish people in the world, but why does it matter so much in this case? They could have been any nationality, even French!
If people start using the term AI, we better be living in I, Robot. Not whatever the hell this is.
Tangential rant. Sorry.
I think Richard Feldman [0] proposed some of the most reasonable theories as to why functional programming isn't the norm. Your language needs to be the platform-exclusive language for a widely used platform, have a killer application for a highly desired application domain, or be backed by a monster war-chest of marketing money to sway opinions.
Since Feldman's talk, Python has grown much faster in popularity in the sense of wide use in the market place... but mostly because it's the scripting language of choice for PyTorch and AI-adjacent libraries/tooling/frameworks which is... a killer application.
I like OCaml. I started evaluating functional programming in it by taking the INRIA course online. I spent the last four and half years working in Haskell. I've built some side projects in Zig. I was a CL stan for many years. I asked this question a lot. We often say, "use the best tool for the job." Often that means, "use the tool that's available."
I think languages like OCaml, Rust, Haskell etc can be "popular," but in the sense that people like to talk about them and want to learn them and be able to use them (at least, as trends come and go). It's different from "popular" as in, "widely adopted."
I would politely disagree. Torch started in Lua, and switched to Python because of its already soaring popularity. Whatever drove Python's growth predates modern AI frameworks
As far as I could tell, it had to do with two things. First, Python is notoriously dynamic and extensible, making it possible to implement "sloppy" syntax like advanced slicing or dataframes. But also, those guys had lots of pre-existing C and Fortran code, and Python had one of the easiest extensibility APIs to wrap it as high-level packages. And with IPython, you had a nice REPL with graphing to use all that from, and then of course notebooks happened.
It had boosts from Django... but it never had Rails' level of popularity. You kinda have to be first-to-market and that good to get the "killer app" effect.
It's also been integrated as the scripting language for several popular software packages (Blender comes to mind).
Machine learning and now... "AI"; seems to be a market cornered by Python quite a bit.
It hit the front page of Slashdot, Digg, Reddit, made the rounds on Hacker news, etc... (https://news.ycombinator.com/item?id=86246)
Django was also very popular at the time.
I had already learned Basic, C++, Java, and C#. I wanted to add a dynamic scripting language that was cross-platform under my belt.
A lot of my peers were in the same boat.
Python seemed at the time, to be the only general purpose scripting language that was easy to use on multiple platforms.
I had heard bad things about Perl being write-only, and Ruby being tough to deploy; I also found Ruby hard to read. (Which is a shame, as they are wonderful languages, though Ruby is dog slow. Python is slow too, but Ruby is somehow worse.)
IIRC Google and some other large companies were pushing it as one of their official languages.
Right as Python was rocketing in popularity, Go came out, and I also heard a lot of good things about Clojure (they seemed neck and neck in popularity from my incorrect perspective at the time, lol).
Let's face it, syntax matters. We saw that with Elixir becoming much more popular than Erlang ever did. We saw it with TypeScript being able to introduce a fairly complex type system into JavaScript, and becoming successful among web devs by adapting to established ecosystem and offering a gradual approach, rather than forcing an entirely new, incompatible paradigm on it. The TypeScript story seems a little improbable in hindsight, but it was able to get there by offering an incremental path and making a lot of compromises (such as config options to allow less strict enforcement) along the way.
Personally, I think a new syntax for OCaml might actually be successful if done right. Sure, there have been multiple attempts (the "revised" syntax, Reason, etc.), but none of them are really willing to modernize in ways that would attract your average programmer. The toolchain also needs work to appeal to non-OCaml programmers.
If one can stand a language that is just a little bit older, there is always Standard ML. It is like OCaml, but perfect!
While it's not yet standard, nearly all Standard ML implementations support what has become known as "Successor ML" [0]. A large subset of Successor ML is common to SML/NJ, MLton, Poly/ML, MLKit and other implementations.
That includes record update syntax, binary literals, and more expressive patterns, among other fixes for deficiencies in Standard ML.
For me the two big remaining issues are:
1) There's only limited Unicode support in both the core language and the standard library. This is a big issue for many real-world programs, including, these days, compilers, for which SML is otherwise a wonderful language.
2) The module system is a "shadow language" [1] which mirrors parts of SML but has less expressiveness, since modules cannot be treated as first-class values in the program. Also, if you define infix operators in a module, their fixity isn't exported along with the function type. (A little annoyance that gets me every time I am inclined to write Haskell-style code with lots of operators. Though maybe that's just another hint from the universe that I shouldn't write code like that.) Of course, the fix for that would be a fundamentally different language, not a revised SML.
[0] http://mlton.org/SuccessorML
[1] https://gbracha.blogspot.com/2014/09/a-domain-of-shadows.htm...
I certainly agree that SML isn't really a production language, though.
Ironically probably because it had the "O"bjects in it, "which was the style of the time"... something that has since dropped off the trendiness charts.
There might be some power in attracting all the people who happen to love OCaml, if there are enough competent people to staff your company, but that's more a case of cornering a small niche than picking on technical merits.
Because plenty of people have been shipping great projects in Ocaml since it was released so it doesn’t seem to be much of an issue to many.
I doubt OCaml will be surpassed soon. They just added an effect system to the multicore rewrite, so all things considered, they seem to be pulling even further ahead.
Beginners face the following problems: there's multiple standard libraries, many documents are barely more than type signatures, and data structures aren't printable by default. Experts also face the problem of a very tiny library ecosystem, and tooling that's often a decade behind more mainstream languages (proper gdb support when?). OCaml added multicore support recently, but now there is the whole Eio/Lwt/Async thing.
I used to be a language nerd long ago. Many a fine hour spent on LtU. But ultimately, the ecosystem's size dwarfs the importance of the language itself. I'm sympathetic, since I'm a Common Lisp man, but I don't kid myself either: Common Lisp isn't (e.g.) Rust. I like hacking with a relic of the past, and that's okay too.
What applications are written in OCaml? All I can think of (which says more about me than it does about OCaml) is the original Rust compiler.
Even Haskell has Pandoc and Xmonad.
For example, there is no OAuth2 client library for OCaml [1]
I'd like to hear some practical reasons for preferring OCaml over F#. [Hoping I don't get a lot about MS & .NET which are valid concerns but not what I'm curious about.] I want to know more about day to day usage pros/cons.
Meanwhile, OCaml got rid of its global lock, got a really fast-compiling native toolchain with stable and improving editor tooling, and has a cleaner language design with some really powerful features unavailable to F#, like modules/functors, GADTs, effects or preprocessors. It somehow got immutable arrays before F#!
F# still has an edge on some domains due to having unboxed types, SIMD, better Windows support and the CLR's overall performance. But the first two are already in the OxCaml fork and will hopefully get upstreamed in the following years, and the third is improving already, now that the opam package manager supports Windows.
If a language makes "unboxed types" a feature, a specific distinction, and has to sell "removing the global lock" as a massive breakthrough rather than table stakes from 1.0, it can't possibly be compared to F# in a favourable light.
1. Interop with C# is great, but interop for C# clients using an F# library is terrible. C# wants more explicit types, which can be quite hard for the F# authors to write, and downright impossible for C# programmers to figure out. You end up maintaining a C#-shell for your F# program, and sooner or later you find yourself doing “just a tiny feature” in the C# shell to avoid the hassle. Now you have a weird hybrid code base.
2. Dotnet ecosystem is comprehensive; you’ve got state-of-the-art web app frameworks, ORMs, what have you. But it is all OOP, state abounds, referential equality is the norm. If you want to write OCaml/F#, you don’t want to think like that. (And once you’ve used discriminated unions, C# error handling seems like it belongs in the 1980s.)
3. The Microsoft toolchain is cool and smooth when it works, very hard to wrangle when it doesn’t. Seemingly simple things, like copying static files to output folders, require semi-archaic invocations in an XML file. It’s about mindset: if development is clicking things in a GUI for you, Visual Studio is great (until it stubbornly refuses to do something); if you want a more Unix/CLI approach, it can be done, and VS Code will sort of help you, but it’s awkward.
4. Compile-times used to be great, but are deteriorating for us. (This is both F# and C#.)
5. Perf was never a problem.
6. Light syntax (indentation defines block structure) is very nice until it isn’t; then you spend 45 minutes figuring out how to indent record updates. (Incidentally, “nice-until-it-isn’t” is a good headline for the whole dotnet ecosystem.)
7. Testing is quite doable with dotnet frameworks, but awkward. Moreover, you’ll want something like quickcheck and maybe fuzzing; they exist, but again, awkward.
We’ve been looking at ocaml recently, and I don’t buy the framework/ecosystem argument. On the contrary, all the important stuff is there, and seems sometimes easier to use. Having written some experimental code in Ocaml, I think language ergonomics are better. It sort of makes sense: the Ocaml guys have had 35 years or so to make the language /nice/. I think they succeeded, at least writing feels, somehow, much more natural and much less inhibited than writing F#.
Bigger native ecosystem. C#/.net integration is a double edged sword: a lot of libraries, but the libraries are not written in canonical F#.
A lot of language features F# misses, like effect handlers, modules, GADTs etc.
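As a small taste of the module system mentioned above (all names here invented for the example), a functor lets you parameterise one module by another, something F# modules can't express directly:

```ocaml
(* A tiny functor: a sorted-list "set" parameterised over its
   element ordering. *)
module type ORDERED = sig
  type t
  val compare : t -> t -> int
end

module MakeSet (O : ORDERED) = struct
  type t = O.t list

  let empty : t = []

  (* Insert in sorted order, skipping duplicates. *)
  let rec add x = function
    | [] -> [ x ]
    | (y :: ys) as l ->
      (match O.compare x y with
       | 0 -> l
       | n when n < 0 -> x :: l
       | _ -> y :: add x ys)

  let mem x l = List.exists (fun y -> O.compare x y = 0) l
end

(* Instantiate it for ints. *)
module IntSet = MakeSet (struct
  type t = int
  let compare = compare
end)
```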
As for missing language features, they can also be a double-edged sword. I slid down that slippery slope in an earlier venture with Scala. (IIRC mostly implicits and compile times).
# let foo x = x#frob;;
val foo : < frob : 'a; .. > -> 'a = <fun>
F# is often called "OCaml for .NET", but it is a misrepresentation. It is an ML-family language for .NET, but aside from that ML core they don't have much in common. Whether those features are more valuable to you than the ability to tap into .NET libraries depends largely on what you're doing.
Check out the most popular music today. Like the top ten songs currently. Do you think those are really the best songs out there?
Popularity is mostly driven by either trends or momentum.
Yes! To add to that, the question itself is wrong. We should be asking, how is OCaml able to be so good without being popular? People get the whole thing backward.
The popular languages are typically popular first, then get good later as a result of that popularity. They have to work, they have to be good, they're too big to fail.
This is what happened with Java. It was marketed like crazy at first and only later got refined in terms of tooling and the JVM itself. The R programming language was a mess for data wrangling, but once it was popular, people built things like the tidyverse or the data.table library. The Python ecosystem was a disaster, with all the different testing packages, build tools, and ways to create and manage virtual environments, until the relatively recent arrival of uv, more than three decades after the creation of Python itself. And then there's JavaScript, which has had more money, blood, sweat, and tears poured into it to improve it in one way or another, because that's what practically anything running in a browser is using.
Many of the most popular languages are also the most hated. Many of the more niche languages are viewed the most favorably.
It is easy to dislike something you are familiar with, and easy to be overoptimistic about something you don't know as well.
"the grass is always greener ... "
The reason why OCaml is not more popular, then, is that this subset is small. The reason for this may be either (a) habit or (b) it's not that much better than other languages. I'm gravitating to (b). The OCaml guys seem to be quite dogmatic for the wrong reasons.
Also, it didn’t employ a marketing team to work on outreach and write fancy comments here, and some people who have used it for 10 minutes are apparently offended by the Pascal-like syntax and can’t stop bringing it up in every OCaml discussion, making actual users tired.
It has basically all of the stuff about functional programming that makes it easier to reason about your code & get work done - immutability, pattern matching, actors, etc. But without monads or a complicated type system that would give it a higher barrier to entry. And of course it's built on top of the Erlang BEAM runtime, which has a great track record as a foundation for backend systems. It doesn't have static typing, although the type system is a lot stronger than most other dynamic languages like JS or Python, and the language devs are currently adding gradual type checking into the compiler.
"Type Providers" are an example of such negligence, btw; it's something from the early 2010s that never got popular, even though some of its ideas (typed SQL that can generate compile-time errors) are getting traction now in other ecosystems (like Rust's SQLx).
My team used SQL Providers in an actual production system, combined with Fable (to leverage F# on the front end), and people always commented how our demos had literally 0 bugs; maybe it was too productive for our own good.
I always wanted to learn Elixir but never had a project where it could show its strengths. Good old PHP works perfectly fine.
Also corporations like their devs to be easily replaceable which is easier with more mainstream languages, so it is always hard for "newer" languages to gain traction. That said I am totally rooting for Elixir.
Talk about "immutable by default". Talk about "strong typing". Talk about "encapsulating side effects". Talk about "race free programming".
Those are the things that programmers currently care about. A lot of current Rust programmers are people who came there almost exclusively for "strong typing".
In 2025, Elixir is a beautiful system for a niche that infrastructure has already abstracted away.
Do you mean Kubernetes?
My mental model of Erlang and Elixir is programming languages where the qualities of k8s are pushed into the language itself. On the one hand this restricts you to those two languages (or other ports to BEAM), on the other hand it allows you to get the kinds of fall over, scaling, and robustness of k8s at a much more responsive and granular level.
That's like complaining that unsafe{} breaks Rust's safety guarantees. It's true in some sense, but the breakage is in a smaller and more easily tested place.
The throughput loss stems from a design which requires excessive communication. But such a design will always be slow, no matter your execution model. Modern CPUs simply don't cope well if cores need to send data between them. Neither does a GPU.
Interactive Elixir (1.19.0) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> x = 1
1
iex(2)> x = 2
2
iex(3)>
What's immutable about elixir? It's one of the things which I MISS from Erlang -- immutability.

Data is immutable, and that's much more important than whether local variables can be modified, imo.
{
const int x = 1;
{
const int x = 2;
}
}
which is to say, there are two different `x` in there, both immutable, one shadowing the other. You can observe this if you capture the one that is shadowed in a closure:

iex(1)> x = 1
1
iex(2)> f = fn () -> x end
#Function<43.113135111/0 in :erl_eval.expr/6>
iex(3)> x = 2
2
iex(4)> f.()
1

I don't understand why this isn't more popular. For most areas, I'd gladly take a garbage collector over manual memory management or explicit borrow checking. I think the GC in D was one of its best features, but its downfall nonetheless, as everyone got spooked by those two letters.
Nobody wants to effectively learn a lisp to configure a build system.
I would love to spend more time but even though Microsoft gives it plenty of support (nowhere near as much as C#), the community is just too small (and seems to have gotten smaller).
Looking at https://www.tiobe.com/tiobe-index/ numbers fall off pretty quickly from the top 5-7.
Guessing this is the same for OCaml, even if the language as such is nice.
Community is an interesting thing, and for some people I guess it is important. For me, a language is just a tool; having coded for quite some time and seen communities come and go, I don't care about being known or setting an example per se. If the tool on balance allows me to write faster code with fewer errors, more quickly, and can be given to generic teams (e.g. ex-Python or JS devs) with some in-house training, it's a win. Personally, I just keep building large-scale, interesting systems with F#; it's a tool, and once you get the hang of its quirks (it does have some small ones) quite a good one that hits that sweet spot IMO.
My feeling, however, is that with AI/LLMs, communities and syntax in general are in decline and less important, especially for niche languages. Language matters less than the platform, ecosystem, etc. It's easier than ever to learn a language, and to get help with it. Any zero-cost abstraction can be emulated with more code generation, as much as I would hate reviewing it. What matters more is whether you can read and review the code easily, whether the platform offers the things you need to deliver software to your requirements, and whether people can pick it up.
I don't know if AI can change that but when using python, there is a feeling that there is an awesome quality library for just about anything.
I'm still surprised it can do so many things so well, so fast.
I've never used it so can't speak from any experience, and unfortunately it doesn't seem particularly active (and doesn't mention a current status anywhere), and doesn't have a license, so shrug. When it's been posted here (https://news.ycombinator.com/item?id=40211891), people seemed pretty excited about it.
I feel a new, simple OCaml-like language that just compiled to Go would become really popular, really fast. And it wouldn't even need to build an ecosystem, as Go already has all the things you need.
Something like what Gleam is for Erlang.
Hashtbl.add table key value
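For context on the quoted call: this is OCaml's standard-library mutable hash table, a good example of the language's pragmatic, imperative side. A minimal runnable sketch around it (table name and values are illustrative):

```ocaml
(* OCaml's stdlib Hashtbl is imperative: operations mutate the table in
   place. Note that Hashtbl.add shadows earlier bindings rather than
   replacing them; Hashtbl.replace overwrites. *)
let table : (string, int) Hashtbl.t = Hashtbl.create 16

let () =
  Hashtbl.add table "key" 1;
  Hashtbl.add table "key" 2   (* shadows the first binding *)

let () =
  (* find returns the most recent binding; find_all lists them,
     most recent first. *)
  assert (Hashtbl.find table "key" = 2);
  assert (Hashtbl.find_all table "key" = [2; 1])
```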
Precedence, nominal inheritance, HKTs, and incoherent typeclasses make Scala much less aesthetically pleasant but much more productive.
Hashtbl.add table key value
What's your point with that? There's nothing inherently wrong with using Jane Street's stdlibs if you miss the goodies they provide, but be aware the API suffers breaking changes from time to time and they support fewer targets than regular OCaml. I personally stopped using them, and use a few libraries from dbunzli and c-cube instead to fill the gaps.
15% are people trying to sell their own language of choice, sometimes with the argument that "it's less scary, look".
I would be shocked if even a mere 5% is actual engagement with the topic at hand, sometimes pointing out flaws which are very real.
From there, I gather two things, the main one being: maybe Meta was right after all. People really are that limited, and the syntax should have been changed just to be done with the topic.
{Ecosystem, Functors} - choose 1
F# is not stagnant, thankfully; it gets updates with each new version of dotnet (though I haven't checked what is coming with dotnet 10), but I don't recall anything on the level of the above OCaml changes in years.
> Fewer abstractions, and an easy to understand runtime
> Strong static guarantees
> Functional programming constructs. Especially pattern matching and sum types.
> Good performance
> Good documentation
I feel this is also Elm!
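A tiny OCaml sketch of the pattern matching and sum types the list above mentions (the `shape` type is a made-up example):

```ocaml
(* A sum type (variant): a value is exactly one of these cases.
   Pattern matching destructures it, and the compiler warns if a
   case is missing. *)
type shape =
  | Circle of float          (* radius *)
  | Rect of float * float    (* width, height *)

let area = function
  | Circle r -> Float.pi *. r *. r
  | Rect (w, h) -> w *. h

let () = assert (area (Rect (2., 3.)) = 6.)
```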
Languages have features/constructs. It's better to look at what those are. And far more importantly: how they interact.
Take something like subtyping for instance. What makes this hard to implement is that it interacts with everything else in your language: polymorphism, GADTs, ...
Or take something like garbage collection. Its presence or absence has a large say in everything done in the language. Rust is uniquely not GC'ed, while Go, OCaml and Haskell all are. That by itself creates some interesting behavior. If we hand something to a process and get something back, we don't care whether the thing we handed over got changed if we have a GC. But in Rust, we do: we can avoid allocations and keep references if the process didn't change the thing after all. This permeates the whole language.
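A tiny OCaml illustration of that point: under a GC, two names can freely alias the same value, and mutation through one is visible through the other, with no ownership bookkeeping. Rust's borrow checker would make this aliasing-plus-mutation explicit (and in many cases reject it).

```ocaml
(* Under a GC, binding is just another reference: no copy, no
   ownership transfer. *)
let xs = [|1; 2; 3|]
let ys = xs              (* alias, not a copy *)

let () =
  ys.(0) <- 99;          (* mutate through one name... *)
  assert (xs.(0) = 99)   (* ...and the other sees it *)
```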
I personally love ML languages and would be happy to keep developing in them, but the ecosystem support can be a bit of a hassle if you aren't willing to invest in writing and maintaining libraries yourself.
OCaml has some high profile use at Jane Street which is a major fintech firm. Haskell is more research oriented. Both are cool, but wouldn't be my choice for most uses.
ML is a family of languages: we have Standard ML with different implementations, OCaml with an official path and a JS path, F#, and whatnot.
This is a problem for Lisp too, as there are many Lisps.
OCaml did become popular, but via Rust, which took the best parts of OCaml and gave them a more imperative feel. That's what OCaml was missing!
It has no dogmatic inclination towards functional. It has a very pragmatic approach to mutation.
The similarities are fairly superficial, actually. It's just that Rust is less behind the PL forefront than what people are used to, and has old features which look impressive when you discover them, like variants.
There is little overlap between what you would sanely use Rust for and what you would use Ocaml for. It's just that, weirdly, people use Rust for things it's not really suited for.
I'm not saying that Rust feels like Ocaml, as some are interpreting; I said Rust is more imperative feeling, so they're not the same. The reason Rust has had success bringing these features to the mainstream where Ocaml has not, I believe, is that Rust does not describe itself as a functional language, whereas Ocaml does, right up front. Therefore, despite Rust having a reputation for being difficult, new learners are less intimidated by it than by something calling itself "functional". I see it all the time. By the time they learn Rust, they are ready to take on a language like Ocaml, because they've already learned some of the best parts of that language via Rust.
Note my comment about their similarities is not at the level of borrow checkers and garbage collectors.
sounds superficially similar to Common Lisp
What?