I don't think this is an accurate representation of how the Haskell community itself views the state of Haskell documentation. A large section of the community views the state of documentation as far from best-in-class and strongly advocates for improving it (and views type signatures as a poor replacement for at least examples). See e.g. https://www.reddit.com/r/haskell/comments/2ory86/there_is_a_...
Likewise "Otherwise, toss a coin!" as a response to Cabal or Stack makes me sad. Tooling often depends exclusively on one or the other (see e.g. https://plugins.jetbrains.com/plugin/8258-intellij-haskell), and the fact that, from a capability point of view, they're converging on the same thing makes it an unnecessary point of friction for new users, although I understand the reasons why they both came to be.
EDIT: I should detach myself from the usual curmudgeonly HN comment to say that the organization of the article is very well done. I would like to see how non-Haskellers approach it because I think stuff like "What language idioms are available?" is a good way to approach a new language.
EDIT 2: The original Reddit link was far older than I thought it would be. Here's a more extreme (more than I personally agree with) take on the situation, but is a lot more recent: https://old.reddit.com/r/haskell/comments/or93z3/what_is_you...
Almost every time (though it's quite rare) that I end up on a Haskell documentation page, I facepalm and move on. Yes, there's a dump of impenetrable types. But... how do I actually use those types? :)
1) a simple instance of use of the language feature
2) a few typical instances of use
and
3) did NOT use Towers of Hanoi, prime sieves, mathematical cleverness, pointless tricks unrelated to the core functionality, dubious features under dispute...
The best UNIX man pages for system commands do exactly this. They show all the command-line options in a sort-of BNF, they list the meaning/intent of each option (hopefully alphabetically), and under EXAMPLES they show you the 3 crucial commands you really came looking for, exactly as they worked when the manual page was written.
I know I used man -s 1 and -s 8 pages to exemplify how man -s 2 and man -s 3 pages should be done, but the point is they obey some simple (BNF) norms, some simple (nroff -man) format, and the EXAMPLES exemplify real-world use. Not "I am smart, watch me put a fruitloop in my nose" but real-world, applicable instances.
99 out of 100 times, someone landing on this page will already know what regexes are, so basically 1/3 of the page is useless. After a while you find you need the `match` method. Great! Now I see I get a match object. How do I deal with those? There is no clickable link here. In the example it's an opaque object. So I scroll further, maybe Ctrl+F, to find the match object. There I find what I'm looking for. So this takes me pages of scrolling and then manually searching for the object a method returns. That SUCKS. Every time I use the re library and need to refresh my knowledge, I'm in pain.
Compare this to Haskell: https://hackage.haskell.org/package/regex-compat-0.95.2.1/do...
Great. I instantly see how to construct a regex. The next thing I see is the `matchRegex` function. The bread and butter of using regexes. And what does it return? "Maybe [String]". Literally couldn't be easier than that.
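For comparison, here's a minimal sketch of that API in action (assuming the regex-compat package is available; the pattern and input strings are just illustrative):

```haskell
import Text.Regex (mkRegex, matchRegex)

-- Build a regex once with mkRegex, then match with matchRegex.
-- On success you get Just the captured subexpressions; on no
-- match you get Nothing -- the type tells you the whole story.
main :: IO ()
main = do
  let re = mkRegex "([0-9]+)-([0-9]+)"
  print (matchRegex re "range 10-20")  -- Just ["10","20"]
  print (matchRegex re "no numbers")   -- Nothing
```

No opaque match object to chase across the page: the `Maybe [String]` return type is the documentation.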
Of course this doesn't hold for every Haskell and Python library, but it matches my general experience.
That said, Haskell documentation tends to be really lacking on some of the basics that other ecosystems get right, like where to start, shielding beginners from functions for more niche usecases, examples of how to build or deconstruct values, or end-to-end case studies. Some of the nicest things to _use_ have an API that primarily comes from typeclasses in base, and don't provide any documentation beyond a list of typeclasses. My impression has been that most Haskellers seem to be on the same page that those things need work, though - I'm optimistic about the situation improving, but it's still hard to recommend to anyone expecting the experience of working with popular libraries from more mainstream ecosystems.
[1] https://bartoszmilewski.com/2014/09/22/parametricity-money-f...
0. https://wiki.haskell.org/Introduction#Quicksort_in_Haskell
Several years ago I tried to learn Haskell, and while I got to the point where I could write Haskell code, I found reading it nearly impossible. Even code I'd written a day or two earlier I found extremely difficult to decipher. While I learned some interesting things from the time I spent trying to learn Haskell, I eventually gave up on ever actually using it.
The expressions may not be simple, because what is happening may just not be that simple, but (purely a feeling) compared to how code achieving the same thing would come out in other "mainstream" languages, Haskell still tends to come out on top for me.
Take Clash, which is a (strict, real) Haskell subset replacing VHDL and Verilog[1], and the difference is orders of magnitude. In my hobby use, I now plain refuse to write anything other than simple glue logic in VHDL or Verilog.
[1] Usually compiling to them.
Can I get moving quickly with Haskell or will it take a while to get accustomed to it switching from other languages? My fear is that, like when I started picking up rust, it'll be a bit of a slog.
Can other people understand what is happening? Can people with less expertise understand what is happening? Can you yourself understand what is happening 6 months from now?
I think that's one of the things that tends to take a long time to learn: Haskell imposes a lot of limitations up front on what you can do. But the rules (and in particular the barriers between pure and impure code) aren't quite as rigid as they seem. There are ways to work around them without actually breaking any of the soundness rules.
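One such escape hatch from base is the ST monad: genuinely mutable state inside a function that remains pure from the outside. A minimal sketch (the summing function is just an illustrative example):

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Mutable accumulation, but only inside runST: the type system
-- guarantees the mutation can't leak, so sumST is a pure function
-- and callers can't tell it was implemented imperatively.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = print (sumST [1 .. 100])  -- 5050
```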
(To be fair, there is one well-known foot gun, which is that if you lazily read from a file and close the file, the consumer of the file's contents will often get truncated results. So don't do that. Maybe it's been fixed by now, or that API's been deprecated? I haven't really kept up on current events in the Haskell world.)
I think the API is still there, but lazy IO is so heavily discouraged that no one is using it. It's not relevant anymore and considered a mistake.
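For illustration, a sketch of the footgun and the straightforward fix (forcing the contents before closing; the helper names here are made up for the example):

```haskell
import System.IO

-- The classic lazy-IO hazard: hGetContents returns the file's
-- contents lazily, so closing the handle before forcing them
-- truncates whatever hasn't been read yet.
readLazily :: FilePath -> IO String
readLazily path = do
  h <- openFile path ReadMode
  s <- hGetContents h
  hClose h  -- s not forced yet: the result may come back truncated or empty
  return s

-- The fix: force the whole string before closing the handle
-- (recent base versions also offer a strict readFile').
readStrictly :: FilePath -> IO String
readStrictly path = do
  h <- openFile path ReadMode
  s <- hGetContents h
  length s `seq` hClose h
  return s
```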
Beautiful language though, wish I could use it at work.
Sure some people go off the rails with some things, but it's possible to write mostly vanilla Haskell and get a lot done.
The CI for a medium-sized Haskell-using company should have multiple instances of:
Project repo -> CI compilation -> tests -> publish library to company's internal stackage.
That way, when someone inevitably publishes something that breaks things, the other teams will have an easy mitigation / known-good-version to fall back on.
In other codebases (especially if they use the typical popular dynamic languages), you might start breaking things up into microservices sooner than you would with Haskell. But given that Haskell runs much, much faster than those other languages, and isn't quite the memory hog that the JVM is (when we're comparing to Scala), you might find that it'd help you scale if you broke up the codebase into libraries, rather than breaking it up into microservices. This is especially the case if your company / project is bottlenecked on ops (which many companies are, not just ones using Haskell). Multiple libraries is much easier to manage than microservices and forestalls the point at which you'll need serious dedicated ops.
To me it's always been hard to reconcile that you have to deal with so many restrictions in the language that save you from ending up in some ill-behaving program state, yet a wrong fold can basically blow your entire program up.
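The canonical "wrong fold" is foldl on a long list, which builds a deep chain of thunks instead of doing any arithmetic; the strict foldl' is the one you almost always want:

```haskell
import Data.List (foldl')

-- foldl builds the thunk (((0+1)+2)+...) and only collapses it at
-- the end, which can overflow the stack on large inputs (unless
-- the optimizer happens to rescue it). foldl' forces the
-- accumulator at every step and runs in constant space.
sumLazy, sumStrict :: [Int] -> Int
sumLazy   = foldl  (+) 0  -- the space leak
sumStrict = foldl' (+) 0  -- the fold you almost always want

main :: IO ()
main = print (sumStrict [1 .. 1000000])  -- 500000500000
```

Both produce the same answer; only the evaluation order (and hence memory behavior) differs, which is exactly what makes the mistake so easy to ship.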
Laziness does have real benefits sometimes though. It's nice to be able to define infinite data structures, for instance.
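The standard example of that benefit is defining an infinite list and only paying for the prefix you inspect:

```haskell
-- An infinite list of Fibonacci numbers, defined in terms of
-- itself; laziness means only the elements we demand ever exist.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = do
  print (take 10 fibs)            -- [0,1,1,2,3,5,8,13,21,34]
  print (takeWhile (< 100) fibs)  -- stops on its own, despite the infinite source
```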
For one, it makes reasoning about runtime (as in "how long") harder: e.g. an innocent-looking "take 5" might be what actually executes half of the code you've written... which is also harder to reason about than if it were "all of it". This can also make debugging harder: values are only evaluated when they are needed, and a "value" itself may not be just, say, a single number, but a complex data structure consisting of many values, so that "evaluated" has many stages between "a thunk, not evaluated at all" and "weak head normal form". So code does not get executed where you wrote it in your program, but potentially much, much later. Semantically that should not make a difference (the results should be the same), but when you want to debug program flow...[1]
You can see how "thunks" in general add a certain dimension to everything you do. Throw in interaction with garbage collection, and things aren't always obvious in resource usage.
Also, there is maybe an argument that if you have infinite data structures that you only tend to evaluate partially (which is what makes laziness really fun, actually), then accidentally trying to fully evaluate such a data structure makes your program not terminate anymore. But I'm not sure how that's really a problem coming from laziness. Laziness allows you to handle infinite data structures in the first place, mishandling them is not much different than accidentally writing an infinite loop in a strict language, in my opinion.
That being said, I still enjoy writing Haskell very much when I get to. It makes me feel much more productive, and I do lament having to use other languages at work. I must say using Haskell was always on private projects, and I have never tried doing so in a team, though.
[1] On the flip side, I find the pureness and strictness of Haskell to make debugging easier, or just less necessary.