This is currently not the case, and it's sometimes not avoidable at all. Compile times are a real concern right now; a GHC maintainer has even admitted it's a very significant issue.
https://www.reddit.com/r/haskell/comments/45q90s/is_anything...
In particular we avoid "academic" features of the language like functors[2] and first class modules, because they obscure the flow of the code and are a headache for ordinary programmers to understand. I often get requests from people from the OCaml academic community asking if we have any available positions, and the answer has so far always been no.
[1] https://github.com/libguestfs/libguestfs/tree/master/v2v
If you're hiring OCaml programmers who can't understand (or learn to understand) functors, you're probably doing something wrong. Even using the stdlib Hashtbl requires an understanding of functors.
In OO land, they're just a way to ensure that an argument satisfies an interface, exposing certain public functions and properties — just like an object in Java has to implement the Comparable interface in order to be sorted in a collection.
There isn't anything exactly analogous in Java. Functor application can introduce new type definitions at compile time and enforce static type safety. You need this, for example, to statically require that you can only take the union of sorted collections whose types came from the same ordering module applied to the same ordered-set module.
I don't understand what you're saying. Are those applicants to Red Hat, or developers asking to contribute to libguestfs that you say no to?
Also, according to this[1], there are basically three main contributors, including you, so I fail to see how training would be an issue ...
[1]: https://github.com/libguestfs/libguestfs/graphs/contributors
* stack is pretty solid at multi-package builds, save everything in one single git repo for easier snapshotting. See yesod for reference
* use stack with stackage LTS unless you have a really good reason not to
* TH is nice to avoid, but you also miss out on great libraries like Persistent. Seems reasonably hard to dodge that one if you're sold on the conveniences of the yesod ecosystem
* We've been overall pretty happy with classy-prelude as Prelude replacement. Can throw off beginners at first, but is quite convenient to work with.
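To make the second point concrete, pinning to a Stackage LTS is essentially a one-line change in stack.yaml (the snapshot number and layout below are illustrative, not a recommendation of a specific version):

```yaml
# stack.yaml -- pin the whole build to one Stackage LTS snapshot
resolver: lts-5.9        # illustrative snapshot; pick a current LTS
packages:
- '.'                    # for multi-package builds, list each package dir here
extra-deps: []           # anything outside the snapshot gets pinned explicitly
```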
The part that most worried me was: "If we look at the history of programming, there are many portents of the future of Haskell in the C++ community, another language where no two developers (that I’ve met) agree on which subset of the language to use."
I also perked up when he talked about the difference between pipeline-paradigm coding and coding involving monadic trees.
I'd love to dive into the language, but with F# I'm happy enough that I code as little as possible and create lots of value. While I think that would be even more true in Haskell, it doesn't seem worth the switch -- yet.
Having said that, I'm definitely seeing some parallels between my own experience in F# and this author's experience in Haskell, especially the part about multiple ways to solve a problem. Are there no good books about common FP best practices? Seems like much of what you would consider a "best practice" would apply whether it's F#, OCaml, or Haskell.
I think the Haskell world would benefit from talking a bit more about "design patterns." I've seen the concept get a fair bit of abuse because of the claim that Haskell's abstraction facilities are good enough to eliminate the need for Visitor Pattern style boilerplate—which has truth to it, but throws out the baby with the bathwater.
Design patterns as in "common ways to structure programs and parts of programs" are fundamental for people who are learning to program productively. They constitute the repertoire of coding on the level above syntax. They're the principal structures of the idiomatic lexicon. The good ones seem obvious once you understand them, but a poor grasp of them causes vague confusion and big missteps.
In Haskell (and pure FP in general), there are a lot of useful design patterns that beginners pick up from the ambient culture if they're lucky:
- the reader/state/IO monad transformer stack;
- domain values as algebraic structures e.g. monoids;
- functor composition;
- free monad interpreters;
- XMonad-style pure core with I/O wrapper;
- explicit ID values for representing identity;
- recursion schemes;
- and so on and so on.
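The first item in the list above can be sketched like this; `Config`, `AppState`, and `step` are hypothetical names, and the stack shape (ReaderT over StateT over IO) is just the common arrangement, not the only one:

```haskell
-- A minimal sketch of the classic reader/state transformer stack,
-- using hypothetical Config/AppState types for illustration.
import Control.Monad.Reader   (ReaderT, runReaderT, asks)
import Control.Monad.State    (StateT, evalStateT, get, modify)
import Control.Monad.IO.Class (liftIO)

data Config = Config { verbose :: Bool }
newtype AppState = AppState { counter :: Int }

-- Read-only config on the outside, mutable-style state underneath, IO at the base.
type App a = ReaderT Config (StateT AppState IO) a

step :: App Int
step = do
  v <- asks verbose                          -- read the environment
  modify (\s -> AppState (counter s + 1))    -- update the state
  n <- counter <$> get
  liftIO $ if v then putStrLn ("step " ++ show n) else pure ()
  pure n

main :: IO ()
main = do
  n <- evalStateT (runReaderT (step >> step) (Config False)) (AppState 0)
  print n  -- prints 2: the counter after two steps
```

Note that the order of the layers matters: swapping ReaderT and StateT changes how the stack unwraps, which is exactly the kind of tacit knowledge beginners have to pick up.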
Sure, Haskell can represent a lot of these things as formal abstractions, and that's wonderful—but people still need to learn when and how to use them and adapt them for their domains. What I'm describing is far from a Haskell-specific problem, actually—there aren't that many resources in general that focus on these kinds of patterns.
When I was a beginner programmer, I was lucky to find Ward Cunningham's original wiki, the Portland Pattern Repository, which was all about spreading and discussing this kind of cultural conceptual knowledge.
There was a strong humanistic influence (from Christopher Alexander, through the early agile movement before it became a high-octane consultancy buzzword), whereas Haskell's culture has more DNA from mathematics and logic, and so treats patterns differently...
Still, I would love to see more discussion about functional patterns, at all scales from function implementation to application architecture.
Also, stuff like "strictness annotation" is not a function.
I think this happened to Ruby for some time, not anymore probably. Maybe the next language to get the benefits of the "experienced early adopters effect" will be Elixir.
I don't think you can beat other languages yet, except in some very niche areas. There's not even incremental compilation!
Hell, even python can infer function types these days!
Rust has been around much longer than Swift, for example, and yet there are many more production Swift apps, whereas Rust seems limited to play stuff.
Of course, given another 2-5 years it should catch up in many ways. Comparing it to mature languages, or to languages like Swift (from the world's largest company, paying some of the world's best language developers), is not really fair.
Definitely a playground for enthusiasts at this point. Which I guess is what you meant.
Conversion is part of the hassle, the other part is not having common functions (like, say, "splitPrefix") that will work across all string-like types.
For this, I recommend the monoid-subclasses package which, among other goodies, offers the TextualMonoid typeclass, which has instances for many string-like types.
http://hackage.haskell.org/package/monoid-subclasses-0.4.2/d...
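I won't reproduce the monoid-subclasses API here, but the out-of-the-box situation it works around looks like this: the "same" prefix-stripping operation lives in per-type modules with incompatible signatures, so generic code has to convert back and forth:

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- Without a shared class, each string-like type brings its own copy
-- of the same function:
import qualified Data.List as L  -- stripPrefix :: Eq a => [a] -> [a] -> Maybe [a]
import qualified Data.Text as T  -- stripPrefix :: Text -> Text -> Maybe Text

main :: IO ()
main = do
  print (L.stripPrefix "foo" "foobar")               -- Just "bar" (String)
  print (T.stripPrefix "foo" ("foobar" :: T.Text))   -- Just "bar" (Text)
```

With a typeclass like TextualMonoid you can write such a helper once against the class and have it work for every instance.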
I feel there must be some sort of story or interesting thing to learn here. Or is it just the usual str vs widestr type problems?
Also was interested to see the comment about huge records and the memory pressure it can cause. Seems like that's really an issue with immutability. I was expecting the author to provide some sort of advice or workaround for it, but apparently not.
The original Haskell strings were linked lists of characters. This was simple and elegant and worked well with the functional programming approach of the time (1980s, by the way, so maybe Haskell in its origins isn't quite so modern as you think). Nobody was much concerned about high performance string operations in Haskell at the time.
Inevitably, later people wanted to add more performant string types. But should they be lazy or strict? And do you want an abstract representation of Unicode, or do you want something more immediately suitable for arbitrary binary data? Enter four more string types. And now here we are with 5 string types in common use.
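For the record, here are the five types and some conversions between them (a sketch; text and bytestring are the usual packages and ship with GHC):

```haskell
import qualified Data.Text            as T   -- strict Text (Unicode)
import qualified Data.Text.Lazy       as TL  -- lazy Text
import qualified Data.ByteString      as B   -- strict ByteString (raw bytes)
import qualified Data.ByteString.Lazy as BL  -- lazy ByteString
import qualified Data.Text.Encoding   as TE

s :: String          -- the original: [Char], a linked list of characters
s = "hello"

t :: T.Text
t = T.pack s         -- String -> strict Text

bytes :: B.ByteString
bytes = TE.encodeUtf8 t   -- abstract Unicode -> concrete UTF-8 bytes

lazyT :: TL.Text
lazyT = TL.fromStrict t   -- strict -> lazy

lazyB :: BL.ByteString
lazyB = BL.fromStrict bytes

main :: IO ()
main = putStrLn (T.unpack (TE.decodeUtf8 bytes))  -- round-trip back to "hello"
```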
For me, Haskell has been a huge productivity booster. It forces me to write good code, and 90% of the mistakes that I normally make are caught at compile time. I use servant for creating my API server and it is a pleasure to use (being able to create APIs declaratively is very easy for me to understand as a non-professional). I recently had to write some front-end code in JavaScript and I noticed how simple mistakes can take half an hour to debug (pass in an array of objects instead of an object and you will wonder why your `for (k in obj)` logic is not working).
I refactor huge parts of my code - can gut a particular API endpoint response and rewrite it without worrying about introducing more bugs. This allows me to start writing code without a lot of planning and refactor as I go instead of spending a lot of time upfront thinking through the implications.
Using Hoogle or Hayoo to search for functions based on types is simply awesome. If you want a function that returns the index of a list element that satisfies a predicate, you don't need to come up with search words; instead you just look for it based on the types, like so: http://hayoo.fh-wedel.de/?query=%28a+-%3E+Bool%29+-%3E+[a]+-.... This does not mean you can solve all problems this way, but it is much quicker for getting easy answers than Googling or posting on SO.
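For reference, that type query lands on Data.List.findIndex from base:

```haskell
import Data.List (findIndex)

-- findIndex :: (a -> Bool) -> [a] -> Maybe Int
-- returns the index of the first element satisfying the predicate
main :: IO ()
main = do
  print (findIndex even [1, 3, 5, 6, 7])  -- Just 3 (the element 6)
  print (findIndex (> 10) [1, 2, 3])      -- Nothing
```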
Most developers write their APIs and then need to spend more time documenting them. With servant I just derive documentation directly from the types, so my documentation moves in lock step with my code.
That said, using Haskell has its challenges (especially as a beginner).
* Compile-refresh cycle is frustrating. With Python or Go, you make a change, refresh the page, and it is there. With Haskell, the 30s wait can become painful. My code is organized into modules, but it is hard to avoid dependency-related compile issues -- I tend to put most of my data records (mostly related by business logic) in one module, and if you add anything to that module, you end up recompiling everything that depends on it (which is most of the code). I am sure a pro Haskeller would do it differently, but the suggestions I've found online are not steering me in the right direction.
* Starting out with a fresh framework is not trivial for a beginner. servant is really nice because it came with very good documentation and also a tutorial by someone on how to use it with wai (http server) and persistent (an ORM). Without it, I would not have been able to get to where I am now.
* Not all libraries are created equal -- the common use cases are well documented, or have enough documentation that you will be OK. But once you venture beyond common use cases, trying to use a library is not easy. Most of the time, types help, and being able to test things in the REPL is nice, but it can only take you so far. I recently looked at a library for creating a MIME string but got lost in its undocumented internals.
* Haskell's community is great. You will always find someone to help you. However, as the author was saying, you want to get some work done and you cannot wait 2 days for an answer on Reddit or SO. IRC is great for specific libraries (say servant or Yesod) but general #haskell can be daunting -- 90% of the people seem to be discussing Lenses or Free Monads and your stupid type error is not going to get any attention.
* Easy parts of Haskell are very easy to pickup and the hard parts seem impossible to fully comprehend (I will think I understand Monad Transformers and will try to implement some logic but will get stuck in type check hell and quickly give up).
Btw, I did not know about fieldLabelModifier until now and it will cut down a lot of boilerplate. That said, I cannot wait for GHC 8 and its DuplicateRecordFields! Records are really annoying for writing business-logic-driven API servers.
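For anyone else who missed fieldLabelModifier, here is a minimal aeson sketch; the `User` record and the strip-the-prefix convention are just for illustration:

```haskell
{-# LANGUAGE DeriveGeneric, OverloadedStrings #-}
import Data.Aeson
import Data.Char (toLower)
import GHC.Generics (Generic)

-- Hypothetical record; the "user" prefix avoids the field-name clashes
-- between records that DuplicateRecordFields later addresses.
data User = User { userName :: String, userAge :: Int }
  deriving (Show, Eq, Generic)

-- Strip the prefix in the JSON keys instead of in the Haskell field names:
jsonOpts :: Options
jsonOpts = defaultOptions
  { fieldLabelModifier = \f -> case drop 4 f of  -- drop "user"
      (c:cs) -> toLower c : cs
      []     -> [] }

instance ToJSON User where
  toJSON = genericToJSON jsonOpts

instance FromJSON User where
  parseJSON = genericParseJSON jsonOpts

main :: IO ()
main = print (decode "{\"name\":\"ada\",\"age\":36}" :: Maybe User)
-- Just (User {userName = "ada", userAge = 36})
```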
http://chrisdone.com/posts/haskell-repl
http://chrisdone.com/posts/making-ghci-fast
The Haxl team seem to use this strategy:
"Haxl users at Facebook do a lot of development and testing inside GHCi. In fact, we’ve built a customized version of GHCi that runs code in our Haxl monad by default instead of the IO monad, and has a handful of extra commands to support common workflows needed by our developers."
http://simonmar.github.io/posts/2016-02-12-Stack-traces-in-G...
https://github.com/ndmitchell/ghcid http://neilmitchell.blogspot.de/2014/09/ghcid-new-ghci-based...
Regarding changing data records, I've suffered the same problem as well. At best I've isolated what I can into separate modules so that the impact of a small change is smaller, but that obviously isn't universally applicable.
Note that there is also a #haskell-beginners IRC channel where it's sometimes easier to get a response to certain questions than on the much broader #haskell channel. Both channels are full of incredibly nice people for sure.
I too did not know about fieldLabelModifier and am excited about the GHC 8 changes as well!
Generics is also not without its faults (it inflates compile time/memory, for example; see the problems surrounding aeson 0.10). If the article is correct when it says that /[TH is] an eternal source of pain and sorrow/, then you're trading one pain for another.
Besides, data outlives code. By orders of magnitude. You seem to be in love with Haskell today, chances are you'll be using a different language in ten years but the data you'll be working with will probably have been around for much longer than that.
Don't fall in love with programming languages, it's a waste of emotional energy.
Of course one can do computations on those types, but it is so unnatural that it scares me:
* BigDecimal for currency, let me laugh
* Date/timestamp without proper casting rules
* Type towers with inheritance, generics, etc. Pfff...
* Still no fine library to represent an address
* Representing mutable ordered lists in SQL databases is still quite painful (possible, sure, but there's so much boilerplate code to write)
So the data representation/manipulation problem, which goes along with the data longevity you observe: that's something to learn about...
There's no reason to love current programming languages :-( (but I do love Python :-) 3 of course :-))
Actually, as silly as it sounds, I write most of my hobby code in C++ because of this. Well, not for a thousand-year span, but for longevity anyway.
Question 2: How the F#*$ are you a software developer in Mountain View without a job?