I'm a devout Clojure developer. I think it delivers on the promises he outlines in his talk, but I also have no small appreciation for Haskell as an outrageously powerful language. Everyone robs from Haskell for their new shiny language, as they should. Unfortunately, not a night goes by when I don't ask God to make me smart enough to understand how a statement like "a monad is just a monoid in the category of endofunctors" can radically change how I implement marginally scalable applications that serve up JSON over REST. Clojure talks to me as if I were a child.
Rich Hickey is selling the case for Clojure, as anyone who wants his or her language adopted should. His arguments are mostly rational, but they are also a question of taste, which I feel he admits. As for this writer, I'm glad he ends by saying it isn't a flame war. If I had to go to war alongside another group of devs, it would almost certainly be Haskell devs.
You seem to be in the gradual-typing camp, which Clojure falls into as well, though only experimentally. Racket, TypeScript, Shen, C#, or Dart are better examples of it.
make me smart enough to understand how a statement like "a monad is just a monoid in the category of endofunctors" can radically change how I implement marginally scalable applications that serve up JSON over REST.
That's the thing, it doesn't radically change it. Static types are not powerful enough to cross remote boundaries. Also, monads don't need static types, and they fully exist in Clojure. Haskell is more than a language with a powerful static type checker; it's also a pure functional programming language. It will help you if you don't complect static types with functional programming. There are more design benefits from functional programming than from static types. Learning those can help you write better code, including JSON-over-REST API style applications.
Clojure and Haskell are a lot more similar than people think. Clojure is highly functional in nature, more so than most other programming languages. So is Haskell. Haskell just adds a static type checker on top, which forces you to add type annotations in certain places. It's like Clojure's core.typed, but mandatory and better designed.
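The point that monads work fine without a static type checker can be sketched in any dynamic language. Here's a minimal Maybe-style monad in Python for illustration; all of the names (`NOTHING`, `unit`, `bind`, `safe_div`) are hypothetical, not from any library:

```python
# Illustrative sketch: a Maybe-style monad in a dynamically typed language,
# showing that the monad pattern needs no static type checker.

NOTHING = object()  # sentinel for the absent case

def unit(x):
    # Wrap a plain value; here a present value is just the value itself.
    return x

def bind(mx, f):
    # Short-circuit on NOTHING, otherwise apply f to the value.
    return NOTHING if mx is NOTHING else f(mx)

def safe_div(a, b):
    # A computation that can fail.
    return NOTHING if b == 0 else a / b

# Chain computations; any failure propagates automatically.
result = bind(safe_div(10, 2), lambda x: safe_div(x, 5))
assert result == 1.0
assert bind(safe_div(10, 0), lambda x: safe_div(x, 5)) is NOTHING
```

No type annotations anywhere, yet the short-circuiting behavior is exactly the Maybe monad's.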
?
Is this related:
https://ocharles.org.uk/blog/guest-posts/2014-12-23-static-p...
There is no fun left if you know all the overarching principles of a language and realize it still doesn't solve your problem. This happened to me when learning Python, and it's also why I don't really look at Go or Rust. They're good languages, and I might use them at a workplace someday, but you can get to the end of their semantics and still be left with the feeling that it's not enough.
(EDIT: minor wording improvement)
That said, Python is also smarter than me. The possibilities with monkey patching and duck typing are endless. But unlike Haskell, Python is not a good teacher, so I tend to only create messes when I go out of my way exploring them.
I tend to think of Haskell as an eccentric professor.
Sometimes it's brilliant and what it's developed lets you do things that would be much harder in other ways.
Sometimes it just thinks it's clever, like someone who uses long words and makes convoluted arguments about technicalities no one else can understand in order to look impressive. Then someone who actually knows what they're talking about walks into the room and explains the same idea so clearly and simply that everyone is left wondering what all the fuss was about.
A functor is a container that you can reach into to perform some action on the thing inside (e.g. mapping the sqrt function on a list of ints). The endo bit just tells you that the functor isn't leaving the category (e.g. an object, in this case a Haskell type, when lifted into this functor context is still in the Haskell 'category'). A monoid is something we can smash together that also has an identity (e.g. strings form a monoid under concatenation and the empty string as an identity). So, in other words, monads are functors ('endofunctors') that we can smash together using bind/flatMap, and we have an identity in the form of the Id/Identity functor (a wrapper, essentially - `Id<A> = A`).
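That description can be made concrete in a few lines. Here's an illustrative sketch in Python using lists as the endofunctor; the names `fmap`, `unit`, and `bind` are chosen to mirror the Haskell vocabulary, not taken from any library:

```python
# Sketch of "a monad is a monoid in the category of endofunctors",
# with Python lists standing in for the endofunctor.

def fmap(f, xs):
    # Functor: reach inside the container and apply f to each element.
    return [f(x) for x in xs]

def unit(x):
    # The monoid's identity: wrap a plain value.
    return [x]

def bind(xs, f):
    # The monoid's "smash together": flatMap.
    return [y for x in xs for y in f(x)]

# Identity laws: unit is a left and right identity for bind.
assert bind(unit(3), lambda x: [x, x]) == [3, 3]
assert bind([1, 2], unit) == [1, 2]

# Associativity: the nesting order of binds doesn't matter.
f = lambda x: [x, x + 1]
g = lambda x: [x * 10]
assert bind(bind([1, 2], f), g) == bind([1, 2], lambda x: bind(f(x), g))
```

The two identity laws and associativity are precisely the monoid structure; `unit` plays the role of the Identity functor.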
Once I discovered that the Monad thing in Haskell has pretty much nothing to do with the Monad in Category Theory, everything made much more sense. As a bonus, I now (sort of) understand Category Theory. It's much the same as how relational databases don't have very much to do with relational algebra.
> the Monad thing in Haskell has pretty much nothing to do with the Monad in Category Theory
I'd amend that to "the Monad thing in Haskell is a very simple special case of the Monad in Category Theory". Thinking you have to "learn category theory" before you can use a Monad in Haskell is like thinking you have to learn this
https://en.wikipedia.org/wiki/Function_space#Functional_anal...
before using a function.
Hopefully we all agree that static types and dynamic types are useful. Those who use hyperbole are attempting some form of splitting. I think the point where we disagree is what the default should be. The truth is this discussion will rage on into oblivion because dynamic types and static types form a duality. One cannot exist without the other and they will forever be entangled in conflict.
I am not sure what you mean when you talk about the duality of static and dynamic types. One can exist without the other and most statically typed languages either forbid or strongly discourage dynamic typing.
The author here is missing the rhetoric. The rhetoric is not about the programming language but about how we should be doing information processing. Except that the author isn't missing that point:
> In Haskell we typically “concrete” data with record types, but we don’t have to.
Great. That is the dichotomy. And it's not a "false" one. This is the question: should we be "concreting"? That's the whole dichotomy/point that is being made. By encoding EDN/Clojure in Haskell the author has gone through a cute intellectual puzzle but hasn't contributed to the crux of the discussion. (Indeed, he's tried to dismiss it as "false".)
The ergonomics that he ends up with are fairly lean (at least in the examples he's shown), though the Clojure expressions are a little leaner. But that's probably because Clojure has actually taken a stance/belief/opinion on the very real question/dichotomy at hand.
https://github.com/bos/aeson/blob/master/Data/Aeson/Types/In...
> though the Clojure expressions are a little leaner
Yes they are. The price is complete lack of type safety. And the benefit is an insignificantly small reduction in boilerplate code.
The number of bugs I've seen where somebody would "get" a number that turned out to be a string or string that turned out to be a number...
At some point your program has to do some specific task over some specific kind of data. Maybe you wanted to ask when should we be concreting?
This post by DeGoes makes the same point. http://degoes.net/articles/kill-data
The default model of algebraic data types is too inflexible.
There are different extensions that address this issue: a good record system, extensible cases, free monads, etc. We can have concise syntax for automatically declaring an interface for an algebraic data type based on some field values (i.e., customizable deriving statements), and namespace-qualified keywords, so we have RDF-like attribute-based semantics.
The post doesn't respond to the issue, but it suggests that if you want to do the same thing in Clojure with error handling etc., you will need to think about this stuff.
Also, Hickey's mention of monads was again not about static types. Monad laws are not typechecked; their motivation is purity. The only slight inconvenience in a dynamic context is that you don't have return-type polymorphism, so you have to type IOreturn instead of return.
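The return-type polymorphism point can be shown in a dynamic language directly. A sketch in Python, with all names (`list_return`, `maybe_return`) invented for illustration:

```python
# Without static types, the runtime cannot infer which monad a generic
# "return" should build, so each monad needs its own explicitly named unit.

def list_return(x):
    # unit for the list monad
    return [x]

def maybe_return(x):
    # unit for a tagged Maybe-style monad
    return ("Just", x)

# In Haskell, `return 3` means either of these depending on the inferred
# result type; in a dynamic language you must name the one you mean.
assert list_return(3) == [3]
assert maybe_return(3) == ("Just", 3)
```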
This is not true, and it's important to clear up this misconception lest anyone think "the only good reason to use monads in Haskell is because it's a pure language".
* The use of monads in functional programming arose purely technically as an innovation in denotational semantics
* Then someone noticed you could use it to wrap up IO purely in Haskell
* Then it was noticed you could use it for all sorts of other stuff besides dealing with IO in a pure language.
Monads are only a little bit related to purity.
My point is that this doesn't have much to do with static typing vs dynamic typing per se, as we don't check them statically. They are just important examples of an interface with implementations for many data types, which can be useful even in a dynamic language like Clojure. People who write a parser in a dynamic language might benefit from learning about the distinction between applicatives and monads.
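The applicative/monad distinction for parsers can be sketched in a few lines of Python. A parser here is just a function from an input string to `(value, rest)` or `None`; all combinator names are hypothetical:

```python
# Applicative vs monadic parsing: applicative structure is fixed up front,
# while monadic bind lets a later parser depend on an earlier result.

def char(c):
    # Parse exactly the character c.
    def p(s):
        return (c, s[1:]) if s.startswith(c) else None
    return p

def ap_pair(p1, p2):
    # Applicative-style sequencing: both parsers are known in advance.
    def p(s):
        r1 = p1(s)
        if r1 is None:
            return None
        v1, rest = r1
        r2 = p2(rest)
        if r2 is None:
            return None
        v2, rest2 = r2
        return ((v1, v2), rest2)
    return p

def bind(p1, f):
    # Monadic sequencing: the second parser is computed from the first result.
    def p(s):
        r1 = p1(s)
        if r1 is None:
            return None
        v1, rest = r1
        return f(v1)(rest)
    return p

assert ap_pair(char("a"), char("b"))("abc") == (("a", "b"), "c")

# Context-sensitive: parse a character, then require the same one again.
# This is expressible with bind but not with ap_pair alone.
doubled = bind(char("a"), lambda v: char(v))
assert doubled("aa") == ("a", "")
```

The practical payoff of knowing the distinction: applicative-only grammars can be analyzed statically (the shape is fixed), while monadic ones can be context-sensitive.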
Data is just information; it doesn't care about its implementation and is portable across platforms and languages.
You can attach your monads to an #error value when you parse it. Or not, if you happen to parse it in Python.
Yet I really do like Clojure, F#, and PureScript. There's an experimental C++ back-end to PureScript now [0]. I wonder if that will ever be a viable production target?
Anyway, one of the things I like about PureScript is the row-types. Does anyone know if there's a plan to get row-types into Haskell?
Obligatory reminder to anyone enjoying PureScript so much they want to compile it to executable binaries for their backend work (instead of Node or such) --- I'm still hacking along on my PureScript-to-Golang trans/compiler (GH to follow in profile if interested). Unlike most alternative backends (to date) it's not a parallel fork of the purs compiler but works off the official purs compiler's `--dump`ed intermediate-representation files. Seemed more tractable to me to do it that way.
The proposal to remove the Eff type (going back to IO) from PureScript is telling.
I don't know if we will ever invent the perfect static type system, but I do know that having the ability to specify some types in a pretty good type system is better than not being able to specify any types.
I'm convinced that a language with a gradual type system is strictly better than one without. Therefore, any debate that compares static vs dynamic, instead of static vs gradual, is not interesting to me.
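Python's type hints are a concrete instance of gradual typing: annotate some code, leave the rest dynamic, and a checker such as mypy verifies only what's annotated. A small sketch (the function names are made up):

```python
# Gradual typing sketch: annotations are optional and per-declaration.

from typing import Any

def area(width: float, height: float) -> float:
    # Fully annotated: a static checker can verify calls to this function.
    return width * height

def legacy_helper(blob):
    # Unannotated: treated as dynamic, with no static checking imposed.
    return blob

untyped: Any = legacy_helper({"w": 3})  # Any opts out of checking explicitly

assert area(3.0, 4.0) == 12.0
```

This is the "static vs gradual" framing: you choose, declaration by declaration, how much checking to buy.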
(map identity "foo") ;; a seq of a String is its chars
;=> (\f \o \o)

(map identity {:foo :bar}) ;; a seq of a Map is the pairs of key/values
;=> ([:foo :bar])

cljs.user=> (get [:x :y :z] 1)
:y
cljs.user=> (get 5 :x)
nil
r/clojure on JSON vs EDN: https://www.reddit.com/r/Clojure/comments/6gytlf/json_vs_edn...
Transcript of Rich Hickey EDN talk which OP obviously hasn't seen: https://github.com/matthiasn/talk-transcripts/blob/master/Hi...
Transcript of Rich Hickey talk OP linked, C-f "edn": https://github.com/matthiasn/talk-transcripts/blob/master/Hi... Perhaps OP had his fingers in his ears while he watched it. This blog post should be retracted with an apology.
> Transcript of Rich Hickey talk OP linked, C-f "edn":
What do you mean? There are these three occurrences of "edn", none of which is enlightening.
* That's great, I'll start shipping some edn across a socket and we're done.
* How many people ever sent edn over wire? Yeah
* So the edn data model is not like a small part of Clojure, it's sort of the heart of Clojure, right? It's the answer to many of these problems. It's tangible, it works over wires
It sounds like he mostly cares about edn because of wires.
http://hyperfiddle.net/ (my startup) is an example of a data driven system. Hyperfiddle itself is implemented as a large amount of data + 3000 loc to interpret it. If the system is only 3000 loc, you're really not at the complexity scale where all that category theory gymnastics really pays off.
Seems like critiques of a programming language or paradigm are usually made by someone imagining a very bad codebase from their past.
It is extremely easy to use Haskell in "dynamic mode". Just use `ByteString` (or `Data.Dynamic` for safety/convenience) for all your data. Types just present a way to encode some statically known guarantees about the structure of your data/code. You are free to not encode any properties if you want to.
But it is very rare that the data you are working with requires the full generality of `ByteString`. You usually have some sort of structure rather than just working with strings of zeros and ones.
While technically true, saying this is about as useful as saying "You can do anything in any Turing-complete programming language."
Doing "dynamic" typing in a static language requires me to add all of 5 characters, e.g. ": Any"
Doing static typing in a dynamic language requires me to write a type checker.
These are nowhere near the same.
With dynamic types (or just one type), you don't even have the option to do this.
You still have types; they're just not checked at compile time.
Why is it only a marginal improvement? It adds considerably more semantic information.
"Utilizing EDN also promotes a lot of invisible coupling. Some may tell you that dynamic types don’t couple, but that is incorrect and shows a lack of understanding of coupling itself. Many functions over Map exhibit external and stamp coupling."
Coupling implies a bidirectional connection. Functions rely on data types, but not vice versa.
You need either Dynamic or existentials because Clojure enables you to pass data structures between two functions expecting collection elements of differing capabilities without either A) whole program / inter-module analysis or B) an O(N) type translation.