Future, which is really a souped-up promise abstraction, is the main star, and for-comprehensions are nice syntactic sugar. Being a "Monad" doesn't help you do anything with it, though. And Future technically doesn't even truly obey the monad laws, because of exceptions.
It's useful that a couple of container classes in Scala implement this trait:
trait ChainableContainer[A] {
  def map[B](fn: A => B): ChainableContainer[B]
  def flatten[B](nested: ChainableContainer[ChainableContainer[B]]): ChainableContainer[B]
}
But beyond making it a little easier to guess what's going on, the underlying math isn't useful in actual programming. Future also breaks the monad laws by not being referentially transparent, and that causes a lot of complexity.
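Concretely (a minimal Scala sketch, not tied to any particular library): once containers share map and flatMap (which is just map followed by flatten), the same for-comprehension syntax works over any of them.

```scala
// The same for-comprehension works over Option...
val pair: Option[(Int, Int)] =
  for {
    a <- Some(2)
    b <- Some(3)
  } yield (a, b)
// ...desugars to Some(2).flatMap(a => Some(3).map(b => (a, b)))

// ...and over List, with no change in syntax.
val pairs: List[(Int, Int)] =
  for {
    a <- List(1, 2)
    b <- List(10, 20)
  } yield (a, b)
```

Whether you call that shared interface a "monad" or a ChainableContainer, the compiler only cares that flatMap and map exist with the right shapes.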
Almost all the theorist/logician/algebraist-turned-programmer folks I know feel more-or-less the same way.
Just explain the pattern in English. Might as well call it a FooDeBar. Giving axioms and such is usually a waste of The Man's money and my time.
That said, I think it's extremely valuable to see this sort of thing in an academic setting or as a side-project/enrichment activity. The mindset and mental model are fantastically helpful. See dshnkao's post, for example.
Monads should never have escaped Haskell.
"To get an adequate definition of g [<] f we must therefore consider all possible algorithms for computing f."
You see, I was reading this paper because I'm currently looking into the various ways that the symmetries of Turing machines are modeled (i.e., the fact that a TM can emulate any other TM would seem to imply that the concept of an 'algorithm' must have fundamental features that are preserved across this kind of virtualization; closely related to the concepts of an oracle, a non-deterministic TM, and recursion).

Now, this passage stood out to me because it seemed to be a set-theoretic, indirect phrasing of the exact symmetry I'm talking about. We have to consider all possible algorithms for computing f in order to recover the features that must be true for all of them, and thus give identity to the notion of a general algorithm for f, which in turn is used to establish theorems about that general notion (critically, its "relative complexity").
My point being that I would not have expressed this notion in set theoretic language because my intuition wants it expressed in algebraic language. Things are identified by their symmetries, and building theories around symmetries directly is more efficient and insightful than building theories around how those symmetries are embedded in set theory. And when I sit around thinking about how to design a good API or an efficient program, I want to do it by stating what must always be true about the problems I'm solving, in a high-level, abstract way. (I'm not so good at it in practice.)
Set theory may be a simple model of many things, but it is only really intuitive for things that can be easily described in terms of unions and intersections. It is often just extra work elsewhere, at least if you have a frame of mind to see other ways.
[1] https://www.cs.toronto.edu/~sacook/homepage/rabin_thesis.pdf
The author slams imperative programming for not being backboned by a robust compositional system, as FPers often do.
BUT :)
There are lots of good programs in both styles... to me, this implies that there are more important things in programming: the design choices you make to achieve a purpose. It's nice to express your designs in a robust system (and be influenced by it), but what matters most are those choices. When I look at well-programmed things, it's always the design choices that impress me.
One great method is to write your program in a nice functional language (Haskell, CL/Racket/Scheme, etc.), and where you need optimization, replace functions with lower-level imperative implementations (C/C++, Rust, etc.) via FFI.
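The shape of that pattern in miniature, sketched here in Scala for both halves rather than across a real FFI boundary (function names are illustrative, not from any library): keep a pure, functional interface, and swap in an imperative body where profiling says it matters.

```scala
// The "nice functional" version: declarative, easy to reason about.
def sumFunctional(xs: List[Int]): Int = xs.foldLeft(0)(_ + _)

// A drop-in imperative replacement with the same signature.
// Callers can't tell the difference; only the body changed.
def sumImperative(xs: List[Int]): Int = {
  var acc = 0
  var rest = xs
  while (rest.nonEmpty) { acc += rest.head; rest = rest.tail }
  acc
}
```

The key design point is that the boundary is the function signature, so the substitution is invisible to the rest of the program.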
It's a little like saying, yes, fine we can reference the hyperreals to derive infinitesimal calculus in a beautiful way, but it's mostly abstract and highly involved, even to just understand the foundational aspects (which one may argue is the case for normal calculus, but the notion of local linearity I believe is much more obvious than an extension to the reals). Though it's a nice academic exercise, which may even yield some insights, it's rather obscure and unintuitive for almost all purposes.
This is the way I view category theory and functional programming, though I'd love to be corrected if I'm wrong. There's rarely a need to invoke any of it, since most things can be expressed quite clearly in the language of set theory when rigour is needed.
Now consider categories with various types of constructions (products, limits, exponentials, etc.) and you'll notice they correspond to requiring certain features in your language.
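A hedged illustration of that correspondence in Scala: products show up as tuple types, and exponentials as function types, with curry/uncurry witnessing the isomorphism between `(A, B) => C` and `A => B => C`.

```scala
// Exponentials: the two function shapes carry the same information.
def curry[A, B, C](f: (A, B) => C): A => B => C = a => b => f(a, b)
def uncurry[A, B, C](f: A => B => C): (A, B) => C = (a, b) => f(a)(b)

// Products: a function out of a pair...
val add: (Int, Int) => Int = _ + _
// ...becomes a function returning a function, enabling partial application.
val add1: Int => Int = curry(add)(1)
```

This is the sense in which "having exponentials" is just a categorical way of saying the language has first-class functions.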
Given that, after all, it's just posting likes we're talking about. You'd think that with such a bold proposition as "Why category theory matters -- no really, it does!" this would be about hot-synching whole data centers, or getting drones to deliver medicine in Africa, or something like that. You know, something perhaps unthinkable (or maybe just far less tractable) with procedural (or just less sophisticated functional) code.
But no, in this case apparently it's about... updating your FB likes.
> 9 lines of...functional
First of all, those numbers are more like 1000 vs 200, but lines of code isn't a useful metric here.
The thing functional programming does for us is not to hide the functionality of our code but to make it more consistent.
Category theory shows us ways that our code can be more consistently organized, so that we don't need to reason about the entire codebase when writing the code that glues it together.
This means we can create modules that are so compatible, all it takes to compose them is a simple map or fold.
While it may take more effort to understand the functional language itself, you may find that effort is comparatively less than learning what someone's imperative code does.
I always know right away what map and foldl will do, but I can never know what a loop does without reading through that loop.
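A small Scala example of the point: map and foldLeft announce their behavior in their names, one transforms each element and the other reduces, while a hand-rolled loop could do anything until you read it.

```scala
// map: transforms each element, nothing more.
val doubled = List(1, 2, 3).map(_ * 2)

// foldLeft: reduces the list with the given operation and starting value.
val total = List(1, 2, 3).foldLeft(0)(_ + _)
```

A while-loop computing `total` would need its initialization, condition, and mutation all inspected before you could say the same thing.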
This is kind of a false dichotomy, as well-written FP/typed code tends to have fewer bugs. Also, your argument that more people understand it doesn't mean much, as few people understand something they haven't been exposed to.
> hot-synching whole data centers
It actually does. There was a podcast with Paul Chiusano who is a big proponent of this type of programming http://futureofcoding.org/episodes/10-unisons-paul-chiusano-.... IIRC Spark is based on Algebird which is somewhat close to this type of programming. Commutativity is a life saver for distributed computing.
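A toy sketch of why commutativity helps (this is not Algebird's actual API; the merge function here is illustrative): if per-partition counts are merged with a commutative, associative operation, the order in which partitions arrive can't change the result.

```scala
// Merge two count maps by summing values key-by-key.
// Addition is commutative and associative, so merge inherits both properties.
def merge(a: Map[String, Int], b: Map[String, Int]): Map[String, Int] =
  (a.keySet ++ b.keySet)
    .map(k => k -> (a.getOrElse(k, 0) + b.getOrElse(k, 0)))
    .toMap

val p1 = Map("spam" -> 3, "ham" -> 1) // counts from partition 1
val p2 = Map("spam" -> 2)             // counts from partition 2
// merge(p1, p2) == merge(p2, p1): arrival order is irrelevant.
```

That's exactly the property a distributed system wants, since it can't promise which node reports first.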
Nice to know, and I wish I understood these issues, and these kinds of arguments better.
I'm just saying - the original blog post didn't come anywhere near to making that kind of an argument.
I tentatively agree but I would caution you against making such claims in public because they are hard to rigorously justify and can end up making functional programmers just look arrogant.
Anyone put off by some of the negative responses in this thread, note that many of the same people saying "FP isn't necessary/helpful in real-world programming" are also saying "but understanding this stuff is very helpful to your development as a programmer".
-- Oscar Wilde
Although apparently some coders desperately want to think their glorified bit-shuffling is some deeply abstract mathematical endeavor.
That is actually what Facebook is doing with their spam detection code - their purely functional architecture makes it easier and more reliable for them to hot-swap parts of their codebase when they need to update it.
https://code.facebook.com/posts/745068642270222/fighting-spa...
category* category * -> *proceeds to add layers of complexity
Why not just write a function to compose them? Argue against the competition, not strawmen.
Essentially, it's DRY at a higher level of abstraction.
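For instance, plain function composition in Scala does the job without any category-theory vocabulary:

```scala
// Two small functions...
val trim: String => String = _.trim
val shout: String => String = _.toUpperCase

// ...composed with the standard-library andThen, no extra machinery.
val trimAndShout: String => String = trim andThen shout
```

Whether you think of `andThen` as morphism composition or just "run one, then the other," the code is identical.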
The article doesn't give me a reference for what syntax I'm trying to parse, and while most of it is easy to guess, it would be helpful to have an explicit reference.
1. Man, I must be dumb; I can't even begin to understand why it matters.
2. Can't we achieve/explain the same goal by keeping it simple, stupid?
We should keep it "as simple as possible but no simpler". Sometimes "as simple as possible" is actually somewhat challenging ...