To be fair, GHC and the Haskell ecosystem are far more complex than Clojure and its ecosystem/standard library. Nevertheless it's pleasant how easy Clojure was to upgrade (although of course this stability is nothing special: more conservative languages like Go and Java generally break almost nothing on upgrade).
The fundamental difference is that Clojure gets distributed as source and not as compiled .class files. Being compiled on the fly from source does have some advantages. But it also has drawbacks: startup performance suffers, the Java interoperability story is worse, many tools such as Android's toolchain expect bytecode, etc.
The problem that Scala has (and also Clojure, as soon as you do AOT) is that Scala's idioms do not translate well into Java bytecode, as Java's bytecode is designed for, well, Java. Therefore even small and source-compatible changes, like adding another parameter with a default value to a method or like adding a method to a trait, can trigger binary incompatibilities.
The plan for Scala is to embed in the .class file, besides the bytecode, the abstract syntax tree built by the compiler; the compiler can then take a JAR and repurpose it for whatever platform you want. This is part of the TASTy and Scala-meta projects. If you think about it, it's not that far from what Clojure is doing, except that this is done in the context of a static language that doesn't rely on the presence of an interpreter at runtime. Of course, LISPs are pretty cool. And of course, you'll still need to recompile your projects, but at least then the dependencies won't have to be changed.
Making an API in Clojure using Swagger gives you a full, interactive UI and documentation for your API, while also giving you a schema that tells you what is being submitted and validates it (e.g. is that field a string or a number?).
As a note making a Swagger app with Luminus is as simple as:
```shell
lein new luminus myapp +swagger
cd myapp
lein run
```

Once the server starts, browse to http://localhost:3000/swagger-ui/index.html to see your Swagger API.
The biggest thing holding me back from learning Clojure is that I fear it will take me a decade to become remotely competent in it.
Say you have `(map inc [1 2])`. You can run that, and get `'(2 3)`.
A transducer is the `(map inc)` part of that call (slightly confusingly, this isn't partial application or currying). You can apply it to something like `[1 2]`, but you can also compose with it, by combining it with say, `(filter even?)` to get something that represents the process of incrementing everything, then removing odd numbers. Or you can put in things that aren't collections, like asynchronous channels, and get back a new channel with the values modified accordingly.
That's pretty much it.
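To make that concrete, here's a small sketch of the two uses described above (`xf` is just an illustrative name):

```clojure
;; Calling (map inc) with no collection returns a transducer.
;; comp chains it with (filter even?): increment everything,
;; then keep only the even results.
(def xf (comp (map inc) (filter even?)))

;; Apply it to a collection, collecting into a vector:
(into [] xf [1 2 3 4])
;; => [2 4]

;; Or run it as a single-pass reduction:
(transduce xf + 0 [1 2 3 4])
;; => 6
```

The same `xf` could be attached to a core.async channel, which is what makes transducers independent of their input source.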
What I think I love most about Clojure is that there are fantastic, esoteric, academic ideas that, when I read about them in a context like this for the first time, I a) do not understand them, and b) have no idea how they would be useful. Then I read an example or two, and suddenly it's apparent that the tool is really as simple as it can be--there's very little accidental complexity--and is extremely useful.
I do remember looking into them before and translating them into Haskell and they ended up not being identical to functions in the trivial sense that you suggest, but I forget how.
That said the concept is pretty cool and we shouldn't fear learning new things. After all, the whole point of learning a new programming language is to be exposed to new ways of solving problems, otherwise why bother?
Try Carin Meier's Living Clojure if you feel up for an intro.
It's possible to implement most other sequence functions (map, filter, take, etc.) in terms of reduce. Transducers take advantage of that.
Unfortunately, if you try to do that, you'll notice that the implementation is sometimes tied to the way you build the result: e.g. for vectors, map implemented in terms of reduce would start with an empty vector and then append to it.
That sucks. We want operation chains that are independent of the data structure they operate on. We want them to work on vectors, lists, channels, whatever - anything that can have a reduce-like operation (anything reducible).
However, it turns out you don't necessarily have to recreate the entity (e.g. list) when chaining. All you need to know how to do is invoke the next operation in the chain, i.e. the next reduction function. For example:
* map can apply the function on the element to get a new result, then call the next reduction function with the accumulator and that new result.
* filter can check the element and either return the old accumulator or apply the next reduction function with the element, etc.
Therefore, transducers simply take an additional argument: the next reduction function that should be applied. This lets you build up a chain of operations (map, filter, etc.) that doesn't care about the kind of entity it operates on. Only the last "step" must be a reduction function that knows how to build the result.
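The idea above can be sketched in a few lines. This is a simplified two-arity version (real Clojure transducers also have init and completion arities, and `mapping`/`filtering` are illustrative names, not core functions):

```clojure
;; A mapping "transducer": takes the next reduction function rf
;; and returns a new reduction function that transforms each
;; element before passing it along.
(defn mapping [f]
  (fn [rf]
    (fn [acc x]
      (rf acc (f x)))))

;; A filtering transducer: either passes the element to rf or
;; returns the accumulator unchanged.
(defn filtering [pred]
  (fn [rf]
    (fn [acc x]
      (if (pred x) (rf acc x) acc))))

;; Only the final step (conj here) knows how to build the result.
(reduce ((comp (mapping inc) (filtering even?)) conj) [] [1 2 3 4])
;; => [2 4]
```

Note that `comp` here composes right to left, but the data flows through `mapping` first and `filtering` second, which is why transducer chains read in the same order the data is processed.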
I wrote a conceptual starter guide a little while back ( https://yogthos.github.io/ClojureDistilled.html ), that covers all the basics that you need to know to get up and running.
It's easy to overstate the necessity of advanced features that very few programmers probably actually use.
If you keep this in mind, perhaps re-reading http://clojure.org/transducers won't be as intimidating.
As a general comment, to those new to Clojure, when you find something in Clojure that seems strange or different, I encourage you to ask "What does [function X] care about? (e.g. need to know)" and "What does [function X] not need to know?". This relentless drive to simplify design and responsibilities mean that functions are small and "opinionated", but in ways that are driven by constraints, not arbitrary decisions. So these choices make a lot of sense -- I'd argue they flow pretty naturally.
IMHO.
There will perhaps come a time when you want to know about and use transducers, but by then the concepts won't seem intimidating.
What is interesting is that these transducing operations can be written quite generically - code reuse is huge.
Usage of transducers is similar to Java 8 Streams or Haskell stream fusion, except that the implementation is fully generic, not dependent on the container/source.
http://blog.cognitect.com/blog/2014/8/6/transducers-are-comi...
[0] and big for practical purposes like Common Lisp.
[1] from the REPL you can get at all of Java, perhaps better than you can from Java itself.
[2] like lexical scope and continuations in Scheme.
The addition of transducers is an unfortunate case of Clojure "pulling a Haskell", valuing an elegant abstraction over ease of understanding and learning. Indeed, your comment alone shows that doing them (and especially giving them a high profile) was a mistake. Just because you can abstract something elegantly doesn't mean you should. No beautiful abstraction is worth scaring people away. Fortunately, Clojure doesn't make many such mistakes, and it usually tries to err on the side of pragmatism. I hope transducers aren't the beginning of a trend.
But just don't use transducers until you feel comfortable enough with the language. They're not an essential feature.
I imagine most applications aren't very sensitive to the performance gain, but it's good to know for when you need it. In addition it's a nice, testable, composable pattern. I've only done backend work, no cljs yet, but I'd call it reasonable practice to think in terms of compositions/transducers for most data transformation pipelines going forward, throughout the stack, whether performance matters or not.
(reduce + 0 (filter odd? (map inc (range 1000))))
Conceptually, this takes an input sequence of 1000 elements (range 1000), then creates an intermediate sequence from (map inc), then creates an intermediate sequence from (filter odd?), then reduces that with +. (I am glossing over a number of internal optimizations - chunking and transients - but the idea holds.)

Transducers let you represent the transformation parts as an independent (reusable) thing:
(def xf (comp (map inc) (filter odd?)))
You then apply that composite transformation in a single pass over the input: (transduce xf + 0 (range 1000))
transduce combines (a) input iteration, (b) transformation application, and (c) what to do with the results - in this case applying + as a reduce. Other functions also exist that make different choices about how to combine these parts: into collects results into a collection, sequence can incrementally produce a result, eduction can delay execution, and core.async chans can apply them in a queue-like way.

There are a number of benefits here:
1. Composability - transformations are isolated from their input and output contexts and thus reusable in many contexts.
2. Performance - the sequence example above allocates two large intermediate sequences. The transduce version does no intermediate allocation. Additionally, some collection inputs are "internally reducible" (including range in 1.7) which allows them to be traversed more quickly via reduce/transduce contexts. The performance benefits of these changes can be dramatic as you increase the size of input data or number of transformations. If you happen to have your data in an input collection (like a vector), into with transducers can be used to move data directly from one vector to another without ever producing a sequence.
3. Eagerness - if you know you will process the entire input into the entire output, you can do so eagerly and avoid overhead. Or you can get laziness via sequence (although there are some differences in what that means with transducers). The upshot is you now have more choices.
4. Resource management - because you have the ability to eagerly process an external input source, you have more knowledge about when it's "done" so you can release the resource.
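The functions named above, applied to the same composite transformation, might look like this sketch:

```clojure
;; The reusable transformation from the example above.
(def xf (comp (map inc) (filter odd?)))

;; (a)+(b)+(c) fused into one eager pass, no intermediate seqs:
(transduce xf + 0 (range 10))
;; => 25

;; Same transformation, collected directly into a vector:
(into [] xf (range 10))
;; => [1 3 5 7 9]

;; Incrementally produced (lazy-ish) result:
(take 2 (sequence xf (range 10)))
;; => (1 3)

;; eduction delays execution; the work happens at this reduce:
(reduce + 0 (eduction xf (range 10)))
;; => 25
```

The point is that `xf` itself never changes; only the context it runs in does.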
The problem is that there are datatypes, such as streams and channels, which are fundamentally sequence abstractions but for one reason or another don't implement the 'sequence' interface. You could wrap them in a lazy sequence but that can be complex and will introduce a performance penalty. When they wrote core.async, the authors found themselves reimplementing 'map', 'reduce', etc. for channels.
Transducers are a way to provide sequence operations to any kind of collection by splitting the part that interfaces with the collection from the operation itself. This way we can dispense with the old 'map', 'filter', etc. and just use transducers for everything. Because they're implemented with reducers, they're faster as well.
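As a sketch of the channel case, assuming core.async is on the classpath (and noting that `onto-chan!` is the current name; older versions spell it `onto-chan`):

```clojure
(require '[clojure.core.async :as a])

;; The same transducer that works on collections is attached to
;; the channel's buffer: values are transformed as they pass through.
(def ch (a/chan 10 (comp (map inc) (filter even?))))

;; Feed in some values (closes the channel when done) and
;; collect everything that comes out the other side.
(a/onto-chan! ch [1 2 3 4])
(a/<!! (a/into [] ch))
;; => [2 4]
```

No channel-specific map or filter is needed; the channel just invokes the reduction chain on each value it buffers.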
I believe one of the arguments was also that these couldn't be written in statically typed languages. Although, I do not know if this turned out to be true.
It's not true, they were just abstrusely defined and there's some weirdness around the implicit effects that you can choose to either ignore or incorporate into the equivalent.
Fusion is one of the main things Haskell is known for and an API that gets close enough to what one would use transducers for is `Control.Lens.Fold`.
Others have made more faithful attempts at making it explicit, such as in this post: http://jspha.com/posts/typing-transducers/
https://github.com/brandonbloom/fipp/commit/b83dfb3b3ac7c90d...
My use cases are 1) to emulate yield / a pipeline of generators 2) to avoid intermediate object creation. My code is an indirect port of some Haskell code, so it's a tad awkward, but overall Transducers were a small perf win over reducers (which was a big perf win over lazy seqs) and also provide a good effectful-ish model for my pipeline.
If you're familiar with Python Leiningen takes the place of both pip and virtualenv. Every project has a project.clj file where you declare your project dependencies and running "lein deps" from your project root handles it from there. This includes libraries and the version of Clojure you're targeting. When you add/delete/change a dependency in project.clj you simply run "lein deps" again. You never have to run "pip freeze" or make a requirements.txt file because project.clj serves that purpose as well.
As an example, Leiningen's own project.clj: https://github.com/technomancy/leiningen/blob/master/project...
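For something smaller, a minimal project.clj looks roughly like this (the project name and the cheshire dependency are just hypothetical examples):

```clojure
;; project.clj - declares the Clojure version alongside ordinary
;; library dependencies; "lein deps" resolves all of them.
(defproject myapp "0.1.0-SNAPSHOT"
  :description "Example app"
  :dependencies [[org.clojure/clojure "1.7.0"]
                 [cheshire "5.5.0"]]   ; JSON library, as an example
  :main myapp.core)
```

Changing a version number here and re-running "lein deps" is the whole upgrade workflow.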
For distributing your app, you just compile it to a JAR, and the user only needs plain old java to run it.
Clojure is really a simple language to use, which is one reason why I like it a lot (and use it for most of my projects). Compared to Scala and Haskell, Clojure is fast to learn. Java interop with Clojure is also really simple. I often use existing Java code by including it in my Clojure project and adding the magic Java build stuff to my project.clj file. Some purists might like to have just Clojure in their projects, but being able to pull in my old Java code (and other open source libraries) as needed is a large time saver. If Java libraries are small, I like to have them in my project. I use IntelliJ for both Clojure and Java development and having all the code in one place is often nicer than pulling in JAR dependencies.
It's way more complicated to have apt-get for some things, lein for others, pip install, quicklisp, ruby gems, emacs packages, and every other environment reimplementing something that should be possible to reuse.
From the outside, and without researching a lot, it seems that the problem is that authority/responsibility is not easily delegated in the right way.
Perhaps someone can create something like a federated-containerized(contained?)-buzzword-github-like-blockchain-social-aptget-buzzword-thingie. Only half-joking here, but perhaps there's something a la git/dropbox, a product that you don't see coming until you turn the corner and They Just Got It Right. Something created on top of http://ipfs.io , for example, which looks very promising!
EDIT: Perhaps a simpler solution is to enable apt to delegate specific domains to packages marked as package managers, and have them talk through a specified protocol...
That's a win if you switch around a lot.