I tend to think of code in a cellular sense, as in, biology. Outside the cell is a big scary world that you don't control. There's a cell wall where you only permit certain things in. Inside the cell, the cell trusts that only good stuff has been let in. It may also have to package certain things on the way out to the next cell.
In this case, the observation I'd make is about that big scary external world. You don't get to impose anything on it, or at least, your control is a great deal less rigid than your code would like. Even if you think you control your internal network, hackers might explain otherwise to you in the worst possible way. You can't impose compliance with functional paradigms, imperative paradigms, security levels, soundness of data, whether or not a bit was flipped during transit, that the metadata matches what was sent, or anything.
Obviously you can't fully write your code that way (even real cells get it wrong sometimes too), but that's the principle I try to view the world through. Even within an application where every individual component is, say, compliant with functional programming, the interactions still can't be counted on to have any particular properties that you don't check somehow.
FP, OO, data-driven design, all that sort of stuff, that's for what you do inside the cells, and maybe how you choose to structure the code implementing your cell wall. But you almost always end up forced to treat the outside world as bereft of any structure you don't rigidly check for and enforce yourself, if not outright hostile (anything with security implications).
Almost no one actually cares how a particular program was written or how it understands its input and output -- we care that it works with some level of quality. How one gets that result is irrelevant in the end. It could be written directly in twisty assembly code. Do not care.[1]
Parts of these paradigms have useful tools for building working programs, but a great majority of the contents of these paradigms are simply about organization. This shows up most clearly in OO, and of course, functions are a way to organize code. This isn't a bad thing -- certainly helpful for humans working on a code base -- but it isn't actually relevant to the task the program itself performs or how fast or well it performs it.
So, of course, the input and output of a program aren't really conformant to any paradigm, because the paradigms are about organizing programs, not about performing a particular task.
[1] (it might even be more reliable, in some cases, because you would be forced to be careful and pay attention and all those little details you want to ignore are right there in your face (see: async) :-))
I don't think anyone is making a "more correct" or even "more performant" argument here; maybe, a "more reliable" argument - but only by extension of "better organized, so less likely to include certain classes of bugs".
I think you very much can say that "at the boundaries, applications are procedural" (i.e., they do side-effecting things sequentially).
That's not incompatible with FP. On the contrary, FP used properly lets you push all that procedural code to smallish kernels at the "boundaries" so that all the rest of the code can be pure (thus easily tested).
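To make that concrete, here's a minimal sketch (all names made up) of the "functional core, imperative shell" shape -- the pure part makes every decision, and a thin procedural kernel at the boundary does the I/O:

```python
def plan_greeting(name: str) -> str:
    """Pure core: all the decision-making, no I/O, trivially testable."""
    cleaned = name.strip() or "world"
    return f"hello, {cleaned}"

def main() -> None:
    """Imperative shell: the only code that touches stdin/stdout."""
    print(plan_greeting(main_input := input("name? ")))
```

The shell is too thin to be worth much unit testing; everything interesting lives in `plan_greeting`, which needs no setup at all.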
Well, side-effecting, yes, because that's literally how we define boundaries.
Sequentially? Not so much; concurrency (whether asynchrony or true parallelism) is important largely because simple sequential behavior doesn't capture what happens naturally at the boundaries well. (I suppose on an individual boundary, defined in the right way, there is likely to be a sequencing constraint, but not in aggregate across the boundaries of the system.)
You may be able to; I can't. I have a number of incoming event streams that are not necessarily ordered.
Now, like I said, you don't implement all code everywhere for all possible missteps, so you may have specific apps that get away with assuming orderedness. But it is not a general thing you can rely on.
Really, since the boundary is where we push all the awful stuff - that boundary (depending on the application) can be any sort of terrible.
The next and last paragraph does not then explicitly make that statement, instead ending with:
> Functional programming offers an alternative that, while also not perfectly aligned with all data, seems better aligned with the needs of working software.
I think that's right. OOP is just a disaster, but FP is not. FP is not about making all of a program pure, but, rather, about isolating all the bits that can be pure (thus making them easy to test) and collecting all actual impurity into as small a bunch of code as possible.
The impure code you end up having will look very procedural.
In Haskell, you do this by running all side-effect-having code "in the IO monad", and monadic code looks procedural in the same way that PROGN looks procedural in Lisp, even though PROGN is (or can be) a macro that turns your procedural statements into a single expression.
So it's completely fair to say that "at the boundaries, applications are procedural", because, well, it's patently true!
FP helps by helping the programmer push impure code as much as possible towards that boundary, leaving the rest to be as pure (and thus easily-tested) as possible.
For example, if you have code that uses the time of day for externally-visible effects, then pass it the time of day so as to make it more pure and easier to test. This one is counter-intuitive because we like to just get-the-current-time, but I've done this to make code that does epochal cryptographic key derivation deterministic and, therefore, testable.
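A sketch of what that clock-as-parameter trick can look like (`derive_epoch_key` and its parameters are invented for illustration, not the actual key-derivation code being described):

```python
import hashlib
import time
from typing import Callable

def derive_epoch_key(secret: bytes,
                     now: Callable[[], float] = time.time,
                     epoch_seconds: int = 3600) -> bytes:
    """Derive a key tied to a time epoch. The clock is a parameter,
    so tests can pin it to a fixed instant and get deterministic output."""
    epoch = int(now()) // epoch_seconds
    return hashlib.sha256(secret + epoch.to_bytes(8, "big")).digest()

# Production call sites use the default (real) clock:
#   derive_epoch_key(b"secret")
# Tests pass a frozen clock instead:
#   derive_epoch_key(b"secret", lambda: 7200.0)  # always epoch 2
```

The function is now pure with respect to its arguments, so test vectors for it are reproducible forever.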
Why does nobody appear to be having runaway success with it, if it is the superior paradigm?
John Carmack says that FP is a hindrance when rendering graphics, working with buffers, etc. So with respect to presenting data, UI and graphics will likely always be done in an imperative language.
Rich Hickey (creator of Clojure) has a proprietary product, Datomic, his design for an immutable database, which has only been around since 2013. He says that disk storage was so costly in the past, but is now so cheap, that the existing server industry is built atop this legacy of old ideas. So FP as applied to storing data is likely in its infancy.
"Superior" doesn't always mean "winner".
Once you have recorded events, figuring out what side-effects (often causing new events in other systems) should be triggered from the set of all events input into the system can be coded using whatever flavor of functional/relational/reactive programming.
I think the combination of event sourcing and functional programming -- along with databases that support this way of working better than today's do, and that don't have OOP as their main target audience -- is the future. (And I absolutely don't mean Kafka. SQL comes at least closer; relations and structure are important for working efficiently with events and implementing functional business logic on top of them.)
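As a rough sketch of the pure part of that idea (the event types and names here are invented): the decision about which side effects to trigger is just a function of the event log, so replaying the same log always yields the same commands.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    kind: str       # e.g. "paid", "shipped"
    order_id: int

def side_effects_for(log):
    """Pure: given the whole event log, decide which side effects
    (here, shipping commands) should be triggered. Same log in,
    same commands out -- no hidden state, trivially replayable."""
    paid = {e.order_id for e in log if e.kind == "paid"}
    shipped = {e.order_id for e in log if e.kind == "shipped"}
    return [f"ship order {oid}" for oid in sorted(paid - shipped)]
```

Only the code that actually executes those commands (and appends new events) needs to be impure.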
In OO complexity is hidden. So you don't have to deal with the complexity of the internal state of an object while using the object. It's a divide and conquer approach.
In FP complexity is constrained. Pure functions and immutable data make it easier to reason about the code. This allows you to see all the workings and not get overwhelmed.
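A toy contrast of the two (both `Counter` and `bump` are made up for illustration):

```python
class Counter:
    """OO: the count is hidden behind the object's interface;
    callers never touch the mutable state directly."""
    def __init__(self):
        self._n = 0
    def bump(self):
        self._n += 1
    def value(self):
        return self._n

def bump(n: int) -> int:
    """FP: the same state change as a pure function over an
    immutable value -- input and output are both in plain view."""
    return n + 1
```

Both work; the difference is whether the state transition is encapsulated away from you or laid out in front of you.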
Or maybe it is easy to find fault with programming languages you actually use and to idealize programming languages you don't use.
I imagine a world where Common LISP won, and reddit/programming would be kvetching about all the details CL got wrong while asking "How can I get a COBOL job?" and "Did you know that PHP syntax is based on Chomsky's generative grammar?"
I worked on a project that involved building a stream processing engine in Scala that heavily used Monads.
I remember being told by the manager what the error handling strategy was and thinking "This is like that Amway presentation where they 'draw circles' showing how 8 people get a cut of the $7 tube of toothpaste they sell you and then ask 'How can we beat the prices in the supermarket?', they strike the presentation board with a pointer and say 'By eliminating the middleman!'"
Now, they could have handled errors correctly with monads just as they could have handled errors correctly with exceptions, except that they didn't. That manager approved code review after code review where error handling was absent.
Functional programming is a convenient fantasy, a highly restrictive and controlled environment that allows us to make large assertions about bodies of code - "no network IO can take place here"; "your inputs will most assuredly be numbers that can be added together."
It's the equivalent of assuming the cow is a sphere [0]. A useful mental model, that ultimately breaks down upon contact with the "real world."
Hence the imperative glue code / monadic actions wiring all of the pretty, perfect abstractions together.
That is, you can make chunks of your application functional - they just can't be chunks that touch the exterior. It's not a "mental model" - it's something you construct in the code.
Now, you may not be able to do that with all the "interior" code, either. Parts may have too much intrinsic state for functional programming to be a useful approach. But for other interior parts, hey, you like functional? Make it so.
Sort of like test vectors for cryptographic functions.
You still have to be careful to test all the edge cases (assuming you can't test the full domain of each function), naturally. But the fact that the functional core doesn't need setup means the tests of it have less startup and teardown overhead and so will generally run faster (unless they take so much time that setup overhead is in the noise).
As for the "imperative shell", you may be able to mock everything w/o having to change it, though you could also set up a test environment with all the external things it needs.
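For the functional core, the tests really can look like cryptographic test vectors: inputs paired with expected outputs, no setup or teardown at all. A toy sketch (`clamp` is a made-up example function):

```python
def clamp(x: int, lo: int, hi: int) -> int:
    """A pure function under test."""
    return max(lo, min(x, hi))

# "Test vectors": inputs paired with expected outputs, including
# the edge cases. No fixtures, no mocks, no teardown.
VECTORS = [
    ((5, 0, 10), 5),    # in range
    ((-3, 0, 10), 0),   # below the lower edge
    ((99, 0, 10), 10),  # above the upper edge
    ((0, 0, 0), 0),     # degenerate range
]

for args, expected in VECTORS:
    assert clamp(*args) == expected
```

Because there's no environment to construct, vectors like these run in microseconds and can be piled up cheaply around every edge case.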