If I programmed enough in lisp I think my brain would adjust to this, but it's almost like I can't fully appreciate the language because it reads in the "wrong order".
I’m not certain how true that really is. This:
foo(bar(x), quux(y), z);
looks pretty much identical to:
(foo (bar x) (quux y) z)
And of course if you want to assign them all to variables:
int bar_x = bar(x);
char quux_y = quux(y);
return foo(bar_x, quux_y, z);
is pretty much the same as:
(let ((bar-x (bar x))
(quux-y (quux y)))
(foo bar-x quux-y z))
FWIW, ‘per se’ comes from the Latin for ‘by itself.’
One of the things that sucks about LISP is - master it and every programming language is nothing more than an AST[0].
:-D
(let (bar-x (bar x))
(quux-y (quux y)))
(foo bar-x quux-y z)
Why is the second set of parens necessary? The nesting makes sense to an interpreter, I'm sure, but it doesn't make sense to me.
Is each top-level set of parens a 'statement' that executes? Or does everything have to be embedded in a single list?
This is all semantics, but for my python-addled brain these are the things I get stuck on.
let bar_x = x.bar()
let quux_y = y.quux()
return (bar_x, quux_y, z).foo()
(progn
(do-something)
(do-something-else)
(do-a-third-thing))
The only case where it's a bit different, and took some time for me to adjust, was that adding bindings adds an indent level.
(let ((a 12)
(b 14))
(do-something a)
(do-something-else b)
(setf b (do-third-thing a b)))
It's still mostly top-bottom, left to right. Clojure is quite a bit different, but it's not a property of lisps themselves, I'd say. I have a hard time coming up with examples usually, so I'm open to examples of being wrong here.
(define (start request)
(define a-blog
(cond [(can-parse-post? (request-bindings request))
(cons (parse-post (request-bindings request))
BLOG)]
[else
BLOG]))
(render-blog-page a-blog request))
https://docs.racket-lang.org/continue/index.html
https://github.com/hipeta/arrow-macros
The common complaint that Common Lisp lacks some feature is often addressed by noting how easy it is to add that feature.
I don't understand why you think this. Can you give an example?
The parenthesis do really disappear, just like the hieroglyphics on C influenced languages, it is a matter of habit.
At least it was for me.
Plus, syntax errors can easily take several minutes to fix: if the syntax is wrong, auto-format doesn't work right, and then you have to read a wall of text to find out where the missing close paren should have been.
Language shapes the way we think, and determines what we can think about.
- Benjamin Lee Whorf[0]
From the comments in the post:
Ask a C programmer to write factorial and you will likely
get something like this (excuse the underbars, they are
there because blogger doesn't format code in comments):
int factorial (int x) {
if (x == 0)
return 1;
else
return x * factorial (x - 1);
}
And the Lisp programmer will give you:
(defun factorial (x)
(if (zerop x)
1
(* x (factorial (- x 1)))))
Let's see how we can get from the LISP version to something akin to the C version.
First, let's "modernize" the LISP version by replacing parentheses with "curly braces" and adding some commas and newlines just for fun:
{
defun factorial { x },
{
if { zerop x },
1 {
*,
x {
factorial {
- { x, 1 }
}
}
}
}
}
This kinda looks like a JSON object. Let's make it into one and add some assumed labels while we're at it.
{
"defun" : {
"factorial" : { "argument" : "x" },
"body" : {
"if" : { "zerop" : "x" },
"then" : "1",
"else" : {
"*" : {
"lhs" : "x",
"rhs" : {
"factorial" : {
"-" : {
"lhs" : "x",
"rhs" : "1"
}
}
}
}
}
}
}
}
Now, if we replace "defun" with the return type, replace some of the curlies with parentheses, get rid of the labels we added, use infix operator notation, and not worry about it being a valid JSON object, we get:
int
factorial ( x )
{
if ( zerop ( x ) )
1
else
x * factorial ( x - 1 )
}
Reformat this a bit, add some C keywords and statement delimiters, and Bob's your uncle.
0 - https://www.goodreads.com/quotes/573737-language-shapes-the-...
Batch programs are easy to fit in this model generally. A compiler is pretty clearly a pure function f(program source code) -> list of instructions, with just a very thin layer to read/write the input/output to files.
Web servers can often fit this model well too: a web server is an f(request, database snapshot) -> (response, database update). Making that work well is going to be gnarly in the impure side of things, but it's going to be quite doable for a lot of basic CRUD servers--probably every web server I've ever written (which is a lot of tiny stuff, to be fair) could be done purely functional without much issue.
Display also can be made work: it's f(input event, state) -> (display frame, new state). Building the display frame here is something like an immediate mode GUI, where instead of mutating the state of widgets, you're building the entire widget tree from scratch each time.
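That display shape can be sketched as a pure step function. This is a minimal illustration in Python; the event names and the tiny counter "app" are invented for the example:

```python
# Sketch of f(input_event, state) -> (display_frame, new_state).
# The state is never mutated; a fresh state dict and a fresh "frame"
# (here just a string, standing in for a widget tree) are built each step.

def step(event, state):
    """Pure: the same event and state always yield the same frame and new state."""
    if event == "increment":
        new_state = {**state, "count": state["count"] + 1}
    elif event == "reset":
        new_state = {**state, "count": 0}
    else:
        new_state = state
    # Rebuild the entire "widget tree" from scratch, immediate-mode style.
    frame = f"[ count = {new_state['count']} ]"
    return frame, new_state

state = {"count": 0}
frame, state = step("increment", state)
frame, state = step("increment", state)
```

Because `step` is pure, replaying the same event log always reproduces the same sequence of frames, which is what makes this style easy to test.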
In many cases, the limitation of purely functional code isn't that somebody somewhere has to do I/O, but rather the impracticality of faking immutability when the state is too complicated.
I have respect for OCaml, but that's mostly because it allows you to write mutable code fairly easily.
Roc codifies the world vs core split, but I'm skeptical how much of the world logic can be actually reused across multiple instances of FP applications.
* Encapsulation? What's the point of having it if it's perfectly sealed off from the world? Just dead-code eliminate it.
* Private? It's not really private if I can Get() to it. I want access to that variable, so why hide it from myself? Private adds nothing because I can just choose not to use that variable.
* Const? A constant variable is an oxymoron. All the programs I write change variables. If I want a variable to remain the same, I just won't update it.
Of course I don't believe in any of the framings above, but it's how arguments against FP typically sound.
Anyway, the above features are small potatoes compared to the big hammer that is functional purity: you (and the compiler) will know and agree upon whether the same input will yield the same output.
Where am I using it right now?
I'm doing some record linkage - matching old transactions with new transactions, where some details may have shifted. I say "shifted", but what really happened was that upstream decided to mutate its data in-place. If they'd had an FPer on the team, they would not have mutated shared state, and I wouldn't even need to do this work. But I digress.
Now I'm trying out Dijkstra's algorithm, to most efficiently match pairs of transactions. It's a search algorithm, which tries out different alternatives, so it can never mutate things in-place - mutating inside one alternative will silently break another alternative. I'm in C#, and was pleasantly surprised that ImmutableList etc actually exist. But I wish I didn't have to be so vigilant. I really miss Haskell doing that part of my carefulness for me.
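To illustrate why search over alternatives wants immutability, here is a toy brute-force matcher in Python (not Dijkstra, and not the commenter's C#; "transactions" are just numbers here). Each branch extends an immutable tuple of matches, so exploring one alternative can never corrupt another:

```python
# Toy illustration: branching search with immutable state.
# Every recursive branch builds its own extended tuple of (old, new) pairs;
# nothing is mutated in place, so sibling branches cannot interfere.

def best_matching(olds, news, matched=()):
    """Return the pairing of old/new transactions minimizing total distance."""
    if not olds or not news:
        return matched
    old = olds[0]
    candidates = []
    for i, new in enumerate(news):
        branch = best_matching(olds[1:], news[:i] + news[i + 1:],
                               matched + ((old, new),))
        candidates.append(branch)
    return min(candidates, key=lambda m: sum(abs(a - b) for a, b in m))

pairs = best_matching((10, 20), (21, 11))
```

If `matched` were a shared mutable list, appending inside one branch would silently leak into the next branch, which is exactly the bug class the commenter is avoiding with `ImmutableList`.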
C# has introduced many functional concepts. Records, pattern matching, lambda functions, LINQ.
The only thing I am missing and will come later is discriminated unions.
Of course, F# is better fitted for the job if you want a mostly functional workflow.
What exactly does this mean? Haskell has plenty of non-deterministic functions — everything involving IO, for instance. I know that IO is non-deterministic, but how is that expressed within the language?
Functional programming simply says: separate the IO from the computation.
> Pretty much anything I've written over the last 30 years, the main purpose was to do I/O, it doesn't matter whether it's disk, network, or display.
Every useful program ever written takes inputs and produces outputs. The interesting part is what you actually do in the middle to transform inputs -> outputs. And that can be entirely functional.
My work needs pseudorandom numbers throughout the big middle, for example, drawing samples from probability distributions and running randomized algorithms. That's pretty messy in an FP setting, particularly when the PRNGs get generated within deeply nested libraries.
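One common way to tame this is to thread an explicit generator through the pipeline instead of relying on a hidden global one. A minimal Python sketch, with invented function names:

```python
import random

# Sketch: pass the PRNG explicitly through "the big middle" rather than
# reaching for a hidden global generator. Given the same seed, the whole
# pipeline is deterministic, hence reproducible and testable.

def draw_samples(rng, n):
    """Depends only on the rng state passed in, not on global state."""
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def pipeline(seed):
    rng = random.Random(seed)   # all randomness flows from this one object
    samples = draw_samples(rng, 3)
    return sum(samples)
```

This is the moral equivalent of Haskell's approach of passing a generator (or running in a state monad), just done by convention rather than enforced by the type system. The pain the commenter describes is real: every deeply nested library call needs the `rng` parameter plumbed through.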
So a program is a function that transforms the input to the output.
What about managing state? I think that is an important part, and it's easy to mess up.
Can you please elaborate on this point? I read it as this web page (https://wiki.c2.com/?SeparateIoFromCalculation) describes, but I fail to see why it is a functional programming concept.
Can you actually name something? The only thing I can come up with is working with interesting algorithms or datastructures, but that kind of fundamental work is very rare in my experience. Even if you do, it's quite often a very small part of the entire project.
But most functions in Common Lisp do mutate things, there is an extensive OO system and the most hideous macros like LOOP.
I certainly never felt constrained writing Common Lisp.
That said, there are pretty effective patterns for dealing with IO that allow you to stay in a mostly functional / compositional flow (dare I say monads? but that sounds way more clever than it is in practice).
It's less about what the language "allows" you to do and more about what the ecosystem and libraries "encourage" you to do.
Erlang is strictly (?) a functional language, and the reason why it was invented was to do network-y stuff in the telco space. So I'm not sure why I/O and functional programming would be opposed to each other like you imply.
First and foremost Erlang is a pragmatic programming language :)
I also wrote a toy resource scheduler at an HTTP endpoint in Haskell[2]. Writing I/O in Haskell was a learning curve but was ultimately fine. Keeping logic separate from I/O was the easy thing to do.
Functional core, imperative shell is a common pattern. Keep the side effects on the outside. Instead of doing side effects directly, just return a data structure that can be used to enact the side effect
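A minimal sketch of that pattern in Python (all names invented for the example): the core returns a description of the effects as data, and only the shell performs them:

```python
# Functional core: decide *what* to do, as plain data. No side effects here.
def plan_greeting(name):
    if not name:
        return [("print", "Hello, stranger!")]
    return [("print", f"Hello, {name}!"),
            ("log", f"greeted {name}")]

# Imperative shell: the only place where effects actually happen.
def run(effects):
    for kind, payload in effects:
        if kind == "print":
            print(payload)
        elif kind == "log":
            pass  # e.g. append to a log file

effects = plan_greeting("Ada")
run(effects)
```

The payoff is testability: the core needs no mocking at all, you just compare the returned data structure against what you expect.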
What I will add is look up how the GHC runtime works, and the STGM. You may find it extremely interesting. I didn't "get" functional programming until I found out about how exotic efficient execution of functional programs ends up being.
So this only really means:
Purely Functional Programming by default.
In most programming languages you can write
"hello " + readLine()
And this would intermix a pure function (string concatenation) and an impure effect (asking the user to write some text). And this would work perfectly.
By doing so, the order of evaluation becomes essential.
With purely functional programming (by default), you must explicitly separate the part of your program doing I/O from the part doing only pure computation. This is enforced using a type system that tracks I/O: thus the difference between Haskell's default `IO` and OCaml, which does not need it, for example.
In Haskell you are forced by the type system to write something like:
do
name <- getLine
let s = "Hello " <> name <> "!"
putStrLn s
you cannot mix the `getLine` directly in the middle of the concatenation operation.
But while this is a very different style of programming, I/O is just more explicit, and it "costs" more, because writing code with I/O is not as elegant and easy to manipulate as pure code. Thus it naturally induces a way of coding that makes you conscious of which parts of your program need I/O and which parts you could do with only pure functions.
In practice, ... yep, you end up working in a monad "specific to your application domain" that looks a lot like the IO monad, and will most often contain IO.
Another option is to use a free monad for your entire program, which lets you write in your own domain language and control its evaluation (either using IO, or another system that simulates IO but is not really IO, typically for testing purposes).
There is world, and there is a model of the world - your program. The point of the program, and all functions, is to interact with the model. This part, data structures and all, is pure.
The world interacts with the model through an IO layer, as in haskell.
Purity is just an enforcement of this separation.
Functional React follows this pattern. The issue is when the programmer thinks the world is some kind of stable state that you can store results in. It's not; the whole point is to be created anew and restart the whole computation flow. The escape hatches are the hooks, and each has a specific usage and pattern to follow to survive world recreation. That is why you should be careful with them: they are effectively the world for subcomponents. So when you add to the world with hooks, interactions with the addition should stay at the same level.
Where have you ever heard anyone talk about side-effect free programs, outside of academic exercises? The linked post certainly isn't about 100% side-effect/state free code.
Usually, people talk about minimizing side-effects as much as possible, but since we build programs to do something, sometimes connected to the real world, it's basically impossible to build a program that is both useful and 100% side-effect free, as you wouldn't be able to print anything to the screen, or communicate with other programs.
And minimizing side-effects (and minimizing state overall) has a real impact on how easy it is to reason about the program. Being really careful about where you mutate things leads to most of the code being very explicit about what it's doing, with code only affecting data that is close to the code itself, compared to intertwined state mutation, where things everywhere in the codebase can affect state anywhere.
(* Yes, you can technically write it procedurally like a good C programmer, sure.)
One is the implicit function calls. For example, you'll usually see calls like this: `(+ 1 2)`, which translates to 1 + 2, but I would find it clearer if it was `(+(1,2))`, where you have a certain explicitness to it.
It doesn't stop me from using Lisp languages (Racket is fun, and I've been investigating Clojure), but it took way too long for my brain to grok the implicit function stuff.
My other complaint is how the character `'` can have overloaded meaning, though I'm not entirely sure if this is implementation dependent or not.
In theory ' just means QUOTE, it should not be overloaded (although I've mostly done Common Lisp, so no idea if in other impl that changes). Can you show an example of overloaded meaning?
If you want to use commas, you can in Lisp dialects I’m familiar with—they’re optional because they’re treated as whitespace, but nothing is stopping you if you find them more readable!
;)
No thanks
(defun foo (x)
(declare (type (Integer 0 100) x))
(* x
(get-some-value-from-somewhere-else)))
And then do a (describe 'foo) in the REPL to get Lisp to tell me that it wants an integer from 0 to 100.
Take default values for function arguments. In most languages, that's a careful consideration of the nuances of the parser, how the various symbols nest and prioritize, whether a given symbol might have been co-opted for another purpose... In LISP, it's "You know how you can have a list of symbols that are the arguments for the function? Some of those symbols can be lists now, and if they are, the first element is the symbolic argument name and the second element is a default value."
I personally have used LISP a lot. It was a little rough at first, but I got it. Despite having used a lot of languages, it felt like learning programming again.
I don't think there's something special about me that allowed me to grok it. And if that were the case, that's a horrible quality in a language. They're not supposed to be difficult to use.
Five exclamation marks, a sure sign of an insane mind
That's what I think about five closing parentheses too... But tbh I am also jealous, because I can't program in lisp at all
> Lisp is easier to remember,
I don't feel this way. I'm always consulting the HyperSpec or googling the function names. In this way, it's the same to me as any other dynamically typed language, such as Python.
> has fewer limitations and hoops you have to jump through,
Lisp as a language has incredibly powerful features found nowhere else, but there are plenty of hoops. The CLOS truly feels like a superpower. That said, there is a huge dearth of libraries. So in that sense, there are usually lots of hoops to jump through to write an app. It's just that I like jumping through them, because I like writing code as a hobby. So fewer limitations, more hoops (supporting libraries I feel the need to write).
> has lower “friction” between my thoughts and my program,
Unfortunately I often think in Python or Bash because those are my day job languages, so there's often friction between how I think and what I need to write. Also AI is allegedly bad at lisp due to reduced training corpus. Copilot works, sorta.
> is easily customizable,
Yup, that's its defining feature. Easy to add to the language with macros. This can be very bad, but also very good, depending on its use. It can be very worth it both to implementer and user to add to the language as part of a library if documented well and done right, or it can make code hard to read or use. It must be used with care.
> and, frankly, more fun.
This is the true reason I actually use Lisp. I don't know why. I think it's because it's really fun to write it. There are no limitations. It's super expressive. The article goes into the substitution principle, and this makes it easy to refactor. It just feels good having a REPL that makes it easy to try new ideas and a syntax that makes refactoring a piece of cake. The Lisp Discord[1] has some of the best programmers on the planet in it, all easy to talk to, with many channels spanning a wide range of programming interests. It just feels good to do lisp.
Which Common LISP or Scheme environment (that runs on, say, Ubuntu Linux on a typical machine from today) gets even close to the past's LISP machines, for example? And which could compete with IntelliJ IDEA or PyCharm or Visual Studio Code?
- truly interactive development (never wait for something to restart, resume bugs from any stack frame after you fixed them),
- self-contained binaries (easy deployment, my web app with all the dependencies, HTML and CSS is ±35MB)
- useful compile-time warnings and errors, a keystroke away, for Haskell levels see Coalton (so better than Python),
- fast programs compiled to machine code,
- no GIL
- connect to, inspect or update running programs (Slime/Swank),
- good debugging tools (interactive debugger, trace, stepper, watcher (on some impls)…)
- stable language and libraries (although the implementations improve),
- CLOS and MOP,
- etc
- good editor support: Emacs, Vim, Atom/Pulsar (SLIMA), VScode (ALIVE), Jetbrains (SLT), Jupyter kernel, Lem, and more: https://lispcookbook.github.io/cl-cookbook/editor-support.ht...
What we might not get:
- advanced refactoring tools - also because we need them less, thanks to the REPL and language features (macros, multiple return values…).
---
For a lisp machine of yesterday running on Ubuntu or the browser: https://interlisp.org/
But Lispworks is the only one that makes actual tree-shaken binaries, whereas SBCL just throws everything in a pot and makes it executable, right?
> good editor support: Emacs, Vim, Atom/Pulsar (SLIMA), VScode (ALIVE)
I can't speak for those other editors, but my experience with Alive has been pretty bad. I can't imagine anyone recommending it has used it. It doesn't do what slime does, and because of that, you're forced to use Emacs.
Calva for Clojure, however, is very good. I don't know why it can't be this way for CL.
IDEs provide such environments for the most common languages but major IDEs offer meager functionality for Lisp/Scheme (and other "obscure" languages). With a concerted effort it's possible an IDE could be configured to do more for Lisp. Thing is the amount of effort required is quite large. Since AFAIK no one has taken up the challenge, we can only conclude it's not worth the time and energy to go there.
The workflow I've used for Scheme programming is pretty simple. I can keep as many Emacs windows ("frames") open as necessary with different views of one or several modules/libraries, a browser for documentation, terminals with REPL/compiler, etc. Sort of a deconstructed IDE. Likely it does take a bit more cognitive effort to work this way, but it gets the job done.
It's also extremely fun, you go from building Eliza to a full pattern matcher to a planning agent to a prolog compiler.
That's why I keep rekindling my learn-lisp effort. It feels like I'm just scratching the surface re: the fun that can be had.
https://github.com/sideshowcoder/core-logic-sudoku-solver/bl...
I find these types of comments extremely odd, and I very much support lisp and lisp-likes (I'm a particular fan of Clojure). I can only see adding the parenthetical qualifier as a strange bias that throws some kind of doubt onto other languages, which is unwarranted considering lisp at its base is usually implemented in those "other general purpose languages".
If you can implement lisp in a particular language then that particular language can de facto do (at least!) everything lisp can do.
Any code that runs on a computer (using the von Neumann architecture) boils down to just a few basic operations: Read/write data, arithmetic (add/subtract/etc.), logic (and/or/not/etc.), bit-shifting, branches and jumps. The rest is basically syntactic sugar or macros.
If your preferred programming language is a pre-compiled type-safe object oriented monster with polymorphic message passing via multi-process co-routines, or high-level interpreted purely functional archetype of computing perfection with just two reserved keywords, or even just COBOL, it's all going to break down eventually to the ops above.
But even so
> it's all going to break down eventually to the ops above.
That's not true either. Different runtimes will break down into a completely different version of the above. C is going to boil down to a different set of instructions than Ruby. That would make Ruby incapable of doing some tasks, even with a JIT. And writing performance sensitive parts in C only proves the point.
"Any language can do anything" is something we tell juniors who have decision paralysis on what to learn. That's good for them, but it's not true. I'm not going to tell a client we're going to target a microcontroller with PHP, even if someone has technically done it.
There are special forms in LISP, but that is a far cry from the amount of magic that can only be done in the compiler or at runtime for many languages out there.
For a famous example, see Clasp: https://github.com/clasp-developers/clasp
(I do really like Lisp).
Well, that might be true for Scheme, but not for CL. There are endless forms for loops. I will never remember all of them, or even a fraction of them. Going through Guy Steele's CL book, I tend to think that I have a hard time remembering most of the forms, functions, and their signatures.
because you don't have money to waste on doctors?
Many things can be viewed as coordination problems. All of life can be viewed as being about coordination between tasks.
But I want to engage in good faith and assume you have some way of making this productive. What angle are you going for?
I think Haskell and ML had lambda expressions since like 1990.
In particular, it implies a coherent design around scope and extent. And, much more indirectly, it points to time. EVAL-WHEN has finally made a bit of a stir outside Lisp.
until -> since
> until -> since
I think "only since recently" is not standard English, but, even if it were, I think it would change the intended meaning to say that they were not available in Lisp until recently, the opposite of what was intended. I find it clearer to move the "only": "were available only in Lisp until recently."
Lisp is on my list of languages to learn someday, but I’ve already tried to pick up Haskell, and while I did enjoy it and have nothing but respect for the language, I ultimately abandoned it because it was just too time-consuming for me to use on a day-to-day basis. Although I definitely got something out of learning to program in a purely functional language, and in fact feel like learning Haskell made me a much better Ruby programmer.
You really should try lisp. I liked Clojure a lot coming from Ruby because it has a lot of nice ergonomics other lisps lack. I think you'd get a lot out of it.
Ruby has all those features but (to my personal taste) makes it less obvious that things are that wild.
(But in both languages I get to play the game "Where the hell is this function or variable defined?" way more often than I want to. There are some advantages to languages that have a strict rule about modular encapsulation and requiring almost everything to be imported into the current context... With Rails, in particular, I find it hard to understand other people's code because I never know if a given symbol was defined in another file in the codebase, defined in a library, or magicked into being by doing string transformations on a data source... In C++, I have to rely on grep a lot to find definitions, but in Ruby on Rails not even grep is likely to find me the answer I want! Common LISP is similarly flexible, with a lot of common library functions that magick new symbols into existence, but the codebases I work on in LISP aren't as large as the Ruby on Rails codebases I touch, so it bites me less.)
At first glance, it looks like your strong-typing tool. And it can be. You can build a static analyzer that will match, as best it can, the type of the form to the value-type and throw errors if they don't match. It can also be a runtime check; the runtime is allowed to treat `the` as an assert and throw an error if there's a mismatch.
But what the spec actually says is that it's a special operator that does nothing but return the evaluation of the form if the form's value is of the right type and the behavior is undefined otherwise. So relative to, say, C++, it's a clean entrypoint for undefined behavior; it signals to a Lisp compiler or interpreter "Hey, the programmer allows you to do shenanigans here to make the code go faster. Throw away the runtime type identifier, re-represent the data in a faster bit-pattern, slam this value into functions without runtime checks, assume particular optimizations will succeed, paint it red to make it go fasta... Just, go nuts."
Haskell is weird. You can express well defined problems with relative ease and clarity, but performance can be kind of wonky and there's a lot more ceremony than your typical Lisp or Scheme or close relative of those. F# can give you a more lispish experience with a threshold about as low as Haskell, but comes with the close ties to an atrocious corporation and similar to Clojure it's not exactly a first class citizen in the environment.
Building stuff in Lisp-likes typically doesn't entail the double programming languages of the primary one and a second one to program the type system; in that way they're convenient in a way similar to Ruby. I like the parens and how they explicitly delimit portions that quite closely relate to the AST step in compilation, or whatever the interpreter sees; it helps with moving things around and molding the code quickly.
- LLM well trained on it.
- Easy for human team to review.
- Meets performance requirements.
Prob not lisp?