List comprehensions, to me, are what make Python a productive (perhaps the most productive) prototyping language. They allow me to think and program in mathematical relations without much fuss. Add to that nice sets and dicts (needed for asymptotic efficiency), and I can easily forgive that it is built on an everything-is-an-object paradigm (there are few things I detest more than OOP).
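For instance (standard Python, just to make the point concrete), set-builder notation carries over almost literally, and sets/dicts give constant-time lookups:

```python
# List, set and dict comprehensions mirror mathematical set-builder notation.
squares = [x**2 for x in range(10)]                  # S = {x^2 : x in 0..9}
evens = {x for x in squares if x % 2 == 0}           # a set: O(1) membership
lengths = {w: len(w) for w in "the quick brown fox".split()}

print(squares)           # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
print(16 in evens)       # True -- constant-time test, unlike a list scan
print(lengths["quick"])  # 5
```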
I've never truly understood why you need the meta-capabilities of LISP. What I need much more is a syntax that does not hide what happens (procedure calls, list indexing, or at least "indexing"). Macros amount to code generation, and code generation is mostly bad: it typically means the problem wasn't thought through, and that there is a lack of clearly defined building blocks. So far I've only ever generated a few C structs and enums, and I'm not sure even that was an entirely good idea. It was super easy to generate from Python, anyway.
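For what it's worth, the kind of struct/enum generation I mean was along these lines (a sketch only; the table and all names are made up for illustration):

```python
# Hypothetical sketch of generating C enums from a Python data table.
# The table contents and names are invented for illustration.
ENUMS = {
    "color": ["red", "green", "blue"],
}

def emit_enum(name, members):
    """Render one C enum declaration from a (name, members) entry."""
    body = "\n".join(f"    {name.upper()}_{m.upper()}," for m in members)
    return f"enum {name} {{\n{body}\n}};"

print(emit_enum("color", ENUMS["color"]))
# enum color {
#     COLOR_RED,
#     COLOR_GREEN,
#     COLOR_BLUE,
# };
```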
No, you need to structure your data in a way that there is almost no boilerplate and only very little glue code (which effectively amounts to "the macro code", with the difference that it's executed only once per data object).
> Do you think those are bad because those are code generation that don't look the way you think they ought to?
Good point, but I have a strong belief that the abstractions offered by C (function calls and good syntax for arrays and records) are sufficient, and that going further up is detrimental for larger systems. I don't code in assembler because typically there is too much redundancy (function call convention), it's too specific (architecture dependent), and probably too verbose. That said, I'm not strongly against it, but haven't really tried.
Macros allow us to extend the language. It is not just C-like code generation. The loop constructs of common lisp, or the for loops of racket (or my own reimplementation for guile scheme) are macros.
With lisp macros we can express zero cost abstractions that in other languages would have a lot of overhead, and make them feel like a regular language construct (CLOS started like that)
My own racket-like loops for guile generate code that the optimizer can then turn into optimal code: https://bitbucket.org/bjoli/guile-for-loops/overview
The downside being that macro expansion slows down compilation. My workstation can expand about 1000 loop macros/s with guile and a couple of orders of magnitude more using my chez scheme prototype (somewhere above 100k)
To be precise, it will generate something like 'JUMP_IF_FALSE' which is not exactly a shitload. Not saying the execution will be fast. After all, it's an interpreted language.
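You can see this with CPython's dis module (the exact opcode names vary between versions, e.g. POP_JUMP_IF_FALSE in recent releases; this sketch just checks that some conditional jump shows up):

```python
import dis

def opnames(code):
    """Collect opcode names, recursing into nested code objects
    (pre-3.12 CPython compiles the comprehension body separately)."""
    ops = [ins.opname for ins in dis.get_instructions(code)]
    for const in code.co_consts:
        if hasattr(const, "co_code"):
            ops.extend(opnames(const))
    return ops

# The `if x > 2` clause compiles down to a conditional jump opcode.
code = compile("[x for x in data if x > 2]", "<demo>", "eval")
print(any("JUMP" in op for op in opnames(code)))  # True
```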
> Macros allow us to extend the language.
We're on the same page. What I was saying is: if you need more than function calls and records, chances are you're doing something wrong... the program architecture becomes opaque. Macros and other conveniences allow for hacks piled on top of other hacks, until you don't understand your program anymore and development eventually stalls.
I once added cooperative multithreading by switching stacks in a C program. It was very easy to do and helpful for that project, but small hacks like this are rarely needed. (And in that instance it could have possibly been avoided, but I wasn't in a position to change the dataflow architecture).
You can of course provide libraries that "extend" the language, but sometimes you will never be able to lose the feeling of it being bolted on, because the semantics of the language doesn't lend itself to that certain way of doing it.
Lisp macros solve that, at least for me. We use them like regular procedures, and as long as you keep to the simple rule that you shouldn't make a macro of something you want to compose with procedures, there really isn't much that gets in the way of understanding a program, because most of the time whether something is a macro or not does not matter.
I added some basic ones to common lisp in a couple of minutes.
    (defun read-comprehension (stream char)
      (declare (ignore char))
      (destructuring-bind (expr for var in list &optional if filter-exp)
          (read-delimited-list #\] stream)
        `(loop ,for ,var ,in ,list
               ,@(when filter-exp `(,if ,filter-exp))
               collect ,expr)))

    (set-macro-character #\[ #'read-comprehension)

    (eval (read-from-string "[(+ x 1) for x in '(1 2 3) if (> x 2)]"))
    ;; => (4)

A "real" implementation would do more processing of the forms in the list read by read-delimited-list and perform appropriate checks. This is more to just give you a sense of how trivial it is to extend the language.
I don't actually know python well enough (I had to look up "list comprehension") to say, but I would estimate that it would only take a page or so of code to give you something fully-featured and robust.
In practice, I would never abuse the reader like this when typical "loop" expressions are enough to do these kinds of things (and more).
Implementation of a "Lisp comprehension" macro by Guy Lapalme http://rali.iro.umontreal.ca/rali/sites/default/files/publis...
Simple and Efficient Compilation of List Comprehension in Common Lisp by Mario Latendresse http://www.ai.sri.com/~latendre/listCompFinal.pdf
Yes sir!
> (procedure call,

    80febf0: 08
    80febf1: c7 44 24 04 01 00 00   movl   $0x1,0x4(%esp)
    80febf8: 00
    80febf9: 89 3c 24               mov    %edi,(%esp)
    80febfc: 89 44 24 0c            mov    %eax,0xc(%esp)
    80fec00: e8 cb c8 f4 ff         call   804b4d0 <__fprintf_chk@plt>

> list indexing or at least "indexing").

Gotcha covered there too!

    80fd7eb: 8d 7c 0e ff            lea    -0x1(%esi,%ecx,1),%edi

Each of those is actually hiding what really happens. E.g. for a procedure call the runtime compares the arity of the call and the called function, inserts each argument into a data structure of some sort, perhaps grafts together environments or does other things on a language-specific basis, performs a jump of some sort, then the called function takes over, and finally stashes its return value(s) in a data structure of some sort and performs another jump.
Even list indexing in a language like Python is relatively complex, involving a lookup of the list length, bounds-checking &c.
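To make that concrete, here is some of what a single Python index operation handles that a bare lea cannot:

```python
xs = [10, 20, 30]

# Negative indices are resolved against the list length at runtime.
print(xs[-1])  # 30

# Out-of-range access hits a bounds check instead of undefined behavior.
try:
    xs[3]
except IndexError:
    print("bounds check fired")
```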
> Code generation is mostly bad, it typically means the problem wasn't thought through, and that there is a lack of clearly defined building blocks.
Proper syntactic abstraction is all about building the right blocks for the problem at hand. At the end of the day, all we have are bits of silicon transferring electrons: everything above that is an abstraction. A sufficiently-powerful programming language enables the programmer to develop the abstractions he needs for the problem he has.
It's nice when these important properties are indicated by a visual clue (square brackets).
> A sufficiently-powerful programming language enables the programmer to develop the abstractions he needs for the problem he has.
Sure. And that programming language still happens to be C for many, many purposes.
C macros (or "C code generation" if you prefer) are very, very different from Lisp macros.
Lisp macros are built into the language, the whole language itself is designed around this capability.
The original definitions

    S = {x² : x in {0 ... 9}}
    V = (1, 2, 4, 8, ..., 2¹²)
    M = {x | x in S and x even}

Python (from that page)

    S = [x**2 for x in range(10)]
    V = [2**i for i in range(13)]
    M = [x for x in S if x % 2 == 0]

I did not see 10 or 13 in the original definitions.

Perl 6 closest syntax to original

    my \S = ($_² for 0 ... 9);
    my \V = (1, 2, 4, 8 ... 2¹²); # almost identical
    my \M = ($_ if $_ %% 2 for S);

Perl 6 closest syntax to Python

    my \S = [-> \x { x**2 } for ^10];
    my \V = [-> \i { 2**i } for ^13];
    my \M = [-> \x { x if x % 2 == 0 } for S];

Perl 6 more idiomatic

    my \S = (0..9)»²;
    my \V = 1, 2, 4 ... 2¹²;
    my \M = S.grep: * %% 2;

Perl 6 with infinite sequences

    my \S = (0..*).map: *²;
    my \V = 1, 2, 4 ... *;
    my \M = S.grep: * %% 2;

---

Python

    string = 'The quick brown fox jumps over the lazy dog'
    words = string.split()
    stuff = [[w.upper(), w.lower(), len(w)] for w in words]

Perl 6

    my \string = 'The quick brown fox jumps over the lazy dog';
    my \words = string.words;
    my \stuff = [[.uc, .lc, .chars] for words]
I wouldn't say Python's list comprehensions are really that big of a selling point. I mean, based on what I've seen, the biggest difference is that the feature in Python runs from the inside out.

    noprimes = [j for i in range(2, 8) for j in range(i*2, 50, i)]

    range(2, 8)
    for i in
    range(i*2, 50, i)
    for j in
    j
    [ ]
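(Just to pin down the semantics, the comprehension above is sugar for explicit nested loops, with the for clauses appearing in the same order:)

```python
noprimes = [j for i in range(2, 8) for j in range(i*2, 50, i)]

# Equivalent explicit loops: the clauses nest in the order written.
nested = []
for i in range(2, 8):
    for j in range(i*2, 50, i):
        nested.append(j)

print(noprimes == nested)  # True
```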
Whereas Perl 6 is either left-to-right

    my \noprimes = (2..^8).map: { |($^i*2, $i*3 ...^ 50).map: { $^j }}

    (2..^8)
    .map:
    { |($^i*2, $i*3 ...^ 50) }
    .map:
    { $^j }

or right-to-left

    my \noprimes = ({ $^j } for ({ |($^i*2, $i*3 ...^ 50) } for 2..^8))

    2..^8
    for
    { |($^i*2, $i*3 ...^ 50) }
    ( )
    for
    { $^j }
    ( )
Here are a couple other translations using Set operators.

Python

    primes = [x for x in range(2, 50) if x not in noprimes]

Perl 6

    my \primes = (2..^50).grep: * ∉ noprimes;

    my \prime-set = 2..^50 (-) noprimes; # a Set object
    my \primes = prime-set.keys.sort;
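(Python has a rough analogue of the set version too; a sketch using built-in set difference, which also makes the membership test O(1) on average instead of a list scan:)

```python
# A set comprehension instead of a list; then set difference plays
# the role of Perl 6's (-) operator.
noprimes = {j for i in range(2, 8) for j in range(i*2, 50, i)}
primes = sorted(set(range(2, 50)) - noprimes)

print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```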
Also I'm fairly certain Python doesn't have a way of using them on a Supply concurrently.

    # create a prime supply and act on it in the background
    Supply.interval(1).grep(*.is-prime).act: &say;
    say 'main thread still running';
    # says 'main thread still running' immediately

    # deadlock main thread,
    # otherwise the program would terminate
    await Promise.new;

    # waits 2 seconds from the .act call, says 2
    # waits 1 second, says 3
    # waits 2 seconds, says 5
    # waits 2 seconds, says 7
    # waits 4 seconds, says 11
    # and so on until it is terminated
Basically the only selling point of that feature in Python over, say, Perl 6 is that it is at times a bit closer to what a mathematician would normally write. I'm not a mathematician.
To me it seems a bit verbose and clunky. It probably also only works as a single line, which seems odd in a line oriented programming language. Basically it seems like a little DSL for list generation, sort of like Perl's regexes are a DSL for string operations.
Perhaps you may want to reread the last paragraph from the original article. Maybe in a few years the world will have passed you by. Then again maybe not.