x(,/{@[x;y;]'(!10)^x*|/p[;y]=p,:,3/:-3!p:!9 9}')/&~*x

tar -cf - . | gzip | base64 | wc -l

i.e. "how much does it compress?"

Looking at APL -- I'm reminded of what happens if I accidentally send the gzipped output to my tty...
I'm impressed that there's anyone who can follow along (can you find the bug?) to code like
p←{(↑⍵)∘{(⍺∨.=⍵)/⍳n×n∘}¨,⍵},(n*÷2){⍵,⍺⊥⌊⍵÷⍺}'⍳n n←⍴⍵
It really feels like compressed binary data where everyone's got a copy of the dictionary already...
The compiler's whole state is a bunch of integer vectors, and •Show [a,b,c] prints some equal-length vectors as rows of a table, so I usually use that. The relevant code is usually a few consecutive lines, and the code is composed of very basic operations like boolean logic, reordering arrays with selection, prefix sum, and so on, so they're not hard to read if you're used to them. There are a few tricks, which almost all are repeated patterns (e.g. PN, "partitioned-none" is common enough to be defined as a function). And fortunately, the line prefaced with "Permutation to reverse each expression: more complicated than it looks" has never needed to be debugged.
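To give non-array programmers a flavor of those "very basic operations," here is a minimal Python sketch (hypothetical, not taken from the BQN sources) of one such whole-array step: computing the bracket-nesting depth of every character with a prefix sum over +1/-1 deltas, the kind of operation array-style compilers chain together.

```python
from itertools import accumulate

def depths(src: str) -> list[int]:
    # +1 at an opener, -1 at a closer, 0 elsewhere
    delta = [1 if c == "(" else -1 if c == ")" else 0 for c in src]
    # A prefix sum turns per-character deltas into nesting depth
    return list(accumulate(delta))

print(depths("a(b(c)d)e"))  # [0, 1, 1, 2, 2, 1, 1, 0, 0]
```

The whole parse state is just another integer vector, which is exactly what makes it easy to print and inspect as described above.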
Basically, when you commit to writing in an array style (you don't have to! It might be impossible!) you're taking an extreme stance in favor of visible and manipulable data. It's more work up front to design the layout of this data and figure out how to process it in the way you want, but easier to see what's happening as a result. People (who don't know APL, mostly) say "write only" but I haven't experienced it.
[0] https://github.com/mlochbaum/BQN/blob/master/src/c.bqn
[1] https://mlochbaum.github.io/BQN/implementation/codfns.html#i...
Probably the biggest readability concern with overly-golfed expressions really is just dynamic typing, a problem shared with all dynamically-typed languages. But array languages have it worse, as nearly all operations are polymorphic over array vs. number inputs, whereas in e.g. JS you can use 'a+b' as a hint that 'a' and 'b' are numbers, and so on.
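The same ambiguity shows up in Python, to pick another dynamically-typed language: '+' is legal for numbers, strings, and lists alike, and nothing at the call site tells you which one you're reading. A contrived sketch (hypothetical names):

```python
def total(a, b):
    # Without type information, '+' gives no hint about what a and b are:
    # it could be addition, string concatenation, or list concatenation.
    return a + b

print(total(2, 3))      # 5
print(total([1], [2]))  # [1, 2] -- concatenation, not addition
```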
If you want readable/maintainable code, adding comments and splitting things into many smaller lines is just as acceptable as in other languages.
I'm sure each of those languages makes some guarantee about the sorts of errors that can be introduced - as opposed to C (let me pick on it), where the errors you know you can introduce and the errors that actually get introduced form a large set with little overlap. However, I have a hard enough time typing English consistently, so the various "symbol-y" languages just glaze my eyes over, unfortunately.
It almost "feels" like these languages are an overreaction to the chestnut "they must get paid by LoC".
> can you find the bug?
Several stand out immediately:
- Two syntax errors: unclosed single quote in '⍳n n←⍴⍵ and no right operand in the second use of Jot (∘). It's not clear how those could have snuck in naturally by accident, but I'll just assume cosmic rays and that they should simply be elided.
- n n←⍴⍵ is setting n twice, which is a bit surprising, though it signals that you probably expect ⍵ to have rank 2. In such cases _ n←⍴⍵ or n←⊃⌽⍴⍵ may be more natural, depending on intent.
- However, Decode (⊥) will error if ⍴⍵ returns anything other than a single integer (or an empty vector), so n n←⍴⍵ is equivalent to just n←⍴⍵ and doubly confusing.
- Which means that (n*÷2){⍵,⍺⊥⌊⍵÷⍺}⍳n n←⍴⍵ can only return a vector, i.e. 1..n with a number tacked on the end: the value of (1-x^n)/(1-x) evaluated at sqrt(n), which is a bit of a strange data structure IMHO. Something to do with geometric series of n^2?
- The second use of Ravel (,) in ,⍵ is redundant, and given the constraints we know above, so is the first use: ,(n*÷2)...
- It also means that (↑⍵) is the same as just ⍵
- But then (⍺∨.=⍵) is always just 1
- Meaning that the whole code is essentially equivalent to p←(n+1)⍴⊂⍳n×n←⍴⍵. I.e. it just outputs n+1 vectors of the integers 1 to n^2.
- Which makes the intent hard to guess without context, but that data structure feels a bit strange. Instead of a vector of uniform-length vectors, a matrix would be more efficient: (n+1)(n*2)⍴⍳n×n←⍴⍵. But that's just a matrix with rows that are all the same, so maybe we could just use the single vector (⍳2*⍨⍴⍵) directly?
Really, despite looking strange, once you learn the symbols and basic operations, APL is surprisingly straightforward. If you're on HN, then you're already smart enough to learn the basics easily enough.
Admittedly, though, becoming proficient in APL does take some time and learning pains. Once there, though, it does feel like a superpower.
But -- (and forgive me if I'm totally wrong) -- this isn't just "non-english" but "non-phonetic", which is a smaller set of written languages, and the underlying language is ... math ... so understanding the underlying grammar itself relies on having decades of math education to really make it jibe.
If this code is just a final result of "learn math for 2-3 decades, and spend years learning this specific programming language" -- my statement stands. Interacting with this kinda binary blob as a programming language is impressive. I think I read somewhere that Seymour Cray's wife knew he was working too hard when he started balancing the checkbook in hex...
> Advocates of the language emphasize its speed, facility in handling arrays, and expressive syntax.
Indeed.

A much better measure would be the number of nodes in a parse tree, counting semantically meaningful non-terminals like "a constant" or "a function call".
An even better measure would also involve the depth and the branching factor of that tree.
A one line solution takes up very little visual real estate. That matters a lot when you are working on some more complex problem. Flitting your eyeballs around a screen takes orders of magnitude less effort than scrolling around and navigating files. Cognitive load is important.
We really need to burn this vague "only semantics matter" scourge that's crept into our programmer values these days. I'm sorry, but I care about things like incentives against over-engineering, ease of directly thinking in the problem domain, and simplicity of the encompassing ecosystem.
A terse one-line solution tells me there is virtually no room for over-engineering. Even without knowing K, I can see obvious constants side-by-side, telling me it's likely using a direct data representation of the problem in its code. Does K culture encourage code like that? Does programming in K bias you towards directness and simplicity? Then please, I want some of that special sauce on my team.
</rant>
When I work on some more complex problem, I like to think about the problem, not spend energy decoding condensed text. Scrolling through somewhat more verbose but clear code is faster for me.
It seems like parent’s metric (size of parse tree) would easily optimize for terseness and penalize bloat, regardless of language, so maybe your reaction was too reflexive. UX of a language does matter a bit, and one that’s too terse incurs development friction and technical debt when used in larger projects. Just study the history of Perl and why it’s not widely used.
What a one liner looks like is more or less the worst possible metric to use for large software projects. In any language, the style of code changes the larger the codebase, and cleverness and terseness become a liability. https://www.teamten.com/lawrence/writings/norris-numbers.htm...
Any language that adds complexity at that layer loses me, and APL, even with crude visuals, is not far from that.
[1] https://en.wikipedia.org/wiki/Algorithmic_information_theory
The parse tree approach is trying to get at a fuzzy notion of useful information and useful density of information.
For example, when working with arrays of data it certainly is easier to think and write “avg a+b” to add two arrays together and then take the average.
In a non-array programming language you would probably first need to do some bounds checking, then a big for loop, a temporary variable to hold the sum and the count as you loop over the two arrays, etc.
Probably the difference between like 6ish lines of code in some language like C versus the 6 characters above in Q.
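As a rough illustration of that gap, here is what the explicit version of Q's "avg a+b" looks like, with plain Python standing in for the C side (hypothetical function name):

```python
def avg_of_sums(a, b):
    # The spelled-out version of Q's "avg a+b":
    # bounds check, element-wise sum, then the mean.
    if len(a) != len(b):
        raise ValueError("arrays must have equal length")
    total = 0
    for x, y in zip(a, b):
        total += x + y
    return total / len(a)

print(avg_of_sums([1, 2, 3], [4, 5, 6]))  # 7.0
```

Every line above is implicit in the array-language expression, which is exactly the point being made.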
But every language has features that help you reason about certain types of problems better. Functional languages with algebraic data types and pattern matching (think OCaml or F#) are nicer than switch statements or big if-else-if statements. Languages with built-in syntactic sugar like async/await are better at dealing with concurrency, etc.
A very real example of this is Julia. Julia is not really an array-oriented programming language, it's a general language with a strong type system and decent functional programming facilities, with some syntactic sugar that makes it look like it's a bit array oriented. You could write any Q/k program in Julia with the same complexity and it would not be any more complex. For a decently complex program Julia will be faster, and in every case it will be easier to modify and read and not any harder to write.
There are limits, of course, and it’s not without downsides. Still, if I have to code in something all day, I’d like that “something” be as expressive as possible.
As a quant, I used kdb+/q quite a bit for 5+ years for mid-frequency strategies, but as I moved towards higher frequency trading that required calculations on the order book that couldn't be easily or efficiently vectorized, then continuing to use array-focused languages would have only complicated reasoning about those problems.
https://youtu.be/PlM9BXfu7UY?si=ORtwI1qmfmzhJGZX&t=3598
This particular snippet was in the context of compilers, but the rest of the talk has more on Dyalog and APL as a system of mathematical notation. The underlying theme is that optimizing mathematical expressions may be easier than optimizing general code.
http://johnearnest.github.io/ok/index.html
if it's something you're interested in trying i'd be happy to point you toward more resources, and i'm sure there are plenty of other arraylang tinkerers reading this thread who could help, too
It's a useful thing to learn though. And dare I say it, fun. Even if there was zero benefit to it, it'd still be fun. As it turns out, there really are benefits.
For me, the biggest benefit is when I'm working with data interactively. The syntax allows me to do a lot of complex operations on sets of data with only a few characters, which makes you feel like you have a superpower (especially when comparing to someone using Excel to try to do the same thing).
(KQED is the Bay Area PBS partner. PBS is the US public television org.)
One line solutions are incredible, and tacit is mind-bendingly cool. To use the unique compactness of a glyph-based language as a way to efficiently describe and perform functional programming - then to do that all over arrays!? - whoever had these ideas [0] is utterly genius.
But as someone trying to make time to write a program ground up in APL, knowing that I won't be able to make it just a set of really good one liners, that example is also significant for me.
You can of course remove the capability to do that, and you'll effectively force the programmer to write more verbose code, but then its strength as an interactive tool is very much reduced.
The Iversonian languages have the capability to write incredibly terse code, which is really useful when working interactively. When you do, your code truly is write-only because it isn't even saved. This is the majority of code that at least I write in these languages.
When writing code that goes in a file, you can choose which style you want to use, and I certainly recommend making it a bit less terse in those cases. The Iversonian languages are still going to give you programs that are much shorter than most other languages, even when written in a verbose style.
So I do love APL and arraylangs, and learning them was really helpful in a lot of other languages.
But they never became a daily driver for me, not because of the symbols, which were honestly fine if you stick with it long enough, but because after about 3-4 years of dabbling on and off I hit a wall with APL I just couldn't get past.
Most other languages I know have a "generic-ish" approach to solving most problems, even if you have to kludge your way through suboptimally until you find "the trick" for that particular problem, and then you can write something really elegant and efficient.
In APL it felt like there was no kludge option -- you either knew the trick or you didn't. There was no "graceful degradation" strategy I could identify.
Now, is this actually the case? I can't tell if this is a case of "yeah, that's how it is, but if you learn enough tricks you develop an emergent problem-solving intuition", or if it's like, "no, it's tricks all the way down", or if it's more like, "wait, you didn't read the thing on THE strategy??".
Orrr maybe I just don't have the neurons for it, not sure. Not ruling it out.
Even today, after having worked in these languages for years, I am still put off a bit by the walls of code that some array programmers produce. I fully understand the reasoning why it's written like that, but I just prefer a few spaces in my code.
I've been working on an array language based on APL, and one of my original goals was to make "imperative style" programming more of a first-class citizen and not punish the beginner from using things like if-statements. It remains to be seen how well I succeeded, but even I tend to use a more expressive style when terseness doesn't matter.
Here's an example of code I've written which is the part of the implementation that is responsible for taking any value (such as nested arrays) and format them nicely as text using box drawing characters. I want to say that this style is a middle ground between the hardcore pure APL style found in some projects and the style you'll see in most imperative languages: https://codeberg.org/loke/array/src/branch/master/array/stan...
That said, it's also really not a limitation with the languages either. In my experience, punching past that wall is exactly the process of making the paradigm click. It took me a good 500 hours hacking on my YAML parser prototype over the course of a year before the puzzle pieces began to click in place.
Those lessons are still percolating out, but it feels like some combination of 1) data-driven design principles, 2) learning how to concretely leverage the Iversonian characteristics of good notation [1] in software architecture, and 3) simple familiarity with idioms and how they express domain-specific concepts.
Feel free to contact me if you'd like to chat directly about this and overcoming the wall.
[0]: https://dyalog.tv/Dyalog23/?v=J4cg6SV92C4
[1]: https://www.jsoftware.com/papers/tot.htm
https://www.youtube.com/watch?v=DmT80OseAGs
You can try the solution at https://tryapl.org/
https://codegolf.stackexchange.com/questions/tagged/sudoku?t...
Ultimately, though, they are a proxy for more relevant but difficult-to-determine attributes, such as:
Given a reasonably proficient engineer, the amount of time it would take them to resolve a bug in code written by someone else, or alternatively to extend its functionality in some way.
:- use_module(library(clpfd)).  % library(clpz) in Scryer Prolog

sudoku(Rows) :-
length(Rows, 9),
maplist(same_length(Rows), Rows),
append(Rows, Vs), Vs ins 1..9,
maplist(all_distinct, Rows),
transpose(Rows, Columns),
maplist(all_distinct, Columns),
Rows = [As,Bs,Cs,Ds,Es,Fs,Gs,Hs,Is],
blocks(As, Bs, Cs),
blocks(Ds, Es, Fs),
blocks(Gs, Hs, Is).
blocks([], [], []).
blocks([N1,N2,N3|Ns1], [N4,N5,N6|Ns2], [N7,N8,N9|Ns3]) :-
all_distinct([N1,N2,N3,N4,N5,N6,N7,N8,N9]),
blocks(Ns1, Ns2, Ns3).
While not one line, to me it is Pareto optimal for readable, elegant, and incredibly powerful, thanks to the first-class constraint solvers that ship with Scryer Prolog.

If you want to learn more about it or see more of Markus's work:
https://www.metalevel.at/sudoku/
More about Scryer Prolog (a modern, performant, ISO-compliant Prolog written mostly in Rust):
Nebulous1:
Here is the line; it is written in K, a language created by the same person (Arthur Whitney) based on APL and Scheme:

x(,/{@[x;y;]'(!10)^x*|/p[;y]=p,:,3/:-3!p:!9 9}')/&~*x
Most programmers would agree the ‘/’ symbol is at least as clear as writing ‘divideBy’. The question is how often the symbols are used and if their frequency in code justifies learning them.
def solve(grid):
    def find_empty(grid):
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    return r, c
        return None

    def is_valid(grid, num, pos):
        r, c = pos
        if num in grid[r]:
            return False
        if num in [grid[i][c] for i in range(9)]:
            return False
        box_r, box_c = r // 3 * 3, c // 3 * 3
        for i in range(box_r, box_r + 3):
            for j in range(box_c, box_c + 3):
                if grid[i][j] == num:
                    return False
        return True

    def backtrack(grid):
        empty = find_empty(grid)
        if not empty:
            return True
        r, c = empty
        for num in range(1, 10):
            if is_valid(grid, num, (r, c)):
                grid[r][c] = num
                if backtrack(grid):
                    return True
                grid[r][c] = 0
        return False

    backtrack(grid)
    return grid

It's not clear why the poster prefers that other implementation, or that they understand APL or array programming.
So as a result the comment reads as "it's in a language I don't know. I'd prefer it in a language I do know." Which is a fairly useless comment.
If that's not what they intended, it would be helpful for them to add some context to their comment.
AFAICT AI cannot replicate this yet; it will be interesting when that day comes.
Not sure where I got that from.
Why array languages seem to gravitate to symbol soup that makes regex blush I'll never know.
Variant sudokus on the other hand are a lot of fun. They often have very elegant solve paths and there are many neat tricks you can discover and reason about.
Some fun ones, if you'd like to try:
- https://logic-masters.de/Raetselportal/Raetsel/zeigen.php?id...
- https://logic-masters.de/Raetselportal/Raetsel/zeigen.php?id...
- https://logic-masters.de/Raetselportal/Raetsel/zeigen.php?id...
The last puzzle has no fewer than 9 custom rules, in addition to the regular Sudoku rules, and then it also says “every clue is wrogn [sic]” implying there is some meta out-of-the-box thinking required to even understand what the rules are. That is more a riddle than a logic puzzle.
By contrast, the charm of classical Sudoku is that the rules are extremely simple and straightforward (fill the grid using digits 1 through 9, so that each digit occurs exactly once in each row, column, and 3x3 box) and any difficulty solving comes from the configuration of the grid.
As for solvers, it’s a very elegant, well-formed problem with a lot of different potential solutions, many of which involve useful general techniques. I used to dabble clumsily in chess engines and honestly it’s the only time I’ve ever ended up reading Knuth directly for various bit twiddling hacks, so it’s always educational.
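A small example of the kind of bit trick the comment alludes to (a generic technique, not from any particular engine): representing a cell's remaining candidate digits as a 9-bit mask, so eliminating a digit is one AND and counting candidates is one popcount.

```python
def candidates_mask(used_digits):
    # Bit (d-1) set means digit d is still available in this cell
    mask = (1 << 9) - 1  # all nine digits possible
    for d in used_digits:
        mask &= ~(1 << (d - 1))  # clear the bit for each seen digit
    return mask

def count_candidates(mask):
    # Popcount: how many digits remain possible
    return bin(mask).count("1")

m = candidates_mask([1, 5, 9])
print(count_candidates(m))  # 6 digits remain
```

Real engines go further (e.g. isolating the lowest set bit with `m & -m` to iterate candidates), which is where Knuth's bit-twiddling material comes in.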
1: https://musgravepencil.com/products/600-news-wood-cased-roun...
They're all artificial problems, but your brain likes a challenge and you get a dopamine hit when you solve it, I suppose.
Sudoku, crosswords, Simon Tatham's puzzles etc. are an excellent way to pass the time while keep training the mind. Sports are their equivalent for the body.
Finally, writing solvers for a problem, be it real or artificial, for many is just another variety of puzzle to engage in.
Puzzles in commercial collections don't usually have that problem, but those from other sources sometimes do.
Solvers also make for a nice craft exercise, as here. Simple but not trivial, you can approach them in a lot of different ways and thereby work through different techniques or constraints you mean to explore.
I would argue that puzzles in commercial collections are more likely to have that problem than ones made freely available by hobbyists, as commercial enterprises inevitably cut corners on things like labour costs for an actual human setter.
I have seen dozens of commercial puzzle games and applications that do not make any attempt to verify the (auto-generated) puzzles as solvable, but I don't think I've ever had the same problem on a site like LMD.
I enjoy running simulation after simulation after simulation, studying possible outcomes and optimising everything. Everyone is different :)
While I myself found an opportunity to reply to the GP and didn't downvote them, their comment only engaged with the article in a shallow way, and then, seemingly, only to dismiss the concept of a solver altogether.
It wasn't an offensive comment, but it didn't really contribute to the site in the way many people digging into deep technical walkthroughs like this expect to see.
Downvotes weren't guaranteed, but they're not surprising, and they're probably helping new readers stay engaged with more topical and technical alternatives.
It's not the end of the world to get a few downvotes, and it's almost never personal. It certainly isn't here.