To be perfectly pedantic, the essence of make is that it is an expert system for traversing a directed acyclic graph (DAG).
It works by constructing a graph of “filenames” (vertices) and the commands required to create those files (directed edges). Using that graph as its knowledge base, make performs a reverse topological sort from the target filename to determine the traversal necessary to create the target file, then runs each command in the traversal so as to arrive at that target. Since it has a list of all intermediate vertices that must exist before its target can exist, make is further able to determine whether it needs to run a command or whether it can use the cached result.
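That traversal fits in a few lines. Here is a hedged toy model of the idea, not GNU make's actual code: timestamps are modeled as a plain dict (None = missing file) instead of being read from the filesystem, and the names `build`, `deps`, and `cmds` are my own invention for illustration.

```python
def build(target, deps, cmds, mtime, ran):
    """Post-order DAG walk, as make does: rebuild `target` only if it
    is missing or older than one of its prerequisites.
    mtime: file -> timestamp (None = missing); ran: commands actually run.
    (No memoization -- a real make would visit shared vertices once.)"""
    prereqs = deps.get(target, [])
    for dep in prereqs:
        build(dep, deps, cmds, mtime, ran)      # prerequisites must exist first
    stale = mtime.get(target) is None or any(
        mtime[d] > mtime[target] for d in prereqs
    )
    if stale and target in cmds:
        cmds[target]()                          # run the command for this vertex
        ran.append(target)
        mtime[target] = max((mtime[d] for d in prereqs), default=0) + 1
```

Running it on a graph like `{"app": ["main.o"], "main.o": ["main.c"]}` with a freshly touched `main.c` rebuilds `main.o` and then `app`; run it again and nothing is stale, so nothing runs — the "cached result" behavior described above.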
I don’t think the author is doing us any favors by trying to fit make’s behavior into a constructive logic system. It lacks the operators for that, as the author states: “Note that the form of compound propositions allowed is extremely restricted, even by the standards of logic programming.”
I find the author’s conclusions a mixed bag, then. They would be good conclusions if make were a constructive logic system, but it’s not; it’s an expert system for traversing a DAG. I think of make as a “workstate” system: do whatever is needed to put my build in a particular state. Most of the author’s conclusions, by contrast, center on “workflow:” move this unit of work through its lifecycle.[1] Make performs that work only incidentally to putting your build in a particular state.
1: The workflow/workstate distinction is not my idea. It’s explored in “Adaptive Software Development” by James A. Highsmith III.
[I originally made this comment at https://lobste.rs/s/rqf3f5/propositions_as_filenames_builds_...]
I found these structures cropping up alongside Horn clauses and Context-Free Grammars in interesting ways. In this case, viewing make rules as Horn clauses allows us to use a variant of Horn-SAT to find the minimum set of files we have to produce to build a rule, which is interesting to think and reason about.
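As a sketch of that connection (my own illustration, not code from the article): forward-chaining Horn clauses to their least fixed point tells you exactly which targets are derivable from the files you already have. The rule and file names below are hypothetical.

```python
def minimal_model(rules, facts):
    """Forward-chain Horn clauses to a least fixed point.
    rules: head (target) -> list of bodies (alternative prerequisite lists);
    facts: set of files that already exist.
    Returns every file that can be produced -- the minimal model."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, bodies in rules.items():
            # A head is derivable once all members of some body are derived.
            if head not in derived and any(
                all(b in derived for b in body) for body in bodies
            ):
                derived.add(head)
                changed = True
    return derived
```

With rules like `{"app": [["main.o", "util.o"]], "main.o": [["main.c"]], "util.o": [["util.c"]]}` and only `main.c` on disk, `app` is not in the model; add `util.c` and it is — which is the buildability question phrased as Horn-SAT.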
This article felt as though it was food for thought and not really trying to push any particular idea too hard. In any case, I found it interesting as it found parallels to the sorts of structures I found when exploring various algorithms for Context-Free Grammars (such as pruning, or detecting erasable non-terminals, which can both be described in terms of Horn-SAT).
It's very nice, but the cost of changing from Make means that most people don't use it. The Ninja design docs reference Tup, so they're aware it exists.
Solving the LaTeX problem of “Label(s) may have changed. Rerun to get cross-references right.” is hard. The assumption of make is that the only state is in the file system, i.e. running a command on the same inputs should produce the same outputs. The above message shows that LaTeX violates this constraint.
The simple solution is to (ugh) write a wrapper script which runs LaTeX and creates another output file if the labels are wrong. Only after the labels are right does the output file stop changing.
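A minimal sketch of such a wrapper, with the step and the label state injected as functions so the convergence logic stands alone (in practice `run_once` would invoke latex and `read_state` would read the .aux file, where the labels live; both names here are my own):

```python
import hashlib

def run_to_fixpoint(run_once, read_state, max_runs=5):
    """Re-run a step (e.g. latex) until its observable state (e.g. the
    .aux file contents) stops changing, capped at max_runs in case it
    never converges."""
    last = None
    for i in range(1, max_runs + 1):
        run_once()
        digest = hashlib.sha256(read_state()).hexdigest()
        if digest == last:
            return i            # fixed point reached on run i
        last = digest
    return None                 # gave up: state still changing
```

The cap matters because, as noted below, convergence is not guaranteed in theory — you trade the circular dependency for a bounded loop.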
Which means the dependencies are a circular loop. Which Make doesn't handle. <sigh>
The limitations of the 1970s design are apparent. But Make is so astonishingly powerful that anything new has to be much better than Make to displace it. So far that doesn't seem to have happened.
make --no-builtin-rules --no-builtin-variables
at least as of: GNU Make 3.82

I think that at this point, with 37-some years' worth of Makefiles on the planet, a competitive replacement for make with a different syntax or format would need to provide tools to translate between the old and new formats, as Perl did with the (uni-directional) a2p and s2p tools for awk and sed scripts, so that people considering a transition could make the leap more easily and with greater confidence.
The other alternative is to treat the rerunning of latex until it reaches a fixed point as a single build step, instead of multiple ones.
"keep re-running LaTeX until the labels stop changing" is relatively easily done. If nothing else, use a loop detection algorithm.
Sure, theoretically it may "never" stop. But you already have that problem.
To be fair I haven't worked hard to make this very usable, but I do use it myself a lot for all sorts of interesting things:
The tool itself: http://git.annexia.org/?p=goaljobs.git;a=summary
A subset of recipes I use regularly: http://git.annexia.org/?p=goals.git;a=summary
Works great, is very lightweight, does away with the antiquated idea of integral runlevels, and is very easy to debug. Plus you can probably set it up in a single afternoon and then just boot with init="/bin/make -f /etc/init.mk multi_user".
Edits: typos.