I almost got fired from a teaching position in Sweden because I told my 3rd-year bachelor students that many did not bother to add their names or the assignment number, to use paragraphs, or in some cases even basic punctuation on their assignments. And that this was well below the level required to pass secondary school, and that I expected better from them.
This was apparently too confrontational, and a few upset students later I got chewed out and almost fired. Meanwhile, from my point of view, I was just doing my job and already sugarcoating it by Dutch standards.
Having said that, yes, even by Dutch standards I would say that Dijkstra liked to troll people a bit.
PS: I really like the following insight from Dijkstra's review: "But whereas machines must be able to execute programs (without understanding them), people must be able to understand them (without executing them)."
a) Dijkstra is a bit of a personal "hero" of mine (hero in quotation marks, because I don't like to think of myself as a hero worshipper), because it was through him that I learned of textbooks written by guys like Eric Hehner and Roland Backhouse, which ended up changing my life significantly.
b) I do not really grasp the technical issues being discussed in the correspondences between Dijkstra and Backus
Given this disclosure:
Regardless of the cultural issues (which I suspect you are right about), I found Backus' first response to Dijkstra full of ad hominems, and very light on technical rebuttals. If we subtract out the ad hominems, I suspect the technical rebuttals would take less than a page (which is a significant reduction, considering that the full letter is 4 (typewritten!) pages).
Calling Dijkstra arrogant seems to have been a popular insult (as Backus himself admits), so I wonder if Backus was simply jumping onto the bandwagon as a defensive reaction to the arguments presented in EWD 692.
I think Dijkstra's responses were indeed trollish. I wish he hadn't said anything further after he noticed the gratuitous personal attacks on his character in Backus' response, but he couldn't help but say something back. Oh well.
Can you comment on this observation?
I don't know how you can defend that.
Would you like to say something about those?
I really liked it.
I also liked R. Backhouse's approach to it, encapsulated in two books:
1) less playful: http://www.amazon.com/Program-Construction-Calculating-Imple...
2) more playful: http://www.amazon.com/Algorithmic-Problem-Solving-Roland-Bac...
If you are looking for motivation, you might also like: http://research.microsoft.com/en-us/um/people/lamport/pubs/p...
-----------------
I found Dijkstra's stuff to be easier to read after I had read books by Hehner/Backhouse, even though chronologically, Dijkstra came first.
The whole argument Dijkstra makes boils down to "if everyone was as technically competent as you and I, then I wouldn't need to do this. But the sad truth is that the incompetent are not only the majority, they decide what gets funded and researched, so I have to." I think he is right about that, and probably even acted correctly on it, but even so: positioning himself like he's taking on a sacrificial trollish saviour role and just doing this to save programming research is pretty arrogant, so Backus is right as well!
And that's just the politics of it; saving public face[0] hasn't even entered the equation yet. Don't forget, Backus' reputation also directly affects the respect, and by extension the funding, that he receives for the research he wants to do. With that in mind I can very easily see how Backus must have been furious after reading EWD692, especially considering that it contained a few insinuations about him, which again I would partially attribute to Dutch culture, which has (or used to have) a severe case of tall poppy syndrome[1]. And if they had been friends and colleagues before, from Backus' point of view it might have felt like backstabbing, both politically and personally.
So I think I understand Backus' ad hominems - I suspect he was sincere about trying to criticise Dijkstra from the position of a long-time friend, his writing style boiling down to: "Look, I respect you and all, but I have to point out that you're being a dick; oh, and have a taste of your own medicine on the side, because two can play that game."
Of course, it looks like he played right into the hands of Dijkstra, because Dijkstra doesn't appear to be as emotionally invested in saving public face, and seems to rather enjoy puzzling Backus a bit longer.
In general, I think it speaks for both of them that they managed to resolve these differences to what appears to be mutual satisfaction!
[0] https://en.wikipedia.org/wiki/Face_(sociological_concept)
That said, I admire Dijkstra for having such strong opinions and striving to create solid software no matter what. He used to write software for machines that didn't even exist yet[1], and I for one wish I had the time/budget to write software proofs. With the modern tools we have now (not the ones I learned from Dijkstra's works, though those gave me a solid theoretical foundation) it's very possible, but only for very expensive systems with very generous deadlines. I could insist that we won't deliver without proofs, but then I'd be living under a bridge...
[1] https://www.cs.utexas.edu/users/EWD/transcriptions/EWD13xx/E...
quote: “Object-oriented programming is an exceptionally bad idea which could only have originated in California.” – Edsger Dijkstra
Backus, on the other hand, is a genuinely lovely man. If you haven't read/seen his interview with CHM [1], then I urge you to do so. (Pay attention to his closing remarks!)
[1]: http://www.computerhistory.org/fellowawards/hall/bios/John,B...
http://coldcomfortgames.com/blog/object-oriented-programming...
https://www.quora.com/Why-did-Dijkstra-say-that-%E2%80%9CObj...
It seems no one is able to find a source for the quote; it's in fact much more likely that he never said anything like it, and that the quote is a fabrication.
'Truths as hard as fists' is one way I've seen it put in another language (sort of).
> which could only have originated in California
Flattery will get you anywhere. UTexas, in this case ...
G. K. Chesterton
Got a reference for that? The Wikipedia discussion suggests otherwise: https://en.wikipedia.org/wiki/Object-oriented_programming#Hi...
Dijkstra got a lot of attention from his essay "Go To Statement Considered Harmful." Even though he did not come up with that title, he learned from it that a hyperbolic phrase can grab attention, and be used for good.
So for a while thereafter, he would say things in the most flame-bait way possible to gain attention, because it worked.
Well, luckily my story had a happy ending: my colleagues (who were checking the assignments with me) stood up for me, and after the course other students defended me by saying "finally, a teacher willing to call out the freeloaders! It's so demotivating to see them get away with not doing anything while I put in so much effort and get nothing in response."
The reason my former boss reacted like an overprotective mother was that the complaining students had made me look like someone who motivates through bullying, intimidation and fear. He takes his responsibility for creating a safe space for students to learn and overcome their insecurities very seriously, and I respect that. Meanwhile, I have high expectations of my students precisely because I believe they are capable of so much more than they give themselves credit for. He now understands and respects that, and has given me some pointers on how to get that across in ways that Swedes aren't allergic to. So we resolved our issues too.
But yeah, that was a pretty strong culture clash; in the end it was educational for everyone involved I think.
I'm going through the motions myself, at an institution that shall remain nameless, knowing how kindly they take to criticism here.
Autistic people are a bit like that too. They tend to optimize very directly, so pointing out suboptimal things plainly is received as a gift, whereas most people hear shame and disrespect in being told such things.
But it is definitely also a cultural thing. The contrast became very obvious when we had Russians in the seminars. The Swede says things like "I don't think I really understand how you mean, could you maybe elaborate that point a little bit further". The Russians: "You are wrong and it's very obvious to even a high school student what your mistake is. And besides, all this has already been done in the Soviet Union 40 years ago."
I do think that a good deal of the internet--the great unwashed both correspondents refer to--remembers Dijkstra the insult comic better than Dijkstra the pioneer of computing. It doesn't improve discourse on the internet, I guess, or worsen it much either. I don't think it is just to Dijkstra's reputation, though that is no responsibility of mine.
In his biography of Alexander the Great, Peter Green quotes the Athenian Demades's reproach to Philip: "King Philip," he said, "Fortune has cast you as Agamemnon; but you seem determined to act the part of Thersites."
Woah, I do that too! Maybe I should move to the Netherlands.
If it isn't possible to work with somebody while disagreeing with them, then you're the problem. If you spend long enough talking to anybody you'll eventually come across disagreements.
Many people take disagreements too personally.
I was just yesterday talking to a paramedic. I mentioned the article I saw on HN about medical error being the 3rd-highest cause of death, after cancer and heart disease.
He immediately started to attack me, exclaiming how I was not an authority in this area so how would I know what I was talking about and therefore had no right to talk about something like this.
I explained that I was not the source of the information, and that large numbers of people inside and outside the medical profession are concerned that a great deal of medical research isn't reproducible. In any case, if car accidents don't even approach the number of medical fatalities, that's a problem: car accidents are so deadly that they're one of the leading reasons we're trying to automate driving.
He blew his top. Ad homs everywhere.
Later I took to the Net to see if I was correct in my impression. Turns out my 'Master of Paramedics who read lots of journals' can't have been paying much attention because the British Medical Journal and Lancet are in lockstep with everything I had been saying. In fact I was probably far too kind judging from what the leading lights of the medical profession are saying themselves.
I expect he would accuse me of being arrogant.
I've said it before but tribalism is very powerful. It is far more dangerous to Science than Religion and it definitely gets in the way of technological change.
One of the highlighted examples from the abstract of a major study was an "error" where a doctor didn't tell a kid with diagnosed heart disease that strenuous exercise would be dangerous, and the kid died after collapsing during a run.
Failing to tell a patient that strenuous exercise might be fatal sounds like an error to me!
As it happens, my maternal grandmother was killed by a medical error (before I was born). She went in for some kind of routine surgery and didn't make it. Decades later, when the doctor involved died, among his effects were found a letter he had written to another doctor, explaining that he had made a stupid mistake that killed her. (I don't know any more details.)
Does that mean I believe the "#3 killer" claim? Not necessarily. But I'd bet that the problem is worse than most medical professionals would expect, and that there remains considerable room for improvement.
The medical profession, much like the teaching profession, has a nasty habit of victim blaming when things go wrong, if it can get away with it. They need humbling because they are too confident. They are highly resistant to this because it diminishes their status, and few people enjoy a loss of confidence even if introspection is required for improvement. Computer people tend to take this for granted because we're in a feedback loop with the compiler that tells us constantly when we're in the wrong. That we're fucking idiots all the time isn't news to us. It is news to some other professions that don't have such a tight feedback loop between success and failure.
It is true, for example, that people don't take their medicine on a timely basis, but the frustration of dealing with people has bled into other areas, and that is unhealthy.
- Alan Kay
Many design choices, of course, involve trading off one virtue against another. But that hardly seems to matter when so many of our current systems are so far from the efficient frontier where tradeoffs are necessary!
"We don't care if they're prima donnas as long as they can sing!"
This is great.
I started programming in earnest and for money in 1961 mostly in machine code on a variety of machines, and I'm pretty sure that the notions of single dimensional sequential locations of memory as structures (most often called arrays, but sometimes called lists), predated the idea of index registers. This is because I distinctly remember being first taught how to move through arrays sequentially just by direct address modification -- this was called indexing in the Air Force -- and later being introduced to the index register or registers -- depending on the machine.
My strong guess (from above) is that the array as sequential memory accessed by address modification of some kind was a primary idea in the very first useful computers in the late 40s, and that this idea pre-dated index registers.
It would be well worth your time to look at the instruction set of EDSAC, and try writing some programs.
So are you saying that you said this as a friendly jab, knowing he would probably love it himself?
> His comment about OOP (a rather different kind of thing at Xerox PARC than what the term means today) did not bother any of us at all
So he did actually say that quote about OOP? Do you happen to know the source?
"Comes across", because more often than not I think it's more likely that people who are too pleased with themselves don't like to be told that, really, what they've achieved isn't good enough, or even misguided. Even if, or perhaps especially if, the person who tells them that (Dijkstra when it comes to programming, Kay when it comes to HCI) is right or at least has a very good point.
indeed! no tumblr posts. no videos on YouTube. no SnapChats. no downloadable apps on Steam, and no Kickstarters. no accepted PR's on GH. no certs. and a pretty low Influencer (TM) rating on FooBlamWooBlah.io. therefore, he's obviously a slacker! or a sock puppet. prob yet-another-teen-in-his-mothers-basement. guy probably needs to discover things like "Khan Academy" and Bitcoin and Node.js
(/s)
Kinda like how a teacher will repeat the same material because he keeps getting new kids every year.
Also, there's this single question by him on Stack Overflow, where he keeps replying to people with "nope, already had that in the 60s/70s":
http://stackoverflow.com/questions/432922/significant-new-in...
:-)
I only skimmed it, but it is definitely an interesting read. I didn't interpret their stance as arrogant though. In my book both are enthusiastic about their subject and think the other party is misguided. Both take some effort not to hurt the other's feelings while still bringing their point across.
Unfortunately, the Library of Congress does not allow scanners without prior approval, so this was the only way I could make my own copies. It did not help that all these letters were written or typed on very thin mimeograph paper.
We've taken the "arrogance" quote out of the HN title and replaced it with a neutral description of the letters—perhaps a bit too neutral, given how fabulous the post is. But HN readers can figure that out for themselves, especially once a post gets so high on the front page.
I was surprised to discover recently that the word 'array' prior to 1950 was used exclusively to describe two-dimensional tables of numbers of the kind one might find in a matrix or determinant. But by the advent of FORTRAN I in 1957 and ALGOL 58, 'array' referred by default to a one-dimensional entity, with higher dimensions called out explicitly as 'n-dimensional arrays'. I was interested in digging through John Backus's papers from this era to see if I could find any clues.
I was able to narrow the window down to 1952-1954, since the FORTRAN preliminary report of 1954 uses the word 'array' casually in the modern one-dimensional sense as interchangeable with 'subscripted variables', the latter being the more common terminology at the time. By comparison, a virtually unknown paper by Rutishauser in 1952 describing the "for" loop did not use the word 'array' at all, only 'subscripted variables'. (Rutishauser was an accomplished mathematician and quite possibly the world's first computational scientist.) A paper by Laning and Zierler at MIT in 1954 describing a formula compiler also used only the term 'subscripted variables'.
Backus's papers also show that FORTRAN I was clearly written specifically to take advantage of the IBM 704's machine capabilities. Not only was the IBM 704 commercially very successful, it also improved on the preceding IBM 701 by providing index registers (3 of them) and floating-point instructions that were fast for the era. Backus's papers describe how providing hardware support for indexing and floating point was revolutionary, as all programs up to that time had to write out these instructions by hand (and for many programs that was pretty much all they did).
So it is clear to me now that the changeover in the implied dimensionality of the word 'array' must be related to how the array developed as a data structure abstracting away indexing operations. By the time IAL (pre-ALGOL) came on the scene in 1958, the idea of indexable homogeneous containers was already well established. But I still haven't found any smoking-gun evidence introducing the one-dimensional sense of the word. I suspect further digging into the description of the IBM 704 may be necessary. The 704 was not the first to provide index registers, but it may have been the first to call them that. (The Manchester Mark I computer of 1948 appears to be the first computer with an index register, but it was called a B line. The patent claiming to cover index registers (https://www.google.com/patents/US3012724) uses the term "control instruction" - no arrays mentioned - but it very cutely describes numbers as residing in known locations or "addresses", in quotes.)
I missed that the quote was Backus's, it makes a lot more sense now.
The quality of your photos is excellent and I doubt a scanner would make them much better. I wish there was a transcript though.
As a start, here is the first letter transcribed:
to John Backus
International Business Machines Corporation
5600 Cottle Road
SAN JOSE CA 95193
U.S.A.
Monday 29th of May, 1978
“To interrupt one’s own researches in order to
follow those of another is a scientific pleasure
which most experts delegate to their assistants.”
(Eric Temple Bell in
“Development of Mathematics”)
Dear John,
I do not claim to be more noble or sensible than “most
experts”; perhaps it is only because I have only one
assistant to whom I can delegate no more than one
man can do. But when you open your letter with:
“I am quite sure that you have never read
any paper I’ve sent you before”
it is my pleasure to inform you that - although
“quite sure” - you were very wrong. I very well
remember that you mailed me a sizeable paper
on reduction languages to which I owe my introduction
to that subject. I didn’t only read it, parts of it
were even studied. I also remember that it left me
with mixed feelings.
I can very well understand your excitement, although,
for lack of experience, I still cannot share it. I am
far from delighted with the state of the art of programming
today, and anyone suggesting an alternative has in
in [sic] principle my sympathy - until, of course, he loses it again,
like Terry Winograd did when he suggested natural language
programming - “natural descriptions”! - as an alternative-.
In the long run I have more faith in any rephrasing of the
programming task that makes it more amenable to mathematical
treatment. But you must have lots of patience, for the
run will be very long. It isn’t just mental inertia - that
problem can be solved generally by educating a new
generation of youngsters and waiting until the old ones
have died - . It is the development of a new set of
techniques needed to achieve a comparable degree of
proficiency.
Could you get me a copy of G.A. Mago’s (not
yet published) “A network of microprocessors to execute
reduction languages”? That might whet my appetite!
From various angles I have looked into such networks
and I am not entirely pleased with what I have
seen. Firstly I suspect our techniques for proving the
correctness of such designs: each next proof seems to
differ so much from all the previous ones, I suspect
that we haven't found the general patterns yet. Secondly
I discovered that all I could design were special
purpose networks, which, of course, made me suspect
the programming language in the Von Neumann style
which, already before you have chosen your problem,
seems to have set you on the wrong track.
Semantically speaking the semicolon is, of course,
only a way of expressing functional composition: it
imposes the same order that can also be expressed
with brackets - innermost brackets first -. In combination
with the distribution you can generate many innermost
bracket pairs, thus expressing very honestly that it is
really only a partial order that matters. I like that,
it is honest.
When you write “one can transform programs [....]
by the use of laws [...] which are _part of the program-
ming language_” etc. I am somewhat hesitant. I am
not convinced (yet?) that the traditional separation in
fixed program, variable data and assertions is a
mistake. The first and the last - program and assertions -
are somewhat merged in LUCID, in which correctness
proofs are carried out in (nearly) the same formalism
as the program is expressed in. On the one hand that
presents a tempting unification, on the other hand I
thought that mathematicians separated carefully and
for good reasons the language they talk about and
the metalanguage in which they do so. To put it in
another way: given a functional program, I feel only
confident if I can do enough other significant things
to it besides easy[struck out three times] carrying it out. And those I don’t
see yet. The almost total absence of redundancy
is another aspect of the same worry. In the case
of a traditional program we know how to make it
redundant: by inserting assertions, combination
of text and assertions makes it into a logically
tightly knit whole, that gives me confidence. How
do we this with functional programs? By supplying
two of them and an argument that demonstrates
their equivalence?
What about the following example? (My notation
because I lack the fluency in yours.)
(1) Consider the function f defined by:
f(0) = 0, f(1) = 1, f(2n) = f(n), f(2n+1) = f(n) + f(n+1)
(2) Consider the following program (“peven” = “positive and even”,
so for “podd”)
{N>=0} n, a, b := N, 1, 0;
_do_ peven(n) -> a, n := a + b, n/2
[] podd(n) -> b, n := b + a, (n-1)/2
_od_ {b=f(N)}
Here (1) gives a recursive definition of f, (2) gives
a repetitive program. Both definitions can be translated
in a straightforward manner into functional programs.
What would be involved in the demonstration of their
equivalence?
The above seems to me a little example of the appropriate
degree of sophistication to try your hands on. (I am
not going to try it myself, as I fear that my past
experience would misguide me: I am afraid that I
wouldn’t rise above the level of translating (2)’s
traditional correctness proof - with which I am very
familiar - in an unfamiliar notation. Good luck!)
With my greetings and warmest regards,
yours ever
_Edsger_
P.S. Please note my new postal code: 5671 AL
(capitals obligatory) _in front of_ the village name
EWD.
Maybe someone with OCR software at hand can give the typed ones a try?

I've started a GitHub repo with a first draft of transcripts for all the letters.
https://github.com/jiahao/backus-dijkstra-letters-1979
Anyone else who is interested is welcome to help proofread and submit PRs.
to John Backus
91 Saint Germain Avenue
SAN FRANCISCO, California 94114
U.S.A.
12th July 1979
Dear John,
My schedule is very tight, and I am afraid that I
must disappoint John Williams; Monday 20 or Tuesday 21
August seem the only two slots that are really available.
I shall arrive in the USA on Sunday 29th July, and
had already committed myself for the week of Monday
30 July to Burroughs in Mission Viejo. My wife and
the two youngest children will join me for the first
three weeks of the trip, and the week of 6th August
was planned as "the real holiday." Since then Bill
McKeenan has twisted my arm: the programming
course to be given that week was so heavily over-
booked that he asked me to give a second course,
in parallel to David's. Under those circumstances
I would like to avoid further commitments in
the week starting on Monday 13: it is their last
week in the USA, and I have then been working
almost all the time!
Immediately after WG2.3 I shall go to Austin,
Texas (Burroughs and University), and then home!
If I have a completely free choice, I think I
would prefer Monday 20 slightly over Tuesday 21.
New title and abstract:
Title: Termination Detection for Diffusing Computations
Abstract: The activity of a finite computation may
propagate over a network of machines, when machines
may delegate (repeatedly) subtasks to their
neighbours; when such a computation is fired
from a single machine, we call it "a diffusing
computation." We present a signalling scheme
--to be superimposed on the diffusing computation--
enabling the machine that fired it to detect its
termination. Our signalling scheme is perfectly
general in the sense that, independently of the
topology of the network, it is applicable to
any diffusing computation. (End of abstract.)
Please give my regards to John Williams and tell
him how sorry I am that I shall miss him. I
am not familiar with distance and transport facilities
between Santa Cruz and San Jose. If it is better
when I come to San Jose the day before that
is fine with me; may I leave the logistics to
you? (Wife and children leave on Friday 17th of August.)
With my greetings and best wishes,
yours ever
_Edsger_

Yes, yes, yes.
> Shall we divide the task among a few of us and type this up?
I did the first one, see my other comment. It took me about 20 minutes to type and another 20 minutes to proof read. It is one of the longer ones and handwritten. If a few other join in, absolutely doable.
Edit: I sent a mail to the project maintainer.
Edit: I'm going to just do Dijkstra to Backus, 1978–05–29 right now, update when done
Dr. Edsger W. Dijkstra to John Backus International Business Machines Corporation 5600 Cottle Road SAN JOSE CA 95193 U.S.A.
Monday 29th of May, 1978
"To interrupt one's own researches in order to follow those of another is a scientific pleasure which most experts delegate to their assistants." (Eric Temple Bell in "Development of Mathematics")
Dear John,
I do not claim to be more noble or sensible than "most experts"; perhaps it is only because I have only one assistant to whom I can delegate no more than one man can do. But when you open your letter with: "I am quite sure that you have never read any paper I've sent you before" it is my pleasure to inform you that - although "quite sure" - you were very wrong. I very well remember that you mailed me a sizeable paper on reduction languages to which I owe my introduction to the subject. I didn't only read it, parts of it were even studied. I also remember that it left me with mixed feelings.
I can very well understand your excitement, although, for lack of experience, I still cannot share it. I am far from delighted with the state of the art of programming today, and anyone suggesting an alternative has in principle my sympathy - until, of course, he loses it again, like Terry Winograd did when he suggested natural language programming - "natural descriptions"! - as an alternative -. In the long run I have more faith in any rephrasing of the programming task that makes it more amenable to mathematical treatment. But you must have lots of patience, for the run will be very long. It isn't just mental inertia - that problem can be solved generally by educating a new generation of youngsters and waiting until the old ones have died -. It is the development of a new set of techniques needed to achieve a comparable degree of proficiency.
Could you get me a copy of G. A. Mago's (not yet published) "A network of microprocessors to execute reduction languages"? That might whet my appetite! From various angles I have looked into such networks and I am not entirely pleased with what I have seen. Firstly I suspect our techniques for proving the correctness of such designs: each next proof seems to differ so much from all the previous ones, I suspect that we haven't found the general patterns yet. Secondly I discovered that all I could design were special purpose networks, which, of course, made me suspect the programming language in the Von Neumann style which, already before you have chosen your problem, seems to have set you on the wrong track.
Semantically speaking the semicolon is, of course, only a way of expressing functional composition: it imposes the same order that can also be expressed with brackets - innermost brackets first -. In combination with the distribution you can generate many innermost bracket pairs, thus expressing very honestly that it is really only a partial order that matters. I like that, it is honest.
When you write "One can transform programs [....] by the use of laws [...] which are _part of the programming language_" etc. I am somewhat hesitant. I am not convinced (yet?) that the traditional separation in fixed program, variable data and assertions is a mistake. The first and the last - program and assertions - are somewhat merged in LUCID, in which correctness proofs are carried out in (nearly) the same formalism as the program is expressed in. On the one hand that presents a tempting unification, on the other hand I thought that mathematicians separated carefully and for good reasons the language they talk about and the metalanguage in which they do so. To put it in another way: given a functional program, I feel only confident if I can do enough other significant things to it besides carrying it out. And those I don't see yet. The almost total absence of redundancy is another aspect of the same worry. In the case of a traditional program we know how to make it redundant: by inserting assertions, combination of text and assertions makes it into a logically tightly knit whole, that gives me confidence. How do we this with functional programs? By supplying two of them and an argument that demonstrates their equivalence?
What about the following example? (My notation, because I lack the fluency in yours.)
(1) Consider the function f defined by: f(0)=0, f(1)=1, f(2n)=f(n), f(2n+1)=f(n)+f(n+1)
(2) Consider the following program ("peven" = "positive and even", so for "podd") [1]
{N >= 0} n,a,b := N,1,0;
do peven(n) -> a,n := a+b,n/2
| podd(n) -> b,n := b+a,(n-1)/2
od {b=f(N)}
Here (1) gives a recursive definition of f, (2) gives a repetitive program. Both definitions can be translated in a straightforward manner into functional programs. What would be involved in the demonstration of their equivalence?

The above seems to me a little example of the appropriate degree of sophistication to try your hands on. (I am not going to try it myself, as I fear that my past experience would misguide me: I am afraid that I wouldn't rise above the level of translating (2)'s traditional correctness proof - with which I am very familiar - in an unfamiliar notation. Good luck!)
With my greetings and warmest regards,
yours ever Edsger
P.S. Please note my new postal code: 5671 AL (capitals obligatory) _in front of_ the village name.
EWD.
Transcriber's notes:
[1] This was a little difficult to transcribe cleanly into pure text. The "od" on the last line is clearly the ending delimiter for the opening "do" (both of which were lowercase and underlined). The pipe on the second-last line was, I think, meant to clearly show that the line beside it was within the do loop.
In a few places Dijkstra underlined text. With some I have not carried that over, with others I've surrounded the text with underscores.
Yes, please :). Esp. Dijkstra's letter from the 5th of Apr '79. It's a bit hard to read.
Thanks for making them available.
Some of the "verbosity" seems to arise from their apparently mutual concern to preserve their relationship by delivering criticism of ideas or behaviors without attacking the person. We could certainly use more of this style of communication today, but all it earns now is a "TL;DR" at the beginning or the end.
My technical correspondence is nowhere near this kind, and I know that it offends in some cases, but I also find that anything over 2-3 paragraphs these days is generally ignored.
Email, texts, slack, etc. have greatly abbreviated our conversations, allowing for many more, but reducing their quality.
EWD1303:
> The profound significance of Dekker's solution of 1959, however, was that it showed the role that mathematical proof could play in the process of program design. Now, more than 40 years later, this role is still hardly recognized in the world of discrete designs. If mathematics is allowed to play a role at all, it is in the form of a posteriori verification, i.e. by showing by usually mechanized mathematics that the design meets its specifications; the strong guidance that correctness concerns can provide to the design process is rarely exploited. On the whole it is still "Code first, debug later" instead of "Think first, code later", which is a pity, but Computing Science cannot complain: we are free to speak, we don't have a right to be heard. And in later years Computing Science has spoken: in connection with the calculational derivation of programs —and then of mathematical proofs in general— the names of R.W. Floyd, C.A.R. Hoare, D. Gries, C.C. Morgan, A.J.M. van Gasteren and W.H.J. Feijen are the first to come to my mind.
Backus as quoted in EWD692 (Dijkstra attacks this):
> One advantage of this algebra over other proof techniques is that the programmer can use his programming language as the language for deriving proofs, rather than having to state proofs in a separate logical system that merely [sic!] talks about his programs.
From the perspective of a PL person like myself, these ideals are very compatible. Without correctness by construction, there is not enough proof reuse, so formality will forever be doomed to niche applications (aka computers embedded in dangerous things). Likewise, after trying out intuitionistic-type-theory-based things (e.g. Agda), separating the program and proof languages just seems clumsy. Overall, separating programming and proof, whether spatially (separate languages) or temporally (correctness proved after the fact), is bad, and the reasons hardly depend on the dimension of separation.
> A fundamental reason for the preference is that functional programs are much more readily appreciated as mathematical objects than imperative ones, so that you can teach what rigorous reasoning about programs amounts to.
Lamport's TLA (which serves as the basis for TLA+) is simpler than Agda because in Agda, even though you use the same syntax for both types (i.e. propositions) and programs, they are conceptually separate, while in TLA they are the same: both the program (well, its specification) and its properties are just logical propositions. TLA+, which is untyped (and based on TLA and ZFC), has the added advantage of being far, far (far) easier to learn than any dependently-typed language. TLA was introduced in this paper [1], which begins thus:
> Correctness of the algorithm means that the program satisfies a desired property. We propose a simpler approach in which both the algorithm and the property are specified by formulas in a single logic. Correctness of the algorithm means that the formula specifying the algorithm implies the formula specifying the property, where implies is ordinary logical implication. We are motivated not by an abstract ideal of elegance, but by the practical problem of reasoning about real algorithms. Rigorous reasoning is the only way to avoid subtle errors in concurrent algorithms, and we want to make reasoning as simple as possible by making the underlying formalism simple.
However, there is a debate between the PL approach (championed by Backus, Milner) and the specification approach (championed by Dijkstra, Lamport) over utility. The question is whether proving at the code level is empirically viable (at a reasonable cost) for all but the simplest of programs. Indeed, so far the answer seems to be no. There has been only one program ever written and verified using dependent types (CompCert); it is certainly non-trivial but rather small, yet it required a world-class expert, took a lot of effort, and even then required corner-cutting (Leroy says he gave up on proving termination, simply adding a counter and throwing a runtime exception if it runs out). TLA+, OTOH, is used extensively and regularly by Amazon (and Oracle, Microsoft and others), by "plain" engineers, on real-world large systems (far larger than CompCert), and management loves it because it actually seems to save them time and money. This is only possible because TLA+ is not the programming language.
Lamport doesn't reject the theoretical possibility of a PL that could provide large-scale verification of some sort (end-to-end verification will always be extremely expensive due to simple complexity arguments); he only points out that no PL has so far come anywhere close to fulfilling this promise. Of course, TLA+ doesn't provide end-to-end verification, but that is too expensive -- and unnecessary -- for 99% of the industry anyway. I think that the ideas are compatible, but so far in theory only, while in practice they are not (yet?).
[1]: http://research.microsoft.com/pubs/64074/lamport-actions.pdf
Even if I was programming in a separate language, I'd still want it embedded in a hosting type theory so I can use it both for proofs and macros.
Has there ever been another non-trivial real-world program written in Coq (or another dependently-typed language for that matter) other than CompCert?
> Didn't know people actually used TLA.
Oh, far more than Coq; possibly more than Isabelle, even. TLA+ was designed for engineers, and evolved along with careful observation of how it is used by them. Coq is mainly used by type theoreticians to explore proving various mathematical theorems using ITT, and by PL people exploring various PL theories. TLA+ is used by engineers for large-scale software[1][2][3].
> I'd still want it embedded in a hosting type theory so I can use it both for proofs and macros.
I'm not sure what you mean by macros (I'm not a type-theory person in the least), but what does it matter if it's a type theory or a set theory? Lamport's thesis was that just as virtually all math proofs don't require type theory, neither do program proofs; indeed virtually none of the algorithm proofs in computer science employ type theory at all. TLA+ is a formalization of such proofs.
The main advantage of TLA+ is that it's as powerful as Coq for verifying algorithms (though it is not designed for proving general mathematical theorems), yet any engineer can master it in weeks, without learning any new (not to speak of rather exotic) math. Also, it is amenable to (and indeed supports) model checking, which alone is enough to make it practical. Deductive machine-checked proofs (which TLA+ also supports) are just too time-consuming, and have only ever been used on full programs a handful of times. Their main use in industry is to tie together model-checked programs[4].
TLA+'s disadvantage is that it's very hard to extract a program from a TLA+ spec. Translations going the other way (from code to TLA+) have been done a number of times (for C and Java) but only in academia AFAIK. But this is unsurprising, as Lamport's (and Dijkstra's) thesis is that effective, practical program verification can only be done at a level higher than program code (unless it's a small program, specifically written in a simplified manner for the sake of verification, and the very large effort is acceptable -- all three conditions that have been met in the few cases where deductive, whole-program verification has ever been used).
[1]: http://research.microsoft.com/en-us/um/people/lamport/tla/fo...
[2]: http://tla2012.loria.fr/contributed/newcombe-slides.pdf
[3]: http://research.microsoft.com/en-us/um/people/lamport/pubs/h...
[4]: http://events.inf.ed.ac.uk/Milner2012/J_Harrison-html5-mp4.h...
From this we can tell where he placed his efforts; if you want to troll computer science on an international level, you better beef up your English!
He would have made a great asset to any software team, as a documenter; I wouldn't have had him write much code, though.
Every Dutch person I've met has had a terrific command of the language, much better than average native speakers. I suspect that at least some of this is due to the fact that as second-language students they actually take time to learn the rules of grammar, etc. Most native speakers pick it up "on the street", so the Dutch (that I've met) tend to sound more formal and educated, especially when there's no discernible accent.
Once upon a time, I read a thriller of the Cussler style where a Russian translator for the CIA or something goes to a conference and gets to exchange knowing glances with two colleagues over some mistake that only they recognized. Then I realized that there would have likely been more than a few native Russian speakers in the room.
You realize your statement is ridiculous, right?
Dear John,
...
But when you open your letter with:
“I am quite sure that you have never read
any paper I’ve sent you before”
it is my pleasure to inform you that - although
“quite sure” - you were very wrong.
Zing!

I found internaut's comment to be insightful, well-conceived and on-topic. What's wrong with this comment?
EDIT: Moreover, who downvoted me so quickly for asking this question? This happened almost immediately after I posted it. Are there some nasty bots at play here?
search for 'downvoted'.
> Please resist commenting about being downvoted.
... does not explain anything here.
1. This guideline does not explain the downvote of internaut's comment.
2. This guideline does not explain the downvote of my initial comment, where I complained about another person being downvoted, not about myself being downvoted (which I wasn't at that point in time).
3. This guideline does not explain the situation after my EDIT. Although it violated that guideline, it did not receive heavy downvotes, but received multiple upvotes instead.