Except most programmers don't spend even a second thinking about it, and we end up with "int mid = (low + high) / 2;" bugs in standard implementations of binary search, e.g. in Java. And that implementation even had a written proof accompanying it!
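For reference, a minimal sketch of that overflow and the standard fix (class and method names here are made up for illustration; the same fix went into java.util.Arrays.binarySearch):

```java
// Sketch: why (low + high) / 2 is broken for large arrays,
// and the overflow-safe midpoint that replaced it.
public class Midpoint {
    static int brokenMid(int low, int high) {
        return (low + high) / 2;        // low + high can exceed Integer.MAX_VALUE and wrap negative
    }

    static int safeMid(int low, int high) {
        return low + (high - low) / 2;  // never overflows when 0 <= low <= high
        // equivalently: (low + high) >>> 1, treating the sum as unsigned
    }

    public static void main(String[] args) {
        int low = 2, high = Integer.MAX_VALUE - 1;
        System.out.println(brokenMid(low, high)); // negative: the bug
        System.out.println(safeMid(low, high));   // 1073741824: correct
    }
}
```

The proof was correct over mathematical integers; it just wasn't a proof about Java's 32-bit ints.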
A whole bunch of assumptions about the language are hidden from you, and the verification conditions are generated automatically for the underlying automatic or semi-automatic proof tools (via Why3) - the second best part of SPARK.
You have to trust that the SPARK frontend makers got it right - or you can inspect/review all the discharged VCs if you want - and you still have to actually prove, or help prove, them all. But I'm not losing sleep over a forgotten 'normal' check.
There seems to be a fundamental difficulty here. Either we prove things in the language we want to use, which means modelling the behavior of the things we use in that language, or we prove things in Lean, but then cannot apply that to an actual implementation, because of issues like the one above.
I would be surprised if there were no standard approach for modelling bounded integers and their language-specific properties (which can differ) in a proof language like this. Surely more people have thought about this and come up with solutions.
In the distant past, Lean 3 had this idea that you could use one language both for writing proofs and for writing proof automation - programs that manipulate proofs and automatically do things. The 'grind' thingamabob in the article is proof automation. (Rocq has two separate languages for proofs and proof automation.) But there was a problem: Lean started as a theorem prover, and when the implementation tried to move towards "executing programs" it didn't work very well and was slow. The prototype compiler from Lean 3 to C++ ran out of steam before Lean 3 got canned.
Lean 4 instead did things the other way around: it started as a programming language that was executable. The Lean 4 compiler was self-hosted during development for a long time, way before anyone ported any big math proofs over from Lean 3. Why? Because the key insight is that if you want to write programs that manipulate proofs (which are themselves programs), the best thing to have at hand is a robust general programming language - like Lisp or Racket. And so Lean is macro based too, and like Racket it allows you to have families of languages and towers of macros that expand as deep as you want. Metaprograms that write programs that write programs, etc...
So in Lean you write Lean programs that can manipulate Lean ASTs, and you write "reader macros" that let you use domain-specific syntax right inside the source file for any kind of math or programming-language DSL you want. And all those macros and metaprograms are compiled and executed efficiently, just as you'd expect. Finally, there is a total fragment of the language that you can actually write proofs over and reason about. And there is a big library called "mathlib" with lots of macros for writing math and math proofs in this language-fragment-we-can-reason-and-prove-things-with.
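A tiny sketch of what that looks like in practice (the `twice` macro here is made up for illustration):

```lean
-- Define new surface syntax with an ordinary Lean macro:
-- any use of `twice e` expands to `e + e` before elaboration.
macro "twice " x:term : term => `($x + $x)

#eval twice 21   -- 42
```

Real macros in mathlib do the same thing at much larger scale, defining whole notations and tactic languages.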
Lean 4 is very close in spirit to the Lisp idea, but modified for math proving and general programming. It's unique and powerful.
That first question is hard to parse. If you mean "Are you writing your next program in Lean then?" then: No, but in principle we could, it runs on each OS we use (Windows and Linux, standard x64 hardware). If you mean something else, I can't figure out what it would be.
> Either we prove things in the language we want to use
Sure, I mentioned SPARK/Ada. There are systems for C, Java, and others that also work and understand their types so you don't have to add extra modeling.
> which means modelling the behavior of the things we use in that language
It would already be done for you, you wouldn't have to model Ada's integers in SPARK, for instance.
> we prove things in Lean, but then cannot apply that to an actual implementation, because of issues like the one above.
https://lean-lang.org/doc/reference/latest/Basic-Types/Fixed...
If you knew your target system was using fixed-width integers, you'd use this.
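A quick sketch of what that buys you (assuming a 32-bit target type): Lean's fixed-width integers wrap modulo 2^n, so the proofs are about the arithmetic your implementation will actually perform, not about mathematical integers:

```lean
-- UInt32 arithmetic wraps modulo 2^32, matching the target machine
#eval (4294967295 : UInt32) + 1   -- 0

-- and you can state and prove concrete facts about it
example : (4294967295 : UInt32) + 1 = 0 := rfl
```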
The problem is that you want to be doing all your coding monadically (so that it looks like the familiar imperative style), and I think there are still many open problems with doing proofs over monadic code.
I see this as far more of an indictment of Java than an indictment of programmers. If you make the symbol "+" part of your programming language's syntax (especially in a language where you don't allow custom implementations of "+") then it should do what the ordinary mathematical + operation does!
Yes I do. It's an acceptable cost in most cases (you will note that much of the world runs on Python these days, a language in which + does behave sensibly), and dangerous optimisations should be opt-in rather than opt-out.
In the worst case you should at least make your language fail-stop by making it error when the result of + is too large, rather than silently continuing in an invalid state.
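Java actually ships the fail-stop variant as a library method, just not as the default +; a small sketch (class and helper names made up):

```java
// Math.addExact throws ArithmeticException instead of wrapping silently.
public class Checked {
    static boolean overflows(int a, int b) {
        try {
            Math.addExact(a, b);
            return false;
        } catch (ArithmeticException e) {
            return true;              // fail-stop: overflow detected, not ignored
        }
    }

    public static void main(String[] args) {
        System.out.println(overflows(Integer.MAX_VALUE, 1)); // true
        System.out.println(1 + Integer.MAX_VALUE);           // plain + wraps: the silent default
    }
}
```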
That's what Swift does fwiw. But there are downsides to that too (although they're also related to Swift not having a good fault recovery mechanism).
In practice, whenever you know you're going to work with numbers large enough to overflow in Java, you probably want to use BigInteger or BigDecimal (e.g. in banking). Unfortunately, Java doesn't allow you to use the + operator with them.
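A small sketch of the contrast (class and helper names made up):

```java
import java.math.BigInteger;

// BigInteger never overflows, but Java has no operator overloading,
// so you must write x.add(y) instead of x + y.
public class Big {
    static BigInteger inc(BigInteger x) {
        return x.add(BigInteger.ONE);
    }

    public static void main(String[] args) {
        System.out.println(Long.MAX_VALUE + 1);                      // wraps to a negative long
        System.out.println(inc(BigInteger.valueOf(Long.MAX_VALUE))); // exact: 9223372036854775808
    }
}
```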