https://lean-lang.org/doc/reference/latest/Basic-Types/Subty...
A crucial difference between type theory (as it's known in Lean) and set theory is that in type theory an inhabitant/element belongs to exactly one type.
Object-oriented programming languages are not that different: the types induced by classes can easily be viewed as sets. A child class is a specialized version of its parent class, hence a subtype/subset thereof, if you define each set by declaring `instanceof` to be its predicate function.
https://bertrandmeyer.com/2015/07/06/new-paper-theory-of-pro...
Technically speaking, the elements of the supertype are all distinct from the elements of the subtype and vice versa. Neither is a subset of the other, which is why it's improper to consider one a subtype of the other.
Emphasis on "technically". The embedding is trivial. The Lean docs linked by the GP suggest putting those technicalities aside:
> Even though they are pairs syntactically, Subtype should really be thought of as elements of the base type with associated proof obligations.
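A minimal sketch of that reading (the name `Pos` is illustrative, not from the linked docs):

```lean
-- A Subtype packages a value of the base type with a proof obligation.
abbrev Pos := { n : Nat // n > 0 }

-- The anonymous constructor pairs the value 3 with a proof that 3 > 0.
def three : Pos := ⟨3, by decide⟩

-- `.val` recovers the base-type element; `.property` is the attached proof.
#eval three.val                      -- 3
example : three.val > 0 := three.property
```

So in practice you work with `three` as if it were a `Nat` that happens to carry evidence of positivity, rather than as a pair.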
> For an arbitrary n, compute the table full of values and their proofs, and just pull out the nth proof
if you thought harder about it you'd realize what you're suggesting is impossible
For example, in a theorem prover, you can write an inductive proof that x^2 + x is even for all x. And you can write this via a computation that demonstrates that it's true for zero, and if it's true for x, then it's true for x + 1. However, you don't need to run this computation in order to prove that it's true for large x. That would be computationally intractable, but that's okay. You just have to typecheck to get a proof.
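That inductive argument can be sketched in Lean roughly as follows (a sketch assuming Mathlib for the `ring` and `obtain` tactics; details may vary between versions):

```lean
import Mathlib.Tactic

-- Evenness stated as an existential: x^2 + x = 2k for some k.
theorem sq_add_self_even (x : Nat) : ∃ k, x ^ 2 + x = 2 * k := by
  induction x with
  | zero => exact ⟨0, rfl⟩            -- base case: 0^2 + 0 = 2 * 0
  | succ n ih =>
    obtain ⟨k, hk⟩ := ih              -- inductive hypothesis: n^2 + n = 2k
    refine ⟨k + n + 1, ?_⟩
    -- (n+1)^2 + (n+1) = (n^2 + n) + 2*(n+1)
    have h : (n + 1) ^ 2 + (n + 1) = (n ^ 2 + n) + 2 * (n + 1) := by ring
    rw [h, hk]
    ring
```

Typechecking this term is what certifies the statement for all x; nothing here is ever executed on a large x.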
No, this is what would happen _if you ran the proof_, but proofs are not meant to be run in the first place! The usual goal is proving their correctness, and for that it's enough for them to _typecheck_.
> For an arbitrary n, compute the table full of values and their proofs, and just pull out the nth proof
if you thought harder about it you'd realize what you're suggesting is impossible
That's what the intermediate function is doing, but _it doesn't need to be executed_ for the final proof to be valid.
> if you thought harder about it you'd realize what you're suggesting is impossible
No, it is perfectly possible. This is not a program that runs all cases and `assert`s that they have the expected result; it is a proof encoded as a program that, when executed, will surely produce a proof that the theorem holds for the given n. But since it surely produces a proof when executed, you don't need to execute it to know that a proof exists, and thus that the theorem holds!
This is exactly what mathematicians do when they prove theorems. Do you think they go and check, for every number n, that some theorem holds? That's impossible to do for every possible n, yet it's exactly what you're claiming you can do in every other language.
And just to be extra clear, a signature like this:
> theorem maxDollars_spec_correct : ∀ n, maxDollars n = maxDollars_spec n
does not mean that for every n the function returns a boolean indicating whether `maxDollars n` is equal to `maxDollars_spec n`. Instead, it _surely_ returns a proof that they are equal. And by _surely_ I mean it cannot return errors or exceptions or anything else: the function is guaranteed to terminate with a proof of that proposition, which, to reiterate, is the same as knowing that the proposition is true.
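To make the contrast concrete, here is a toy version with hypothetical `f`/`g` standing in for `maxDollars`/`maxDollars_spec` (a sketch assuming a recent Lean with the `omega` tactic):

```lean
def f (n : Nat) : Nat := n + n
def g (n : Nat) : Nat := 2 * n

-- A boolean check: it must be *executed*, and only ever tests one n.
def checkOne (n : Nat) : Bool := f n == g n

-- A proof of the universal statement: once it typechecks, the equality
-- is established for every n, and nothing ever needs to run.
theorem f_eq_g : ∀ n, f n = g n := by
  intro n
  unfold f g
  omega
```

`checkOne 5` can only tell you about 5; `f_eq_g` is a term whose mere existence covers all naturals at once.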
1. build a table tab[n]
2. check that for every i, tab[i] == maxDollars_spec[i]
If you take the latter approach I proposed (bottom-up), there is nothing whose totality needs checking.
Well, you at least need dependent types just to state the theorem, which eliminates nearly all other languages.
let rec helperMemo : Nat → HashMap Nat Nat → Nat × HashMap Nat Nat
is a big turnoff to me. I find it annoying to parse mentally. I can do it, but I have to concentrate or it's easy to gloss over an important detail.

let rec helperMemo (n : Nat) (map : HashMap Nat Nat) : Nat × HashMap Nat Nat

This is how it would usually be written. I will update the post accordingly.

def MemoMap := HashMap Nat Nat
def MemoResult := Nat × MemoMap
let rec helperMemo : Nat → MemoMap → MemoResult

Tuples don't really indicate what I can expect from the members.
Defn. f: X → Y is flat ⇔ O_{Y,f(x)} → O_{X,x} flat ∀ x.
Then immediately I dropped that class. Turns out I like real analysis much more.

Induction is often taught as something you do with natural numbers. But it's actually something you do with sets that are defined inductively. Any time you have a set that is defined like so:
1. x is in the set.
2. for all y in the set, f(y) is in the set
(where x and f are constants), you can apply induction. The base case is that you show some property is true of x. The inductive step is that you show that when the property is true of y, it is necessarily true of f(y).

If you have multiple elements that you guarantee will be in the set, each of them must be dealt with as a base case, and if you have multiple rules generating new elements from existing ones, each of them must be dealt with as another inductive step.
For the case of the natural numbers, x is 0, and f is the successor function.
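In Lean this schema is exactly what the `induction` tactic on `Nat` gives you (a sketch):

```lean
-- Base case: the property holds at 0 (the distinguished x).
-- Inductive step: it is preserved by the successor function (the f).
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```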
If you wanted to apply this model to the Fibonacci sequence, you could say that the tuple (0, 1, 1) is in the set [representing the idea "F_1 is 1"] and provide the generator f((a, b, c)) = (b, a+b, c+1). Then since (0, 1, 1) is in the set, so is (1, 1, 2) [or "F_2 is 1"], and then (1, 2, 3) ["F_3 is 2"], and so on.
(Or you could say that the two tuples (1, 1) and (2, 1) are both in the set, and provide the generator f( (i, x), (i+1, y) ) = (i+2, x+y). Now your elements are simpler, your generator is more complex, and your set has exactly the same structure as before.)
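The first encoding can be written directly as an inductive predicate in Lean (names here are made up for illustration):

```lean
-- FibTriple (a, b, c) encodes "F_{c-1} = a and F_c = b".
inductive FibTriple : Nat × Nat × Nat → Prop where
  | base : FibTriple (0, 1, 1)
  | step : ∀ a b c, FibTriple (a, b, c) → FibTriple (b, a + b, c + 1)

-- Applying the generator once to the base element yields "F_2 is 1":
example : FibTriple (1, 1, 2) := FibTriple.step 0 1 1 FibTriple.base
```

Induction over `FibTriple` then means: prove the property for `base`, and show `step` preserves it.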
The approach taken by the author's "improved solution" is to define a set consisting of the elements of the memoization table, with the generator being the function chain that adds elements to the table. He annotates the type of the elements to note that they must be correct (this annotation is administrative, just making it easy to describe what it is that we want to prove), and then does a proof over the addition operation that this correctness is preserved (the inductive step!).
https://live.lean-lang.org/#codez=JYWwDg9gTgLgBAZRgEwHQBECGN...
Doing the proof inside the algorithm (i.e. doing inline induction over the algorithm recursion structure) has the advantage that the branching structure doesn't have to be duplicated as in an external proof like the one I did.
In my proof I didn't struggle so much with induction, but much more with basic Lean matters, such as not getting lost in the number of variables, dealing with r.fst/r.snd vs r = (fst, snd), and the confusing difference between .get? k and [k]?.