And if you want to read why we need additional types of science organizations, see "A Vision of Metascience" (https://scienceplusplus.org/metascience/)
It's probably a narrow set of problems with the right set of constraints and scale for this to be a win.
Scientific research is often not immediately applicable, but can still be valuable. The number of people who can tell you whether it's valuable is small, and as our scientific knowledge deepens, the number of people who know what's going on shrinks and shrinks.
Separately, it's possible to spend many years researching something, and have very little to show for it. The scientists in that situation also want some kind of assurance that they will be able to pay their bills.
Between the high rate of failure, the conflicts of interest, and the inscrutability of the research topics, it's very hard to efficiently fund science, and all the current ways of doing it are far from optimal. There is waste, there is grift, there is politics. Any improvement here is welcome, and decreasing the dollar cost per scientific discovery is more important than the research itself in any single field.
But part of me feels like if they are going to spend the massive effort to formalize Fermat's Last Theorem it would be better to use a language where quotient types aren't kind of a hack.
Lean introduces an extra axiom as a kind of cheat code to make quotients work. That makes it nicer from a software dev perspective but IMO less nice from a mathematical perspective.
> Throughout this section, we use A≈B to mean that A and B are essentially equal, in the sense that B is a suitable approximation of A in some sense that we will formalize in a later section. The reader may feel free to assume A=B when verifying estimates, even though A=B is generally false.
Is that when this would be needed?
This rule is fine in itself, but the Lean developers were not sufficiently careful and allowed it to apply for quotients of propositions, where it interferes with the computation rules for proof irrelevance and ends up breaking subject reduction (SR is deeply linked to computation when you have dependent types!) [0]. It is not really a problem in practice though, since there is no point in quotienting a proposition.
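For concreteness, the extra axiom being discussed is Lean 4's `Quot.sound`, which identifies related elements inside a quotient. A minimal sketch (the names `mod2Rel` and `Mod2` are made up for illustration):

```lean
-- Quot.sound is the axiom the comments above refer to. Roughly:
--   axiom Quot.sound : ∀ {α : Sort u} {r : α → α → Prop} {a b : α},
--     r a b → Quot.mk r a = Quot.mk r b

-- A toy relation: two naturals are related when they agree mod 2.
def mod2Rel (a b : Nat) : Prop := a % 2 = b % 2

-- The quotient type Nat / mod2Rel.
def Mod2 := Quot mod2Rel

-- The axiom lets us prove 0 and 2 denote the same element of Mod2;
-- rfl proves 0 % 2 = 2 % 2 since both sides reduce to 0.
example : Quot.mk mod2Rel 0 = Quot.mk mod2Rel 2 :=
  Quot.sound (rfl : 0 % 2 = 2 % 2)
```

The point of contention above is that `Quot.sound` is postulated as an axiom rather than derived, which is convenient for engineering but is the "cheat code" from a foundational standpoint.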
[0] see the end of section 3.1 in https://github.com/digama0/lean-type-theory/releases/downloa...
So all the libraries they build up will have these holes in them that make it harder to do things like treat two isomorphic objects as the same -- something mathematicians do implicitly on a daily basis.
You can probably get a long way in Lean with the soundness axiom. But what I don't know is what happens when you build up a decade of software libraries in a system that adds a lot of manual and cognitive overhead when you want to use them.
My gut instinct is that by cutting corners now, they're creating a form of technical debt that could possibly escalate quickly and force mathematicians to reformulate their tools in a nicer way.
This actually happens continuously throughout the history of math. Sometimes it leads to errors, as with the so-called Italian school of algebraic geometry. Sometimes it just leads to pauses while we go back and figure out what the objects we're working with actually are before we can make more progress.
Take all this with a grain of salt: I haven't worked with Lean so I don't know how much this crops up in practice, and I don't know how large Lean libraries are at this point. This is all gut feeling.
But my sense is that what you really want is to get the foundations right, then build abstraction layers on those foundations that are nicer to use. Lean tries to build a "good enough" foundation and historically the gap between what we know is correct and what is seen to be "good enough" tends to show itself sooner or later in math. If you are just working in natural language then you can just forget it was a problem as soon as a fix is found. If you're working in software, though, you'll likely need to do a major rewrite or refactoring.
If translation does end up becoming easy, then essentially all theorem-proving efforts should be conducted in the language that is easiest to work in. You can translate into the "mathematically superior" languages later.
https://github.com/ImperialCollegeLondon/FLT/blob/main/GENER...
In particular, note that a key lemma of crystalline cohomology rests on a mistake. Experts believe it is fixable, on the grounds that results have depended on it for a long time without any issue surfacing, but it has not actually been fixed.
Research-level mathematics like this is as hard as it gets, and this proof is famously difficult: it draws on many branches of advanced mathematics and required thousands of pages of proofs and years of work.
Plus, Kevin Buzzard is a world expert with some ideas for how to better organize the proof. In general, formalization leads to new understanding about mathematics.
Something people outside of mathematics don't tend to appreciate is that mathematicians are usually thinking deeply about what we already know, and that work reveals new structures and connections that clarify existing knowledge. The new understanding reveals new gaps in understanding, which are filled in, and the process continues. It's not just about collecting verifiably true things.
Even if somehow the OpenAI algorithm could apply here, we'd get less value out of this whole formalization exercise than from having researchers methodically go through our best understanding of our best proof of FLT again.
(gemini llm answer to google query: constructive math contradiction)
"Wiles proved the modularity theorem for semistable elliptic curves, from which Fermat’s last theorem follows using proof by contradiction." https://en.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27s_Last_Theorem
So, will the Lean formalization of FLT involve translation to a direct or constructive proof? It seems not; I gather the proof will rely on classical rather than constructive logic.

"3. Proof by Contradiction: The core of the formal proof involves assuming ¬Fermat_Last_Theorem and deriving a contradiction. This contradiction usually arises from building a mathematical structure (like an elliptic curve) based on the assumed solution and then demonstrating that this structure must possess contradictory properties, violating established theorems. 4. Formalizing Contradiction: The contradiction is formalized in Lean by deriving two conflicting statements, often denoted as Q and ¬Q, within the context of the assumed ¬Fermat_Last_Theorem. Since Lean adheres to classical logic, the existence of these conflicting statements implies that the initial assumption (¬Fermat_Last_Theorem) must be false."
(gemini llm answer to google query: Lean formalization of fermat's last theorem "proof by contradiction")
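Worth noting: FLT is itself a negative statement, so it can be written directly with `≠`. A minimal sketch of how the statement can be expressed in Lean (the name `FLTStatement` and the exact side conditions are illustrative; the actual project's statement may differ in detail):

```lean
-- Fermat's Last Theorem as a Lean proposition. Since a ≠ b
-- unfolds to a = b → False, proving this is a "proof of
-- negation": assume a counterexample and derive False.
def FLTStatement : Prop :=
  ∀ a b c n : Nat, 3 ≤ n → a ≠ 0 → b ≠ 0 → c ≠ 0 →
    a ^ n + b ^ n ≠ c ^ n
```

In this form, the outer "proof by contradiction" is really a proof of negation, which even intuitionistic logic accepts; where classical reasoning is needed, it is in the interior of Wiles's argument, not in the shape of the statement.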
https://math.andrej.com/2010/03/29/proof-of-negation-and-pro...
also
"It’s fine to use a proof by contradiction to show something doesn’t exist. When the assumption that it does exist leads to a contradiction, then that shows it can’t exist.
It’s not so fine to use a proof by contradiction to show something does exist. Here’s the situation. The assumption that it does not exist leads to a contradiction. What can you conclude from that? You would like say “therefore it exists”. But you haven’t got any idea what it is. You may know it’s out there somewhere, but you have no idea how to find it. It would be better to have a proof that tells you what it is.
That’s a difference between what’s called “classical logic” and “intuitionistic logic”. In classical logic, proof by contradiction is perfectly accepted as a method of deductive logic. In intuitionistic logic, proof by contradiction is accepted to show something doesn’t exist, but is not accepted to show something does exist."
David Joyce, https://www.quora.com/In-math-are-there-any-proofs-that-can-...
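The distinction in the quote can be seen directly in Lean. A minimal sketch (toy examples, not drawn from the FLT development):

```lean
-- Proof of negation: to prove ¬P, assume P and derive False.
-- This is constructively fine; no classical axiom is involved.
example : ¬ (0 = 1) := fun h => Nat.noConfusion h

-- Proof by contradiction for an existence claim: assume the thing
-- does not exist and derive False. This genuinely relies on the
-- classical principle Classical.byContradiction : (¬p → False) → p,
-- and in general it gives no way to compute a witness.
example : ∃ n : Nat, n * n = 4 :=
  Classical.byContradiction fun h =>
    h ⟨2, rfl⟩
```

(Here the witness 2 is of course known, which is exactly why the classical detour is unnecessary for this toy example; the intuitionistic objection is to cases where no witness is ever produced.)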