Those other languages are also definitely worth learning. Happily, there's lots of cross-transfer of ideas and skills between them, so learning one will make the others easier. I got my start in dependent types with Software Foundations and Coq, and that was very helpful when learning Agda and Idris later. Similarly, skills from them transferred quite readily to Lean.
Looking forward to reading it over the summer.
Dropping a print version and focusing on epub could very well be faster, as I suspect that an mdbook->epub pipeline is less challenging than creating a quality print-ready PDF. But no plans right now.
* tactics (proof scripts are a lot easier than manual proving)
* syntax extensibility (Racket-like, supports custom elaboration/delaboration)
* mathlib (library of formalized math)
* tooling (I can't say it's better than Idris's, since I haven't used Idris, but it's a lot nicer than Agda's: a rustup-like version manager `elan`, its own build system `lake`, and an official VS Code extension that supports mini web apps which can interact with the code)
Small things I can remember:
* do-notation with early return and imperative loops
* easier termination proofs (provide some expression over the function arguments with `termination_by` and prove it decreases in a `decreasing_by` block, which can be automatic even for non-structural recursion thanks to a built-in tactic)
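A minimal sketch of those last two bullets, assuming Lean 4 syntax (the exact `decreasing_by` goal shape and lemma names vary between Lean versions):

```lean
-- Early return inside do-notation: `return` exits the whole `do` block,
-- not just the loop.
def firstEven? (xs : List Nat) : Option Nat := do
  for x in xs do
    if x % 2 == 0 then
      return x
  none

-- Non-structural recursion: `termination_by` names the decreasing measure,
-- `decreasing_by` proves it shrinks (often discharged automatically).
def gcd' (a b : Nat) : Nat :=
  if h : b = 0 then a
  else gcd' b (a % b)
termination_by b
decreasing_by exact Nat.mod_lt _ (Nat.pos_of_ne_zero h)
```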
Redoing some proofs in Lean by constructing proof terms was eye-opening: it's just functional programming with dependent types! I still think that teaching tactics first, without exposing proof terms, is a mistake.
I've learned to see the value of tactics later, again thanks to Lean: there are proofs that are more natural in the tactic style. Sometimes a mixed approach is ideal: growing intermediate proof terms from the premises and then wrangling the hypotheses with tactics to meet the lemmas.
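For a tiny example of the two styles, here is the same lemma written as a bare proof term and as a tactic script (standard Lean 4; the theorem names are my own):

```lean
-- Proof term: a dependently typed program building the result directly.
theorem and_swap {p q : Prop} (h : p ∧ q) : q ∧ p :=
  ⟨h.right, h.left⟩

-- Tactic script: the same proof grown step by step.
theorem and_swap' {p q : Prop} (h : p ∧ q) : q ∧ p := by
  constructor
  · exact h.right
  · exact h.left
```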
In C and C++, the conditional statement is written using `if` and `else`, while the conditional expression is written with the ternary operator `?` and `:`.
Ending a sentence with a period is a must. But putting a period directly after a discussion of `?` and `:` is... jarring, because it's possible to read `:.` as an atomic construct in the language. Or even worse, as the three-dot pyramid used in some mathematical proof notations.
In C++, the lexical construct you describe is an atomic operator[0] and neither an expression nor a statement:
    The first operand of the conditional operator is evaluated and contextually converted to bool. After both the value evaluation and all side effects of the first operand are completed, if the result was true, the second operand is evaluated. If the result was false, the third operand is evaluated.
0 - https://en.cppreference.com/w/cpp/language/operator_other

    ternary(condition, true-path, false-path)

which executes either true-path or false-path depending on condition. But in notational terms, how "big" the condition and the true and false parts are is rather unconstrained, so you wind up with the `?` and the `:` widely separated. They aren't operators in the higher sense: they're the syntax which forms the ternary operation as a whole over the expression and its condition. Lexing bleeds into syntax, and syntax bleeds into semantics.
`:.` is a possible faux pas when lexing the sentence around the `?...:` construct, because `:.` is an unfortunate juxtaposition.
I don't think "operator in syntax terms" is nearly as well-defined as OP thinks. Why couldn't `?` be parsed as half an operator and `:` as the other half? There's no rule that says a parser has to call each distinct symbol token its own "operator". In fact, I'd argue that the only reason this might seem like a rule is that almost all of the other operators in common use are either unary or binary, making it easy to use a single token for the operator itself. That's why it's called the "ternary operator"; it's the only one that operates on three things! The only alternative to spreading it out across separate tokens with an operand in between is to put multiple operands in a row on one side of it; as confusing as `foo ? bar : baz` might be, I have strong doubts that `foo ?: bar baz` would be less confusing in most real-world cases.
Thanks for the feedback!
I was stunned when I discovered it, because it does exactly what its name suggests: it's a huge body of formalized mathematics, available as a Lean library.
Tooling around it is great (it's installed with a rustup fork called `elan`).
As far as performance goes, it uses a flavor of reference counting that allows in-place mutation through unique references, based on run-time tracking rather than compile-time analysis. If you accidentally keep a reference to an array and then modify it, you end up copying it rather than just mutating it in place.
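A sketch of how that plays out, assuming Lean 4's `Array` API (`set!` mutates in place only when the array is uniquely referenced at run time):

```lean
-- Uniquely referenced: the loop can mutate `a` in place, no copies.
def bumpAll (xs : Array Nat) : Array Nat := Id.run do
  let mut a := xs
  for i in [0:a.size] do
    a := a.set! i (a[i]! + 1)
  return a

-- Here `xs` stays live in the result, so the runtime sees a shared
-- reference inside `bumpAll` and silently copies before mutating.
def keepsOriginal (xs : Array Nat) : Array Nat × Array Nat :=
  (xs, bumpAll xs)
```

The upside is that purely functional code gets destructive updates for free on the hot path; the downside is that an accidental extra reference degrades you to copy-on-write rather than failing loudly.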
Now it has an RC GC and boxes everything >= 64 bits (including struct fields and IO/ST results), and since the compiler isn't polished yet, it is probably significantly slower. In the referenced raytracer repo you can find rendering times compared to the C implementation (Lean is 25x slower, but that was a year ago).