Ada's type system, SPARK, general clarity on behavior, etc. allow you to structure programs in a manner that makes it hard to Hold It Wrong, especially when dealing with embedded firmware that has all sorts of hardware gotchas. I haven't gotten the chance to use the Tasking primitives in anger yet, but I have a strong suspicion that they're going to bail my ass out of a lot of problems just like the rest of the language has.
My team started at a new employer and made the jump from C to Ada for MCU firmware. We control things that spin real fast and our previous experiences with C definitely resulted in some screwups that left you weak in the knees for a bit. There was some initial hesitation but nobody has any interest in going back now. Rust was floated but we're all glad we didn't opt for it -- memory safety on a system that never allocates memory doesn't hold a candle to Ada's bitfield mapping/representation clauses, ranged types, decimal types, reference manual, formal verification options, concern from the powers that be about providing a stable and trustworthy environment for doing shit that you really don't want to get wrong, etc.
https://www.ghs.com/products/ada_optimizing_compilers.html
https://www.ptc.com/en/products/developer-tools/apexada
https://www.ddci.com/products_score/
http://www.irvine.com/tech.html
http://www.ocsystems.com/w/index.php/OCS:PowerAda
http://www.rrsoftware.com/html/prodinf/janus95/j-ada95.htm
And AdaCore sponsors GNAT, with Ada being one of the few official GCC languages for two decades now.
SPARK 2014 itself is the same too AFAIK; the problem is there's a lot of auxiliary static analysis tools and plugins gated behind AdaCore's sales wall (and of course they'd never deign to sell licenses affordable to individuals).
So (at least according to pjmlp in https://news.ycombinator.com/item?id=42548360) Rust also only has a single "wide open" implementation.
Which means Rust doesn't have any "other" open implementation either, right?
Honestly, it's hard to know what of all the pro-Rust stuff one sees (here and elsewhere on-line) to take seriously, when its advocates constantly -- consciously or not -- exaggerate its virtues like this.
That's before we even talk about important stuff like libraries.
I think most applications of Ada are in embedded systems where you don't often want anything not in the standard library.
Note that ranged types, decimal types, etc. can fairly easily be emulated in Rust, with what I find is a clearer error mechanism.
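For instance, a ranged type can be approximated in Rust with a newtype and a checked constructor (a sketch; `Percent` and its bounds are made up for illustration, not taken from any real crate):

```rust
// Emulating an Ada-style ranged type: the invariant 0..=100 is enforced
// by the only constructor, and violations surface as a Result error.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Percent(u8); // invariant: 0..=100

#[derive(Debug, PartialEq, Eq)]
struct OutOfRange(u8);

impl Percent {
    fn new(v: u8) -> Result<Percent, OutOfRange> {
        if v <= 100 { Ok(Percent(v)) } else { Err(OutOfRange(v)) }
    }
    fn get(self) -> u8 { self.0 }
}

fn main() {
    assert!(Percent::new(42).is_ok());
    assert_eq!(Percent::new(150), Err(OutOfRange(150)));
}
```

The error is an ordinary `Result` you handle at the construction site, rather than a `Constraint_Error` raised at whatever assignment happens to violate the range.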
SPARK is, of course, extremely cool :) There are several ways to work with theorem provers and/or model checkers in Rust, but nothing as nicely integrated as SPARK so far.
It also didn't help that for the UNIX vendors that had Ada compilers, like Sun, Ada was an additional purchase on top of the development SKU that already included C and C++.
There are a couple of major advantages that Ada could have over Rust for safe drivers in the Linux kernel:
1. Having the option to use SPARK for fully verified code provides a superset of the correctness guarantees provided by Rust. As mentioned in the parent comment, Rust focuses purely on memory safety whereas SPARK can provide partial or complete verification of functional correctness. Even without writing any contracts, just preventing things like arithmetic overflows is incredibly useful.
2. Representation clauses almost entirely eliminate the need for the error-prone manual (de-)serialisation that's littered all over Linux driver code: https://www.adaic.org/resources/add_content/standards/22rm/h...
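For a sense of what that looks like, here's a hedged sketch of a record representation clause for a hypothetical 8-bit status register (all names and bit positions are made up):

```ada
type Mode_Type is (Off, Slow, Fast, Turbo);

type Status_Register is record
   Enabled : Boolean;
   Mode    : Mode_Type;
   Count   : Natural range 0 .. 31;
end record;

--  Pin every field to exact bit positions; the compiler now does the
--  shifting and masking that C driver code writes out by hand.
for Status_Register use record
   Enabled at 0 range 0 .. 0;
   Mode    at 0 range 1 .. 2;
   Count   at 0 range 3 .. 7;
end record;
for Status_Register'Size use 8;
```

Reading or writing `Reg.Count` then compiles to the right masked access, and assigning an out-of-range value is caught as a constraint violation instead of silently corrupting neighboring bits.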
Do representation clauses let you specify endianness? From a quick glance at that link it didn't appear so. I would imagine that invalidates most use cases for them.
Linus hated Ada. I suspect he doesn't exactly like Rust either but the Tribe is just too strong within Linux.
Why? Do you program in Ada or Coq?
People can't be bothered to track lifetimes; what makes you think they're ready to track pre-/post-conditions and invariants, and to do it efficiently and thoroughly?
But it's easy to figure out why it didn't become popular. C, C++, and every other top-10 language had free compilers available to everyone; Ada didn't during the explosive era of computing expansion. Also, not a problem nowadays with IDE auto-complete/snippets, but the language was too verbose for the older generation of programmers.
What's probably more important is that 80 columns was far and away the likely maximum for a screen, and 40 columns wasn't unheard of. The word PROCEDURE took up 11 to 22% of the column width! This wasn't a show-stopper; Pascal uses a similar syntax (both of them derived from Algol of course) and was pretty popular, but plenty of people complained about Pascal's verbosity as well, and Ada is definitely more verbose than even Pascal.
The lack of autocomplete (even things like snippets were relatively uncommon) didn't help, but mainly, verbosity imposed real costs which are mitigated or irrelevant now.
Whereas Ada's first version is from 1980 and first standardised version (different to 1980) in 1983. Yeah, "ancient."
Ha, you could almost read this as a stuxnet joke
I can't really do the same for Rust, which tends to lend itself to a more confusing format.
Other languages focus on terseness and expressiveness. Ada expresses a bunch of constraints directly and forces you to do things in organized ways (e.g. use namespaces appropriately).
There's a very fine line between nice language syntax and ease of use via tools you use to interact with the language (with APL-influenced languages being the only exception I can think of, but even there I've heard of programmers having physical key map symbols overlaid on keyboards).
The interesting feature of Ada here seems to be what it calls "subtype predicates". As you've explained, these come in a "dynamic" flavor, which are a nice syntax for runtime assertions, and a static flavor, which are compile-time checked but restricted to certain static expressions (per https://ada-lang.io/docs/arm/AA-3/AA-3.2#p15_3_3.2.4).
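Concretely, the two flavors look something like this (a hedged sketch with made-up subtype names; `Day` is an assumed enumeration):

```ada
type Day is (Mon, Tue, Wed, Thu, Fri, Sat, Sun);

--  Static flavor: restricted to static expressions, so the compiler can
--  check membership wherever the value is known at compile time.
subtype Weekend is Day
  with Static_Predicate => Weekend in Sat | Sun;

--  Dynamic flavor: an arbitrary boolean expression, checked at runtime
--  on assignment, conversion, parameter passing, etc.
subtype Even is Integer
  with Dynamic_Predicate => Even mod 2 = 0;
```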
An example of something you can do in a dependently typed language is write a sorting function in such a way that the type checker proves that the output will always be in sorted order. I am pretty sure this cannot be done in Ada; checking at runtime does not count!
I do believe (having heard from multiple sources) that Ada's type system was ahead of its time and its success in creating practical programs that are likely to be correct is probably underrated. But I'm not here just to legislate semantics; one should be aware that there is something vastly more powerful out there called "dependent types" (even if that power is not likely to come into most people's day-to-day).
(Unfortunately Wikipedia is quite poor on this topic; you will see, for example, that on the Talk page someone asked "Is Ada really dependently typed?" two years ago; no reply. And it makes no sense to say that Ada has "tactics" but not "proof terms"; tactics are a way of generating proof terms. There are many better resources out there (especially ones associated with the languages Agda, Coq (currently being renamed Rocq), and Lean, e.g. https://lean-lang.org/theorem_proving_in_lean4/dependent_typ...). But be warned, there is no "short version": dependent types cannot be explained in a sentence, and they are not something you will arrive at with enough "hacking away".)
It actually can be done in Ada, though not purely with the type system; instead we rely on SPARK, which translates Ada code into verification conditions and passes them through various automatic theorem provers. Some examples of fully proven sorting functions are here: https://github.com/AdaCore/spark2014/blob/master/testsuite/g...
You can also see from the above code just how good theorem provers and SPARK are now, given the reasonably low number of assertions required to prove both that the output is sorted and that the input and output contain the same elements, not to mention all the hidden proofs relating to integer overflow, out-of-bounds access, etc.
You could maybe do all this with types and SPARK, but it's not the approach that would usually be taken.
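For flavor, the contract style in such SPARK code looks roughly like this (a hedged sketch; `Sorted`, `Sort`, and `My_Array` are hypothetical names, not the actual code from the linked repo):

```ada
--  Ghost code exists only for proof and is erased from the binary.
function Sorted (A : My_Array) return Boolean is
  (for all I in A'First .. A'Last - 1 => A (I) <= A (I + 1))
with Ghost;

procedure Sort (A : in out My_Array)
  with Post => Sorted (A);
--  GNATprove then tries to discharge the postcondition (plus the
--  implicit overflow and bounds checks) with automatic provers,
--  entirely at analysis time rather than at runtime.
```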
It does look like the "discriminant" system of Ada shares key similarities with what dependently-typed languages call a "dependent sum", a generalized record type where "later" elements of the record can depend on "earlier" ones.
"Dependent products", on the other hand, can be viewed as an extended version of generic functions, although they may also suffice to account for e.g. the examples given by OP of Ada's runtime range types and runtime-sized arrays. The key here being that the representation of a type is indeed given as a "function" of the value parameters that are "depended" on.
Another example of a language with limited dependent typing is Sail. It has "lightweight" dependent types for integers and array lengths (pretty similar to Ada from what it sounds like).
It's very good in my experience - it lets you do a lot of powerful stuff without having to have a PhD in formal verification (again, sounds similar to Ada).
> An example of something you can do in a dependently typed language is write a sorting function in such a way that the type checker proves that the output will always be in sorted order.
Yeah, you can't do that, but you can have the type checker say things like "n^m is positive if n is positive or even" or "foo(x) -> (bits(n), bits(m)) with m+n=x" (not the actual syntax). I'm pretty sure you can't do that stuff in a type system without dependent types, right?
(So when Sail says it has "lightweight dependent types", that seems fine to me (it does seem to do more than it could with simple types or polymorphic types), but if it simply asserted that it "had dependent types" I would feel misled.)
The wording is subtle and language does change, but what I want to push back on is the confusion I see from time to time that "if I can write anything that looks like a function from values to types, I have the same thing that everybody talking about dependent types has". If you think this you don't know what you're missing, or even that there is something you're missing, and what you're missing is very cool!
Would you say Lean is a somewhat learnable language for somebody who only has cursory exposure to functional programming and static types? I’ve almost exclusively used typescript for the last few years, except for some clojure in the last few months.
Sometimes I find a neat language, but my very TS-oriented brain has a hard time getting into it.
In between ad hoc types like TypeScript and dependently-typed languages like Agda, Coq/Rocq, and Lean are well-typed, polymorphic (but not dependent) languages like OCaml, F#, or Haskell ("ML family" or "Hindley-Milner" are related terms). Those are what I'd suggest checking out first!
The first thing I tried to do after learning all the basic syntax is write a function f: Nat —> Set (from numbers to types), and then stick it in a function signature (i.e. g: f (3).)
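In Lean 4 syntax that experiment looks roughly like this (a sketch, with made-up names):

```lean
-- A function from values to types...
def f : Nat → Type
  | 0     => Unit
  | _ + 1 => Nat

-- ...used in a signature: `f 3` reduces to `Nat`,
-- so `g` is just a natural number.
def g : f 3 := 42
```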
[1] https://people.inf.elte.hu/divip/AgdaTutorial/Index.html
Here is some previous discussion:
Show HN: Getada: rustup-like installer for Ada's toolchain/package manager 194 points | 115 comments: https://news.ycombinator.com/item?id=40132373
There is a quickstart with a link to a large tutorial: https://www.getada.dev/how-to-use-alire.html
If you wanna try out Ada without even installing anything, you can also check out https://learn.adacore.com/
type My_Record (A, B : My_Integer) is record
X : My_Array (1 .. A);
Y : My_Array (1 .. B);
end record;
A record that's created from this will have those arrays tightly packed rather than leaving space at the end of the first one like you might expect (this might be compiler-dependent, but it's definitely how GCC does it). Also note that these values can be set at runtime, so this isn't just a generic in disguise (although in Ada you can also instantiate generics at runtime).

If you do microcontroller firmware development, I'd say it's perfectly reasonable to float it for a smaller project and just give it a spin. The language is significantly more modern/sane than C, so you're not really exposing yourself to much talent risk. There are no gaping holes in the environment, experienced firmware devs will adjust easily, and new devs will feel more at home with the facilities provided.
In a language with dependent types you don't merely have arrays with arbitrary bounds; you get a proof that array access is always within bounds. AFAICT this is missing in Ada, even when SPARK is employed. Similarly with bounded integers. In a DT language you don't get runtime bounds checks; it's all compile-time proofs.
In Ada, even the name of the feature makes it pretty clear that it's just runtime verification:
type My_Record (Top, Bottom : My_Integer) is record
Field : My_Array(Bottom .. Top);
end record
with Dynamic_Predicate => Bottom <= Top;
It's not a dynamic predicate in either a language with DT or one with refinement types! It's all compile-time proofs!

SPARK does attempt to prove certain properties of programs automatically, including things like bounds checking, which is great, but it's all best effort and it's nowhere near the same level of capability as DT (or even refinement types). Of course it's far more lightweight (although it can be argued that systems based on refinement types are just as lightweight).
It's very clear that people developing SPARK know a lot about type theory and formal verification, and use as much of the theory as possible to make verification of Ada programs as cheap and ergonomic as possible, but to claim that Ada has DT is quite a stretch. People are using DT to do formal verification of C programs but that doesn't mean that C has dependent types.
But the way you get 'proofs' in dependently-typed languages is just by building a tree of function evaluations that demonstrate a given logical inference at compile time, and that disappear (are translated to a unit type) when 'program extraction' is performed on the code. This is ultimately a kind of meta-programming. And in a DT language you would still need runtime checks within your extracted code to ensure that program inputs provided at runtime satisfy the logical conditions that the "proof" part of the program relies on: the need for runtime checking is thus reduced to a bare minimum, but not fully eliminated.
There is a connection between advanced type systems and metaprogramming, and you don't even need dependent types to reach it: GHC can express, for example, typed symbolic differentiation of compiled terms[1], something that would be of interest to a Lisp programmer. This is not a surprise; System Fω has a copy of the simply typed lambda calculus at the type level.
> But the way you get 'proofs' in dependently-typed languages is just by building a tree of function evaluations that demonstrate a given logical inference at compile time, and that disappear (are translated to a unit type) when 'program extraction' is performed on the code.
As someone who works on a dependently-typed language, I have no idea what you mean. Programs in most DT languages run directly (Lean, Idris, Agda, etc.); code extraction is a Coq thing (and Isabelle, but Isabelle does not use DT). Some languages have type erasure, some don't. Some are explicitly concerned about type erasure (Idris), some aren't.
> And in a DT language you would still need runtime checks within your extracted code to ensure that program inputs provided at runtime satisfy the logical conditions that the "proof" part of the program relies on: the need for runtime checking is thus reduced to a bare minimum, but not fully eliminated.
In a typed language you need to parse the input exactly once and construct a typed term. The type system itself will prevent you from constructing an ill-typed term -- that is, the parsing routine itself is type checked (at compile time). Yes, parsing needs to happen, and that is true for any language, not just a DT one, but the act of parsing itself produces a dependently-typed term; there are no additional checks happening on the term while it is being used at runtime. The fact that a parsed term cannot, for example, cause an integer overflow inside your program no matter what is quite a massive guarantee.
[1] https://mail.haskell.org/pipermail/haskell/2004-November/014...
I think the lines between what some consider true DT and what is possible w/ Ada might be more blurred than people expect.
It's just a static buffer that you can use for temporary allocations, e.g. to return an array of 50 int32's to a parent stack frame you can just allocate `50*sizeof(int32)` bytes on that buffer and return a pointer to the beginning, instead of a heap-allocated `std::vector<int32>`. Every allocation advances the head of the buffer, so you can have multiple allocations alive at the same time.
Every once in a while you need to free the buffer, which just means resetting its head back to the start - in the case of games the beginning of a new frame is a convenient point to do that.
This doesn't solve dynamic memory in general, as it can't be used for long-lived allocations, it just saves you a bunch of temporary heap allocations. It doesn't solve all problems that RAII does either, as RAII is often used for resources other than memory - files, sockets, locks, ...
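The scheme described above can be sketched in C (names are made up; a real implementation would be more careful about alignment requirements and ownership):

```c
#include <stddef.h>

typedef struct {
    unsigned char *base;   // backing storage, e.g. one big static buffer
    size_t head;           // offset of the next free byte
    size_t capacity;       // total size of the buffer
} Arena;

// Bump-allocate `size` bytes, 8-byte aligned; returns NULL when full.
void *arena_alloc(Arena *a, size_t size) {
    size_t aligned = (a->head + 7u) & ~(size_t)7u;
    if (aligned + size > a->capacity) return NULL;
    void *p = a->base + aligned;
    a->head = aligned + size;
    return p;
}

// "Freeing" is just resetting the head, e.g. once per frame.
void arena_reset(Arena *a) { a->head = 0; }
```

Returning those 50 int32's to a parent frame is then one `arena_alloc` call, with no per-allocation heap traffic and no destructor bookkeeping.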
[1] https://www.rfleury.com/p/untangling-lifetimes-the-arena-all...
For example, when a function returns a thing on the second stack the compiler can arrange that, before returning, the thing is moved to the second stack position that was at the start of the function. This releases the memory that was used for other things by the callee. Then the caller knows the second stack depth after the call and can release its second stack usage just as well.
But using the second stack is just simpler, avoids the extra copies, and is more compatible with the mainstream ABI.
All my knowledge on it comes from some documentation on configuring the secondary stack for embedded targets and from comments+diagrams in the GCC GNAT source code (and maybe I saw a conversation on it on the comp.lang.ada Usenet group at some point…): https://docs.adacore.com/gnat_ugx-docs/html/gnat_ugx/gnat_ug... https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=gcc/ada/libgnat/...