Syntax like `A | B(Int) | C(String)` means: A, B, or C.
> Where by extension: class-based inheritance is actually pretty simple to understand. The classic "IS-A" relationship isn't as simple as "fields in a struct", but it's not hard to understand
The value is either `A`, an Int (`B(Int)`), or a String (`C(String)`). Or: the knapsack contains either an A, a B, or a C. Difficult?
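A minimal sketch of the idea, written as a Rust enum (the names `A`, `B`, `C` come from the syntax above; `Value` and `describe` are made up for illustration):

```rust
// The `A | B(Int) | C(String)` shape as a Rust enum.
enum Value {
    A,
    B(i32),
    C(String),
}

// `match` forces you to handle every variant of the "knapsack".
fn describe(v: &Value) -> String {
    match v {
        Value::A => "just A".to_string(),
        Value::B(n) => format!("an Int: {}", n),
        Value::C(s) => format!("a String: {}", s),
    }
}

fn main() {
    for v in [Value::A, Value::B(42), Value::C("hi".to_string())] {
        println!("{}", describe(&v));
    }
}
```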
> (c.f. all the animal analogies),
Reminds me that software typically isn’t as static as animal taxonomies.
> and the syntax for expressing it is pretty clean in most languages.
I’m most used to Java, where you spread `extends` over N files. (Sealed classes in Java are an exception.)
It’s fine. I just don’t see how it’s particularly clean.
> Is it the "best" way to solve a problem? Maybe not. Neither are ADT sum types.
Is this an argument? ’Cause I don’t see it.
> But I think there's a major baby-in-the-bathwater problem
Inheritance needs some concrete implementation affordance in the language. Baby in the bathwater? I don’t see how you bolt inheritance onto the struct model in a way that stays out of the way for people who don’t want to use it (zero-cost abstraction, in the "you don’t pay for what you don’t use" sense, matters to some low-level languages).[1]
Maybe the designers of hypothetical language X think that algebraic data types are enough. What baby are you missing then?
[1] For algebraic data types: structs (the product types) are straightforward enough, and the "sum type" can be implemented by reserving enough space for the largest variant plus a tag. That one-size-fits-all strategy isn’t perfect for all use-cases, but it seems to have been good enough for Rust, which has a lot, a lot of design discussions over minutiae.
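The "space for the largest variant" layout can be observed directly with `std::mem::size_of` (the enum `E` and its variants here are hypothetical; exact sizes and padding are compiler-dependent, the only guaranteed point is that the enum is at least as big as its largest variant):

```rust
use std::mem::size_of;

// One small variant and one 32-byte variant.
enum E {
    Small(u8),
    Big([u8; 32]),
}

fn main() {
    // Construct both variants so nothing is dead code.
    let _ = E::Small(0);
    let _ = E::Big([0; 32]);

    // The enum must be able to hold the largest variant (plus a discriminant).
    assert!(size_of::<E>() >= size_of::<[u8; 32]>());
    println!("size_of::<E>() = {}", size_of::<E>());
}
```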
> trying to be different.
With a technology from the ’70s. I also saw your “one oddball new idea is a revolution” snark. You’re clearly being very honest.