I believe it was written by an HN commenter, and accumulated 67 comments: https://news.ycombinator.com/item?id=10606355
It comes with implementation details, and an undergrad should be able to understand it.
Consider an OO language with classes and inheritance. Each class is a type. (There may be types that aren't classes; for example, in Java an interface also represents a type. The equivalent in a dynamic language with duck typing is "all objects that satisfy this contract".) An object belongs to the type of its class, and to the type of every class it inherits from. (For instance an integer is an integer, a number, and an object.)
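A minimal Java sketch of that last point (the class names are just the standard library's, nothing from the original comment): one value inhabits every type up its inheritance chain.

```java
// An Integer value belongs to its own class's type and to every supertype:
// Integer <: Number <: Object. Each assignment below is an implicit upcast.
public class TypeHierarchy {
    public static void main(String[] args) {
        Integer i = 42;
        Number n = i;   // an integer is a number...
        Object o = n;   // ...and an object
        System.out.println(o.equals(42)); // still the same value underneath
    }
}
```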
So far, so good.
Covariance occurs when we may freely supply objects of any subtype. For instance we can insert integers into a list of numbers.
Contravariance occurs when we may freely treat an object as some supertype. For instance it is safe to assume that the numbers in our list of numbers are objects.
The problem is that we can almost never do both. For example in Java you can put integers into a list of numbers, but you can't read numbers out of a list of numbers and assume that they will be integers! (Not even if you only put integers in - the type system won't let you do it.)
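Here is a small Java sketch of that trade-off (not from the original comment). A plain `List<Number>` accepts integers going in but only hands back `Number` coming out, and Java's wildcards force you to pick one direction of variance explicitly:

```java
import java.util.ArrayList;
import java.util.List;

public class VarianceDemo {
    public static void main(String[] args) {
        List<Number> numbers = new ArrayList<>();
        numbers.add(42);              // an Integer goes in freely
        Number n = numbers.get(0);    // reads come back as Number...
        // Integer i = numbers.get(0);  // ...but NOT as Integer: compile error

        // A covariant view: safe to read Numbers, forbidden to insert.
        List<? extends Number> readOnly = numbers;
        // readOnly.add(1);           // compile error

        // A contravariant view: safe to insert Integers, reads degrade to Object.
        List<? super Integer> writeOnly = numbers;
        writeOnly.add(7);
        Object o = writeOnly.get(0);
        System.out.println(n + " " + o); // prints "42 42"
    }
}
```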
So, which do we want? Well, sometimes one, and sometimes the other. For example the Liskov substitution principle says that an object of a subtype should be usable anywhere we can use an object of the original type. Which means that if we override a method in a subclass, we are OK if we change the method's signature to accept a supertype, but are breaking the rule if we change it to require a subtype of the original.
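Java actually sidesteps this rule by making override parameters invariant: "change" the parameter type and you get a separate overload, not an override. A hypothetical sketch (the `Feeder` classes are mine, for illustration):

```java
// In Java, an overriding method must keep the exact parameter type.
// Declaring a narrower parameter in the subclass creates an OVERLOAD,
// so dynamic dispatch through a supertype reference never reaches it.
class Feeder {
    String feed(Number n) { return "fed number " + n; }
}

class IntFeeder extends Feeder {
    // Does NOT override feed(Number); it's an unrelated overload.
    String feed(Integer i) { return "fed integer " + i; }
}

public class LspDemo {
    public static void main(String[] args) {
        Feeder f = new IntFeeder();
        System.out.println(f.feed(42)); // resolves against Feeder's signature:
                                        // prints "fed number 42"
    }
}
```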
Unfortunately it sometimes makes sense to have a subtype override a method and require a subtype be passed in. The paper offers a graphics example involving colors.
When we have this, we have three options:

1) Disallow it, because the type system can't easily guarantee that things won't break. (Static languages like Java mostly do this.)

2) Assume that the programmer isn't an idiot, then throw run-time errors if the programmer was. (Most dynamic languages do this.)

3) Build a sophisticated type system that can figure things out and reason about problem cases in a clever way. (This is what the author would like language designers to do.)
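Option 2 can be sketched even in Java by downcasting, loosely in the spirit of a points-and-colors example (the classes here are hypothetical, not the paper's actual code): the subtype really wants a subtype argument, trusts the caller, and blows up at run time when the trust was misplaced.

```java
// "Assume the programmer isn't an idiot, throw at run time if they were."
class Point {
    boolean sameSpot(Point other) { return true; } // stub for illustration
}

class ColorPoint extends Point {
    @Override
    boolean sameSpot(Point other) {
        // The subtype really needs a ColorPoint here, so it downcasts --
        // which throws ClassCastException when handed a plain Point.
        ColorPoint cp = (ColorPoint) other;
        return true;
    }
}

public class RuntimeCheckDemo {
    public static void main(String[] args) {
        Point p = new ColorPoint();
        try {
            p.sameSpot(new Point()); // looks fine statically...
        } catch (ClassCastException e) {
            System.out.println("boom"); // ...fails dynamically
        }
    }
}
```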
Unfortunately for the author, there is a chicken and egg problem here. Few programmers understand the sophisticated type system required for the reasoning solution, or can understand the weird errors that the type system can give you to say why it won't let you do something stupid. So developers shy away from languages that provide such types. Therefore there is little demand for languages that provide it.
As a result language designers have little reason to do anything other than either simple type systems with easy to understand checks, or dynamic dispatch with run-time errors. Which is a frustration to people who have put effort into how to have clever type systems that provide both programming flexibility and automatically catch common classes of errors.
Somehow Haskell and OCaml programmers manage to get by! OCaml has proper variance management built into the core language. Similarly, in GHC Haskell with Rank2Types (or any extension subsuming it) enabled, this is what lets you say things like "every Lens is a Traversal": a Lens (resp. Traversal) has a Functor (resp. Applicative) constraint in contravariant position in what's otherwise the same type, and Applicative is a subclass of Functor, so Lens is a subtype of Traversal.
The notions of covariance and contravariance are too natural and useful to get rid of. If your type system doesn't have them, people will work around it to express as much variance as they need. Except the workarounds will be clumsy, ad hoc and most likely incorrect.
Seriously, the average programmer trying to learn Haskell starts with wanting to print "Hello, world", eventually winds up at a tutorial about monads, then retires with their head spinning. Haskell stays on the "I should learn that some day" bucket list, unlearned.
This is not to say that there aren't plenty of people who do learn these languages. But now we have another problem. One of the biggest reasons to use a language is available libraries. Because of the initial barriers to entry for these more sophisticated languages, there is a smaller pool of people writing useful libraries. Which means in the real world that when you want to get something done, you'll be more likely to find what you need pre-written if you use a more mainstream language.
Just to get a sense, in the (admittedly highly flawed) TIOBE index, the top language with a strong inference system is Scala, and the next is F#, then Haskell, and nothing else is in the top 50. The sum of popularities for these three would tie with Groovy at #18.
I have never written anything more than a toy program in any of these languages. I doubt I ever will.
Sure, lots of things in the real world are naively modelled as "is-a" relationships, e.g.
interface Fruit {
boolean isSoft();
}
class Apple implements Fruit {
boolean isSoft() { ... }
}
class Banana implements Fruit {
boolean isSoft() { ... }
}
But this is both less explicit and less flexible than modelling it as a "has-a" relationship, e.g.

interface Fruit {
boolean isSoft();
}
class Apple {
Fruit asFruit() { ... }
}
class Banana {
Fruit asFruit() { ... }
}
The latter example needs no subtyping, and thus no covariance and no contravariance. All for the price of an explicit .asFruit() here and there instead of an implicit upcast.

(0) Inheritance is a (rather undisciplined) form of code reuse - it's literally automation for copying and pasting part of an existing definition into the body of another. It doesn't presuppose a notion of type.
(1) Subtyping is a semantic relationship between two types: all terms of a subtype also inhabit its supertype(s).
There's nothing too wrong with inheritance as long as you're aware that it doesn't always lead to the creation of subtypes. This is, for example, the case in OCaml.
Sadly, Java, C# and C++ confuse matters by conflating classes with types (which is tolerable) and subclasses with subtypes (which is a logical absurdity and leads to painful workarounds, I mean, design patterns, as we all have learnt the hard way).
Indeed it is a good idea to use composition whenever feasible. But your problems aren't over. Suppose you write a method that can accept anything that implements the Fruit interface. You've got covariance again. Suppose you have a dictionary whose values are of type Apple. You can pass those values into that method. That's contravariance again. And so it goes.
It doesn't matter whether you're defining types by classes, or what interfaces you implement. You will have types of some sort, and as soon as you do, you have covariance and contravariance as concepts again.
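To make that concrete, here is a sketch reusing the Fruit/Apple "has-a" design from above (the `describe` method and the lambda body are my additions, purely illustrative). Even without any subclassing, a method written against the `Fruit` interface immediately raises the same variance questions:

```java
import java.util.HashMap;
import java.util.Map;

interface Fruit {
    boolean isSoft();
}

class Apple {
    // Fruit has a single abstract method, so a lambda can implement it.
    Fruit asFruit() { return () -> false; } // say apples are firm
}

public class CompositionDemo {
    // Accepts anything that implements Fruit -- variance is back already.
    static String describe(Fruit f) {
        return f.isSoft() ? "soft" : "firm";
    }

    public static void main(String[] args) {
        Map<String, Apple> orchard = new HashMap<>();
        orchard.put("gala", new Apple());
        // Values typed as Apple flow into the Fruit-accepting method:
        for (Apple a : orchard.values()) {
            System.out.println(describe(a.asFruit())); // prints "firm"
        }
    }
}
```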