Using the word "inherits" might be misleading (as might "extends", which I used). We don't particularly care whether any actual behaviour/implementation gets re-used.
The core meaning of A < B is the is-a relationship. That is to say, a value of type B is also a value of type A.
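A minimal Java sketch of that is-a relationship, using hypothetical classes named after the text (B declared as a subclass of A, so every value of type B is also a value of type A):

```java
// Hypothetical classes matching the text: B is-a A.
class A {}
class B extends A {}

public class Main {
    public static void main(String[] args) {
        A a = new B(); // a B can appear wherever an A is expected
        System.out.println(a instanceof B);
    }
}
```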
I assume you agree that my first example ought to type-check (although, as the second example shows, there is an argument to be made that it shouldn't). The question is: how does it type-check?
In the last line, we call writeFirst(as, b). Here, the first argument has type List[A].
However, writeFirst is declared as taking a first parameter of type List[B]. The fact that this works means that List[A] is-a List[B], which is the defining feature of List[B] < List[A].
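Sketched in Java (hypothetical classes A and B; Java's invariant generics stand in for a language that has made no variance decision, so the interesting call is rejected outright):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical classes matching the text: B is-a A.
class A {}
class B extends A {}

public class Main {
    // writeFirst as originally declared: its first parameter is a List<B>.
    static void writeFirst(List<B> xs, B x) {
        xs.set(0, x);
    }

    public static void main(String[] args) {
        List<B> bs = new ArrayList<>(List.of(new B()));
        writeFirst(bs, new B()); // fine: exact type match

        List<A> as = new ArrayList<>(List.of(new A()));
        // writeFirst(as, new B()); // rejected: Java's generics are invariant,
        // so List<A> is not a List<B>. Accepting this call is precisely the
        // variance decision discussed above.
        System.out.println("List<B> call ok");
    }
}
```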
If this were not the case, then we would have needed to define writeFirst with a type along the lines of:
writeFirst[X < B](xs: List[X], x: B): xs[0] = x
Where we explicitly declare a bound on the list's type parameter: by the convention above, X < B means that a value of type B is also a value of type X, i.e. X is a supertype of B, which is exactly what the write xs[0] = x requires. In this case List[A] can be used not because of the language's decision on variance, but because the call explicitly type-checks with X = A.
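Java cannot place a lower bound on a named method type parameter, but a `? super` wildcard expresses the same constraint: the element type of xs must be able to hold a B. A minimal sketch, again with hypothetical classes A and B:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical classes matching the text: B is-a A.
class A {}
class B extends A {}

public class Main {
    // <X super B> is not legal Java, but the wildcard says the same thing:
    // xs may be a List of any type into which a B can be written.
    static void writeFirst(List<? super B> xs, B x) {
        xs.set(0, x);
    }

    public static void main(String[] args) {
        List<A> as = new ArrayList<>(List.of(new A()));
        B b = new B();
        writeFirst(as, b); // compiles, with the element type taken as A
        System.out.println(as.get(0) == b);
    }
}
```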
Note that this actually changes the return type. In my original example writeFirst(as,b) would have a natural return type of List[B]. However, in this new example, the natural return type would be List[A].
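Under the construct-a-new-list reading, the return-type point can be sketched in Java. One difference: here x is typed as X rather than B, so type inference picks X = A at the call site and the result is a List<A>:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical classes matching the text: B is-a A.
class A {}
class B extends A {}

public class Main {
    // The "functional" writeFirst: it builds a new list rather than
    // mutating, so the natural return type is List<X>.
    static <X> List<X> writeFirst(List<X> xs, X x) {
        List<X> result = new ArrayList<>(xs);
        result.set(0, x);
        return result;
    }

    public static void main(String[] args) {
        List<A> as = List.of(new A(), new A());
        // X is inferred as A, so the result is a List<A>,
        // even though the value written is a B.
        List<A> updated = writeFirst(as, new B());
        System.out.println(updated.get(0) instanceof B);
    }
}
```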
This second example is closer to how ML-style languages work.
EDIT: It occurs to me that I was thinking a bit too functionally here. The natural return type of writeFirst is void in (most?) OOP languages, because arrays are mutable. What I wrote assumed that the natural meaning of writeFirst was to construct a new list that replaces the first element.
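Java happens to illustrate both halves of this: over a mutable array the natural writeFirst returns void, and Java's covariant arrays show what mixing variance with mutation costs — every array store has to be checked at run time:

```java
public class Main {
    // With a mutable array, writeFirst naturally returns void.
    static void writeFirst(Object[] xs, Object x) {
        xs[0] = x;
    }

    public static void main(String[] args) {
        // Java's arrays are covariant: a String[] is-a Object[].
        Object[] xs = new String[] { "first" };
        try {
            writeFirst(xs, 42); // autoboxed Integer into a String[]...
            System.out.println("stored");
        } catch (ArrayStoreException e) {
            // ...caught only at run time, by the JVM's per-store check.
            System.out.println("ArrayStoreException");
        }
    }
}
```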