You don't want to litter your code with "f150.ford.car.vehicle.object.move(50, 50)". You can and should re-implement "move" so that you only have to call "f150.move(50, 50)", but that still requires boilerplate, just in the "F150" class.
Often you have a class containing all of the functionality of another class, plus a bit more. You can always use composition, but this happens so often that you end up creating a lot of boilerplate.
You could develop some other "syntax sugar" to replace inheritance. Maybe Haskell's type-classes are better (although they also kind of use inheritance, since there are subclasses). But chances are you'll go back to something like inheritance, because it's very useful very often.
However, depending on which stack one is using (VB 6, .NET, MFC, ATL, WRL, WinRT), the amount of boilerplate to deal with the runtime differs.
class F150(@delegate private val underlying: Car) { ... }
class F150(private val underlying: Car) : Car by underlying { ... }
// etc.
https://kotlinlang.org/docs/delegation.html
With it, your F150 can say it implements the "Movable" interface just by stating which field it contains that implements it, and then you can call "f150.move".
E.g. something like:
class MyClass:
    def __init__(self, member_class):
        self.member_class = member_class

    # Delegate one member
    delegate move member_class.position.move

    # Delegate all members
    delegate * member_class.position.*

Then: a.move == a.member_class.position.move
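A rough approximation of the "delegate all" case is already possible in Python today via __getattr__; the class and attribute names here are hypothetical, matching the sketch above:

```python
class Position:
    def move(self, x, y):
        return (x, y)

class MemberClass:
    def __init__(self):
        self.position = Position()

class MyClass:
    def __init__(self, member_class):
        self.member_class = member_class

    def __getattr__(self, name):
        # Called only when normal lookup fails, so attributes defined
        # on MyClass itself still win; everything else is delegated.
        return getattr(self.member_class.position, name)

a = MyClass(MemberClass())
print(a.move(1, 2))  # delegated to a.member_class.position.move
```

The downside compared to dedicated `delegate` syntax is that the delegation is invisible to static tooling, which is exactly the boilerplate/discoverability trade-off being discussed.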
etc.

obj->foo()

will expand into enough -> dereferences until a foo is found. For instance, suppose the object returned by obj's operator->() function doesn't have a foo member, but itself overloads ->. Then that overload will be used, and so on.

class Base:
    def func(self):
        print("In Base.func:", self.name)

class Child:
    def __init__(self, name):
        self.name = name

    func = Base.func

c = Child("Foo")
c.func() # => In Base.func: Foo

Edit: Totally with you on boilerplate though. +1.
"If [some fact] in the code base needs to change, how many places would we have to change it in?"
If the answer is > 1, you have a very good DRY case. Otherwise, when [some fact] changes, it will probably not be changed in one of the places, and the system will be broken.
This often coincides with having an "elegant codebase", but that's not the most important part.
I followed every principle of good code except one: DRY. I tried to make generic parts for connectors, because they do have similarities. But this is a work of at least a year, and the price of making it generic was increasingly complex configuration files (pagination alone added 3 variables for two different APIs, and the number of apps I am supposed to interact with should grow to ~40). After a few days of reflection, I decided the idea was not dumb in principle, but unworkable in my case, and settled on one connector per API, even with a lot of repetition.
https://docs.scala-lang.org/scala3/reference/other-new-featu...
I really enjoyed the article above, which I read many years ago (before Rust 1.0!) which discusses how Golang and Rust handle polymorphism and code-reuse without classic object inheritance. My current thinking is that software objects are a general-purpose tool, but classic object inheritance should rarely be used as it is a solution to a narrow problem—classes should be "final" by default, and if not the inheritance pattern should be completely designed up front.
Java had the misfortune to be designed at a time when OOP was the new craze and the design decision to force all code into an object hierarchy has not held up well. I'd rather use languages designed either before or after Java, where you can use objects when they are appropriate and ignore them when they aren't.
It makes it difficult to jump into an unfamiliar project
Assuming that’s what you mean by signature/ interfaces
IMO, the case where inheritance makes the most sense is when you have a set of objects polymorphically answering some question, usually with a simple answer.
class Subset
  class Whole < Subset
    def of(items)
      items
    end
  end

  class Range < Subset
    def initialize(from:, to:)
      @from = from
      @to = to
    end

    def of(items)
      items[@from...@to]
    end
  end
end
which is used as such:

subset = Subset::Whole.new
p subset.of(["a", "b", "c"]) # => ["a", "b", "c"]

subset = Subset::Range.new(from: 0, to: 1)
p subset.of(["a", "b", "c"]) # => ["a"]
You can then pass around a Subset object anywhere (aka dependency injection) and push conditionals up the stack as far as possible.

Simply saying "inheritance is bad" gets nobody anywhere.
And the text supports that. The "general inheritance" the author describes is not the one you've just used.
And I'm hijacking your post, sorry, but I really agree with the author on the "incidental inheritance" point. This is the worst. I lost a month to a bug caused by this kind of inheritance (a Jenkins package that tried to be cute and interfered with a Cloudbees class). I won't take a Java gig ever again. Not worth the brain damage.
In such a language (e.g. Ruby), you will need test suites where languages with (strong) types use the type system to prove some level of correctness.
I used to be a fan of dyn typed langs (Ruby), but I've changed, I prefer strongly typed langs now for anything more than quick throw away scripts.
fn subset(superset, start, end){
    // superset is type inferred as long as it supports the [] operator
    // logic to collect superset[start] to superset[end] into an array and return it
}

with uniform function call syntax: [1,2,3,4,5,6].subset(1,4) == [2,3,4,5]

If you really want to reuse a subset range, you can use lambdas/closures, or in this case a simple wrapper:

// in some code
fn subset1to4(superset){
    return subset(superset,1,4)
}

array.subset1to4()
anotherArray.subset1to4()

As another example I've encountered in the past, let's say you have some object that can dynamically define fields. Once you define a field, you can retrieve its value or maybe some default value e.g.
model = Model.new
model.define("points", default: 1)
model.store("points", 10)
points = model.retrieve("points")
puts points # => 10
Let's say doing anything with an undefined field is invalid. Here's my first pass at an implementation:

class Model
  def initialize
    @fields = {}
  end

  def define(name, default: nil)
    @fields[name] = Field.new(name, default)
  end

  def retrieve(name)
    @fields[name].value
  end

  def store(name, value)
    @fields[name].value = value
  end
end

class Field
  attr_reader :name
  attr_accessor :value

  def initialize(name, value)
    @name = name
    @value = value
  end
end
Works great! One day a requirement comes along that default values need to be lambdas, too, which are called every time the value is retrieved. How do we implement that? One way is to add a conditional to the Field class:

class Field
  attr_reader :name
  attr_writer :value

  def initialize(name, value)
    @name = name
    @value = value
  end

  def value
    if @value.is_a?(Proc)
      @value.call
    else
      @value
    end
  end
end
But now Field knows that it can be passed a lambda, so testing it needs to account for that case (among many other considerations, probably, in a real-world system). And any time we add more cases for default values, let alone changes to regular values like type casting or something, the Field class becomes more complicated. I'd probably reach for a new object instead:

class Model
  def initialize
    @fields = {}
  end

  def define(name, default: nil)
    @fields[name] = Field.new(name, nil, Default.for(default))
  end

  def retrieve(name)
    @fields[name].value
  end

  def store(name, value)
    @fields[name].value = value
  end
end

class Field
  attr_writer :value

  def initialize(name, value, default)
    @name = name
    @value = value
    @default = default
  end

  def value
    if @value.nil?
      @default.value
    else
      @value
    end
  end
end
class Default
  def self.for(indicator)
    if indicator.is_a?(Proc)
      Default::Dynamic.new(indicator)
    elsif indicator.nil?
      Default::None.new
    else
      Default::Static.new(indicator)
    end
  end

  class Static < Default
    def initialize(value)
      @value = value
    end

    def value
      @value
    end
  end

  class Dynamic < Default
    def initialize(callable)
      @callable = callable
    end

    def value
      @callable.call
    end
  end

  class None < Default
    def value
      nil
    end
  end
end
Now we've changed the conditional in the Field class to one that's actually relevant to it (do I have a value yet?) and won't change when the kinds of default values it can accept change. Because we dependency-injected the Default object into the Field object, testing that conditional becomes a binary: retrieving the default value when no value is set, and retrieving the value once it's set. We can then test each kind of Default on its own, and changes to Default don't impact Field. If we really, really wanted to, we could even eliminate the conditional in Field altogether by unifying the interface for @default and @value such that they're both objects with a #value method (or maybe rename it to something else so we don't write @value.value). In either case we've made each piece simpler to reason about and pushed conditionals up the call stack, so the resulting code is more straightforward.

I can probably recall more examples of simplifications like this, but this is where I find inheritance the most useful: a known set of things that each polymorphically conform to some interface. In these examples I don't actually use the superclass for any shared behavior, but you can imagine a case where I might.
One other benefit that I really like from the inheritance-object-modeling-as-pushing-up-conditionals perspective is that it makes you define the different cases of something as distinct objects, and give names to them. It's a similar benefit to the one that falls out of using named sum types instead of sentinel values or tagged unions, but with the opposite effect (an overall reduction of conditionals rather than a proliferation).
This quickly devolves into the inheritance vs. composition argument which isn't where I thought the Author wanted to go (but then sort of ended up going there). I agree with other commenters that it's an overstated idea. Inheritance is ridiculously useful in the right design structure, as is Composition. They both have a place. (Incidentally, bad Inheritance design usually looks very ugly very fast - bad Composition is often less glaring).
I find that years of designing in OOP has led me to build designs that have a goal of preventing me from making future mistakes and correctly consider implications of my code.
I find that my most immediate designs tend me towards Abstract Classes and Interfaces. While I usually get credit for "programming to the Interface" for this, that's not what usually led me there.
I like abstract methods. They (i.e. the compiler will) FORCE me to think about something if I ever decide to create another subclass of the Abstract class. The Author points out the "forget to call super" bug which is particularly nefarious and I avoid it at all costs. I can do that by providing a final concrete method which calls the abstract method. Let the subclasses implement that and never worry about super.
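The "final concrete method that calls the abstract method" trick is essentially the template method pattern, and can be sketched in Python with the abc module; the Exporter/CsvExporter names here are purely illustrative:

```python
from abc import ABC, abstractmethod

class Exporter(ABC):
    def export(self, data):
        # The stable, "final-style" entry point: subclasses never
        # override this, so there is no super() call to forget.
        header = "BEGIN\n"
        footer = "\nEND"
        return header + self.render(data) + footer

    @abstractmethod
    def render(self, data):
        """Subclasses MUST implement this; instantiation fails otherwise."""

class CsvExporter(Exporter):
    def render(self, data):
        return ",".join(str(x) for x in data)

print(CsvExporter().export([1, 2, 3]))
```

Python enforces this at instantiation time rather than compile time (instantiating a class with an unimplemented abstractmethod raises TypeError), but the design benefit is the same: the hook is forced, and the forget-to-call-super bug is impossible by construction.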
Anyway - governing inheritance across package hierarchies seems like a reasonable guideline. As for Inheritance vs. Composition, I don't favor either. When designing a class structure, I just make my best guess (as we'd all do) and find the structure quickly evolves on its own. Usually, this ends up in a blend of shallow Inheritance trees with logical composition. There's always multiple Class Structures that will work - my goal is to find a reasonable one of those.
I’ve made very little use of inheritance since I turned my back on C++/Java a decade and change ago. Can you give some examples where you feel inheritance wins out over composition?
Inheritance is just the right thing once in a while, but Java coders are obliged to apply it well beyond its useful range.
Fortunately nowadays, records and sealed classes remedy this for the most part in java.
By extension then, because it's possible to misuse Java/any programming language/computers/electricity/etc., you should never use it.
Make no mistake, designing classes to support inheritance is much harder than just declaring everything final, and in many scenarios there is no good reason to do so
The whole idea of language design (in my opinion) is to reduce the opportunities for mistakes without getting in the way (and thus reducing productivity). The biggest problem with Java and C# is that they are deceptively simple. Anyone can get off the ground, and the path of least resistance initially is the path of maximum pain in the end: making large classes, lots of mutable state, long inheritance chains, and so on. The languages aren't forcing anyone to use these antipatterns, but neither are they guiding the hand of the newcomer away from them.
How about you stop making decisions for me and let _me_ decide whether I want to inherit your class or not.
> I can't mock some stupid class in some stupid library that I have no choice but to use just because someone is high on "inheritance is bad" hype.
yea, at the very least, a class's public members should be more like interfaces; that way mocking can be done easily in test mode, then in prod builds lots of optimizations could "dissolve" the interface and be statically dispatched etc... hmmm....

Simple example: String is final in Java. It is also immutable, and that is (mostly) irrelevant. Lots of string fields on inbound requests have validations; a simple one would be a field that contains a fixed-length string. So obviously you validate that at the ingress before passing it down. Now the question arises: should the core library be defensive and re-validate the string? Why not simply capture the subtype, TenCharacterString, and parameterize methods with that?
Modern languages get this right. Subtyping is not inheritance. Inheritance is not subtyping. I should be able to subtype at zero cost, I don't need inheritance to do that, [and encapsulation is definitely not subtyping].
But Java doesn't have that. You mark something as final and you lose the ability to subtype just to eliminate the possibility of inheritance. On the other hand, to be fair to the argument against final, the real answer to my complaint is a proper type aliasing support.
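Python's typing.NewType is one example of this kind of near-zero-cost subtyping without inheritance; the TenCharacterString name follows the example above, and the helper functions are hypothetical:

```python
from typing import NewType

# A distinct static type with essentially no runtime cost:
# at runtime, TenCharacterString(s) just returns s unchanged.
TenCharacterString = NewType("TenCharacterString", str)

def validate_ten(s: str) -> TenCharacterString:
    # Validate once at the ingress, then carry the proof in the type.
    if len(s) != 10:
        raise ValueError("expected exactly 10 characters")
    return TenCharacterString(s)

def process(field: TenCharacterString) -> str:
    # A static type checker flags callers that pass a plain,
    # unvalidated str here; no re-validation needed downstream.
    return field.upper()

print(process(validate_ten("abcdefghij")))
```

The guarantee is only as strong as the type checker (nothing stops an unchecked cast at runtime), but it gives exactly the "subtype without inheritance" shape the comment asks for.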
What issues do you see with wrapping? TenCharacterString eg. could use char[] as its backing store and implement CharSequence if you want to get it to speak a common language with String.
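A minimal wrapper in the same spirit, sketched in Python rather than Java (the class name is taken from the discussion above; everything else is an assumption):

```python
class TenCharacterString:
    """Wraps a validated string instead of subclassing it."""
    __slots__ = ("_value",)

    def __init__(self, value: str):
        # Validation lives in exactly one place: construction.
        if len(value) != 10:
            raise ValueError("expected exactly 10 characters")
        self._value = value

    def __str__(self):
        # Speak a common language with plain strings where needed,
        # analogous to implementing CharSequence in Java.
        return self._value

    def __len__(self):
        return len(self._value)

s = TenCharacterString("abcdefghij")
print(len(s), str(s))
```

The usual objection to wrapping is the forwarding boilerplate: every string operation the downstream code needs has to be re-exposed (or the raw value unwrapped, losing the guarantee), which is the trade-off against true subtyping.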
While reading it I was reminded of a design/implementation style I’ve run across several times over the years which is to find an existing class that does something similar to what you want. Then, subclass it and override methods until you get the behavior you want. And you’re done!
This leads directly to the Fragile Base Class problem. I think it also violates the Open/Closed principle. When subclassing occurs across components that are released independently (e.g., a library and an application), it either leads to continual breakage at each release, or ossification of the library. The latter happened to Java’s Swing. It got to the point where it was difficult to fix any bugs, because any “fix” would end up breaking some subclass that relied on the old behavior.
(See also Hyrum’s Law, which is more general than subclassing & inheritance.)
One exception comes to mind though — Java’s SAMs, or in the general case, classes that more or less only wrap around a few methods intended to be overridden/implemented with clear requirements (but maybe this use case also should be restricted to interfaces?) But the default should be to add an explicit open instead of defaulting to non-final.
There is great value in reducing type errors at runtime. Is hilariously ironic that one of the main tools we reach for seems intent on just moving them to design time.
Notably, not compile time. Design. Most failures from mistakes in ontology stall the problem out before release.
(Obviously, ymmv.)
I've experimented myself with "table oriented programming", but don't have time to explore all the leads I uncover and rework the problem areas. Maybe when I retire?
For example, modern CRUD stacks are really just "event handling databases" done poorly. An RDBMS would be better at managing the gazillion event snippets, if it could "talk to" the compiler properly.
The "do everything in code" mantra of the web era is a mistake. Databases are better at managing complex relationships and masses of field/UI attributes, code better at non-collection-oriented algorithms. We should use the right tool for the job. "Data annotations" in Java and C# look like JCL's mutant stepdaughter. If that's the pinnacle of CRUD, then slap me silly.
Is that you, Bryce?
I also note that, while the author does make some useful points about how to program more defensively, especially in the face of unexpected modifications to super/sub classes written by other programmers, one is inevitably beholden to at least a certain extent on the trustworthiness of code that one depends upon. (Even languages like LambdaMoo that start from the assumption that a program consists of code written by multiple mutually-untrusting programmers cannot entirely protect each against malicious subterfuge by the others.) I therefore question the value of the kind of 'hardening' the author recommends, especially when it might have unfortunate consequences on extensibility and testability.
I believe the author's arguments are quite valid: inheritance breaks the concept of a "black box" in Object Oriented Design. Once you inherit from a class, all of that class's internals become an "unadvertised signature"; nothing is a black box that can be transparently changed anymore, and any internal change may break a subclass.
Same stuff in a new way.
How about considering if OOP might be a stupid idea at the first place?
Though I feel it would be dishonest to "blame it" on inheritance rather than on concurrency itself, when we don't have any good solution to general concurrency as far as I know. We can only deal with it reasonably well by heavily restricting the domain space to begin with (e.g. immutables, no globals/sharing).
> In the past, it was feared that this would lead to reduced performance but this is simply not the case.

Great to see the strong evidence here /s

At its core, inheritance is a special case of composition anyway (looked at from the other perspective, it's syntactic sugar over either static or dynamic delegation), so it can't really be "faster".
At any rate, there's no abstraction so powerful it can prevent a programmer from making it slow.
What is inheritance? Inheritance (for any given language) is a language-supported type of class composition (https://youtu.be/eEBOvqMfPoI?t=2874), as a closure. A class is a function, and once you understand this, it opens up possibilities in how you design and test. This has nothing to do with performance, which is a non sequitur. Is Rust less performant than Java because of how it does composition? No. Perhaps there's something in the JVM that makes mixins difficult to optimize for, but that would require some evidence (there's no general branch prediction in the JVM, last I checked) and is, ultimately, at the feet of the JVM implementation. Have a look at Go and Rust.
Naturally, because a specific kind of inheritance is a language feature, it gets overused, and a language (like Java), for backward compatibility's sake, overuses it in designing new features. Looking at other languages like Javascript, Lua, Erlang, PHP, Ruby, Rust, etc., saner heads have prevailed, and even Java has resorted to using "Aspects" ...which are runtime traits for additional types of composition^^.
Regarding the rest of the article... His arguments for using final include: 1. Someone may forget to use super() and that's bad because what I want to happen trumps what they want to happen. 2. People can't subclass my class across package boundaries, because I don't know why handwaved JPMS (then covered in Should Inheritance Across Package Boundaries Ever be Used?). His reasoning is not compelling, in the least. I can say, without hesitation, that 'final' is harmful. Adding final to a class is such a violation of the concept of reusable software, I'm surprised the FSF doesn't boycott languages. In C++ there are performance benefits. In Java it's just to put up a roadblock. This was never a good idea and makes testing impossible in some cases (where final classes are injected into final classes). This is purely because of Java's design as a language, not because of some demonstrably helpful concept, implemented poorly^^.
^If you are a language designer, always allow backdooring of accessors for testing, at the very least. This conviction that they must always apply to protect developers from each other is misplaced and has hurt the reliability of software, badly.
^^Spring has a form of tacked-on composition, which is both ugly from a conceptual standpoint and problematic from a testing standpoint. Java always seems 20+ years behind.
You are going to get a lot of false positives using success as a metric for good language design.
I cannot think of a single popular language, other than Python, in the last 40 years that did not have huge companies with sizable marketing budgets behind them.
Programming languages do not succeed based on merit. They may fail based on lack of it.
Object oriented programming gets a horrible rap on the basis of inheritance alone, and it's no wonder. Outside of limited domains, such as GUI programming, object inheritance makes little sense. Computer science students are right to question their introductory classes on inheritance when they teach contrived examples of dogs barking and cats meowing as an example of Mammal.makeSound() inheritance.
It's almost as if we're shoehorning in a code dispatch framework as a major language feature, except that framework sucks and we're stuck building with it. The best strategy working in languages with inheritance is to avoid it.
Duck typing or traits are better ways to represent polymorphic behaviors. We've known this for over a decade now.
Here's hoping that no new languages come with object inheritance as a concept. It's deader than NULL and shouldn't be resurrected.
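For instance, Python's typing.Protocol expresses this kind of structural (duck-typed) polymorphism with no inheritance relationship between the participating classes; the animal example deliberately mirrors the contrived one above:

```python
from typing import Protocol

class Sound(Protocol):
    def make_sound(self) -> str: ...

# Neither class inherits from Sound, nor from each other.
class Dog:
    def make_sound(self) -> str:
        return "woof"

class Cat:
    def make_sound(self) -> str:
        return "meow"

def chorus(animals: list[Sound]) -> list[str]:
    # Any object with a matching make_sound method satisfies Sound;
    # the conformance is checked structurally by the type checker.
    return [a.make_sound() for a in animals]

print(chorus([Dog(), Cat()]))  # ['woof', 'meow']
```

This is the trait-like alternative: the polymorphic interface exists, but no Mammal base class ever does.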
Any objective measure of that? Because there is a catch: we can't do what doctors can. There are no double-blind tests for language design. All we have are empirical studies, and based on those, OOP languages do objectively much better. So if anything, your exceptional claim requires exceptional evidence.
But otherwise I agree that these Animal hierarchies are just dumb and many definitely overuse inheritance.
The trend is in the opposite direction.
TypeScript enables JavaScript programmers to benefit from static typing, and is seeing widespread use. Python now has type-hints.