The fiction has largely dissipated, except among pundits and the aggressively ignorant. Meanwhile, a backlash insisting OO is a fundamentally mistaken design element arose among some loud users of not-OO (not to say not-good!) languages. Even among users of what were called OO languages, C++ particularly, the relative importance of OO techniques has fallen off as other language facilities and conventions surfaced.
C is just not adequately equipped for what is formally defined as OO. You can cobble up a dodgy simulacrum of it, with enough effort, but what is the point? Other languages make it easy, and can invariably be used in place of C if you really feel like you want OO.
If you are coding C++, you probably long ago graduated from the notion that "OO" has any particular merit as an organizing principle, and simply use language features to achieve your goals. Some part of any big-enough system will look OO, more or less, if you care. Few working C++ programmers do.
Some languages, Java particularly, aggressively try to shoehorn everything into OO. The result is that when what you want to do doesn't fit, you must abuse whatever features the language offers to achieve what you mean to do, typically doing some violence to OO norms. There's nothing wrong with that. It was, rather, wrong for the language to provide you with only OO facilities to the exclusion of all else.
A similar thing happened with functional programming.
> WinUI is powered by a highly optimized C++ core that delivers blistering performance, long battery life, and responsive interactivity that professional developers demand. Its lower system utilization allows it to run on a wider range of hardware, ensuring your sophisticated workloads run with ease.
Apparently Microsoft does care about OOP in C++.
As does Apple,
https://developer.apple.com/metal/
https://developer.apple.com/documentation/driverkit
And Google,
https://github.com/google/oboe
https://www.tensorflow.org/api_docs
Maybe modern C++ is safe from OOP, so let's look into ranges:
https://en.cppreference.com/w/cpp/ranges
So we have factories, adaptors, concepts (aka interfaces/traits in other languages), and std::ranges::view_interface as a base-class mixin - all stuff I can find in the Gang of Four book.
Likewise the others, with one exception. The Apple driverkit page does mention base classes.
Factories, adaptors, concepts/traits/interfaces have nothing to do with OO, beyond that OO designs often also use them. OO designs define functions, too, but functions do not imply OO.
It is, in any case, meaningless to trot out the apparatus that system vendors oblige programmers to use to access proprietary facilities, and equate that with programmers' interest in whatever tech those facilities are built with. Programmers are interested in using the facilities, and are glad when whatever they are obliged to use works at all; too often it doesn't.
My main gripe with Java is that everything is by default a library.
But there is hope: recent changes in Java, such as records and pattern matching, move away from that idea.
Excellent point, and I think in our contemporary age something similar is happening with "Agile". It has ceased to be a project-management technique useful for a certain class of products and has instead been redefined to mean "good" - in the popular imagination, if a team is not "Agile" then they are somehow backwards or out of the loop. Now teams, products, and situations where Agile is not appropriate find themselves having it forced upon them, or spinning what they do as Agile.
C++ had, for a long time, problems mapping its classes and objects into Python or Ruby, and usually that was done via C wrappers and hacks. In recent years things are a little better thanks to Clang, but there are still tons of template hacks and C workarounds if you want code that is portable between compilers. Even then, you often rely on compiler-specific stuff.
Other languages are much worse, so the only solution, frequently, is to use a virtual machine, like JVM or .NET.
According to Alan Kay (one of the creators of Smalltalk, the first OOP), message passing is more important to OOP than inheritance. If you listen to his old speeches, what he is describing sounds a lot more like microservices and VMs/containers than what most of the later languages turned it into. (He describes objects as "mini computers" that interact with each other using only public interfaces)
Alan Kay gets credit for the name "object-oriented", not the concept. His own definition has varied radically over the years, insisting only lately on any importance of message passing as such. Smalltalk-72 was not OO; Smalltalk gained OO features over the years from 1972 to 1980. (Message passing has anyway always been isomorphic to function calls.)
Reflection was never described as essential to OOP. It is a feature of many languages, equally useful in all.
As an engineer, I'm never comfortable using something when I don't know how it works.
I wish more "introduction to OO" things would start by demonstrating a dispatch table and then showing how the vtable concept maps onto that, I suspect it would make things significantly clearer to a bunch of people as they learn.
It's like saying English doesn't have a built-in politeness pronoun, so if you need to be polite you can't use English.
As long as is not C++...
Of course, so is D...
If you need to implement virtual tables and function pointers, you'd use C++ - there's no need to reinvent the wheel.
Besides, software engineering is about focusing on the intrinsic complexity, and using languages and tools to mitigate the incidental complexity.
It's just not worth it. Too much casting, too much scaffolding, much too brittle and error-prone. The experience is like "how much suffering can one endure."
For example, Linux kernel style is:
    static const struct file_operations fops = {
        .open    = my_open,
        .release = my_release,
        .read    = my_read,
        .write   = my_write,
    };
that's all, very straightforward. Also there is no need to prefix the function name with &.

Tenuous - if you are willing to implement objects yourself then you could use C, sure, but I don't think that means the language "lets you use OO". It's like how you could also probably implement algebraic datatypes in C using unions and structs, but would that mean "C lets you use algebraic datatypes"? I would strongly argue no.
Thus, to meaningfully claim that a language supports some programming technique, it has to automate it to some appreciable degree. C manifestly does not; it automates nothing but stack frames and register allocation. Every detail is hand-coded, with every opportunity to make trivial, hard-to-spot mistakes that nothing but exhaustive testing can bring to your attention.
OOP has two main features, which first appeared in SIMULA-67: inheritance and virtual functions.
I agree with the following definition of OOP as a special kind of programming with abstract data types:
"There are two features that distinguish an object-oriented language from one based on abstract data types: polymorphism caused by late-binding of procedure calls and inheritance. Polymorphism leads to the idea of using the set of messages that an object understands as its type, and inheritance leads to the idea of an abstract class. Both are important." (Ralph E. Johnson & Brian Foote, Journal of Object-Oriented Programming, June/July 1988, Volume 1, Number 2, pages 22-35)
When virtual functions are not used, the operations can be considered as belonging to the type (a.k.a. class), not to the objects, which corresponds to the jargon used by programming with abstract data types.
When virtual functions are used, the operations are considered as belonging to the objects, not to the class, as each object may have a different implementation of a given method. This corresponds with the point of view of OOP.
I have used the SIMULA/C++ term of "virtual functions", but in some OOP languages all functions are of the kind named "virtual" in SIMULA, so the "virtual" term is not used.
Once you separate type hierarchy and behavior, you get a much more flexible system. That's what newer languages (Rust, Swift and co.) do. I think the reason why "classes" are so popular has to do with compiler technology — type hierarchies can implement polymorphism very efficiently via vtables, and folks jumped on the opportunity of having high-level abstractions with high performance (instead of doing expensive lookups associated with earlier per-instance polymorphism). But as the compiler tech and understanding of programming language theory have progressed, these limitations are not necessary for good performance anymore, in fact, they become limiting.
The best thing in C++ is compile-time computation (ignoring the horrible template syntax). OO is not especially well implemented.
Virtual dispatch performs well only when:

- you have very few objects
- you have very few virtual functions

If you have a lot of objects or a lot of virtual functions, things will be less likely to fit in cache, which will entirely destroy your performance - by an order of magnitude compared to a mostly-always-branch-predicted indirection.
I remember experimenting with an "inlined" version of std::function, and just having inlined the 5 ctor/move-ctor/copy-ctor/assignment operators was already slower than the vtable version when going through hundreds of callbacks; imagine something like Qt, where QWidget or QGraphicsItem have 20+ virtual methods.
And despite this public visibility of the object's struct members granted to provide inheritance, the examples are still using getters and setters.
Ok...
("Free" after adding some metadata comments specifying parameter ownership/lifetimes, at least.)
1. Declaring a struct type to hold the object's data
2. Creating functions that take a pointer to the struct as the first parameter
3. Declaring a variable with that struct type to create an instance of the object
4. Declaring another variable with that struct type for another object instance
There are formal definitions of OO. The above satisfy exactly none of them.
For example, the formal definition of the Liskov Substitution Principle:
> Let ϕ(x) be a property provable about objects x of type T. Then ϕ(y) should be true for objects y of type S where S is a subtype of T.
This doesn't allow any change in behavior when subclassing. Not even the addition of logging, which makes the overriding of methods generally useless. Yet languages still provide this feature (perhaps to their detriment, but they still do).
There's also a common definition of the Liskov Substitution Principle:
> Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.
This is, generally speaking, what people are talking about when they mention the Liskov Substitution Principle, unless they're actively writing an academic paper on the topic.
Bringing up the academic definition when someone is using the common definition is 1. not relevant and 2. usually not helpful.
This is, in fact, by any meaningful definition of OOP, a very reasonable pattern. With regard to formal definitions of OO, the message-passing aspect ought to be recognised rather than overemphasizing the object aspect.