In comparison, C programs tend not to hide this stuff. C functions are often long and complex. If you were implementing quicksort in C, you would write (more or less) one function with all the logic packed in, which you can just read top to bottom. In Java it would be a nest of SortComparator interfaces and SortAlgorithm implementors, which act to hide the algorithm itself.
There’s something more honest about the C style. It’s like, yeah, the algorithm is complicated. So we put it all together in one dense function. Here it is - go nuts! You don’t have to go hunting for the right implementing class. Or divine how FooFactory has configured your Foo object instance.
All that Java style class abstraction seems to (intentionally or otherwise) make the actual logic of your program hard to find and hard to trace. It’s coy. When I’m trying to read someone’s code, that’s simply never what I want.
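To make the "dense function" style concrete: here's what a quicksort in that spirit looks like transplanted to Java — one static function, all the logic in one place, readable top to bottom. This is a plain Lomuto-partition sketch for illustration, not the JDK's actual algorithm.

```java
class Dense {
    // Plain recursive quicksort, Lomuto partition. No interfaces,
    // no implementors -- the whole algorithm sits in one function.
    static void quicksort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        quicksort(a, lo, i - 1);
        quicksort(a, i + 1, hi);
    }
}
```

Whether this is "more honest" or just less organized is exactly the disagreement in this thread.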
Many developers get trapped trying to recognize patterns and come up with perfect mental models for whatever problem they're trying to solve, when the straightforward dense function is probably the simpler and more maintainable solution. I often fall into this trap myself and constantly have to be wary of it.
That is odd, because when I look at the Java style, DualPivotQuicksort [1], it seems the language authors did not do that. This is so very strange! Their methods are long, complex, and highly documented. Maybe they're just really incompetent Java programmers? I mean, they did hide the whole thing behind this obtuse abstraction in Arrays. That's super crazy, just look at this monstrosity!!!
public static void sort(int[] a) {
    DualPivotQuicksort.sort(a, 0, 0, a.length);
}
Maybe you need to have an intervention with them? Somehow, in some crazy world, even Java programmers are capable of writing good and efficient code. It's like bad developers might do an awful job in whatever language they use? Can't be true...
[1] https://github.com/openjdk/jdk/blob/master/src/java.base/sha...
I think the same about Javascript, though the specifics are different. I've been writing JS for years, but I've been moving away from it lately because increasingly I feel like an odd duck in the JS world. Most javascript programmers have much less experience (in any language) than I do. When I mention I write a lot of javascript professionally, people assume I'm a fresh-faced bootcamp grad. It's sometimes hard to find high quality libraries on npm because the average quality there is reasonably low. E.g. good luck finding a password generator which doesn't use Math.random(). Or finding an email parsing library which preserves the order of email headers. (The order carries semantic meaning!)
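The Math.random() complaint generalizes beyond JS: secrets need a cryptographically secure RNG, not a general-purpose one. To keep with the Java examples in this thread, here's a minimal sketch of the right approach on the JVM using java.security.SecureRandom (the class name Passwords and its alphabet are invented for illustration; in JS the analogue would be crypto.getRandomValues):

```java
import java.security.SecureRandom;

class Passwords {
    private static final String ALPHABET =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
    // SecureRandom is a CSPRNG; Math.random()/java.util.Random are
    // predictable and unsuitable for anything secret.
    private static final SecureRandom RNG = new SecureRandom();

    static String generate(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(RNG.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }
}
```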
Does high quality Java + Javascript code exist? Sure. But even McDonald's makes good food sometimes. That isn't enough to make me a regular customer.
Sometimes the right call is to fight for the ecosystem you're a part of and help it improve. I've done a lot of that. But you don't have to fight for high quality software if you just go where the high quality software is being made. It's easier to switch languages than to change a culture.
... I'm being a bit sloppy and judgemental here. Maybe it would be better to say: each ecosystem has a set of values. Javascript values programming velocity, accessibility to new programmers, and simplicity. Java has a different list. Insofar as you're living inside an ecosystem, you don't get to simply ignore and dismiss those influences when you don't like them. It sucks writing Go if you hate gofmt. It sucks writing Rust if you don't want the borrow checker. And it sucks writing Java if you hate dealing with AbstractIteratorFactoryImpl. Even if there's some redeeming code in OpenJDK.
Code doesn't magically become less complex by hacking it into pieces.
Whether you want your code to be more modular is an opinionated decision but most people don't realize the benefits of high modularity. Almost all major design mistakes that necessitate code rewrites come from lack of modularity.
Mixed blessing, that
Another similar issue that I see a lot in both Java and C++ codebases is "premature wrapping" of foreign APIs. Basically, when building a program that has to consume a somewhat incompatible API, every single concept of that API gets wrapped in a separate class before any planning, to the point where each one-line procedure call turns into a 20-line class.
Of course, after the wrapper is written, the program still needs higher-level abstractions that use those wrappers. But since zero planning went into the design, you now need exactly the same call order as before; however, instead of an ugly (but simple) procedure call, you have a class wrapping it, and to understand a simple workflow you have to go through at least two layers of classes.
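A hypothetical sketch of the pattern (LegacyApi, Connection, and ConnectionFactory are all invented names; the stub stands in for whatever foreign API is being consumed):

```java
// Stub standing in for the foreign API (invented for illustration).
class LegacyApi {
    static int connect(String host, int port) { return 42; /* pretend handle */ }
}

// The call the program actually needs is one line:
//     int fd = LegacyApi.connect(host, port);
// The "premature wrapper" version multiplies that into classes that add
// no design decisions, only indirection to read through:
class Connection {
    private final int fd;
    Connection(int fd) { this.fd = fd; }
    int handle() { return fd; }
}

class ConnectionFactory {
    private final String host;
    private final int port;
    ConnectionFactory(String host, int port) { this.host = host; this.port = port; }
    Connection create() { return new Connection(LegacyApi.connect(host, port)); }
}
```

The wrapper only earns its keep if it changes the shape of the API — different call order, different error handling, a narrower surface — none of which happened here.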
That said, I've also seen "object-oriented obfuscation" in C, so some people seem to just love complexity and writing tons of code to do a simple task, or were taught "abstraction is great, use as much of it as you can" and never thought about when to stop.
Overabstraction usually increases macro-complexity while decreasing micro-complexity; a function with a single line of code is "simpler" in that its immediate purpose may become obvious, but having to mentally stack from its callers means that the big picture is harder to comprehend.
At the extreme high end of density are languages like the APL family, where the density is so high that the "big picture" becomes a slightly smaller one, and Arthur Whitney has been famously quoted as hating scrolling; but at that density, you can no longer "skim" large portions of code --- instead, each individual character needs to be read and pondered carefully, because each one says a lot.
Java developer here, no such thing. I guess it varies based on who is doing the coding. Although, I did start out as a C developer.
I would only write code like that if the use-case called for it. Otherwise, no.
Writing code is more of an art form, some times you may need to do crazy stuff like that, but a lot of times not.
KISS
1. not all software is about pushing and pulling to/from a database; if yours isn't, be sure you understand why that's the case.
2. "backends" (not "web backends", but the more general "where the mechanisms are") should know nothing about "frontends" (again, not web, but the more general "user interface of some kind"). This is really just MVC in its most basic sense. One good way I've found to think about this is to assume that there's always at least two UIs running simultaneously. Make sure this can work.
3. if your program has a user interface, everything the user can do without further interaction should be represented by a closure that can be invoked from anywhere (but always in the correct thread).
4. single-threaded GUI code seems like a limitation but in most projects, it's the right choice. By all means use helper threads when needed, but never allow them to use any API that's part of your GUI toolkit. Knowing that your GUI code is ALWAYS serialized is a huge conceptual assist when reasoning about behavior.
5. access to an excellent cross-thread message queueing system is likely to be a must if your software uses threads. This should include a way for one thread to cause arbitrary code execution in another thread.
6. direct memory access for the UI is nice from a programming perspective (that is: just directly call methods of backend objects), but can erode the wall of separation between the UIs and the backend.
7. lack of direct memory access for the UI(s) can significantly impede performance, but enforces a conceptual clarity that can be valuable.
8. when notifying the View(s) about changes in the Model(s), there's a tradeoff between fine-grained notifications ("frob.bar.baz.foo just changed") and high-level notifications ("something about frob just changed"). Finding the sweet spot between these two can be a challenge across the life of a long-lived piece of software.
9. lifetime management will never be trivial. Accept it, and move on to thinking about how it is going to work even if it is not trivial.
10. try to refer to as many things as possible indirectly. If something has a color, don't make its state refer to the color, but to the name or ID of the color. Do not over-use this pattern when performance matters, but also do not over-estimate your ability to understand when performance matters.
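Point 5 above can be sketched as a message pump: a BlockingQueue of Runnables drained by one dedicated thread, so any other thread can cause arbitrary code to execute there. This is a minimal sketch under those assumptions (the class name MessagePump is invented), not a production-grade queue:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class MessagePump {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    MessagePump(String name) {
        Thread worker = new Thread(() -> {
            while (running) {
                try {
                    queue.take().run();   // all work serialized on this one thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }, name);
        worker.start();
    }

    // Any thread may call this; the closure executes on the pump's thread.
    void post(Runnable task) { queue.add(task); }

    // Shutdown is itself just a posted message.
    void shutdown() { post(() -> running = false); }
}
```

This is the same discipline single-threaded GUI toolkits enforce with their invokeLater-style entry points: the queue is the only door into the thread.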
This is one I constantly struggle to convince my colleagues of. It becomes much more "obvious" if you are trying to write unit tests in C++ code[1], but unit tests are a mere side benefit. It's more about reducing coupling.
Currently working on a code base that outputs to an Excel file. We recently started dealing with more data than the Excel file can handle easily, and the system came to a crawl. So we had to allow for the option to output to CSV (easily 100x faster in our use cases). At least now some of my colleagues have a bit of appreciation on what I've been harping on.
The Excel library is still intrinsically tied to much of our code. We've been getting over 15GB RAM usage for data that I'm sure would not take more than 2GB if we manage to bypass the Excel library.
[1] Why does my class that computes X need to know that something called email exists? So to write a test for this class I need to instantiate a whole other set of classes just for output? Just have it call ReportMessage on the Reporter interface and let whatever class implements it decide whether the message goes out via email or SMS.
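The footnote's suggestion, sketched in Java (Reporter comes from the comment; XComputer and RecordingReporter are invented names, and the computation is a stand-in):

```java
// The class computing X depends only on this narrow interface.
interface Reporter {
    void reportMessage(String message);
}

class XComputer {
    private final Reporter reporter;
    XComputer(Reporter reporter) { this.reporter = reporter; }

    int computeX(int input) {
        int x = input * 2;                   // stand-in for the real work
        reporter.reportMessage("x = " + x);  // no knowledge of email or SMS
        return x;
    }
}

// In tests, a trivial in-memory Reporter replaces the whole email stack.
class RecordingReporter implements Reporter {
    final java.util.List<String> messages = new java.util.ArrayList<>();
    public void reportMessage(String m) { messages.add(m); }
}
```

The production wiring passes an EmailReporter or SmsReporter instead; XComputer never changes.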
But in my niche (realtime audio software), there is underlying data in the system that changes over time independently of events. So there is "something else to it".
The main point of it is to avoid testing via mocks and to also avoid interfaces that are only there for testing purposes. The end design is extremely simple, and because it has command / query separation, the actual logic is trivially testable.
Give it a read. I love Growing Object oriented software guided by tests as well, these are all just different approaches with different mindsets.
https://en.wikipedia.org/wiki/Command–query_separation
https://martinfowler.com/bliki/CQRS.html
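Command-query separation in miniature, for anyone who hasn't followed the links: commands mutate state and return nothing; queries return values and have no side effects. A toy counter, purely illustrative:

```java
class Counter {
    private int value;

    // Command: changes state, returns nothing.
    void increment() { value++; }

    // Query: reports state, has no side effects.
    int current() { return value; }
}
```

The testability payoff is that queries can be asserted on freely, and commands can be verified purely through subsequent queries — no mocks needed.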
That sounds very much like Command-query separation or Command Query Responsibility Segregation to me. I haven't read this particular book (and almost certainly won't be getting to it within the next decade based on my ever increasing pile of unread books), but I wonder if they call this out. It's a critical decision in the architecture/design of a piece of software and it's worth stating that it's how they intend to design the system.
Coincidentally, to my mind, that model (CQS/CQRS) fits well with the blog author's idea of using an agent-based event system. Moving the objects into distinct threads of execution or processes, which also coincides with one of the intended ideas of OO by Alan Kay. OO-as-message-passing very much fits within the agent-based execution model.
The "callback-y" approach in Growing Object-Oriented Software sounds fascinating.
> Immediately I remind myself that the implementation from the book ignores the problem of persistence completely. If you close that application it loses all the state. I think this is not an accident. This is where things go wrong for OOP really fast.
This is a really good point.
EDIT: I see you really did not like 99 Bottles of OOP, which is by the author of POODR. In that case maybe her way of explaining things doesn't agree with you and you should skip it!
I liked this piece. Enough to log in and comment at least. It's the rare article where the author is open about his biases but gives an opposing approach an honest go. Reading three books about OOP is way more generous than I'd ever be. And when that approach still doesn't make sense, he offers a better one. Always enjoy reading the informed hater's perspective.
This point hits hard.
Managing "live" scattered state that is gonna go away when the program dies is hard in itself. But as soon as you have to persist it or do anything fancy with it, you pretty much have to change the whole approach of your app. This is why it's always a good idea to start with established frameworks that handle persistence if you're ever gonna need it. Bolting it on later is just too hard.
It also reminds me of my first job, on a desktop app. State became so complex that to apply a "global change", like currency or language, you had to close the app and open it again. This was very common at the time, judging by how many apps required such things.
In the middle of my career I also worked on a very large video game. The higher-ups wanted to change how "game saving" worked: instead of serialising just the basic stuff (health, lives, level), we needed to serialise the whole game state, including enemy positions and actions. It was the biggest change we made, and we ended up having to add lots of boilerplate because of the scattered state. IMO, ECS was a very interesting development purely because state is not encapsulated anymore, making serialisation completely separate from everything else.
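The ECS point, roughly: state lives in plain component tables rather than inside encapsulated objects, so a save system can walk the tables without touching any game logic. A toy sketch, not modeled on any particular engine (World, Position, and SaveSystem are invented names, and the format is a throwaway):

```java
import java.util.ArrayList;
import java.util.List;

// Components are plain data, not behavior.
class Position {
    double x, y;
    Position(double x, double y) { this.x = x; this.y = y; }
}

class World {
    final List<Position> positions = new ArrayList<>();
    int spawn(double x, double y) {
        positions.add(new Position(x, y));
        return positions.size() - 1;   // entity id = index, toy scheme
    }
}

// Serialization is a separate system that just reads the tables;
// nothing about gameplay needs to change to support saving.
class SaveSystem {
    static String save(World w) {
        StringBuilder sb = new StringBuilder();
        for (Position p : w.positions) sb.append(p.x).append(',').append(p.y).append(';');
        return sb.toString();
    }
}
```

Contrast with the OO version, where each enemy object would need its own serialization hook threaded through the class hierarchy.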
Curiously, as much as modern frontend programming is maligned, such issues are easier to solve with central state management libraries like Redux.
I do like how ECS and Redux have that common thread of rediscovering global state
In the end it seems that the "web app with a database" is becoming the template for other types of apps.
I do care, and I read this with interest just now. The idea that it took three hours to write a review of a month+ of work as part of "professional development", and that this person must bear the costs of that education, is a tip-of-the-iceberg indicator to me of the difficult work world we live in today. How is it that capital owners make money while sleeping, while craftsman intellects spend a month+ without compensation to "get up to speed"? I am on the tail end of this after decades, and it jumps out at me now, past the OOP part.
Second - I learned OOP approaches long ago, have written a lot of software and have used OOP in my own ways, much like this author. I appreciate the effort here! It is an interesting, technically somewhat shallow (no code in this essay) yet as noted, good balance of critical and open mindedness.
I do not understand OOP-hatred past "I hate the music my parents liked" and "Java is so tedious that it makes me hate all of the whole structure of it".
I used OOP code myself to separate parts in loosely coupled systems of several flavors; to make a systematic ordering of commands, to enable scripted or menu-driven command sets; and to wrap an interface around data for the convenience of other code. I feel that a strong point of OOP is to REDUCE the cognitive load for the human. Yet many snipes in articles about OOP specifically complain about the lengthy, spread-out, tedious nature of OOP code. Your mileage may vary! Use it badly or use it well... it's not my doing.
The specific kind of software system described in the third book here, with messages passed without state between objects, is interesting, and reminds me to say now: I think there is vastly insufficient distinction made, in OOP criticism, between software solutions and their design and implementation. What are you trying to solve? How much persisted data is there? Or state, or interfaces to XYZ external system? This matters in design choices, and I feel like OOP critics often race to their favorite annoying thing rather than do the intellectual work of distinguishing, for a reader, what the assumptions are and what the finished product requires.
Overall, this essay is worth reading, feels short to me despite obvious effort on the part of the author, and personally, I get a nagging feeling that people doing this kind of work should be less scammed by low-morals middlemen and more valued socially for the architects of software that they are.
Pretty cool feature IMO: https://docs.oracle.com/en/java/javase/14/language/records.h...
https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...
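For anyone who hasn't seen them, a Java record fits in one line, and the compiler generates the constructor, accessors, equals/hashCode, and toString (the name Point here is just an example; C# records from the second link are similar in spirit):

```java
// Immutable data carrier: components become final fields with accessors
// x() and y(); equals/hashCode/toString come for free.
record Point(int x, int y) {}
```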
Not only is that not the secret sauce, an OOP program with almost nothing but methods that return nothing (i.e. have some effect without reporting a result), is a giant red idiot flag.
If you're gonna pick three books on OOP, Design Patterns should be at the top of the list. At least if you want to understand why OO is a thing people still use and talk about.
Bridge comes to mind, I ran into that specific example myself trying to OO-design a GUI framework with different back ends.
With a pinch of common sense stuff like Facade etc.
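Bridge in the GUI-backend case mentioned above, sketched: the abstraction (a window) holds a reference to an implementor (a renderer), so the two hierarchies can vary independently — new widgets don't require touching backends, and new backends don't require touching widgets. All names here are invented:

```java
// Implementor hierarchy: one per backend.
interface Renderer {
    String drawBox(String title);
}

class TextRenderer implements Renderer {
    public String drawBox(String title) { return "[" + title + "]"; }
}

// Abstraction hierarchy: GUI concepts, backend-agnostic.
class Window {
    private final Renderer renderer;
    private final String title;
    Window(Renderer renderer, String title) {
        this.renderer = renderer;
        this.title = title;
    }
    String render() { return renderer.drawBox(title); }
}
```

Adding an OpenGL or terminal backend means one new Renderer class, with Window and its subclasses untouched.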
Much of it only makes sense for static languages such as C++ and Java.
I'd say it gets way more credit than it deserves.
Stepanov on OOP is interesting (just search for object oriented):
> I'm looking to gain more confidence in my criticism and understanding of OOP. In the past, I have published multiple posts criticizing Object Oriented Programming ... I always feel this anxiety that... maybe there is such a thing as “good OOP”, maybe all the OOP code I wrote, and the OOP code I keep seeing here and there is just “incorrect OOP”.
To that I'd say: read DP and critique that, and if you still feel that way, you're on to something. Cherry-picking 3 crap OOP books to critique and then concluding OOP is crap feels like a bit of a strawman argument.