- There is a lack of respect for the history of programming. IMO it has caused the industry to be stuck in a perpetual cycle of half-baked rediscovery.
- Similarly, a kind of "FAANG syndrome" exists that allows sub-par ideas to take over the industry's mindshare. Once a technology picks up enough momentum, it snowballs and we're stuck working with legacy trash almost immediately. Developers legitimately seem to believe each trend is good.
- Our industry's shared vocabulary is too weakly defined. Phrases like "The right tool for the job" are ubiquitous but essentially meaningless and used as a form of shorthand "I currently feel this is correct". If we had a real professional lexicon, the first thing juniors would learn would be to enumerate reasoning to a precise degree. IME most "senior" devs can barely do it.
The way I see it, there's a spectrum of ways of handling this (type systems, validation code, documentation, integration tests validating runtime behaviour, and static analysis tooling), but I don't agree that a static type system is the best way of integrating modules; often it's only barely adequate. The optimal solution will vary with each project's requirements: is the modular code consumed via a networked API or as a library? Is it internally controlled or third-party? How many teams will work on it? How much care has been taken with backwards compatibility? If we break the interface of some random minor function every update, a static type system may help; then again, if it's just for our team, who cares? I'm sure we've all seen updates make internal modifications that break runtime behaviour but don't alter data models or function signatures in a way that gets picked up by a compiler.
Even in the most extreme type systems, interfaces are eventually going to need documentation and examples to explain domain/business-logic peculiarities. What if a library interface requires a list in sorted order? Is it better to leak a LibrarySortedList type into the caller's codebase? The modularity starts to break down. The alternative is to use a standard library generic List type, but then you can't force the data to be sorted; to encode this type of info we need dependent types or similar. A different example would be a database connection library: every database supports different key/value pairs for connection strings. If the database library released a patch that deprecated support for some arbitrary connection string param, you wouldn't find out until someone tried to run the code. Static analysis tools may catch common things like connection strings, but IME there's always some custom "stringly typed" value in business applications, living in a DB schema written 10+ years ago.
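Lacking dependent types, one partial workaround in a language like TypeScript is a "branded" alias over the standard array type: the invariant is enforced by a single smart constructor rather than by leaking a custom container. This is just a sketch; `SortedList`, `asSorted`, and `binarySearch` are illustrative names, not a real library API.

```typescript
// A branded alias: structurally still a plain array, but callers can only
// obtain the brand through the smart constructor below.
type SortedList<T> = T[] & { readonly __brand: "SortedList" };

function asSorted(xs: number[]): SortedList<number> {
  // Runtime enforcement lives in exactly one place; callers can't forget it.
  return [...xs].sort((a, b) => a - b) as SortedList<number>;
}

// The library accepts only the branded type, so passing an arbitrary
// (possibly unsorted) number[] is a compile-time error at the call site.
function binarySearch(xs: SortedList<number>, target: number): number {
  let lo = 0;
  let hi = xs.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (xs[mid] === target) return mid;
    if (xs[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

console.log(binarySearch(asSorted([3, 1, 2]), 2)); // 1
```

Of course, nothing stops someone from casting their way past the brand, which is exactly the "barely adequate" gap between this and real dependent types.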
We also have to consider that the majority of our data arrives serialised, from files or over the wire. It's necessarily constrained to generic wire protocols, which have lower fidelity than data parsed into a more featured type system. Given that this type of data is getting validated or rejected directly after deserialisation, how much extra value is derived from having the compiler reiterate your validation code? Non-zero for sure, but probably not as much as we like to think.
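To make the boundary concrete, here's a sketch of validate-directly-after-deserialisation; the `Order` shape and its field rules are made up for illustration. Once the hand-written guard has run, the compiler can only restate what the guard already proved.

```typescript
type Order = { id: string; quantity: number };

// Validation at the wire boundary: everything past this function can
// trust the Order type, but the real checking happened here at runtime.
function parseOrder(raw: unknown): Order {
  if (typeof raw !== "object" || raw === null) throw new Error("not an object");
  const o = raw as Record<string, unknown>;
  if (typeof o.id !== "string") throw new Error("bad id");
  if (typeof o.quantity !== "number" || o.quantity < 1) throw new Error("bad quantity");
  return { id: o.id, quantity: o.quantity };
}

const order = parseOrder(JSON.parse('{"id":"a1","quantity":2}'));
console.log(order.quantity); // 2
```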
From my perspective "our" job is to deliver solutions to specific problems our end-users have. How people accomplish that varies a lot by their position/seniority.
To get into the weeds: when I first started, the worst case scenario was that I could waste my own time with poor or overengineered solutions. Now I can waste half a dozen programmers' time by not doing enough due diligence and/or planning.
But it's not always the best tool. Even as an IC, I've done far more with emails and meetings than just writing code without thinking, whether that's discussing UX implementation details with the designer or pushing back on some half-baked over-engineered solution from management or fighting some evil advertising and tracking scheme from marketing. We don't just write code but also gatekeep it with some level of professional judgment and ethical discretion. (Or at least should.)
I think the orgs that don't give you any autonomy or agency beyond "code monkey" are the more problematic ones. Not only will you burn out having to repeatedly implement things you have no input into, you'll also be the easiest to replace if all you do is code.
Doesn't this describe every job on Earth?
This issue is particularly prevalent in the JavaScript world, where it isn't uncommon that half a dozen processes need to run when you hit build/run. This is partly why I wish they'd just put native TypeScript in the browser and simplify the build pipeline by removing several steps (and, yes, TS would evolve slower/more conservatively, which I also consider a positive).
WebAssembly is the apex of this issue. Super fragile to build and impossible to debug. It is what I call "prayer based development," because you "pray" it works or troubleshooting becomes a nightmare.
* WebSockets are superior to all revisions of HTTP except that HTTP is sessionless. Typically when developers argue against WebSockets it’s because they cannot program.
* Your software isn’t fast. Not ever, unless you have numbers for comparison.
* Things like jquery, React, Spring, Rails, and so forth do not exist to create superior software. They exist to help businesses hire unqualified developers.
* If you want to be a 10x developer just don’t repeat the mistakes other people commonly repeat. Programming mastery is not required and follows from your 10x solutions.
* I find it dreadfully hypocritical that people complain about AI in the interview process and simultaneously dismiss the idea of requiring licensing for professional software practice.
What did you mean by this? Were you suggesting that interactive web apps should maintain a persistent and stateful connection to the server and use that to send interaction events and receive the outputs back, like a video game would, rather than using stateless HTTP calls and cookies and such? Why is that superior?
And sorry if I misunderstood!
That is how I design all my web-facing applications now. The idea is that with WebSockets all messaging is fire and forget, and that is independently true from both sides of the wire. That means everything is event oriented on each side separately, and nothing waits on round trips, polling, or other complexity. In my largest application, when I converted everything from HTTP to WebSocket messaging I gained an instant 8x performance boost and dramatically lowered the architectural complexity of the application.
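A minimal sketch of that shape, with Node's `EventEmitter` standing in for the two ends of the wire (all names are illustrative; in a real app each bus would be a WebSocket connection). The point is that neither side ever awaits a response: each side emits and, separately, reacts to whatever arrives.

```typescript
import { EventEmitter } from "events";

type Message = { type: string; payload: unknown };

// Stand-ins for the two ends of the wire.
const clientBus = new EventEmitter();
const serverBus = new EventEmitter();

// "Server": reacts to events; it never returns a response to a caller.
serverBus.on("message", (msg: Message) => {
  if (msg.type === "item:create") {
    // ...persist, then broadcast the outcome as its own independent event.
    clientBus.emit("message", { type: "item:created", payload: msg.payload });
  }
});

// "Client": fires and forgets, and separately handles whatever comes back.
const received: Message[] = [];
clientBus.on("message", (msg: Message) => received.push(msg));
serverBus.emit("message", { type: "item:create", payload: { name: "demo" } });

console.log(received[0].type); // "item:created"
```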
I had thought (perhaps incorrectly? it's not something I've spent a lot of time pondering) that a stateful connection like this is fragile in the real world compared to HTTP because it requires some sort of manual reconnection to the server on network changes (like if you're on a phone in a train or in a car), and that it would require both the server and app itself to be aware of what is dynamic realtime data, what is cacheable, what is stale, etc. Like kinda related to your other statement about state... doesn't this mean you're sharing and syncing state across both the server and the client?
Competitive video games operating over UDP are the closest everyday analogy I can think of, where the server handles most state (and is the ultimate source of truth) but the client optimistically approximates some stuff (like player movement and aiming), which usually works fine but can lead to rubber-banding issues and such if some packets are missed. But most gaming happens between one server and just a small handful of clients (maybe 100 or 200 at most?).
In a web app, would the same sort of setup lead to UI jank, like optimistic updates flickering back and forth, etc.? I suppose that's not inherent to either HTTP or websockets, though, just depends on how it's coded and how resilient it is to network glitches.
And how would this scale... you can't easily CDN a websockets app, right, or use serverless to handle writes? You'd have to have a persistent stateful backend of some sort?
One of the things I like about the usual way we do HTTP is that it can batch a bunch of updates into a single call and send that as one single atomic request that succeeds or fails in its entirety (vs an ongoing stream), and it doesn't really matter if that request was one second or one minute after the previous one as long as the server still knows the session it was a part of. Like on both the client and the server, the state could be "eventually" consistent as opposed to requiring a stable, real-time sync between the two?
Not disagreeing with you per se (hope it didn't sound that way), just thinking out loud and wondering how the details play out in a setup like this. I'm definitely intrigued!
No one is using my projects, and I'm not exactly learning a lot from them, except in the first half of each one, because whatever comes afterwards is just polishing (e.g. I did an interpreter project on a Python subset a year ago, and TBH the whole concept was pretty straightforward once my brain got it in the first few weeks). It's more of an obsession, and that probably explains why I got burned out after 2-3 months and COMPLETELY lost interest in it. Looking at my GitHub commit history, it's always 2-3 months of almost daily commits followed by 3 months of absolutely no activity.
I don't think this is the right path for me if I want to leverage my side projects to get a job in low-level programming. Either I figure out how to drill deeper into each of my projects, or I need to figure out how to remove the burnout every 2-3 months. If the market is good I'd go straight to apply for system programming jobs but right now it's even tough to keep my own job.
So this is my unpopular opinion about side project programming: if you are like me, maybe it's time to rethink the strategy. We only have one life, and I'm already 42. Gosh! Maybe I (we) should just find another hobby.
To me algorithms and solving problems in the abstract sense are the actual interesting part of the job
If discussing language/framework choices is the most interesting part of the job, it means I have a boring job/project/domain
- Cache invalidation is relatively easy; it's choosing the right strategy that is hard.
- You hate SPAs because you conflate them with running javascript, and you hate running javascript because you conflate it with predatory advertising strategies. You should hate predatory advertising strategies.
- IP clauses in job contracts contribute to the formation of monopolies and monocultures; they should be collectively fought harder than they are.
- Teaching juniors should be < 10% of a developer's job. If you want an instructor, hire an instructor.
In an alternate timeline, Microsoft might've won out with single vendor .NET everywhere. The DX would be better but everything would be way more closed.
On the iphone, only BASIC. :-)
"Just Run, Baby."
I'm not a 'programmer' like you all are, at best I can hack together code to get things done. I use git maybe once a year. I'm a biotech person that likes to hang out because y'all are mostly smart and the community is great here.
But man alive, this is not that difficult. Yes, it's hard to wrap your head around some nested dependencies. But it's a lot easier than any chain of protein/gene/neuron interactions. This stuff makes sense and you can edit it. My field can't really do that most of the time, and it really doesn't make sense for decades (at best).
Like, I'm trying to follow along here and am mostly lost. But the few times I do know something about the code y'all are talking about, it's made out to be a lot more complicated than it needs to be.
I mean, yeah, keep that up though. Makes your bosses pay you more and lord knows those suits should be doing that and not spending the cash on rockets and shitty pick-ups.
But for real, y'all are making this out to be a lot harder than it is.
[0] this is supposed to be an unpopular opinion, right?
It doesn't get as much mindshare as the problems with typing and frameworks and fashion trends and such, but my god, I've never seen a major, popular language with such poor support for basic time zone manipulation and storage. It is really really bad and won't be fixed until the Temporal API is stable and widely available: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
JS itself can't "keep" an original timezone in a Date() object. e.g.:
`new Date("2024-04-01T00:00:00Z").toString()`
Becomes your browser's time zone, even though the input was in Zulu/UTC. Also note that a time offset (Z or +08:00) is not the same as an IANA time zone string, and conversion between them is one-way. If you go from something like Los_Angeles to -7:00, you have no way to tell whether the -7:00 is due to daylight saving time or to another locale that doesn't observe American DST. And JS doesn't store either piece of info.
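A minimal demonstration (the date value is arbitrary): the instant survives, but the original zone/offset is gone, and `toString()` just renders in whatever zone the runtime happens to be configured with.

```typescript
const d = new Date("2024-04-01T00:00:00Z");

// The instant is preserved exactly...
console.log(d.toISOString()); // "2024-04-01T00:00:00.000Z"

// ...but toString() renders in the runtime's local zone, and neither the
// original offset nor any IANA zone name was stored anywhere.
console.log(d.toString()); // e.g. "Sun Mar 31 2024 17:00:00 GMT-0700 ..." on a US Pacific machine
```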
If you have multiple users across different time zones trying to plan an event together, JS's date handling is super footgunny (is that a word?) and it's very easy for devs to make errors when converting datetimes back and forth from the server/DB to the app UI.
And because JS is so powerful today, there are many things that should be doable entirely clientside, but aren't easy right now. For example, relative time: wanting to make a scheduler that can search for +/- 7 days from a certain selected datetime. What is a "day"... is it 24 hours * 7? What do you do about daylight savings time boundaries? Or if you go from Feb 1 to Mar 1, is that "one month" or "28 days"?
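As a concrete instance of the "one month" ambiguity, the built-in Date doesn't even make you choose: it silently overflows when the target month is shorter (dates here are arbitrary examples).

```typescript
const jan31 = new Date(Date.UTC(2023, 0, 31)); // Jan 31, 2023 (UTC)
jan31.setUTCMonth(jan31.getUTCMonth() + 1);    // "February 31" rolls over silently
console.log(jan31.toISOString().slice(0, 10)); // "2023-03-03"
```

No error, no warning: "one month after Jan 31" just became March 3.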
These may seem like edge cases to you, but when it comes to datetime... really it's all edge cases :( You run into half-hour timezones, daylight savings time (which is a set of rules that vary over time even within the same country, not just a static offset per timezone and date range), cross-locale formatting, etc.
A lot of this is very doable with existing libs like Luxon, or datefns for simpler use cases, but they are fundamentally hacking around the weaknesses of the built-in Date() object with their own data structures and abstractions.
For me as a frontend dev working on ecommerce sites and dashboards, I've had to correct more datetime bugs from other JS devs (including some with decades of experience) than any other issue, including React state management. The tricky part is that a lot of the weaknesses are non-obvious, but it really is buggy and weak as heck, especially compared to many serverside languages. It's based on a very early implementation of Java's dates, which has since gotten a lot better, but JS's Date was still frozen in time.
Thankfully, most if not all of these issues will be solved with the Temporal API once it's stable... it's been like 10+ years under development, since the Moment days. Can't wait!
There is a relevant xkcd: https://xkcd.com/1987/