The conflict here is that object-oriented philosophy isn't actually about objects. It's about communication between objects, the messaging between them. As per Alan Kay himself:
> I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging".
The goal of object oriented design is to focus on the communication between objects, not the objects themselves. Part of that is that the type of object receiving the message doesn't matter so long as it understands the message and knows how to respond. If the object looks like a duck, swims like a duck, and quacks like a duck, that's good enough--even if the duck turns out to be a chicken with an identity crisis. It understood the message and responded, and that's all we want in object oriented programming: objects that can communicate with each other.
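A minimal Ruby sketch of that idea (the class names are made up for illustration): the caller never checks the receiver's class, only that it responds to the message.

```ruby
# Duck typing: the caller only cares that the receiver
# responds to #quack, not what class it actually is.
class Duck
  def quack
    "quack"
  end
end

class Chicken # an impostor with an identity crisis
  def quack
    "quack"
  end
end

def provoke(bird)
  bird.quack # no class check; any quacking object works
end

puts provoke(Duck.new)    # quack
puts provoke(Chicken.new) # quack
```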
Adding type checking flies in the face of this philosophy. Instead of type being irrelevant as long as the receiver of a message can understand that message, suddenly it's front and center. The code will accept or reject objects based on their type even if they're fully capable of upholding their end of the conversation.
Type-less-ness is core to Ruby. But some people may still prefer to include typing. We all want to use the tools and practices that best enable us to deliver, so that's a fair want. But since Ruby as a philosophy doesn't care about type, it's important to maintain type checking as an accessory to the language, not a feature of it. Something that can be layered on top of the Ruby language for those that want it, but that can be ignored by those who don't.
That's why the approaches being used keep the type annotations out of the source files themselves.
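For instance, the RBS approach that ships with Ruby 3 puts signatures in a separate `.rbs` file, so the `.rb` source stays annotation-free (the `Duck` class here is a made-up example):

```ruby
# duck.rb -- plain Ruby, no annotations anywhere
class Duck
  def quack(times)
    "quack " * times
  end
end
```

```
# sig/duck.rbs -- the types live here, outside the source
class Duck
  def quack: (Integer times) -> String
end
```

A type checker reads the `.rbs` file; anyone who doesn't care about types never has to look at it.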
Ruby is not the first, or the second, or even the third dynamic language to add static type checking support. Has this _ever_ worked out well?
TypeScript hasn't ever done anything for me other than give me 3rd-party dependency integration headaches. I love strongly typed languages and compile-time checking, but TypeScript has never seemed worth the trade-off, due to its broken interoperability with normal JavaScript and the terrible state of crowd-sourced typedefs. I'm either fighting some badly defined third-party typedef, spending a lot of time creating typedefs myself, or dealing with a version issue because the typedef isn't compatible with the version of the library I'm using.
When I use JavaScript I hardly ever run into issues that static typing would have prevented, and I have zero TypeScript issues.
Honestly how has it improved the speed at which you get things done? Were you just constantly running into JavaScript bugs due to the lack of typing?
Then at the end of the day, it was still JavaScript (an interesting word for "not ruby"), but with types slapped on top.
I ended up switching to crystal, which is basically ruby + types (inferred when possible, but I actually wanted the types) with the performance of golang.
Honestly, I rarely refer to documentation for these things, because every project is a snowflake and the documentation gradient runs from no documentation to perfect documentation. By that I don't just mean the words; I mean the website or the framework used to document, as well as the style (more like flavor?) of the documentation. Typescript is the great equalizer that makes a project with no documentation (but decent comments or method/var names) just as documented as one that does.
I can also hit ctrl-space to easily get a list of methods in case I forgot which one I needed, or to discover what's available. That's enormous in my style of programming. It sure beats going to someone else's documentation page and trying to read it.
Some of the improvement is not necessarily that I had javascript bugs due to lack of typing, but rather that with typescript I don't get those bugs, which means I don't have to reason about avoiding them anymore like I did with javascript. Sort of a reduced cognitive load.
Also, I have a few coworkers who are not javascript/typescript savvy that I was able to get up to speed with typescript fairly easily due to the ease of using the types. There are, of course, hard things, such as partials, understanding tsconfig.json, or even generating types, that I don't cover with them; I just have them come get me when they're ready.
For most things without types I just do the declare module in a d.ts - however, I will first try to find another package that does the same thing with types. Most popular packages these days do include types, some better than others.
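A minimal sketch of that workaround, assuming a hypothetical untyped package called `some-untyped-lib` (the name and its `doThing` function are invented for the example):

```
// types/some-untyped-lib.d.ts -- hand-rolled declarations for a
// hypothetical package that ships without types
declare module "some-untyped-lib" {
  // Start minimal and refine as needed; `any` also works as a
  // first pass if you just need the module to resolve.
  export function doThing(input: string): string;
}
```

With that file on the compiler's path, `import { doThing } from "some-untyped-lib"` type-checks instead of erroring with "could not find a declaration file".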
After I re-read above, I realized that a lot of it depends on the IDE. If I were still using vim or kate/gedit, it probably wouldn't be a huge timesaver. Fortunately, I settled on one of the intellij editors.
It's about having things auto-completed and getting to see errors before you run.
And what kind of libraries are you using, that third-party typings are the source of your complaints?
I'm having a really hard time understanding this "I need types forced down my throat" and "I like typing 3x as much as I would otherwise need to" and "yes, I want half my screen obscured by the types of everything I'm doing, not the actual code" and the "adding types now means bugs are impossible" mass cult hysteria that's running so rampant. Typing very occasionally prevents bugs that are generally easy to catch/fix or show up straight away when running an app. It's mostly a documentation system. And it slows development down.
Especially in Ruby which is such an elegant "programmer's language" I think it would just be silly.
Static types do make certain classes of bugs impossible, like missing-method bugs, typos, and the like. You can eliminate a large group of defensive programming techniques and trivial unit tests that you would need in a dynamic language to ensure a similar level of confidence in a program. Obviously they don't make all bugs impossible; there will be bugs as long as there are programs, because we write programs without perfect knowledge of the requirements, and this is an unavoidable pitfall of software.
This can depend really heavily on what you mean by "development." If it's just getting the first version banged out, sure. If it includes coming back to code a couple years later in order to incorporate a new business requirement, having that documentation present can be a really big deal. 2 seconds spent typing out a type hint now might, down the line, save several minutes on average. Even in a recent Python project I did over the course of just a couple weeks, when I got to the "clean the code up and get it ready to put on the shelf for now" phase of the project, I ended up wishing that I had bothered to use type hints just a wee bit more when I was banging it out in the first place. It would have been a net time saver.
I don't like static typing super a lot in all cases because it makes it hard to do data-level programming. Which I find to be the true productivity booster in dynamic languages. But optional typing seems to hit the sweet spot for a great many purposes.
This is not true. You could paint almost every language feature aimed at producing correct software in this way: "writing tests makes me type more, and they catch very few bugs that would have been shown when running my app anyway". (Or, as an ex coworker once told me, "I don't need to write tests because I never have any bugs").
And what are types if not a kind of test/proof that the computer writes for you?
> And it slows development down.
There's a software development adage that goes like this: "I don't like writing tests, because they make me waste time I need to fix bugs on production that weren't caught because I don't write tests."
Well, I guess this is also a matter of perspective.
From where I'm standing, I'd rather you slow down and "document" your code. Code written at the speed of thought makes for an awesome MVP and for an awful legacy for your co-workers.
In the course of my job I write Swift for iOS and Ruby for server APIs and our web-based UIs.
Type issues are about 0% of my Ruby bugs, but dealing with all the damn type requirements in Swift regularly takes dozens of minutes to track down when some weird esoteric error message pops up. And God help you if you try to use generics.
If you want strong typing, then good for you. Just pick a language that fits that mold.
So much of what I love about Ruby is what it doesn't make me do.
Doubt.
In my experience at least 70% of bugs are ones that you'd catch by using types - things like x instead of y, possibly-empty list instead of known-nonempty list, user ID instead of group ID. Logic errors that couldn't be caught by typing do exist, but they're very much the minority.
3x? Even on languages that do not support type inference I would say that this is at most 1.1x. Even then, type inference exists.
> adding types now means bugs are impossible
I usually see that as a misrepresentation of what type advocates say. Rather, people just claim that types reduce the number of bugs.
> or show up straight away when running an app
Or ones that show up after you've had said app running for a while, and then you get a run-time type error which appears only after certain actions. This is the main reason that I avoid languages like lua and python.
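A tiny Ruby illustration of that failure mode (the function is contrived): the type error hides on a rarely taken branch and only surfaces once that action is finally performed, possibly long after deployment.

```ruby
# The bug only exists on the branch taken for negative input,
# so the app can run fine for a long time before it surfaces.
def describe(n)
  if n >= 0
    "got #{n}"
  else
    "got " + n # TypeError: String + Integer, never checked until run time
  end
end

puts describe(5) # works, every day, for months...
begin
  describe(-1)   # ...then the rare action finally happens
rescue TypeError => e
  puts "boom: #{e.message}" # ...and the latent bug surfaces
end
```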
(In addition languages with more advanced type-systems allow you to catch bugs such as buffer overflows or division by 0 at compile time)