I don't know about other folks, but the vast majority of code I read is in my editor, where the type inference saves me a ton of time and pain.
I do not miss having to change int32_t to int64_t at every location in my code when I change my mind about types.
For enterprise-scale problems it’s difficult to get the full context for a line of code. It could be in a repo you don’t even know about, run by people you have met only once and never think about.
A good reason to use logical typedefs, though.
Seriously, that argument is like saying that long variable names are bad because sometimes you have to write code on punched cards. The solution to the drawbacks of obsolete technology is to not use obsolete technology.
So far, color highlighting is about the most I can expect from a CR tool. Though I’d be very open to being able to do CRs from my IDE.
They pointed out every place we weren’t following The Rules and one of those was that we weren’t creating a developer manual for people to use our library. Which was somewhat fair except I’d already walked people on every team through it, in person with a sequence diagram for doing their part of the workflow. But that was going to take me days of busy work every release and suck all of the joy out of my life. It was a clever chess move.
So what I did instead was fix our functional tests to be compatible with JCite, mash the printed-form Javadoc and some wiki pages together, and shoehorn them into a template of the corporate document format, programmatically. I couldn’t quite get the footers and indexes right, but I asked our less senior tech writer if she would be okay spending an hour fixing typesetting problems every release, and she agreed. So instead of two developer-days it was one hour of typesetting, by a tech writer. It took almost a year to break even on the time investment, but it was well worth it in the long run.
To draw this curvy line back into a circle: I would think it perfectly reasonable not to use type inference in a functional or integration test, because there you want something to break that tells you you have a breaking change in this release.
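A small TypeScript sketch of that idea (the function and types here are invented for illustration): annotating the expected type in a test pins the contract, so an accidental signature change becomes a compile error in the test itself rather than a confusing failure downstream.

```typescript
// Hypothetical API under test. Suppose a release changed the return type
// from number[] to string[].
function getUserIds(): string[] {
  return ["1", "2", "3"];
}

// If the functional test had been written before the change as:
//
//   const ids: number[] = getUserIds();
//
// the compiler would now reject it right here, flagging the breaking change.
// An inferred binding (`const ids = getUserIds()`) would silently follow the
// new type and push the failure somewhere else.
const ids: string[] = getUserIds();
console.log(ids.length); // 3
```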
That feels like a bit of a straw man because you just took the weakest example from the author's list and took that single one out of context. The full quote was:
> But there are many other contexts where I read code: a book, a blog post, a Git diff. When editing code in a limited environment, e.g. stock vim in a VM.
In other words, there are other contexts that most programmers encounter in their day-to-day that don't have the benefit of autocomplete or popups. I think the general principle of "the code should stand alone" is a fair one to point out without just scoffing "Who reads code in a book??".
But even when I've used IDEs in the past, type inference still seems an unnecessary slowdown and pain in the ass to save 1 second of typing. Explicit types make the code a lot more readable, even if your IDE is capable of showing you the type.
```
forward_iterator auto it = ...;
```
shows intention. auto has saved me lots of headaches, particularly in generic code: what is the type of some arithmetic operation? Type inference can help. What is the return type of a lambda? auto helps; the type is impossible to spell. What is the result of a non-type-erased range view pipeline? Almost impossible to get right.
There are lots of examples where auto is valuable at least in C++.
If inference is bad, maybe the second sentence shouldn't leave it up to the reader to infer what "it" means. Surely it would be much easier to read as "Type inference [...] type inference [...] type inference".
This results in highly repetitive, and as you've pointed out, somewhat tedious reading.
F# is my daily driver at work and I definitely benefit from both the type inference as well as my IDE making those types explicit for my reading pleasure!
- Omitting return types in non-private interfaces (e.g. crossing module or package boundaries, or anything used as input to generate documentation of same)
- Omitting concrete input parameter types
- Omitting explicit, known constraints on generic/polymorphic parameters
Subjective:
- Omitting any of the same on private equivalents which form something like an internal interface
- Omitting annotations of constant/static aspects of an interface whose inferred types are identical to their hypothetical annotation
Subjective but mostly good:
- Relying on inference in type derivation, where type derivation is the explicit goal of that API
- Preferring inference of interface/protocol/contract types over explicit annotation of concrete types which happen to satisfy them
- Omitting annotations for local bindings without distant indirection
Unambiguously good:
- Omitting local annotations of direct assignment where the type is obvious in situ
- Omitting redundant annotations of the same type which have the same effect without that redundancy
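As a rough TypeScript illustration of those tiers (all names here are invented): annotate the exported signature, and let inference handle the obvious local bindings.

```typescript
// Non-private interface: parameter and return types annotated explicitly.
export function parsePort(raw: string): number {
  // Local bindings with direct assignments: inference is unambiguous here,
  // so annotations would be redundant.
  const trimmed = raw.trim();
  const port = Number.parseInt(trimmed, 10);
  if (Number.isNaN(port) || port < 1 || port > 65535) {
    throw new RangeError(`invalid port: ${raw}`);
  }
  return port;
}
```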
Like when I read about "auto" in C++. It still bugs me when I see it...
> In Rust and Haskell you have to at least annotate the parameter types and return type of functions. Type inference is only for variable bindings inside the function body.
This is false for Haskell. Can't speak for Rust.
Even in Rust, I tend to specify types at variable bindings if the type gets overly complex, just to push errors closer to their cause.
A nice feature of Rust is that you can specify partial types, with underscores for the still-to-infer parts. E.g. `let x: Vec<_> = some_expression;` is a vector of something, but you don't have to say what exactly.
I believe it was a conscious decision to limit the cascading of type errors from global type inference; the base algorithm was capable of it.
Also, what do you build in F#? I think if I moved out of Python that would be it. OCaml is one of my favorite languages ever, and F# looks rad.
I built a basic static program analyzer for Solidity smart contracts over the past 7 months. I'm re-writing it to work off CozoDB now, as I want to be able to express more complex mutually-recursive concepts and I believe Datalog is much better suited for that.
I also get nervous that I'll change my function logic and the wrong type will be inferred, and it'll compile because the old and new types just happen to be logically compatible, but I get the wrong behaviour (or maybe this is just trauma from my VB6 days).
I tend to annotate my public function parameters and return types for this reason.
He posts this on HN, which happens to be built in Arc Scheme, a Lisp dialect (which is dynamically typed). I wonder if the author has written software that has better uptime and more popularity than HN? Is the author's codebase truly easier to understand?
And what does it matter what HN is built on? Do you think any criticisms of C should not be posted on a site that runs on top of the Linux kernel, because the kernel is written in C?
It does matter that there is a lot of successful software written in languages that are dynamically typed or use type inference when some want to dismiss these approaches entirely. Because it proves that these approaches can still result in useful and reliable software and productive software engineering.
Complex composed types just have this issue to be honest, and personally I like having inference as a tool to avoid the litter.
In Rust you do, but in Haskell you don't.
There are plenty of examples where a type declaration is required to get functionality, especially when we consider what GHC offers as being "Haskell".
I agree the author could be more clear about this—that Haskell and GHC only sometimes require type annotations in certain circumstances.
And even in some cases where you used to need them a simple type application is often enough now.
You can go an entire career without writing a single type annotation. It depends heavily on what style of Haskell you write.
At the definition site, I agree with the author that type inference is more of a burden for the most part, at least if the function is more than a hidden implementation detail of the class/module/file.
Bidirectional type checking kind of sorts this out by requiring annotations on top-level functions and inferring or checking the rest in a mechanical manner. That's kind of a sweet spot for me. (And reportedly it's faster and gives better error messages.) Most dependently typed languages do this, I believe out of necessity. And TypeScript also requires annotations on top-level function signatures, but I haven't checked whether it uses the bidirectional algorithm.
If that's what the author is trying to say, then I agree. And with a Hindley-Milner system it's still best to annotate (most of) your top-level functions (IMHO).
And I've gotten into trouble not doing this in the past. I started a project at work with flowjs, got inscrutable type errors in a different file than wherever the root cause was and bailed for typescript. In hindsight, it wasn't the fault of flowjs, but rather my lack of annotations on top level functions. (I knew far less about type-checking at the time.)
First off, the complaint about reduced readability outside of IDEs feels like a niche problem. Sure, it's a valid point when you're reading code on paper or in a basic text editor, but let's be real: most of us live in IDEs with excellent type hinting capabilities. The argument kind of falls apart when you consider that good variable naming can often make the need for explicit types less critical. Plus, isn't the goal of any good codebase to be as self-documenting as possible?
Regarding OCaml's type inference being a "footgun," it seems like a bit of an exaggeration. Yes, OCaml's system is powerful and can lead to some head-scratching moments, but isn't that just part of the learning curve with any powerful tool? It sounds like the frustration comes more from not leveraging the type system correctly rather than an inherent flaw with type inference. And honestly, adding type annotations for debugging is a pretty standard practice across many languages—not just OCaml.
The point about academic effort being wasted on type inference research also misses the mark. This research pushes the boundaries of what's possible with programming languages, leading to more expressive and safer languages. To frame this as a waste is to ignore the broader benefits of advancing programming language theory. Sure, it'd be nice if papers spent more time on practical applications, but that doesn't mean the theoretical aspects aren't valuable.
It feels like the article is conflating personal gripes with systemic issues. Type inference, when used correctly, can significantly reduce boilerplate and make code more concise and readable. Of course, it's not a silver bullet, and there are situations where explicit type annotations are beneficial for clarity, especially in public APIs. But to dismiss type inference outright seems like throwing the baby out with the bathwater.
In the end, it all boils down to using the right tool for the job and understanding the trade-offs. There's no one-size-fits-all answer in programming, and dismissing type inference entirely overlooks its benefits in many scenarios.
type 'a binary_tree =
| Leaf
| Node of 'a binary_tree * 'a * 'a binary_tree
type http_response = [
| `Ok of string
| `Error of int
| `Redirect of string
]
module type QUEUE = sig
type 'a t
exception Empty
val empty: unit -> 'a t
val enqueue: 'a -> 'a t -> 'a t
val dequeue: 'a t -> 'a option * 'a t
end
type _ expr =
| Int : int -> int expr
| Bool : bool -> bool expr
| If : bool expr * 'a expr * 'a expr -> 'a expr
type person = {
name: string;
age: int;
address: string;
}

I love Python, but I use it carefully in large-scale production settings. Python with a TypeScript-like static type system would hit the sweet spot for me (and yes, I've used mypy, but it doesn't hit the same spot).
In Python's case, there are probably thousands of types in the standard library alone.
You know exactly what that person meant.
Also you're confusing dynamic typing and type inference.
I think you missed their point, which is that "it is fine for me in Python not to see the type in the source code, and therefore I believe that it should be fine in other languages (be it dynamic typing or type inference)".
A bicycle is objectively worse if you have many staircases to deal with, but that doesn't imply that getting used to one and being productive with it is fundamentally and necessarily misguided.
Too hard. I need to see the types:
((((3::Int)+(9::Int)::Int)+(15::Int)::Int)+(18::Int)::Int)+(5::Int)::Int
Is that even harder to read?

> My response? Go refactor your hideous code.
Personally I'd like to see languages embrace "format on save" as an explicit part of language UI to improve ergonomics here. Firstly, a first-class auto-formatting tool is just great, and spamming cmd-s as you write code until it auto-formats nicely is a really quick way to observe and address syntax issues -- to me that part of the experience is already important enough to make it an explicit goal of language design.
But secondly -- if you do embrace auto-format on save at the language-design level, there's a lot more you can do than just auto-format the code! You also gain a really nice channel for communicating information about program change to the developer. Say the language lets you differentiate between inferred and non-inferred types -- then at auto-format time, the inferred types are explicitly added to the code (in some heuristically useful set of places, or even just everywhere that a non-type-inferred language would require explicit types).
In that world, as you make changes to the code, your git diff is going to start giving you a lot of potentially useful feedback about what actually happens in your program when you make certain changes. Additionally, because the inferred types are added automatically, you can easily have a mode to hide them when you want a less noisy view. Maybe the convention would become that committed code is always serialized to a form that conveys more of the statically knowable program information by default -- which your IDE can hide to give you a more streamlined view -- rather than the other way around. And then for the parts of your code you know are boundaries or APIs or not expected to change types, you just update the annotation to indicate that the type is not supposed to be re-inferred, and a type error will be issued if it doesn't match instead of the annotation being updated. Now you've got a nice way of constraining program evolution in desired directions, to help tame complexity or at least force explicit acknowledgement as certain assumptions about the program structure become invalid over time...
This way you can read the code more easily and if you want to see the type it's there for you.
Now, if the IDE autocompletes the type declaration for me somehow, that's great! That's a win-win: I save time but still maintain readability.
Then you will need type inference to some extent. :D
I enjoyed this reference to a false dichotomy while making one itself. Languages don't make programmers leave out annotations where it aids readability.
This is true, but most people are not going to incur write-time penalties to benefit read-time later on. Annotations (usually) benefit read-time at the expense of write-time. Having had to work on codebases that were written by people who were furiously trying to "get things done" because not shipping or shipping late might mean going out of business, they take whatever shortcuts they can. It ultimately saddles other people with tech debt for years and in some cases decades.
I'd say that for most people, "use annotations most of the time" is good advice, but there are exceptions, and people should refrain from casting judgment on others' work due to a dogmatic insistence on others using their preferred style.
It's great that we don't have to type every type annotation. But when reading code, it sure is much easier to see exactly what every type is, explicitly. You can look object by object in most IDEs by hovering over each item, but that doesn't have the at-a-glance see-it-all viewability; hence the idea: just rewrite the code with or without the explicit types, as desired.
There's still a lot of type narrowing and other things that happen that aren't super visible, that alter the known typing state as we go. I have less of an idea of what to do with that.
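A minimal TypeScript sketch of the narrowing point above: inside each branch the checker silently changes what it knows about `x`, and none of that evolving state is written down in the source.

```typescript
function describe(x: string | number | null): string {
  if (x === null) {
    return "nothing"; // below this branch, x is narrowed to string | number
  }
  if (typeof x === "number") {
    return `num:${x}`; // here x: number
  }
  return `str:${x.toUpperCase()}`; // here x: string, so string methods apply
}

console.log(describe(null), describe(3), describe("hi")); // nothing num:3 str:HI
```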
It's really great for stuff like passing enums as function arguments. You can write `context.setColor(.red)` instead of `context.setColor(Color.red)`, the latter of which I find just unnecessarily repetitive.
The coolest part about this is when you're using a new unfamiliar API, you can let your IDE's auto complete suggest options for you just by typing '.' and pick from the list of options shown inline.
Ok, let’s go back to normal imperative programming. What about alias analysis? What about devirtualization? You NEED type inference. That being said, I am not a fan of the “usual” OCaml style where people seem to write as few type annotations as they can. That is not user friendly.
Ditto.
The world where data and its type are the SSOT means you can trivially validate every bit of code that touches it.
I use type inference for this: the compiler looks at the existing types and then infers the type of the code I haven't yet written and tells me. I then write code based on the type given to me by the compiler.
I'll admit, once in a while I write something like `let a: Int = <the_var_which_type_I_dont_know>` and compile, such that the compiler says something like "expected an Int, got a HashMap<String, Int>". But pretty rarely.
So yeah, I kind of like type inference.
Type inference inside a function body is still type inference. Type inference gives us options and can sometimes improve readability. I find the title and premise of this article rather silly.
One could argue it's better for type inference to not just be part of the language but also part of the IDE. E.g. you type
auto x = ...
And then the IDE offers to replace the auto with vector<SomeClass>
That way you can both write code with type inference, but read code with fully annotated types!
x = foo();
And then accept the correction it offers to vector<Bar> x = foo();
Which is nice enough since you can invoke all this with just the keyboard.

But I can see the value of inference if the type is defined by a constant. If the rule is "Variables are the type of the constant that you assign in the definition, anything else is manual" it's pretty obvious.
All this visual help makes a lot of pet arguments obsolete.
x: number[] = []
y: number = x[0]
The array type is missing information about the length of the array, and types are very often missing important information like this. Say you want to describe an array containing only odd integers - good luck.
Types are simply a heuristic for humans to try and remember vaguely what their code should do. If you want to do anything complex you need to abandon them anyway and use things like x!, and x as my_type. So designing around types seems like a bad idea.
You could do much better by abandoning text-based programming languages and creating a visual programming language where you can zoom out and see what information gets passed where. The whole reason for types is to be a hack fix for the problem that we’re too zoomed in on our code and can only really reason about one function at a time, given our crappy text-based programming languages.
"noUncheckedIndexedAccess": true
in your tsconfig.json. For odd only numbers, you can make a "branded" type. There are many options, one way is type OddNumber = number & { __BRAND_ODD_NUMBER: true }
then it's just OddNumber[]
and to onboard untrusted data, use a type guard:

const isOddNumber = (x: unknown): x is OddNumber => typeof x === 'number' && x % 2 === 1

Yes. Always annotate types. Keep inference; it tells you when your annotations are inconsistent with your code.
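Putting the branded-type snippet above together into a self-contained sketch (note the brand exists only at compile time and is erased at runtime; also, as written the guard rejects negative odds, since `-3 % 2 === -1` in JavaScript):

```typescript
type OddNumber = number & { __BRAND_ODD_NUMBER: true };

// Type guard to onboard untrusted data into the branded type.
const isOddNumber = (x: unknown): x is OddNumber =>
  typeof x === "number" && x % 2 === 1;

// The predicate lets filter narrow unknown[] down to OddNumber[].
function keepOdds(xs: unknown[]): OddNumber[] {
  return xs.filter(isOddNumber);
}

console.log(keepOdds([1, 2, 3, "4", 5.5])); // [ 1, 3 ]
```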
Isn't that plain type checking, rather than type inference?
Type checking detects inconsistencies; type inference assigns types in ways that avoid them.
No, this sounds like:
* Type Inference Was a Mistake
* Type Inference Makes Code Less Readable
* Type Inference is a Footgun
* Type Inference Wastes Academic Effort
These are incompatible with:
> Keep inference