Rewriting code is expensive, as my office knows well. We maintain lots of old embedded systems and periodically have to rewrite or rehost them because the old hardware platforms aren't available anymore or aren't performant enough for new features. These become multi-year, multi-million-dollar projects, for relatively little gain.
By ensuring that developers and architects conform to certain conventions, this code maintenance becomes (in theory) much cheaper, and rewrites can be avoided or minimized. This is a good thing: it lets organizations be more flexible and productive, since their time and money are no longer wasted on the old things and can be spent on the new ones.
What you say doesn't make sense given how Go reflection is implemented. If it were really about limiting choice, Go would have no reflection. Go reflection is basically a way to opt out of its (poor) type system. You should never have to do that in a statically typed language, yet reflection is used heavily in Go's own standard library.
Furthermore, let's be honest: what do you think is more complicated, generics or concurrency? Generics aren't complicated at all.
> We maintain lots of old embedded systems and have to periodically rewrite or rehost it because the old hardware platforms aren't available or aren't performant enough for new features.
But Go isn't for embedded system programming. You can't run Go on bare metal without an OS.
Enforcing conventions is of course a good thing! The problem is how Go enforces conventions:
(0) When Go enforces a convention mechanically, it's a triviality that can be adequately handled by external tools (e.g., naming, formatting, unused variables, etc.).
(1) When a convention is actually useful (e.g., the correct way of using an interface), Go's type system is too dumb to understand it, let alone enforce it.
> aren't performant enough for new features
Second-class parametric polymorphism (“generics”) is purely a compile-time feature. It can be completely eliminated (that is, turned into the non-generic code you would've written otherwise) using a program transformation called “monomorphization”, before any target machine code is generated. So there's no runtime price to be paid.
Second-class polymorphism is what Damas-Milner gives you: let-bound identifiers may admit more than one type, in which case every type they admit is subsumed by a type schema.
Second-class polymorphism rules out polymorphic recursion if you consider every recursive definition as syntactic sugar for applying a fixed point combinator to some expression of type `a -> a`, for whatever monotype `a`.
"The key point here is our programmers are Googlers [...] They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt."
I'll concede there's the possibility of some weird tongue-in-cheekness here, but it definitely seems to be the canonical view among gophers that Go's paucity of features is about accessibility for programmers who don't understand them or find them cumbersome to work with.
I think this idea that "paucity = good" is so easily abusable that whenever it comes up from gophers, I wish they would concede that it's an unhelpful simplification of what they must actually believe. Assembly language possibly has the greatest paucity of concepts, given that it offers no way to introduce language-level abstractions (other than, say, conventions about calling, etc.), but Go is nothing like that.
The argument can't be that paucity is good as a general condition; it's that there are forms of abstraction and programming-language features that Gophers find unhelpful or difficult to understand. The problem I have with this when applied to parametric polymorphism is that Gophers already work with these concepts daily, so it can't be that using them is complicated.
I also have a hard time believing that the ability to define parametric types and functions costs you anything. It's almost always self-evident when to use parametric types or functions; things that are "wrappers" or "collections" probably account for 80% of their use. I also don't think I've ever experienced ambiguity of choice with the feature. For instance, I don't think I've ever been in a situation where I had to trade off a generic definition against N specialized definitions. The frustration of using Go is actually that I now have to consider the latter as a possibility, or trade off type safety by using unsafe casting.
If there's a place where parametricity truly introduces complexity, I'd love to hear about it from a Gopher instead of a blanket statement about how "programmers don't understand it", "it decreases readability", or "Go is simpler without it".
Please keep in mind that there are differences at scale. What is "easy to work with" for 1 programmer over a month might not be so for 20 programmers over years.
> The argument can't be that paucity is good as a general condition, it's that there are forms of abstraction and programming language features that Gophers find unhelpful or difficult to understand
The argument is that simpler is better at scale. Airplanes can move freely in 3 dimensions, but airliners are constrained to fly in particular ways around busy airports and cross country.
> I also have a hard time believing that the ability to define parametric types and functions costs you anything. It's almost always self-evident when to use parametric types or functions, things that are "wrappers" or "collections" probably account for 80% of their use.
I could see an argument for parametric collections and parametric sorting in Go. Not, however, for wrappers.
> The frustration of using Go is actually that I now have to consider the latter as a possibility or trade off type safety by using unsafe casting.
In your experience, what kind of "cost" has there been in unsafe casting to use collections? Even in environments like Smalltalk, where all use of collections amounts to "unsafe casting," I've rarely seen situations where a mistake of this type wasn't found trivially. Does your frustration come from having to abandon the "assured safety" the type system would give you, or does it come from an experience of the costs?
For me, it's entirely about expressiveness. This:
reverse: 'a list -> 'a list
where the type encodes that `reverse` is a function from a list of some type of elements to another list of the same type of elements, is more informative than this:

reverse : list -> list

where it's obvious that this list has elements of some type, yet it's not clear what the element type is, nor that the resulting list has elements of the same type as the input list.

Beyond being able to express and communicate intent, there's the added benefit that the type system can statically check that the input elements have the same type as the resulting elements. There's also no worry about information loss through subsumption (the subtyping rule that allows a value of a subclass to "become" a value of one of its superclasses, losing specificity that can only be regained with a type cast; this is one reason I tend to favor row polymorphism as well: no subsumption means no information loss and no need to cast), because no subtyping is involved in parametric polymorphism.
Parametric polymorphism is simple and well understood. And not exactly new either: it has been understood for some 40 years already.
> I could see an argument for parametric collections and parametric sorting in Go.
C++'s <algorithm> header is proof that there are lots of algorithms that benefit from being expressed generically, not just sorting.
> In your experience, what kind of "cost" has there been in unsafe casting to use collections?
Without type safety, there's a disincentive for decomposing things into smaller parts, because the cost of manually verifying that the parts are compatible is greater than the benefits of decoupling them. Would a Go programmer even dream of bootstrapping fancy data structures from simpler ones?
> Even in environments like Smalltalk, where all use of collections amounts to "unsafe casting," I've rarely seen situations where a mistake of this type wasn't found trivially.
At scale, the law of large numbers says that even improbable events will occur every now and then. Unfortunately, a program with even one bug is still incorrect.
What use is there for a fancy data structure? In practice, these occasions aren't that common. Many "fancy" data structures tend to exhibit bad cache behaviors if implemented naively.
> At scale, the law of large numbers says that even improbable events will occur every now and then. Unfortunately, a program with even one bug is still incorrect.
Are you an undergraduate? Depending on how you interpret the spec (which isn't cut and dried when business requirements meet the real world) almost every page of production code has some kind of bug in it. Also, the law of large numbers isn't that relevant for most codebases and developer populations -- the numbers aren't that large. The effect of hubris is much larger in practice.
> The argument is that simpler is better at scale. Airplanes can move freely in 3 dimensions, but airliners are constrained to fly in particular ways around busy airports and cross country.
For one, I dispute the premise that simplicity has anything to do with the cardinality of features/concepts. But let's take that argument at face value: why not assembly, then? Why not a language with the absolute minimum number of concepts? I think if you interrogate this premise you'll find it doesn't hold much water, and that Go doesn't really aspire to this goal anyway. I think we have some amount of working memory for intuiting programs built from a certain number of concepts. There's a valid argument that some languages suffer by breaking that barrier (though I personally think Go underestimates where that barrier is), but it seems incorrect that language designers should optimize for a minimal number of features.
I think complexity at scale has more to do with features that interact poorly (or cause poor interactions more frequently with a larger number of people). Specifically, it's about composition. For instance, there's a valid argument to be made that asynchronous exceptions (i.e., the ability to interrupt another thread with an exception) and locks compose poorly. Mutable state is a common example of a feature that's a detriment to composition. But parametric polymorphism, if anything, gives us a much greater ability to compose. It allows us to define functions that work on data arbitrarily parameterized by other types, which makes them conducive to composition. And likewise, we don't lose the ability to reason about composition at scale with parametric types. A parametric function does not gain complexity as more team members are added, more code is written, more dead code accumulates, etc. Parametricity changes nothing at scale.
> In your experience, what kind of "cost" has there been in unsafe casting to use collections? Even in environments like Smalltalk, where all use of collections amounts to "unsafe casting," I've rarely seen situations where a mistake of this type wasn't found trivially.
That's an argument for Go to not have types. But Go does have types, and type safety is often espoused as a benefit of Go. If you're going to have types, it makes zero sense to me why you should not have parametric polymorphism, since it is the only way to have things like typed collections without opening yourself up to the possibility of casting errors. Frankly, I find it bizarre that people claim to have found type errors trivially fixable, because the scope of where a type error can be introduced is enormous in an untyped language... it's literally every location that potentially calls into the code where the error occurs.
> Does your frustration come from having to abandon the "assured safety" the type system would give you, or does it come from an experience of the costs?
Yes: type safety is an enormous advantage for writing correct code, in my opinion. It's one of the best mechanisms a programming language can give you for enforcing invariants about data. The Curry-Howard correspondence is a huge advantage for writing correct code. Every place a type checker isn't being used to delimit acceptable data is a potential source of a huge number of bugs. It's also a frustration because casting introduces conversion and type-checking boilerplate that a type checker could otherwise take care of for you.
Okay, then you can throw away the rest of your post and stop right here. The overwhelming historical evidence is that assembly doesn't scale.
> That's an argument for Go to not have types.
Sorry, that doesn't follow. Is the logic here that because I mention Smalltalk, I'm advocating late binding and Object as the only type for Go? The argument is that Go doesn't need a more complicated type system to avoid problems with heterogeneous collections, because practice shows that even a simpler one can suffice.
> Frankly I find it bizarre that people claim that they have found type errors to be trivially fixable, because the scope of where a type error can be introduced is enormous in an untyped language...
Sounds like you're invoking freshman-level false "common knowledge." Have you ever worked in an "untyped" language on a real project? What if a project simply used runtime asserts? Then a type error in a heterogeneous collection would be caught in unit testing. If it got out to production, it could be easily caught and logged. In 15 years of Smalltalk industry work, I never encountered the kind of heterogeneous-collection type error you're referring to in production. The closest thing I can recall involved the heterogeneously typed reuse of a local variable. (Which is simply bad coding style in Smalltalk.) In Go, you have a type system that provides much more feedback at compile time, and workable mechanisms for detecting the problem at runtime. So at least in this one instance (heterogeneous collections) there is arguably almost no practical benefit to parametric polymorphism.
(P.S. Technically speaking, Smalltalk is strongly typed with message passing semantics for methods implemented through late binding. It's not "untyped.")
Guess they don't think so highly of their hires anymore.
By the way, http://www.deathandtaxesmag.com/200732/google-admits-its-fam...