This rubs me the wrong way. Even back when Go first came out, anyone who knew anything about programming languages rolled their eyes at pretty much everything about Go's type system, including the inference. Just because Sun couldn't figure out how to do it in the 90s doesn't mean that type inference wasn't mostly solved in the 70s. Even before many people were using it, Go was a language people - rightly - complained about.
That said...nothing in the original post says anything along the lines of "Go's type inference [is] an advance in the state of the art", so I might be misunderstanding the author here.
And Go has succeeded despite these condescending diatribes on how a language needs to have a Hindley-Milner type system with ADTs and type classes to be useful. Go made me truly realize how insufferable the PLT community is, and why they are so absolutely lost when it comes to creating successful languages.
In under a decade Go swept up entire markets with a simple, down-to-earth language you can learn in a day and keep in your head. It optimized for the masses and the common cases and has absolutely eaten the lunch of these languages with lauded type systems that take several courses in formal logic to even get started with.
What we did to settle the debate was to ask an entry-level dev who was just starting if he was interested in writing two sample applications, one in each language, knowing nothing of either. Nothing complicated, but they touched enough points (HTTP endpoints, database interaction).
A full day later he was still trying to get the Clojure app working correctly.
He finished up the Go one in like an hour.
Since then we've brought new devs on with zero Go experience and they are up writing good code in a day. I can't imagine where we would be if we had gone down the Clojure route.
Fine. But it's beside your parent comment's point. The article claims that everyone thought Go's type system was an advance on the state of the art. This isn't even close to true.
It makes sense in retrospect of course: Python and Ruby are slow, dynamically typed languages, and any improvement in performance and typing is welcome. It doesn't mean that Go is somehow inherently better than other offerings. I still maintain that Java and C# are superior languages and ecosystems.
They aren't lost—they're just more interested in actually good ideas than in popularity. Popular languages must appeal to all kinds of programmers with varying backgrounds, so they are heavily constrained. Your argument is basically that mathematicians don't know what they're doing because their most advanced theories aren't used by mechanical engineers.
And nothing says that Go wouldn't have been more successful had they added those features. In the final analysis, the relationship between the success of a language and any intrinsic qualities is very hard to quantify. But IMO, success is not a good measure of whether or not the criticisms of Go were/are valid.
> Go made me truly realize how insufferable the PLT community is
Agreed, PLT folks can be a passionate bunch, but I am not sure they are any worse than any other online community.
> and why they are so absolutely lost when it comes to creating successful languages.
Depends on who you include in the PLT group:
- C# and TypeScript were designed by Anders Hejlsberg, arguably the most successful language designer
- Scala is also pretty successful and really tied to the PLT community
- Kotlin came from JetBrains, and Dart started with Gilad Bracha
Not to mention the wide range of features seen in most recent languages (async/await, reactive programming from Microsoft Research, etc.).
The fight between pragmatic, simple languages vs complex, expressive languages is not happening outside of PLT; we have proponents of both ways of thinking inside the community. Not everyone in PLT is pushing for overly complex theoretical approaches.
But more importantly, let's not forget the thousands of engineers quietly implementing the compilers, libraries, etc. that make Go, or any other language, possible.
Back to Go, my personal gripe wasn't the decisions they made, but the rationale given for those decisions.
Take the most famous example: not including generics. Designing a good generic type system is a very complicated task, and if the team had come out and said they didn't want generics because they didn't have the bandwidth or the know-how to do so, I wouldn't have cared. But the rationale given, describing generics as borderline useless, or somehow too hard for the average programmer to grasp, not only flies against basically 25 years of programming language history, but was just plain ridiculous.
Meanwhile, most code is still written in languages like Java, C and PHP.
That's no value judgement or anything, I don't think Java is an amazing language (neither do I think it's a terrible one), but it's not like Go has revolutionised anything. It's just become a new option among many.
TypeScript and Swift have shown that a better type system can appeal to the masses.
I'm not sure that this measure of success is really that valuable. It sounds more valuable for those who want butts in seats than it does for the long-term satisfaction and survivability of your codebase.
Kubernetes adoption seems to be going up but does it really add value for most that adopt it? I would say, no.
The Go team was populated by people who had created one of the most influential languages of all time: C. They created the new language based on theories about how to encourage good engineering practice - theories they were able to test through internal access to a very large and actively maintained codebase at Google, a codebase into which they had inserted several other languages that they had devised for various purposes.
I'm pretty sure that nobody in the external programming languages community had the same depth of experience in the practical use of programming languages that the Go team had. And so it was bizarre to them to see the Go team deliberately leave features out because of concerns about how those features are used in practice.
Few of the critics who "knew about programming languages" have created any language that ever made it into the top 10 programming languages in the world. It is therefore funny to me that they were complaining about exactly the kinds of choices that led to Go becoming popular.
I'll generally take the design choices of a team with 2 popular languages under their belt over academics in the ivory tower.
Just because you can build a microchip doesn't mean you can build a spaceship and vice versa.
So I'm not surprised that they, e.g., left out generics and said they did so because they didn't know a good or right way to add them to the language. At least they were honest, which I value a lot.
As to the success of Go that you mention. Well, let's be honest: it targets junior developers, or at least that was originally a major goal. It is backed by Google and is marketed.
There are just currently way more junior developers due to the demand and the development of the field.
However, you can already see that a lot of junior developers who started with Go are not so junior anymore, and now that they have gotten more experienced, they demand language features that make them more productive - like generics. And they will be added, and in the end Go will be a language that is not simple anymore; it will be the new Python.
Go is a very practical and pragmatic language, no doubt. It's one of its strengths. But it is not by any means an advanced high-level language in any sense that I would know of.
I often encounter some people in programming which I think lack humility. It's fine to question dogmas and the "big heads", but one has to look at the caliber of people you're up against and maybe give them the benefit of the doubt, if only a little bit.
I certainly wouldn't read a few PLT books and lambda-the-ultimate.org and then point to Ken Thompson and Rob Pike and say "your language has no <Type System thing>, you don't know what you're doing". These are not amateurs.
It's also doubtful they liked everything they put in, or disliked something they left out. Even Rust's creator didn't like some of the direction his language took.
If we were to ask other famous language designers, they would probably have fonder feelings towards Go than people in this thread, knowing all the hard design decisions one has to make to create an impactful language with an identity.
Golang works extremely well in practice, which is what I really care about.
Which is absolutely terrible from a programming language point of view; there were already better languages at its inception. Ken Thompson has a huge legacy in the CS world, but he is frankly not a good language designer at all.
Also, appeal to authority. If Go were so good, it should be able to be praised on its own merits.
It isn't as if C would have been a commercial success, had it come with a price tag.
Plan 9's and Inferno's commercial successes, and Limbo's adoption, are a clear example of how it would have gone instead.
The fact that it was even a point of concern shows how misguided the PL community is. Advancing the state of the art is not the goal, producing a tight, clean design is.
> Just because Sun couldn't figure out how to do it in the 90s doesn't mean that type inference wasn't mostly solved in the 70s.
Theory's only as useful as its implementation. If there's no sensible implementation of type inference out there, but then a new language comes out with it, is that not a significant improvement to the status quo, even if the theory behind it may be decades old?
Code autoformatters were not exactly dark PL magic when Go came out either. But somehow the impact gofmt has had on the PL space has been immense. Funny how that works.
Why should there be only one goal of "the PL community" (whatever that is)? Maybe you need people who advance the state of the art and others who make things ready for production.
> Theory's only as useful as its implementation. If there's no sensible implementation of type inference out there, but then a new language comes out with it, is that not a significant improvement to the status quo, even if the theory behind it may be decades old?
That would be a fine argument if Go had been the first major language with type inference, but even if you're willing to ignore ML languages (because you think they're too niche and/or weird), Scala is almost 10 years older than Go.
I don't see how having better type inference interferes with the "tight clean design"? If anything, I'd argue that not having it is a hindrance. Why do I need to write `x := make([]foo, 0)` or `var x []foo` when I could just write `x := []` and let the type system infer that it's `[]foo` from the fact that I keep `append`ing `foo`s to it?
Sure, but they failed to deliver that.
The simplicity of the language and the stdlib is shockingly well thought out -- things like io.Reader are so obvious and yet not part of many other languages. The language has made me a better programmer. And the cross-compilation story is chef's kiss.
I'm working on a cross-platform project where, in Go, I write code and it just builds. In Java, I fight with Gradle. In Swift, I fight the type system and the way everything's constantly deprecated.
It's not all perfect. I wish the generated binaries were smaller. I wish the protobuf library wasn't awful. And better Cgo/FFI would be nice.
But overall, I've never been so productive.
I've worked on large Go codebases that had to build with Bazel, and I had the same experience fighting it. It has nothing to do with the language.
> In Java, I fight with Gradle.
So gradle's issue, not Java's. See above.
That is absolutely not the case with C, C++, Java, or Python.
> So gradle's issue, not Java's. See above.
The issue exists with maven too, but _not_ with go build, which is OP's point.
So you deliberately added complexity instead of using Go's own build system and it didn't work out well and that's somehow a proof that Go's cross-compilation story isn't as great as people say?
A language is a product, and the programmers writing in it are its users. The same way any other product works.
Would you be okay with buying a Tesla without the capability to charge at supercharger stations? You can only charge at home. The supercharger network is not literally part of the car after all!
I’m curious what you don’t like about it? I haven’t used Go in anger, but I love protobufs, and it’s shocking that Go, of all languages, would have a substandard implementation.
In particular, oneofs are *so* awful to work with that I'm often tempted to use an Any instead. For example:
message Image {
oneof kind {
Bitmap bitmap = 1;
Vector vector = 2;
}
}
Should, in my opinion, lead to code like this:

    img := &Image{Kind: &Bitmap{}}

But the reality looks more like this:

    img := &Image{Kind: &Image_Bitmap{Bitmap: &Bitmap{}}}
My other main gripe is that the generated structs embed a mutex, and so can't be copied, compared [ergonomically], or passed by value. Sadly, both of these issues are explained away on the issue tracker.
(My use-case is primarily to share data structures across languages, so perhaps it's not totally aligned with what protobufs is trying to do. I just wish there was a better alternative.)
There's something so liberating about finding code samples or documentation from 5-10 years ago and it is still the correct way to solve a problem. The lack of churn in the ecosystem means you can learn the language and then focus on actually building stuff rather than focusing on the rat race of learning the latest hotness and restructuring your app constantly to account for dependencies that break things.
My first reaction when seeing Go is that it smelled of "old fart". It looked straightforward and unexciting. I'm an old fart. I like straightforward and unexciting. It tends to lead to code that I can still read 6 months from now.
I want to make stuff and not endure people boring me with this week's clever language hack they came up with that expresses in one unreadable line what could have been expressed clearly in 3 lines.
Yes, there are people who create a mess with abstraction. This happens in every language, in Java people create FactoryFactories, in Haskell people play type tetris, in Ruby, people abuse metaprogramming and in Go, I assume some people go overboard with code-gen.
But that said, I suspect many people, when they say, "obvious code", they mean "I can easily understand what every line does". Which is a fine goal, but how does that help me with a project that has 100ks of lines of code? I can't read every single line, and even if I could, I can't keep them all in my head at once. And all the while, every one of these lines could be mutating some shared state or express some logic that I don't understand the reason for.
We need ways of structuring large code bases. There are a ton of ways for doing so (including just writing really good and thorough documentation), but just writing "obvious" code doesn't cut it. Large, complex projects are not "obvious" by their very nature.
This is one of those subjective "you know it when you see it" qualities that are going to be a function of the code itself and how well it conforms to practices you are used to. I also think that we have a tendency to not notice as much when we read code and understand what it does without having to think about it too much.
And you can get lost in Go too. You don't need a lot of language features to help you complicate things.
For instance, I recently looked at some code that I had originally written, then someone else had "improved it". In my original version there was some minor duplication across half a dozen files - a deliberate tradeoff that enabled someone to read the code and understand what it did by looking in _one_ place. (This was code that runs only at startup and is executed once. It just needs to be clear and not clever).
The "improvement" involved defining a handful of new types which were then located in 3-4 different files across two new packages placed in a seemingly unrelated part of the source tree. A further layer of complexity was introduced through the use of init() functions to initialize things, which adds to the burden of figuring out which order things are going to happen in since init() functions sometimes have unfortunate pitfalls.
Yes, the code was now theoretically easier to maintain since it didn't repeat itself, but in practice: not really. Rather than look in one place to figure out what happens, you now had to visit a minimum of 3 files and 5 files in one case.
And remember those init() functions? Turns out that the new version was sensitive to which order they would get executed in. Which led to a hard-to-find bug. Now you could say that this is unrelated to complicating things by decomposing a lot of stuff into more types, but this isn't unusual when people get a bit obsessive about being clever.
> But that said, I suspect many people, when they say, "obvious code", they mean "I can easily understand what every line does". Which is a fine goal, but how does that help me with a project that has 100ks of lines of code?
These are related but different problems. At the micro-scale (what you can see in a screenful in your editor of choice), consistency in how you express yourself is key. In essence the opposite of the "there is more than one way to do it" mantra in Perl. This mantra is bad advice. You should ideally pick one way to express something and stick to it - unless there are compelling reasons to make an exception. (Don't be too afraid of making exceptions. There is a fine line between consistency and obsessiveness).
If you stick to this your brain can make better use of its pattern-matching machinery. You see a "shape" and you kind of know what is going on without actually reading every line of the code.
Also, how you name things is important. When I was writing a lot of Java you could ask me the name of classes, methods, variables, and I'd get it right 90% of the time without looking. Not because I'd remember, but because I had strict and consistent naming practices so I knew how I'd name something.
(I haven't succeeded in being as consistent when I write Go. Perhaps I can guess the name correctly 70% of the time. I'm not sure why).
Now let's look at "how does that help me with a project that has 100ks of lines of code".
At larger scales it is really about how you structure things so you can reason about large chunks of your code. Think in terms of layers and the APIs between them when you structure your code. Divide your code into layers and different functional domains. Describe them through clear APIs with doc comments that clearly document semantics, preconditions, postconditions etc. The trick is to try to identify things that can be structured as libraries or common abstractions and then pretend that those bits should be re-usable (without going overboard).
Say for instance you are implementing a server that speaks some protocol. You want to layer transport, protocol, and state management with clear APIs between each layer. Your business logic should deal with the implementation through an API that is as clear as possible. Put effort into refining these APIs. A good opportunity is when you are writing tests. You can often identify bad API design when you write tests. If something is awkward to test it'll be awkward to use.
Also, like you would do when you write a library, give careful thought to public vs private types and functions. Hide as much as possible to avoid layer violations and to present a tighter and narrower API to the world. (Remember APIs are promises you make. You want to make as few promises as possible).
This also has the benefit that it gets easier to extend. APIs between layers are opportunities for composability. Need to add support for new transports? If you have structured things properly you already have usable interface types and unit tests that can operate on those. Need different state handling? Perhaps you can do it in the form of a decorator, or you can write an entirely new implementation.
(Look at how a lot of Go code does this. For instance how a lot of libraries, including the HTTP library in the standard library, allows you to inject your own transport. This enables you to do things the original authors probably didn't think of. I have some really cool examples of this if anyone is interested)
Over time you will probably see a lot of parts of your software that can be structured in similar ways. This allows you to develop habits for how you structure your ideas. The real benefit comes when you can do this at project or team scale. When people have a set of shared practices for how you chop systems into functional domains, layer things and design the APIs that present the functionality to the system.
So in summary: you deal with 100kLOC projects by having an understandable high-level structure that makes it easy to navigate and understand how the parts fit together. When you navigate to a specific piece of code, your friends are consistency (express the same thing the same way) and well-documented interface and model types.
Years ago I came across a self-published book that taught me a lot about how important APIs are when building applications. The book was about how to write a web server (in Java). It started with Apache Tomcat (I think) and focused on the interface types.
Using the existing webserver as scaffolding it took the reader through the exercise of writing their own webserver from scratch, re-using the internal structure of an existing webserver. One part at a time.
The result was a webserver that shared none of the code with the original webserver, but had the same internal structure (same interface types). This also meant your new webserver could make use of bits and pieces from Tomcat if you wanted to. I found this approach to teaching brilliant because it taught several things at the same time: how Tomcat works, how to write your own webserver, and finally, the power of layering and having proper APIs between the different parts of your (large) applications.
I still think of the model types and the internal APIs of a given piece of software as the bone structure or a blueprint. You should be able to reimplement most of a well designed system by starting with the "bones" and then putting meat on them.
> I can't read every single line, and even if I could, I can't keep them all in my head at once.
Keeping 100kLOC in your head isn't useful. Nor is it possible for all but perhaps a handful of people on the planet. But if you are consistent and structured, you will know where you'd put a given piece of code and probably get there (open the right file) on the first attempt 70-80% of the time. I do. And I'm neither clever, nor do I have amazing memory that can hold 100kLOC. But I try to be consistent, and that pays off.
> And all the while, every one of these lines could be mutating some shared state or express some logic that I don't understand the reason for.
If you have 100kLOC of code where any line can mutate any state directly, you have two huge problems. One is the code base, the other is whoever designed the code base (you have to contain them so they won't do more damage). If you have gotten to that point and you have 100kLOC or more, you are really, really screwed.
I've turned down six figure gigs that involved working on codebases that were like that. It is that bad.
In Go, mutating shared state is bad practice. This is what you have channels for. Learn how to design using channels or even how to make use of immutability. There are legitimate situations where you need to mutate shared state, but try to avoid doing it if you can.
(I've written a lot of code in Go that would typically have depended on mutexes etc in C, C++ or Java, but which uses channels and has no mutexes in Go. There is an example of this at the back of the book "The Go Programming Language" by Donovan and Kernighan, though this book is getting a bit long in the tooth)
If you do have to manage access to shared state be aware that this is potentially very hard. Especially if you can't get by with single mutexes or single atomic operations. As soon as you need to do any form of hierarchical locking you have to ask yourself if you really, really want to put in the work to ensure it'll work correctly. The number of people who think they can manage this is a lot larger than the number of people who actually can. I always assume I'm in the former group so I avoid trying to implement complex hierarchical locking.
It's possible in any language and yet some languages' codebases are consistently worse than others ;)
If you create a culture of cleverness, implicitness and metaprogramming, that's what the programmers using your language will do. It's self-selection to an extent.
"I've suffered long from the Ruby ecosystem's mentality of 'look at what I can do!' of self-serving pointless DSL's and frameworks and solemnly swore to myself to stay away from cute languages that encourage bored devs to get 'creative'." [1]
"I worked at a Scala shop about 10 years ago. Everyone had their own preferred "dialect", kind of like C++, resulting in too much whining and complaining during code reviews. IMHO, the language is too complex." [2]
> And all the while, every one of these lines could be mutating some shared state
That's where the obvious code helps.
Let's circle back.
> I still don't know what people mean by "obvious" code.
The Zen of Python is a nice primer: [3]. A beautiful display of taste right there.
A few concrete examples:
- "Explicit is better than implicit."
Explicitly returning errors means we get to see every single point at which something could error out - explicit, as opposed to exceptions that could implicitly propagate from any line of code, with no way to tell.
Preferring pure functions - a pure function is a black box with a clearly drawn boundary line of input->output. Trivial to reason about in isolation.
No automatic type conversions.
No global state - any part of the code could change it.
No metaprogramming - you've learned Ruby but now some parts of the language have been changed to mean something completely different!
"The syntax has so many ways of doing things that it can be bewildering. As with Macro-based languages, you are always a little uncertain about what your code is really doing underneath." [4]
- "There should be one-- and preferably only one --obvious way to do it."
Uniform code. Iterating through an array always looks the same, so if the code you're looking at does it differently, you'll pay attention.
[1] https://news.ycombinator.com/item?id=13482459
[2] https://news.ycombinator.com/item?id=31219392
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." - Brian W. Kernighan
Back in the day when GCC 2.95 was a widely used version of GCC we had a codebase that was full of "clever" hacks to trick the compiler into generating the assembler code we wanted. A few of these hacks were used in tight inner loops that had a huge impact on performance. I can still remember the day someone removed what seemed to be a no-op in a piece of code, compiled it and pushed it to production - only to have everything grind to a halt and fall over because CPU use per request tripled.
Sure, it was a "clever" way to get the compiler to output what you wanted it to output, and it was common knowledge among some of the programmers on the project what that no-op'ish line would do - but not all.
The smart thing to do would have been to fix the compiler and upstream the fix (from our internal branch of the GCC toolchain). The "clever" solution was to just figure out a bunch of tricks to manipulate the compiler and call it a day.
In the context of this thread, "clever" is mostly taken to mean "not as straightforward and understandable as it can be".
Young me loved cleverness.
Older me knows how frustrating it is when something isn't as straightforward as it could be. Either because I have to figure out how something someone else wrote works or because I have to explain what my code does to people who have gotten confused.
When I write code other people can't understand I see that as a failure on my part. Because it is. Code isn't merely a mechanism to convey meaning to a compiler, it is a way to communicate with other human beings. Most of which aren't that interested in indulging my cleverness.
But it's a bit boring and I'll never use it for personal projects.
Two questions:
1) How do you like the module system we have today?
2) Can you expand on what you mean by boring, why it is important to you that a language not be boring, and give an example of a language that is not boring?
Just today I had a couple of blocks of code that were causing erratic issues. I wanted to see if it was the second one, so I quickly commented it out. This was just an exploratory development session, no need to comply with code quality guidelines. Still, the code failed to compile because now I had to worry about 8 lines of stuff that had become unused. The stubbornness about not adding an escape hatch for, again, exploratory intermediate development iterations is unnerving.
"But code should not leave unused stuff around". I agree. That's why after a hundred iterations, in the final compilation phase for production, this kind of errors-are-warnings flag would be disabled.
What's even more frustrating is that when you search for solutions, you come across two kinds of (pardon my language) completely brain-dead responses:
First, there are those who argue that unused variables/imports lead to bugs and worse performance in production, so they should always be fixed. But that's completely beside the point; I've never seen anyone argue that allowing unused variables is good for production. It's always been about facilitating the development process and debugging. Yes, I am aware there are now unused variables, but please just let me see what removing this part of the code does.
Secondly, people suggest using a dummy function like UNUSED or a blank variable _ to solve the problem. But again, these suggestions miss the mark entirely. Changing variable names or adding UNUSED calls to "disable" the rule is even worse than what we've been doing to temporarily "circumvent" the rule, which is simply commenting out the declarations, testing, and undoing afterwards. Not only does it involve more effort, but more crucially, you might actually forget to revert those changes and leave unused variables in.
Frankly, I believe this is just a bad design decision, and it seems like the Go team is stubbornly doubling down on this mistake due to ego.
(Sorry, I just have a very strong opinion on this topic, and I am deeply frustrated when the tools I am using think they know better than I do and are adamantly wrong.)
Yes, but once you allow that, you'll inevitably end up with unused variables in production code (warnings are useless). That's the core of the issue and why the Go team made the decision.
In my opinion, the real solution is to have two build modes: --devel and --release. This would allow for not bothering devs with anal checks during development while preventing substandard code in production.
Though the real advantage would come from the reduced pressure on compilation speed in --release mode which would make room for more optimization passes, resulting in faster production runtime speed and lower binary size.
The solution is so easy, too: just add a debug and a production profile. Enable your strong linter in prod, for all I care, but this is a must-have development tool.
    if false {
        ... stuff I don't want to run right now but the compiler still has to deal with it ...
    }

But of course it can get tiring and not very practical if the logic to disable is a bit more spread out (I'd say having to "if false" anything more than two paragraphs or blocks of code would already start to feel annoying).
And think of what the current strict rule has done: Go is the only(?!) language that has a consistently clean ecosystem. When's the last time you looked at Go code that had huge chunks commented out?
That said... it is annoying. The less important the script, and the faster you want to test something, the more annoying it gets.
Though personally, I dislike this paternalistic approach to handling devs.
Verbosity of error handling!
There should be a one-line shorthand for returning if the last return value is non-nil. Otherwise all your code is littered with:
if err != nil {
return err
}
and it makes it four times as long and way less readable as a result. This needless verbosity really reminds me of Java.
Also, go fmt is not opinionated enough! There really should be one way to line-break, and a max width. Right now you can't rely on it to magically format code as it "should be" and fire-and-forget while typing.
It's always surprised me how negative a reception checked exceptions had, since they provide the same forced handling (or explicit propagation) as (value, err) or Result<T, E>, but with an automatic stack trace and homogeneous handling across the ecosystem.
I imagine some of the disdain in Java specifically came with how unergonomic they are with lambdas. Either you don't allow them at all, like in most standard library functional interfaces, or you do, but now every caller has to handle a generic Exception. I guess what was really needed was being able to propagate the "check" generically, e.g.
<T, E> T higherOrder(Supplier<T throws E> fn) throws E {
return fn.call();
}
So a call site of higherOrder would only be checked as far as fn is.
I'm unsure if that's even possible (or whether other languages have done it), or if it leads to undecidability. I'm very rusty on PLT.
After that, I probably consider readability at 3am [1], defer statements, explicit error handling and fast compile time to be the most important.
[1] Readability of not just your code: being able to go-to-definition into stdlib and immediately understanding it without having to grok a million unrelated decorators/FactoryFactoryFactory/std::_Vector_iterator<std::_Vector_val<std::_Simple_types<<block>>> is incredible
- PNG: https://docs.oracle.com/en/java/javase/14/docs/api/java.desk...
- HMAC - https://docs.oracle.com/en/java/javase/14/docs/api/java.xml....
You need Jetty for HTTP/2.
What does still bother me is the lack of proper enum support. I remember when Java boosted their enum support and the way it impacted the quality of the code. Sure would love to see something similar in Go.
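For context, the closest thing Go has today is typed constants with iota. A sketch of both the idiom and its gaps (no exhaustiveness checking, and any integer converts to the type):

```go
// Go's conventional "enum": a named integer type plus iota constants.
package main

import "fmt"

type Color int

const (
	Red Color = iota
	Green
	Blue
)

func (c Color) String() string {
	switch c {
	case Red:
		return "Red"
	case Green:
		return "Green"
	case Blue:
		return "Blue"
	}
	return "unknown"
}

func main() {
	fmt.Println(Green)     // prints "Green" via the String method
	fmt.Println(Color(42)) // prints "unknown": nothing prevents invalid values
}
```

Compared to Java's enums (which are closed sets of real objects with methods and exhaustive switch support), this is the gap the parent is pointing at.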
I disagree with this strongly. Because of this, when you need to change one of these things to its opposite, it involves changing every use site as well. This has far-reaching implications for refactoring, wrapping external code when you really do need to expose its guts, etc. Any time you need to do this in a non-manual fashion, you're required to parse all of the code exactly perfectly (ASTs and such). Even Go itself has not figured out how to do this; rf is still experimental and incomplete: https://pkg.go.dev/rsc.io/rf.
Example use case: https://github.com/golang/go/issues/46792
I much prefer the non-viral public/private attributes other languages use.
So you make it private, in say Rust that's an ABI break, so you need a semver bump, but you aren't changing your code. Internally normal_operation does need to enunciate spools, not to mention the acrobatic use of it in complex_operation and fancy_coroutine but that's because it knows intimately what spools are and why it's enunciating them - it's an internal design element, not an API.
In Go, you have to rename it everywhere. Maybe your tooling helps with that. OK, but, not having to do it also helps with that and for everybody.
That, and reserving the verb `make`.
I'm one of those who cannot use a PL that has no generics... it kills the DX of algorithms and data structures.
Now it has generics, a soft real-time GC, and it might even get official arena allocation.
I /LOVED/ the matklad comment about error handling converging; indeed, it seems the PL community has evolved toward "any-error" + annotation-at-call-site.
sorry, what does this phrase mean?
As for the success, it's obviously the minimal set of language features, conformance to established paradigms, not being very broken and being backed by Google.
There's a ton of suboptimal choices in Go, but overall it can work for many applications.
In Go the type system doesn't force you to check for errors, the same way as languages with null pointers don't force you to check pointers before dereferencing them.
That's the real problem with errors and null in Go, not the verbosity (though that doesn't help)
result, _ := myfunc()
That being said, I most certainly prefer the approach that Rust takes to this problem.
a, err := f()
if err != nil {
return
}
a, err = f()
This will compile.
Its biggest flaw imo, which I don't think was mentioned in the article, is that Go did not learn from The Billion Dollar Mistake in Java: null references. You have zero protection against nil pointers, and this is likely not something that can be changed now without breaking backwards compatibility.
Although I guess those feed into each other; sum types would eliminate any need for null in the first place.
I agree it is not as safe as a language like Rust, however it was the right trade-off to make in my opinion.
The main protection you have against nil pointers are nil receivers, and knowing when to use reference semantics vs value semantics.
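On the nil-receiver point: methods with pointer receivers can be written to tolerate a nil receiver, because the receiver is just an ordinary (possibly nil) pointer argument. A sketch of the idiom:

```go
// A linked list whose Len method is deliberately safe to call on nil,
// so an empty list can simply be a nil *List.
package main

import "fmt"

type List struct {
	val  int
	next *List
}

func (l *List) Len() int {
	if l == nil { // nil receiver is fine; check it explicitly
		return 0
	}
	return 1 + l.next.Len()
}

func main() {
	var l *List          // nil: the empty list
	fmt.Println(l.Len()) // 0, no panic

	l = &List{val: 1, next: &List{val: 2}}
	fmt.Println(l.Len()) // 2
}
```

It's a convention rather than a type-system guarantee, which is the parent's point: nothing forces a method (or a plain dereference) to do this check.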
I do feel its binary size is large compared to C and C++, and multiple Go executables cannot share libraries as easily as C/C++ uses shared libs. When I have a few Go binaries they add up, and I do storage-constrained embedded development a lot.
On the desktop side, I really hope Go can have a GUI in its stdlib, something like what Flutter/Dart does: adding a Skia-style engine and letting me do cross-platform GUI. That would make Go go mainstream like wildfire.
https://pkg.go.dev/cmd/go#hdr-Build_modes
you can actually build with shared libraries :)
I think most people I've seen use Go only deploy a single application, or have it turned into a Docker container, so for them this is pointless; but just FYI.
I personally dislike static linking but I see why it was used so heavily with go.
Also, having a single binary is good in lots of cases, because then you don't have to install a runtime or separately install shared libraries.
Impeller is the Skia replacement; it's written entirely in C++ and supports all platforms.
It would be great if the Go team could work with them (both are in Google) and make Impeller a render engine for Go.
With that, no more bloated Electron, and no more Java/Swing or Qt. What a dream that day would be.
Before, if you needed to make a highly concurrent network app you had to get into asynchronous programming, which generally makes code look like shit (or at least slightly worse) and harder to debug.
With Go and goroutines, which IIRC take around 8 KB each to start, you can "just" spawn as many of them as there are connections and write your code as if it were serial. Add some half-decent concurrency primitives and it's pretty easy not to fuck up highly concurrent and highly parallel code.
As a Go programmer I always thought the generics complaint was kind of silly in practice — complicated code should be simplified and made more concrete, not more generic.
I’m glad generics were implemented, if only to silence the chorus of people who didn’t even use Go but whined about the lack of them. Their inclusion has simplified the stdlib and led to some cool new functions. Nevertheless I think people implementing them in their own projects is basically code smell and they are a symptom of poorly thought-out code rather than excellent code.
Anyway. This was a good initial post and a good follow-up. Go is my favorite language out there right now for its clarity, power, and ease-of-use. And with loop variable capture coming (the biggest remaining foot gun in the language in my experience) the language is only getting better.
BulkInsert(db *sql.DB, objs []interface{})
This unfortunately meant that any time you had a []Foo you had to allocate a new []interface{} and copy over the items. Now a function like that can look like:
BulkInsert[T any](db *sql.DB, objs []T)
And we're not wasting CPU cycles copying the []Foo slice. I'm struggling to see how that's code smell, or less excellent than using []interface{} or duplicating BulkInsert for every insertable type in our application.
Generics are definitely better for you. But I would say the overall pattern you're employing, bulk inserting different kinds of data structures with one function, is the problem. Of course, I don't know your code, so I'm sure you have a good reason for choosing what you did, but a BulkInsert of Any certainly made me raise my eyebrow.
I guess you have never written a library. They're extremely useful there; stuff like "a generic function that runs a channel through X workers doing f() on it" is now easily possible with full type safety.
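A sketch of exactly that kind of library helper, under post-1.18 generics (the name `fanOut` is made up; before generics this needed interface{} and runtime type assertions at every call site):

```go
// fanOut reads values from in, applies f in n worker goroutines, and
// streams the results on the returned channel, closing it when done.
package main

import (
	"fmt"
	"sync"
)

func fanOut[T, U any](in <-chan T, n int, f func(T) U) <-chan U {
	out := make(chan U)
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			for v := range in {
				out <- f(v)
			}
		}()
	}
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	in := make(chan int)
	go func() {
		for i := 1; i <= 5; i++ {
			in <- i
		}
		close(in)
	}()
	sum := 0
	for v := range fanOut(in, 3, func(x int) int { return x * x }) {
		sum += v
	}
	fmt.Println(sum) // 1+4+9+16+25 = 55
}
```

Note the results arrive in arbitrary order (workers race), which is fine for an order-insensitive reduction like this sum.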
> Nevertheless I think people implementing them in their own projects is basically code smell and they are a symptom of poorly thought-out code rather than excellent code.
You can say that about literally any feature used by the incompetent.
But overall yes, they are far more useful for writing libraries than actual applications
Who needs data structures right?
Now Golang is saved, with Generics it's actually an Awesome Incredible PL.
I can't think of a less confrontational way to ask this, but besides Go, which other languages have you used?
Because the above statement is so far removed from my experience and understanding of programming that I suspect we don't really use the same day-to-day tools/languages.
Like, I write a lot of PowerShell code, and their stuff still returns DBNull.Value when returning a null value from the DB, even though nullable value types were introduced almost 20 years ago.
> Do not confuse the notion of null in an object-oriented programming language with a DBNull object. In an object-oriented programming language, null means the absence of a reference to an object. DBNull represents an uninitialized variant or nonexistent database column.
For all the explanation though, I couldn't fathom what it's talking about.
[1]: https://learn.microsoft.com/en-us/dotnet/api/system.dbnull?v...
This DBNull.Value may be a problem for C#, but idiomatic Go has largely been untouched by the introduction of generics.
Go: amazing tooling/libs, only needs a great language :-)
But with a few tweaks, like proper enums and an Option/Result type (to avoid excessive err != nil), and maybe a compiler flag to force dealing with errors (instead of _-ing them), it would be much better. If I could wish for something: some native map/filter/reduce to avoid excessive for loops? :-)
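With generics, the map/filter/reduce wish is at least writable in user code today; nothing like this ships in the stdlib (the slices package stops short of Map), so these are hypothetical helpers:

```go
// Minimal generic Map/Filter/Reduce helpers over slices.
package main

import "fmt"

func Map[T, U any](s []T, f func(T) U) []U {
	out := make([]U, 0, len(s))
	for _, v := range s {
		out = append(out, f(v))
	}
	return out
}

func Filter[T any](s []T, keep func(T) bool) []T {
	out := make([]T, 0, len(s))
	for _, v := range s {
		if keep(v) {
			out = append(out, v)
		}
	}
	return out
}

func Reduce[T, U any](s []T, acc U, f func(U, T) U) U {
	for _, v := range s {
		acc = f(acc, v)
	}
	return acc
}

func main() {
	nums := []int{1, 2, 3, 4, 5}
	evens := Filter(nums, func(n int) bool { return n%2 == 0 })
	doubled := Map(evens, func(n int) int { return n * 2 })
	sum := Reduce(doubled, 0, func(a, n int) int { return a + n })
	fmt.Println(sum) // (2+4)*2 = 12
}
```

Whether chains of these beat a plain for loop in Go is, of course, exactly the stylistic argument the thread is having.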
I wish there was something like Go but with an actually good programming language attached.
I miss Hindley Milner type inference, ADTs, default immutability, sane error handling, pattern matching, and functional collection manipulation.
I'm not even mentioning the time it took to add generics to the language, which should've been there from the beginning; and what we got is a bad implementation.
Not saying that it HAS to have all of this, but even 1 or 2 things from that list would give Go much better ergonomics.
Go has no excuse, since it's a relatively new language and could've gotten some of those from the start.
I need some resources for evangelizing Go.
I do not use it, but...
I have a colleague who needs/wants to replace a PHP/Laravel mess. They are talking up Node.js.
I think Go would be a better choice.
This article is close to what I need, but is there anything better?
The server-side code they are looking to replace handles sensitive financial information, and I am very queasy about using Node.js in that domain.
- Simple
- Highly concurrent
- Impressive networking stdlibs