I find this surprising, as GCD does insulate you from that low-level stuff. When you need to work with mutable data, just create a dispatch queue, and only ever access the data by dispatching a function to the queue. Both Swift and Objective-C have friendly syntax for anonymous functions that makes this lightweight and easy.
I think I would have fewer hang-ups if the author had just come out and said “I wanted to try using Go for multi-threaded code with Swift” instead of trying to make GCD sound so confusing.
Cocoa is incredibly fragile about the main thread, so you need to be super careful what runs on what queue. If you add KVO/bindings into the mix, it needs an extraordinary level of paranoia^Wdiligence.
This basically just boils down to "don't touch the UI off the main thread". There are some exceptions with CoreAnimation, but other than that the main thread checker will yell at you if you do something wrong.
That basically applies to all UI frameworks. At least I'm not aware of one that provides good support for access from another thread.
There is obviously a reason for it, which is that UI frameworks are incredibly stateful, and trying to manipulate lots of state from multiple threads at once rarely works out well.
GCD is super great and I use it a lot (...if you had to extend AsyncTask recently: my condolences), but it doesn't safeguard you from race conditions, dealing with locks and all that fun stuff.
Channels seem like a simple thread safe stream for the most part, which you can get with Rx.
I don't see how this technique is comparable to Electron. The article does not describe anything related to a cross-platform user interface, which is what Electron addresses. You can write non-UI logic that is cross-platform in half a dozen mainstream languages and another dozen less popular ones. That's not a big deal. It's the UI component that is harder to achieve, and that is what Electron offers.
All you need is an objective-c bridging header where you do an #import "name_of_header.h" for every header. After that, the headers are visible to all of your swift code. It's no different from mixing objective-c and swift, except here you're mixing your language of choice, compiled to callable C functions inside a static library.
To recap - drag the .h and .a files into your xcode project the same way you would .swift files. Add a BridgingHeader.h file and fill it with #import "name_of_header.h" statements. Lastly, the project needs to know you're using a bridging header: in the project target's Build Settings tab, under "Objective-C Bridging Header", set the value to the filename you chose for your bridging header.
This is not unique to calling Go in Swift btw - any language that can be called from C, can be called from Objective-C, and therefore Swift. One thing to be aware of is memory management - unless you're passing things by value (copying), making sense of when things can be safely deallocated across languages is non-trivial.
The approach is based on the same principle: cgo as a bridge between Go and a C library. The C library is built by the Swift package manager. Blog post on the Dev community with details:
https://m.youtube.com/watch?v=R0oaOohl5jk (in French, but the slides are in English)
Go, horrible as it is, doesn't lack cultists.
This comment is not about the language per se. It is about the current goals of the people with money and weight behind the language right now. I guess the regret will or won't come depending on whether those goals hold or change.
In our Perl code base, we have had so many issues with auto-vivification, lack of argument tracing (just pass around @_ everywhere!), callback hell in AnyEvent for concurrency, and more. Maybe if you use Moose everywhere you can get some form of sanity, but I doubt it. Engineers I fully respect have scratched their heads trying to dive into this Perl. What I can grant, however, is that it is able to do a lot of work (given enough machines!).
For the Go version, I know exact method signatures and variable types. Concurrency is first class. And just about anyone can read the code and figure out what is going on. We've onboarded new grads who can quickly put solid features into this already-large Go codebase. We are seeing a 20x improvement in throughput over one Perl code base (it requires a lot of waiting on remote servers we don't control), and 100x in another.
I can't imagine regretting the choice to write in Go for networked services running in the backend.
“The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.”
-- Rob Pike
From https://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Fr...
1) regex was treated like the primary way to do things - even when it wasn't necessarily called for - at the expense of readability (and Perl supported it so well)
2) sysadmins
The two combined together (and possibly the fact that it was the early days of commercial internet service) led to the idea that anything done in Perl was destined to look like "line noise" to actual SWEs.
/just how I saw it...
All that Go delivers today, on today's medium, is very similar to what Perl delivered yesteryear, on yesteryear's medium. And also like Perl, it was born from a bunch of old Unix systems engineers :)
Did I miss that part of history?