And with all the help the compiler now provides (even before Swift 5), it's becoming really hard to shoot yourself in the foot.
Eventually I just gave up and added an extra 'parent:' argument to an interface, for the one place it was actually needed. It's a bit more awkward than just keeping parent references in the tree nodes, but not too bad.
With multiple queues or threads, things are even worse. I wrap a lot of my GCD code in extra locks, which by my reading of the documentation shouldn't be necessary; without them, it occasionally crashes with strange memory errors that are impossible to track down.
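One common alternative to sprinkling ad-hoc locks over GCD code is to funnel all access to the shared state through a single serial queue. This is only a sketch of that pattern; the names (`Cache`, `cache.sync`) are illustrative, not from the comment above:

```swift
import Dispatch
import Foundation

// Sketch: serialize all mutation of shared state through one serial
// DispatchQueue instead of pairing GCD with separate locks.
final class Cache {
    private var storage: [String: Int] = [:]
    // DispatchQueue is serial by default, so work items run one at a time.
    private let syncQueue = DispatchQueue(label: "cache.sync")

    func set(_ value: Int, for key: String) {
        syncQueue.async { self.storage[key] = value }
    }

    func get(_ key: String) -> Int? {
        syncQueue.sync { storage[key] }
    }
}

let cache = Cache()
cache.set(42, for: "answer")
// The serial queue guarantees the async set runs before this sync get.
print(cache.get("answer") ?? -1)
```

Because the queue is serial and FIFO, the `sync` read is guaranteed to run after the earlier `async` write, with no explicit lock in sight.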
I felt like I wasted a few hours the other day tracking down my issue and then coming up with an alternative solution, when in any other modern language I could have just used a normal reference and counted on the GC to clean up the cycles when I was done.
Backwards compatibility with Objective-C obviously has tremendous value to Apple, and ARC is smaller and faster than tracing GC, but I feel like I'm paying for it over and over. In most HLLs, once I get past the low-level parts, I'm using a language specifically designed for my task, and I never have to touch the low-level parts again. Swift feels more like fancy C, in that I don't think I'll ever be able to stop thinking about subtle memory management issues.
class ServiceLayer {
    // ...
    private var task: URLSessionDataTask?

    func foo(url: URL) {
        task = URLSession.shared.dataTask(with: url) { data, response, error in
            let result = data // process data here
            DispatchQueue.main.async { [weak self] in
                self?.handleResult(result)
            }
        }
        task?.resume()
    }

    deinit {
        task?.cancel()
    }
}
[1] http://marksands.github.io/2018/05/15/an-exhaustive-look-at-...

If you regularly code C++, the "fuck, yes!" moment is there when you learn Swift.
Either way, to me Swift feels much closer to C++ than to Objective-C in spirit. I don't think it's a coincidence that there are at least two C++ committee veterans among its contributors :)
I don't understand why this is an example of an unsafe operation. Wouldn't clearly defined behavior of closures clarify the "3 or 4" question?
I believe it's possible to specify this, and that's basically the behavior we had before SE-0176 was implemented. The problem is that it was slower, for a dubious benefit (the semantics are obscure and non-obvious), so it was decided that it's better to disallow the pattern and gain clear behavior and better optimization opportunities, at the cost of removing a somewhat uncommon pattern that is not all that hard to rewrite for clarity.
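For concreteness, here is a hedged sketch of the kind of code this rules out: a closure capturing an `inout` parameter. Under current Swift this is a compile-time error (the closure cannot capture the `inout` parameter), whereas with the pre-SE-0176 semantics the "3 or 4" question turned on obscure capture rules:

```swift
// Illustrative only — this deliberately does NOT compile in current Swift.
func increment(_ x: inout Int) {
    let bump = { x += 1 }   // error: closure cannot capture inout parameter
    x = 3
    bump()
    print(x)                // the old "3 or 4?" question
}
```

Disallowing the capture outright means the question simply never arises.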
Isn't this kind of arguable? The benefit is that it avoids the need to make unnecessary copies in some cases, as is basically acknowledged in the article:
> The exclusivity violation can be avoided by copying any values that need to be available within the closure
Right? And in addition to the local cost of the extra copy, there's also the more ubiquitous cost of these run-time checks. Yes, there's the potential benefit of better optimization due to the non-aliasing guarantee, but I think it's far from clear that it's an overall performance win.
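To make the trade-off concrete, here is a minimal sketch of the copy-based rewrite the article describes. The names (`modifyTwice`, `snapshot`) are hypothetical; the trapping variant is left commented out because it would crash at runtime:

```swift
var count = 0

func modifyTwice(_ f: () -> Void, _ x: inout Int) {
    f()
    x += 1
}

// Overlapping access: the closure touches `count` while `count` is also
// passed inout — the dynamic exclusivity checker traps this at runtime.
// modifyTwice({ count += 1 }, &count)   // "Simultaneous accesses" trap

// The fix from the article: copy the value the closure needs up front.
let snapshot = count
modifyTwice({ print(snapshot) }, &count)
print(count)
```

The copy (`snapshot`) is exactly the "extra copy" cost being debated: it avoids the violation, but it is work the pre-SE-0176 code didn't have to do.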
While I think it's reasonable to adopt a universal "exclusivity of mutable references" policy in order to achieve memory safety and address a fear of a (vaguely-defined) notion of "mutable state and action at a distance" (referred to in the article), particularly for a language like Swift, I think it would be improper to dismiss the associated costs, or even to imply that the costs are well understood at this point. Or to imply that this policy is, at this point, known to be an optimal solution for achieving memory (or any other kind of code) safety.