I don't think Swift's runtime exclusivity checks give you any of that.
There are some cases where dynamic checks are required (e.g. with reference counting/shared ownership: class in Swift, Rc/Arc in Rust). Swift will automatically insert checks in these cases, whereas they have to be manually written into Rust code (via tools like RefCell). Swift and the APIs inherited from Obj-C use a lot more reference counting than Rust does by default, plus it's more difficult for the programmer to reason about the checks (e.g. to be able to write code that avoids them, to optimise) when they're implicit.
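To make the Rust side concrete, here's a minimal sketch of those explicit dynamic checks: RefCell enforces the same exclusivity rule as the compiler, but at runtime, and the check is visible in the code (using try_borrow_mut here so the conflict is reported instead of panicking).

```rust
use std::cell::RefCell;

fn main() {
    let data = RefCell::new(vec![1, 2, 3]);

    // First mutable borrow succeeds; a runtime borrow flag is set.
    let first = data.borrow_mut();

    // A second simultaneous mutable borrow violates exclusivity;
    // try_borrow_mut detects the conflict at runtime.
    assert!(data.try_borrow_mut().is_err());

    drop(first); // release the first borrow

    // With the first borrow gone, borrowing succeeds again.
    assert!(data.try_borrow_mut().is_ok());
}
```

The point is that every one of these checks is spelled out at the call site, which is what makes them easier to reason about (and to avoid) than Swift's implicit ones.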
In summary, Rust and Swift have essentially the same rules, but Rust requires manual code to break the default rules, whereas the Swift compiler will implicitly insert those necessary checks (in some cases).
In Rust, an alternative is to wrap individual fields in RefCell/Mutex, but that results in uglier syntax (you end up writing RefCell/Mutex and .borrow()/.borrow_mut() everywhere) and adds overhead, especially in the Mutex case (since each Mutex has an associated heap allocation). There are alternatives, like Cell and Atomic*, that avoid the overhead, but have worse usability problems. I've long thought Rust has room for improvement here...
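A small sketch of the trade-off (the Stats type and its fields are hypothetical, just for illustration): RefCell is flexible but pays a borrow check at every access, while Cell avoids the check at the cost of whole-value get/set semantics.

```rust
use std::cell::{Cell, RefCell};

// Each field needing interior mutability must be wrapped individually.
struct Stats {
    hits: Cell<u64>,             // Cell: no runtime borrow flag, but you
                                 // can only get/set whole values, never
                                 // hold a reference into the contents
    names: RefCell<Vec<String>>, // RefCell: references allowed, but every
                                 // access pays a runtime borrow check
}

fn main() {
    let stats = Stats {
        hits: Cell::new(0),
        names: RefCell::new(Vec::new()),
    };

    // Cell avoids the check but forces copy-out/copy-in style.
    stats.hits.set(stats.hits.get() + 1);

    // RefCell requires .borrow_mut() at every mutation site.
    stats.names.borrow_mut().push("a".to_string());

    assert_eq!(stats.hits.get(), 1);
    assert_eq!(stats.names.borrow().len(), 1);
}
```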
On the other side, as discussed recently at CCC regarding writing drivers in safe languages, Swift's reference-counting form of GC generates even less performant code than the tracing GCs of Go or .NET, so there is room for improvement there as well.
It is quite bad for shared data, because updating the reference count requires atomic operations (or a lock), and the resulting cross-core traffic on the count's cache line trashes the cache; both are bad ideas on today's modern hardware architectures.
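A minimal sketch of that cost in Rust terms: every Arc clone and drop is an atomic read-modify-write on a count shared by all threads, so under contention the count's cache line bounces between cores (the loop here only illustrates the mechanism, it is not a benchmark).

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let shared = Arc::new(vec![0u8; 16]);

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let local = Arc::clone(&shared);
            thread::spawn(move || {
                // Each clone atomically increments the shared strong
                // count; each drop atomically decrements it. All four
                // threads are hammering the same counter word.
                for _ in 0..1000 {
                    let tmp = Arc::clone(&local);
                    drop(tmp);
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // All temporary clones have been dropped: only the original is left.
    assert_eq!(Arc::strong_count(&shared), 1);
}
```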
Also, contrary to common belief, reference counting has stop-the-world issues of its own: releasing a graph-based data structure can trigger a cascade of counts reaching zero, producing similarly long pauses.
Coupled with a naive implementation of destructors, this can even cause stack overflows, due to the nested destructor calls as the data is released.
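The same hazard exists in Rust, which makes for a convenient sketch: the compiler-generated drop of a long Box-linked list recurses once per node, so long lists can blow the stack, and the standard fix is exactly the "non-recursive destruction calls" optimization mentioned below, written by hand.

```rust
// A singly linked list; the derived (recursive) drop would call
// Drop once per node, overflowing the stack for long lists.
struct Node {
    _value: u64,
    next: Option<Box<Node>>,
}

struct List {
    head: Option<Box<Node>>,
}

// Non-recursive destruction: unlink nodes one at a time in a loop,
// so each Box is freed with an empty `next` and no nested calls.
impl Drop for List {
    fn drop(&mut self) {
        let mut cur = self.head.take();
        while let Some(mut node) = cur {
            cur = node.next.take(); // detach before `node` is dropped
        }
    }
}

fn main() {
    let mut list = List { head: None };
    // Long enough that the derived recursive drop would typically
    // overflow the stack in a debug build.
    for i in 0..1_000_000 {
        list.head = Some(Box::new(Node { _value: i, next: list.head.take() }));
    }
    drop(list); // iterative Drop completes without deep recursion
}
```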
So once you start adding optimizations for delayed destruction, non-recursive destructor calls, lock-free counting and a cycle collector, you end up with machinery similar to a tracing GC anyway.
Finally, what many seem to forget: just because a language has a tracing GC, it doesn't mean that every single memory allocation has to go through the GC.
When a programming language additionally offers support for value types, stack allocation, static allocation in the global data segment, and manual allocation in unsafe code, it is possible to enjoy the productivity of a tracing GC while still having the tooling to optimize memory usage and responsiveness when required.
Having said all of this, RC makes sense in Swift because it simplifies interoperability with the Objective-C runtime. If Swift had a tracing GC, it would need something similar to what .NET has to deal with COM (see Runtime Callable Wrapper).