Wouldn't that make a whole class of efficient algorithms impossible though? Like let's say you have an std::list<T> and you want a sorted "view" of the elements. In C++ you'd create an array of pointers and then sort it, and after that you can just modify whatever each slot points to. In Rust you... can't do that because they'd all need to be read-only? I don't imagine you can turn the read-only to read-write on a whim (otherwise if you do this with two of them how the hell would the compiler figure out if two of them point to the same slot?) so the whole thing is just impossible?
Here's one way to think of it: there's the kind of code that Rust "wants" you to write, and then there's the kind of code that it allows you to write.
Rust "wants":
- Lots of immutable data, with relatively sparing use of mutability.
- Clear ownership and borrowing.
Rust allows:
- Pretty much anything you could write in C, with the possible exception of bitfields in structs. You wanna directly access the pic8259 chip from kernel space? Have fun: https://github.com/emk/toyos-rs/blob/fdc5fb8cc8152a63d1b6c85...
The question is, is it worth exercising that full power for "ordinary" Rust code? To use your example, if you have a data structure D and a view V, can you reach "through" V and mutate the underlying D? Of course you can do this, if you really need to. You have lots of choices, including (for example) cells, locks, tricky lifetime annotations and/or unsafe code.
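As a concrete sketch of the "cells" option: if the slots of the underlying container are wrapped in `Rc<RefCell<...>>`, a view can share ownership of them, and mutation through the view is visible in the original. (The names here are made up for illustration; the point is that borrow checking moves from compile time to runtime.)

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A "view" that shares ownership of the underlying slots with the
// original container. Each slot is independently borrow-checked at runtime.
fn sorted_view(data: &[Rc<RefCell<i32>>]) -> Vec<Rc<RefCell<i32>>> {
    let mut view: Vec<Rc<RefCell<i32>>> = data.to_vec();
    view.sort_by_key(|slot| *slot.borrow());
    view
}

fn main() {
    let data: Vec<Rc<RefCell<i32>>> = vec![3, 1, 2]
        .into_iter()
        .map(|n| Rc::new(RefCell::new(n)))
        .collect();
    let view = sorted_view(&data);
    // Mutating through the view is visible in the original container:
    // view[0] is the slot holding 1, which is data[1].
    *view[0].borrow_mut() = 10;
    assert_eq!(*data[1].borrow(), 10);
}
```

The cost is a runtime check (and a panic if you ever hold two conflicting borrows of the same slot), which is exactly the "hassle" being weighed here.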
But in practice, it's usually not worth the hassle. What I do in a case like this is step back and ask myself, "How would I solve this problem in a functional language like Clojure, ML or Haskell?" Oftentimes, that will give me a nice, simple solution that Rust likes. But sometimes I actually do need to do things the hard way.
Part of being happy in Rust is learning how to minimize the use of mutable data. For example, if you're writing a video game and you need to update your "world state", do you really need to mutate the existing state? Or can you write a function that takes the old world state and the player's input, and uses them to compute a brand new world state? The latter actually eliminates all kinds of subtle bugs, and it can be done efficiently.
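A minimal sketch of that style, with made-up `WorldState` and `PlayerInput` types: the update is a pure function from old state plus input to a brand new state, and the old state is never touched.

```rust
// Hypothetical types for illustration only.
#[derive(Clone, Debug, PartialEq)]
struct WorldState {
    player_x: i32,
    score: u32,
}

enum PlayerInput {
    MoveLeft,
    MoveRight,
}

// Pure update: takes the old state and the player's input,
// returns a brand new state without mutating the old one.
fn step(old: &WorldState, input: &PlayerInput) -> WorldState {
    let mut next = old.clone();
    match input {
        PlayerInput::MoveLeft => next.player_x -= 1,
        PlayerInput::MoveRight => next.player_x += 1,
    }
    next.score += 1;
    next
}

fn main() {
    let s0 = WorldState { player_x: 0, score: 0 };
    let s1 = step(&s0, &PlayerInput::MoveRight);
    assert_eq!(s0.player_x, 0); // old state is untouched
    assert_eq!(s1.player_x, 1);
}
```

In practice you'd avoid cloning large state wholesale (persistent data structures or selective copying help), but the borrow checker is entirely happy with this shape.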
let mut vec_of_refs: Vec<&mut T> = ll.iter_mut().collect();
vec_of_refs.sort_unstable();
The downside is that you can't access the underlying LinkedList while this vector exists. Another option, with other container types, is to keep a vector of indices instead, though that is extremely inefficient with a LinkedList. A popular approach is to have both views be indices into an underlying vector where the actual, mutable data is stored. If none of that fits your situation, it's probably time to use `unsafe` to build an appropriate data structure.
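The indices-into-a-vector approach can be sketched like this: the data lives in one `Vec`, and the sorted "view" is just a `Vec<usize>` of positions into it, so every mutation still goes through a single `&mut` borrow of the underlying vector.

```rust
// The sorted "view" is a vector of indices into the backing Vec.
fn sorted_indices(data: &[i32]) -> Vec<usize> {
    let mut idx: Vec<usize> = (0..data.len()).collect();
    idx.sort_unstable_by_key(|&i| data[i]);
    idx
}

fn main() {
    let mut data = vec![30, 10, 20];
    let view = sorted_indices(&data);
    assert_eq!(view, vec![1, 2, 0]);
    // Mutation goes through an index, so there is only ever one
    // mutable borrow of the underlying vector at a time.
    data[view[0]] += 5; // bumps the smallest element (10 -> 15)
    assert_eq!(data, vec![30, 15, 20]);
}
```

The view goes stale if you reorder or remove elements, which is the usual trade-off of index-based views in any language.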
Correct. What it does is prevent data races; that is per se nothing new, and was done as early as the 70s [1]. There have been various and sundry approaches to the same problem over the years (the 90s and early aughts produced a lot of research in this area).
In a way, it is frustrating that so few programming languages offer prevention of data races, so I'm grateful that Rust pushes that; but in another way, it is understandable, because all such mechanisms restrict what you can do, often in undesirable ways. Google "benign data races", for example: data races can be purposely exploited for more efficient implementations.
The other problem is that data races are really the easy part to solve about concurrency. The hard parts are general race conditions, non-determinism/causality, liveness properties (such as freedom from deadlocks and starvation), and performance.
Performance is one of the trickier aspects: in a shared memory system, performance is primarily a matter of reducing contention. But virtually any system that relies on some form of mutual exclusion to prevent data races introduces contention, and you have a constant tug of war between the two concerns.
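One standard way out of that tug of war, sketched here as an assumption rather than a universal fix, is to shard the shared state: give each thread its own counter and combine them only at the end, so threads never contend on a single location.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Sharded counting: each thread increments only its own shard,
// so there is no contention on a single shared counter.
fn sharded_count(threads: usize, per_thread: u64) -> u64 {
    let shards: Vec<Arc<AtomicU64>> =
        (0..threads).map(|_| Arc::new(AtomicU64::new(0))).collect();
    let handles: Vec<_> = shards
        .iter()
        .map(|shard| {
            let shard = Arc::clone(shard);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    shard.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // Combine the shards once, after all threads are done.
    shards.iter().map(|s| s.load(Ordering::Relaxed)).sum()
}

fn main() {
    assert_eq!(sharded_count(4, 1000), 4000);
}
```

Note this only restructures the contention problem; it doesn't make the cross-cutting nature of performance go away, it just moves the serialization point to the final sum.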
Particularly frustrating is that all of these issues are what software engineers call "cross-cutting concerns" [2], i.e. things that cannot easily be modularized. For example, performance may suffer from a serialization bottleneck in a module's implementation where you cannot avoid that bottleneck without exposing the module's internals.
https://www.eiffel.org/doc/solutions/Concurrent%20programmin...
http://se.ethz.ch/~meyer/publications/acm/eiffel_sigplan.pdf
http://www.eiffel.com/developers/design_by_contract_in_detai...
Since they have a strong QA focus, they also have things like test generation from contracts specifying intended behavior. So, safe concurrency was just one benefit of the work that started in the 1980s. Rust takes things further with a simpler and more flexible approach that supports quite a few different styles of concurrency. That's on top of the temporal safety inspired by the Cyclone language for safe, C-like programming.
Rust's real success is that it's the first of the safe systems languages to take off with a massive ecosystem, thanks to great community efforts. That's probably what Ada and Eiffel needed on top of earlier FOSS implementations.
Some stuff is impossible without using 'unsafe', although less than you'd think at first glance.
At first glance you'd think, "If you have to use unsafe, why bother using Rust?" But it's actually awesome: if only 0.01% of your code is marked `unsafe`, you have the bandwidth to audit that part of the code very closely.
You can only do this in an `unsafe` block. If you get a bug caused by aliased mutation, then you know exactly where it comes from.
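A tiny sketch of what that localization looks like: you can freely *create* two raw pointers to the same slot, but every dereference has to sit inside an `unsafe` block, so that block is exactly where you look when an aliased-mutation bug shows up.

```rust
fn main() {
    let mut x: i32 = 1;
    // Two aliasing raw pointers to the same slot; creating them is safe.
    let p1 = std::ptr::addr_of_mut!(x);
    let p2 = std::ptr::addr_of_mut!(x);
    unsafe {
        // Every write through them must be inside `unsafe`,
        // so aliasing bugs are confined to blocks like this one.
        *p1 += 10;
        *p2 += 100;
    }
    assert_eq!(x, 111);
}
```

Outside that block, the usual compile-time aliasing rules still apply in full.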
Or, as Veedrac mentions, you can do it without any tricks if you can give up accessing the list itself while you have the array around.