`Inductive False := .`
i.e., `False` has no constructors and hence is an empty type.
Anyway, this means that for any type `A`, you can construct a function of type `False -> A` because you just do this:
`fun (x : False) => match x with end.`
Since `False` has no constructors, a match statement on a value of type `False` has no cases to match on, and you're done. (Coq's type system requires a case for each constructor of the type of the thing being matched on.) This is why, if you assume something that is false, you can prove anything. :)
Isn't it really only there in case someone needs to "hook into" the drop functionality before the variable is dropped? Please correct me if I'm wrong.
EDIT: Minor editing to clarify meaning.
It's talking instead about std::mem::drop, a standard library function that drops a value before it would ordinarily go out of scope.
One cool thing about Drop (and some other cool stuff, like MaybeUninit) is that it makes things like allocating/freeing in place read just like any other Rust code. There may be some unsafe involved, but the syntax is consistent. Whereas in C++, seeing placement new and manually called destructors can raise eyebrows.
If I do the Rust equivalent of:

    def add1(x):
        return x + 1

    x = 1
    y = add1(x)
    z = add1(x)

then will x have been deallocated by the first call to add1, and will the second call to add1 fail? [You can ignore the fact that I'm using numbers, and substitute an object if that makes more sense in the context of allocating/deallocating memory in Rust.]
However, a few types, including integers, but also things like booleans and chars, implement a trait (which for the purposes of this discussion is like an interface, if you're not familiar with traits) called Copy that means that they should be implicitly copied rather than moved. This means that in the specific example you gave above, there would not be any error, since `x` would be copied implicitly. You can also implement Copy on your own types, but this is generally only supposed to be done on things that are relatively small due to the performance overhead of large copies. Instead, for larger types, you can implement Clone, which gives a `.clone` method that lets you explicitly copy the type while still having moves rather than copies be the default. Notably, the Copy trait can only be implemented on types that already implement Clone, so anything that is implicitly copied can also be explicitly copied as well.
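To make that concrete, here's a sketch of the Python example above translated to Rust (`add1` is the hypothetical function from the question). Since `i32` is `Copy`, both calls compile; the `String` lines show what happens with a non-Copy type:

```rust
fn add1(x: i32) -> i32 {
    x + 1
}

fn main() {
    let x = 1;
    let y = add1(x); // `i32` is `Copy`, so `x` is copied, not moved
    let z = add1(x); // fine: `x` is still usable
    assert_eq!(y, 2);
    assert_eq!(z, 2);

    // With a non-Copy type like String, a second use would be a move error:
    let s = String::from("abc");
    let t = s; // `s` is moved here
    // println!("{}", s); // error[E0382]: borrow of moved value: `s`
    let u = t.clone(); // `Clone` gives an explicit copy instead
    assert_eq!(t, u);
}
```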
Compare with C++, in particular types with deleted copy operations (e.g. unique_ptr<T>). In order to call a function that takes a unique_ptr by value as an argument, you must explicitly move the object into the function:

    void foo(unique_ptr<int> x) {
        ...
    }

    unique_ptr<int> x = ...;
    foo(move(x));
    foo(move(x));
Linters (e.g. clang-tidy) can be configured to complain about this, but it's completely valid C++ (because move leaves the object in an unspecified but valid state). In Rust, the argument will be automatically moved in the first call, and the second call will generate a compile-time error.

Also, and importantly: because move() only indicates that the value may be moved from (it's a fancy cast), that doesn't mean it will be moved from. The called function needs to take an rvalue reference (T&&) for a move to possibly happen. That is, if you had

    void foo(unique_ptr<int>&& x)

it may or may not move. Likewise, you can call move() on things which are not move-constructible (or pass the result to functions which don't care) with no ill effects (save to the reader).

IIRC what happens here is that since foo() takes a unique_ptr by value, the compiler will insert the construction of a temporary unique_ptr, and overload resolution will select the move constructor (since we've provided an rvalue reference, and assuming the wrapped type is move-constructible).
> move leaves the object in an unspecified but valid state
Also note that this is the general semantics, but specific types can define their exact behaviour here. In particular, unique_ptr specifies that a moved-from unique_ptr is "nulled" (equal to nullptr).
(I realize you, saagarjha, probably know this already -- this is more a note to other readers.)
But I remember the words "drop" and "dup" being used since the early days of linear logic too. I believe they come from Forth, where they do pretty much the same thing! [2]
[1] http://homepages.inf.ed.ac.uk/wadler/papers/linear/linear.ps
|_| ()
[1] https://twitter.com/myrrlyn/status/1156577337204465664

In which case there’s only one situation where I could see this being useful, and that’s when you are building a large object to replace an old one.
The semantics of

    foo = buildGiantObject();

in most languages is that foo exists until reassigned. When the object represents a nontrivial amount of memory, and you don’t have fallback behavior that keeps the old data, then you might see something like

    drop(foo);
    foo = buildGiantObject();

Most of the rest of the time it’s not worth the hassle.

It's not just a matter of memory use. References and mutable references form a sort of compile-time read-write mutex; you can't take a mutable reference without first dropping all other references. See https://stackoverflow.com/questions/50251487/what-are-non-le... for more.
NLL only affects values without destructors.
Now, this is actually quite a nasty object with implicit global side effects, which you should avoid in the first place; but for the mutex case I don't know of a better option. Maybe Rust has a better way to handle this?
    fn main() {
        let mut a = 1;
        let b = &mut a;
        *b = 2;
        println!("{}", a); // prints "2"
        // *b = 4; // If this line is uncommented, compile time error.
    }
Because b is a mutable reference to a, a cannot be accessed directly until b goes out of scope. In this sense, b goes out of scope the last time it's used. _However_, AFAIK, b isn't actually de-allocated until the end of the function.

Of course, it doesn't matter in this trivial case, because b is just some bytes in the current stack frame, so there's nothing to actually de-allocate. But if b were a complex type that _also_ had some memory to de-allocate, this wouldn't happen until the end of main(). But in this case, b's scope also lasts until the end of main, which is kind of like adding that last line back in...
This can be seen in the following example, where b has an explicit type:
    struct B<'a>(&'a mut i32);

    impl<'a> Drop for B<'a> {
        fn drop(&mut self) {
            // We'd still have a mutable reference to a here...
            // If B owned resources and needed to free them, this is where that would happen
        }
    }

    fn main() {
        let mut a = 1;
        let b = B(&mut a);
        *b.0 = 2;
        std::mem::drop(b); // Comment this line out, get compiler error
        println!("{}", a); // prints "2"
    }
In this example, without the std::mem::drop() line, the implementation of Drop (i.e., B's destructor, B::drop) would be implicitly called at the end of the function. But in that case, B::drop() would still hold a mutable reference to a, which makes the println call produce a "cannot borrow `a` as immutable because it is also borrowed as mutable" compile-time error.

In other words, this "going out of scope at last use" is really about Rust's lifetimes system, not memory allocation.
IMHO... this is one of the rough edges in rust's somewhat steep learning curve. Rust's lifetimes rules make the language kind of complicated, though getting memory safety in a systems programming language is worth the trade-off. There's a lot of syntactic sugar that makes things a LOT easier and less verbose in most cases, but the learning curve trade-off for _that_ is that, when you _do_ run into the more complex cases that the compiler can't figure out for you, it's easy to get lost, because there are a few extra puzzle pieces to fit together. Still way better than the foot-gun that is C, though. At least for me... YMMV, obviously.
    template<typename _Tp>
    constexpr typename std::remove_reference<_Tp>::type&&
    move(_Tp&& __t) noexcept
    { return static_cast<typename std::remove_reference<_Tp>::type&&>(__t); }

The compiler actually implicitly adds drop glue to all dropped variables!
It's a wonderful language, but there are still some PITAs. For example, you can't initialize some `const x: SomeStruct` with a function call. Also, "zero-cost abstraction" is likely the biggest bullshit I've ever heard: there is a lot of cost, and there's also a lot of waiting for the compiler if you're using cargo packages.
That said, I wouldn't rather use C/C++/Go/Reason/Ocaml/? - that is probably the love part.
BTW: I've recently stopped worrying about unsafe and it got a bit better.
So my message is probably:

- keep your deps shallow, don't be afraid to implement something yourself

- if you get pissed off, try again later (sometimes try it the rust way, sometimes just do it in an entirely different way)
"(...) there are two factors that make something a proper zero cost abstraction:
No global costs: A zero cost abstraction ought not to negatively impact the performance of programs that don’t use it. For example, it can’t require every program carry a heavy language runtime to benefit the only programs that use the feature.
Optimal performance: A zero cost abstraction ought to compile to the best implementation of the solution that someone would have written with the lower level primitives. It can’t introduce additional costs that could be avoided without the abstraction."
https://boats.gitlab.io/blog/post/zero-cost-abstractions/
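As a small (non-rigorous) illustration of the "optimal performance" point: an iterator chain and the hand-written loop it abstracts over compute the same thing, and with optimizations rustc generally compiles them to comparable machine code. The function names here are made up for the example:

```rust
// Two ways to compute the sum of squares of a slice: a high-level iterator
// chain and the lower-level loop a programmer might have written instead.
fn sum_of_squares_iter(v: &[i64]) -> i64 {
    v.iter().map(|x| x * x).sum()
}

fn sum_of_squares_loop(v: &[i64]) -> i64 {
    let mut total = 0;
    for &x in v {
        total += x * x;
    }
    total
}

fn main() {
    let v = [1, 2, 3, 4];
    // 1 + 4 + 9 + 16 = 30 either way.
    assert_eq!(sum_of_squares_iter(&v), 30);
    assert_eq!(sum_of_squares_loop(&v), 30);
}
```

The claim is only about runtime cost: the abstraction should boil away under optimization, not that it compiles faster.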
It's not about compile time...
You can if it's a `const fn`. The set of features you can use in `const` is small but growing.
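For instance, a minimal sketch that compiles on stable Rust (the `SomeStruct` and `make` names are just for illustration):

```rust
struct SomeStruct {
    n: u32,
}

// A `const fn` can be evaluated at compile time, so its result can
// initialize a `const` item.
const fn make(n: u32) -> SomeStruct {
    SomeStruct { n }
}

const X: SomeStruct = make(42);

fn main() {
    assert_eq!(X.n, 42);
}
```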
While someone else is right that "zero-cost" refers to runtime cost rather than compilation cost, dependencies are the biggest problem.
The program `spotifyd` takes over an hour to compile on my X200 laptop. This is, for reference, the same amount of time that the Linux kernel and GCC take to compile (actually, I think GCC takes less time...). Most of the compilation time is spent on the 300+ dependencies, which simply wrap (and in some places replicate) C libraries that I already have installed on my system!
Your comment was the first I've heard of spotifyd, so I downloaded it from github and ran "cargo build". I spent 3 minutes figuring out that I needed to install the "libasound2-dev" package, then I spent less than 2 minutes compiling. I watched the clock to be sure, but after only 2 minutes, Cargo produced a target/debug/spotifyd executable. Maybe you're referring to --release compilation?
I don't have the fastest PC, it's just an 8 year old desktop with a 6 core Intel i7 980x running Ubuntu 18.04. Rust compilation maxes out all the cores, which is really nice.
With the RustEnhanced package added to Sublime Text, editing Rust code is a very interactive experience with practically instant feedback. Running tests is fast too.
I wish I understood why some people are apparently having a very different experience. Maybe it's slow on some operating systems.
You can get a $200 CPU that will blast through a kernel compile in 3 minutes.
Swift is still missing decent async / await support, generators and promises. Some of this stuff can be written by library authors, but doing so fragments the ecosystem. It's also still harder than it should be to write & run Swift code on non-Mac platforms. And it's also not as fast as it could be. I've heard some reports of Swift programs spending 60% of their time incrementing and decrementing reference counts. Apparently optimizations are coming. I can't wait - it's my favorite of the current crop of new languages. I think it strikes a nice balance between being fancy and being easy to use. But it really needs some more love before it's quite ready for me to use it as a daily workhorse for http servers and games.
For example, use an atomic integer: https://play.rust-lang.org/?version=stable&mode=debug&editio...
You can also use types built on std::sync::Once and UnsafeCell, like once_cell::sync::Lazy<std::sync::RwLock<T>>. This will get even easier as we get `const fn` constructors for locks in the future.
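A minimal sketch of the atomic-integer approach mentioned above, using only the standard library (the `COUNTER`/`next_id` names are invented for the example):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A global counter that threads can mutate without `unsafe` or a lock.
static COUNTER: AtomicU64 = AtomicU64::new(0);

fn next_id() -> u64 {
    // fetch_add returns the previous value and increments atomically.
    COUNTER.fetch_add(1, Ordering::SeqCst)
}

fn main() {
    assert_eq!(next_id(), 0);
    assert_eq!(next_id(), 1);
    assert_eq!(COUNTER.load(Ordering::SeqCst), 2);
}
```

This works because the atomic types have `const fn` constructors, so they can initialize a `static` directly; for non-atomic data you need the Once/Lazy route above.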
Zero cost refers to runtime cost, not compilation cost. Zero cost abstraction is not bullshit.
Maybe just dig into some sources and see what macros are expanded to to see the overhead.
    let x = String::from("abc");
    std::mem::drop(&x);
    std::mem::drop(&x);
    std::mem::drop(&x);
    std::mem::drop(&x);

https://play.rust-lang.org/?version=stable&mode=debug&editio...
On my phone so I can’t triple check, but there’s no bound on T, so you can pass in any type.
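Right: std::mem::drop is generic over any T, so `drop(&x)` type-checks and drops the shared reference (a no-op), not the String. A small sketch of the difference:

```rust
fn main() {
    let x = String::from("abc");

    // These drop the *reference* `&x`, which frees nothing;
    // `x` itself is untouched.
    std::mem::drop(&x);
    std::mem::drop(&x);

    // `x` is still alive and usable:
    assert_eq!(x, "abc");

    // Passing `x` by value actually moves it into `drop`, ending its lifetime:
    std::mem::drop(x);
    // println!("{}", x); // error[E0382]: borrow of moved value: `x`
}
```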
The beauty of programming language design is not building the most complex edifice like Scala or making the language unacceptably crippled like Go - but giving the programmer the ability to represent complex ideas elegantly and safely. Rust really shines in that regard.
i'm fairly ignorant on the various differences but my general feeling was that Go is quite useful?
See "less is exponentially more".
The same way some electric bikes are restricted to a given speed to keep their user safe. Some people call it "crippled", while some other call it "simple and safe to use".
> I guess I shouldn't be promoting language-bashing, but +1 for calling Go an unacceptably crippled edifice :)
https://old.reddit.com/r/rust/comments/dh4rcz/my_favorite_ru...
Less common complaints are that it's missing: object-oriented features like inheritance, a configurable GC (as Java has), the ability to work with OS threads, C-compatible stacks for fast C interop, ownership semantics, type inference for arguments (e.g. as Haskell does), operator overloading, dependent types, etc.
The list of things in the first set can mostly be summed up as "go has a worse type-system than C++/rust/etc, something much closer to java 1 before generics, or c". Basically, the language is intentionally crippled because it intentionally ignores advances in type-theory that have been shown to allow expressing many things more safely.
For example, sum types and match statements make modifying code much safer. People will write switch/if-else-ladder code to do exactly the same sort of thing even without them, the code will just fail at runtime rather than compile-time when a new variant is added or one is not handled by accident.
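A small Rust sketch of that point (the `Shape` type is invented for the example): because `match` on an enum must be exhaustive, adding a new variant turns every unhandled `match` into a compile-time error instead of a latent runtime bug:

```rust
enum Shape {
    Circle { radius: f64 },
    Square { side: f64 },
}

fn area(s: &Shape) -> f64 {
    // If a new variant (say, Triangle) is added to Shape, this match stops
    // compiling until the new case is handled -- the failure happens at
    // compile time, not at runtime.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Square { side } => side * side,
    }
}

fn main() {
    assert_eq!(area(&Shape::Square { side: 3.0 }), 9.0);
}
```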
Lack of generics is a big part of the issue. But more generally the focus on "simple" code means that more sophisticated abstractions are actively eschewed, and personally I find this makes writing Go code quite frustrating.
Someone describes Go as "unacceptably crippled" while Uber engineering has 1500 microservices written in Go, making it their primary language.
Go was explicitly designed to be a poor language of this sort, as evidenced by the infamous Rob Pike quote:
> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
In other words: a language that makes programmers fungible. And how did they accomplish this? By omitting features and going for barely the lowest common denominator of the languages listed, disregarding any reasons other languages may have for their more ‘advanced’ features like generics. Hence, crippled.
(Frankly, I object more to describing Scala as ‘the most complex edifice’.)
I also find Go-the-language unacceptably crippled. Unfortunately, Go-the-runtime is the best runtime for microservice-based applications if you want a GC, especially if you want to run the system on limited hardware.
All the other options are either too bloated to comfortably spawn in large numbers (Java, C#, Node, Python, Ruby take too long to start, use too much RAM, and/or give you too large containers), or are too new and unproven (Nim).
So, we chose to use the inferior language and live with a small hit to productivity (language choice has little impact on productivity after the learning stage anyway) for the superior runtime system.
I just spit mezcal on a stranger.