> All you need to do is take the address of the GPIO register then toggle the bit.
Yes, and that is unsafe. Rust makes you put that in an unsafe block, to encourage you to build safe interfaces. This is a good thing.
> No name mangling.
The alternative to name mangling is not having a module system, with all the fun name conflict issues that come with it. Rust made the right decision here.
> No error handlers to override.
C has no concept of a panic handler because what would be panics in C are undefined behavior. Undefined behavior is bad for safety and understandability of the code. Having to declare an error handler is a small price to pay.
> It does seem odd to me that so many of what would be compiler options in C are hard coded.
Because Rust is safe by default. That's one of its most important features. Being more like C in the ways you mention would mean compromising that principle.
Just because Rust can't easily do things that C can doesn't make it a bad language, but it does mean that, shockingly, there are some things that Rust isn't as well positioned for as other languages are.
Similarly, there are things that C is bad at, both at the high level and the low level. At the low level it hides too much of the CPU's features, so for some very low-level programming you need to drop to assembly. At the high level it has obvious deficiencies, many of which Rust addresses; but for interfacing with the actual hardware, C is tough to beat in terms of convenience.
That's a useful language feature for low-level programming. It tells the compiler that the address space is special - it can change without help from the program. Other features of device space can include that the write width can matter (some registers have to be written as a single byte, word, or double word) and some registers are read-only or write-only. The compiler needs to know this stuff, especially because Rust makes some strong assumptions about memory.
Rust probably needs to know about both device registers and memory shared with peripherals to operate at this level. For the blinky light program it doesn't matter; for the network stack it matters a lot.
I don't do much C/++ these days, but when I do (and it's a large codebase which is hard to debug), `rr` is invaluable. It works with Rust too.
I can't imagine the full C example for this is any prettier or easier to follow?
[ed: As for "no name mangling" - having such an easy way to turn it off when needed, and yet avoid collisions when you don't, seems pretty good to me. Perhaps having anything marked "extern" be unmangled by default would be better -- but it's not like one needs to bend over backwards to get a "mostly safe" bare metal program here.]
Don't get me wrong. Rust is an interesting language. The thing described in this post is well within its capability, i.e. IMO there isn't really anything worth bragging about. Such a trivial thing neither demonstrates the real potential of Rust nor answers important questions from a real-world engineering perspective.
I'm all for having better tools to write low level stuff. I have dabbled with Rust and the experience was eye-opening. I think Rust still has a lot to catch up on, though.
The question is what does Rust have to offer as your embedded program grows.
The HN upvotes say otherwise. :-)
But I guess had there been an HN equivalent in the assembly age, people would have gotten excited about C, too.
> The question is what does Rust have to offer as your embedded program grows.
Exactly.
We are aware of the good, bad and ugly bits of C. The industry has built extensive tooling around it. Rust has a long way to go. I certainly wish to see more pioneering projects from Rust.
I read it as a primer for writing simple Rust programs that can run baremetal on the Pi (I've cross compiled for a Pi-with-OS before but never tried this) and access GPIO.
If you're talking about the HN upvotes, HN just tends to upvote posts mentioning Rust a lot :)
Have you looked at https://zinc.rs/?
Does anyone know if the RPi GPIOs can be driven at around 80 kHz? I've seen reports that this is possible, but that the USB or video driver tends to lock the CPU for long times, messing with timings - but hopefully running on bare metal would take care of that.
Careful, this is strongly dependent upon the hardware, not Linux.
I seem to recall that people doing SWD programming can barely get the GPIOs to move faster than a couple of kHz.
I can tell you that, at least on the BeagleBone Black, you have to do some Linux driver voodoo to get better than a couple of KHz.
Edit: I stand corrected. Apparently the Broadcom chip in the RPi does much better than the TI one in the BeagleBone Black when driving baseline GPIOs.
http://www.erlang-factory.com/static/upload/media/1394631509...
There is a video for the presentation as well.
https://www.youtube.com/watch?v=OBGqVmzuDQg&list=UUKrD_GYN3i...
The trick is that the BeagleBone Black has two co-processors (PRUs) that can be used for realtime work:
http://elinux.org/Ti_AM33XX_PRUSSv2
It is a programmable 200-MHz, 32-bit processor with single-cycle I/O, so it can be used for toggling GPIO pins.
So GPIO performance won't be a problem, but they note that OS multitasking can cause issues:
"What is not evident from the snapshots, however, is that due to multitasking nature of Linux, the GPIO manipulation is constantly interrupted for short periods when the CPU is doing something else, such as receiving or sending data over network, writing log files, etc"
Running on bare metal should take care of that, so the only remaining problem is how to synchronize the signal generation on the RPi with the recording of data on the host computer. (Sigh, if only RPi had a USB3 port...)
My project involves generating two analogue signals to drive a mirror (a sine wave on the x-axis and a staircase pattern on the y-axis), plus a synchronized digital trigger signal for the camera. There are a number of configuration options that make this slightly more interesting than it looks at first (frequency, number of sine periods per staircase step, duty cycle, and number of triggers per sine period).
An implementation of this on an NI DAQ took the better part of a day, and an implementation on an FPGA took roughly two weeks (mostly spent familiarizing myself with the tools and communicating settings from the host). I actually think an implementation on the RPi would be simpler than either, including wiring up a simple DAQ.
Go is not going to replace Java in the future. For a language which cannot even lock dependencies to a version, people sure like to pretend it is the holy grail because Google made it.
- Nobody in their right mind would choose Rust or Javascript for a project where the other is more appropriate.
Though (AFAIK) Scala and Clojure both interop quite well with Java, and Scala/Clojure interop isn't terrible either, plus there's definitely a culture of reusing in Clojure/Scala all the stuff that the Java ecosystem already had. So this cluster isn't causing much fragmentation at all.
You have something of an argument that Go and the JVM languages clash for mindshare, but at that point the argument around really annoying fragmentation has kind of lost steam.
Controversial: Go (if someone ports the runtime to bare metal), embedded JVMs, embedded Swift (depending on how Apple drives it), .NET Native (if C# gets missing features from System C#)
Faded away: Algol, PL/I, CPL, Mesa, Modula-2, Modula-2+, Modula-3, Oberon, Oberon-2, Active Oberon, Component Pascal, Turbo Pascal, Forth, Sing#, System C#
It could probably be argued that Rust doesn't add much on top of D (as a "better" C++) -- but between the mind-share and the focus on "safe" (and boxing in unsafe) memory access, I think Rust is really interesting. Not sure how runtime-free D is coming along -- I believe there were some issues with the standard lib?
Ada was troubled by a confusing split between the FSF LGPL GNAT, which trailed AdaCore's GPL/commercial compiler releases -- leaving people a little confused about whether there was a good Free Ada compiler that could be used for commercial (or just non-GPL) development. (Yes, it was a real issue, just as one needs an LGPL libc for C development, etc.)
But as I understand it the GNAT compiler has matured to the point that one can now use modern Ada without having to worry about that. Unfortunately Ada probably lost a lot of potential developers due to the confusion/issue.
Has that changed?
Binary size isn't a problem, though. A lot of the threads talking about sizes aren't doing all the stuff you need to get truly small binaries. That's because, as this stuff is on nightly, it has much worse (or no) docs, so it's easy to miss things.
I'm working on a little kernel, the one linked in the first line of the post, and it's great.