As others have said, it does seem like a misconfiguration (perhaps in the defaults shipped by their distribution) that the correct arch is not picked by default when building on the Raspberry Pi B+ itself.
IIRC the original Pi used leftover chips from a TV box, which is the kind of product that IME never ships more compute than they have to, for price reasons.
Raspberry Pis actually boot on a really fringe processor called the VideoCore. Arguably the GPU bootstraps the CPU, which makes my brain hurt.
ARM keeps releasing newer slow cores that support the latest instructions; for example the Cortex-A5 was available and the RPi 1 really should have used that.
> I started trying to take binaries from my "build host" (a much faster Pi 4B) to run them on this original beast. It throws an illegal instruction.
This is like building something with the latest MSVC on Windows 11 and trying to run the .EXE on an old PC running Windows XP. :)
I suspect the entire Pi distro she's running on the Pi 4B itself won't run on the B+, since all of it is probably compiled the same way, possibly down to the kernel.
Edit: oddly, after searching LLVM bugs, I found a bug that sounds pretty much exactly like this issue... but it's from 2012 and is closed (although the final couple of comments make it sound like maybe it wasn't actually fixed--note: I only skimmed the comments and I probably misunderstood):
https://github.com/llvm/llvm-project/issues/13989
Edit again: I forgot about the comment at the end of the article that clarifies that explicitly passing the target results in a working program. In that case, it sounds like some sort of configuration bug--I would assume (but am not certain) that the default target would be the current processor, at least on Unix. That bug I linked was probably about producing incorrect code even when the target was set correctly which, thankfully, isn't happening today.
The weird clang behavior on a fresh B+ install is more puzzling, unless there's some user error somewhere.
I find it generally hard to strike a good balance between backwards compatibility and use of modern CPU features in newer AArch64 generations (https://en.wikipedia.org/wiki/AArch64). We found that surprisingly many institutions on a shoestring budget (universities in emerging countries) and hobbyists can't afford to upgrade their hardware.
On a technical note, what I found quite cumbersome is that the CPU flags in /proc/cpuinfo don't always correspond to the flags passed as -march= to the compiler, e.g. "lrcpc" vs. "rcpc". To make all of this work, one really needs to maintain two sets of flags.
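As a sketch of what that dual bookkeeping looks like in practice (the mapping below is hand-maintained and illustrative, not an authoritative table; "lrcpc" vs. "rcpc" is the example from the text, the other entries are common AArch64 mismatches):

```python
# Sketch: translate a few /proc/cpuinfo hwcap names into the
# feature names that -march= accepts. Hand-maintained and
# illustrative only; real build scripts need a fuller table.
CPUINFO_TO_MARCH = {
    "lrcpc": "rcpc",     # load-acquire RCpc instructions
    "asimd": "simd",     # Advanced SIMD (NEON)
    "atomics": "lse",    # Large System Extensions (atomic instructions)
}

def march_flags(cpuinfo_flags):
    """Map kernel hwcap names to -march feature names, passing
    through any name that needs no translation."""
    return [CPUINFO_TO_MARCH.get(f, f) for f in cpuinfo_flags]

# Building an -march string from the flags the kernel reported:
flags = march_flags(["fp", "asimd", "atomics", "lrcpc"])
print("-march=armv8-a+" + "+".join(flags))
```

The point is just that neither list can be derived mechanically from the other, so both have to be kept in sync by hand.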
Specifically because under bullseye (and clang-11) the default target is armv6k-unknown-linux-gnueabihf while under bookworm (and clang-13) the default target is arm-unknown-linux-gnueabihf.
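For anyone wanting to check their own install, clang can report the default triple it was configured with (commands shown as a sketch; the output naturally varies by install):

```shell
# Print the default target triple this clang was built with:
clang -print-target-triple

# Or look at the "Target:" line in the version banner:
clang --version
```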
Or maybe the default changed for the given build configuration on the LLVM side?
But, when comparing [1] to [2], the rules file has a nice test that says "if DEB_HOST_ARCH is armhf, set the LLVM_HOST_TRIPLE to armv6k..." which seems to confirm a build configuration change.
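Paraphrasing from memory rather than quoting the actual rules file (variable names below are a sketch, not verbatim), the conditional looks roughly like:

```make
# Sketch of the packaging rule described above (paraphrased):
ifeq ($(DEB_HOST_ARCH),armhf)
    CMAKE_EXTRA_ARGS += -DLLVM_HOST_TRIPLE=armv6k-unknown-linux-gnueabihf
endif
```

If that test was dropped or changed between the two packages, the default triple would silently fall back to plain arm-unknown-linux-gnueabihf.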
[1] http://raspbian.raspberrypi.org/raspbian/pool/main/l/llvm-to...
[2] http://raspbian.raspberrypi.org/raspbian/pool/main/l/llvm-to...
This might mean there are no ARMv6 buildbots running, or it might mean there are some running but the implicit configuration still happens to work on them.
LLVM is a really good cross compiler: build for any target from any target, no trouble. Clang is less compelling - if it was built with the target enabled, and you manage to tell it what target to build for, it'll probably do the right thing (as in this post: it guessed wrong, but given more information, did the right thing). The runtime library story is worse again - you've built for armv4 or whatever, but now you need to find a libc etc. for it, and you may need to tell the compiler where those libraries and headers are; for that part I'm still unclear on the details.
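As a concrete sketch of that last mile (the sysroot path here is hypothetical, and clang will not find a libc for the target on its own):

```shell
# Tell clang the target explicitly, and point it at a sysroot
# containing a libc and headers built for that target
# (/opt/armv6-sysroot is an assumed, hypothetical path):
clang --target=armv6k-unknown-linux-gnueabihf \
      --sysroot=/opt/armv6-sysroot \
      -o hello hello.c
```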
Why would Clang do this?
My Gentoo ARM SBC based on an even more ancient armv4 arch has been chugging along just fine with the latest gcc/clang updates:
grep CTARGET /etc/env.d/gcc -r
/etc/env.d/gcc/armv4tl-softfloat-linux-gnueabi-11.3.0:CTARGET="armv4tl-softfloat-linux-gnueabi"

Without that information, it's pretty pointless to make claims about the instruction set LLVM compiles to, because that's a matter of which native target LLVM has been configured for.
FWIW, in Debian, llvm-toolchain-snapshot still supports armel, which uses ARMv5T as the baseline (there is currently an unrelated bug in LLVM’s OpenMP library, though, which prevents a successful build).
Presumably the image is Raspbian. I don't see a reason not to assume that.
The default target is incorrect, and/or the architecture detection is buggy. But binaries built for the Pi B+ - with the correct target arguments - do run on the Pi B+.
So if the title uses wording suggesting the functionality is gone entirely, when in reality only the defaults or the detection are wrong, wouldn't that be hunting for sensation?
I haven’t debugged it because I found a workaround (enable development mode, change build settings so mono isn’t used). I should return to it at some point, just to learn more.
In this particular case, though, native processor detection seems to be failing: clang's feature detection reports armv7l as native (or that could just be the default code-generation option). Looks like a good bug to report, if we can get the good clang folks to take the time to land a fix.
I have been playing around with zig. My current focus will be on not using broken compiler backends for a while.
It now sounds like it is completely broken. But you can just fix it with a flag. And the change of default was probably an unintentional bug.
There's plenty of evidence to the contrary, but since when has evidence mattered when it comes to defending the right of big business / big distro to do whatever they want? ;)
Really, this is just laziness and sloppiness on the Linux distro makers' part. Any amount of testing would catch this. Thanks, Rachel!
1/ Old hardware is free to support because the software for it just keeps working
2/ Lazy Linux people didn't test the software that stopped working on old hardware
Those two things you believe to be true are inconsistent with one another - for example, in this context. What you're missing is that code changes to do new stuff, and sometimes those changes are incompatible with old hardware or operating systems. If no one is testing those old systems and the developer doesn't remember their eccentricities, the old systems will break when the new stuff lands.
If anything, it might be better to spend the resources deleting the support for old hardware (probably at the point where people stop testing on it), so that people using the old stuff get a much clearer message that they also need to use old tools with it. It's hard to get sign-off to do that either; leaving the probably-broken stuff lying around is the spend-no-time-now choice.
The Raspberry Pi Foundation should be running two Pis of every kind - one with the newest version of their official OS and one with the next older - running automated tests. This isn't hard, nor is it time-consuming once set up.
The real issue is that people change things in ways that affect older hardware, which is fine, but they should test those changes. If they don't want to, they shouldn't be making those changes. Period.
E.g., this article.
> Any amount of testing would catch this.
Who is paying for the testing? I'm not suggesting supporting old hardware is bad, but we must recognize that upholding backwards compatibility takes effort. Stuff always gets broken accidentally, and testing isn't free.
In the case of refactoring/restructuring, that's exactly so.
But "drop support for this old hardware" is meant to be an intentional decision with a clear deprecation warning, not something that happens by accident.
Open source is already a miracle. The fact that something works somewhere is a miracle. I don’t tempt the powers and I don’t demand even more miracles to satisfy some perfectionist urges.
Armv6 isn’t really “old hardware” in the “disused, actively rotting” sense. That’s reserved for things like Itanium or HPPA, which distributions (and upstreams) would do perfectly well to remove unless paid buckets of money by their respective corporations.
But the beauty of free software is that you can always do it yourself. (Or pay someone to do it)
All pieces of the puzzle were open enough that the author could track down the problem and correct it. That, not indefinite support for no-longer-manufactured hardware, is the benefit of open source. It's the thing that enables the other thing.
And thanks to the magic of the internet, blogs, and search engines, now that one person has solved the problem there's a cracking chance that the next person to have the problem will find the solution.
https://github.com/Feodor2/Mypal68 https://github.com/win32ss/supermium
Anything else about your alternative reality?
Also, why don’t you go and take on support for this target, if it’s so important to you? Or pay someone to do it? I’m sure the project wouldn’t mind supporting it if someone had stepped up, and I’m sure it’s still not too late.
It has been my experience, over the last 6 decades or so, that it almost always boils down to "doing anything except what I want is a waste of time and resources".
When it comes to free software, you do what you do and learn to ignore the "but what about ME!" demands from those who contribute nothing else. Or you move on and put your energy into something else.
I have one running a BSD UNIX-like OS as I type this comment.