https://github.com/illumos/illumos-gate/blob/master/usr/src/...
The trick uses a branch in the delay slot of a jmp, a decidedly unusual construction. At the time I found this to be extremely clever and elegant... but apparently not so clever as to warrant a comment.
SPARC has two attributes (I hesitate to call them features) that this code interacts with: register windows and the delay slot. Register windows are a neat idea that leads to some challenging pathologies, but in short: the CPU has a bunch of registers, say 64, only 24 of which are visible at any moment. There are three classes of windowed registers: %iN (inputs), %lN (locals), and %oN (outputs). When you SAVE in a function preamble, the register windows rotate such that the caller's %os become your %is and you get a fresh set of %ls and %os. There are also 8 %gN (global) registers. Problems? There's a fixed number of registers, so a bunch of them effectively go to waste; also, spilling and filling windows can lead to odd pathologies. The other attribute is the delay slot, which simply means that in addition to a %pc you have an %npc (next program counter), and the instruction after a control-flow instruction (e.g. branch, jmp, call) is also executed (usually; branches may "annul" the slot).
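To make the window rotation concrete, here's a toy model (my own illustration, not cycle-accurate and not from the illumos source): each window's %o registers physically overlap the next window's %i registers, so SAVE just bumps the current window pointer and the caller's outputs appear as the callee's inputs.

```python
# Toy model of SPARC register windows (illustrative sketch only).
# On SAVE the window rotates so the caller's %oN become the callee's %iN.

NWINDOWS = 8  # real chips vary; 8 is just an example

class Cpu:
    def __init__(self):
        self.cwp = 0                        # current window pointer
        self.phys = [0] * (NWINDOWS * 16)   # 8 locals + 8 outs per window
        self.globals_ = [0] * 8             # the 8 %gN registers

    def _idx(self, w, kind, n):
        base = (w % NWINDOWS) * 16
        if kind == "l":
            return base + n
        if kind == "o":
            return base + 8 + n
        # %iN of window w are physically the %oN of window w-1
        return ((w - 1) % NWINDOWS) * 16 + 8 + n

    def get(self, kind, n):
        if kind == "g":
            return self.globals_[n]
        return self.phys[self._idx(self.cwp, kind, n)]

    def set(self, kind, n, val):
        if kind == "g":
            self.globals_[n] = val
        else:
            self.phys[self._idx(self.cwp, kind, n)] = val

    def save(self):     # function preamble: rotate to a fresh window
        self.cwp += 1

    def restore(self):  # return path: rotate back
        self.cwp -= 1

cpu = Cpu()
cpu.set("o", 0, 42)      # caller puts an argument in %o0
cpu.save()               # callee's SAVE rotates the windows
print(cpu.get("i", 0))   # callee sees the value in %i0 -> 42
```

This also shows why DTrace can "reach into" other frames without a flush: the values of shallower frames are still sitting in the physical register file, indexed by window.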
This code is in DTrace where we want to know the value of parameters from elsewhere in the stack, but don't want to incur the penalty of a register window flush (i.e. writing all the registers to memory). This code reaches into the register windows to pluck out a particular value. It turns out that for very similar use cases, Bryan Cantrill and I devised the same mechanism completely independently in two unrelated areas of DTrace.
How's it work?
We rotate to the correct register window (note that this instruction is in the delay slot just for swag): https://github.com/illumos/illumos-gate/blob/master/usr/src/...
Then we jmp to %g3 which is an index into the table of instructions below (depending on the register we wanted to snag): https://github.com/illumos/illumos-gate/blob/master/usr/src/...
The subsequent instruction is a branch always (ba) to the next instruction. So:
%pc is the jmp and %npc is the ba. The jmp sets %npc to an instruction in dtrace_getreg_win_table while %pc becomes the old %npc, which points to the ba. The ba then sets %npc to the label 3f (the wrpr) while %pc becomes the old %npc, the instruction in the table. Finally, the particular mov instruction from the table executes, and %pc becomes the old %npc at label 3f.
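The %pc/%npc dance is easier to see in a toy interpreter. This is my own sketch: the opcodes, labels, and table layout are simplified stand-ins for the real dtrace_getreg_win_table, but the control flow is the same — the ba sitting in the jmp's delay slot executes after the jmp, so exactly one table entry runs before control lands at label 3.

```python
# Toy %pc/%npc interpreter for the jmp-with-a-ba-in-its-delay-slot trick.
# Instructions are (op, arg) tuples keyed by address; hypothetical layout.

regs = {"i2": 0x1111, "i3": 0xdead, "g1": 0}

def run(mem, pc, npc, max_steps=10):
    for _ in range(max_steps):
        op, arg = mem[pc]
        new_npc = npc + 1          # default: fall through
        if op == "jmp":            # jmp %g3: aim %npc at a table entry
            new_npc = arg
        elif op == "ba":           # branch always: aim %npc at a label
            new_npc = arg
        elif op == "mov":          # table entry: mov %iN, %g1
            regs["g1"] = regs[arg]
        elif op == "done":
            return
        # the instruction currently in %npc (the delay slot) still
        # executes: it simply becomes the next %pc
        pc, npc = npc, new_npc

# addresses: 0 = the jmp, 1 = the ba in its delay slot,
# 2..3 = the mov table, 4 = label 3 (the wrpr in the real code)
TABLE, LABEL3 = 2, 4
mem = {
    0: ("jmp", TABLE + 1),  # pick entry 1, as if %g3 held that offset
    1: ("ba", LABEL3),      # delay slot: branch over the rest of the table
    2: ("mov", "i2"),
    3: ("mov", "i3"),       # the entry we picked
    4: ("done", None),
}
run(mem, pc=0, npc=1)
print(hex(regs["g1"]))  # only the mov at entry 3 ran -> 0xdead
```

Stepping through: the jmp runs with the ba in its shadow, the ba runs with the chosen mov in *its* shadow, and the mov's successor is already 3f — so the table is entered and exited in two delay slots with no explicit return.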
Why do it this way? Mostly because it was neat. This isn't particularly performance critical code; a big switch statement would probably have worked fine. In the Solaris kernel group I remember talking about interesting constructions like this a bit around the lunch table which is probably why Bryan and I both thought this would be a cool solution.
I'm not aware of another instance of instruction picking in the illumos code base (although the DTrace user-mode tracing does make use of the %pc/%npc split in some cases).
* Namespaces don't come close to FreeBSD jails or Solaris/Illumos Zones. There's a reason Docker hosters put their Docker tenants in separate hardware VMs: the isolation is too weak.
* Due to CDDL and GPL conflicts, ZFS on Linux will always be hard to use, making every update cycle like playing Russian roulette.
And there are other benefits. For example, SMF offers nice service management without providing half an operating system the way systemd does.
I completely agree. I love Linux and it's easily my preferred desktop OS, but when it comes to stuff like ZFS, containerisation and other enterprise features, FreeBSD and Solaris are just more unified and consistent. A lot of that has to do with Linux being a hodgepodge of different contributors, so every feature is effectively a 3rd-party feature. Which I think is the problem Poettering was trying to solve. And in many ways that's quite a strength too. But ultimately it boils down to the old Perl mantra of "There's more than one way to do it" and how it's fun for hackers, while FreeBSD et al add a "but sometimes consistency is not a bad thing either" clause to that mantra.
https://en.m.wikipedia.org/wiki/There%27s_more_than_one_way_...
This is largely a myth; please provide a namespace-related CVE that has gone unpatched to support your argument. The reason they run as VMs is that hypervisors run at a higher privilege level than the guest kernel, so they are naturally a stronger boundary. Like namespaces, Zones and Jails are also managed by their respective kernels. If there were any major hosters running managed services for Zones and Jails, you can bet they would implement them in a similar way.
> Due to CDDL and GPL problems ZFS on Linux will always be hard to use making every update cycle like playing Russian roulette.
You're right in that the CDDL causes complications, but I don't consider this a compelling reason to use Illumos. Many who want to use ZFS on Linux will use it and get it to work despite the licensing issues and complications.
> Like SMF offers nice service management while not providing half an operating system like systemd.
SMF is relatively nice (apart from the use of XML) and, like you, I would not touch systemd with a barge pole. Despite systemd making a lot of noise in the major distros, there are plenty of alternative distros for those of us who don't want to use it.
Don't get me wrong, I'm a Solaris guy, it made my career. I just fear that by dropping SPARC, Illumos have put the final nail in their own coffin.
With Linux that will eventually happen with ARM, but currently only Android is adopting it; who knows when it will ever come to upstream, enabled by default like on Android.
It's been said in the thread already, but this was always a non-starter. Torvalds even said so himself. The CDDL was the last poison pill of a dying giant who couldn't pull its foot from the well.
What we, er, the Linux community, chose instead was Btrfs. It isn't ZFS, but it's made incredible strides. For most use cases it is a reasonable and working replacement for ZFS.
That is a huuuuge overstatement of the current state of Btrfs. In some specific domains it is a working replacement, but for most it still falls far behind ZFS in terms of stability, resiliency, and even ease of use.
By all means, if you want to use Btrfs then go for it. But the favourable comparisons people make between Btrfs and ZFS are a combination of wishful thinking and not having really bullied their fs into those extreme edge cases where the cracks begin to show. And frankly I'd rather depend on something that has had those assurances for 10+ years already than have the hassle of explaining downtime while restoring data on production systems.
Speak for yourself. As a part of “the Linux community” I gave btrfs a fair chance, but stopped using it because it constantly failed on me in ways no other fs had done before and didn’t protect my data.
ZFS is rock solid and I’ve never had any of the issues I had with btrfs.
So as a member of “the Linux community” you claim to unilaterally represent I put such petty license-politics aside and choose the file system which serves my needs best, and that is ZFS.
Isn't that putting politics before technical excellence, something the Linux crowd is proud of? Other than in-place volume expansion, there is no technical reason to choose Btrfs over ZFS (for now).
I don't really see a killer feature from BTRFS that would persuade me to take a chance with it.
While I mostly use Linux these days, for file servers it must be ZFS, which means whichever OS has first-class support for ZFS. I'm still on Illumos but perhaps will move to FreeBSD at some point.
Torvalds' comment about ZFS was as uninformed as it gets... and he calls himself an FS guy ;(
I used it for hosting a lot of Java over the years, but these days everyone wants a k8s endpoint, and the kind of hypervisor you're running doesn't really make a difference.
Shame, it was nice tech.
Lxc containers vs Solaris Zones... zones clearly wins.
SMF vs systemd (I know that you didn't include this, but it matters)... SMF is clearly superior as well.
Alpha’s loosey-goosey memory model makes multithreaded code on SMP systems more challenging. Linux utilizes its Alpha port as a worst-case testbed for data race conditions in its kernel.
SPARC's register windows are anachronistic and complicate the implementation of CPUs, and I'd guess they also make it more difficult to build OoOE cores (which may be why so many SPARC chips are in-order).
POWER isn’t so bad though. It’s open enough where you could build your own lower-cost core if you wanted. There’s nothing intrinsic to the ISA that would mandate an expensive chip other than volume constraints.
PA-RISC put up some great numbers back in the day but between the Compaq acquisition (bringing with it Alpha) and Itanium it was chronically under-resourced. They had a great core in the early 90s and basically just incrementally tweaked it until its death.
https://github.com/antonblanchard/microwatt
(Disclaimer: minor contributor)
I really liked PA-RISC. I thought it was a clean ISA with good performance at the time and avoided many of the pitfalls of other implementations. I think HP didn't want to pour lots of money into it to keep it competitive, though, and was happy to bail out for Itanium when it was viable. My big C8000 is a power hungry titan, makes the Quad G5 seem thrifty.
If you look at ARM, particularly the 64-bit version, you'll notice it attempts to squeeze multiple operations into a single 32-bit "instruction". It's still called RISC, but not really "reduced" anymore.
I thought Alpha and ARM were the same with respect to that.
ARM had some fairly nasty to track down XFS file system corruption bugs for quite a while for exactly this reason.
The issue has always been that x86 goes out of its way to generally be more forgiving than the spec.
Is that still true in the present tense? Is anybody doing this in 2021? Alpha seems to have been dead for a long time.
That's not correct. s390x is big-endian and well supported in all enterprise distributions such as SLE, RHEL as well as Debian and Ubuntu.
Serialization formats like JSON/YAML/protobuf/etc. would be much more costly by comparison.
ARM: 1985 ARM64: 2011
RISC V: 2010
It took x86 about 10 years (1988) to become the most popular, and until 2005 to cause Apple to switch (another 17 years)
It took ARM about 25 years (2010) to become the most popular, and until 2020 to cause Apple to switch (another 10 years)
The Newton, then the iPod, then the iPhone, and now the M1.
The iPhone is a more important device for Apple than the Mac from a revenue point of view, and they've sold more devices with ARM chips in them than they have 68k, PowerPC, or x86. They've sold 2.2 billion iPhones. I can't find an easy number on how many Macintoshes they've sold totally, but I can't imagine it's close to that.
In fact, they used ARM in the Newton (1993) before they used PowerPC in the Power Mac (1994).
You can buy them used or new in various kind of servers.
> How many of those who have/can have it are running Illumos and are putting money/time in it?
Dunno, I'm not really a Solaris guy. I use Solaris as a hypervisor for Linux and BSD LDOMs.
> And more importantly what's the outlook for SPARC?
Well, you could make the very same argument about Illumos. The Python developers wanted to drop support for Solaris already and OpenJDK upstream did actually drop it.
OpenBSD will eventually face the same issues with older systems, and I believe they already dropped platforms because hardware couldn’t be replaced.
For newer SPARC systems you could "just" buy one. Oracle doesn't need to donate them, though it would be nice if they did, but the community around Illumos, Debian and OpenBSD could raise money to buy these systems.
So, while I don't really have a problem with removing SPARC support from Illumos which I wouldn't be using on SPARC systems anyway, the reasons mentioned in the document aren't convincing me at all.
FWIW, we still support sparc64 in Debian Ports:
As one of approximately 2 people who actually build illumos on SPARC, I can testify that the whole thing is enough of a maintenance burden that it's causing major problems.
(And I'll just fork a copy and use that; it's not as though the SPARC ecosystem is a moving target.)
This was actually one of my first questions on seeing the announcement - is Tribblix SPARC going to continue, or will this upstream change eventually EOL that as well?
Sparc64 support is rocky; yes, it has a modern gcc, but stack unwinding in gdb is totally broken. (Not sure if that's a gdb or a gcc problem, but just try building some trivial program where main() calls some trivial function, and then try setting a breakpoint inside that function, and then try getting a stack trace.) This made finding the root cause of the alignment bug-induced crash much harder, but at least sparc64 served as a canary in the coal mine. Supporting niche architectures is great from a software quality perspective.
[1] https://buildd.debian.org/status/fetch.php?pkg=e2fsprogs&arc...
I'm guessing the problem isn't that newer GCC lacks SPARC support, but that their (now very old and bitrotted) SPARC support relies on some kind of undefined behavior or nuance of GCC 4 that prevents newer versions from building the kernel.
Solaris on Sparc was great until ca. 2006. After that it started dying.
For anyone who got to use a T3, T4, etc., performance was obviously and substantially improved.
You're also ignoring significant innovations such as ADI.
Regardless, it doesn't matter anymore.
Solaris was portable by design all along, in the later Unix fashion. Sun actually sold their first x86-based system running SunOS all the way back in the 1980s, as a low-end complement to their new high-end SPARC machines: https://en.wikipedia.org/wiki/Sun386i
That's a lot of work and I don't see IBM making a machine available.
More on topic, as someone sentimental for SPARC hardware and to an extent solaris, this is sad to see, but it feels like just a formality. I don't think illumos has worked properly on SPARC hardware since ... Ever? There were a few illumos distributions with SPARC builds but I always had trouble getting them to run on even a T2, seems there was little work done post rebranding from OpenSolaris for SPARC. Linux and OpenBSD have been much, much better on SPARC than Illumos, tinged with bitter irony...
We tried to convince Oracle to donate a SPARC machine for Debian but that unfortunately never happened for various reasons.
But IBM wouldn’t help someone to build an AIX-killer OS.
SPARC or not SPARC, I would love to help with that!