I'm curious if anyone here currently works (or has very recently worked) somewhere where proprietary Unix is still used in production. If so, can you tell me what those systems are used for, and why those deployments haven't been moved to an appropriate Linux distribution?
Not suggesting Linux is necessarily better for all use cases, just wondering what keeps this small number of entities clinging to closed-source Unix, with its presumably pricey license costs.
This used to run on a Compaq ProLiant server (huge noisy Intel 486 tower) until the end of the millennium or so, then was converted into a VM. First on VMware, then on Hyper-V, where it has been running comfortably on various hardware (Intel Dell PowerEdge, AMD SuperServer) ever since.
Access is the biggest issue, as the OS only supports telnet and serial access. So ever since it was converted to a VM, it has run on a dedicated VLAN (666, just to make sure nobody ever misunderstands the true evil underneath...), with an AD-authenticating-HTTPS-to-Telnet bridge (coded up in Visual Basic.NET using some long-long-deprecated libraries) connecting it to the outside world.
That VB.NET kludge was recently upgraded to .NET 6, in order to get TLS 1.2 support. This was surprisingly uneventful, and I'm pretty sure this abomination gets to live another decade or so.
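For anyone curious what such a bridge boils down to, here's a minimal sketch in Python (emphatically not the actual VB.NET code, and skipping the TLS termination and AD authentication layers entirely) of the byte-shuffling core: relay an already-authenticated client connection to the telnet port on the VM. The addresses and ports are made up.

    # Minimal relay sketch (NOT the real VB.NET bridge): TLS and AD auth omitted.
    # Addresses and ports are invented for illustration.
    import socket
    import threading

    LEGACY_HOST = "10.66.6.10"   # hypothetical address of the SCO VM on VLAN 666
    LEGACY_PORT = 23             # telnet on the legacy box
    LISTEN_PORT = 8023           # where already-authenticated clients connect

    def pump(src, dst):
        # Copy bytes one way until either side closes.
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        finally:
            dst.close()

    def handle(client):
        backend = socket.create_connection((LEGACY_HOST, LEGACY_PORT))
        threading.Thread(target=pump, args=(client, backend), daemon=True).start()
        pump(backend, client)

    with socket.create_server(("0.0.0.0", LISTEN_PORT)) as srv:
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()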
Ah, yes, a career in IT... Always on the forefront of cutting-edge tech...
(Later edit to, like, actually answer the question: licensing costs are nonexistent: SCO is gone anyway, and we don't require any support/updates. Migrating to Linux might be an option, but is most likely going to be hugely painful, and the existing VM scenario Just Works for everyone involved. Security and such is not a real issue: only a handful of internal users have highly-restricted access via a proxy)
My first job in high school was at a company with the entire business running on SCO Unix. I want to say OpenServer 3, maybe? It was essentially a terminal server with dozens of Wyse 60 terminals attached.
Anyway, as a Linux enthusiast I promptly set up a RedHat 7 install on some old hardware they had lying around. IIRC it was a low-end Pentium, but it could handle a PCI 100 Mbit ethernet card just fine.
Anyway, the goal was to move data between the SCO system and something with a TCP/IP stack (the RedHat machine) so it could go somewhere - samba shares on the rapidly growing ethernet network, maybe even the internet!
We ended up using UUCP over serial, scripting, and cron jobs to push/pull from directories on each side. The RedHat machine was promptly connected to a 56k modem to do dial on demand and IP masquerading for the ethernet network and uploads of specifically formatted files from the SCO system via FTP to vendors and partners.
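The original scripts are long gone, but the push side was roughly this shape - something along these lines run from cron on the RedHat box, queueing anything dropped into a spool directory for the UUCP link (the system name "sco" and the paths here are invented):

    #!/usr/bin/env python3
    # Rough illustration of the cron-driven push side, not the original scripts.
    # The system name "sco" and the spool paths are invented.
    import subprocess
    from pathlib import Path

    OUTBOX = Path("/var/spool/to-sco")        # drop files here to push them
    SENT = Path("/var/spool/to-sco/sent")     # moved here once queued

    def push_pending():
        SENT.mkdir(parents=True, exist_ok=True)
        for f in sorted(OUTBOX.iterdir()):
            if not f.is_file():
                continue
            # Queue the file for the "sco" system's public UUCP directory;
            # uucico does the actual transfer when the serial link is polled.
            subprocess.run(["uucp", str(f), "sco!~/incoming/"], check=True)
            f.rename(SENT / f.name)

    if __name__ == "__main__":
        push_pending()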
Fun times!
OpenSSH works on it. This page has links to precompiled packages:
https://scosales.com/knowledge-base/how-to-install-ssh-for-s...
links:
ftp://ftp2.sco.com/pub/skunkware/osr5/vols/openssh-3.4p1-VOLS.tar
ftp://ftp2.sco.com/pub/skunkware/osr5/vols/prngd-0.9.23-VOLS.tar
ftp://ftp2.sco.com/pub/skunkware/osr5/vols/zlib-1.1.4-VOLS.tar
Probably better to grab the sources and compile them yourself if you can, though.
> Migrating to Linux might be an option, but is most likely going to be hugely painful.
Probably. At least it sounds like you only have a few users for it, so getting them to adapt to a change of software might be easier.
The plan is definitely to retire the system Real Soon Now, but with the subjects of the underlying data spawning new generations with new lawyers, ensuring some kind of Y2K38 compliance might be wise...
Apache Guacamole supports AD auth and (surprisingly) telnet.
Good times
Usually with systems of this vintage, "just dump all the data to Excel or PDF and get it over with" is a good option, but in this case both the volume (with the requirement to run queries on it) and the limited options available for export (the system can only print predefined reports, and they don't contain everything required for filtering) prohibit that.
So, next stop would usually be "reverse engineer the application data format and convert it", but the unholy collection of binary files used by the accounting software here has defied analysis: it's not Btrieve or MS-ISAM (popular semi-database formats for COBOL and BASIC apps of the time), and decompiling the binaries only yielded some generated-by-another-set-of-tools braindamage that didn't clarify anything either.
The choice then became spending huge amounts of money, or wallpapering over the tirefire and keeping it running. Unsurprisingly, the outcome was the latter, which is perfectly OK in this case, as the system is not exactly load-bearing, and actually sort of fun to maintain.
They released a major update in 2020 that allowed you to move windows around the screen. It was groundbreaking.
But let me tell you, this system was absolutely terrible. All the machines were full x86 desktops with no hard drive; they netbooted from the manager's computer. Why not a thin client? A mystery.
The system stored a local cache of the database, which was only superficially useful. The cache was always several days, weeks, or months out of date, depending on what data you needed. Most functions required querying the database hosted at corporate HQ in Cleveland. That link was up about 90% of the time, and when it was down, every store in the country was crippled.
It crashed frequently and was fundamentally incapable of concurrent access: if an order was open on the mixing station, you couldn't access that order to bill the customer, and you couldn't access the customer's account at all. Frequently, the system lost track of which records were open, requiring the manager to manually override the DB lock just to bill an order.
If a store had been operating for more than a couple of years, the DB got bloated or fragmented or something, and the entire system slowed to a crawl. It took minutes to open an order.
Which is all to say it's a bad system that cannot support their current scale of business.
It just seems that the actual software running on the OS, with the text UI, is profoundly terrible in your case.
The ancient software also wasn't bad. After a few months of learning the hotkeys and menu structure, the speed with which you could enter and process data was absolutely incredible. It had problems, but they were usually minor and patched in a reasonable time for corporate IT.
The real problem was their database management. I don't have any information, so I'm assuming here, but my impression is that they're using some positively ancient database software. Doing a backup of the local cache took multiple days, though it didn't lock the DB. Requests to HQ were incredibly slow, about 30 seconds to pull an account record. Larger queries like neighboring store inventory took a minute or two. Running a report on local inventory would regularly take tens of minutes, and it only had to read the local cache.
The database was a few tens of GB on disk. Granted, I don't know much about databases, but if running something like "SELECT * FROM inventory WHERE sales < 100 ORDER BY lastSaleDate" on a 30 GB database takes 15 minutes, something is wrong.
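For a rough sanity check on that: even a dumb full scan of 30 GB at, say, 100 MB/s of sequential disk throughput is on the order of 300 seconds, i.e. about five minutes. Fifteen minutes for a filtered, sorted query suggests the engine is doing something much worse than one sequential pass - row-at-a-time access over a badly fragmented file, or no usable index at all.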
There were a lot of problems we ran into on a daily basis, and almost all of them related to database functions. Particularly when a record failed to unlock, sometimes we'd have to reboot the local server, which caused all terminals in the store to reboot. That usually took a good 15 minutes.
Personally, I rather enjoyed not having Windows at work. For the most part, everything Just Worked, and given the hardware, it ran ten times faster than Windows would have.
My current job is at a Windows development shop, and I don't have enough curses to describe the pure rage I feel every time Windows does something stupid (which is approximately every three hours).
The vendor went into a cycle of refunding and re-billing my store for that part every few months for years.
Fortunately both our books came out even in the end, but Jesus what a stupid thing to happen.
This is half of the reason everyone who worked at my store walked out on the same day. The other half is that the only people who worked there were me and the manager, and it had been that way for six months.
My advice is to avoid SW these days and go to Lowe's. SW is contractually obligated to ensure that Lowe's always has inventory. But do spend a little extra money for their mid-tier product. The cheapest stuff is trash and you will regret it.
There’s a bunch of enterprisey technology that is not yet dead, but dying very slowly.
Another similar client is using Solaris (SPARC) for an analogous application; they have been using it since 1996, I think because Sun (Oracle) always provided an easy migration path, so the applications didn't need to be ported.
As in most medium/big enterprises, in both cases the hardware/software price is not the main decision driver; the drivers (IMO) are support/SLAs, compliance checklists, and overall risk management.
BTW, in these cases Linux is also used for more "internet oriented" applications.
HANA is already Linux on POWER so it will be nice not to have the AIX/SUSE split once we fully migrate over.
We tend to run pretty up to date config wise so it's helped quite a bit while planning upgrades. Only AIX 7.3 and SLES 15 SP3. Already moved from P7 => P8 => P9.
https://www.usa.philips.com/healthcare/solutions/radiation-o...
Last I heard they were desperately trying to get it onto Linux. The reason it isn't there yet is that it's a huge legacy application with some very horrible hacks specific to the OS.
The software being run is SAP R3/Oracle, and there are plans to replace it, but that is not happening anytime soon due to the usual delays associated with ERP migrations.
License cost is a red herring here, especially when dealing with enterprise applications from the likes of SAP, Oracle, and IBM. Heck, we're probably paying as much for our SAP on SUSE subscriptions as we do for our HP-UX licenses, and the real license cost is in the applications and databases.
And it's not that long ago (say 2014) that there were niches where the only real cost-effective way to get enough single-box I/O performance was to buy a non-x86 box that came with its own Unix. So there are a lot of systems out there where the hardware isn't actually old enough for a 1:1 migration to make commercial sense, and rewrites/redesigns of ERP software are risky, with most projects overrunning both time and budget by an order of magnitude, if they don't outright fail to deliver a new system.
Or so they thought. Now it looks like most of the engineers moved on to the slate green pastures of Wintel, but occasionally an old format or workflow tends to pop up. I know of some software still being updated for those old V4 machines 5 years ago, but I've been out of the loop since.
The same software is used in aerospace, too, where you're not switching to totally new models as fast, so I wonder how legacy-laden their software infrastructure is. "Who here knows both French and early-'00s SGI admin?"
When I worked for $(LargeDefenseContractor) we used Solaris for a defense system we were developing. Over time the older units (based on older hardware) would be passed down to the national guard. I would not be surprised if Solaris was still being used in obscure places in the military.
Solaris 7 was a pretty awesome OS as I recall, but pretty soon Intel and AMD started supporting Linux as a workable OS option for their server chips. Then Linux on the cloud took off and the rest is history.
I can't say bad things about the whole Sun/Solaris combo though. It's rock solid and requires practically no maintenance whatsoever.
Also, since it's completely off the internet, it's not like it could be compromised in any way.
The stepper is a tool that is too expensive to retire, and if routinely maintained, can remain operational and happily expose wafers for more than twenty years[1]. No surprise to see it contains some ancient (by consumer electronics standards) software.
[1] https://www.asml.com/en/news/stories/2021/three-decades-of-p...
Also, I used an HP 86142B optical spectrum analyzer last year, which runs HP-UX.
Most customers were on Ubuntu/RHEL/Windows. There was very little on FreeBSD, AIX or Solaris. We had zero interest in HP-UX; I think that is dead enough to be ignorable. Banks and financial services have a tendency towards AIX, while Solaris I think was primarily one customer that had a lot of legacy. AIX and Windows were the biggest pain in the ass, but every time we tried to kill support for them, people discovered sizable contracts that had been signed with us (yeah, our tracking in Salesforce was bad).
My background is that I learned C and Unix on a Tandy 6000 back in 1989/1990, then in college used and worked on a wide variety of O/Sen (Dynix, BSD4.3, Digital Unix 4.0, SunOS 4.1.4, Solaris 2.4+, Irix 5.x/6.x (i think), something that ran on a VAX, NetBSD and later Linux). I ported NMAP to a bunch of those and did the original GNU autoconf work on it. I've been mostly Linux since 2001 (Amazon from 2001-2006).
You are kind of combining two things though: legacy systems, and proprietary systems.
There are modern proprietary systems as well. RHEL is a good example.
I'd argue it's not a "small number of entities" though. You'd be shocked by what legacy systems are running in the most important places on the planet...maybe scared. Unfortunately/Fortunately, nuclear facilities aren't running Linux Kernel 6.X
The fact is, a lot of (probably most) problems solved with a computer don't need further updates. As long as the hardware continues to function, all is well.
Something you may have not considered is that when the time does come, and the hardware does fail, I'd guess most organizations will opt -- and even go out of their way -- to source those same legacy components they had before to keep things running exactly the same instead of upgrading to a more modern solution.
I've had to do this a number of times for clients. Not long ago, I had to source an old mainboard for a system that was 20+ years old. In doing so, I realized there is some good money to be made if you can source parts for systems about 20 years in the past, because the board was like $300 (this was 2017, and the board had a 33 MHz processor and like 8 MB of RAM).
If you don't have to touch these systems, count yourself lucky.
If you do touch these systems, thank you for your service.
In regards to the modern proprietary systems, there are many, but if you consider RHEL for instance, there is a lot of value for large organizations. They can reduce the number of on-hand personnel, who would probably be less efficient at solving an OS issue than a RHEL engineer. As an example, the Federal Reserve runs modern RHEL... but I'd guess if you dig deep, they have some really old stuff too...
RHEL is free software, isn't it?
(it's just that if you decide to fully use the rights given by the licenses, bye bye support and updates from Red Hat IIRC)
The ‘X’ and capitalization went away when they finally bumped the version number to 11.
They usually are just running some closed source service that is too expensive/impractical to replace, and aren't causing enough pain and suffering to anyone so there is no business case for replacing them.
I like having them around. Sure, projects could be created to replace them with some modern webshit on Linux, but it would probably run into the tens of millions of euros, take years, and work less reliably than the shit that's been chugging along just fine for longer than I've worked in tech.
Previous to that (late 90's, early 2000's) it was mainly Solaris. One place was fairly heterogeneous, and had Solaris, HP-UX, Digital Unix (aka OSF/1, Tru64), and a couple others.
The reason is maintenance contracts for very long-term systems that are never upgraded (they are ultimately completely replaced by something else).
They are similar to that old bulb in a fire station[1]: it is strictly forbidden to breathe when around these systems, if you sneeze you are fired on the spot :)
We had to move them twice between data centers. I had some popcorn with me when they were powered off, transported as if it were the Mona Lisa, and then restarted with the sysadmins not watching and asking for the flashing numbers. Good memories.
Proprietary Unixes have probably made up 75% of my career, including my current massive project. Note these are modern and up to date - hardware and software usually purchased brand new for the implementation project, not inherited legacy stuff (which seems to set me apart from most respondents here).
There are several reasons, but note I have a very bottom-up perspective.
1. Support. I think this is the major thing. Having a reliable long-term vendor with a pricey, well-written, steady support model is important to companies who run ERP.
2. Related is the perception and reality of stability. AIX on POWER is as proprietary as it gets. These things get rebooted once or twice a decade. A hardware upgrade to another frame is a live migration through firmware. It is not fancy or pretty, but God dammit baby, it works. Perception is there too - that Linux scales well out but not up, that its vendor support is not at the same level, that it moves too fast and breaks things, etc.
3. Deals and contracts. There may be a legacy hardware footprint, or the client may get a package deal with application, middleware, database, and hardware.
My personal perspective? Proprietary Unix is ahead on internals, behind on shiny: boring and reliable. There's a lot to be said for distributed cheap boxes over proprietary big boxes. But I don't think modern SREs fully grok how rarely I ever had to deal with a hardware or OS issue or outage on these things. Anecdata: sample size = 1, plus gossip, but it's just a very different mindset and, here's the trick, there's nothing inherently wrong with that mindset, even if it's not currently in vogue.
I prefer to work with Linux for a few reasons, including shiny and resume-helpful, but honestly, from a business/management perspective, a grouchy experienced AIX sysadmin on a POWER stack makes my job a lot easier.
Edit / p.s. Again, these are modern OSes and support a GUI and tunnelling... but I don't think anybody ever uses them. The application stack running on top is certainly modern and GUI/Web, but installing and supporting the OS, database, middleware, and apps is all CLI: very obscure, very efficient, very powerful.
My take on each of the OSes was:
AIX and the associated IBM stuff is kind of a mess. I encountered a bug where /etc/filesystems (fstab equivalent) was parsed differently during boot than when using the mount command manually. The focus seemed to be on the use of the menu-driven smit utility as the primary admin tool, with automation of admin tasks an afterthought. The builtin commands are often not very practical, requiring multiple steps to do things that you're used to doing in one on Linux. Installing some open-source tools is essential to sanity. Some of IBM's own tools use expect on their own software (looking at you, lpar_netboot).
SCO is clearly unmaintained stuff that looks like it dates from 30 years ago. At least it's simple to use.
Solaris had some nice features, like Zones or ZFS, but much to my dismay I couldn't play with them, as I was made to install an old version of the OS, since the newer version wasn't listed as supported by the version of Sybase that was to be installed on it.
I worked at an ISP in 2007 which was running mostly on Sun hardware and Solaris. This was because of huge discounts provided by Sun. Most devs ran Linux on their workstations. In 2014 I got to work with some guy whose previous project had been at that ISP, who was at that point desperately trying to move off Solaris because they had to start paying list price for the OS and it was much too expensive.
Correction: It has been pointed out to me that I'm currently using macOS which, Darwin notwithstanding, is technically a proprietary UNIX.
I recently rewrote the system we use to push user accounts and passwords to systems that don't support LDAP. It was amusing to write an app using a current-day stack on RHEL 8 that purely exists to handle these very legacy systems.
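Not their app, obviously, but for a sense of the general shape of such a tool, here's a hypothetical sketch under invented assumptions (hostnames, base DN, attribute names, and the choice of the ldap3/paramiko libraries are all mine; it also assumes the legacy host at least speaks SSH): pull accounts out of LDAP, then replay them onto hosts that can't bind to LDAP themselves.

    # Hypothetical sketch, not the poster's actual app. Hostnames, DNs and
    # attribute names are invented; assumes the target still speaks SSH.
    import ldap3
    import paramiko

    LDAP_URL = "ldaps://ldap.example.internal"
    BASE_DN = "ou=people,dc=example,dc=internal"
    LEGACY_HOSTS = ["legacy1.example.internal"]

    def fetch_accounts():
        # Anonymous bind just for the sketch; a real tool would authenticate.
        conn = ldap3.Connection(ldap3.Server(LDAP_URL), auto_bind=True)
        conn.search(BASE_DN, "(objectClass=posixAccount)",
                    attributes=["uid", "uidNumber", "userPassword"])
        return [(str(e.uid), str(e.uidNumber), str(e.userPassword))
                for e in conn.entries]

    def run(ssh, cmd):
        # exec_command returns immediately, so wait for the exit status.
        _, out, _ = ssh.exec_command(cmd)
        return out.channel.recv_exit_status()

    def push(host, accounts):
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(host, username="root", key_filename="/root/.ssh/provision_key")
        for name, uid, pw_hash in accounts:
            # Create the account if missing, then set the password hash directly.
            run(ssh, f"grep -q '^{name}:' /etc/passwd || useradd -u {uid} {name}")
            run(ssh, f"usermod -p '{pw_hash}' {name}")
        ssh.close()

    for host in LEGACY_HOSTS:
        push(host, fetch_accounts())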
One of my favourite systems I've had to work on is running Solaris 2.5.1. Users are added to the program by editing the source code and recompiling it. How times have changed.
I find the sysadmins at telcos come in three flavours: the ones who will work with me to secure their cool old shit, the ones who want to get rid of it, and the ones who have fucking meltdowns if I so much as consider touching their weird old shit.
What's neat is the weird old shit usually gets support forever, whereas support for modern shit tends to be short.
Turns out most of the wet plant (a fancy way to say something is submerged under the water) were HP/UX systems.
Submarine cables must last for 20 years minimum to be financially viable... so in 2035 they will still be running.
and it was kind of great. kept things interesting, at a minimum.
once linux and red hat started gaining real traction in industry, i felt like losing all these high-priced unix distributions was kind of... lame.
i always had this idea, for instance, that working on/with Solaris meant i was driving this high performance _machine_ that was capable of doing almost anything, as long as i was up to the task - the Mercedes of OSs.
losing all those -- i would kind of compare it to how the English language - like the Linux OS - is taking over the world. at the same time that is happening, either as a direct result or something less than that, we're losing all these other languages. ditto biodiversity loss. ditto city gentrification / sameness / sterility. it feels wrong / unhealthy.
¯\_(ツ)_/¯
https://www.theguardian.com/news/2018/jul/27/english-languag...
what i'm saying is i miss the days of BeOS. :)
Back in the 00s it was common to have to work on multiple proprietary platforms. I did a lot of platform engineering work for one product that ran across Solaris (Sparc and x86), AIX (POWER), HPUX (PA-RISC and Itanium), Linux (x86 and System Z) and Windows.
Now ... if I'm lucky I don't have to care about the platform at all, I just write lambdas in my language of choice and throw them at AWS. It's a very different world!
There was a common set of C shell-based tools used between those two environments, among other 90s-style Unix tools, like software written against the SunOS Open Look widgets, and tools written in Tcl/Tk.
I wonder if those machines are still in use!
What am I missing?
One of my college courses ages ago was about working with MPI, which we got to run on the HPC cluster.
The last time I needed to run N copies of something I asked kubernetes to do it. The time before that, I asked $cloud_vendor for N identical VMs with the same cloud-init script.
Supposedly Google's in-house stuff that kubernetes and map-reduce (the product, not the concept) are public versions of, is all about running stuff well on huge groups of machines.
For example, large banks need to securely, reliably, and very efficiently process an unfathomable amount of transactions[1]. In this case, kubernetes would be a giant waste of resources and complexity. The former one hampers throughput, the latter one means security and reliability suffer.
For people not familiar with it (me included, actually), it can be mind-boggling what throughput is achieved, and what mechanisms for reliability are in place. Not just in software, in actual hardware; this goes way beyond ECC memory.
[1] Transactions in the bank sense, not in the computer sense, because I don't want to confuse matters more. In the mainframe world for example, there is a difference between "batch processing" and "online transaction processing", but both could be applied to bank transactions. Note that I'm not advocating for the mainframe world here.
We also apparently still have some AIX, but I'm not sure what it supports. AIX is still somewhat popular in financial services; probably others as well.
I know of a couple large HP/UX shops left.
That’s an understatement!
Earlier this year I helped our final Solaris SPARC customer move over to x86/Linux. Every year for the last 5+ years they've asked us to extend support for just one more year; they were almost done with the migration! In the end, we had to compile them a few pieces of software with weird configurations because they took the standard AWS Linux and, well, I have no idea what on earth they did to it.
For a while after that, I could imagine some HP account manager for Boeing being able to pull some strings to get the patch, even if it was from another division. And since HP got DomainOS onto an HP 9000/4xx post-Apollo-acquisition, I'd guess that HP probably retained the code and know-how for a while.
(Though I couldn't guess how long they kept their Domain lab systems around, especially if there were business reasons to push customers to HP-UX on HP-PA, rather than gouge a handful of people who'd still pay support contracts for Domain. I've seen companies run sustaining engineering labs for a while, but then eventually discard key tooling, like the test rigs for a product, which irreversibly marks the end of that, even if the engineers are still on-staff.)
I wonder whether HPE now has the Apollo/Domain IP, or who has it?
Anyone running VMS? RSTS/E? Or on rare hardware, OS-32 on a PE 8/32, or MPX on any SEL 32 family? MPE on Harris?
the machine has to be at least 20 years old at this point. but it feels fast. bash. find. xargs grep. command line stuff very responsive.
it felt fast back then. i do remember that.
Later on, in eh, 2015 or so, I worked at a company providing backup software that was tested and worked with every piece of niche Unix hardware under the sun. Usually to support large legacy industrial companies (think defense, materials, etc.) who still used the hardware.
Banks will also use these things mostly for legacy reasons. The software got written once and has been working and validated for decades, no reason to rewrite it for a different OS just because.
Price is usually not a major factor when compared to the size of the business and number of employees.
You can draw a continuous—if very tortuous—line between Bell Labs’ original UNIX codebase and modern macOS/iOS.
On rooted devices anything goes; on a random Android device your application might get killed for trying to access private APIs, meaning anything that isn't documented as an official NDK API.
Both were excellent systems to work on in dev and prod.
I’m pretty certain most if not all are still running.
The reason I'm chiming in is to give my advice to those still dealing with these; learn vi, at least enough to edit config files. It's the only editor you can find on all of these, and you often can't install your own software.
Several times we looked at moving the DB to Linux/x86 but Oracle's pricing made it non-economical, or so I was told. All the app-tier servers (Java) ran Linux.
Haven't used a non-Linux system in production since leaving that company.
But later Oracle shut down the JVM/Solaris integration/QA team, and shortly after that discontinued Solaris for new hardware completely.
Linux it is now.
Luckily it had bash, so it felt like a Linux system for my scripts. There is a command to enumerate all the hardware; I remember running it to see what was assigned to the LPAR the AIX instance was running on. It took 3 minutes to run and complete :)
So yeah, a mainframe. Telco. IBM obviously. It was used as a massive Oracle database.
If it was AIX, it wasn't a mainframe. AIX runs on IBM's POWER systems, which are their "midrange" line. (There was, for a brief time, AIX for S/370 I believe... but that disappeared long ago.)
Mainframe would be a System z (the S/360->S/370->S/390->z/Arch lineage), typically running z/OS (or Linux for z!)
And it isn't the same as AIX. IBM has two commercial UNIXes still alive, which is kinda wild.
Most servers were already Linux/x86 based at the time (2012), but I vividly remember SSHing to that one machine where things felt just... different.
I'll bet this isn't even the source code Apple uses, but rather they have a private fork with extra patches (similar to how Microsoft publishes OSS VS Code, but then uses their own proprietary version for releases).