I've only ever worked at startups before, but HashiCorp itself left that category when it IPO'd. Each phase is definitely different, but then again I don't want to go back to roadmapping on a ridiculously small whiteboard in a terrible sub-leased office and building release binaries on my laptop. That was fun once, but I'm ready for a new phase in my own life. I've heard the horror stories of being acquired by IBM, but I've also heard from people who have reveled in the resources and opportunities. I'm hoping for the best for Nomad, our users, and our team. I'd like to think there's room in the world for multiple schedulers, and if not, it won't be for lack of trying.
Every IBM product I've ever used is universally reviled by every person I've met who also had to use it, without the slightest exaggeration. If anything, I'm understating it: I make a significant premium on my salary because I'm one of the few people willing to put up with it.
My only expectation here is that I'll finally start weaning myself off terraform, I guess.
During my time at IBM and at other companies a decade ago, I can name examples of this:
* Lotus Notes instead of Microsoft Office.
* Lotus Sametime Connect instead of... well Microsoft's instant messengers suck (MSN, Lync, Skype, Teams)... maybe Slack is one of the few tolerable ones?
* Rational Team Concert instead of Git or even Subversion.
* Rational ClearCase instead of Git ( https://stackoverflow.com/questions/1074580/clearcase-advant... ).
* Using a green-screen terminal emulator on a Windows PC to connect to a mainframe to fill out weekly timesheets for payroll, instead of a web app or something.
I'll concede that I like the Eclipse IDE, originally developed at IBM, a lot for Java. I don't think the IDE is good for other programming languages or for non-programming things like team communication and task management.
I've seen a lot of failed projects for data entry apps because the experienced workers tend to prefer the terminals over the web apps. Usually the requirement for the new frontend is driven by management rather than the workers.
Which is understandable to me as a programmer. If it's a task I'm familiar with, I can often work much more quickly in a terminal than I can with a GUI. Assuming that this is different for non-programmers, or that they're all scared of TUIs, is often a mistake. The green screens also tend to have fantastic tab navigation and other keyboard navigation functionality that I almost never see in web apps (I'm not sure why, since I'm not a front-end developer, but maybe somebody else could explain that).
I'll defend green screens all day long. Lots of people like them and I like them.
Everything else you listed I would agree with you about being terrible and mostly hated though.
We were given old Macs running Classic just to run Notes, so we had two computers, the other being a Mac OS X box. Notes was the biggest pile of crap I've ever had to use. With one exception…
On the OSX box we were happily running svn until we were forced to use some IBM command-line system for source control. To add insult to injury, the server was in Texas and we were in Boca Raton (old PC factory as it happens). The network was slow.
It had so many command-line options that a guy wrote a Tcl wrapper for it.
Adding to that, the local IBM LAN was Token Ring and we were on Ethernet. That was fun.
I have no idea how/why IBM of all places developed or sold this software but it badly needs to die in a fire.
Database technology which would seem outdated in 1994 with a UI and admin management tools to match.
It works great for Python and C++, honestly. If you're a solo dev, Mylyn does a great job of syncing with your in-code todo list and issue tracker, but it's not as smooth as the IDE side.
However, its Git implementation is something else. It makes Git understandable and lets that knowledge bleed back into the git CLI. That's why I've been using it for 20+ years now.
https://trends.google.com/trends/explore?date=all&q=terrafor...
That being said, it'll be interesting to see if it's still a rounding error 2 years from now.
Not a product, but a service: is Red Hat Linux a counter example?
I worked for a company acquired by IBM, and we held hope like you are doing, but it was only a matter of time before the benefit cuts, layoffs, and death of the pre-existing culture.
Your best bet is to quit right after the acquisition and hope they give you a big retention package to stay. These things are pretty common to ease acquisition transitions and the packages can be massive, easily six figures. Then when the package pays out you can leave for good.
None of that has happened for us at Red Hat. Other than the one round of layoffs, which occurred when basically every tech company everywhere was doing much larger layoffs, that was pretty much it; and there's no reason to think our layoffs wouldn't have been much larger at that time had we not been under the IBM umbrella.
Besides that, I don't even remember when we were acquired; absolutely nothing has changed for us in engineering. We have the same co-workers, are still using all Red Hat email / intranets / IT, etc., and there's still a healthy promotions pipeline. I don't even know anyone from the IBM side. We had heard all the horror stories of other companies IBM acquired, but for whatever reason it's not been that way at all for us, at least in the engineering group.
We had a really fun time where the classic s-word was thrown around... "s y n e r g y". Some of the folks I got to meet across the aisle had a pretty strong pre-2010 mindset. Even around opinions of the acquisition, thinking it was just another case of SOP for the business and we'd be fully integrated Soon™.
The key thing people need to remember about the Red Hat acquisition is that it was purely for expertise and personnel. Red Hat has no (or very little) IP. It's not like IBM was snatching them up to take advantage of patents or whatnot. It's in their best interest to do as little as possible to poke the bear that is RH engineering, because if there were ever a large-scale exodus, IBM would be holding the world's largest $34B sack of excrement we've seen. All of the value in the acquisition is the engineering talent and customer relationships Red Hat has, not the products themselves. The power of open source development!
It's heartening to hear that your experience in engineering has been positive (or neutral?) so far. Sales saw some massive churn because that's an area IBM did have a heavier impact in. There were some fairly ridiculous expectations set for year-over-year, completely dismissing previous results and obvious upcoming trends. Lost a lot of good reps over that...
I know there are horror stories around this acquisition and lots of predictions about what will happen, but only time will tell. At a minimum, it has been a delight to use the HashiCorp software stack along with the approach they brought to our engineering workflow (remember Vagrant?). These innovations and approaches aren't going away.
I used it literally this year to create a test double of the NUC that runs my home automation stack. I also used Packer to configure Flatcar and create the qcow2 that Vagrant consumes.
Vagrant is still the best tool for creating a general purpose VM on your machine. It got kind-of forgotten in the containers and Kubernetes hype, but it still gets the job done. Packer is also the best tool for creating VM images that got buried for the same reasons.
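For context on how compact that workflow can be: Packer's newer HCL2 templates make building a qcow2 for Vagrant/libvirt a short file. A minimal sketch using the QEMU builder, where the ISO URL, checksum, credentials, and names are all placeholders rather than anything from the comment above:

```hcl
packer {
  required_plugins {
    qemu = {
      source  = "github.com/hashicorp/qemu"
      version = ">= 1.0.0"
    }
  }
}

# Build a qcow2 disk image with the QEMU builder; Vagrant (via a
# libvirt provider) or plain qemu can then consume the output.
source "qemu" "flatcar" {
  iso_url          = "https://example.invalid/flatcar.iso" # placeholder
  iso_checksum     = "none"                                # placeholder
  format           = "qcow2"
  disk_size        = "10G"
  output_directory = "output-flatcar"
  ssh_username     = "core"
  ssh_password     = "changeme"                            # placeholder
  shutdown_command = "sudo shutdown -P now"
}

build {
  sources = ["source.qemu.flatcar"]
}
```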
The datacenter is coming back, though. IBM would be smart to invest in these tools as loss leaders to TFE and Vault and monetize the providers, IMO.
There are worse companies to get bought by, but if you've only ever worked at startups then you're not likely to enjoy what this becomes.
When IBM acquired that company, after a few weeks this guy had a meeting with the new engineering people. In the very first meeting, they changed things for him. Instead of a single winding road of development, they wrote out a large spreadsheet: the rows were the distinguishable parts of his largish and clever architecture; the columns were assignments. They essentially dismantled his world into parts for a team to implement. He was distraught. He didn't think like that. They did not discuss it; those were the marching orders. He quit shortly afterwards, which might have been optimal for IBM.
Regardless of the general sentiment, hoping for the best outcome for all of you.
I know we aren't perfect. In fact it's my turn this week, and I've utterly failed at keeping up with triage!!
We get tremendous joy and value from engaging with our community. Thanks for your patience (with me in particular!)
I really love these stories. Our customers constantly shock me with how large of clusters they're able to manage with just a few people. >10k nodes managed by <10 people isn't uncommon, although we're not good at rigorously collecting data on this.
Nomad is far from perfect, but we really strive to ease the pains of managing your compute substrate.
No matter what they tell you, your day to day will not improve. For my area, it was mostly business as usual, but a net decrease in comp because IBM's ESPP is trash.
As you may know, the layoffs that happened were around the same time as the rest of the industry's layoffs (fashion firing), so I don't feel they had a significant effect on the culture.
I am fully remote though, and have been for 15 years.
I'm not going anywhere! My grammar isn't always the best, so I apologize for the confusion.
Then they did the license change, which didn't reflect well on them.
Now it's being sold to IBM, which is essentially a consulting company trying to pivot to mostly undifferentiated software offerings. So I guess Hashicorp is basically over.
I suspect the various forks will be used for a while.
There have been lifecycle rules in place for as long as I can remember to prevent stuff like this. I'm not sure this is a "problem" unique to terraform.
Terraform's provider model is fundamentally broken. You cannot spin up a k8s cluster and then use the k8s provider to configure that cluster in the same workspace; you need a different workspace that imports the outputs. The net result was that we had something like 5 workspaces which really should have been one or two.
A seemingly inconsequential change in one of the upstream workspaces could absolutely wreck resources in the downstream workspaces.
It's very easy in such a scenario to trigger a delete-and-replace, and for larger changes you have to inspect the plan very, very carefully. The other pain point: I found most of my colleagues going "IDK, this is what worked in non-prod" while plans were actively destroying and recreating things. As long as the plan looked like it would execute and create whatever little thing they were working on, the downstream consequences didn't matter (I realize this is not a shortcoming of the tool itself).
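For readers unfamiliar with the workspace-splitting being described: the usual pattern is a `terraform_remote_state` data source in the downstream workspace, roughly like this sketch (backend settings and output names are hypothetical):

```hcl
# Downstream workspace: read the cluster workspace's outputs ...
data "terraform_remote_state" "cluster" {
  backend = "s3"
  config = {
    bucket = "example-tf-state"           # hypothetical bucket
    key    = "cluster/terraform.tfstate"
    region = "us-east-1"
  }
}

# ... and feed them into the kubernetes provider. Provider config is
# resolved at plan time, which is exactly why this typically can't
# live in the same workspace that creates the cluster.
provider "kubernetes" {
  host                   = data.terraform_remote_state.cluster.outputs.endpoint
  cluster_ca_certificate = base64decode(data.terraform_remote_state.cluster.outputs.ca_cert)
  token                  = data.terraform_remote_state.cluster.outputs.token
}
```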
That's what I expected lifecycle.prevent_destroy to do when I first saw it, but indeed it does not.
like, what happens if you forget to free a pointer in C? sorry for the snark, but there is an unbelievable number of things to complain about in tf; never heard this one.
Of course you're going to hurt yourself. If you didn't put lifecycle blocks on your production resources, you weren't organizationally mature enough to be using Terraform in production. Take a Terraform Associate course; this specific topic is covered in it.
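For anyone who hasn't seen one, the lifecycle block being referred to looks like this (the resource and its other arguments are illustrative, not from the thread):

```hcl
resource "aws_db_instance" "prod" {
  # ... engine, instance_class, and other required arguments elided ...

  lifecycle {
    # Terraform refuses to generate a plan that would destroy this
    # resource; the plan errors out instead of silently replacing it.
    prevent_destroy = true
  }
}
```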
$ terraform console
> var.whatever
"its value"
> whatever_resource.foo.whatever_attr
"its value"
If you mean somehow printing things while the configuration is being applied... I think you just need to understand that it's neither a procedural language (it's declarative) nor general-purpose (it's infrastructure configuration).

Plus, there are many times I don't want to have to use the REPL. Maybe I'm in CI or something. The fact that I can't easily iterate over the values of locals and variables, to see what they are in, say, some nested list or object, and just print them out as I go for the things Terraform does know, is just crappy design.
So no. Terraform has the information internally in many cases. There’s just no easy way to print it out.
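In practice, the closest thing to a print statement outside the REPL is promoting the value to an output, which shows up in plan/apply and can be dumped with `terraform output -json`. A sketch, with a made-up local:

```hcl
locals {
  # hypothetical nested value you want to inspect
  subnets = {
    a = "10.0.1.0/24"
    b = "10.0.2.0/24"
  }
}

# Surfacing it as an output is the de facto "print": the value shows
# up in the apply summary and via `terraform output -json`, including
# in CI where a REPL isn't available.
output "debug_subnets" {
  value = local.subnets
}
```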
I find this a very strange criticism; if anything, it's probably indicative of a poor workflow or CI/CD system.
No serious organization with any scale is going to have the only thing standing between them and a production database deletion being two over tired engineers rubber stamping each other's code changes.
Bad config pushes to prod do happen, and they can cause outages like the 2024 CrowdStrike outage. You don't want a tool that takes a minor (but significant) error and turns it into a catastrophic one because of poorly thought-out semantics. It's better to just start with a tool that requires at least two engineers to explicitly sign off on deletion.
Still not envisioning what your expected solution would look like.
You check the plan, you assess the actions it will take, and you either abandon the plan or apply it. You can't roll back destructive changes in cloud environments; that's not possible today, for obvious reasons.
What IaC does provide is a mechanism to quickly rebuild if you do accidentally wipe out resources.
I've worked in environments where our entire fleet was wiped out by AWS (thousands of VMs), and we were able to rebuild in hours because of IaC.
> No serious organization with any scale is going to have the only thing standing between them and a production database deletion being two over tired engineers rubber stamping each other's code changes.
Most "serious organizations" either have policy as code (Sentinel) or are running Terraform with credentials/roles that have reduced capabilities.
> Bad config pushes to prod do happen and they can cause outages like the 2024 Cloudstrike outage. You don't want a tool that takes a minor (but significant) error and turns it into a catastrophic one because of poorly thought out semantics. It's better to just start with a tool that requires at least two engineers to explicitly sign off on deletion.
This is a criticism of all software/infrastructure deployments with no guardrails. There's nothing stopping you from having two engineers sign off on a TF plan. You can absolutely build that system on top of whatever pipeline you are running.
* Funny you mention databases because that's one of the few AWS resources that can be guarded in TF directly - https://registry.terraform.io/providers/hashicorp/aws/latest...
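That guard is worth distinguishing from Terraform's own lifecycle.prevent_destroy: it's a server-side flag that AWS itself enforces. Sketched roughly, with the other arguments elided:

```hcl
resource "aws_db_instance" "prod" {
  # ... other required arguments elided ...

  # Enforced by AWS, not Terraform: the DeleteDBInstance API call
  # fails while this flag is set, regardless of what a plan asks for.
  deletion_protection = true
}
```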
Aside from not "feeling" secure, the only thing everyone actually wants is a Windows AD file share with ACLs.
It's just that no one realises this: all the Vault on-disk encryption and unsealing stuff is irrelevant; it's solving a problem handled at an entirely different level.
Actually for me, the company I was at that IBM purchased was on the verge of folding, so in that case, IBM saved our jobs and I was there for many years.
Now, we are actively hiring for numerous positions.
Personally, I am not planning to stay much longer. I had hoped that our corp structure would be similar to Red Hat's, but it seems that they intend to fully integrate us into the IBM mothership.
End of an era.
---
[1]: https://blog.webb.page/2018-01-11-why-the-job-search-sucks.t...
I'm so sorry that happened to you :( I hope you found somewhere else that filled you with excitement.
Still broadly correct.
If never profitable (or terrible return on equity), why would you call the layoffs "arbitrary"? It seems pretty reasonable to me.
Q: What do you get when you cross Apple and IBM?
A: IBM.
But then the joke was on me when I finally worked for a company owned by Apple and IBM at the same time, and experienced it first hand!
I gave Lou Gerstner a DreamScape [4] demo involving an animated disembodied spinning bouncing eyeball, who commented "That's a bit too right-brained for me." I replied "Oh no, I should have used the other eyeball!"
Later when Sun was shopping itself around, there were rumors that IBM might buy it, so the joke would still apply to them, but it would have been a more dignified death than Oracle ending up lawnmowering [5] Sun, sigh.
Now that Apple's 15 times bigger than IBM, I bet the joke still applies, giving Apple a great reason NOT to merge with IBM.
[1] https://en.wikipedia.org/wiki/Kaleida_Labs
[2] https://en.wikipedia.org/wiki/Taligent
[3] https://en.wikipedia.org/wiki/AIM_alliance
I'm a heavy user of Terraform and Vault products. Both do not belong to this era. Also worked for a startup acquired and dumped by IBM.
So do you find Terraform and Vault good or bad? (Sorry, I'm not a native English speaker and had trouble parsing the sentence.)
Secrets whatever your cloud provider has (Google secrets manager etc).
Crossplane is excellent, but you need to understand CRDs and kubectl at what I'd consider an intermediate level to really grok it, whereas Terraform's CLI is almost fool-proof.
Relying on cloud key vaults is expensive and locks you in. Vault and Consul can run anywhere, even in your toaster, and they also support those same KMS backends. Plus there's a dead-easy TUI and GUI with Vault Enterprise.
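For what it's worth, a self-hosted Vault really is a small amount of config. A minimal single-node sketch (paths and addresses are hypothetical, and TLS is disabled only because this is a local demo):

```hcl
# vault.hcl - minimal single-node server with integrated (Raft) storage
storage "raft" {
  path    = "/var/lib/vault"
  node_id = "node-1"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = true # local demo only; use real certs anywhere else
}

api_addr     = "http://127.0.0.1:8200"
cluster_addr = "https://127.0.0.1:8201"
ui           = true
```

Started with `vault server -config=vault.hcl`, then initialized and unsealed once; auto-unseal via a cloud KMS is the optional part, not a requirement.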
What, in this era, replaces provisioning cloudy stuff that doesn't require heaps of YAML or a bootstrap Kubernetes cluster for operators to run within?
Some of this is obvious (Linux and mainframes aren't a bad combo). Some of it I'm a bit surprised by (OpenShift revenue seems strong).
Probably already basically returned purchase price in revenue and much more than purchase price in market cap.
A noticeable thing: in most of these types of plays, the home page has stacked toolbars / marketing / popups / announcements from the parent company and their branding everywhere ("IBM XXX powered by Red Hat")... I see very little IBM logo or corporate pop-up policy jank on redhat.com.
People who worked at companies acquired by IBM and could not afford going anywhere else.
A mixture of both will be involved from now on in decision making regarding your platform formation core products.
And Hashicorp are experts in HCL so I am sure they will love it.
I only correct you because it's an even bigger indictment of Notes that IBM switched off of it.
I knew the company had lost the plot at that point.
It feels quite ridiculous, especially if you are managing "soft" resources like IAM roles via Terraform / Pulumi. At least with real resources (say, RDS instances), one can argue that Terraform / Pulumi pricing is a small percentage of the cloud bill. But IAM roles are not charged for in the cloud, and there are so many of them (especially if you use IaC to create a very elaborate scheme).
The kind of customers it is good to have.
Because filtering out price sensitive customers is a sound business strategy.
As a rule of thumb, solve any problem your customer might have. Except not having money.
Which is to say strong sustainable products need both.
... but ffs don't let the entire company use enterprise as a reason to ignore practitioner feature requests.
However, to play devil's advocate, the number of Terraform resources is a (slightly weak) predictor for resource consumption. Every resource necessitates API calls that consume compute resources. So, if you're offering a "cloud" service that executes Terraform, it's probably a decent way to scale costs appropriately.
I hate it, though. It's user-hostile and forces people to adopt anti-patterns to limit costs.
The previous pricing model, per workspace, did the same. Pricing models are often based on "value received", and therefore often can be worked around with anti-patterns (e.g. you pay for Microsoft 365 per user, so you can have users share the same account to lower costs).
In that world, I think it'd make more sense to charge per run-time second of performing an operation. I understand the argument you are making but the issue is you get charged even if you never touch that resource again via an operation.
It might make sense if TFC did something, anything, with those resources between operations to like...manage them. But...
That would make sense if you paid per API call to any of the cloud providers.
The previous "per apply" based model penalized early stage companies when your infrastructure is rapidly evolving, and discouraged splitting state into smaller workspaces/making smaller iterative changes.
Charging by RUM more closely aligns the pricing to the scale/complexity of the infrastructure being managed which makes more sense to me.
That said it has tempted me to move management of more resources into kubernetes (via cross plane/config connector)
There were runtime limits IIRC but there was nothing stopping Hashicorp offering a “per user” fixed rate plan at several hundred dollars per month to enterprises for the same service.
The various clients I've worked for who used TF would have lapped this up. RUM (or the equally opaque "call us" - we won't answer! - enterprise pricing that preceded it), not so much.
Not great for investors, but insiders benefitted a lot!
They will get capital losses.
That's not perfect.
My doubt in the value of the company was that I've been using Terraform for years in Enterprise settings and never needed to pay the company for anything.
Running a few products. Quoted $1MM or so over 3 years for support. I was able to say no and saved six figures each month.
Retail will always be holding the bag. This is known.
They didn't with HashiCorp certainly. Bought some but not too much and were part of a housecleaning a few years back (which I'm glad I did).
lmfao what the fuck? The source they reference: https://www.idc.com/getdoc.jsp?containerId=US51953724
These clowns want $2500 goddamned american dollars for the privilege of reading their bloviations on this topic, which i absolutely will not pay.
You know it's bad when the only people making money on this crap are management consultants.
Thinking back to 2014 using vagrant to develop services locally on my laptop I never would have imagined them getting swallowed up by big blue as some bizarre "AI" play. Shit is getting real weird around here.
You aren’t the target market for their “bloviations” - they are targeted at executives, and it isn’t like the executive pays this out of their own pocket, there is a budget and it comes out of the budget. Plus these reports generally aren’t aimed at technical people with significant pre-existing understanding of the field, their audience is more “I’m expected to make decisions about this topic but they didn’t cover it in my MBA”, or even “I need some convincing-sounding talking points to put in my slides for the board meeting, and if I cite an outside analyst they can’t argue with that”
Commonly with these reports a company buys a copy and then it can be freely shared within the company. Also $2,500 is likely just the list price and if you are a regular customer you’ll get a discount, or even find you’ve already paid for this report as part of some kind of subscription
Who might not have much of an engineering team, or not one with relevant expertise… and why should they trust the vendor’s engineering team? If they are about to sign a contract for $$$, being able to find support for it in an independent analyst report can psychologically help a lot in the sales cycle
While the most useful reports for sales are those which directly compare products, like Gartner Magic Quadrant or Forrester Wave - a powerful tool if you come out on top - these kind of more background reports can help if the sales challenge is less “whose product should I buy?” and more “do I even need one of these products? should we be investing money in this?”
Hopefully they do the right thing and hand hashicorp over to Redhat so they can open source the shit out of it. So they can do things like make OpenTofu the proper upstream for it, etc.
"Modern digital businesses need to be able to adapt to changing end-user demand, and since feature flags decouple release from deployment, it provides a good solution for improving software development velocity and business agility," said Jim Mercer, program vice president of IDC Software Development DevOps and DevSecOps. "Further, feature flags can help derisk releases, enable product experimentation, and allow for targeting and personalizing end-user experiences."
Wait. What? This reminds me of the trope of the "wikipedia citation" in high school and college.. that move was worth at most a C+. Are you seriously saying these fucks actually seriously cite this bullshit? In this day and age where even crowdsourced wiki articles seem "credible"? What the actual fuck? I hate this shit.
After the haze of the LLM bubble passes, I hope startups have an exit strategy other than "we'll just get 0.01% of users to pay 6+ figures for support" or "ads".
Good tech deserves a good business model such that it can endure for the long term.
this sounds like corporate AI slop
I met some great people along the way that I'm glad to have gotten the opportunity to work with. Godspeed all!
(Asking for a friend).
In any case, make sure to reach out via the website chat widget / email / demo form, we’re happy to help!
The migration from Terraform to OpenTofu is pretty seamless right now, and documented in the OpenTofu docs[2].
[0]: https://github.com/spacelift-io/spacelift-migration-kit
[1]: https://spacelift.io/blog/how-to-migrate-from-terraform-clou...
[2]: https://opentofu.org/docs/intro/migration/
Disclaimer: I work at Spacelift.
If given the chance, just take the exit rather than trying to integrate into IBM.
As someone working at Red Hat since before the acquisition, this does not match my experience of "the Red Hat treatment" even a little bit.
I don't doubt that they've handled acquisitions badly in the past but they did a decent job leaving us alone.
For engineering almost no difference other than switching to Slack.
That said, I think a playbook in HCL would be worlds better than the absolutely staggering amount of nonsense needed to quote Jinja2 out of YAML.
I would also accept them just moving to the GitHub Actions style of ${{ or ${%, which would for sure be less disruptive and (AIUI) could even be opt-in, by just promoting the `#jinja2:variable_start_string:'${{', variable_end_string:'}}'` override up into playbook files, not just .j2 files.
https://docs.ansible.com/ansible/11/collections/ansible/buil...
For simple use cases, sure, but you could also just use AWS ECS or a similar cloud tool for an even easier experience.
Most of my issues with it aren't related to the scale though. I wasn't involved in the operations of the cluster (though I did hear many "fun" stories from that team), I was just a user of Nomad trying to run a few thousand stateful allocs. Without custom resources and custom controllers, managing stateful services was a pain in the ass. Critical bugs would also often take years to get fixed. I had lots of fun getting paged in the middle of the night because 2 allocs would suddenly decide they now have the same index (https://github.com/hashicorp/nomad/issues/10727)
I know, it's really sad. Kubernetes won because of mindshare and hype and 500,000 CNCF consulting firms selling their own rubbish to "finally make k8s easy to use".
Salt and Puppet both don't seem in a great place.
System Initiative is just AWS still, yeah?
Welp.
We were ready to invest in HC tools, but they were so damn brittle once we actually got past the smooth apple-like marketing and actually used them. Plenty of odd, not-well-documented behavior, random crashes, even the clusters required a bunch of manual steps to recover from. A major reason to even have a cluster is the self-healing, something tools like MongoDB did right 10+ years ago. Yet we had to manually edit a peers.json file and do all this garbage half the time when our clusters kept dying. It was infuriating. I kept insisting my devops guy had to be wrong when he told me that was the way it was done. I just couldn't believe that anything in 2021 required manual editing of JSON files when we have a million different self-discovery mechanisms (whether it's cloud metadata, or mDNS, etc). But he was 100% right, much to my disbelief.
So we ultimately pulled the plug after months of HC stuff running our QA system, because I just didn't feel comfortable pushing it to production given all the random crashes and behavior issues. And I feel vindicated.
I think if their stuff was more solid, I would 100% be happy to pay for it for our use cases. I thought generally that their ideas and levels of abstraction felt "right"