Having worked in corporate roles adjacent to software purchasing, I'm confused about why so many small companies think an enterprise would be excited to go with them.
Even if I love your product, how do I pitch to the powers that be that we replace something we are already paying for with this new thing? The company might make billions but I've always had to fight for my budgets.
And tell me again why we should bet our core operations on a two-man outfit with six months of runway? What happens when you pivot? What happens when our competitor acquires you? What happens when you go on a transatlantic flight and a key expires?
Selling to enterprise early on is a poisoned chalice as well. They have much larger teams, so you'll be dealing with a horde of product owners, compliance specialists, and data privacy experts who might never touch your product but come armed with Excel sheets of 300 gnarly questions. Not to mention that just getting the bills paid can be a huge fight.
It will drag you into their orbit, especially if 80% of your revenue is from a single customer. Soon your other customers will start going to someone who actually has time to care about them. And by then there's been a political shift in-house and the new VP of X gets a quote for an outsourcing bundle from his squash buddy at one of the big system integrators. Your line item gets bundled into this to justify the cost even though it's not even relevant. And that's the end of your company.
If you do want to sell, treat the enterprise like an ecosystem of SMEs: find a department or team that is more innovative and sell to them behind the back of enterprise IT. Once you've entrenched yourself and the users love you, you can expand to other teams, and eventually enterprise IT will be forced to negotiate a license with you and do the compliance dance. But even so, this will take years of effort and luck.
I worked at a well-respected technical company and was given the task of evaluating a small company that we could acquire. I looked at the technology: something anyone could put together in a day. I looked at the business model: you get free storage if you get a friend to sign up for free storage!
I told the company that it had no technology and a business model that made no sense. They bought the company. Why? Because the target company told them that other companies were interested - and they were.
They did not want to miss the boat and lose what they had. Nothing came from this acquired company. Meanwhile the fundamental technology was disrupted by something new and the company fell apart. End of story. This is common.
So AI? This is about not missing the boat. Someplace, somewhere there is value in AI, but for now, if you have missed the boat you are probably better off. So no, this is not (as the current top comment says) about "they couldn't sell their software". This is about a very real reason why companies try to not miss the boat rather than innovate.
[ASIDE] And I cannot help but laugh at the Clojure reference with the statement "two things are simple if they are not intertwined". I have always been interested in Clojure, but I never go there because it is not "simple". It is intertwined with Java, which I know all too well and do not love. Java was the language of choice at this same company, and I wasted too many months of my life bowing before that cumbersome language.
IT is a highly dynamic system, and enterprises optimize for a minimal set of capabilities at the maximum level of abstraction under high levels of uncertainty and different inherited states.
This results in decisions that may not appear technically optimal but which are still an optimal outcome under the extreme uncertainty that an 'enterprise' operates in vis a vis technology paradigms.
Add to this that there is no one technology operating model. Everyone has a different starting point and different inherited technical debt. They are optimizing from their own starting point, not a clean slate.
This is what people don't get about what Microsoft actually does - it abstracts both at the technical level and the operational (contracting) level. This is valuable for an organization whose core competency is not technology, even if it does not lead to the most optimal outcomes from a pure technology perspective.
Companies pick Java or .NET because hiring developers is easy, which the business side loves, and a lot of business development work is not rocket science. It's taking business logic and implementing it in code.
I recommend this blog article for understanding the logic behind Java, though it applies to the other technologies in question as well: https://gist.githubusercontent.com/terryjbates/3fcab7b07a0c5...
Isn't familiarity with the language even more the case with an LLM? The language they do best with is the one with the largest corpus in the training set.
But that is the "fear" side of the enterprise sales equation... The "greed" side of it is for the buyer to make the long / short hedge.
The exec who gets the value of the working product can potentially come out shining, when their peers will be furiously backpedalling next year. And this consummate exec can do it by name-associating with their "main bet" which is optically great for the immediate term but totally out of their control (because big corp vendor will drag its feet like every SAP integration failure they've seen), and feeling a sense of agency by running an off-books skunkworks project that actually works and saves the day.
A fine needle to thread for the upstart, but better than standing outside the game.
> Clojure was not a hiring barrier - it was a hiring filter.
It makes me think of this HN comment: https://news.ycombinator.com/item?id=11933250

> Jane Street Capital's Yaron Minsky once said that contrary to popular belief, hiring OCaml developers was easier because the signal-to-noise ratio in the OCaml community is so much better than in other, more approachable languages.
I saw a YouTube video years ago that featured Yaron Minsky. He made similar points. In short, some programming languages are like catnip for excellent programmers.

HN discussions seem to miss this: what LLMs are, before you use them for anything agentic, is a lossy compression of a large text corpus. The original wikis have to survive so you still have access to the non-lossy version, though.
This is still true today. Gartner makes a living out of it. Always prefer buying the "familiar" product rather than being successful with the right solution.
Fortunately, history shows that those who do their math right can end up extremely successful: Google running its DB servers on commodity Linux hardware, AWS developing its own network equipment and protocols, etc. It takes guts, but when it works it leaves the competition years behind.
This friction, and the line dividing solutions from consulting, gave me an idea: they're describing conditions where the LLM revolution might track with the desktop revolution. Companies, groups within companies, and small businesses will DIY it and call it good enough.
So while it's fair to say enterprise users buy safety, if he's referring to his own product I would offer the following.
He's in the AI tooling space, i.e., a better RAG. So you're selling to AI developers, and developers nearly always go open source first.
If they can't find an open source solution or if they don't even look, they prefer to build it themselves.
For this kind of product most enterprise buyers won't understand its benefits, you have to get the developers interested first.
And finally, in this market, you are one prompt away from someone cloning your whole business and calling it openaxon or something like that.
It's a tough time to be a software startup.
In the same article, the author mentions a few expert systems from the past that were quite obviously successful.
> on the promise printed on its marketing
Ah, _that_ promise. That promise is never fulfilled anywhere, nor is it expected to be.
This dynamic is not new: unsophisticated enterprise buyers making bad decisions in a bad way. We haven't yet had overwhelming market discipline come down on it, though.
Do these enterprises actually need "good?"
- Enterprise buyers are risk averse and buy the wrong thing
- Language X is better because the people that use it are smarter
- New tech is difficult for established players
Not really a fresh take but at least it's well written.
The insight here is that this also still applies to huge enterprise contracts where supposedly more rational decision making should apply.
The author has a new thing which is different - unfamiliar - and ostensibly better. To a customer, when is a claim of "better" credible, and when is better really better? How does better measure up as a benefit?
The challenge for any product story is to a) illuminate the need - why is the status quo intolerable and b) communicate the benefit tangibly to your audience. That the audience thinks your new thing is worth the effort depends on them understanding the new thing, feeling the need, and feeling good about the effort needed to exploit your thing. You'd like to get to your customer saying "I want that".
I think the specific question for axonlore.com is communicating benefit - how does it impact whatever workflows it serves? The website is a "thing" story, vs a benefit story in my view. I like "enterprise intelligence" as a thing, but it's a tough product. It inevitably implies culture change, and in the decision making space, the key people think they are intelligent enough already -- they want to scale themselves. Someone mentioned "better RAG" - maybe the story is how agents can perform better and more cost effectively. I am not clear that "the market" knows that it needs that yet.
I don't think "familiarity" is the right framing. Application automation, or workflow automation, or whatever the enterprise framing is of agentic solution generation, is to me a question of variance and effort: variance in the quality of a work product, and the net effort to produce it. Variance is the complement of familiar.
- high variance / low effort: prototypes
- low variance / low effort: automating anything repetitive and complicated
- low variance / high effort: demonstrated need for precision and or reliability
- high variance / high effort: when there seems like potential huge upside, or existential risk.
From an IT perspective, the enterprise status quo is low variance/high effort. The market "want" here now with "agentic" seems to be the benefit of low variance/low effort solutions ... where, in the enterprise, getting an adequate solution is no longer gated on negotiating with or relying on IT or dev. Ultimately, I think enterprises want low variance, low effort operations -- customers of enterprise customers pay for low variance. I think the question for an agentic-IT solution will be how confidently one can iterate and converge to that from whatever is delivered in the first pass. What's the ultimate effort of getting something "right enough"?
That's why VCs look favorably on startups that go through the motions of setting up a partner-led sales channel. An established partner taking on maintenance contracts bridges the disconnect in the lifecycle gap between the two realities.
But no, corporate is bad, I guess.
Huh? All current and previous-gen models are most effective when coding in the languages with the most training data.
While I agree the newest frontier models may be smart enough to reason at a lower level and be language-agnostic, their "relatively dumber / less capable" forebears need lots of examples to pattern-match from.
Familiarity once again!
One should not underestimate a "compression primitive with a chat interface". For certain tasks it is a superpower.
As the article notes, the alternatives from the large companies suck. So this is like buying fire insurance from a company that promptly sets fire to your house. You are buying the insurance while knowing you will need it because the disaster is already happening.
This is correct and agreeable to everyone, but then after some waffle they write this:
> Structure, for the first time, can be produced from content instead of demanded from people
These quotes are very much at odds. Where is this structure and content supposed to come from if you just said that nobody makes it? Nowhere in that waffle is it explained clearly how this is really supposed to work. If you want to sell AI and not just grift, this is the part people are hung up on. Elsewhere in the article are stats on the hallucination rates of the bigger offerings, and yet there's nothing to convince anyone this will do better other than a pinky promise.
Imagine a model with a reliable 100M-token context window. Then all of a sudden you can.
> The information the intelligent answer needs was never in the wiki in the first place.
Oh well.