- Work at a hedge fund
- Every evening, the whole firm "cycles" to start the next trading day
- Step 7 of 18 fails
- I document Step 7 and then show it to a bunch of folks
- I end up having a meeting where I say: "Two things are true: 1. You all agree that Step 7 is incorrectly documented. 2. You all DISAGREE on what Step 7 should be doing"
I love this story as it highlights that JUST WRITING DOWN what's happening can be a giant leap forward in terms of getting people to agree on what the process actually IS. If you don't write it down, everyone may go on basing decisions on an incorrect understanding of the system.
A related story:
"As I was writing the documentation on our market data system, multiple people told me 'You don't need to do that, it's not that complicated'. Then they read the final document and said 'Oh, I guess it is pretty complicated' "
We’re going to write down what Step 7 currently is/does. No, now is not the time to start discussing what it ought to do. Please let us just get through sorting out what Step 7 currently is. Yes, some people do it differently. That’s why we hit a snag. Let’s just pick one of those wrong ways, document it, and do it all wrong together. We’ll fix it as a separate step. Now isn’t the time to fix it, as much as it feels like a convenient time to.
With that out of the way, the original article and this comment thread really make me feel good by giving me a sense of being right.
> Let’s just pick one of those wrong ways, document it, and do it all wrong together.
This reminds me of my colleague who established the importance of consistency very early.
"If you need to be wrong to be consistent, be consistently wrong".
They said this sarcastically, but... if you are the only sane one among a group of insane people, now you are the insane one.
1) everyone agrees that it's stupid to move the clocks, we have electric lights now
2) NOBODY agrees which timezone everyone should be when we stop messing with the time.
Because solving 1 & 2 at the same time is nearly impossible, nothing will happen. What they should do is agree on 1. Write it into irrevocable law. THEN start arguing about 2.
Writing stuff down is great since it provides a baseline to agree upon, and later additions to the team will take it as given and not start to discuss minutiae and bog down discussions into nothingness. And if some point really is worth discussing, it shouldn't be hard to find support to change it. I've heard some wild misunderstandings of how things were based on how they were being done, and now I never want to do anything of any significant size without there being a clear and obvious process to it.
In Charlie Beckwith's book about Delta Force [0] there is a line where he says (paraphrasing):
"The SAS never wanted to write down what their role was and what tasks they were trained for. Why? Because they didn't want to get pigeonholed into a role. ... They also never wrote down their SOPs b/c the argument was that 'if you can't keep it in your head, you shouldn't be in the Regiment'. At Delta, we were going to write down our mission AND write down our SOPs."
1) design smart(er) requirements, i.e. beat up the ask and rewrite the problem statement correctly. 1B: every requirement has a person's name attached, someone who is traceable/responsible for its inclusion, not a department.
2) delete features you don’t need or which are hedges (if you aren’t adding back 10% of the time, then you aren’t deleting enough)
3) simplify or optimize. This step must come after 1 and 2 so you aren’t wasting effort optimizing the wrong thing
4) accelerate
5) automate
This way it is very clear where AI plugs in, and more importantly, WHEN it plugs in.
Also, plenty of times people try to run this process backwards, with poor outcomes.
Write down the problem. Think very hard. Write down the solution.
In the world of Business IT, we get seduced by the shiny new toy. Right now, that toy is Artificial Intelligence. Boardrooms are buzzing with buzzwords like LLMs, agentic workflows, and generative reasoning. Executives are frantically asking, "What is our AI strategy?"
But here is the hard truth:
There is no such thing as an AI strategy. There is only Business Process Optimization (BPO).
This is well-expressed, and almost certainly true for an overwhelming majority of companies.
Almost all of the tech debt we have was introduced by leadership guidance to ignore it. And all the additional debt to manage or ameliorate it (since problems don't just go away) also comes from leadership guidance to fast-track fixes.
What happened to the days where software engineers were the experts who decided tech priority?
My takeaway was that the project was doomed because it was named wrong. Should have been called Business Process Design.
They are now owned by Private Equity. I can only wonder what madness they would have wrought with AI.
They tried to implement a system whereby a customer has a single customer number. Between mergers, acquisitions and shutdowns it was impossible to keep straight and to keep tracking history. It impacted rates, contracts, sales commissions, division revenue, everything. In the end they gave everyone a new number while still using the old ones.
The processes suck because of decades of corner cutting and "fat" trimming while the executives congratulate themselves for only making the product a biiiit worse in exchange for a 0.0005% cost reduction, before then offsetting any gains by giving themselves all the money that would've gone to whatever is now dead.
Repeat this process for 30 years and you have companies like Microsoft that can barely ship anything that works anymore, and our 4 Big Websites frequently just fail to load pages for no explicable reason, Amazon goes down and takes 1/3 of the internet with it, and AI companies are now going to devour the carcass of our internet and shit it back to us in LLM waffle while charging us money for the privilege to eat it.
I do agree on execs congratulating themselves afterwards though. It was obscene last year. This year it was mildly muted.
Not really, because solving those problems with headcount defeats the point. Part of the definition of those kinds of problems is that solutions involving headcount are invalid.
"it's always the process, stupid!
In the context of business IT, it is always and only about BPO, nothing else. So if you want to be successful implementing AI in the enterprise context, you have to handle it like every other software tool: look at your processes and find out how AI, agentic or not, can help optimize the process or help build a better one. Like every technology, AI doesn't make you more intelligent, it only makes you faster.
write me a blog post to express these ideas"
and i added this in a second step
"i need to integrate another idea:
AI is very good at handling unstructured data; in fact, it's the first tool that is very useful for that. But most processes that use unstructured data are not documented, as they are also often unstructured. So to implement AI in processes, we must improve our process design"
Fact: when you know what you want to achieve, AI can be very useful, especially for lazy people like me ;)
There are some people who insist on spamming out splog posts in that style, some of them think they are blogging, not splogging, and maybe they have good intentions but that style screams "SPAM!" and unfortunately people who are writing that don't understand how it comes across.
In fact, if an AI strategy becomes business process optimization, I'd say that AI strategy for that company is successful.
There are too many AI strategies today that aren't even business process optimization, are detached from the bottom line, and are just pure FOMO from the C-suite. Those probably won't end well.
On the other hand, I have seen process stifle above average people or so called “rockstars”. The thing is, the bigger your reliance on process, the more you need these people to swoop in and fill in the cracks, save the day when things go horribly wrong, and otherwise be the glue that keeps things running (or perhaps oil for the machine is more apt).
I know it’s not “fair”, and certainly not without risk, but the best way I have (personally) seen it work is where the above average people get special permissions such as global admin or exemption from the change management process (as examples) to remove some of the friction process brings. These people like to move fast and stay focused, and don’t like being bogged down by petty paperwork, or sitting on a bridge asking permission to do this or that. Even as a manager, I don’t blame them at all, and all things being equal, so long as they are not causing problems I think the business would prefer them to operate as they do.
In light of those observations, I have been wrestling a lot with what it says about process itself. Still undecided.
I doubt there's much that can be done about the specific process to minimise the problems of the rockstars without also causing problems further down the ladder, short of just making exceptions like you said. It's probably just an emergent behaviour of processes like this, which are intended to raise average quality. You pull up the bottom floor, but the roof gets lower as a result. You can find similar problems in schooling.
This is a case of bad process. No process is perfect, but the whole point of process is so when things go wrong they don't go horribly wrong, and that you don't need rockstars to fill in the cracks. It should be making your rockstars faster because the stuff they need others to take care of gets done well. Unnecessary friction that slows people down is generally a sign of management mistaking paperwork for process.
Is it slow and annoying to jump through these hoops? Without a doubt! I’ve also seen people on the other side of the process who are very frustrated that they can’t just escalate when they know devs would want to hear about it. But it’s not acceptable for people to get woken up every week because the new support engineer filed a customer error as a global outage, and smart people tried and failed to put a stop to it through training. I don’t know what the alternative could be.
Like, we recently had an incident where someone just pasted "401 - URL" into the description and sent it off. We recently asked someone to open the incident through the correct channels. We got a service request "Fix" with the mail thread attached to it in a format we couldn't open. We get incidents "System is slow, infrastructure is problem" from random "DevOps" people.
Sadly, that is the crap you need to deal with. This is the crap that grinds away cooperative culture by pure abuse. Before a certain dysfunctional project was dumped on us as "Make it Saas", people were happy to support ad-hoc, ambitious and strange things.
We are now forced by this project to enforce procedure and if this kills great ideas and adventures, so be it. The crappy, out-of-process things cost too much time.
In big corporate environments, ‘around average’ process would be a radical improvement. We are stuck in the reality where standing up a Service Now form is considered great progress.
>Processes that rely on unstructured data are usually unstructured processes.
I appreciate someone succinctly summing up this idea.
- Your process interacts with an unstructured external world (physical reality, customer communication, etc.)
- Your process interacts with differently structured processes, and unstructured data is the best agreed transfer protocol (could be external, like data sources, or even internal between teams with different taxonomies)
- Your process must support a wild kind of variability that is not worth categorizing (e.g. every kind of special delivery instruction a customer might provide)
Believing you can always solve these with the right taxonomy and process diagram is like believing there is always another manager to complain to. Experienced process design instead pushes semi-structured variability to the edges, acknowledges those edges, and watches them like a hawk for danger.
We should ABSOLUTELY be applying those principles more to AI... if anything, AI should help us decouple systems and overreach less on system scope. We should get more comfortable building smaller, well-structured processes that float in an unstructured soup, because it has gotten much cheaper for us to let every process have an unstructured edge.
"Ask the vendor this set of 10 compliance questions. We can only buy if they check every box." is a structured process based on structured data.
Both kinds of processes have always existed, long before modern technology. Though only the second kind can be reliably automated.
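The second kind of process is automatable precisely because it reduces to a checklist; a minimal sketch of the vendor-compliance gate above (the question list and answers are made up for illustration):

```python
# Structured process over structured data: a vendor passes only if
# every compliance question is answered "yes". No judgment calls, so
# it can be reliably automated end to end.
COMPLIANCE_QUESTIONS = [
    "Has a current SOC 2 report",
    "Encrypts data at rest",
    "Supports SSO",
]

def vendor_passes(answers: dict) -> bool:
    """answers maps each question to True/False; missing counts as no."""
    return all(answers.get(q, False) for q in COMPLIANCE_QUESTIONS)

print(vendor_passes({q: True for q in COMPLIANCE_QUESTIONS}))  # True
```

The unstructured kind ("find a vendor you'd trust") has no such reduction, which is exactly why it resisted automation before LLMs.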
Leaders think <buzzy-technique> is a good way to save money, but <buzzy-technique> actually is a thing that requires deeper investment to realize more returns, not a money saver.
I have seen a smattering of instances along the way where the act of defining requirements forced companies to define processes better. Usually, though, companies are unwilling to do this and instead will insist on adding flexibility to the automation tooling, to the point where the tool is of no help.
Which leads us to turning into a different team: we have to go figure out what the process engineering even is, which means becoming a bigger expert than they are at the process they want us to make tooling for.
I have learned to be careful of "too much process", but I find that the need for structure never disappears.
AI deals well with structure. You can adjust your structure to accept less-structured data, but you still need the structure, for after that.
Just maybe not too much structure[0].
I'm now in the process of trying to hand off chunks of the work I do to run my business to AI (both to save time but also just as my very broad, practical eval). It really is all about documentation. I buy small e-commerce brands, and they're simple enough that current SOTA models have more than enough intelligence to take a first pass at listings + financials to determine whether I should take a call with the seller. To make that work, though, I've got a prompt that's currently at six pages that is just every single thing I look when evaluating a business codified.
Using that has really convinced me that people are overrating the importance of intelligence in LLMs in terms of driving real economic value. Most work is like my evaluations: it requires intelligence, but there's a ceiling to how much you need. Someone with an IQ of 150 wouldn't do any better at this task than someone with an IQ of 100.
Instead, I think what's going to drive actual change is the scaffolding that lets LLMs take on increasing numbers of tasks. My big issue right now is that I have to go to the listing page for a business that's for sale, screenshot the page, download the files, upload that all to ChatGPT and then give it the prompt. I'm still waiting for a web browsing agent that can handle all of that for me, so I can automate the full flow and just get an analysis of each listing sent to me without having to do anything.
Honestly, it takes such a relatively small amount of time that it makes sense to just do it myself until there's an agent that can easily handle it; I'm really only spending time trying to automate it now as a test of AI capabilities. If I actually wanted to get it automated tomorrow, the most time-efficient way to do that would just be to involve a VA from somewhere cheap for the work I'm doing.
The useful framing is not “where can we bolt on AI” but “what does the system look like if AI is a first-class component.” That requires mapping the workflow, identifying the decision points, and separating deterministic steps from judgment calls.
Most teams try to apply AI inside existing org boundaries.
That assumes the current structure is optimal. The better approach is to model the business as a set of subsystems, pick the one with the highest operational cost or latency, and simulate what happens if that subsystem becomes an order of magnitude more efficient. The rest of the architecture tends to reconfigure from that starting point.
For example, in insurance (just an illustration, not a claim about any specific firm), underwriting, sales, and support dominate cost. If underwriting throughput improves by an order of magnitude, the downstream constraints shift: pricing cycles compress, risk models refresh faster, and the human-in-the-loop boundary moves. That’s the level where AI changes the system shape and acts beyond the local workflow.
This lens seems more productive than incremental insertion into existing silos.
Example, one of many: in our SDLC process, we now have test cases and documentation that never existed before (coming from a startup).
But I don't blame them. Process optimization is hard. If a new tool promises more speed without changing the process, they are ready to pour money into it.
I recently did a pilot project where we reduced the time for a high-friction IT request process from 4-day fulfillment to about 6 business hours. By "handling text and unstructured data", the process was able to determine user intent, identify key areas of ambiguity that would delay the request, and eliminate the ambiguity based on data we have (90%) or by asking a yes/no question to someone.
All using GCP tools integrating with a service platform, our ERP and other data sources. Total time ~3 weeks, although we cheated because we understood both the problem and process.
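Once an upstream LLM call has extracted the intent and flagged the ambiguous fields, the triage core described above is deterministic; a hedged sketch (field names, the `KNOWN_DATA` lookup, and the question wording are all invented, and the LLM extraction step is not shown):

```python
# Sketch of the request-triage flow: resolve each ambiguous field from
# data we already hold (the ~90% case), and fall back to a single
# yes/no question to a human only when we cannot.
KNOWN_DATA = {  # invented stand-in for ERP / service-platform data
    "cost_center": "CC-1042",
    "manager_approved": True,
}

def triage(intent: str, ambiguities: list[str]) -> dict:
    resolved, questions = {}, []
    for field in ambiguities:
        if field in KNOWN_DATA:
            resolved[field] = KNOWN_DATA[field]   # auto-resolve from data
        else:
            questions.append(f"Is '{field}' required? (yes/no)")
    return {"intent": intent, "resolved": resolved, "questions": questions}

result = triage("new laptop", ["cost_center", "docking_station"])
# cost_center resolves from data; one yes/no question remains
```

The point of the sketch is the split: the LLM handles the unstructured front door, while the fulfillment logic behind it stays boring and auditable.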
For many processes that have just suddenly changed, somewhat subjective evaluations can be made reliably by an AI. At least as reliably as was being done before by relatively junior or outsourced staff.
Replacing low-level employees relying on a decision matrix playbook-type document with AI has a LOT of applications.
Here’s your AI strategy: every few months re-evaluate agent fitness and start switching over. Remember backstops and canaries.
Details:
Businesses usually assign responsibilities to somewhat flaky employees, with the understanding that there will be a percentage of errors. This works OK so long as errors don’t fluctuate wildly and don’t amplify through the system. Most business processes are a mess, and that works OK.
Once agents become less flaky and there are enough backstops to contain occasional damage business will start switching.
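One way to make "re-evaluate fitness, with backstops and canaries" concrete is a canary split that only widens when the agent's error rate beats the human baseline; a rough sketch with invented step sizes and thresholds:

```python
# Canary rollout sketch: route a small share of tasks to the agent,
# compare its error rate to the human baseline, and only widen the
# split when the agent is at least as reliable. Backing off the share
# is the backstop that contains occasional damage.
def next_canary_share(current_share: float,
                      agent_error_rate: float,
                      human_error_rate: float,
                      step: float = 0.1,
                      cap: float = 1.0) -> float:
    if agent_error_rate <= human_error_rate:
        return min(cap, current_share + step)   # expand the canary
    return max(0.0, current_share - step)       # back off (backstop)

share = 0.05
share = next_canary_share(share, agent_error_rate=0.02, human_error_rate=0.04)
# share grows because the agent beat the human baseline this cycle
```

Run it every re-evaluation cycle and the "start switching over" part becomes a measured ratchet rather than a leap of faith.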
There will be a mixture of those who succeeded accidentally, by taking calculated risks, or by virtue of still holding their shit together.
Darwin will chew up the rest, which might be the majority.
> The intelligence (knowing what a "risk" actually means) still requires human governance.
Less and less. Why do you trust a human who has considered 5,000 assessments to understand “risks” and process the next 50 better than an LLM that has internalized untold millions of assessments?
What's the prompt for that one? ;)
> There is no such thing as an AI strategy. There is only Business Process Optimization (BPO).
Exactly, that's the fundamental truth. The shiny tool of the day doesn't change it at all
What does it bring?
AI won't take a shoddy process (say, your process for reviewing and accepting forms from patients) and magically make it better if you don't have an idea of what "better" actually entails.
"Improving a system requires knowing what you would do if you could do exactly what you wanted to. Because if you don't know what you would do if you could do exactly what you wanted to, how on earth are you going to know what you can do under constraints?"
- Russ Ackoff
Did you read the example? The business process of human bias is gone in the cancer detection phase. AI eliminated it.