That's important in places like bodyshops and non-software businesses where no self-respecting engineer with options would want to work, and in places where monotonous quality matters more than brilliance. Keep 'em, as far as I'm concerned.
The Agile manifesto specifically says "Individuals and interactions over processes and tools." If teams are forced to work in a certain way (e.g. scrum) then your organization isn't actually doing Agile.
Based on my experience, that is happening in a lot more places than what you listed. Doesn't help that the price tag of experienced devs has been pushed dramatically upwards in recent years.
If you have a team of quality people but they're all running around in their own direction doing their own uncoordinated thing, they add up to basically zero. Whereas if they're all working towards the same thing in the same direction, their efforts reinforce each other, so the whole is much greater than the sum of its parts.
The expertise of people on a team provides a ceiling for what they can accomplish, that's true. And so you need to make sure that if you've got a task to achieve by a certain date, that it's below their ceiling. The best manager can't raise that ceiling.
But you still absolutely need to manage them in order for them to get anywhere even close to that ceiling. And to go back to your sports metaphor, when an amazing coach replaces a terrible one on a sports team, it's astonishing the difference it makes -- you almost can't believe it's the same team. It's astonishing what happens when a sports team plays "as one" instead of each player doing their own thing no matter how great they are. But that simply doesn't happen organically. It requires a coach, the same as a great team of employees doesn't get there without a great manager.
So ultimately it's not that either matters more than the other. You need both individual expertise and management. Think of the total resulting effectiveness as a multiplication of those two values.
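A toy sketch of that multiplication idea (the function, the 0..1 scale, and the numbers are all my own illustration, not anything measurable):

```python
# Toy model of "effectiveness = expertise x management".
# Both inputs are on a made-up 0..1 scale, purely for illustration.
def effectiveness(expertise: float, management: float) -> float:
    """Overall team effectiveness as the product of the two factors."""
    return expertise * management

# If either factor is near zero, the product is near zero no matter
# how strong the other factor is:
great_team_bad_mgmt = effectiveness(0.9, 0.1)   # ~0.09
weak_team_great_mgmt = effectiveness(0.1, 0.9)  # ~0.09
great_both = effectiveness(0.9, 0.9)            # ~0.81
```

The point of the product (rather than a sum) is that neither factor can compensate for the other being near zero.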
Well first I would say that a team of quality people wouldn't all run in their own direction. They would know they need to coordinate to achieve a goal. Quality doesn't just mean being good at programming, it means the whole process of delivering software.
The sports metaphor is starting to break down, but just to have one more go: sports is different because the top teams all have top-1000 players in the world. This won't really happen in software. So with software it's more about putting together the right team, one where people enhance the whole, and that's what I think the biggest role of management is about: helping coordinate quality people.
The thing I was speaking out against was the idea that you could have a good manager with a good process and not-great people. The idea that the process can hide the problems of the team is something I don't believe. If that's the case, it doesn't matter how you organize; nothing of real value will get made.
I do like the idea that the total result is a multiplication of the two values. I just think the team quality is a little more of a multiplier than the other :)
If I learned anything from watching "The Playbook" on Netflix, it's that great coaches don't seem to follow any kind of playbook at all. They're flexible, and they manage to create strong relationships with their players over time, while pushing them to be team players.
Wow, really? You must have never been part of a strong, productive team.
Tactics are mostly irrelevant.
Unfortunately the latter is much easier to change so leaders like to modify it constantly to seem like they are adding value. And since it’s mostly irrelevant it’s hard to prove that this is bad.
Despite having great people, coordination challenges greatly increase over time. This is a good summary:
No, but a good coach will start with your team, replace a single player, and get slightly better results. Replace another player and get better results. Rinse and repeat until you and all your friends are long gone but your team plays in the NFL.
Being able to detect, attract, nurture, and retain talent is exactly what the coach does on a daily basis.
You start with objectives that make sense and are aligned to the organization's mission.
Then you come up with metrics that attempt to measure how successfully those objectives are being realized.
Then you tie manager and executive compensation to those metrics.
Finally everyone proceeds to game the metrics, often in ways that completely contravene the original objectives, in order to maximize their bonuses.
Rinse and repeat.
To each their own.
I worked as a software engineer for 12 years before becoming a manager and I can promise you that a team works better when everyone has input on what goals the team commits to and how the team plans to achieve those goals.
You never never never NEVER want to work on a team where your manager decides all of that and then just throws tickets at you. Screw that sideways.
That has nothing to do with the creation of tickets, imo. Ofc the team will find a way to organise ITS OWN WORK.
OKRs can absolutely be useful, but like so many other things, a system is what it does. If you measure bug rate, then you'll get no bug reports, even if there are bugs.
OKRs, when done correctly, aren't necessarily about accountability, but about course-correcting early and often. They're a tool to combat scope creep and a feedback loop for triggering pivots when progress towards the plans you laid isn't yielding returns.
These are things engineers should love! They keep engineers focused on the right things, aligned with the business, and most importantly keep the business aligned with them.
Politics and bureaucracy are rampant everywhere, but like everything else, when used in moderation these tools can be really good.
Oh, that's how they work in the least bad case. They can become much more than that; all it takes is somebody intent on gaming them.
It's just that these managers often respond by relaying their OKRs to their teams and making them constantly aware of, and responsible for, achieving them, rather than keeping the OKRs private to the executive level as something that informs the manager's decisions while they continue to run the team the way they would without OKRs.
When OKRs were first used at Google it already had tens of thousands of employees. Why would any startup use a tool created by a big org and designed for a big org? Should we give infants medicines that were created for adults?
Startups need to stop copying BigTech because they're not like BigTech.
Intel was huge when they started OKRs, but Google wasn't, in 2000.
https://rework.withgoogle.com/guides/set-goals-with-okrs/ste...
For example I’m sure Doerr recommends regular syncs on the status of your KRs.
They are intended to be both a way of reviewing the work in a planning period, and also determining what is red and needing attention within the planning period.
Tactical/team-facing KRs could be measured daily if you need to, though the CEO wouldn’t want that report.
That is practically begging the question. If the business results are not matching up, what's going on? Are your OKRs in conflict with running the business as expected?
The title "will never be enough" is business dystopia to my ears. OKRs are bad enough — and you want more? OKRs as implemented where I've worked: the "O" is often BS, everybody skips the "KR" step completely, often naming abstract wibbly-wobbly goals in its place, and by Q3 it's all been forgotten.
On a slightly different note, it would be great if organizations had no expectations for engineers to learn or know about any of this... and also get promoted. I went to school for so long to be a technical person but it seems every org expects engineers to learn everything about business operations, management etc. It's just a strange expectation, like being a plumber but then you also have to take care of the plants outside and do some gardening.
Particularly the last 2. Most companies, if they actually sit down and talk through their objectives honestly, and then get buy-in from the stakeholders and doers on the actual KRs and initiatives, will be forced to face and figure out the big issues in communication and accountability, at least by the time you have done OKRs for a year or so (your third or fourth planning cycle typically starts to feel like you have the hang of it).
And writing down your main goals ahead of time, with a clear owner, then reviewing after, is a good planning and accountability framework, no matter what name you give it.
As with anything good, there is snake oil to be had, and idiots will do OKRs wrong like they do everything else wrong. But I think it’s a useful tool in the hands of a competent and well-run company, particularly for startups that are getting into the stage where they need to start planning seriously, and don’t have an existing framework, say around 25-50 people.
> Key results should be measurable, either on a 0–100% scale or with any numerical value (e.g. count, dollar amount, or percentage) that can be used by planners and decision makers to determine whether those involved in working towards the key result have been successful. There should be no opportunity for "grey area" when defining a key result.
In a lot of places that use "OKRs" the metric is something non-specific, like "get more users" or "have a great UX". This means your bonus now depends on politics, instead of a defined metric of success. When the metrics are clearly defined, work can be a pleasure.
The process of defining a concrete metric in advance is just front-loading that politics process to the planning stage instead of at the back end. (Which may be better! But it doesn't eliminate politics.)
I am not so much talking about the correctness of the metric, but about the certainty that you will get it, if your results are good.
If there is no specific number, someone can always say, even the day before, that you performed badly, for pretty much any reason ("you did not care enough about the customer"). This can destroy morale.
Instead, if the metric is quantifiable and easy to check on a dashboard there is no way to deny it. By the time the evaluation comes there is no manipulation possible.
Even within engineering, some teams are more geared towards “sustaining quality” than “expanding capabilities”.
You can just as easily set a “maintain quality” or “sell our product” objective that may or may not have explicit initiatives, and call the KPIs from the article KRs instead. There is really no difference.
You often see this within teams too; it’s best practice to have a quality metric as a guardrail to any growth metric (eg “add users, but don’t increase churn for existing” or “build new functionality, but don’t impact API uptime”). So you end up needing to represent these non-initiative-driven metrics that the article wants to call KPIs in your OKR system anyway.
KISS!
Just do the work, do it well, make it measurable, and hit 80% of your aspirational goals
Middle management needs to find something else to do, like complain about single panes of glass and metrics
But back to the problem: Objectives (O) should be inspirational and Key Results (KR) are measurable outcomes. The examples given for scorecards can very well be KRs. Instead of a weekly review we just use dashboards where everyone can see how we perform and we review OKR quarterly.
Supposedly it's leadership's job to steer the company in the right direction, but in the end they throw their hands up and ask the people doing the REAL work to simply do more.
Something smells fishy.
You don't say :)
KRs are just "things you can do to achieve the goal", so my process goes:
- What do I wish was different? - What would happen if that thing changed? - What's stopping it from changing? - How can I get over those obstacles?
The answers to "how can I get over those obstacles?" become the KRs that help achieve the O ("What do I wish was different?").
That's kind of the point of OKRs (in theory): to stop upper managers from dictating process to lower managers. (Which is a big problem.) Instead, ignore their process and evaluate them on how well whatever process they chose delivered. I.e., give someone an objective, let them find their own solution, and evaluate them only on how well it worked.