If a new hire can't check out, build, and test the software on the first day, then there is likely something wrong with either the hire or the infrastructure. A sufficiently old and arcane software system might take weeks before a new hire can make even a simple change, but that shouldn't impact those three items.
> If the user/client asks you to make a small but not trivial change, how long would it take to update and deploy the program?
I have had answers ranging from "A couple hours" to "A year" (yes, they were serious). Most were in the 1-3 month range, though, which is pretty bad for a small change. It also makes it apparent why a bunch of changes get batched together, whether that's reasonable or not. If a single small change, a single large change, or a collection of variable-sized changes all take a few months to happen, you might as well batch them all up. It becomes the norm for the team. "Of course it takes 3 months to change the order of items in a menu. Why would it ever be faster than that?"
Upd. And "change the menu item order, fast" is itself a sign of a problem. We found a Mac Cube in a ski vacation rental home once. It ran Mac OS X 10.2 or something. All the menu items were in the places we expected them to be! You think carefully first, then you implement the menu order. Upper-left Apple menu -> About This Mac. We managed to break their network config in like 5 minutes!
Either I don't understand your update to your comment, or you don't understand the point of my example. It was illustrating the submission topic: normalization of deviance. Sure, you should think about where things should go, but if a customer comes in and says, "Swap these two items," and you can't provide a working version with that single change for months, then things have gone off the rails somewhere. I put it in quotes to reflect the kind of statement I have heard from the teams I worked with. To them, a long effort for a trivial change is normal, when it should be considered deviance.
EDIT: effect->affect. Always trips me up.
You can do a lot with good test automation, even in avionics. That cuts down a ton of the time and usually improves quality.
I'll also note: don't take my "deployed" too literally. I used that term because so many people here are working on server-based applications, where it makes sense. Think "out the door". The exercise can only go as far as the team/org's reach. For avionics, "deployed" would mean something more like "at the flight test team". After that, it's up to someone else to schedule it and get it returned with issues or fielded.
Going beyond the team's reach without including those people (and thus making them part of the team, after a fashion) is guesswork and opens up the blame game. "It's all flight test's fault it takes a year to get out to the customer." Well, it takes you 9 months to get it to flight test and them 3 months to get it done. So why does it take you 9 months? If you have a good reason (complex system, lots to test), then that's valid. If it's a simpler system, 9 months to get it to flight test is probably not justifiable.
- Install Docker.
- Set up GitHub SSH credentials.
- Pull the main repo.
- Run a script that pulls down related repos, installs dependencies, starts up a bunch of Docker containers, and then runs health checks on the app.
- Setting up interactive debugging takes a bit longer, but not too much more.
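A bootstrap script like the one described above can be quite small. Here is a minimal sketch of the "pull related repos, start containers, health-check" step, assuming a Docker Compose setup; the repo names, org, port, and health endpoint are all invented for illustration:

```shell
#!/usr/bin/env sh
# Hypothetical onboarding bootstrap. Repo names, org, port,
# and the /health endpoint are assumptions, not a real project.
set -eu

clone_repos() {
    # Pull down the related repos (names invented for illustration)
    for repo in app-api app-worker; do
        [ -d "$repo" ] || git clone "git@github.com:example-org/$repo.git"
    done
}

start_services() {
    # Build images, install dependencies, and start the containers
    docker compose up -d --build
}

health_check() {
    # Poll the app until it responds, or give up after ~60 seconds
    for _ in $(seq 1 30); do
        if curl -fsS http://localhost:8080/health >/dev/null 2>&1; then
            echo "app is healthy"
            return 0
        fi
        sleep 2
    done
    echo "app failed health check" >&2
    return 1
}

if [ "${1:-}" = "--run" ]; then
    clone_repos
    start_services
    health_check
else
    echo "usage: $0 --run   (clones repos, starts containers, health-checks the app)"
fi
```

The point isn't the specifics; it's that the whole path from bare machine to running, verified app is a single command a new hire can run on day one.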
Unfortunately, I've routinely dealt with our IT department being slow to give credentials to new employees, or shipping them under-provisioned or just incompatible systems. No, you can't give our new senior developer the same cheap crap laptop running an ancient version of Windows that you send to the junior marketing person doing cold calls all day.