I think the parent comment is right. It's just a platitude for administrators to cover their backs, and it doesn't hold up in actual use cases.
When you put something on autopilot, you also massively accelerate your process of becoming complacent about it -- which is normal; it's the process of building trust.
When that trust is given but not deserved, problems develop. Often the system affected by complacency drifts. Nobody is looking closely enough to notice the problems until they become proto-disasters. When the human is finally put back in control, it may be to discover that the system is approaching catastrophe too rapidly for a human to catch up on the situation and intercede appropriately. It is for this reason that many aircraft accidents occur in the seconds and minutes following an autopilot cutoff. Similarly, every Tesla that ever slammed into the back of an ambulance parked at the side of the road was a) driven by an AI, b) that the driver had learned to trust, and c) supervised by a driver who, though theoretically responsible, had become complacent.
"if the program underperforms compared to humans and starts making a large amount of errors, the human who set up the pipeline will be held accountable"
Like... compared to one human? Or an army of a thousand humans tracking animals? There is no nuance at all. It's just unreasonable to make a blanket statement that humans always have to be accountable.
"If the program is responsible for a critical task .."
See how your statement has some nuance? And recognizes that some situations require more accountability and validation than others?
If some dogs chew up an important component, the CERN dog-catcher won't avoid responsibility just by saying "Well, the computer said there weren't any dogs inside the fence, so I believed it."
Instead, they should be taking proactive steps: testing and evaluating the AI, adding manual patrols, etc.