Although the distinction is philosophical, I think the implications are very practical. Without its own initiating energy, everything an AI produces will be a response to an input, and its response will be constrained by the bounds implied by that input. The specific kind of dialectic between the programmer and the person giving requirements, the process that produces the ACTUAL requirements, cannot happen with an AI. A dialectic requires two opposed agents or forces, and an AI is incapable of being an opposing force because it is only a derivative of whatever force provides its input. Basically, it is constrained inside a box defined by the input it is given, and what true synthesis (new ideas and thoughts, as opposed to an analytic breaking down of ideas already proposed) requires is a whole separate box to interact with the one defined by the input.
My explanation is extremely abstract and will probably only make sense to someone who almost agrees with me already, but that's the best I can do. I'm sure there is a more down-to-earth way to explain this, but I guess my understanding isn't good enough to find it yet. In my defense, I do think this particular issue of agency in AI is one of the most subtle and philosophical problems in the world right now that actually has practical implications.