Just because it feels as though I do things because I can doesn’t mean that is actually true.
Choice stems from uncertainty, from partial knowledge. It might be an illusion to an observer outside the system, but as far as a participant within the system is concerned, there is choice, and therefore there is free will.
I am writing this because I can, but I don't need to. There are futures in which I don't write it and do something more rewarding instead. As long as I am aware of the choices, I have free will.
Now, a program that is objective-driven and can infer from new inputs might be something else.
Just as humans try to maximize the stability of their structures via a reward system. (It gets slightly complex and faulty at times: the trade-off between work and reward is not always in favor of work, because we do not control every variable. Hence procrastination, for example, or addiction, which is not a conscious process but neuro-chemically induced.)
This is a very philosophical discussion, but if I had an infinitely-powerful computer and could simulate an entire universe based on a series of instructions (physical laws), would the beings in that universe that created societies not be "acting on their own"?
If the instruction set is limited and defined by someone else, I believe they wouldn't be.
Regarding the simulated universe: I think that, for us, they wouldn't have free will, because we would know the causality (as long as we are all-knowing about the simulation). But as far as they are concerned, wouldn't they have free will if they know that they don't know everything, and cannot tell whether the future they imagine is realizable?
If they knew with certainty that something was not realizable, they wouldn't bother; but since they don't know, they either try to realize a given future or they don't.
Partial information provides a choice of action, and therefore free will.