It keeps changing because our intuitions about which tasks require intelligence are weak. We assume that a computer that can do X must also be able to do Y. But then someone builds a computer that can do X but can't do Y, and we say, "oh, so X doesn't require intelligence after all; let me know when it can do Z and we can talk again." That doesn't mean doing Z makes the computer intelligent, only that Z is a checkpoint where we can look at the state of things and discuss whether we've made any progress. What we really want is a computer that can do Y, but we set smaller proxy tasks that are easier to test against.
The Turing test is a great example of this. Turing thought a computer would need to be intelligent to pass it. But it was passed by hard-coding a lot of responses and by a better understanding of human psychology: what kind of conversation seems plausible when most of the replies are canned. That solution obviously isn't AI, and I doubt you'd call it that either, but it still passed the Turing test.