If we found out that one species of chimp learns sign language through a causal model while another learns it through an associative one (for example), we wouldn't label one more or less intelligent, because it's the end result that matters – don't you think?
Likewise, arguably the ultimate goals of AI are behavioural (machines that can think, solve problems, communicate, create, etc.), even if the field has been relatively focused on mechanisms lately. Any particular kind of modelling is just a means to that end. Precisely what that end is remains a bit hard to pin down, though.