> First, you can see that time_f only gives you upper bounds for the runtime (they say this in the book explicitly, but in a different context), because who is to say that you cannot find even "faster" equations.
I don't understand the relevance. I'm not asking for an exact count of steps; big O is fine. And being provisional on the current implementation would be the whole point.
> If Isabelle had time_f built into its kernel from the start (the script just runs invisibly to you inside the kernel), would you then say that there is a formal connection in Isabelle between f and its runtime complexity?
No. That would be a clear hack, and it would break a lot of things in Isabelle. For example, it would introduce a distinction between first-order and higher-order functions: how do you calculate the runtime of a function like `map` when you don't have a concrete function to substitute in? This kind of thing only works when you have the concrete syntax of a function, which is at odds with how the rest of the kernel works. (BTW, the way the authors of this particular book deal with `map` is to hard-code it by hand.)
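To make the `map` problem concrete, here's a minimal sketch (in Python, with made-up names like `time_map` — this is illustrative, not Isabelle's mechanism): a first-order function admits a closed-form time function, but the time function for `map` is unavoidably parametric in the cost of its functional argument.

```python
def time_length(xs):
    # A first-order function has a closed-form step count:
    # one step per element plus one for the empty case.
    return len(xs) + 1

def time_map(time_f, xs):
    # map's cost depends on the cost of its argument function, which cannot
    # be read off map's own syntax. The caller has to supply time_f
    # explicitly -- i.e. every function value would have to travel with its
    # own cost function, which changes the types, not just the kernel
    # internals.
    return sum(time_f(x) for x in xs) + len(xs) + 1
```

For instance, `time_map(lambda x: 1, [1, 2, 3])` gives `7`: three applications at unit cost, three list steps, plus the nil case.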
Another illustration of this problem is that you wouldn't be able to state or prove very natural theorems such as "any sort function based on pairwise comparisons has asymptotic runtime Ω(n log n)".
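For reference, the standard information-theoretic form of that lower bound: a comparison-based sort must distinguish all $n!$ input permutations, so its decision tree needs at least $n!$ leaves, giving a worst-case comparison count

\[
C(n) \;\ge\; \log_2(n!) \;=\; n \log_2 n - O(n) \;=\; \Omega(n \log n).
\]

Stating this in a proof assistant requires quantifying over all comparison-based algorithms and their cost semantics, which is exactly what a syntactic `time_f` transformation cannot express.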
Just jamming it into the kernel mixes syntactic and semantic concerns together. Depending on how awkwardly it's done, it could affect the soundness of the underlying logic.
E.g. you could say "oh, we'll just add a logical relation `T` that the compiler fills in as necessary, so in `map` it'll just be a placeholder that gets instantiated later" — but then what is `T` in the kernel? What's its type? Can you abstract over `T`? What's its runtime representation? Does it even have one? What are its deductive rules? And so on. The answers to all of these questions are linked together.
There's a reason why `time_f` approaches haven't been adopted: it's very easy to blow up your logical system accidentally. It's a subtle problem, which is why I was asking for production examples. There have been some efforts here, such as RAML, but you need to find the right balance between a full-blown macro system and a purely denotational deductive system.
> Third, if you wanted to do it not axiomatically (which is what this is: it is an axiomatic approach to runtime complexity), but definitionally, then you have to come up with a lot of definitions
The problem isn't axiomatic vs. definitional; it's denotational vs. operational. Once you decide one way or the other, the rest follows from that. Right now you can't even write down a morphism between an Isabelle function and a machine function, let alone prove anything about it.