1) Formally define what is meant by "innovation" in terms of clearly measurable outcomes.
2) Measure this clearly defined quantity across all countries and at many sample points through time.
3) Identify candidate explanatory variables for the quantity being measured, and build data-driven statistical models of this formally defined "innovation" quantity -- not models built from hand-tuned weights over assorted measures, as these "rankings" or "indexes" typically are.
4) Try to predict a probability distribution of the "innovation" quality, using models developed in step 3.
Step 1 should be qualified with the explanation that this human-designed definition is imperfect, and that all results should be interpreted in the context of that formal definition.
Step 2 should be qualified with notes on any limitations in the sampling methodology (data availability, etc.) and how these factor into the error margins.
Step 3 should be qualified with the explanation that it is a model of reality derived from data, and therefore risks overfitting, underfitting, and similar errors.
Step 4 should be qualified with the explanation that this is a prediction based on the model fit above, and is therefore subject to errors compounded from any of the previous steps.
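Steps 3 and 4 can be sketched concretely. Here is a minimal, hypothetical example in Python/NumPy, using synthetic data, a plain least-squares fit, and bootstrap resampling to produce a predictive distribution rather than a single point score. All variable names, explanatory variables, and numbers are illustrative assumptions, not anything from Bloomberg's actual methodology:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: rows are country-year observations. Columns stand in
# for whatever explanatory variables step 3 actually identifies
# (e.g. R&D spending as % of GDP, researchers per capita).
X = rng.uniform(0.5, 4.0, size=(200, 2))

# Synthetic "innovation" outcome; step 1's formal definition would supply
# the real measurement, and the noise term models step 2's measurement error.
y = 1.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, 200)

# Step 3: fit a data-driven model (here, ordinary least squares) rather
# than hand-tuning index weights.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Step 4: bootstrap the fit to get a predictive *distribution* for a new
# observation instead of a single score.
x_new = np.array([1.0, 2.5, 1.5])  # hypothetical country (intercept, x1, x2)
preds = []
for _ in range(1000):
    idx = rng.integers(0, len(y), len(y))
    c, *_ = np.linalg.lstsq(A[idx], y[idx], rcond=None)
    preds.append(x_new @ c)
preds = np.array(preds)

lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"predicted innovation score: {x_new @ coef:.2f} "
      f"(95% interval: {lo:.2f} to {hi:.2f})")
```

The interval from the bootstrap only reflects sampling variability in the fit; the definitional and measurement caveats of steps 1 and 2 would widen it further.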
That would be the scientifically/statistically responsible and rigorous thing to do. But I suppose I'm crazy to expect Bloomberg to aim for any level of rigor in these "indexes".