Quadtrees have exceptional scalability but also notorious issues with space complexity and selectivity for some common data models.
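To ground the discussion, here is a minimal point-region quadtree sketch. The capacity-4 split rule, the depth cap, and all names are illustrative assumptions, not anything from the text; the depth cap exists precisely because coincident points would otherwise trigger unbounded splitting, a small preview of the space-complexity issues discussed below.

```python
class Quadtree:
    CAPACITY = 4    # points per leaf before splitting (illustrative choice)
    MAX_DEPTH = 16  # guard: coincident points would otherwise split forever

    def __init__(self, x, y, half, depth=0):
        # Node covers the square [x-half, x+half) x [y-half, y+half).
        self.x, self.y, self.half, self.depth = x, y, half, depth
        self.points = []      # points stored at this leaf
        self.children = None  # four child nodes once split

    def insert(self, px, py):
        if self.children is not None:
            self._child_for(px, py).insert(px, py)
            return
        self.points.append((px, py))
        if len(self.points) > self.CAPACITY and self.depth < self.MAX_DEPTH:
            self._split()

    def _split(self):
        h = self.half / 2
        self.children = [Quadtree(self.x + dx * h, self.y + dy * h, h,
                                  self.depth + 1)
                         for dx in (-1, 1) for dy in (-1, 1)]
        pts, self.points = self.points, []
        for px, py in pts:
            self._child_for(px, py).insert(px, py)

    def _child_for(self, px, py):
        i = (2 if px >= self.x else 0) + (1 if py >= self.y else 0)
        return self.children[i]

    def query(self, x0, y0, x1, y1, out=None):
        # Collect all points inside the axis-aligned box [x0,x1] x [y0,y1].
        if out is None:
            out = []
        if x0 > self.x + self.half or x1 < self.x - self.half \
           or y0 > self.y + self.half or y1 < self.y - self.half:
            return out  # box misses this node's square entirely
        for px, py in self.points:
            if x0 <= px <= x1 and y0 <= py <= y1:
                out.append((px, py))
        if self.children:
            for c in self.children:
                c.query(x0, y0, x1, y1, out)
        return out
```

The scalability appeal is visible in the structure: inserts and queries touch only a root-to-leaf path plus intersecting subtrees, with no global rebalancing.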
Space complexity is unbounded for non-scalar types, which is why R-trees are used in practice for things like polygons. Quadtrees also have poor search selectivity as a function of index size for many data models; quasi-monotonic data such as time-series is an extreme case. Alternative algorithms that try to preserve scalability handle these issues in two ways. First, they compress out non-selective branches of the index; the literature offers many techniques for this, with various tradeoffs when applied to space decomposition. Second, they embed the data model into a higher-dimensionality index of a similar type. The algorithm is then selective on additional properties of the data, not for the purpose of search but to organize the data so that it is more resistant to manifesting selectivity and space-complexity issues.
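The unbounded-space claim for non-scalar types can be made concrete with a small sketch (all names and the geometry here are my own illustrative assumptions): decomposing one rectangle into the quadtree cells it intersects means every intersecting leaf must hold a reference to it, and the reference count grows with resolution, with no bound short of capping the depth. An R-tree avoids this by storing the polygon once under a bounding box.

```python
def cells_covering(rect, cell, depth):
    """Count the depth-`depth` quadtree cells that intersect `rect`.

    Rectangles and cells are (x0, y0, x1, y1); `cell` starts as the
    root square and is recursively split into four quadrants.
    """
    rx0, ry0, rx1, ry1 = rect
    cx0, cy0, cx1, cy1 = cell
    if rx0 >= cx1 or rx1 <= cx0 or ry0 >= cy1 or ry1 <= cy0:
        return 0  # no overlap with this cell
    if depth == 0:
        return 1  # this leaf must hold a reference to the rectangle
    mx, my = (cx0 + cx1) / 2, (cy0 + cy1) / 2
    return sum(cells_covering(rect, q, depth - 1) for q in
               [(cx0, cy0, mx, my), (mx, cy0, cx1, my),
                (cx0, my, mx, cy1), (mx, my, cx1, cy1)])
```

For a small square straddling the root's midlines, each extra level of resolution roughly quadruples the number of leaves that must reference it, so the per-object storage cost is a function of index resolution rather than of the object itself.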
There are hundreds of spatial indexing algorithms described in the literature that tackle the selectivity and space-complexity issues in various ways, often by sacrificing the natural scalability of quadtrees. The space of possible algorithm designs with interesting tradeoffs is very large but bounded. Hanan Samet's canonical text on the algorithm design space weighs in at over a thousand pages and still doesn't cover the full scope of notable algorithm design clusters in public literature.