Given a database and a query, yes, you can understand the execution profile.
What you can't do is start out with just a schema and a query and understand the execution profile. Query planners take statistics into account when deciding whether to use an index or fall back to a table scan, and they can normally examine only a fraction of the possible join orders. So you can usually only fully predict the performance profile of simple queries. A database can run perfectly well one day and fall off a performance cliff the next, when the statistics change; it doesn't happen often, but it can happen.
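As a small illustration of why the schema alone doesn't determine the plan, here is a sketch using Python's built-in `sqlite3`: the same query against the same schema gets a different plan depending on runtime state (here, whether an index exists and `ANALYZE` has been run). The table and index names are invented for the example.

```python
import sqlite3

def plan(con, sql):
    """Return the planner's chosen strategy for a query as a string.

    EXPLAIN QUERY PLAN rows are (id, parent, notused, detail);
    the human-readable strategy is in the detail column.
    """
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " | ".join(r[3] for r in rows)

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
con.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

q = "SELECT * FROM orders WHERE customer_id = 42"
before = plan(con, q)  # no index yet: the planner has to scan the table

con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
con.execute("ANALYZE")  # gather statistics; the planner consults these
after = plan(con, q)    # same schema-level query, different plan

print(before)  # a SCAN of orders
print(after)   # a SEARCH using idx_orders_customer
```

Real planners (PostgreSQL, MySQL, etc.) weigh much richer statistics than SQLite does, which is exactly why the cliff can appear without anyone touching the schema.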
More likely, something works perfectly well in test and in the early days, but rapidly becomes untenable as the data grows and superlinear slowdowns emerge.
Thus you need to put run-time monitoring in place and have a routine process for examining slow-running queries and fixing what's slow: rewriting queries, adding indexes, possibly denormalizing data, materializing views, and so on. It's an ongoing process in any application that works with lots of data.
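A minimal sketch of what that monitoring can look like at the application level: a wrapper that times every query and logs the ones exceeding a threshold, so slow queries surface in routine review rather than in an outage. The threshold value and the `timed_query` helper are illustrative, not a standard API; production systems usually lean on the database's own slow-query log instead.

```python
import logging
import sqlite3
import time

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("slow_query")

# Hypothetical threshold; tune for your workload.
SLOW_THRESHOLD_MS = 50.0

def timed_query(con, sql, params=()):
    """Run a query, logging it if it exceeds the slow threshold."""
    start = time.perf_counter()
    rows = con.execute(sql, params).fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > SLOW_THRESHOLD_MS:
        log.warning("slow query (%.1f ms): %s", elapsed_ms, sql)
    return rows

# Example usage against an in-memory database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany(
    "INSERT INTO events (payload) VALUES (?)",
    [(f"event-{i}",) for i in range(1000)],
)
rows = timed_query(con, "SELECT * FROM events WHERE id <= ?", (10,))
```

The point isn't this particular wrapper but the feedback loop: collect timings in production, review the outliers regularly, and feed the fixes (indexes, rewrites, denormalization) back in.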