If that were so, there would be no need for native graph databases at all, and we would not be seeing use cases that relational databases cannot serve but that are possible with Neo4j and native graphs.
You may be thinking of non-graph use cases. Graphy use cases often require hundreds of thousands to millions (or more) of traversals. If those traversals are implemented as table joins, the join cost depends on the size of the tables involved, which may hold millions or billions of rows; in other words, it depends on the total size of the data rather than just the relevant connected relationships. Pointer hopping, by contrast, touches only the relevant connected relationship and node records, so its cost is proportional to the elements of the subgraph traversed, not the total data, and you can see where that would outperform the relational model. You also have the flexibility to be as strict or as lenient as needed with the labels of the nodes traversed, the types of the relationships traversed, and their direction. That's tougher to do when you may not know in advance which tables are meant to be joined or how, or if you pour all your nodes into one giant table, where the join cost is proportional to your total data.
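To make the pointer-hopping idea concrete, here's a minimal Python sketch of traversal over in-memory adjacency lists. The data, names, and helper functions are my own toy illustration, not Neo4j internals: each node keeps direct references to its relationships, so expanding outward visits only connected records, and filters on relationship type and direction are applied per hop rather than by picking join tables up front.

```python
from collections import defaultdict, deque

# Toy adjacency structure (hypothetical data, for illustration only).
# node -> list of (relationship type, direction, neighbor)
edges = defaultdict(list)

def add_rel(src, rel_type, dst):
    """Record one relationship from both endpoints' perspectives."""
    edges[src].append((rel_type, "out", dst))
    edges[dst].append((rel_type, "in", src))

add_rel("alice", "KNOWS", "bob")
add_rel("bob", "KNOWS", "carol")
add_rel("alice", "WORKS_AT", "acme")

def traverse(start, rel_types=None, direction=None, max_depth=2):
    """Breadth-first expansion from start. Cost grows with the subgraph
    actually visited, not with the total number of nodes stored, and the
    type/direction filters can be as strict or lenient as needed."""
    seen = {start}
    frontier = deque([(start, 0)])
    reached = []
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for rel_type, rel_dir, neighbor in edges[node]:
            if rel_types is not None and rel_type not in rel_types:
                continue  # skip relationship types we don't care about
            if direction is not None and rel_dir != direction:
                continue  # enforce traversal direction
            if neighbor not in seen:
                seen.add(neighbor)
                reached.append(neighbor)
                frontier.append((neighbor, depth + 1))
    return reached

# Follow only outgoing KNOWS relationships, up to 2 hops.
print(traverse("alice", rel_types={"KNOWS"}, direction="out"))
# → ['bob', 'carol']
```

The point of the sketch: no matter how many other nodes and relationships exist in `edges`, the loop only ever touches records reachable through the filters, which is the behavior the relational model loses when join cost scales with table size.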
Relational databases are very good at what they do. But no tool is perfect or covers every use case easily. Design is a matter of tradeoffs, and some of the choices that make relational databases excellent in many categories become weaknesses in others. We're in an era of big data, where modeling, traversing, and exploring the connections within that data is increasingly valuable, and increasingly costly, given the sheer volume of data and the complexity of both the connections and the use cases themselves. Native graph databases are a tool for these cases, bringing simplicity in modeling and querying as well as the performance that gives them an edge.