I don't think this approach scales, even in an environment that supports recursive queries like PostgreSQL.
The more scalable approach would be to either use a commercial database system with explicit support for pattern matching, or encode each conversion path as a string (e.g., "top page -> product page with SKU=1337 -> purchase" becomes "T_SKU1337_P") and use REGEX/GROUP BY.
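To make the second idea concrete, here is a minimal sketch in Python rather than SQL. The session data, the token scheme ("T" for top page, "SKU…" for a product page, "P" for purchase), and the helper `encode` are all hypothetical; the point is just that once each path is a single string, a regex plus a group-by (here a `Counter`) answers funnel questions, the same way `WHERE path ~ '…' GROUP BY path` would in a database.

```python
import re
from collections import Counter

# Hypothetical clickstream: each session is an ordered list of page events.
sessions = [
    ["top", "sku:1337", "purchase"],
    ["top", "sku:1337", "purchase"],
    ["top", "sku:42", "exit"],
]

# Encode each step as a short token: "top" -> "T", a product page -> "SKU<n>",
# anything else -> its first letter ("purchase" -> "P", "exit" -> "E").
def encode(step):
    if step == "top":
        return "T"
    if step.startswith("sku:"):
        return "SKU" + step.split(":", 1)[1]
    return step[0].upper()

# Flatten every session into one path string, e.g. "T_SKU1337_P".
paths = ["_".join(encode(s) for s in session) for session in sessions]

# Count sessions matching "top -> any product page -> purchase".
pattern = re.compile(r"^T_SKU\d+_P$")
funnel = Counter(p for p in paths if pattern.match(p))
print(funnel)  # Counter({'T_SKU1337_P': 2})
```

The same approach works at scale because the expensive part (flattening each session into one string) is embarrassingly parallel, and the regex/group-by step runs over short strings instead of joined event rows.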
In any case, this sounds like a suboptimal use case for either Solr or Elasticsearch.
I've always found it befuddling why so many developers want to use Solr/Elasticsearch for analytics heavy lifting. It's probably because
1. SQL is not the most intuitive (although most pervasive) API for data analysis
2. Much of the data is already in Solr/Elasticsearch for search, simple roll-ups, filtering, etc., so it would be convenient to run more complex analytics against it as well
As to why Solr/Elasticsearch is not ideal: superior alternatives exist, namely OLAP databases.
I understand that ES can lose data or otherwise have durability problems, but one could just as well store all the incoming data on Hadoop or similar, without having to bother with C*, no?