As others have explained, Tachyon handles the "keep the dataset in memory" part.
Spark started out as an "in-memory MapReduce" engine but has since grown into a platform that runs Scala, Java, Python, HiveQL, SQL, and R code. It is the most active Apache project and is becoming more powerful by the day.
Given how easy it is to get running, it wouldn't surprise me to see it used in the years to come as the primary front end for all data needs.