http://en.wikipedia.org/wiki/Tuple_space
http://en.wikipedia.org/wiki/Linda_(coordination_language)
http://www.amazon.com/Mirror-Worlds-Software-Universe-Shoebo...
I've long contended that a tuple space is basically a generalised key-value store, so it's nice to see projects like this one [1] crop up.
[1] http://java.net/projects/jini/
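To make the "generalised key-value store" comparison concrete, here is a minimal sketch of a Linda-style tuple space. This is a hypothetical illustration, not the Jini/JavaSpaces API: entries are tuples matched associatively against a template (null fields act as wildcards), rather than looked up by a single key.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical in-memory tuple space. Like a key-value store, but the
// "key" is a template that can match on any combination of fields.
class TupleSpace {
    private final List<Object[]> space = new ArrayList<>();

    // Linda "out": add a tuple to the space.
    public synchronized void write(Object... tuple) {
        space.add(tuple);
    }

    // Linda "rd": non-destructively return a tuple matching the template,
    // or null if none matches.
    public synchronized Object[] read(Object... template) {
        for (Object[] tuple : space) {
            if (matches(tuple, template)) return tuple;
        }
        return null;
    }

    // Linda "in": remove and return a matching tuple, or null if none.
    public synchronized Object[] take(Object... template) {
        for (Object[] tuple : space) {
            if (matches(tuple, template)) {
                space.remove(tuple);
                return tuple;
            }
        }
        return null;
    }

    // A template matches a tuple of the same arity when every non-null
    // template field equals the corresponding tuple field.
    private static boolean matches(Object[] tuple, Object[] template) {
        if (tuple.length != template.length) return false;
        for (int i = 0; i < template.length; i++) {
            if (template[i] != null && !template[i].equals(tuple[i])) {
                return false;
            }
        }
        return true;
    }
}
```

With a key-value store you can only ask "what is the value for key K?"; here you can write `("temp", "room-1", 21)` and later match on `("temp", null, null)` to find any temperature reading. (A real tuple space would also block on `read`/`take` until a match arrives, which is what makes it a coordination mechanism and not just a store.)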
These goals are greatly divergent. In Pig, Java code is written to create new functions that can be used for analysis--i.e. Java is written in support of Pig Latin. Pangool focuses instead on extending Hadoop by making the Java code easier to write. This means Pig could potentially be implemented in Pangool, if Pangool were to satisfy the requirements for the task. (Not that I am suggesting Pig actually be rewritten--it just might be possible, depending on the technical requirements.)
Having used Hadoop in the past, I would be more inclined to use Pangool. Parts of Hadoop are poorly written--especially the reliance on singletons--and anything that makes it easier to write code that runs on a Hadoop cluster is welcome in my eyes. I look forward to seeing how this project shapes up.
Though I don't have deep expertise in Hadoop, I find this claim highly suspect. High-level APIs achieve user-friendliness by making decisions/assumptions about the way a lower-level API will be used. I would be very surprised if there were no use case for which your API does impose a trade-off vs. the low-level Hadoop API.
I feel much more confident using a high-level API if its author is up-front about what assumptions it's making. If the claim is that there is no trade-off vs. the low-level API, I generally conclude that the author doesn't understand the problem space well enough to know what those trade-offs are.
I could be wrong, but this is my bias/experience.
Hive -> Pig -> Pangool -> Cascading -> MapReduce
Nice addition!