GlusterFS, on the other hand, was incredibly easy to set up, although I am not a fan of FUSE because of its high CPU overhead.
I was working on an unrelated project with one of the designers of Ceph (UCSC's Scott Brandt), and in conversation he concurred that Ceph was really built as a replacement for PanFS or Lustre (though it may still be useful for other things, of course).
Using it to replace GlusterFS still seems odd to me; they feel like solutions to different problems.
Ceph is much larger than just the file system (as the article points out). And while several Ceph subsystems are used extensively in production environments (the RBD block store, the RGW object gateway, and RADOS itself), CephFS is not yet officially supported as production-ready.
Despite that, we run Hadoop on top of CephFS and can deal with the occasional metadata-server problem. CephFS is actively being hardened.
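For anyone curious what the Hadoop-on-CephFS wiring looks like, here is a minimal core-site.xml sketch. The monitor address is a placeholder, and the property names are from the Hadoop CephFS bindings as I remember them, so double-check them against the current docs:

```xml
<!-- core-site.xml: point Hadoop at CephFS instead of HDFS -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- placeholder monitor address -->
    <value>ceph://mon-host:6789/</value>
  </property>
  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>
</configuration>
```

The CephFS shim jar also has to be on Hadoop's classpath for that impl class to resolve.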
Configuration-wise, Ceph involves quite a bit of voodoo compared to GlusterFS; it's an area where I hope to see some improvement.
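To give a flavor of that voodoo, here is a bare-bones ceph.conf sketch. The fsid, host names, and addresses are placeholders, and the section/key names follow the stock examples; a real cluster needs tuning well beyond this:

```ini
[global]
    fsid = <cluster-uuid>           ; placeholder cluster UUID
    mon host = 10.0.0.1             ; placeholder monitor address
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx

[mon.a]
    host = mon-host-a               ; placeholder monitor host

[osd]
    osd journal size = 1024         ; MB; one of many knobs to tune

[mds.a]
    host = mds-host-a               ; the CephFS metadata server
```

Compare that to GlusterFS, where a couple of `gluster volume create`/`start` commands get you a working volume.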
More SSRC work and some of the original Ceph papers can be found at: