If memcached papers have one thing in common, it's an uncanny ability to get the comparison software to run as slowly as possible. 100k ops/sec/core is what you get when using a single client connection with blocking I/O. Using more clients (as in a normal prod setup) or pipelining queries gets you more like 1M+ ops/core, with writes scaling worse than reads. In production it's easy to get some level of pipelining (multigets, clustered keys, etc.), since you're rarely just fetching one key and then blocking.
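To make the pipelining point concrete, here's a minimal sketch (a hypothetical helper, not a real client library): N ASCII-protocol "get" commands are concatenated into one buffer and sent with a single write, so the per-request syscall and round-trip cost is amortized instead of paid per key.

```python
def build_pipelined_gets(keys):
    """Concatenate one ASCII 'get' command per key into a single buffer.

    One socket.sendall() of this buffer carries all the requests; the
    server streams back the VALUE/END responses, which can be parsed
    from a single recv loop instead of one blocking round trip per key.
    """
    return b"".join(b"get %s\r\n" % k.encode() for k in keys)

payload = build_pipelined_gets(["user:1", "user:2", "user:3"])
print(payload)
# b'get user:1\r\nget user:2\r\nget user:3\r\n'
```

Key names here are made up for illustration; the point is one write instead of three write/read cycles.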
A much better FPGA paper would show at what scale syscall overhead starts to dominate CPU usage, as well as any measured latency improvements. I think some of the other papers address latency at least.
In reality it hardly matters. If you're hitting memcached hard enough with tiny values for it to matter, ensuring keys are clustered and pipelined is a lot less maintenance overhead than deploying FPGAs.
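For the key-clustering case the ASCII protocol even supports a multiget natively: one "get" command with several space-separated keys, so related keys land on one server in one request. A minimal sketch (key names invented for illustration):

```python
def build_multiget(keys):
    """Build a single ASCII 'get' command fetching several keys at once.

    Keys that share a prefix (e.g. all fields of one session) can be
    routed to the same server and fetched in one request/response.
    """
    return b"get " + b" ".join(k.encode() for k in keys) + b"\r\n"

print(build_multiget(["sess:42:name", "sess:42:cart", "sess:42:last_seen"]))
# b'get sess:42:name sess:42:cart sess:42:last_seen\r\n'
```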
Every corner case that could be found in s/w was always the topic of an excited benchmark. There's also the old trick of "hey, let's drop all the matches on the floor in our h/w or FPGA, while the s/w gets a huge number of matches, making the s/w guys look ridiculous."
Every time I read a paper touting a great new speedup on FPGA (over some crap s/w implementation) I'm reminded of that old joke about the Texan visiting Israel and telling the owner of some small farm that "he can get on a tractor and ride for days without reaching the boundary of his property." The Israeli nods sympathetically and says, "Yes, I too used to have a tractor like that."
Excuse my ignorance, but apart from AWS- or Azure-scale operations, why would anyone use Memcached on FPGA?
* I wouldn't mind if the system were simple plug-and-play and had all the benefits and cost savings without the headache. But very rarely is any technology deployment headache- or hassle-free.
Using a single AWS F1 (FPGA) instance, our Memcached accelerator achieves over 11 million ops/sec at less than 300 microsecond latency. Compared to ElastiCache, the AWS-managed CPU Memcached server, our Memcached accelerator offers 9X better throughput, 9X lower latency, and 10X better throughput/$.
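As a sanity check on those two numbers, Little's law relates them: the number of requests in flight equals throughput times latency. Taking the quoted figures at face value (using 300 µs as the latency bound):

```python
# Little's law: in-flight requests = throughput * latency.
throughput = 11_000_000   # ops/sec, as quoted
latency = 300e-6          # seconds, the quoted upper bound
in_flight = throughput * latency
print(round(in_flight))   # ~3300 requests outstanding at saturation
```

So the benchmark implies on the order of a few thousand concurrent outstanding requests, i.e. many parallel client connections and/or deep pipelining, consistent with the batching described below.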
We need to batch multiple requests per Ethernet packet to get around packet per sec rate limiting on AWS. See more details here: https://www.legupcomputing.com/blog/index.php/2018/05/01/dee...
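A rough illustration of why batching per packet matters under a packets-per-second cap. The 1M pps ceiling and the batch size of 11 below are assumed placeholders for illustration, not measured AWS limits; see the linked post for real numbers.

```python
# Hypothetical numbers: under a fixed packets/sec ceiling, throughput in
# ops/sec scales with how many requests fit in each Ethernet frame.
pps_cap = 1_000_000            # ASSUMED pps ceiling, not a real AWS figure
ops_one_per_packet = pps_cap * 1    # one request per packet
ops_batched = pps_cap * 11          # 11 requests packed per packet
print(ops_one_per_packet, ops_batched)  # 1000000 11000000
```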
If anyone is interested we would love to hear from you; we will be showing off an online demo later this week.
FPGAs are great for processing data at 10Gbps line rate with low latency. They are also good for compute tasks like compression and encryption.
Happy to talk more at kirvy@plunify.com if you are interested. Congrats on getting a seed round from Intel Capital!
edit: r4.4xlarge as per the link, so 16 vCPUs? You should be able to beat it on latency, but beating it on throughput likely means ElastiCache is misconfigured. Or you're sending way too much set traffic (I think I saw you set the benchmark to a 1:1 ratio of gets to sets?).
The interesting part is the FPGA could still do much more computation (for example, compression or encryption) while maintaining the same throughput due to hardware pipelining. We described this concept further in the blog post I linked to.
On our side, maxing the IOPS was the easy part. The hard part was marrying the protocol with converged/deterministic Ethernet with RDMA. We were split between "one request, one frame/packet burst" and "all requests are somehow smartly aligned with frames by stateful logic." The first was surprisingly susceptible to performance artifacts from round-trip latency varying by a few microseconds, which made it possible for in-flight packets to be dropped because the receiving NIC (top-tier hardware) was momentarily overloaded.
You have the advantage of being DC-provider independent, and can jump the AWS ship whenever you want. Alibaba's solution will be tied to its infrastructure with its very expensive RDMA-capable network.