Yes, but as-is the post is pure hand-waving. At least some measurement could confirm the theories being stated. Did someone actually try taking one use case where they had memcached and replacing it with Redis? What actually happened?
Ok then. Build an application heavily dependent on caching. Restart memcached in the middle of a production workload. How long does it take to get out from under the dog pile?
EDIT: This started out a bit flippant. Wanted to make the point that antirez is not just handwaving.
For example, one of the points left without elaboration was 'there are “pure caching” use cases where persistence and replication are important'. Sometimes cache warming isn't feasible, e.g. when you need to restart the cache in the middle of a production workload. With an entirely volatile cache you can see extended downtime from the dog pile of requests waiting for the cache to warm up. Persistence can be an attractive form of insurance against this scenario.
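To make the dog-pile concrete, here's a minimal cache-aside sketch in Python using redis-py. The key format and the load_user_from_db stub are just placeholders, not anything from the post:

    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def load_user_from_db(user_id):
        # Stand-in for the expensive backing-store query.
        time.sleep(0.1)
        return b"user-record-%d" % user_id

    def get_user(user_id):
        key = "user:%d" % user_id
        cached = r.get(key)                  # cache-aside read: try the cache first
        if cached is not None:
            return cached
        value = load_user_from_db(user_id)   # miss: fall through to the slow store
        r.set(key, value, ex=300)            # repopulate with a 5-minute TTL
        return value

Restart a purely volatile cache while many clients are calling get_user concurrently and they all take the slow path at the same moment; that's the dog pile. With persistence, most of those requests still find warm keys after the restart.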
Unless it was some very limiting case, I doubt there was much of a difference -- for simple operations, e.g. Redis GET/SET, the IPC/network stack will contribute more to the runtime than the difference between the choice of caching programs.
Exactly, basically we can provide numbers only for GET/SET/DEL or similar workloads. Since Redis has more capabilities that may dramatically speed up certain use cases, the only good comparison would be per use case. However, benchmarking the basic operations at different data sizes can at least give an idea about the "HTML fragments caching" use case and other similar workloads.
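Something along these lines is what that kind of basic-operations benchmark looks like (a rough sketch with redis-py; the key name, payload sizes, and request count are arbitrary placeholders, not numbers from the post):

    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def bench(size, n=10000):
        # Time n SET and n GET round trips for a payload of `size` bytes.
        payload = b"x" * size
        start = time.time()
        for _ in range(n):
            r.set("bench:key", payload)
        set_s = time.time() - start
        start = time.time()
        for _ in range(n):
            r.get("bench:key")
        get_s = time.time() - start
        print("%6d bytes: %8.0f SET/s %8.0f GET/s" % (size, n / set_s, n / get_s))

    for size in (64, 1024, 16384):   # rough stand-ins for HTML-fragment sizes
        bench(size)

A sequential loop like this mostly measures per-request round-trip cost, which is the parent's point about the network stack dominating; the stock redis-benchmark tool is the usual way to get these numbers with multiple clients and pipelining.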