I believe that Lambda CPU goes hand in hand with memory. Your Lambda@Edge function will only deploy with 128 MB of RAM allocated and hence gets very little CPU. 50ms is not low latency in my book either. See https://serverless.zone/my-accidental-3-5x-speed-increase-of...
Good question. We just tested with ab (ApacheBench) and consistently measured the latency difference with the Lambda enabled versus disabled. We wanted to be sure this system does not incur any significant overhead, and that’s why we made sure to only run the Lambda when it is needed.
We hope to see Amazon improve this, as there can be use cases for Lambda@Edge where you can’t restrict URLs so easily.
“As an alternative to cookies, then, we opted to hash the user’s IP address and their User-Agent string”.
So if a user is on a mobile device and switches networks, is it 50/50 whether they will receive a consistent version?
Did you consider using geo-based filtering? IIRC CloudFront has support for geo restrictions, so you could create multiple distributions and restrict one to a geo for rollout.
Yes, switching networks can indeed cause users to receive different versions within the 3-hour window. In general we ship small, incremental changes that are rarely even visible to the user, so a small likelihood of serving an inconsistent version is acceptable.
Interesting idea with geo restrictions. Does this allow more granularity than country-level filtering? Most of our audience is based in the US, so a country-level setting does not give us enough flexibility.
There is also value in knowing the percentage of users receiving the new version. It’s just more predictable and easier to analyze the data if the ratio of new-to-old is known beforehand.