Not the OP, but a young greybeard with opinions :)
I've set up a couple of API Gateways. One was a hand-rolled Go server; it probably could have been built on an existing off-the-shelf server, but there were some odd-ball requirements and it was pretty straightforward to just build the functionality. It did service discovery through Consul, applied some routing rules and the "special" logic, and that was about it.
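The routing half of a gateway like that is genuinely small. Here's a rough sketch of the prefix-matching rule such a server might apply (the route table and backend addresses are made up; in the real thing they'd come out of Consul's catalog, not a hard-coded map):

```go
package main

import (
	"fmt"
	"strings"
)

// routes maps URL path prefixes to backend addresses. Hypothetical
// values; a real gateway would populate this from Consul's service
// catalog and keep it refreshed.
var routes = map[string]string{
	"/billing/": "10.0.0.5:8080",
	"/users/":   "10.0.0.7:8080",
}

// pickBackend applies the routing rule: longest matching prefix wins.
func pickBackend(path string) (string, bool) {
	best, backend := "", ""
	for prefix, addr := range routes {
		if strings.HasPrefix(path, prefix) && len(prefix) > len(best) {
			best, backend = prefix, addr
		}
	}
	return backend, best != ""
}

func main() {
	for _, p := range []string{"/billing/invoice/42", "/nope"} {
		if addr, ok := pickBackend(p); ok {
			fmt.Printf("%s -> %s\n", p, addr)
		} else {
			fmt.Printf("%s -> no route\n", p)
		}
	}
}
```

Wrap `pickBackend` in an `httputil.ReverseProxy` director and you're most of the way to a minimal gateway.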
Every other time I've just used nginx. Why? Most of the API Gateway projects out in the wild are bloody complicated! I just looked at the docs for EnvoyProxy and the "Architecture Overview" page is taller than my 1080p monitor! Yes, at massive scale with thousands of services, something like this is probably the right solution. When you've got a handful of services, an automated rolling deploy across a small cluster of nginx servers is A-OK.
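For a handful of services, the whole gateway can be a config file along these lines (names and addresses are illustrative; the `upstream` blocks are what the deploy tooling would rewrite on each roll-out):

```nginx
# One upstream pool per backend service.
upstream billing {
    server 10.0.0.5:8080;
    server 10.0.0.6:8080;
}

server {
    listen 80;
    server_name api.example.com;  # hypothetical

    location /billing/ {
        proxy_pass http://billing;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```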
I've looked at a bunch of different packages, and every time my conclusion is that they introduce a massive amount of complexity that could be beneficial in the large, but will likely require a dedicated team to understand all of the intricacies of the whole system. KISS.
Envoy Proxy [1] is pretty complex. It is really designed to be configured by machines. This is one reason projects like Ambassador API Gateway (https://www.getambassador.io) exist -- it translates decentralized declarative Kube config into Envoy configuration (a non-trivial exercise).
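The Kube-side config Ambassador consumes looks roughly like this (service name and prefix are made up; the exact schema depends on the Ambassador version, so treat this as a sketch):

```yaml
# A hypothetical Mapping resource; Ambassador translates this into
# the corresponding Envoy route and cluster configuration.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: billing
spec:
  prefix: /billing/
  service: billing:8080
```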
That said, Envoy has some great features such as distributed tracing, a robust runtime API for dynamic configuration, gRPC load balancing, etc.
He works for HAProxy Technologies, so he probably hasn't considered switching :)
Edit: Probably also worth mentioning that some of the net functionality gap between the two has been closed. HAProxy added service discovery and HTTP/2 support some time after this was published: https://www.envoyproxy.io/docs/envoy/latest/intro/comparison
[1] https://www.envoyproxy.io/