Agreed. Although kubelet itself is not too terrible, everything else you need to run alongside it (the "etc" part of your post) is what costs you: the network provider, per-host monitoring, and resources reserved on each host, just to name a few.
Constantly adding and removing hosts can also negatively affect, e.g., your network provider, depending on which one you use. In my experience, Weave performed significantly worse than something "simpler" like flannel when combined with frequent autoscaling.
So let's say you were building a bandwidth-heavy service: on the best providers, each 1 GB / 1 vCPU node is limited to roughly a 1 Gbps port. If the goal is to maximize total data transferred, my thought would be that it's better to have as many nodes as possible, since the port cap applies per node. Sending data at the rate cap shouldn't be that big a hit on the CPU with big enough chunks, even when considering TLS costs. But I'm not sure about this. It always seems that people talking about Kubernetes are trying to maximize their compute capacity rather than their throughput, but I've never really had that focus.
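The back-of-envelope math here can be sketched out quickly. This is purely illustrative, assuming (as above) that the port cap is ~1 Gbps per node regardless of node size, which varies by provider:

```python
# Assumption from the comment above: ~1 Gbps port per node,
# independent of node size. Not real provider specs.
GBPS_PER_NODE = 1.0

def aggregate_gbps(node_count: int) -> float:
    """Total egress if every node saturates its port cap."""
    return node_count * GBPS_PER_NODE

# Carve the same 16 vCPU budget two ways:
many_small = aggregate_gbps(16)  # 16 x (1 vCPU / 1 GB) nodes
few_large  = aggregate_gbps(2)   # 2 x (8 vCPU / 8 GB) nodes

print(many_small)  # 16.0 Gbps
print(few_large)   # 2.0 Gbps
```

Under that assumption, aggregate throughput scales with node count, not total vCPUs, which is the argument for many small nodes. In practice larger instances often get bigger (or burstable) network allocations, so the real comparison depends on the provider's actual per-size caps.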
What would you do if you were serving tons of data but didn't have to compute much?