Yup. At the end of the day these logic-bomb-esque mechanisms are unpreventable and just a cat-and-mouse problem.
There should be a way to battle this outside technical measures, like a crowdsourced group of real distributed humans testing apps for anything malicious.
You can detect both the triggered behavior and "hey this looks like a logic bomb" with static analysis. Yes, you'll never trigger this with some dynamic analysis of the app. But "hey, some code that does things associated with malicious or otherwise bad behavior is guarded behind branches that check for specific responses from the app developer's server" is often enough to raise your eyebrows at something.
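To make that concrete, here's a minimal sketch of what such a static check might look for. All the names (`REMOTE_FLAGS`, `SENSITIVE_CALLS`, `load_hidden_ui`, etc.) are hypothetical stand-ins; a real analyzer would derive them from taint tracking of network data and a catalog of sensitive APIs rather than hard-coded strings.

```python
import ast

# Illustrative, hand-picked names only -- not a real detection catalog.
REMOTE_FLAGS = {"server_flag", "remote_config"}
SENSITIVE_CALLS = {"load_hidden_ui", "exec", "eval"}

def find_gated_calls(source: str) -> list[int]:
    """Return line numbers of sensitive calls guarded by a remote flag."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if not isinstance(node, ast.If):
            continue
        # Does the branch condition mention a server-controlled value?
        cond_names = {n.id for n in ast.walk(node.test)
                      if isinstance(n, ast.Name)}
        if not cond_names & REMOTE_FLAGS:
            continue
        # Does the guarded body call anything on the sensitive list?
        for stmt in node.body:
            for inner in ast.walk(stmt):
                if (isinstance(inner, ast.Call)
                        and isinstance(inner.func, ast.Name)
                        and inner.func.id in SENSITIVE_CALLS):
                    hits.append(inner.lineno)
    return hits

sample = """
resp = fetch("https://example.com/config")
remote_config = resp == "activate"
if remote_config:
    load_hidden_ui()
"""
print(find_gated_calls(sample))  # -> [5]: the gated call on line 5
```

Dynamic analysis would never see `load_hidden_ui()` run unless the server happened to say "activate" during review, but the gated structure itself is visible to static inspection.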
In this case, "suspicious code" covers only a fairly narrow subset of possible outcomes, so I doubt it would come up much.
It's a common fallacy to assume that an infinite number of worlds contains every possible world: 1, 10, 100, … is an infinite series, yet it covers ~0% of the possibilities.
Yeah. It's really, really hard to prevent actors from coming up with clever ways to circumvent the automatic checks. But that just means that Apple needs to play the cat-and-mouse game. That's what they always say their cut is for, no?