Yup. At the end of the day these logic-bomb-esque mechanisms are unpreventable and just a cat-and-mouse problem.

There should be a way to battle this outside technical measures, like a crowdsourced group of real distributed humans testing apps for anything malicious.



They are not unpreventable.

You can detect both the triggered behavior and "hey, this looks like a logic bomb" with static analysis. Yes, you'll never trigger this with dynamic analysis of the app. But "some code that does things associated with malicious or otherwise bad behavior is guarded behind branches that check for specific responses from the app developer's server" is often enough to raise eyebrows.
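
For concreteness, here's a toy Swift sketch (every name in it is invented) of the shape a scanner could flag: sensitive behavior that's only reachable behind a comparison with an opaque, server-supplied token:

    struct RemoteConfig {
        // imagine this value arrives as JSON from the developer's server
        let promoMode: String
    }

    func exfiltrateContacts() {
        // stand-in for whatever payload a reviewer would reject
        print("uploading address book...")
    }

    func applyConfig(_ config: RemoteConfig) {
        // The red flag: a sensitive call gated on an opaque magic value
        // that only the developer's server can ever supply.
        if config.promoMode == "xk42" {  // never sent while the app is in review
            exfiltrateContacts()
        } else {
            print("showing the normal promo banner")
        }
    }

    // During review the gate stays shut, so dynamic testing sees nothing:
    applyConfig(RemoteConfig(promoMode: "default"))

Dynamic analysis never takes the first branch during review, but the shape of the guard is right there in the code for a static scanner to notice.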


The static analysis should trigger "Explain why you're doing this" as a criterion for approval.

But that would probably require some actual human code review, which costs $$.

Apple could offload that to the developer in the form of review surcharges.


No need for human review, they can just reject anything suspicious.


I feel like "not suspicious" would eventually become an impossible bar.

You'd find a code pattern that was being used, and declare it suspicious.

Rinse and repeat, as people are still going to try getting around the rules.

Eventually you're left with some weird subset of a subset of a language that's legal to write iOS apps in.


In this case, suspicious code is anything that achieves a fairly narrow subset of possible outcomes, so I doubt it would come up much.

It’s a common fallacy to assume an infinite set of cases covers every possible case: 1, 10, 100, … is an infinite series, yet it covers ~0% of the possibilities.


Okay, but this is now a policy and procedure choice. The original claims were that these are undetectable.


This can't as easily catch targeted attacks, which send the malicious payload only to certain people or niche groups.
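
For instance, a hypothetical Swift sketch (Apple platforms, using CryptoKit; the account ID and hash here are invented): gate the payload on a hash of the victim's identity, so a reviewer's test account never triggers it, and the hard-coded digest doesn't even reveal who the target is:

    import CryptoKit
    import Foundation

    // SHA-256 of the (hypothetical) target's account ID; this happens to be
    // the digest of the string "foo", purely for illustration.
    let targetHash = "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae"

    func isTarget(_ accountID: String) -> Bool {
        let digest = SHA256.hash(data: Data(accountID.utf8))
        let hex = digest.map { String(format: "%02x", $0) }.joined()
        return hex == targetHash
    }

    func deliverPayload() {
        print("payload runs for the target only")
    }

    // A reviewer's test account never matches, and the hash alone
    // doesn't say whose would:
    if isTarget("reviewer-test-account") {
        deliverPayload()
    }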


Definitely can't.

Though that's another story: targeted attacks will always find a way to slip through.

This approach would still protect the general public a bit better than the current "screening" does.


> a crowdsourced group of real distributed humans testing apps for anything malicious

I guarantee they would get sued or attacked into oblivion within a year or two. Look at what happens to Krebs: he gets SWATted all the time.

They would need some good backing, and have to be subject to fairly stringent controls, themselves.


Sued by whom?


By anyone they get a false positive on, or even a real positive, if it's an outfit with lawyers on speed-dial.

You see that all the time, even with officially sanctioned enforcement agencies. Amateurs won't have a chance.


Yeah. It's really, really hard to prevent actors from coming up with clever ways to circumvent the automatic checks. But that just means Apple needs to play the cat-and-mouse game. That's what they always say their cut is for, no?



