
They're presumably already 99% of the way there. If the Secure Enclave can be updated on a locked phone, all they need to do is stop allowing that, right?

To me, the more profound consideration is this: if you use a strong alphanumeric password to unlock your phone, there has been nothing Apple could do for many years to unlock it. The AES keys that protect data on the device are derived from your passcode via PBKDF2 (and entangled with a device-unique hardware key). These devices were already fenced off from the DOJ, as long as their operators were savvy about opsec.
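
For illustration, here's a rough sketch in Python of what a passcode-derived key looks like. This is only the general shape, not Apple's implementation: their derivation also entangles a per-device hardware UID key inside the Secure Enclave, which is what forces brute forcing to happen on the device itself.

  import hashlib, os

  # Illustrative only: derive a 256-bit key from a passcode with PBKDF2-HMAC-SHA256.
  # Apple additionally mixes in a device-unique UID key that never leaves the
  # Secure Enclave, so the work below cannot be farmed out to a GPU cluster.
  salt = os.urandom(16)                  # stored on the device
  passcode = b"correct horse battery staple"
  key = hashlib.pbkdf2_hmac("sha256", passcode, salt, 100_000, dklen=32)
  # 'key' would then wrap the actual file-encryption keys rather than encrypt files directly.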



The real lynchpin here is not hardware, but iCloud. Apple can pull data out of an iCloud backup, and the only reason the San Bernardino case even got off the ground is because somebody at the county screwed up and effectively prevented the backup from occurring.

iCloud backups can be secured so not even Apple can get in them, but that is fundamentally much harder (the key can't be hardware-entangled and still allow a restore to a new device), and it would significantly complicate iCloud password changes. I'm sure they are working on it, but it is nontrivial.

That (software) problem is the real reason 99% of users are still exposed; as you say, the hardware and Secure Enclave holes are basically closed.


> iCloud backups can be secured so not even Apple can get in... I'm sure they are working on it, but it is nontrivial.

There is no way they are working on this. It is an intentional design decision that Apple offers an alternative way to recover your data if you lose your password.

Or if you die without telling your next-of-kin your password. Most people do not actually want all of their family photos to self-destruct when they die because they didn't plan for their death "correctly". That would be a further tragedy for the family. (Most people don't even write wills and a court has to figure things out.)

Making data self-destruct upon forgetting a password (or dying) is not a good default. It's definitely something people should be able to opt-in to in particular situations, but only when they understand the consequences. So it's great news that in iOS 9.3 the Notes app will let you encrypt specific notes with a key that only you know. But it's opt-in, not the default.


Has Apple ever given next-of-kin access to someone's iCloud account after they died? I've never heard of this, and I don't expect Apple to be responsible for preserving photos. You can already have shared photo streams, and there are many solutions for other data that could potentially be lost that don't involve Apple getting directly involved in these cases.


The idea of Apple (or some other big corporation) providing my protected personal data to my next-of-kin is more frightening than the idea that the government has the ability to spy on me while I'm alive. It's the most morbid kind of subliminal marketing that could possibly exist.

"Hey, we're really sorry about fluxquanta's passing. Here is his private data which he may or may not have wanted you to see (but we'll just assume that he did). Aren't we such a caring company? Since we can no longer count on him to give us more money when our next product comes out, keep us and our incredibly kind gesture of digging through the skeleton closets of the dead in mind when shopping for your next device."


The thing is, you can opt in to destroy-when-I-die security. You can encrypt notes or use a zero-knowledge backup provider (Backblaze offers this). But for most people that's the wrong default for things like decades of family photos.

In absence of a will it would be terrible to assume that a person meant to have all their assets destroyed instead of handed down. It should be an explicit opt-in. The default should be, your stuff is recoverable and inheritable.


> But for most people that's the wrong default for things like decades of family photos.

That seems like a weird assumption, that there'd be a single person with access to an account containing the only copies of decades of family photos. If someone else has account access or if there are copies of the photos elsewhere, then "destroy-when-I-die" isn't a big problem.

On the other hand, it also violates the way that I think things would usually work in the physical world. That is, if there's a safe that only the deceased had the combination to, I can still drill it to access the contents.


Far from a "weird assumption", that is exactly how most families operate. There's a family computer with all the photos on it that's always logged in, but maybe only dad or mom knows the iCloud password ("hey mom what's the password again?..") Or maybe they are split between family member iPhones, and they just show them to each other when they want to see them.

It would be a pretty big bummer for most families if, when a family member passed away, all those memories went with them. That's probably not what they would have wanted. Or even if they just forgot their password: when they reset it, all their photos go poof.

You and I might understand the consequences, but for most people it should really be a clear opt-in: "you can turn on totally unhackable encryption, but if you lose your password you are totally screwed".


> that is exactly how most families operate.

Do you have non-anecdotal evidence for that? Among my own friends and family, there are some images that only exist on one device or account, but most of the stuff likely to draw interest ends up somewhere else (a shared Dropbox account, e-mail attachments, on Facebook, copied onto some form of external storage).

There are likely some demographic groups that are more likely to behave one way than the other, and that could perhaps account for our differing experiences.

On second thought, it is the easiest way to use the account (each person having an account on each device). I wonder what percentage of people who would benefit from it actually use the Family Sharing option?


I see what you're saying, and I know that I'm the odd man out here. My original comment stems mostly from my own messed up familial situation. My parents, (most) siblings and I don't get along very well, and I'm single.

If I were to die today I wouldn't want my personal photos, online history, or private writing to fall into the hands of my family. Hell, I don't really even want my physical assets to go to them (something I really should address in a will one of these days to donate it all to charity).

There has been a lot of fighting and backstabbing over who gets what when relatives have died in the past, and the more emotional items (like photographs) have been used to selfishly garner sympathy online through "likes" and "favorites" and it makes me sick. My position is that if you didn't make the effort to get to know a person while they were alive, you should lose the privilege of using their private thoughts for your own emotional gain after they're gone. And I do realize how selfish that sounds on my part, but in my current position I feel like it's justified. If I got a long term partner I would probably change my mind on that.

So yes, an opt-in would be ideal for me, but I don't think many online companies provide that right now.


That's pretty standard, though: once you no longer exist, all your private data, all your private money, all your private goods become part of your estate, to be disposed of by your executor according to your will.


Things like money and personal physical property, sure, I understand that. But I feel like personal protected (encrypted) data should be treated differently. I'm thankful Google at least has options[0] available for their ecosystem, but I guess I'm going to need a will to cover the rest.

[0]https://support.google.com/accounts/answer/3036546?hl=en


Historical, pre-digital precedent:

In the case of sudden death, there would not have been any way to securely dispose of any private "data". So your private information, diaries, works you purposefully didn't publish, unfinished manuscripts you abandoned - everything was handed down to your estate, and more often than not used against your intent.

I'm not entirely clear whether your will could specify such disposal to be done, or could prohibit people from at least publishing these private notes and letters if not reading them, in any kind of binding and permanent way.


Yes. http://www.cnet.com/news/widow-says-apple-told-her-to-get-co...

Shared photo streams are only a solution if they are used. Most people don't even write wills.

If you fail to write a will should the state just burn all your assets, assuming that's what you meant? No, that's the wrong default. Burn-when-I-die should be opt-in for specific assets, not the default.

And the good news is Apple is providing opt-in options like secure notes. Perhaps even backups too (3rd parties already do). But only after presenting the user with a big disclaimer informing them of the severe consequences of losing the password.


> Farook disabled the iCloud backup six weeks prior to the attack

http://6abc.com/news/senior-official-stresses-feds-need-to-u...


They did not even attempt to get it to send a fresh backup to iCloud before they reset the password, making that impossible.

[0] http://daringfireball.net/2016/02/san_bernardino_password_re...


On the other hand, "turn it on and let it do its thing" is a terrible idea from a forensics standpoint. You want to lock the account down ASAP to prevent potential accomplices from remote wiping your evidence.


In an alternate universe it may have been a plausible deliberate measure, but in this universe, it was a fuckup.


The exact reason I simply don't use iCloud backup.


Call me a cynic, but I'm not buying "somebody at the county screwed up".


Indeed, "The County was working cooperatively with the FBI when it reset the iCloud password at the FBI's request." https://twitter.com/CountyWire/status/700887823482630144


The "screwup" grandparent is suggesting is that the county didn't think to disable the setting that would let employees turn off iCloud backups for their devices, however many months or years ago, not that they've messed up during the investigation now.


No, they're probably referring to this, from the second letter,

"One of the strongest suggestions we [Apple] offered was that they pair the phone to a previously joined network, which would allow them to back up the phone and get the data they are now asking for. Unfortunately, we learned that while the attacker’s iPhone was in FBI custody the Apple ID password associated with the phone was changed. Changing this password meant the phone could no longer access iCloud services."

http://www.apple.com/customer-letter/answers/


It's not 99%; adoption of iCloud backups is not nearly that high.


Uhh, well it's probably pretty high. Considering their adoption rate for new software is sitting somewhere around 95%. iCloud backups default to on - just like automatic updates - when the user sets up their phone. Not to mention most Geniuses would ask to turn on iCloud backup when upgrading the device for convenience.


Well the specific phone that started this controversy didn't have any iCloud backups, so regardless of the percentage it doesn't pertain here.


It did have iCloud backups, but the latest was six weeks prior. The FBI requested the iCloud password be reset, which prevented a new iCloud backup they could have subpoenaed.


Naive question perhaps, but why wouldn't they be able to employ the same hardware for iCloud as on the phone?


Uploading the encrypted content has no value as backup, if you don't have keys that can decrypt it. If the keys are backed up as well, all security is gone.


Is it that hard to have the phone display an encryption key and have the user copy it to dead tree?

As above, not a good idea for a default, but don't see why it wouldn't be technically viable for opt-in protection.


The hardware key is designed to be impossible to extract from the device. That's part of the security, so you can't simply transfer the data to a phone where protections against brute-forcing the user key have been removed.


> An encryption key

To spell it out (1) request new encryption key from device (let's call it key4cloud); (2) encryption key generated, displayed for physical logging by the user, & stored in the secure enclave; (3) all normal backups to iCloud are now encrypted via key4cloud; (4) user loses phone; (5) user purchases new phone; (6) new phone downloads data; (7) user enters key4cloud from physical notes & decrypts backup

Yes, it requires paper and a pencil and user education (hence the opt-in). But it's also incredibly resistant to "Give us all iCloud data on User Y."
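
A minimal sketch of that flow (purely illustrative; "key4cloud" is just the placeholder name from above, and this is not how Apple's iCloud backups actually work):

  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  # (1)-(2): generate key4cloud on-device and show it to the user to write down
  key4cloud = AESGCM.generate_key(bit_length=256)
  print("Write this down and keep the paper safe:", key4cloud.hex())

  # (3): every backup is encrypted with key4cloud before it is uploaded
  def encrypt_backup(backup_bytes: bytes) -> bytes:
      nonce = os.urandom(12)
      return nonce + AESGCM(key4cloud).encrypt(nonce, backup_bytes, None)

  # (6)-(7): on a new phone, the user types the key back in from paper
  def decrypt_backup(blob: bytes, key_hex: str) -> bytes:
      key = bytes.fromhex(key_hex)
      nonce, ciphertext = blob[:12], blob[12:]
      return AESGCM(key).decrypt(nonce, ciphertext, None)

Apple would only ever hold ciphertext, which is exactly why this shape of design resists "give us all iCloud data on User Y."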


It can be the same hardware, but I believe that's not usually what's meant by "hardware-based encryption". The point is that the private keys never leave the hardware of the phone, thus making it secure. So they could employ the same hardware, but that hardware does not have the necessary keys.


Does Apple owning the iCloud data center have an impact?


Why would they have made the Secure Enclave allow updates on a locked device without wiping the key in the first place? Either they didn't think it through, assumed they would never be compelled to use it as a backdoor, or perhaps they were afraid some bug could end up having catastrophic consequences of locking a billion people out of their phones with no way to fix it? Do we even know for certain that the Secure Enclave on the 6s can be reflashed on a locked phone without wiping the key?


From what's been said, it seems like it was made to be updated so that Apple could easily issue security updates. They've already increased the delay between repeated attempts at password entry. Probably they were worried about vulnerabilities or bugs that hadn't been found and wanted to maintain debugging connections to make repairs easier. A tamper-resistant self-destruct mechanism with no possibility of recovery introduces extra points of failure, and it seems that until now, they didn't think it was necessary.

Look at the controversy over the phone not booting with third-party fingerprint reader repairs as an example. People were upset when they found out that having their device worked on could make it unbootable, but Apple was able to easily fix it with a software update. If it had been designed more securely, it might have wiped data when it detected unauthorized modifications, which would have meant even more upset people. Now that this has become a public debate, there will be a very different response to making it more secure.


How much easier? If all they had to do to not have access to it themselves is to ask the user for his password when there's a new update, that's hardly that inconvenient...


I'm not saying that it was the right thing to do in hindsight, but I get a little nervous even when updating a small web server, so I understand the tendency to leave repair options open on something as big as iPhones. Real hardware-based security is about more than just asking for a password. It means making the device unreadable if it's been disassembled or tampered with, and that could have unintended side effects if any mistakes are made or something is overlooked. It's definitely worth pursuing considering the political situation the world is in right now.


As I understand it, Secure Enclave firmware is just a signed blob of code on main flash storage that's updated along with the rest of iOS, which can be done via DFU without pin entry. I assume DFU updates are very low level, with no knowledge of the Secure Enclave or ability to prompt the user to enter their pin.

Making the DFU update path more complex increases the risk of bugs and thus the risk of permanently bricking phones.

You could imagine an alternative where on boot the Secure Enclave runs some code from ROM which checks that a hash of the SE firmware matches a previously signed hash, which is only updated by the Secure Enclave if the user entered their pin during the update. If it doesn't match, either wipe the device or don't boot until the previous firmware is restored.

This way Secure Enclave firmware updates and updates via DFU are still possible, but not together without wiping the device.
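
Roughly what that boot-time check could look like, as pseudocode-ish Python. This is only to make the proposal above concrete; it is not how the Secure Enclave actually boots, and the function names are made up:

  import hashlib

  def boot_check(se_firmware: bytes, pinned_hash: bytes, boot, wipe):
      # ROM code: compare the installed SE firmware against the hash that was
      # pinned the last time the user authorized an update with their PIN.
      if hashlib.sha256(se_firmware).digest() == pinned_hash:
          boot()
      else:
          # Firmware changed without PIN entry (e.g. via DFU): refuse to use
          # the existing keys -- wipe, or stay down until the old firmware returns.
          wipe()

  def authorize_update(new_firmware: bytes, user_entered_pin: bool) -> bytes:
      # Only re-pin the hash when the user proved presence with their PIN.
      if not user_entered_pin:
          raise PermissionError("PIN required to update Secure Enclave firmware")
      return hashlib.sha256(new_firmware).digest()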


Let us direct our attention to the superhero Mike Ash and his latest post on secure enclave. https://www.mikeash.com/pyblog/friday-qa-2016-02-19-what-is-...

Honestly, this is really the shit..


Yeah, the key question is how Secure Enclave firmware updates work, and whether they can be prevented without pin entry. One former Apple security engineer thinks they are not subject to pin entry: https://twitter.com/JohnHedge/status/699892550832762880


> or perhaps they were afraid some bug could end up having catastrophic consequences of locking a billion people out of their phones with no way to fix it?

That basically happened (at a smaller scale) just last week. When Apple apologized and fixed the "can't use iPhone if it's been repaired by a 3rd party" thing, the fix required updating phones which were otherwise bricked. It's not an unreasonable scenario.


If the device has a manufacturer's key and the user's key, then it's basically down to simple Boolean logic: does the innermost trusted layer allow something to be installed or altered if it is authorized by the manufacturer's key OR your key? Or the manufacturer's key AND your key? Or just your key? (With a warning if it has no other key?)
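
In code, each of those policies is literally a one-liner; a hypothetical sketch, just to show how small the design space is:

  # Hypothetical update-authorization policies for the innermost trusted layer.
  def policy_or(mfr_signed: bool, user_signed: bool) -> bool:
      return mfr_signed or user_signed    # status quo: the manufacturer alone suffices

  def policy_and(mfr_signed: bool, user_signed: bool) -> bool:
      return mfr_signed and user_signed   # nothing gets pushed without your key too

  def policy_user_only(mfr_signed: bool, user_signed: bool) -> bool:
      return user_signed                  # you alone control the device (warn if no other key exists)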


Underrated post.


>If the Secure Enclave can be updated on a locked phone, all they need to do is stop allowing that, right?

That probably also means removing most debugging connections from the physical chip, and making extra sure you can't modify secure enclave memory even if you desolder the phone.


A lot of that stuff was already in the original threat model for the Secure Enclave ("assume the whole AP is owned up").


No one has been talking about the fact that you can rebuild transistors on an existing chip. It's very high tech stuff, the sort that Intel uses to repair early engineering samples painstakingly, but it is used.

You decap the chip with HF to expose the die, and then, using focused ion beams and a million-dollar microscope setup, you can rearrange the circuits. So, if the NSA absolutely had to have the data on the chip, they could modify it to make it sing. If, say, they knew an iPhone had the location of Bin Laden on it, they could get the goods without Apple.


They're not anywhere near 99% of the way there; they've destroyed the heterogeneous decentralized ecosystem that broad security requires.

Locking themselves out of the Secure Enclave isn't anywhere near sufficient. As long as the device software and trust mechanisms are totally opaque and centrally controlled by Apple, the whole thing is just a facade. There's almost nothing Apple can't push to the phone, and the auditability of the device is steadily trending towards "none at all".

If the NSA pulls a Room 641A, we'd never know. If Apple management turns evil, again, we'd never know. If a foreign state uses some crazy tempest attack to acquire Apple's signing keys... again, we'd never know.


Then again, nobody is suing over Android phone crypto, and as recently as last November bugs were discovered where doing things like entering an excessively long password allowed you to bypass the lock screen.


In the Android world too many parties have the keys to the kingdom, and people who protect their devices take that into consideration. Also, once the bootloader is unlocked and custom firmware put on, all bets are off. I have yet to see a viable attack against sufficiently strongly protected LUKS at rest.


I think from the context it's pretty clear that "hack" in this case is referring to "being forced to unlock". Yes, they could still deliberately break encryption for future OSes and phones, but the same could be said of any software, open or closed source.

I don't think acting like an open ecosystem is the be-all and end-all of security is productive. Most organizations (let alone individuals) don't have the resources to vet every line in every piece of software they run. Software follows economies of scale, and for hard problems (IE, TLS, font rendering, etc) will only have one or two major offerings. How hard would it be to introduce another heartbleed into one of those?


How does a 3rd-party researcher find the next heartbleed if they can't even decrypt the binaries for analysis?


Binaries can be converted back to assembly and quite often even back to equivalent C; bugs are most often found by fuzzing (intentional or not), which does not require source code. The difference between open and closed source is that open is more often analysed by white hats, who would rather publish vulnerabilities and help fix them, while closed is analysed by black hats, who would rather sell or exploit them in secret.


You misunderstand; if you can't even decrypt the binary, you can't disassemble, much less run a decompiler over it.

As someone who has done quite a bit of reverse engineering work, I have no idea how I'd identify and isolate a vulnerability found by fuzzing without the ability to even look at the machine code.


If it runs, it has to be decrypted (at a current level of cryptography); at most it is obfuscated and the access is blocked by some hardware tricks which may be costly to circumvent, but there is nothing fundamental stopping you.


> don't have the resources to vet every line in every piece of software they run

For the same reason: I do not independently vet every line of source code I run, but I still reasonably trust my system magnitudes more than anyone could - and I argue, nobody can - trust proprietary systems. And that is because while I personally may not take the initiative to inspect my sources, I know many other people will, and if I were suspicious of anything I could investigate.

Bugs like Heartbleed just demonstrated... well, several things:

1. Software written in C is often incredibly unsafe and dangerous, even when you think you know what you are doing.

2. Implementing hard problems is not the whole story, because you also need people who comprehend said problems, the sources implementing them, and have reason to do so in the first place.

Which I guess relates back to C in many ways.

I look forward to crypto implemented in Rust and other memory-, concurrency-, and resource-safe languages. There is always some attack surface where a mistake can compromise any level of security - if you move the complexity into the programming language, the burden falls on your compiler. But in the same way that you can only trust auditable, heavily used production sources, nothing is going to be more heavily used and scrutinized, at least by those interested, than the languages themselves.


C is not the problem -- you can write a bug in every language. Even with memory safety and a perfect compiler, a bug may direct the flow in a bad direction (bypassing auth, for instance) or leak information via a side channel.


We all understand that as long as Apple can update the phone, they can do all kinds of bad things.

The important thing about the Secure Enclave is that it pushes security over the line, so that an attacker has to compromise you before you do whatever it is that will get you on somebody's shitlist.


> if you use a strong alphanumeric password to unlock your phone, there is nothing Apple has been able to do for many years to unlock your phone

Is this true even if you use Touch ID?


Probably not. If you're dead, they probably have your fingers. If you're alive, they can compel you to unlock the device with your fingerprint.

The only point I'm making is that Apple already designed a cryptosystem that resists court-ordered coercion: as long as your passcode is strong (and Apple has allowed it to be strong for a long time), the phone is prohibitively difficult to unlock even if Apple cuts a special release of the phone software.


Using a strong PIN is pretty annoying, and it's a relatively visible signal when using the phone on the street, etc. So it can be a good filter (maybe via street cams) for flagging suspicious people - which isn't a bad goal for law enforcement.


That sounds good until you remember the Bayesian Base Rate Fallacy: there are very few terrorists (the base rate of terrorism is very low), so filtering on "people with strong passphrases" is going to produce an overwhelming feed of false positives.
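
A quick Bayes back-of-the-envelope with made-up numbers shows how bad it gets:

  # Toy numbers, purely illustrative: 1 in 1,000,000 people is a terrorist,
  # 5% of the general population uses a strong passphrase, and (generously)
  # every terrorist does.
  p_t = 1 / 1_000_000
  p_strong_given_t = 1.0
  p_strong_given_not_t = 0.05
  p_strong = p_strong_given_t * p_t + p_strong_given_not_t * (1 - p_t)
  p_t_given_strong = p_strong_given_t * p_t / p_strong
  print(p_t_given_strong)   # ~0.00002, i.e. roughly 50,000 false positives per true hit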


Be careful not to take the base rate fallacy too far. With enough difference in likelihood, even a small base rate won't prevent an effect from being significant, and regardless of the base rate you'll still get some information out of it; it might just not be as much as you wanted.


Nobody cares that you're using an alphanumeric passcode on your iPhone.

Some corps require or strongly encourage it. My employer does.

And most parents I know use alphanumeric to keep their kids from wiping their phones and iPads just by tapping the numbers. (A four digit number code auto-submits on the 4th tap, so all it takes is 40 toddler taps. An alphanumeric code can be any length and won't submit unless the actual submit button is tapped.)


Corporate email profiles on BYOD phones often enforce a long passcode requirement, so you've got a lot of Fortune 500 sales guys to screen out if you're stopping and searching anybody with a suspiciously long password.


I'm at a loss as to how an alphabet agency can determine whether a weak or strong passcode was used. How does a PIN get stored on the phone? Surely not as the plain text of a 4-digit PIN. If they do any encryption of the 4-digit PIN, how would it appear any different from a significantly stronger passcode?


The grandparent post was about determining the complexity of a PIN/Passcode by watching it being entered - more screen interaction = more complex.


It uses a different screen. If you have a 4 digit pin, the entry screen looks a lot like the phone dialer, with the numbers 0-9.

If you have a stronger passcode, you see a full keyboard instead.


The prompt is different based on the type of code you use.


Except that with Touch ID, you only have to enter it when you reboot the phone, or if you've mis-swiped 5 times. I've had a strong pin for a couple of years, and really don't find it even a slight inconvenience (in the way that I use a super-weak password for Netflix, as entering passwords on an Apple TV is a real pain)


People who desire to be secure in their electronic papers and effects are not and should not be considered "suspicious people".


If they have access to a live finger for the TouchID, sure they can bypass - but they could do that with the $5 guaranteed coercion method as well [1].

Copying a good fingerprint from a dead finger or a randomly placed print is not easy [2]. It's hard, doable but you get 5 tries so if you screw up, you have thrown away all the hard work of the print transfer.

All bets are off if the iPhone is power-cycled. Best bet if you're pulled over by authorities or at a security checkpoint is to turn off your iPhone (and have a strong alphanumeric passcode).

[1] https://xkcd.com/538/ [2] https://blog.lookout.com/blog/2013/09/23/why-i-hacked-apples...


> All bets are off if the iPhone is power-cycled. Best bet if you're pulled over by authorities or at a security checkpoint is to turn off your iPhone (and have a strong alphanumeric passcode).

Excellent advice. Even better, if you're about to pass through US customs and border patrol, back up the phone first, wipe, and restore on the other side. Of course, this depends on your level of paranoia. I am paranoid.


If you're paranoid, making a complete copy of all your secrets on some remote Apple or Google "cloud" where the government can get at it trivially is the exact opposite of what you want to be doing.


If you're paranoid, you don't have a cell phone.


Or you have several, and send them on trips without you, etc.


Well, yeah, if you back it up with a 3rd party backup tool, you are trusting the 3rd party.

I recommend you make a backup to your laptop, which you then encrypt manually. That way the trust model is: you trust yourself. Then you can do whatever you want with the encrypted file. Apple's iCloud is perfectly fine at this point.

The real challenge is to find a way to restore that backup, because you have to be on a computer you trust. If you decrypt the backup on a "loaner" laptop, your security is broken.

If you decrypt the backup on your personal laptop but the laptop has a hidden keylogger installed by the TSA or TAO, your security is broken.

It would be necessary to back up the phone on the _phone_ _itself_. Then manually encrypt the file (easy to do). Then upload to iCloud. At this time, no such app exists for iOS.

Since you plan to restore the backup to the phone anyway, it's no problem to decrypt a file on the phone before using it for the restore.


> I recommend you make a backup to your laptop, which you then encrypt manually.

You mean your laptop that was manufactured by a 3rd party, with a network card that was manufactured by a 3rd party? And you're using encryption software that, even if it's open source, you probably aren't qualified to code review. I'm not downplaying the benefit of being careful, but unfortunately you can keep doing that pretty much forever.


All laptops and cameras entering the US are subject to search and seizure.


Well, you can make an encrypted backup via iTunes (that would involve firing up iTunes though, shudders).


There's a reason Google decided to encrypt all communication between machines inside their datacenters.


Are you sure it's not just communication between data centres?


Probably not. FB is doing the same thing. In most cases your app or service does not actually know if the remote service it is talking to is local or in another DC. Yes, you can find out if you need to, but that requires contacting another service and introduces some delay and latency. Use a service router to try to keep the calls local to a rack or a DC, but you know that if there are problems with local cells you might get routed across the country so start with the assumption that _all_ connections get encrypted even if the connection is to localhost.


backup ==> zip/rar => encrypt with PGP or whatever => split => upload various parts to different cloud storage providers => wipe device => pass checkpoint => download => combine => decrypt => uncompress => restore.

It's not trivial, but it's sure easy to do in this day and age.
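
A rough sketch of the local half of that pipeline (placeholder paths; the uploads to and downloads from the various providers are left out):

  import shutil
  from cryptography.fernet import Fernet

  CHUNK = 50 * 1024 * 1024  # split into 50 MB parts for different providers

  def pack(src_dir: str, key: bytes) -> list[bytes]:
      archive = shutil.make_archive("/tmp/backup", "zip", src_dir)    # backup => zip
      blob = Fernet(key).encrypt(open(archive, "rb").read())          # => encrypt
      return [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]  # => split

  def unpack(parts: list[bytes], key: bytes, dest_zip: str) -> None:
      blob = Fernet(key).decrypt(b"".join(parts))                     # combine => decrypt
      open(dest_zip, "wb").write(blob)                                # then unzip/restore

  key = Fernet.generate_key()  # travels with you (or in your head), not with the data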


What data is likely on someone's phone that is not also in the cloud one way or another?


I wonder this too. The only personal data on my phone are my text and email messages. I'm not sure how other data would get onto the phone.


Wiping the phone doesn't help you. Using the strong password renders the information inaccessible, at least as inaccessible as your phone backup is. Touch ID isn't re-enabled until the phone's passcode is used. Presumably if the authorities have access to your phone's memory they also have access to your laptops, and neither will do them any damn good.

And it's not paranoia if there's a legitimate threat; that's just called due diligence. ;)


> Touch ID isn't re-enabled until the phone's passcode is used.

Do the docs confirm that there is no way around this? I'd guess generating the encryption key requires the passcode, which is discarded immediately, and Touch ID can only "unlock" a temporarily re-encrypted version which never leaves ephemeral storage?


From the iOS Security Guide - How Touch ID unlocks an iOS device;

  If Touch ID is turned off, when a device locks, the keys for Data Protection class
  Complete, which are held in the Secure Enclave, are discarded. The files and keychain
  items in that class are inaccessible until the user unlocks the device by entering his
  or her passcode.

  With Touch ID turned on, the keys are not discarded when the device locks; instead,
  they’re wrapped with a key that is given to the Touch ID subsystem inside the Secure
  Enclave. When a user attempts to unlock the device, if Touch ID recognizes the user’s
  fingerprint, it provides the key for unwrapping the Data Protection keys, and the
  device is unlocked. This process provides additional protection by requiring the
  Data Protection and Touch ID subsystems to cooperate in order to unlock the device.
  The keys needed for Touch ID to unlock the device are lost if the device reboots
  and are discarded by the Secure Enclave after 48 hours or five failed Touch ID
  recognition attempts.
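
In other words, while locked the class keys aren't kept around in the clear; they're wrapped. Conceptually something like the following, using a standard AES key wrap purely as an illustration (not Apple's actual mechanism):

  import os
  from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

  class_key = os.urandom(32)     # a Data Protection class key
  touchid_key = os.urandom(32)   # key held by the Touch ID subsystem inside the SE

  # On lock (with Touch ID enabled): wrap the class key instead of discarding it.
  wrapped = aes_key_wrap(touchid_key, class_key)

  # On a successful fingerprint match: unwrap and unlock.
  assert aes_key_unwrap(touchid_key, wrapped) == class_key

  # On reboot, after 48 hours, or after 5 failed attempts, touchid_key is
  # discarded, so 'wrapped' becomes useless and only the passcode path remains.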


Touch ID, I believe, unlocks the passcode-derived keys so the phone can use them to log in, but Touch ID itself is not enabled until you enter the passcode once, presumably because it isn't actually stored on the device in a readable way.


OK, I guess the effect is the same (as long as the passcode isn't recoverable until after startup). Thanks.


Could the "code equivalent" of your fingerprint be stolen by a rogue app if it's allowed to read it? I don't have a touchId phone but have wondered what would happen if your "print" is stolen -- passwords can at least be changed.


Speaking as an App Developer, we cannot touch stuff like that. We're allowed to ask Touch ID to verify things and process the results, but we don't actually get to use the Touch ID system. It's similar to how the shared keychain is used: We can ask iOS to do things, but then must handle any one of many possible answers. We don't actually see your fingerprint in any way.

Now Cydia and 3rd party stuff? I have no clue.


iOS itself does not see fingerprints, it refers to SE.


Wouldn't surprise me if true, iOS as a whole is built in a very modular fashion when it comes to the different components of the OS and developers only get access to what Apple deems us worthy of, hehe. Not that I want access to Touch ID, I much prefer to not have access to that...


Can non-US citizens be coerced into giving up their passcode?


Depends on if they're at a border crossing or in the interior of the country. Laws apply to citizens and non-citizens alike. If you haven't been admitted to the country, about the most they can do is turn you away at the border checkpoint and put you on the next flight back to your home country.


and if you're a citizen of the country you're trying to enter...


Then the TSA drops a paper clip while you bend over and pick it up


No, at least, not by the DOJ, and not for any use in a court of law.


We wrote about this in our border search guide and concluded that there is a risk of being refused admission to the U.S. in this case (in the border search context) because the CBP agents performing the inspection have extremely broad discretion on "admissibility" of non-citizens and non-permanent residents, and refusing to cooperate with what they see as a part of the inspection could be something that would lead them to turn someone away. (However, this is still not quite the same as forcing someone to answer in the sense that they don't obviously get to impose penal sanctions on people for saying no.)


One reason I'll never visit the states.

If I absolutely had to I just wouldn't take a phone/laptop with me.


" they don't obviously get to impose penal sanctions on people for saying no"

I wonder if there are any negative effects associated with being refused entry by CBP? Could it be the case that if you are refused entry once, in the future they will be more likely to refuse you entry? If so, that's a fairly significant penalty/power that the CBP agent has.


> I wonder if there are any negative effects associated with being refused entry by CBP? Could it be the case that if you are refused entry once, in the future they will be more likely to refuse you entry? If so, that's a fairly significant penalty/power that the CBP agent has.

Yes, some categories of non-citizen visitors (I don't remember which) are asked on the form if they have ever been refused entry to the U.S. (and are required to answer yes or no). If they're using the same passport number as before, CBP likely also has access to a computerized record of the previous interaction.


Plenty of countries will ask if you've ever been refused entry to any country. And you're also generally automatically excluded from any Visa Waiver Programme from then on too. So it's a major issue.


> If they're using the same passport number as before, CBP likely also has access to a computerized record of the previous interaction.

(They might also be able to search their database by biographical details such as date of birth, so getting a different passport may not prevent them from guessing that you're the same person.)


It is not a good bet if you're pulled over by the authorities to be doing something with your hands that they can't reliably identify as different from preparing a weapon. Particularly if not white.


This would prevent people from recording police abuse ...


Power-cycling can be done relatively quickly - in 10sec with two fingers (no swipe), or 5 sec + swipe if you only have one hand available.


> "Copying a good fingerprint from a dead finger or a randomly placed print is not easy [2]. It's hard, doable but you get 5 tries so if you screw up, you have thrown away all the hard work of the print transfer."

You get plenty of tries to perfect the technique, before using it on the actual device.

You acquire identical hardware and "dead finger countermeasures" (does the iphone employ any? Some readers look for pulses and whatnot, I don't know if the iphone does). You then practice reading the fingerprint on that hardware until you are able to reliably get a clean print and bypass any countermeasures. Only then do you try using the finger on the target phone.

You might still fuck it up, and you only get 5 chances on the target hardware. But with practice on the right hardware, I see no reason why you couldn't get it.


There's also a 48 hour window and touch ID doesn't work initially after booting.

https://support.apple.com/en-us/HT204587

Great design.


Not only the amount of work, technology and thought that have gone into this, but also how well this has been implemented is mind-blowing.


It really shows the staggering difference between having a Samsung phone with fingerprint security versus an iPhone.


Is it only five fails on TouchID to delete data? I don't have the option to delete the data enabled on my iPhone... but it often takes more than five tries to just get it to work on my finger that is legitimately registered in touchID.


After five failures you cannot use Touch ID to unlock and will instead need the passcode to access the phone again. This means that any approach to fooling the fingerprint reader needs to be done within five tries.


No, it's five fails before Touch ID stops working until after a passcode is entered again.



Given the 6 tries, is there any benefit to a strong password?


It's my understanding that the current battle is about the request to bypass the retry cap.


  All bets are off if the iPhone is power-cycled.
Except, you don't have explicit control over the iPhone's battery, so how do you know if the power is actually cycled?


If the phone has been switched off, or if >48h have passed since the last unlock.

Also remember that rubber-hose cryptanalysis is always an option.


Can you be convicted in the US based on evidence obtained with physical torture?

Edit: Looks like the answer is it depends and not a resounding no

http://www.nolo.com/legal-encyclopedia/evidence-obtained-thr...


Of course you can. As long as the courts can be persuaded that there is no causal nexus between the torture and the evidence, or if the torture actually isn't legally torture. That assumes that the defendant can show (or is even aware) the torture actually took place.

Examples:

* prolonged solitary confinement: not legally torture

* fellow prisoner violence: not legally torture, no nexus

* prolonged pre-trial confinement: not really torture, but we may as well include it

* waterboarding/drowning: not legally torture? (Supreme Court declined to rule)

* stress positions: cannot show it took place

* parallel construction: cannot show / not aware


No, you cannot. Evidence derived from facts learned from torture is also excludable.


Sure, you can. It all depends on who gets to define "torture."

If they can find a judge who believes the iron maiden isn't torture while the anal pear is, then guess what... the government will use the iron maiden.

Even if they can't find such a pliable jurist, they'll have no problem getting a John Yoo to write an executive memo that justifies whatever they want to do to you, and let the courts sort it out later. There's no downside from their point of view.


> getting a John Yoo to write an executive memo

The memos didn't provide de iure indemnity. There is no constitutional basis; in fact, the proposition that a memo can supersede the Constitution is idiotic on its face.

The failure is the de facto doctrine of absolute executive immunity. It has two prongs: 1. "When the president does it, that means that it is not illegal." 2. When the perpetrator follows president's orders, also not illegal.

Nevertheless, since there is no legal basis, there is nothing preventing the next government from prosecuting them.


> The memos didn't provide de iure indemnity. There is no constitutional basis; in fact, the proposition that a memo can supersede the Constitution is idiotic on its face.

Yes, and that's what I meant by "let the courts sort it out later." The Constitution's not much help either way, being full of imprecise, hand-waving language and vague terms like "cruel and unusual." It was anticipated by the Constitution's authors that it would be of use only to a moral government.

> Nevertheless, since there is no legal basis, there is nothing preventing the next government from prosecuting them.

I wonder if that's ever happened in the US? Does anyone know?


I would disagree. The Constitution is a bulwark against tyranny. The US has successfully prosecuted waterboarding in the past.

It usually only happens when the rule of law is suspended and then resumed. You're a young country, so maybe it hasn't happened before. Robert H. Jackson was an American, though ;-)


Torture to get detailed info, use details to establish plausible parallel construction.

Enter parallel-constructed information as court-sanitized evidence.


TouchID disables itself after 48 hours and requires the password again.


Also after 5 failed attempts - you can test with an unregistered finger


Or if the phone runs out of batteries and restarts.


Does TouchID have any protections against your finger unlocking your phone post-mortem?


No, although I'd love to see a HealthKit app that uses your Apple Watch as a dead man's switch, and disables Touch ID or powers the phone off in the event the watch is removed or your pulse is no longer detected.


That wouldn't work well with loose wrists and other similar edge cases.


Then those people could turn it off. But it would be a nice option.


Without a wristprint for the watch to read, what prevents somebody else from wearing it?

The pulse and skin conductivity might change, but are either of those reliable enough metrics for such an application?


If you take the watch off, it automatically locks. I wouldn't mind it also automatically locking my phone and requiring a passcode instead of TouchID.

There is a VERY limited amount of time in which you can take the watch off and switch to another wrist (like milliseconds, you have to practically be a magician to switch wrists (which I do throughout the day)).

Apple has the watch, they could use it to beef up security for those that want it.


I don't think "already fenced off if people were savvy" is really valid. That's the security equivalent of "no type errors if people were savvy", which is the same as "probably has type errors".

It was near-impenetrable, and it could have been impenetrable if it weren't for the fact that Apple could push OS updates without user consent. They could have made it impossible for anyone to get in even if your PIN was 1234, but didn't.

Kind of disappointing given their whole thing about the Secure Enclave. Bunch of big walls in the castle, but they left the servant's door unlocked.


The Secure Enclave, as per their docs, sounds just like their implementation of trust zone.. err, "TrustZone", most likely following ARM specs.

The main difference would be that everyone knows TrustZone through Qualcomm's implementation and software - as it's been broken many times. At the end of the day "it's just software" though, which runs on a CPU-managed hypervisor with strong separation ("hardware", but really, the line is quite a blur at this level).

What that means is that you need to be unable to update the Secure Enclave without the user's code (so the enclave itself needs to check that), which is probably EXACTLY what Apple is going to do.

Of course, Apple can still update the OS to trick the user into entering the code elsewhere, and then the FBI could use that to update the enclave and decrypt - though that means the user needs to be alive, obviously.

Past that, you'd need to extract the data from memory (actually opening the phone) and attempt to brute force the encryption. The FBI does not know how to do this part; the NSA certainly does; arguably, Apple might, since they're designing the chipset itself.


The Secure Enclave is explicitly not TrustZone, per Apple's iOS Security Guide. It's a separate core in the SoC running on L4.


Aww shit.. embedded crypto hypervisors all up in this hood.


Wopw wopw


I don't understand the whole debate about Apple security:

- Apple is required to have backdoors, at least on iPhones sold in foreign countries, isn't it?

- Even if the SE were completely secure, a rogue update of iOS could intercept the fingerprint or passcode whenever it is typed, and replay it to unlock the SE when spies ask for it. As far as I know, the on-screen keyboard is controlled by software which isn't in the SE.

- Even if iCloud is supposed to be encrypted, they didn't open up that part to public scrutiny.

- Therefore, perfect security around the SE only solves the problem of accessing a phone that wasn't backdoored yet. There is every reason for, say, Europe and the CIA to require phones to be backdoored by default for LE and economic-intelligence purposes.


Apple is not required by any country to have a backdoor, and I am not aware of any agreement from Apple to install such a backdoor for anyone.


If the person knowing the passcode is around and you can fool them into using their passcode then yes, you could capture their passcode. Touch ID is even less of a problem because taking someone's fingerprints is a lot easier than taking a passcode out of their head.

But in both those situations the weakness is in the person, not the device. Apple devices still potentially have security weaknesses which the FBI is asking Apple to exploit for them. Apple wants to fix these weaknesses, to stop Apple being forced to exploit them.


> Apple is required to have backdoors, at least on iPhones sold in foreign countries, isn't it?

I don't believe this is the case.

> Even if the SE were completely secure, a rogue update of iOS could intercept the fingerprint or passcode whenever it is typed, and replay it to unlock the SE when spies ask for it. As far as I know, the on-screen keyboard is controlled by software which isn't in the SE.

What you say about an on-screen passcode is likely true but the architecture of the secure enclave is such that the touch ID sensor is communicating over an encrypted serial bus directly with the SE and not iOS itself. It assumes that the iOS image is not trustworthy.

From the white paper [1]:

It provides all cryptographic operations for Data Protection key management and maintains the integrity of Data Protection even if the kernel has been compromised.

...

The Secure Enclave is responsible for processing fingerprint data from the Touch ID sensor, determining if there is a match against registered fingerprints, and then enabling access or purchases on behalf of the user. Communication between the processor and the Touch ID sensor takes place over a serial peripheral interface bus. The processor forwards the data to the Secure Enclave but cannot read it. It’s encrypted and authenticated with a session key that is negotiated using the device’s shared key that is provisioned for the Touch ID sensor and the Secure Enclave. The session key exchange uses AES key wrapping with both sides providing a random key that establishes the session key and uses AES-CCM transport encryption.

[1]: https://www.apple.com/business/docs/iOS_Security_Guide.pdf
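
To make that last paragraph concrete, here's a loose sketch of the shape of such a transport: both sides contribute randomness, a session key is derived, and fingerprint frames cross the bus under AES-CCM. The key schedule below is invented for illustration and is not Apple's actual protocol:

  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESCCM
  from cryptography.hazmat.primitives.hashes import SHA256
  from cryptography.hazmat.primitives.kdf.hkdf import HKDF

  shared_key = os.urandom(32)   # stands in for the key provisioned for sensor + SE

  # Each side contributes a random value; a session key is derived from both.
  sensor_rand, se_rand = os.urandom(16), os.urandom(16)
  session_key = HKDF(algorithm=SHA256(), length=16, salt=None,
                     info=b"touchid-session").derive(shared_key + sensor_rand + se_rand)

  # Fingerprint frames then travel over the SPI bus as AES-CCM ciphertext, so the
  # application processor only ever forwards opaque bytes it cannot read.
  nonce = os.urandom(13)
  ciphertext = AESCCM(session_key).encrypt(nonce, b"<fingerprint frame>", None)
  plaintext = AESCCM(session_key).decrypt(nonce, ciphertext, None)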


I guess the last one percent is making sure you don't inadvertently brick customers' phones with a software update or fix.


Can the SE be updated on a locked phone? Because Apple's docs give the impression that it can't.


The only statement I could find from Apple was from the iOS security guide that states, "it utilizes its own secure boot and personalized software update separate from the application processor." I think we can both agree that's a pretty vague statement, if you have a better source I'd like to see it.


A former Apple engineer said on Twitter:

"@AriX I have no clue where they got the idea that changing SPE firmware will destroy keys. SPE FW is just a signed blob on iOS System Part"

https://twitter.com/johnhedge/status/699882614212075520

Then Apple seems to confirm it:

"The executives — speaking on background — also explicitly stated that what the FBI is asking for — for it to create a piece of software that allows a brute force password crack to be performed — would also work on newer iPhones with its Secure Enclave chip"

http://techcrunch.com/2016/02/19/apple-executives-say-new-ip...


I understand that the boot chain is the only way Apple may modify the behaviour of the Enclave, but how would the update be forced? DFU wipes the class key, making any attempt at brute forcing the phone useless. If debug pinout access is available, then why does the FBI need Apple to access the phone at all?


"These devices were already fenced off from the DOJ, as long as their operators were savvy about opsec."

I hate to be that guy, but if you have an op and you have any opsec, you aren't even carrying a phone.

Right ?


Like literally every other type of security, OpSec is not binary.



