Originally Posted By: Virtual1
But the keys themselves can't be stored exclusively in a register, otherwise when you reset or powered down the phone, the keys would be gone.

That's the whole point. When you reset or power down the phone, the keys are gone. The phone needs the passcode to recalculate the keys. The only things that are permanent are the unique ID and the stuff encrypted with keys that can only be recalculated from the passcode combined with that ID.
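For the curious, here's a minimal sketch of that idea in Python. The names and the choice of KDF are my assumptions for illustration, not Apple's actual construction; in the real phone the entanglement happens in hardware, where software never sees the UID at all:

```python
import hashlib
import os

# HARDWARE_UID stands in for the 256-bit unique ID fused into the chip.
# In a real phone this value is readable only by the crypto engine,
# never by software; os.urandom here is purely illustrative.
HARDWARE_UID = os.urandom(32)

def derive_key(passcode: str) -> bytes:
    """Recalculate the encryption key from the passcode and the unique ID.

    PBKDF2 is an assumption; the point is only that the key is a pure
    function of (passcode, UID) and is stored nowhere.
    """
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(),
                               HARDWARE_UID, 100_000)

# At power-on no key exists. It comes into being when the user types
# the passcode, lives in volatile registers/RAM, and evaporates again
# at reset or power-down, exactly as described above.
key = derive_key("1234")
```

Dump the flash all you like; without the passcode and this specific chip's UID, the key is unrecoverable.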

Originally Posted By: Virtual1
Also interesting idea of storing the grid of data using different dopings, but not practical on mass production scale. The problem is that you want each phone to use a different key. Different masks for each phone is totally impractical.

I don't know for sure how the unique ID is physically stored, but what's impractical about an ephemeral mask? One step in the process could use the x-ray equivalent of an LCD (metal windows that can be opened/closed, perhaps?) as the mask for a small region of the chip, just large enough to store 256 bits. Or use 256 1-bit masks? Or skip masks entirely and draw the pattern directly on the photoresist with a focused electron beam. The only reason we use masks at all is so the whole chip can be imaged in one step, but that's parallelism you don't need for a mere 256 bits. Or cut right to the chase and dope the silicon directly with a focused beam of ionized dopant. The rest of the chip can be fabricated with normal masking techniques. The point to remember is that knowing one way to fabricate chips doesn't mean all chips are fabricated that way. What's impractical for one fabrication method may be facile for another.

Or maybe it is stored in flash memory. Not "the flash", of course. Just a 256-bit array on the CPU. Then it's just a pattern of electrical charges, charges that can be made to dissipate when the layer that covers them is removed.

But eventually the whole thing boils down to xkcd.com/538/.

Originally Posted By: Virtual1
That's the sort of technology that CAN break into a smartphone such as the iPhone. You need to have the gear, the technique, and the knowledge of how to apply it. (that last part can be very difficult to obtain... UNLESS you're the company that manufactured it, OR have a badge to flash AT said company)

I can't believe you're comparing the processor in an iPhone to a satellite TV decoder. That TV decoder only had three layers, and the circuit components were huge, large enough to be visible in an optical microscope. The A7 processor in the iPhone 5s uses 28-nm technology. I'd love to see the guy from the video try to tap into that with a paper clip.

The big problem with TV decoders, and most DRM for that matter, is that they don't use unique keys. They base their security on asymmetric cryptography, thinking they can keep the secret key secret, then write that same secret key into millions of consumer devices built with the cheapest technology available, trusting that no one will look. One person does look, tells everyone, and pretty soon even the script kiddies know it. Even the script kiddies who can't spell "secret" are in on the secret.
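The textbook fix, and presumably what the unique-ID approach gives Apple for free, is key diversification: derive each unit's key from a factory master key plus that unit's serial number, so cracking one box yields one key instead of millions. A hedged sketch of the generic scheme (not anything I know about these particular decoders):

```python
import hashlib
import hmac
import os

# Held only at the factory; never written into any shipped device.
MASTER_KEY = os.urandom(32)  # illustrative value

def diversified_key(serial_number: bytes) -> bytes:
    # Each device gets its own key, derived from the master key and
    # its serial number. Extracting the key from one box tells the
    # attacker nothing about any other box.
    return hmac.new(MASTER_KEY, serial_number, hashlib.sha256).digest()

key_for_unit_42 = diversified_key(b"SN-0000042")
```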

Apple isn't that stupid.

Originally Posted By: Virtual1
Anyone that tells me that Apple can't break into their own phone, I am going to have a very difficult time believing. Just throwing out a very generic scenario... NSA work with Apple and say, that guy from the video wink OK now we have the money, the gear, the technique, and the location of the bits we want. The entire device gets dumped, while on, including ram and flash. (but not registers) He goes to work and gets the bits off the trust chip. (we're going to assume apple doesn't just plain keep this data... who can really say for sure they don't?) Once they have the key from the original, Apple enters the key into a new unit, but in a "not locked" state on a new phone.

I'm the one having a very difficult time believing your scenario. Even with the design details of the A7 processor in hand (say, by bribing someone at Samsung, the company that actually fabs the chip, though it would have to be someone with access to the vault where those details are kept), you wouldn't be able to read out the unique ID, even by destroying the chip, and certainly not in a way that leaves the chip running under power, as your scenario requires.

I believe Apple when they say they have no record of the unique ID and cannot go back and discover it after the fact. Why? For one thing, it's dead simple to design a system that works that way. Any graduate-level CS student (and any bright undergraduate) could design such a system, and probably has, as a homework exercise. Absolutely secure black boxes may sound impossible, but they're easy to design and build. (By "secure black box" I mean a tamper-proof box that has secrets it uses to do its thing but will not reveal them unless tampered with, and you can't tamper with it.)
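Here's the homework-exercise version, a toy model in Python. Obviously name mangling isn't tamper-proofing; in the real thing the enforcement is silicon and physics. The model just shows the shape of the interface: one operation that uses the secret, and no operation that reveals it:

```python
import hashlib
import hmac
import os

class SecureBlackBox:
    """Toy model of a secure black box: it uses its secret, never shows it."""

    def __init__(self):
        # "Fused in at fabrication" and recorded nowhere else.
        self.__secret = os.urandom(32)

    def unwrap(self, passcode: bytes) -> bytes:
        # The only thing the box will do for you: combine your passcode
        # with its internal secret. There is no API that returns the
        # secret itself, so there is nothing to subpoena or leak.
        return hmac.new(self.__secret, passcode, hashlib.sha256).digest()

box = SecureBlackBox()
key = box.unwrap(b"1234")   # useful output; the secret stays inside
```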

Another reason to believe Apple is that they have no incentive to keep the unique ID, and many reasons not to. There's nothing Apple can do with the ID, and having said publicly that they do not record it, they would face grave and expensive repercussions if they were ever found to have been lying: lots of lawsuits, plus damage to their reputation that would translate into lower sales.

No sane Apple executive would ever authorize such a program. Part of the design of whatever method is used to generate the unique ID would be the institution of policies guaranteeing that no Apple or Samsung employee, sane or otherwise, could change the method in a way that lets IDs be recorded without setting off alarms that reach all the way to the top of both companies.

Apple's security policies, not just for the iPhone but for everything Apple does, are designed so that a rogue employee cannot defeat them. Apple understands that attacks can come from inside as well as out, and designs accordingly. There may be companies that give interns badges that grant them access to everything. Such companies may be the norm in Hollywood movies, but they're rare in practice.

Not unheard of, unfortunately. Target's security was breached when they gave an HVAC company free passage through the corporate firewall so the company could monitor HVAC equipment in Target's data centers. Then that HVAC company got hacked, which gave the hackers access to the computers where Target did its software development and let them install malicious software on Target's Windows-based point-of-sale devices. The hackers walked off with debit card numbers and their PINs, and some of those debit cards were also credit cards. Target made two mistakes: the obvious one was punching too big a hole through their firewall, but the important one was thinking that attacks could not come from within. Savvy companies know better.