Originally Posted By: artie505
Is it necessary in today's environment?

Is it no more than an overreaction to some imaginary threat?

Or is Apple being realistically visionary and conducting a pre-emptive strike against an amorphous, but nonetheless real, future threat?


The CliffsNotes version of sandboxing: run a program inside a simulated environment, so that in the event the program (by "design, bug, or malice") does something it's not supposed to ("breaks out" of the program), it only gains access to the simulated environment, not the real one. It's merely a safety net.
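A toy sketch of that idea, for illustration only (real sandboxes enforce isolation at the OS level; a restricted Python `exec` like this is itself escapable, which is exactly the point made below): the untrusted code is handed a fake `open()` that writes to an in-memory dictionary standing in for the file system, so even a "malicious" write never touches the real disk.

```python
# Toy sandbox sketch: untrusted code runs against a simulated file system
# (a plain dict) instead of the real one. Purely illustrative.

def run_sandboxed(code, fake_fs):
    """Execute untrusted code with a fake open() that only touches fake_fs."""
    def fake_open(path, mode="r"):
        class FakeFile:
            def write(self, data):
                # "Writes" land in the in-memory dict, not on disk.
                fake_fs[path] = fake_fs.get(path, "") + data
            def read(self):
                return fake_fs.get(path, "")
            def __enter__(self):
                return self
            def __exit__(self, *args):
                return False
        return FakeFile()

    # The untrusted code sees only these names; builtins are stripped.
    env = {"__builtins__": {}, "open": fake_open}
    exec(code, env)

fake_fs = {}
# The "malicious" code believes it has overwritten /etc/passwd...
run_sandboxed('open("/etc/passwd", "w").write("pwned")', fake_fs)
# ...but only the simulated environment changed:
print(fake_fs)  # {'/etc/passwd': 'pwned'}
```

The real `/etc/passwd` is untouched; the damage is confined to the simulation. (Stripping `__builtins__` is famously insufficient as a security boundary in Python, which previews the escape problem discussed next.)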

If the world was perfect, sandboxing would serve no purpose. But since "design, bug, or malice" can never be reduced to zero, it has a purpose. The more you can reduce the threat to begin with, the less relevant a sandbox becomes.

But there will always be a risk, again by "design, bug, or malice", of the sandbox itself being escapable. So you haven't added a bulletproof barrier; you've only added one more barrier that must itself be broken, raising the attacker's cost by another exponent. When sandboxes become standard, malware will simply be designed as a two-stage attack: first escape the application, then escape the sandbox the app runs in. No one will write malware that can't get out of the sandbox; there'd be no point to it.

Still, sandboxing would help contain the effects of bugs, preventing a buggy app from crashing the OS, for example. So it becomes less valuable, but not entirely worthless. In a way, even now, when an app crashes it usually doesn't bring down the OS, so in that respect there's already a sandbox of sorts in place. Or at least a buffer/insulator.


I work for the Department of Redundancy Department