This is really minor compared to other things Firefox does. Notably, Firefox requires all extensions to be signed by Mozilla.

And the only way to turn that restriction off is to use either Firefox Nightly or Firefox Developer Edition (which is a beta). If you want to use stable Firefox, because you like having a stable web browser, you just can't turn the restriction off. Period. The closest thing you can do is install an unsigned extension as a "temporary extension", which uninstalls itself when the browser restarts.

It's kind of ridiculous that the default browser of most Linux distros has an Apple-esque mentality of "we get to tell you what you're allowed to install."



I think it's unreasonable to assume that the vast, vast majority of users, including technically literate ones, have a detailed understanding of any specific piece of software's installation.

Requiring a few hoops - and it sounds like requiring the developer edition is such a hoop - to ensure that a likely misconfiguration was actually intentional, and that the user is capable of dealing with the consequences, is not a bad idea.

In particular, debugging when things go wrong can be an insurmountable task.

Better isolation of software components has been a trend for decades, to the point where I think we can safely say that the old Unix and Windows permission model was a fundamentally insufficient idea. Devs flock to VMs and containers precisely because uncontrolled interaction between software - even software controlled by the same nominal "user" - is a huge pain, and that's before malware and privacy concerns come in.

Were software more isolated by default, and interaction more controlled and/or explicit, then indeed I think the argument against this kind of controlling-the-"user" feature would be stronger. But as is? The alternative is clearly much, much worse.

After all - certainly here, and often in other cases too - it's not like it's actually impossible to circumvent these restrictions. It's simply technically inconvenient, in a way that happens to also prevent many unintentional bugs and some malware vectors.

In an ideal world, the devs of a piece of software would be hard-pressed to even do this, let alone feel the need to do it. But that's just not the world we live in; we're not even really close yet - except on really locked-down platforms that go much further than needed to prevent the risks, and into the territory of quite openly restricting the user, not the software.


> Requiring a few hoops - and it sounds like requiring the developer edition is such a hoop - to ensure that a likely misconfiguration was actually intentional, and that the user is capable of dealing with the consequences, is not a bad idea.

Except when such hoops then start being used as evidence that the user is an undesirable and should be kept away from various services. See e.g. the many Android apps that refuse to work on rooted phones.

I specifically don't like bucketing things like these under a "development" label - "dev mode", "dev build", "dev edition", etc. - because it creates the idea that those "dev capabilities" exist only to help developers with development, and should very much not be used for non-development things.


I share your concern here, but I can't see the resolution being to allow every bit of software to alter any and all user data, and to alter the execution of any other software a user is running. That's where we came from, and the number of untrustworthy dependencies is so large nowadays that this kind of approach isn't just unsafe - even without malware, it's also unreliable and unpredictable.

There _will_ be constraints on running programs altering other stuff. Sandboxes _will_ get even stricter. The benefits are so large that this trend will inevitably continue, and rightly so.

To protect the ability to tinker, we'll instead need to talk about who ultimately gets to control those sandboxes. How can we poke holes without allowing abuse by malware, or creative (ab)use that renders the system pointless? How can we ensure the poked holes exist by user choice, not at the behest of a tiny handful of software behemoths?

On a technical level, I don't think that arbitrary and surreptitious DLL injection is a line in the sand worth defending. It's not a great abstraction; it's tech debt.


> On a technical level, I don't think that arbitrary and surreptitious DLL injection is a line in the sand worth defending. It's not a great abstraction; it's tech debt.

I think giving up DLL injection isn't really solving anything. DLL injection isn't an accident of history - it's a solution to a specific problem. The problem won't disappear if you remove the solution.

The issue is that it's also broadly useful to allow software to be modified by third parties in arbitrary ways, without the involvement or cooperation of the software vendor. In fact, security folks are major users of this capability - that's how malware scanners work, that's how emergency hot-fixing is done, that's how compliance systems work - and, moving towards more evil/dystopian use cases, this capability is needed by anti-cheat systems, DRM, and the modern digital surveillance economy.
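
For concreteness, the classic form of this capability on Windows is the CreateRemoteThread + LoadLibrary trick. This is only a rough sketch of the technique, not any particular product's implementation; the PID and DLL path are hypothetical inputs:

    #include <windows.h>
    #include <string.h>

    /* Sketch: load an arbitrary DLL into another running process by PID. */
    int inject_dll(DWORD pid, const char *dllPath) {
        HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
        if (!proc) return 1;

        /* Allocate memory inside the target process and copy the DLL path there. */
        SIZE_T len = strlen(dllPath) + 1;
        LPVOID remote = VirtualAllocEx(proc, NULL, len,
                                       MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (!remote || !WriteProcessMemory(proc, remote, dllPath, len, NULL)) {
            CloseHandle(proc);
            return 1;
        }

        /* kernel32.dll is mapped at the same base address in every process in a
           session, so LoadLibraryA's address here is also valid over there. */
        LPTHREAD_START_ROUTINE loadLib = (LPTHREAD_START_ROUTINE)
            GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA");

        /* The remote thread calls LoadLibraryA(dllPath); the DLL's DllMain then
           runs inside the target process, with no cooperation from its vendor. */
        HANDLE thread = CreateRemoteThread(proc, NULL, 0, loadLib, remote, 0, NULL);
        if (thread) {
            WaitForSingleObject(thread, INFINITE);
            CloseHandle(thread);
        }
        CloseHandle(proc);
        return thread ? 0 : 1;
    }

Some variation of this (or a kernel-level equivalent) is what the scanners, hot-fixers and anti-cheat systems above rely on.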

All this means that, if you eliminate the current way of letting third parties inject code into programs, you'll soon be forced to create an equivalent mechanism yourself. The sandboxes will get stricter to protect users from criminals, and then they'll have holes poked in them to accommodate legitimate actors, both good and bad. The problem with going through the whole dance of making a sandbox and then making it leaky is that it generates bloat. You end up roughly in the same place you started, just with an extra layer of abstraction on top.

(And, of course, all those abstraction layers have bugs in them, so the attack surface for criminals is getting larger in the process.)



