986aignan's comments

I'm surprised that the article doesn't mention the peroxidase hypothesis[1]. Has it been disproven?

[1] https://tmedweb.tulane.edu/pharmwiki/doku.php/acetaminophen


The problem is that if what "really counts" is too vaguely defined, then it's hard to pin down and argue the point.

Virtual memory probably isn't what you meant, but take something like user privilege separation. It's usually considered a good idea to not run software as root. To interpret the statement generously, privilege separation does restrict immediate freedom: you have to escalate whenever you want to do system-level changes. But I think josephg's statement:

> Sandboxing gives users more control. Not less. Even if they use that control to turn off sandboxing, they still have more freedom because they get to decide if sandboxing is enabled or disabled.

can be directly transposed to user privilege separation. While it's true that escalating to root is more of a hassle than just running everything as root, in another sense it does provide more control because the user can run arbitrary code without being afraid that it will nuke their OS; and more freedom because you could always just run everything as root anyway.

Maybe josephg's sense of freedom and control is what you're saying there is a trade-off between. But the case of privilege separation shows that some trade-offs are such that they provide a lot of security for only a little bit of inconvenience, and that's a trade-off most people are willing to make.

Sometimes the trade-off may seem unacceptable because OS or software support isn't there yet. Like Vista's constant UAC annoyances in the case of privilege separation/escalation. But that doesn't mean that the fundamental idea of privilege levels is bad or that it must necessarily trade off too much convenience for control.

I think that's also what josephg is suggesting about sandboxing. He says that the clipboard problem could probably be fixed; then you say, "but there are other examples". What remains to be shown is whether the examples are inherent to sandboxing and must degrade a capabilities/sandbox approach to a level where the trade-off is unacceptable to most.


> The problem is that if what "really counts" is too vaguely defined, then it's hard to pin down and argue the point.

It really wasn't. It isn't hard to understand what was meant.

> Virtual memory probably isn't what you meant,

No, it wasn't, and there was no need to put "probably": it was obvious it wasn't.

> can be directly transposed to user privilege separation. While it's true that escalating to root is more of a hassle than just running everything as root, in another sense it does provide more control because the user can run arbitrary code without being afraid that it will nuke their OS; and more freedom because you could always just run everything as root anyway.

The difference is that there are very few things I need to run directly as root day to day on my Desktop Linux box. I can't think of anything.

However, having to cut and paste a meme into ~/Downloads so I can share it on Discord or Slack is a constant PITA. If you sandbox apps, you have to restrict what they can access; there is no way around this. The iPhone works the same way, BTW; I know because I used to own one. You either have to say "Discord can have access to this file", or you have to give it all the access.

> Maybe josephg's sense of freedom and control is what you're saying there is a trade-off between. But the case of privilege separation shows that some trade-offs are such that they provide a lot of security for only a little bit of inconvenience, and that's a trade-off most people are willing to make.

No, they provide a false sense of security with a lot of inconvenience. The inconvenience is inherent and always will be, because you will need to restrict resources using a bunch of rules.

> Sometimes the trade-off may seem unacceptable because OS or software support isn't there yet. Like Vista's constant UAC annoyances in the case of privilege separation/escalation. But that doesn't mean that the fundamental idea of privilege levels is bad or that it must necessarily trade off too much convenience for control.

There are many things that seem like fundamentally sound ideas on the face of it. However, there are always secondary effects. E.g., often people just ignore the prompts; this is called "prompt fatigue". I've literally seen people do it on streams.

Operating systems are now quite a lot more secure than they were. So instead of going after the OS directly, most bad actors will use social engineering to gain initial entry to the system. The OS security often isn't the problem. Most operating systems have either app stores or some active threat management.

If you are running things from npm/PyPI/GitHub without doing some due diligence, that is on you. This is well past what a non-savvy user is likely to do.

> I think that's also what josephg is suggesting about sandboxing. He says that the clipboard problem could probably be fixed; then you say, "but there are other examples". What remains to be shown is whether the examples are inherent to sandboxing and must degrade a capabilities/sandbox approach to a level where the trade-off is unacceptable to most.

It is inherent; it's obvious it is. If you want to share data between applications, which is something you want to do almost all the time, you will need to give them access at least to your file system. The more of this you do, the more you will either have to grant access or faff around moving stuff. So either you work with a frustrating system (like I have to at work), or you disable it.

So what happens is you only have "all or nothing".


> If you want to share stuff between applications like data, […]. You will need to give it access at least to your file-system. The more of this you do, you will either have to give more access or having to faff moving stuff around.

Why are those the only answers?

If we had free rein to redesign our computers from the ground up, there’s lots of other ways that problem could be solved.

One obvious example is to make copy+paste be an OS level shortcut so apps can’t access the clipboard without the user invoking that chord. Then just copy paste stuff between applications.

Another idea: right now when I invoke a shell script, I say “foo blah.txt”. The argument is passed as a string and I have to trust that the program will open the file I asked for - and not look instead at my ssh private keys. Instead of that, my shell program could have access to the filesystem and open the file on behalf of the script. Then the script can be invoked and passed the file descriptor as input. That way, the script doesn’t need access to the rest of my filesystem.
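A rough sketch of that idea in Python, assuming a POSIX system. The "shell" here is just a wrapper function, and the child script is a stand-in; none of this is a real shell's behaviour, only an illustration of passing a descriptor instead of a path:

```python
import os
import subprocess
import sys
import tempfile

def run_with_fd(argv, path):
    """Open `path` on the child's behalf and pass it only a file descriptor."""
    fd = os.open(path, os.O_RDONLY)
    try:
        result = subprocess.run(
            argv + [str(fd)],   # the child receives a descriptor number, not a path
            pass_fds=(fd,),     # keep the fd open across exec (POSIX only)
            capture_output=True,
            text=True,
        )
    finally:
        os.close(fd)
    return result

# Demo: a stand-in "script" that reads only from its inherited descriptor
# and never touches the filesystem itself.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("hello from blah.txt")
    path = tmp.name
child = [sys.executable, "-c",
         "import os, sys; print(os.read(int(sys.argv[1]), 1024).decode())"]
print(run_with_fd(child, path).stdout.strip())  # → hello from blah.txt
os.unlink(path)
```

Combined with a sandbox that denies the child any filesystem access of its own, this is essentially a capability handoff: the child can read exactly the one file the user named, and nothing else.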

If we’re a little bit creative, there’s probably all sorts of ways to solve these problems. The biggest problem in my mind is that Unix has ossified. It seems that nobody can be bothered making desktop Linux more secure. A pity.

Maybe it’s time to give qubes a try.


> However having to cut and paste a meme into ~/Downloads so I can share it on Discord or Slack is a constant PITA.

Why round-trip it through the file system or Files.app? That seems like extra (annoying) work. On my iPhone, I copy the meme onto the clipboard, then open Discord/Slack/Signal/WhatsApp, find the right channel/chat, and paste right in there.


> It isn't hard to understand what was meant.

At least two independent people understood you in the same way. So just dismissing it isn't productive.

> PITA. If you sandbox apps you have to restrict what they can access. There is no way around this.

This has nothing to do with freedom though.

> You will need to give it access at least to your file-system.

On Qubes, you copy-paste with ctrl+shift+v/c and nothing is shared unless you actively do it yourself. It becomes a habit very quickly (my daily driver). Sharing files is a bit harder (you send them from VM to VM), but it's not as hard as you want it to look.


> At least two independent people understood you in the same way. So just dismissing it isn't productive.

Two people that we are aware of.

BTW, I often encounter this when talking to other techies. People go to ridiculous extremes to be contrarian. Often they don't even know they are doing it. I know because I used to engage in this behaviour.

So I feel like I am well within my rights to dismiss it.


I didn't say you weren't within your rights. I said it's counter-productive for the discussion.


I think it is counterproductive to bring up ridiculous examples, which were obviously not what I meant.


Both things can be counterproductive simultaneously.


IIRC, you could use asymmetric cryptography to derive a site-specific pseudonymous token from the service and your government ID without the service knowing what your government ID is or the government provider knowing what service you are using.

The service then links the token to your account and uses ordinary detection measures to see if you're spamming, flooding, phishing, whatever. If you do, the token gets blacklisted and you can no longer sign on to that service.

This isn't foolproof - you could still bribe random people on the street to be men/mules in the middle and do your flooding through them - but it's much harder than just spinning up ten thousand bots on a residential proxy.
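A toy illustration of just the site-specific-token property (not the blind-signature machinery that would additionally hide the service from the ID provider; the secret and service names here are made up):

```python
import hashlib
import hmac

# Toy model: the ID provider holds a per-user secret and derives a stable,
# service-specific token from it. Each service sees the same token for the
# same user every time (so it can blacklist abusers), but tokens from two
# different services cannot be linked to each other or to the government ID
# without the provider's secret. A real scheme would use blind signatures so
# the provider never even learns which service is asking.

USER_SECRET = b"secret the ID provider holds for this user"  # illustrative only

def site_token(service_name: str) -> str:
    return hmac.new(USER_SECRET, service_name.encode(), hashlib.sha256).hexdigest()

print(site_token("forum.example") == site_token("forum.example"))  # → True (stable per service)
print(site_token("forum.example") == site_token("shop.example"))   # → False (unlinkable across services)
```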


But that does not really answer my question: if a human can prove that they are human anonymously (by getting an anonymous token), what prevents them from passing that token to an AI?

The whole point is to prevent a robot from accessing the API. If you want to detect the robot based on its activity, you don't need to bother humans with the token in the first place: just monitor the activity.


It does not prevent a bot from using your ID. But a) the repercussions for getting caught are much more tangible when you can't hide behind anonymity - you risk getting blanket banned from the internet and b) the scale is significantly reduced - how many people are willing to rent/sell their IDs, i.e., their right to access the internet?

Edit: ok I see the argument that the feedback mechanism could be difficult when all the website can report is "hey, you don't know me but this dude from request xyz you just authenticated fucked all my shit up". But at the end of the day, privacy preservation is an implementation detail I don't see governments guaranteeing.


> But at the end of the day, privacy preservation is an implementation detail I don't see governments guaranteeing.

Sure, I totally see how you can prevent unwanted activity by identifying the users. My question was about the privacy-preserving way. I just don't see how that would be possible.


It is possible to take this too far, though - consider the OpenAI IMO proofs[1], for instance, and compare them to Gemini's.[2]

[1] https://github.com/aw31/openai-imo-2025-proofs

[2] https://arxiv.org/pdf/2507.15855 Appendix A


At some point, I would imagine the distinction between capital and land becomes blurry, though. Economic rent can be had from either if the barrier to competition is high enough.

Domain names are a good example, because as skissane said, you could just make another DNS root. The trouble is convincing people (browsers) to use it. The problem in attempting to overturn Facebook isn't mainly the coding, either, but having a critical mass care. Those barriers don't seem like absolutes the way land is; they're just very high, high enough for those who control them to extract economic rent.


And there is lots of land - just not in close proximity to existing economic activity. It’s a common pattern.


There’s not enough land that has enough water resources.


In the same way, you could build a new city somewhere. Land is expensive near cities.


Even if that is true (and I'm not saying it is), practical limits on handling the combinatorial complexity, or variety if you will, severely limit its use. No realistic fist-fighter has the information required or the processing capability to solve the "biomechanical optimization problem" to anywhere near optimality.

In city planning and building design, the problem is even more severe. The planner doesn't know what people are going to settle where, what their desired needs are (or are going to be), and so on. That doesn't mean that there's no such thing as an awful solution, nor that you can't say anything at all. (A house probably needs windows, and you probably shouldn't stick a polluting industrial zone right next to a bunch of them.) It just means that trying to "micromanage" a city or complex building fails - for the same reason that micromanaging an organization fails.

(This is a requisite variety or "seeing like a state" argument.)


Monte Carlo simulations would be the obvious example.
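For the unfamiliar, a minimal example of the technique; the pi-estimation task is just the classic toy case, not anything specific to the discussion above:

```python
import random

def estimate_pi(n: int, seed: int = 0) -> float:
    """Estimate pi by sampling random points in the unit square and
    counting how many fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / n

print(estimate_pi(100_000))  # ≈ 3.14; the error shrinks roughly as 1/sqrt(n)
```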


Couldn't you replace the CCD with an adapter, connect the adapter to the video out of a computer, and then use the camera to "take a picture" of your already edited picture?

It seems to me that any "paper trail" scheme of the sort you describe would have to solve the problems of DRM to work: making the elements that report on the real world (in this case, the CCD) tamper-proof, making the encryption key impossible to extract, designing robust watermarks to avoid analog holes, etc.


Sure, you can also take a picture of the screen.

I don’t think C2PA’s goal is to completely prevent this type of thing, but to be hard enough to stop low-effort attempts.

This, like DRM, will probably be an arms race, and future solutions will look nothing like what I described.

But then again, the spec has been out for more than a year, and I haven’t seen anyone big bothering to implement it. Maybe it’s a flop already.


The axioms just state what criteria the Swiss system (but not the Icelandic) obeys. You don't need to know them in order to vote in Iceland any more than you need to know that first past the post fails the Condorcet criterion in order to vote in the US.


You might need some kind of MMP part if you want it to be truly proportional. If the voters can only rank about ten candidates before it gets unwieldy, that would give an effective 9% absolute threshold. A party that gains 8% support everywhere would get no candidates elected.

Here's a paper by Markus Schulze proposing such a method: https://aso.icann.org/wp-content/uploads/2019/02/schulze4.pd... He uses some very large districts, but it should work for smaller districts too.
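The ~9% figure presumably comes from the Droop quota (my reading; the comment above doesn't spell it out): in an n-seat STV district, just over 1/(n+1) of the vote guarantees a seat, so a ballot limited to about ten rankings behaves like a ten-seat district:

```python
def droop_threshold(seats: int) -> float:
    """Vote share that guarantees election under STV's Droop quota."""
    return 1 / (seats + 1)

print(round(droop_threshold(10) * 100, 1))  # → 9.1 (percent), hence "about 9%"
```

By the same arithmetic, a party polling a uniform 8% falls just under the quota everywhere and wins nothing, which is the scenario described above.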


Yes, STV isn’t perfect, but IMHO it’s worth it to not have party lists.

Also, one of the main criticisms from people opposed to proportional systems is the lack of direct representation. STV solves that and is even superior to FPTP in that way, because you are more likely to find an MP sympathetic towards your cause/views if there are e.g. 3-5 members in your district.

Of course I’m not talking about the system proposed in the paper you linked, but rather about how MMP works in Germany: you get both party-list candidates and FPTP-style district candidates.


