
Are you arguing that ASLR is “security via obscurity”?


I would, and there is no shame in it, as far as I'm concerned.

I don't need to outrun the bear. I just need to outrun you.


ASLR is not security through obscurity, though. It forces the attacker to get a pointer leak before doing almost anything (with ASLR, even arbitrary read and arbitrary write primitives are useless without a leak). As someone with a bit of experience in exploit dev, it makes a world of difference and is one of the most influential mitigations, next to maybe stack cookies and W^X.
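
To make that concrete, here is a minimal sketch of my own (file name mine, not from any particular writeup): build it as a position-independent executable and run it a few times. With ASLR on, both addresses change on every run, which is why a hardcoded address is useless without a leak.

  /* Sketch: with ASLR enabled, these addresses differ on every run,
     so a hardcoded gadget or stack address is useless without a leak.
     Build as a PIE, e.g.: gcc -fPIE -pie aslr_demo.c */
  #include <stdio.h>

  int main(void) {
      int local = 0;
      printf("code  address: %p\n", (void *)&main);
      printf("stack address: %p\n", (void *)&local);
      return 0;
  }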


I'm genuinely curious what was so undesirable about this sibling comment that it was removed:

"ASLR obscures the memory layout. That is security by obscurity by definition. People thought this was okay if the entropy was high enough, but then the ASLR⊕Cache attack was published and now its usefulness is questionable."

Usually when a comment is removed, it's pretty obvious why, but in this case I'm really not seeing it at all. I read up (briefly) on the mentioned attack and can confirm that the claims made in the above comment are at the very least plausible sounding. I checked other comments from that user and don't see any other recent ones that were removed, so it doesn't seem to be a user-specific thing.

I realize this is completely off-topic, but I'd really like to understand why it was removed. Perhaps it was removed by mistake?


Some people use the "flag" button as a "disagree" button or even a "fuck this guy" button. Unfortunately, constructive but unpopular comments get flagged to death on HN all the time.


I had thought that flagging was basically a request for a mod to have a look at something. But based on this case I now suspect that it's possible for a comment to be removed without a mod ever looking at it if enough people flag it.


Not removed but hidden. You can turn on showdead in your profile to see them.


My point was more that, at least in this case, it looks like a post was hidden without any moderator intervention.

If this is indeed what happened, it seems like a bad thing that it's even possible. Since many, perhaps most, people don't have showdead enabled, the 'flag' option is effectively a mega-downvote.


I believe most people see security through obscurity as an attempt to hide an insecurity.

ASLR/KASLR intends to make attackers' lives harder by making the offsets of known data structures inconsistent. It's not obscuring a security flaw; instead, it lowers an attack's 'single run' effectiveness.

The ASLR attack that I believe is being referenced is specific to abuse within the browser, running within a single process. This single attack vector does not mean that KASLR is globally ineffective.

Your quote has some choice words, but it's contextually poor.


That attack does not require a web browser. The fact that a web browser can do it showed the severity was higher than it would have been if the proof of concept had been in C, since web browsers run untrusted code all the time.


The 'attack' there does require you to be able to run code and test within a single process with a single randomized address space, which is the exact vector that the web browser provides.

Most of the time in C, each freshly executed process (as opposed to a thread, or a fork(), which inherits its parent's layout) gets a different address space, so it's actually less severe than you think.
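
A quick way to see this (a toy sketch of mine, assuming Linux/glibc): a fork()ed child prints the same address as its parent, while a fresh run of the binary prints a new one.

  /* Sketch: a fork()ed child inherits the parent's randomized layout;
     only a freshly executed binary gets a new randomization. Run the
     program twice and compare across runs versus across the fork. */
  #include <stdio.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void) {
      printf("parent: main at %p\n", (void *)&main);
      if (fork() == 0) {
          printf("child:  main at %p\n", (void *)&main);  /* same value */
          return 0;
      }
      wait(NULL);
      return 0;
  }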


The kernel address space is the same regardless of how many fork() calls have been done. I would assume the exploitation path for a worst case scenario would involve chaining exploits to do: AnC on userspace, JavaScript engine injection to native code, sandbox escape, AnC on kernel space, kernel native code injection. That would give complete control over a user’s machine just by having the user visit a web page.

I am not sure why anyone would attempt what you described, for the exact reason you stated. It certainly is not what I had in mind.


It's been a few days and a thousand kilometers since I read the paper; I thought it referenced userspace. How is it able to infer kernel addresses that are not mapped in that process?


I assume people downvoted it because “ASLR obscures the memory layout. That is security by obscurity by definition” is just wrong (correct description here: https://news.ycombinator.com/item?id=43408039). It does say [flagged] too, though, so maybe that’s not the whole story…?


No, that other definition is the incorrect one. Security by obscurity does not require that the attacker is ignorant of the fact you're using it. Say I have an IPv6 network with no firewall, simply relying on the difficulty of scanning the address space. I think that people would agree that I'm using security by obscurity, even if the attacker somehow found out I was doing this. The correct definition is simply "using obscurity as a security defense mechanism", nothing more.


No, I would not agree that you would be using security by obscurity in that example. Not all security that happens to be weak or fragile and involves secret information somewhere is security by obscurity – it’s specifically the security measure that has to be secret. Of course, there’s not a hard line dividing secret information between categories like “key material” and “security measure”, but I would consider ASLR closer to the former side than the latter and it’s certainly not security by obscurity “by definition” (aside: the rampant misuse of that phrase is my pet peeve).

> The correct definition is simply "using obscurity as a security defense mechanism", nothing more.

This is just restating the term in more words without defining the core concept in context (“obscurity”).


I'm inclined to agree and would like to point out that if you take a hardline stance that any reliance on the attacker not knowing something makes it security by obscurity then things like keys become security by obscurity. That's obviously not a useful end result so that can't be the correct definition.

It's useful to ask what the point being conveyed by the phrase is. Typically (at least as I've encountered it) it's that you are relying on secrecy of your internal processes. The implication is usually that your processes are not actually secure - that as soon as an attacker learns how you do things the house of cards will immediately collapse.


What is missing from these two representations is the ability for something to become trivially bypassable once you know the trick to it. AnC is roughly that for ASLR.


I'd argue that AnC is a side channel attack. If I can obtain key material via a side channel that doesn't (at least in the general case) suddenly change the category of the corresponding algorithm.

Also IIUC to perform AnC you need to already have arbitrary code execution. That's a pretty big caveat for an attacker.


You are not wrong, but how big of a caveat it is varies. On a client system, it is an incredibly low bar given client side scripting in web browsers (and end users’ tendency to execute random binaries they find on the internet). On a server system, it is incredibly unlikely.

I think the middle ground is to call the effectiveness of ASLR questionable. It is no longer the gold standard of mitigations that it was 10 years ago.


ASLR is not purely security through obscurity because it is based on a solid security principle: increasing the difficulty of an attack by introducing randomness. It doesn't solely rely on the secrecy of the implementation but rather the unpredictability of memory addresses.

Think of it this way - if I guess the ASLR address once, a restart of the process renders that knowledge irrelevant implicitly. If I get your IPv6 address once, you’re going to have to redo your network topology to rotate your secret IP. That’s the distinction from ASLR.


I don't like that example because the damage caused by, and the difficulty of recovering from, a secret leaking is not what determines the classification. There exist keys that, if leaked, would be very time consuming to recover from. That doesn't make them security by obscurity.

I think the key feature of the IPv6 address example is that you need to expose the address in order to communicate. The entire security model relies on the attacker not having observed legitimate communications. As soon as an attacker witnesses your system operating as intended the entire thing falls apart.

Another way to phrase it is that the security depends on the secrecy of the implementation, as opposed to the secrecy of one or more inputs.


You don’t necessarily need to expose the IPv6 address to untrusted parties, though, in which case it is indeed quite similar to ASLR in that data leakage of some kind is necessary. I think the main distinguishing factor is that ASLR by design treats the base address as a secret and guards it as such, whereas that’s not a mode the IPv6 address can have, because by its nature it’s assumed to be something public.


Huh. The IPv6 example is much more confusing than I initially thought. At this point I am entirely unclear as to whether it is actually an example of security through obscurity, regardless of whatever else it might be (a very bad idea to rely on, for one). Rather ironic, given that the poster whose claims I was disputing provided it as an example of something that would be universally recognized as such.


I think it’s security through obscurity because in ASLR the randomized base address is protected secret key material, whereas in the IPv6 case it’s unprotected key material (e.g. every hop between two communicating parties sees the secret). It’s close, though, which is why IPv6 mapping efforts are much more heuristics-based than IPv4, which you can just brute force (along with port #) quickly these days.


I'm finding this semantic rabbit hole surprisingly amusing.

The problem with that line of reasoning is that it implies that data handling practices can determine whether or not a given scheme is security through obscurity. But that doesn't fit the prototypical example where someone uses a super secret and utterly broken home-rolled "encryption" algorithm. Nor does it fit the example of someone being careless with the key material for a well-established algorithm.

The key defining characteristic of that example is that the security hinges on the secrecy of the blueprints themselves.

I think a case can also be made for a slightly more literal interpretation of the term, where security depends on part of the design being different from the mainstream. For example, running a niche OS makes your systems statistically less likely to be targeted in the first place. In that case the secrecy of the blueprints no longer matters - it's the societal scale analogue of the former example.

I think the IPv6 example hinges on the semantic question of whether a network address is considered part of the blueprint or part of the input. In the ASLR analogue, the corresponding question is whether a function pointer is part of the blueprint or part of the input.


> The problem with that line of reasoning is that it implies that data handling practices can determine whether or not a given scheme is security through obscurity

Necessary but not sufficient condition. For example, if I’m transmitting secrets across the wire in plain text, that’s clearly security through obscurity even if you’re relying on an otherwise secure algorithm. Security is a holistic practice and you can’t ignore secrets management separate from the algorithm blueprint (which itself is also a necessary but not sufficient condition).


Consider that in the ASLR analogy dealing in function pointers is dealing in plaintext.

I think the semantics are being confused due to an issue of recursively larger boundaries.

Consider the system as designed versus the full system as used in a particular instance, including all participants. The latter can also be "the system as designed" if you zoom out by a level and examine the usage of the original system somewhere in the wild.

In the latter case, poor secrets management being codified in the design could in some cases be security through obscurity. For example, transmitting in plaintext somewhere the attacker can observe. At that point it's part of the blueprint and the definition I referred to holds. But that blueprint is for the larger system, not the smaller one, and has its own threat model. In the example, it's important that the attacker is expected to be capable of observing the transmission channel.

In the former case, secrets management (ie managing user input) is beyond the scope of the system design.

If you're building the small system and you intend to keep the encryption algorithm secret, we can safely say that in all possible cases you will be engaging in security through obscurity. The threat model is that the attacker has gained access to the ciphertext; obscuring the algorithm only inflicts additional cost on them the first time they attack a message secured by this particular system.

It's not obvious to me that the same can be said of the IPv6 address example. Flippantly, we can say that the physical security of the network is beyond the scope of our address randomization scheme. Less flippantly, we can observe that there are many realistic threat models where the attacker is not expected to be able to snoop any of the network hops. Then as long as addresses aren't permanent it's not a one time up front cost to learn a fixed procedure.


Function pointer addresses are not meant to be shared - they hold 0 semantic meaning or utility outside a process boundary (modulo kernel). IPv6 addresses are meant to be shared and have semantic meaning and utility at a very porous layer. Pretending like there’s no distinction between those two cases is why it seems like ASLR is security through obscurity when in fact it isn’t. Of course, if your program is trivially leaking addresses outside your program boundary, then ASLR degrades to a form of security through obscurity.


I'm not pretending that there's no distinction. I'm explicitly questioning the extent to which it exists as well as the relevance of drawing such a distinction in the stated context.

> Function pointer addresses are not meant to be shared

Actually I'm pretty sure that's their entire purpose.

> they hold 0 semantic meaning or utility outside a process boundary (modulo kernel).

Sure, but ASLR is meant to defend against an attacker acting within the process boundary so I don't see the relevance.

How the system built by the programmer functions in the face of an adversary is what's relevant (at least it seems to me). Why should the intent of the manufacturer necessarily have a bearing on how I use the tool? I cannot accept that as a determining factor of whether something qualifies as security by obscurity.

If the expectation is that an attacker is unable to snoop any of the relevant network hops then why does it matter that the address is embedded in plaintext in the packets? I don't think it's enough to say "it was meant to be public". The traffic on (for example) my wired LAN is certainly not public. If I'm not designing a system to defend against adversaries on my LAN then why should plaintext on my LAN be relevant to the analysis of the thing I produced?

Conversely, if I'm designing a system to defend against an adversary that has physical access to the memory bus on my motherboard then it matters not at all whether the manufacturer of the board intended for someone to attach probes to the traces.


If you can look up the base address via AnC, is considering it protected key material really correct?


I think that's why the threat model matters. I consider my SSH keys secure as long as they don't leave the local machine in plaintext form. However if the scenario changes to become "the adversary has arbitrary read access to your RAM" then that's obviously not going to work anymore.


If someone can guess the randomization within 1 second using the AnC attack, you can restart as much as you want, but it will not improve security.


  > The correct definition is simply "using obscurity as a security defense mechanism", nothing more.
Also stated as "security happens in layers", and often obscurity is a very good layer for keeping most of the script kiddies away and keeping the logs clean.

My personal favorite example is using a non-default SSH port. Even if you keep it under 1024, so it's still on a root-controlled port, you'll cut down the attacks by an order of magnitude or two. It's not going to keep the NSA or MSS out, but it's still effective at pushing away the common script kiddies. You could even get creative and play with port knocking - that keeps the logs on under-1024 ports clean.
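
For reference, the whole change is one directive in sshd_config (the port number here is illustrative; pick any free one under 1024):

  # /etc/ssh/sshd_config -- illustrative; any free port under 1024
  # keeps the listener root-controlled, as noted above
  Port 922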


I use non-standard SSH ports too. It does not improve theoretical security, but it does improve quality of life by generating smaller logs.


In the limit, an encryption key falls to the same logic. You simply rely on the difficulty of scanning all possible keys.


I downvoted because the poster doesn't understand what security by obscurity means.


Except I do know what security by obscurity is and you are out of date on the subject. When you have attacks that make ASLR useless, then it is security by obscurity. Your thinking would have been correct 10 years ago. It is no longer correct today. The middle ground is to say that the benefits of ASLR are questionable, like I said in the comment you downvoted.


ASLR obscures the memory layout. That is security by obscurity by definition. People thought this was okay if the entropy was high enough, but then the ASLR⊕Cache attack was published and now its usefulness is questionable.


ASLR is by definition security through obscurity. That doesn't make it useless, as there's nothing wrong with using obscurity as one layer of defenses. But that doesn't change what it fundamentally is: obscuring information so that an attacker has to work harder.


Is having a secret password security by obscurity? What about a private key?

Security by obscurity is about the bad practice of thinking that obscuring your mechanisms and implementations of security increases your security. It's about people who think that by using their nephew's own super secret unpublished encryption they will be more secure than by using hardened standard encryption libraries.
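
As a toy illustration of my own (not a real scheme anyone ships): a fixed-byte XOR "cipher" is the textbook case, since its entire security is the secrecy of the method, and once the method leaks, every message it ever protected falls at once.

  /* Toy "nephew crypto": the security rests entirely on keeping the
     method secret. Once an attacker learns it is XOR with a fixed
     constant, all past and future ciphertexts are open. */
  #include <stddef.h>

  void toy_encrypt(unsigned char *buf, size_t len) {
      for (size_t i = 0; i < len; i++)
          buf[i] ^= 0x5A;  /* the whole "secret" is this constant */
  }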


Security by obscurity is keeping your security algorithms and design secret, not your data at runtime secret.


It’s a total distortion of what the phrase means.

Security through obscurity is when you run your sshd server on port 1337 instead of 22 without actually locking the server down, because you don’t think the hackers know how to portscan that high. Everyone runs on 22, but you obscurely run it elsewhere. “Nobody will think to look.”

ASLR is nothing like that. It’s not that nobody thinks to look, it’s that they have no stable gadgets to jump to. The only way to get around that is to leak the mapping or work with the handful of gadgets that are stable. It’s analogous to shuffling a deck of cards before and after every hand to protect against card counters. Entire cities in barren deserts have been built on the real mathematical win that comes from that. It’s real.


With attacks such as AnC, your logic fails. They can figure out the locations and get plenty of stable gadgets.

Any shuffling of a deck of cards by Alice is pointless if Bob can inspect the deck after she shuffles it. It makes ASLR not very different from changing your sshd port. In both cases, this describes the security:

https://web.archive.org/web/20240123122515if_/https://www.sy...


Okay, sure, ASLR can be defeated by hardware leaks. The first rowhammer papers were over ten years ago; it's very old news. It's totally irrelevant to this thread. The fact that there exist designs with hardware flaws that make them incapable of hosting a secure PRNG has no relevance to a discussion about the merits, or lack thereof, of PRNG-based security measures. The systems you're referring to don't have secure PRNGs.

Words have meaning, god damn it! ASLR is not security through obscurity.

Edit: I was operating under the assumption that “AnC” was some new hotness, but no, this is the same stuff that’s always been around, timing attacks on the caches. And there’s still the same solution as there was back then: you wipe the caches out so your adversaries have no opportunity to measure the latencies. It’s what they always should have done on consumer devices running untrusted code.
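
For a rough idea of the measurement such attacks build on, here is a generic cache-timing probe of my own (x86 with GCC/Clang intrinsics, not the AnC code itself): a load that hits in cache takes far fewer cycles than a miss, and that latency difference is the side channel.

  /* Generic cache-timing probe: time a single load with rdtscp.
     A low cycle count means the line was cached -- the signal that
     EVICT+TIME style attacks such as AnC amplify. */
  #include <stdint.h>
  #include <x86intrin.h>

  static uint64_t time_load(volatile const char *p) {
      unsigned aux;
      uint64_t t0 = __rdtscp(&aux);  /* timestamp before the access */
      (void)*p;                      /* the probed memory access */
      uint64_t t1 = __rdtscp(&aux);  /* timestamp after the access */
      return t1 - t0;
  }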


> ASLR is not security through obscurity.

I used to think this, but hearing about the AnC attack changed my mind. I have never heard of anyone claiming to mitigate it.


Except in this case there can be a whole bunch of parallel bears.


Scrambling != Obscuring. Obscuring to me means that there's a fixed something to hide that can be discovered and exploited.


ASLR is technically a form of security by obscurity. The obscurity here being the memory layout. The reason nobody treated it that way was the high entropy that ASLR had on 64-bit, but the ASLR⊕Cache attack has undermined that significantly. You really do not want ASLR to be what determines whether an attacker takes control of your machine if you care about having a secure system.


The defining characteristic of security through obscurity is that the effectiveness of the security measure depends on the attacker not knowing about the measure at all. That description doesn’t apply to ASLR.


It produces a randomization either at compile time or run time, and the randomization is the security measure, which is obscured based on the idea that nobody can figure it out with ease. It is a poor security measure given the AnC attack that I mentioned. ASLR randomization is effectively this when such attacks are applicable:

https://web.archive.org/web/20240123122515if_/https://www.sy...


You are confusing randomization, a legitimate security mechanism, with security by obscurity. ASLR is not security by obscurity. Please spend the time on understanding the terminology rather than regurgitating buzz words.


I understand the terminology. I even took a graduate course on the subject. I stand by what I wrote. Better yet, this describes ASLR when the AnC attack applies:

https://web.archive.org/web/20240123122515if_/https://www.sy...



