
OpenSSL is used by approximately everything under the sun. Some of those users will be vendors that use default compiler flags without stack cookies. A lot of IoT devices, for example, still ship without stack cookies for any of their software.

2026 and we still have bugs from copying unbounded user input into fixed-size stack buffers in security-critical code. Oh well, maybe we'll fix it in the next 30 years instead.

I recall Hoare,

"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."

-- C.A.R. Hoare, "The 1980 ACM Turing Award Lecture"

Guess what 1980s language he is referring to.

Then in 1988,

https://en.wikipedia.org/wiki/Morris_worm

It has been 46 years since the speech, and 38 since the Morris worm.

How many related improvements have been tackled by WG14?


Humans are horribly unserious, yet extremely unfunny at the same time. What gives?

I particularly like the FIPS bit:

>The FIPS modules in 3.6, 3.5, 3.4, 3.3 and 3.0 are not affected by this issue, as the CMS implementation is outside the OpenSSL FIPS module boundary.

"I hereby define the vulnerability to be outside the bit that I define to be secure, therefore we're not vulnerable".


The bug isn't actually the copy but the missing bounds check.

If you had a dynamically sized, heap-allocated buffer as the destination you'd still have a denial-of-service attack, no matter what language was used.
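
To sketch that point (invented names, not OpenSSL code): even when the copy itself is bounded and the destination lives on the heap, the unvalidated length field is still a denial-of-service lever.

    #include <stdlib.h>
    #include <string.h>

    /* Sketch, not OpenSSL code: heap destination, bounded copy -- yet the
     * attacker-controlled length still matters. */
    unsigned char *copy_octets(const unsigned char *src, size_t avail,
                               size_t claimed_len)
    {
        /* claimed_len comes straight from the attacker's ASN.1. Without a
         * sanity check against avail (the bytes actually present), every
         * request can demand a multi-gigabyte allocation; that missing
         * check is the bug, in any language. */
        unsigned char *dst = malloc(claimed_len);
        if (dst == NULL)
            return NULL;
        memcpy(dst, src, claimed_len < avail ? claimed_len : avail);
        return dst;
    }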


The actual vulnerability is indeed the copy. What we used to do is this:

1. Find out how big the data is. We tell the ASN.1 code how big it's allowed to be, but since we're not storing it anywhere at this step, those checks don't matter

2. Check we found at least some data: zero isn't OK, failure isn't OK, but too big is fine

3. Copy the too-big data into a fixed-size local buffer

The API design is typical of C and has the effect of encouraging this mistake:

    int ossl_asn1_type_get_octetstring_int(const ASN1_TYPE *a, long *num, unsigned char *data, int max_len)
That "int" we're returning is either -1 or the claimed length of the ASN.1 data without regard to how long that is or whether it makes sense.

This encourages people either to ignore the return value entirely (it's just some integer, who cares, in the happy path this works) or to check it only for -1, which indicates some fatal ASN.1-layer problem, and give up there while ignoring all other values.

If the thing you got back from your function was a Result type you'd know that this wasn't OK, because it isn't OK. But the "Eh, everything is an integer" model popular in C discourages such sensible choices because they were harder to implement decades ago.
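
To make that concrete, here's a self-contained toy of the same convention (invented function, mirroring the contract described above, not the real OpenSSL code):

    #include <stdio.h>
    #include <string.h>

    /* Toy analogue: returns -1 on parse failure, otherwise the *claimed*
     * length, while copying at most max_len bytes into data. */
    static int get_octets(const unsigned char *in, int in_len,
                          unsigned char *data, int max_len)
    {
        if (in == NULL)
            return -1;
        memcpy(data, in, in_len < max_len ? in_len : max_len);
        return in_len; /* claimed length, regardless of max_len */
    }

    int main(void)
    {
        unsigned char wire[64] = { 0 }; /* pretend 64 bytes arrived */
        unsigned char buf[16];
        int len = get_octets(wire, sizeof(wire), buf, sizeof(buf));

        if (len <= 0)
            return 1; /* only the "fatal" cases get handled */
        /* Nothing in the plain int return forces a len <= sizeof(buf)
         * comparison; buf and len now travel together, and any access
         * like buf[len - 1] is out of bounds. */
        printf("claimed %d bytes for a %zu-byte buffer\n", len, sizeof(buf));
        return 0;
    }

A Result (or any must-use sum type) would force the caller to decide what "too big" means before ever touching the buffer.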


The Win32 API at some point adopted the convention of passing the buffer length by reference. If the buffer is too small, the API function updates the reference with the required buffer length and returns an error code.

I quite like that, within the confines of C. I prefer the caller be responsible for allocations, and this makes it harder to mess up.
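
Roughly this shape, with invented names (mirroring APIs like GetUserNameW): probe for the size, allocate, call again.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define ERR_OK        0
    #define ERR_MORE_DATA 1

    /* Invented function following the convention: *len carries the buffer
     * size in and the required size out. */
    static int get_greeting(char *buf, size_t *len)
    {
        const char msg[] = "hello";

        if (buf == NULL || *len < sizeof(msg)) {
            *len = sizeof(msg); /* tell the caller how much is needed */
            return ERR_MORE_DATA;
        }
        memcpy(buf, msg, sizeof(msg));
        *len = sizeof(msg);
        return ERR_OK;
    }

    int main(void)
    {
        size_t len = 0;

        if (get_greeting(NULL, &len) == ERR_MORE_DATA) { /* probe */
            char *buf = malloc(len); /* the caller owns the allocation */
            if (buf != NULL && get_greeting(buf, &len) == ERR_OK)
                printf("%s\n", buf);
            free(buf);
        }
        return 0;
    }

The two-call dance is clunky, but the size and the buffer can never silently disagree.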


Assuming you're talking about a heap buffer overrun, it's still possible to exploit for EoP in some cases.

No, I mean you'd just allocate a tonne of memory

Ah, okay. Thought you were talking about an OOB heap write or something.

A denial of service attack is a million times better than an RCE attack.

2026 and why not vibe code our own cryptography library just like we are vibing lots of sandbox solutions? /s

It's 2023, why not use Rustls.

It's 2014, why not use LibreSSL.

You don't have to bring up AI, everyone just needs to leave OpenSSL to die.


> 2026 and why not vibe code our own cryptography library just like we are vibing lots of sandbox solutions? /s

And make sure to make it a hybrid of PHP and JavaScript /s


I saw a Mastodon tweet a while ago, which went something like:

Do tech companies understand consent?

- [ ] Yes

- [ ] Ask me again in a few days


Hey, that sounds like Signal!

https://github.com/signalapp/Signal-iOS/issues/4590

>We're not going to remove the reminders.

>If you don't want to provide that access, you still don't need to – you can simply tap remind me later once a month

(See also: https://github.com/signalapp/Signal-iOS/issues/4373, https://github.com/signalapp/Signal-iOS/issues/5809, ...)


I get their point that you can't provide a "No" in the reminder. But there should be an option (maybe even hidden under "advanced settings - here be dragons!") for this.

Molly, the Signal fork, has exactly this feature. https://molly.im/

>I get their point that you can't provide a "No" in the reminder.

Yes you can. All reminders should have an option "Do not remind me again."


Problem is (and that was their argument) that people press this button all the time without reading the dialog at all, and then won't know how to turn it back on. A messenger app has to deal with very technically illiterate people. But there should be an option in settings for the tech-savvy user.

Perhaps non-tech-literate people should not be annoyed with unwanted popups either.

Signal is an interesting case study in UX failure. I and a bunch of other tech-forward people were on it in its heyday, but after they removed SMS support and implemented shitty UX like that nag dialog, neither I nor a single person I know uses it any more. Everyone is on WhatsApp or iMessage.

It may be cryptographically superior, but does that matter at the end of the day if nobody uses it?


Cryptographic superiority aside, Signal doesn't collect personal data, unlike WhatsApp. For me that's the main reason to use it. The UX is good enough, although some things could certainly be improved.

WhatsApp should be a non-starter. What Mark Zuckerberg did to WhatsApp should be required reading for anyone using the internet; read it and then decide if you still want to use Facebook (never mind, they build a shadow profile of you anyway).

"It's time. Delete Facebook" isn't subtle https://www.forbes.com/sites/parmyolson/2018/09/26/exclusive...


That needs to be spelled out.

Delete: Facebook, Messenger, Instagram, WhatsApp, Meta, Threads, Manus.

Most people think of Facebook and Messenger when they see "Delete Facebook". That's also why the rest don't have Meta or FB in their names.


Sounds like they just don't care about privacy, do they? Guess showing them https://i.redd.it/0imry50rxy961.png still won't change anything...

That is a thoroughly unconvincing graphic, yeah.

A few of my neighbors have kids the same age as mine; they're on a WhatsApp group chat, and my choice is either to use WhatsApp or to make my kid miss out on social events, so it's not really a choice.

"Hey let's switch to this app that nobody else is using and it sends you annoying popups every month but trust me bro it's more secure" is not a winning argument


The graphic has an error: in the Signal box, "Phone number" should be included.

WhatsApp isn't any better, it's just more popular.

> It may be cryptographically superior, but does that matter at the end of the day if nobody uses it?

I've made a few attempts to convert people, but no-go. People stay on Telegram and WhatsApp because they have better UX and features.

Signal refuses to see the value in good attractive UX.


Every so often I consider writing the "STFU license." Something like GPL but if you use this code, even as a library, you can't give people unwanted notifications. Would need to be pretty comprehensive and forward compatible to cover all the crazy cases that notification-enthusiasts dream up.

This. We must change laws so that the above is not considered given consent. And while we are at it, we must change "silence is agreement" to "silence is disagreement". This applies to changes of ToS, price increases, etc. If I don't click an "I agree" button, the ToS change is not accepted, which means they have to cancel/delete my account.

Didn't the FCC remove the "1-click unsubscribe" requirement since it can "provide more choice and lower prices to all users across the board" (i.e. the companies can rip off more users and create pseudo-lower prices)?

The EU has its GDPR, and it has some teeth, but the US is currently hopeless on that front, for now, from my vantage point.

I'd love to stand corrected, though.


The FTC established a "click-to-cancel" rule, but (as with so many regulations in the US) it was blocked by an appeals court. Federal law says there's a hoop they have to jump through for rules with an economic impact of more than $100 million, and they didn't jump through that hoop because they didn't think the impact was that high.

Just move to Germany, we have all you asked for.

No we don't. Banks yes, but outside of banking no one respects this.

> And while we are at it, we must change "silence is agreement" to "silence is disagreement".

Maybe we should reframe their "silence is agreement" message as "silence is consent".


So creepy and weird that this comment has downvotes. These people/companies absolutely do not value nor care about consent.

I like to frame it like this: "ask me later" is rape culture. It promotes and reinforces a culture of never taking "no" for an answer, and pushing one's agenda/intent regardless of the preference/consent of the other party/parties.

> "ask me later" is rape culture

I see the point you're making but this sort of hyperbole has a tendency to turn people away from whatever point you're trying to make unless they already agree with you.


I was visiting a girlfriend once, and she was in the process of moving in the same city. There was a telephone bill on top of her dresser, and I noticed that she had noted "butt-rape fee" next to one of the line items there.

Now she is a very literate woman and loves poetry and "Penny Dreadfuls", so she uses language and words very deliberately. And so, I asked her why she wrote that, and she said it was some sort of unnecessary fee that they were charging to move her line from one address to another, and she clearly resented their opportunistic capitalism.

I certainly sympathized with her, especially since she is the type of woman who has probably been subjected to that sort of actual trauma in her own life, or in that of her friends; she had every right to compare the experiences.



They ran out of letter "o" supply, so they can't spell "no".

RPISEC's Modern Binary Exploitation is somewhat famous for doing exactly that!

More people interested in security should know about RPI. :)

brb taking out a 10:1 bet on a new project which will print money and then rm -rf'ing all the code so i get a payout


If a single engineer can sabotage a project, then the company has bigger things to worry about. There should be backups, or, you know, GitHub with branch protection.

Aside from that, perverse incentives are a real problem with these systems, but not an insurmountable one. Everyone on the project should be long on the project; if they don't think it will work, why are they working on it? At the very least, people working on the project should have to disclose their position on it, and the project lead can decide whether they are invested enough to work on it. Part of the compensation for working on the project could be long bets paid for by the company, like how equity options work, except these are way more likely to pay out.

If no one wants to work on a project, the company can adjust the price of the market by betting itself. Eventually it will be a deal that someone wants to take. And if it's not, then why is the project happening? Clearly everyone is willing to stake money that it will fail.


<insert dilbert comic about wally coding himself a yacht>


SSRF is not just a DoS.


To have a significant impact, SSRF needs to be combined with a second, worse vulnerability: an endpoint that trusts unauthenticated requests just because they come from within the local network. Sadly, several popular clouds have such a vulnerability out of the box (the metadata endpoint).
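
A sketch of that second vulnerability (invented names; real cloud metadata services differ in detail): the endpoint treats "came from inside" as authentication, and an SSRF'd request also comes from inside.

    #include <stdio.h>
    #include <string.h>

    /* Invented example of the flawed trust model: source IP stands in for
     * authentication. */
    static int is_internal(const char *peer_ip)
    {
        /* naive "local network" test: loopback and RFC 1918 10/8 */
        return strncmp(peer_ip, "127.", 4) == 0 ||
               strncmp(peer_ip, "10.", 3) == 0;
    }

    static int handle_metadata_request(const char *peer_ip,
                                       char *out, size_t out_len)
    {
        if (!is_internal(peer_ip))
            return -1; /* outsiders rejected... */
        /* ...but a request forged via SSRF in any internal service also
         * arrives from an internal IP, so it gets the secrets too. */
        return snprintf(out, out_len, "%s", "TEMPORARY-CREDENTIALS");
    }

    int main(void)
    {
        char out[64];

        /* The SSRF'd request looks exactly like a trusted internal one: */
        if (handle_metadata_request("10.0.0.7", out, sizeof(out)) > 0)
            printf("%s\n", out);
        return 0;
    }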


Yeah, that's less of a "vulnerability" and more of how I expect 99% of companies to handle authentication within a network (sadly).


I think it would be very cute to train a model exclusively on pre-information-age documents, and then try to teach it what a computer is and get it to write some programs. That said, this doesn't look like it's nearly there yet, with the output looking closer to Markov-chain than ChatGPT quality.


Amazon Fire Tablet, one of the only things I've ever returned.


100%. Ours has become so inexplicably slow it’s wild, even after factory resets. The Amazon OS experience is also terrible. It sits unused.


Signal is an end-to-end encrypted messaging app. People continue to breathlessly mention the lack of database encryption as a problem, but that never made it a real security issue: its job is not, and has never been, dissuading an attacker who has local access to one of the ends, especially because that is an incoherent security boundary (just like the people who were very upset about Signal using the system keyboard, which is potentially backdoored; if your phone is compromised, of course someone will be able to read your Signal messages).


Database encryption isn't comparable to the keyboard drama. Protecting against malware in your keyboard can be done by using a different keyboard and is of course out of scope.

But if my phone gets taken and an exploit is used to get root access on it, I don't want the messages to be readable, and there's nothing I can do about it. It's not like I can just use a different storage backend.

It's also a very simple solution - just let me set an encryption password. It's not an open-ended problem like protecting from malware running on the device when you're using it.
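
As a sketch of what that password would do (OpenSSL's PBKDF2 here, with illustrative parameters; a real design would likely prefer a memory-hard KDF like Argon2): derive the database key from the passphrase at unlock time, instead of storing a key where a post-unlock root attacker can read it.

    #include <string.h>
    #include <openssl/evp.h>

    /* Sketch only: derive a database key from a user passphrase. The salt
     * is random per install and stored next to the database; the derived
     * key never is. */
    int derive_db_key(const char *passphrase,
                      const unsigned char *salt, int salt_len,
                      unsigned char *key, int key_len)
    {
        /* PBKDF2-HMAC-SHA256 with a deliberately slow iteration count. */
        return PKCS5_PBKDF2_HMAC(passphrase, (int)strlen(passphrase),
                                 salt, salt_len, 600000,
                                 EVP_sha256(), key_len, key);
    }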


If someone has root access to your apparently unencrypted phone, then they can just launch the Signal app directly and it'll decrypt the database for them.

Which is to say this is an incoherent security boundary: you're not encrypting your phone's storage in a meaningful way, but you plan to rely on entering a PIN every time you launch Signal to secure it? (Which in turn is also not secure, because a PIN is not secure without hardware able to enforce lockouts and tamper resistance... which in this scenario you just indicated have been bypassed.)


Any modern Android phone is encrypted at rest, but if your phone is taken after first unlock, they get access to the plaintext storage. That's the attack vector.

A passphrase can be long, not just a short numeric PIN. It can be different from the phone unlock one. It could even be different for different chats.


No one is using an Alpha, Motorola 680x0, PA-RISC, or SuperH computer because that's the only thing they can afford. Rust supports 32-bit x86.

