OpenSSL is used by approximately everything under the sun. Some of those users will be vendors that build with default compiler flags and no stack cookies. A lot of IoT devices, for example, still don't have stack cookies for any of their software.
It's 2026 and we still have bugs from copying unbounded user input into fixed-size stack buffers in security-critical code. Oh well, maybe we'll fix it in the next 30 years instead.
"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
-- C.A.R. Hoare, "The 1980 ACM Turing Award Lecture"
The actual vulnerability is indeed the copy. What we used to do is this:
1. Find out how big this data is. We tell the ASN.1 code how big it's allowed to be, but since we're not storing it anywhere yet, that limit doesn't matter
2. Check that we found at least some data: zero isn't OK, failure isn't OK, but too big is fine
3. Copy the too-big data into a fixed-size local buffer (sketched below)
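Roughly, that pattern looks like the following. To be clear, this is a simplified sketch, not the actual OpenSSL code: the enclosing function, the buffer, and its size are all illustrative, and only the ossl_asn1_type_get_octetstring_int call is the real function discussed below.

    /* Simplified sketch of the pattern, not the actual OpenSSL code: the
     * enclosing function, the buffer and its size are illustrative. */
    static int get_iv_from_params(const ASN1_TYPE *atype)
    {
        unsigned char iv[16];     /* fixed-size stack buffer */
        long num = 0;
        int len;

        /* 1. Ask how big the data is. There's no destination yet, so the
         *    "max" argument doesn't bound anything that matters. */
        len = ossl_asn1_type_get_octetstring_int(atype, &num, NULL, 0);

        /* 2. Failure and empty data are rejected; "too big" is not. */
        if (len <= 0)
            return 0;

        /* 3. Copy the data into the local buffer, using the claimed length
         *    as the limit instead of sizeof(iv). If len > 16 this writes
         *    past the end of the stack frame. The missing check is:
         *        if (len > (int)sizeof(iv)) return 0;                      */
        len = ossl_asn1_type_get_octetstring_int(atype, &num, iv, len);

        /* ... use iv and num ... */
        return 1;
    }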
The API design is typical of C and has the effect of encouraging this mistake:
    int ossl_asn1_type_get_octetstring_int(const ASN1_TYPE *a, long *num, unsigned char *data, int max_len)
That "int" we're returning is either -1 or the claimed length of the ASN.1 data without regard to how long that is or whether it makes sense.
This encourages people to either forget the return value entirely (it's just some integer, who cares, in the happy path this works) or check it only for -1, which indicates some fatal ASN.1-layer problem, give up in that case, and ignore every other value.
If the thing you got back from your function was a Result type you'd know that this wasn't OK, because it isn't OK. But the "Eh, everything is an integer" model popular in C discourages such sensible choices because they were harder to implement decades ago.
The Win32 API at some point started using the convention of passing the buffer length by reference. If the buffer is too small, the function updates that length with the required buffer size and returns an error code.
I quite like that, within the confines of C. I prefer that the caller be responsible for allocations, and this convention makes it harder to mess up.
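As a rough illustration of that convention (the get_payload function and status codes here are made up for the example, not a real Win32 or OpenSSL API):

    #include <stdio.h>
    #include <string.h>

    typedef enum { ST_OK = 0, ST_BUFFER_TOO_SMALL, ST_PARSE_ERROR } status_t;

    /* Pretend this 40-byte blob is the octet string we parsed from somewhere. */
    static const unsigned char payload[40] = { 0xAA };

    /* *len is the caller's buffer size on input; on return it holds the bytes
     * written (ST_OK) or the bytes required (ST_BUFFER_TOO_SMALL). Nothing is
     * ever written past the original *len. */
    static status_t get_payload(unsigned char *buf, size_t *len)
    {
        if (*len < sizeof(payload)) {
            *len = sizeof(payload);      /* tell the caller what it needs */
            return ST_BUFFER_TOO_SMALL;
        }
        memcpy(buf, payload, sizeof(payload));
        *len = sizeof(payload);
        return ST_OK;
    }

    int main(void)
    {
        unsigned char iv[16];
        size_t len = sizeof(iv);

        if (get_payload(iv, &len) == ST_BUFFER_TOO_SMALL)
            printf("need %zu bytes, only have %zu\n", len, sizeof(iv));
        return 0;
    }

Ignoring the status here can't smash the stack; at worst you use a buffer that was never filled, and the too-small case hands you the size you actually need.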
I get their point that you can't provide a "No" in the reminder. But there should be an option (maybe even hidden under "advanced settings - here be dragons!") for this.
Problem is (and that was their argument) people press this button all the time without reading the dialogue at all, and then won't know how to turn it back on. A messenger app has to deal with very tech-illiterate people. But there should be an option in settings for the tech-savvy user.
Signal is an interesting case study in UX failure. I and a bunch of other tech-forward people were on it in its heyday, but after they removed SMS support and implemented shitty UX like that nag dialog, neither I nor a single person I know uses it any more. Everyone is on WhatsApp or iMessage.
It may be cryptographically superior, but does that matter at the end of the day if nobody uses it?
Cryptographical superiority aside, Signal doesn't collect personal data, unlike Whatsapp. For me that's the main reason to use it. The UX is good enough, although some points can for sure be improved.
WhatsApp should be a non-starter. What Mark Zuckerberg did to WhatsApp should be required reading for anyone using the internet, who can then decide if they still want to use Facebook (never mind, they build a shadow profile for you anyway).
A few of my neighbors have kids the same age as my kids, they're on a WhatsApp group chat, and my choice is either use WhatsApp or make my kid miss out on social events, so it's not really a choice.
"Hey let's switch to this app that nobody else is using and it sends you annoying popups every month but trust me bro it's more secure" is not a winning argument
Every so often I consider writing the "STFU license." Something like GPL but if you use this code, even as a library, you can't give people unwanted notifications. Would need to be pretty comprehensive and forward compatible to cover all the crazy cases that notification-enthusiasts dream up.
This. We must change the law so that the field above is not considered given consent. And while we are at it, we must change "silence is agreement" to "silence is disagreement". This applies to changes of ToS, price increases, etc. That means if I don't click a link with an "I agree" button, the ToS change is not accepted - which means they have to cancel/delete my account.
Didn't the FCC remove the "1-click unsubscribe" requirement since it can "provide more choice and lower prices to all users across the board" (i.e. the companies can rip off more users and create pseudo-lower prices)?
The EU has its GDPR and it has some teeth, but the US is hopeless on that front for now, from my vantage point.
The FTC established a "click-to-cancel" rule, but (as with so many regulations in the US) it was blocked by an appeals court. Federal law says there's a hoop they have to jump through for rules with an impact of more than $100 million, and they didn't jump through that hoop because they didn't think the impact was that high.
I like to frame it like this: "ask me later" is rape culture. It promotes and reinforces a culture of never taking "no" for an answer, and pushing one's agenda/intent regardless of the preference/consent of the other party/parties.
I see the point you're making but this sort of hyperbole has a tendency to turn people away from whatever point you're trying to make unless they already agree with you.
I was visiting a girlfriend once, and she was in the process of moving in the same city. There was a telephone bill on top of her dresser, and I noticed that she had noted "butt-rape fee" next to one of the line items there.
Now she is a very literate woman and loves poetry and "Penny Dreadfuls", so she uses language and words very deliberately. And so, I asked her why she wrote that, and she said it was some sort of unnecessary fee that they were charging to move her line from one address to another, and she clearly resented their opportunistic capitalism.
I certainly sympathized with her, especially since she is the type of woman who has probably been subjected to that sort of actual trauma in her own life, or in that of her friends, so she had every right to compare the experiences.
If a single engineer can sabotage a project, then the company has bigger things to worry about.
There should be backups, or you know, GitHub with branch protection.
Aside from that, perverse incentives are a real problem with these systems, but not an insurmountable one.
Everyone on the project should be long on the project; if they don't think it will work, why are they working on it?
At the very least, people working on the project should have to disclose their position on the project, and the project lead can decide whether they are invested enough to work on it.
Part of the compensation for working on the project could be long bets paid for by the company - you know, like how equity options work, except these are way more likely to pay out.
If no one wants to work on a project, the company can adjust the price of the market by betting themselves.
Eventually it will be a deal that someone wants to take.
And if it's not, then why is the project happening? Clearly everyone is willing to stake money that it will fail.
To have a significant impact, SSRF needs to be combined with a second, worse vulnerability: an endpoint that trusts unauthenticated requests just because they come from within the local network. Sadly, several popular clouds ship such a vulnerability out of the box (the metadata endpoint).
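A minimal sketch of how the two compose, assuming an app with a "fetch this URL for me" feature such as a link previewer. The fetch_for_user helper and its caller are hypothetical; the link-local metadata address is the convention used by several clouds, and the credential exposure assumes an IMDSv1-style service that trusts any request from the host.

    /* Hypothetical server-side URL fetcher built on libcurl. The only thing
     * that matters is that the URL comes from the request untouched. */
    #include <curl/curl.h>

    static int fetch_for_user(const char *user_supplied_url)
    {
        CURL *curl = curl_easy_init();
        CURLcode rc;

        if (curl == NULL)
            return -1;
        curl_easy_setopt(curl, CURLOPT_URL, user_supplied_url);
        /* Body goes to stdout by default; a real app would return it to the
         * user, which is exactly what makes the SSRF useful. */
        rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : -1;
    }

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        /* An attacker supplies a link-local address instead of a public URL.
         * A metadata service that trusts any request from the host will hand
         * back instance metadata (and, further down the tree, credentials)
         * with no authentication at all. */
        fetch_for_user("http://169.254.169.254/latest/meta-data/");
        curl_global_cleanup();
        return 0;
    }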
I think it would be very cute to train a model exclusively on pre-information-age documents, and then try to teach it what a computer is and get it to write some programs. That said, this doesn't look like it's nearly there yet, with the output looking closer to Markov-chain than ChatGPT quality.
Signal is an end-to-end encrypted messaging app. People continue to breathlessly mention the lack of database encryption as a problem, but that has never been a real security issue: its job is not, and has never been, dissuading an attacker who has local access to one of the ends, especially because that is an incoherent security boundary (just like the people who were very upset about Signal using the system keyboard, which is potentially backdoored - if your phone is compromised, of course someone will be able to read your Signal messages).
Database encryption isn't comparable to the keyboard drama. Protecting against malware in your keyboard can be done by using a different keyboard and is of course out of scope.
But if my phone gets taken and an exploit is used to get root access on it, I don't want the messages to be readable and there's nothing I can do about it. It's not like I can just use a different storage backend.
It's also a very simple solution - just let me set an encryption password. It's not an open-ended problem like protecting from malware running on the device when you're using it.
If someone has root access to your apparently unencrypted phone, then they can just launch the Signal app directly and it'll decrypt the database for them.
Which is to say this is an incoherent security boundary: you're not encrypting your phone's storage in a meaningful way, but you're planning to rely on entering a PIN every time you launch Signal to secure it? (Which in turn is also not secure, because a PIN is not secure without hardware able to enforce lockouts and tamper resistance... which in this scenario you just indicated has been bypassed.)
Any modern Android is encrypted at rest, but if your phone is taken after first unlock, they get access to the plaintext storage. That's the attack vector.
A passphrase can be long, not just a short numeric PIN. It can be different from the phone unlock one. It could even be different for different chats.