Hacker News | carey's comments

The FSF also typically requires a copyright assignment for their GPL code. Nobody thinks that they’ll ever relicense Emacs, though.


It has been decades since I've seen an FSF CLA packet, but if I recall correctly, the FSF also made legally-binding promises back to the original copyright holder, promising to distribute the code under some kind of "free" (libre, not gratis) license in the future. This would have allowed them to switch from GPL 2 to GPL 3, or even to an MIT license. But it wouldn't have allowed them to make the software proprietary.

But like I said, it has been decades since I've seen any of their paperwork, and memory is fallible.


They’re also not exactly a VC-backed startup.


Yeah, I don't mind signing a CLA for copyleft software with a non-profit org, but I do mind signing one with a for-profit one.


malloc in uCRT just calls HeapAlloc, though? You can see the code in ucrt\heap\malloc_base.cpp if you have the Windows SDK installed.

Programs can opt in to the _segment_ heap in their manifest, but it’s not necessarily any faster.


Could it be an accent thing? Trying to learn foreign language pronunciation as a New Zealand English speaker is frustrating; I should not be pronouncing 에 as the vowel in “bed” the way I say the latter, but that’s what every description of it says.


You have a fencepost error: the notes in Western music in equal temperament are C, C♯/D♭, D, D♯/E♭, E, F, F♯/G♭, G, G♯/A♭, A, A♯/B♭, B; then c an octave higher is the thirteenth.
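To make the fencepost concrete, a quick sketch (Python, purely illustrative):

```python
# The twelve pitch classes of 12-tone equal temperament.
NOTES = ["C", "C#/Db", "D", "D#/Eb", "E", "F", "F#/Gb", "G",
         "G#/Ab", "A", "A#/Bb", "B"]

print(len(NOTES))      # 12 semitone steps span one octave
print(NOTES[12 % 12])  # 12 semitones above C wraps back to C;
                       # counting inclusively, that c is the 13th note
```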


Ah, thank you for the correction; I'm just a hobbyist and haven't practiced in a few years.


It’s accurate to say that there are twelve intervals, anyway, which is the point you were making.


All the modern AS/400 servers can run unmodified AIX programs these days anyway.


You keep enumerating words that I will never put in my CV, lest employers know that I have been paid to suffer through working with them.


If it’s just for Java that’s not really a problem. They might be able to run the applet directly with appletviewer, or repackage it for Java Web Start. Otherwise, CheerpJ has a solution for continuing to run the applet in the browser by converting it to WASM.


Just to note that Java Web Start is also discontinued.

Karakun are on it though: https://openwebstart.com/


Same. Gradle even lets you configure replacement dependencies, so if anything depends on Log4J it automatically gets the adapter instead.
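For anyone curious, the feature is Gradle's dependency substitution; a config sketch in the Kotlin DSL (the coordinates are illustrative — log4j-over-slf4j is one common adapter, and the version is just an example):

```kotlin
// build.gradle.kts — swap any transitive Log4j 1.x dependency for the
// SLF4J adapter, so legacy code logs through the modern backend instead.
configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute(module("log4j:log4j"))
            .using(module("org.slf4j:log4j-over-slf4j:1.7.36"))
    }
}
```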


At least when it was released, IIS 10 supported HTTP/2, but did not support Kerberos, Negotiate or NTLM authentication over HTTP/2.

https://docs.microsoft.com/en-us/iis/get-started/whats-new-i...

I’m not sure that it’s a problem with HTTP/2 that Microsoft found no reason to implement Windows authentication for it.


The HTTP/2 protocol specifically makes this kind of authentication difficult. It's not just IIS, in general it doesn't quite work right.


Given the long history of request parsing vulnerabilities in HTTP/1.1 servers and proxies, is HTTP/2 actually worse, or have most of the HTTP/1.1 bugs just been fixed already?


These vulnerabilities are all from badly-written HTTP/2 → HTTP/1.1 translations. Most of them come from simple carelessness, rookie errors that should never have been made, dumping untrusted bytes from an HTTP/2 value into the HTTP/1.1 byte stream. This is security 101, straightforward injection attacks with absolutely nothing HTTP-specific in them.
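For illustration, the missing check is only a few lines; a hedged Python sketch of the validation a translation layer needs before copying an HTTP/2 header into an HTTP/1.1 byte stream (function and pattern names are mine, not any particular proxy's code):

```python
import re

# HTTP header names must be RFC 9110 "token" characters; HTTP/2 further
# requires lowercase. Anything else (notably CR, LF, and ':') is rejected
# instead of being dumped into the HTTP/1.1 byte stream.
_TOKEN = re.compile(r"^[!#$%&'*+\-.^_`|~0-9a-z]+$")

def to_http1_field(name: bytes, value: bytes) -> bytes:
    n = name.decode("ascii")   # non-ASCII bytes also fail here
    v = value.decode("ascii")
    if not _TOKEN.fullmatch(n):
        raise ValueError(f"bad header name: {n!r}")
    if "\r" in v or "\n" in v or "\0" in v:
        raise ValueError("control bytes in header value")
    return f"{n}: {v}\r\n".encode("ascii")
```

With a check like this in place, a smuggled name such as `b"x\r\ntransfer-encoding: chunked"` raises an error instead of splitting the downstream request.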

Some of them are a little more complex, requiring actual HTTP/2 and HTTP/1.1 knowledge (largely meaning HTTP/2 framing and the content-length and transfer-encoding headers), but not most of them.

Is HTTP/2 actually worse? Not in the slightest; HTTP/1.1 is the problem here. This is growing pains from compatibility measures as part of removing the problems of an unstructured text protocol. If you have a pure-HTTP/2 system and don’t ever do the downgrade, you’re in a better position.


I'd agree that HTTP/1 deserves a significant portion of the blame.

On the other hand, one maxim I've learned from my time bug hunting is that nobody ever validates strings in binary protocols. As such, I'm utterly unsurprised there are so many implementations with these kinds of bugs, and I'd say they could have been predicted in advance.

In fact… let's see… yep, they were predicted. Some of them, at least. In the HTTP/2 RFC, under Security Considerations, section 10.3 'Intermediary Encapsulation Attacks' [1] describes one of the attack classes from the blog post, the one involving stuffing newlines into header names.

Does that mean something could have been done about it? Perhaps not. The ideal solution would be to somehow design the HTTP/2 protocol itself to be resistant to misimplementation, but that seems pretty much impossible. The spec already bans colons and newlines in header names, but there's no way to be sure implementations won't allow them anyway, short of actually making them a delimiter like HTTP/1 did – in other words, reverting to a text-based protocol. But a text-based protocol would come with its own share of misimplementation risks, the same ones that HTTP/1 has.

On the other hand, perhaps the bug classes could have been mitigated if someone designed test cases to trigger them, and either included them in conformance tests (apparently there was an official HTTP/2 test suite [2] though it doesn't seem to have been very popular), or set up some kind of bot to try them on the entire web. In principle you could blame the authors of HTTP/2 collectively for the fact that nobody did this. But I admit that's pretty handwavey.

[1] https://datatracker.ietf.org/doc/html/rfc7540#section-10.3

[2] https://github.com/http2/http2-test


> On the other hand, one maxim I've learned from my time bug hunting is that nobody ever validates strings in binary protocols.

I wonder how much this has to do with the way strings need to be handled in the programming languages these protocols are implemented in. If dealing with strings seems even more dangerous (when done incorrectly), you might just avoid doing it at all.


It's tough to say that something is a "rookie error" when basically every serious professional team makes the same mistake. This apparently broke every AWS ALB, for instance.


I am genuinely astonished at the number of implementations and major players that are experiencing problems here. I’ve done plenty of HTTP/1 parsing (most significantly in Rust circa 2014) and some HTTP/2 parsing in its earlier draft days, and I can confidently and earnestly state that my code (then and now) would never under any circumstances be vulnerable to the ones I’m calling rookie errors, because I’m always going to validate the user input properly, including doing any subsequent validation necessary in the translation layer due to incompatibilities between the versions, because I know it’ll blow up on me if I don’t do these things. Especially when all of this stuff has already been pointed out in the HTTP/2 RFC’s Security Considerations section, which you’re a fool to ignore when implementing an IETF protocol. The attacks that depend on content-length and transfer-encoding I’m not quite so confident about, though I believe that any of my code that I wrote then or that I would write now would be safe.

It’s quite possible that my attitude to these sorts of things has been warped by using Rust, which both encourages proper validation and makes it easier and more natural than it tends to be in languages like C or C++. I’d be curious to see figures of these sorts of vulnerabilities in varying languages—I strongly suspect that they occur vastly less in Rust code than in C or C++ code, even when they’re not directly anything to do with memory safety.


An error that's extremely common among people doing their first work on a specific domain seems like a good fit for "rookie error".

It's easy to believe most professional teams make that mistake at some point. I'd hope that it's far more rare to make that mistake twice.


No, that doesn't make sense. The errors that trip seasoned pros up are very likely to trip rookies up as well. Words mean things; rookie mistakes are the mistakes that don't trip up the pros.


You're assuming the "pros" hired people with experience in the domain and retained them, and didn't let rookies make said mistakes.


Ah, the venerable "no true professional" argument. A sufficiently optimizing professional would never make these mistakes, it's true!


I would bet that a lot of these are not rookie errors; they are more akin to Spectre or Meltdown: inherently unsafe code where the risk was considered worth the performance.

In general, when writing a high performance middle box, you want to touch the data as little as possible: ideally, the CPU wouldn't even see most of the bytes in the message, they would just be DMA'd from the external NIC to the internal NIC. This is probably not doable for HTTP2->HTTP1, but the general principle applies. In high-performance code, you don't want to go matching strings any more than you think is strictly necessary (e.g. matching the host or path to know where to actually send the packet).

Which is not to say that it wasn't a mistake to assume you can get away with this trade-off. But it's not a rookie error.


No, as I said most of these are absolutely trivial injection attacks from not validating untrusted inputs, being used to trigger a class of vulnerability that has been well-documented since at least 2005.


My point is that the code is doing the most performant thing: sending the values from A to B with as little bit twiddling as possible. They almost certainly failed to even consider that there are different restrictions between the 2 protocols that could pose security issues.


Is a new bucket leaking in a dozen places worse than an old one with all its leaks fixed? I would say yes, until those holes in the new one are also fixed.


When I implemented an HTTP/2 server several years ago, it was all of the "fun" of HTTP/1.1 parsing and semantics, plus the extra challenges of the HTTP/2 optimizations: HPACK, mapping the abbreviated headers to cached in-memory representations, stream management, and, if you supported them, Push Promises too.


Unless you think Twitter was lying in https://blog.twitter.com/en_us/topics/company/2020/suspensio..., that last tweet was the call to violently disrupt the inauguration in POTUS’ absence that was the final straw.


Do you have a screenshot? Maybe Twitter posted it?

> “that last tweet was the call to violently disrupt the inauguration in POTUS’ absence”

What you are saying is not the last tweet from Trump I saw.


It’s in the blog post that I linked to, which quotes the tweet that you seem to be referring to.


This is from the blog [..]On January 8, 2021, President Donald J. Trump Tweeted: “The 75,000,000 great American Patriots who voted for me, AMERICA FIRST, and MAKE AMERICA GREAT AGAIN, will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!” Shortly thereafter, the President Tweeted: “To all of those who have asked, I will not be going to the Inauguration on January 20th.”[..]

I don’t see what you are reading from it. Do you want to translate?


In context, Twitter sees this as encouraging the violent minority of his supporters to do whatever they like at the inauguration, since he will not be there, only Pence, Biden and Harris.

