Hacker News | nlitened's comments

From what I see, the code is incorrect in how it reads “messages” from the TCP socket stream, and it will fail randomly in production with messages longer than 1500 bytes, and occasionally even with shorter ones.

Instead, the TCP socket must be treated as a stream of bytes, and messages must be separated either with a delimiter marking the boundary (like \n, while escaping any newlines inside the JSON) or by writing the message size before the message bytes themselves, so that the code knows how many bytes to read before it has the full message.

Edit: to clarify, the TCP protocol does not guarantee that bytes written in one go will also be read in one go. Instead, they may be split into multiple “reads”, or glued together with the preceding chunk, or both. It’s a “stream of bytes” protocol: it only guarantees that written bytes arrive one after another, in the same order.

So the “naive” message separation used in the code above (read a chunk and assume it’s the entire message that was written) will work in manual tests, and likely even in local automated tests, but will break randomly when exposed to real network conditions.
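
For illustration, here is a minimal sketch of the length-prefix approach in Rust (the function names are hypothetical and this is not taken from the project under discussion): each message is written as a 4-byte big-endian length followed by the payload, and the reader reads exactly that many bytes, no matter how TCP chunks them.

    use std::io::{self, Read, Write};
    use std::net::TcpStream;

    // Write one message: a 4-byte big-endian length prefix, then the payload.
    fn write_message(stream: &mut TcpStream, payload: &[u8]) -> io::Result<()> {
        let len = payload.len() as u32;
        stream.write_all(&len.to_be_bytes())?;
        stream.write_all(payload)?;
        Ok(())
    }

    // Read one message: exactly 4 bytes for the length, then exactly `len`
    // bytes for the payload; read_exact keeps reading across TCP chunks.
    fn read_message(stream: &mut TcpStream) -> io::Result<Vec<u8>> {
        let mut len_buf = [0u8; 4];
        stream.read_exact(&mut len_buf)?;
        let len = u32::from_be_bytes(len_buf) as usize;
        let mut payload = vec![0u8; len];
        stream.read_exact(&mut payload)?;
        Ok(payload)
    }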


Good write up, thanks for taking the time to go into detail. I may try to implement your feedback at some point.

Thanks - I had a quick scan through the code, noticed the 4096-byte buffers, and wondered how larger messages were handled. I couldn't see anything, but wondered if I was missing something!

> because government will have more money now

Ah, so your idea is the good old “only the emperor who controls the violence apparatus should have a lot of money and power”?

It’s not a very original idea; it has been tried many times, and it has failed many times.

> but then we should be even more careful who gets to the top

Right, so “for some reason only the greedy, power-hungry psychopaths get to the top in the current system; let’s fix it so that there can’t be many of them, only one government that has the power to take away other people’s wealth and concentrate it immensely, and surely we will figure out how to make sure it’s not filled with greedy, power-hungry psychopaths as we go”.


Your move is sinking a civilian cargo ship in response to an attack on naval bases? Ok


You actually don't have to maintain the fork and/or update to the latest version if you don't need new features.


You don't have to maintain the fork and/or update to the latest version if you don't need new features or security fixes.

Most people want security fixes.


Or patched vulnerabilities.


It’s up to the application to change this parameter on a per-socket basis.


> If your SaaS can’t compete on the service part, the software part ain’t gonna make or break you.

Oh, your bootstrapped team can’t simultaneously develop from scratch and support the new open source software project AND outcompete a multi-billion-dollar business that decided to offer your service as a below-cost add-on to their offering, used by millions of people, on day one? Tough luck, greedy bastard, you should have stayed in your cubicle.


So why would anyone start businesses or continue doing business in such a country?

You’re literally just describing an end to private property, where a privileged government representative can take anything you have. The “government job” will become so lucrative that the position would be passed down within families, father to son. It is already known how these economic systems function, I think.


I am describing a system where the government can take anything you have over a certain amount. (Or more precisely perhaps can take a proportion of what you have that asymptotically approaches 100% as your total wealth increases.) In my conception this money would then immediately be redistributed (as direct cash payments) to people with less. Government employees doing as you describe would also be subject to severe penalties. The purpose is to entirely eliminate massive wealth concentration.

As for why anyone would start a business: there's no disincentive to start one in this scheme. I'd say the current system has greater obstacles to starting a business in many cases, due to high barriers to entry and regulatory capture by large players. The purpose of policies like the ones I describe is to encourage people to start small businesses and keep them small. You can grow your business up until its value is around that taxation threshold and then just kick back. We don't want people taking big businesses and making them bigger.


I think the major problem with your described system is how you quantify wealth. For example, you start a startup, take almost no salary, but raise a 20M investment at a 100M valuation. With your proposed method of calculation, the government already wants you to pay tax on your shares of a 100M enterprise, whereas you may not see a dollar of profit for another 10-20 years (or ever, if the startup fails). It's very difficult to quantify wealth, especially taking into account that a lot of it is risk-bound and long-term.

One interesting aspect of trying to quantify wealth and tax it is that it gives an enormous advantage to holders of wealth that is difficult to quantify. For example, a political following is wealth that you can't tax, but it can be turned into profit very easily and in many sneaky ways. Power in general (the power to collect taxes, to control law enforcement and the army, or to command people with guns) is also wealth that isn't quantifiable in monetary terms. So in this system powerful people will become much more powerful, because they will start accumulating all other forms of wealth, and they will be very difficult to restrict: why would they use their power to restrict themselves? They would use it to remove any restrictions as their highest priority.

So instead of the current system (people willing to invent new things and work overtime for years to bring value to millions of people for a chance at outsized returns, and sometimes earning them), you get a system where the political class seizes all power, removes all checks and balances, redistributes wealth production to itself, and unleashes violence to rule forever. It has been tried many times.

> Government employees doing as you describe would also be subject to severe penalties

This only works in capitalistic open societies where wealth doesn't concentrate with government employees.

> The purpose of policies like the ones I describe is to encourage people to start small businesses and keep them small

Not all businesses can be small. How can a small business construct an airplane? Organize a nationwide or international postal delivery service? Build millions of cars with spare parts available for decades? Make food, clothing, and shelter for millions? These things require economies of scale to be affordable. And yes, government-managed big businesses have also been tried; they tend to be very unproductive and produce expensive, low-quality items (with a tendency to decline significantly over the years).


The short answer to your startup example is that the number of businesses that take a $100M investment plus 10-20 years to realize a profit should be much, much smaller than it is now. It should be near zero. The fact that we currently have venture capital being thrown at stuff like this willy-nilly is part of the problem. Businesses should become successful before they become big.

> So instead of the current system (people willing to invent new things and work overtime for years to bring value to millions of people for a chance of outsized returns — and sometimes earning them) you get a system where political class seizes all power, removes all checks and balances, redistributes wealth production to themselves, and unleashes violence to rule forever.

I have some thoughts in response to some of your other points, but I think the fundamental disagreement here is that what you describe as "the system you get" is what I call the system we have, except that the powerful class in question is a sort of hybrid political/economic oligarch class.

The other way I would think about this is that what you call "the government" I would call "the public". We need radical transparency in all government action so that any kind of shenanigans such as you describe cannot occur, and we need to reflexively insist on this transparency regardless of whether we suspect any shenanigans in a particular case.

> Not all businesses can be small. How can a small business construct an airplane?

This is the best counterargument, and indeed airplanes are the example I came up with as well when I formulated this counterargument to myself. However, I wouldn't describe this as "requiring economies of scale". It's just a matter of some products inherently being more complex (e.g., an airplane is more complex than a wooden spoon).

I think we should view economies of scale very critically. People say that economies of scale are "necessary" to keep things "affordable" for consumers. But in practice large economies of scale tend towards monopolism that in fact makes consumers more vulnerable to gouging. Economies of scale primarily benefit the producers that have them, and only indirectly and uncertainly benefit anyone else.

That said, if the goal is wealth diffusion, companies can become bigger the more diffuse their ownership. So, say, a worker-owned aerospace company could grow larger than one controlled by a small group of shareholders.

Finally, people talk a lot about the theoretical benefits of "innovation", but in my view innovation is also something to view skeptically. Perhaps in a world where there were a lot of small startups building airplanes or better mousetraps and competing genuinely on quality and price, we could think about relaxing some of the strictures I've mentioned. But that's not the world we live in. Much of what passes for "innovation" today is simply gaming the system, hoodwinking customers, and dodging consequences for harmful actions. I believe that this is connected to the fact that so many "innovative" companies are the type you mentioned above, essentially a venture capital gamble on some kind of high-concept startup, with a desired outcome of many total flops and a few gigantic runaway "unicorn" jackpots. That isn't healthy innovation and we should not only not encourage it but should actively prevent it. We want steady, incremental, monitored innovation, not a boom and bust cycle based on who can make the best sales pitch to their favorite billionaire. It is okay to never have another Facebook, another OpenAI, etc.


They should just start compiling Tor with Fil-C: free memory safety, and no new bugs from a full code rewrite.


This move started before Fil-C existed.


Also “Rewrite it in Rust”.

P.S. it’s a joke, guys, but you have to admit it’s at least partially what’s happening


No, it has nothing to do with Rust.


But it might have something to do with the "rewrite" part:

> The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive.

> Back to that two page function. Yes, I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.

> Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.

> When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.

From https://www.joelonsoftware.com/2000/04/06/things-you-should-...


A lot of words for a 'might'. We don't know what caused the downtime.


Not this time; but the rewrite was certainly implicated in the previous one. They actually had two versions deployed; in response to an unexpected configuration file size, the old version degraded gracefully, while the new version failed catastrophically.


Both versions were caught off guard by the defective configuration they fetched; it was not a case of a previously found and fixed bug reappearing, as in the blog post you quoted.


The first one had something to do with Rust :-)


Not really. In C or C++ that could have just been a segfault.

.unwrap() literally means “I’m not going to handle the error branch of this result, please crash”.


Indeed, but fortunately there are more languages in the world than Rust and C++. A language that performed decently well and used exceptions systematically (Java, Kotlin, C#) would probably have recovered from a bad data file load.


There is nothing that prevents you from recovering from a bad data file load in Rust. The programmer who wrote that code chose to crash.
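
For illustration only, a minimal sketch in Rust of handling that error instead of panicking (the file name and variables are hypothetical, not Cloudflare's actual code): match on the Result, log the failure, and keep serving with the previously loaded data.

    use std::fs;

    fn main() {
        // Hypothetical sketch: keep the old data if the new file fails to load,
        // instead of calling .unwrap() and crashing the process.
        let mut data = String::from("old data");
        match fs::read_to_string("features.json") {
            Ok(new_data) => data = new_data,
            Err(e) => eprintln!("failed to load new data file, keeping old data: {e}"),
        }
        println!("serving with: {data}");
    }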


That's exactly my point. There should be no such thing as choosing to crash if you want reliable software. Choosing to crash is idiomatic in Rust but not in managed languages in which exceptions are the standard way to handle errors.


I am not a C# guy, but I wrote a lot of Java back in the day, and I can authoritatively tell you that it has so-called "checked exceptions" that the compiler forces you to handle. However, it also has "runtime exceptions" that you are not forced to handle, and they can happen anywhere and at any time. Conceptually, it is the same as error versus panic in Rust. One such runtime exception is the notorious `java.lang.NullPointerException`, a.k.a. the billion-dollar mistake. So even software in "managed" languages can and does crash, and it is way more likely to do so than software written in Rust, because "managed" languages do not have all the safety features Rust has.


In practice, programs written in managed languages don't crash in the sense of aborting the entire process. Exceptions are usually caught at the top level (both checked and unchecked) and then logged, aborting only the current unit of work.

For trapping a bad data load it's as simple as:

    try {
        data = loadDataFile();
    } catch (Exception e) {
        LOG.error("Failed to load new data file; continuing with old data", e);        
    }
This kind of code is common in such codebases and it will catch almost any kind of error (except out of memory errors).


Here is the Java equivalent of what happened in that Cloudflare Rust code:

  try {
    data = loadDataFile();
  } catch (Exception e) {
    LOG.error("Failed to load new data file", e);
    System.exit(1);
  }
So the "bad data load" was trapped, but the programmer decided that either it would never actually occur, or that it is unrecoverable, so it is fine to .unwrap(). It would not be any less idiomatic if, instead of crashing, the programmer decided to implement some kind of recovery mechanism. It is that programmer's fault, and has nothing to do with Rust.

Also, if you use general try-catch blocks like that, you don't know whether a given try-catch block actually needs to be there. Maybe it was needed in the past, but something changed and it is no longer needed; it will stay there anyway, because there is no way to know unless you specifically look. You also don't even know the exact error types. In Rust, the error type is known in advance.
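
As a sketch of that last point (the error enum and the loader are hypothetical, not the Cloudflare code): the concrete error type appears in the function's signature, so the caller knows exactly which failures it has to handle and can match on them exhaustively.

    // Hypothetical error type for a config loader; not from any real codebase.
    #[derive(Debug)]
    enum LoadError {
        TooManyEntries(usize),
        Malformed(String),
    }

    // The error type is visible in the signature, unlike `catch (Exception e)`.
    fn load_config(raw: &str) -> Result<Vec<String>, LoadError> {
        if raw.contains('\0') {
            return Err(LoadError::Malformed("unexpected NUL byte".to_owned()));
        }
        let entries: Vec<String> = raw.lines().map(|line| line.to_string()).collect();
        if entries.len() > 200 {
            return Err(LoadError::TooManyEntries(entries.len()));
        }
        Ok(entries)
    }

    fn main() {
        // The match must cover every variant, so no failure mode is silently ignored.
        match load_config("feature_a\nfeature_b") {
            Ok(entries) => println!("loaded {} entries", entries.len()),
            Err(LoadError::TooManyEntries(n)) => eprintln!("config rejected: {n} entries is too many"),
            Err(LoadError::Malformed(msg)) => eprintln!("config rejected: {msg}"),
        }
    }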


Yes, I know. But nobody writes code like that in Java. I don't think I've ever seen it outside of top level code in CLI tools. Never in servers.

> It is that programmer's fault, and has nothing to do with Rust.

It's Rust's fault. It provides a function in its standard library that's widely used and which aborts the process. There's nothing like that in the stdlibs of Java or .NET

> Also, if you use general try-catch blocks like that, you don't know if that try-catch block actually needs to be there.

I'm not getting the feeling you've worked on many large codebases in managed languages, to be honest. I know you said you did, but these patterns and problems you're raising just aren't problems such codebases have. Top-level exception handlers are meant to be general; they aren't supposed to be specific to certain kinds of error, they're meant to recover from unpredictable or unknown errors in a general way (e.g. return a 500).


> It's Rust's fault. It provides a function in its standard library that's widely used and which aborts the process. There's nothing like that in the stdlibs of Java or .NET

It is the same as runtime exceptions in Java. In Rust, if you want to have a top-level "exception handler" that catches everything, you can do

  ::std::panic::catch_unwind(|| {
    // ...
  })
In the case of Cloudflare, the programmer simply chose not to handle the error. It would have been the same if the code had been written in Java: there simply would be no top-level try-catch block.


Look at how much additional boilerplate it took in your example to ignore the error.

In the Rust case you just don’t call unwrap() if you want to swallow errors like that.

It’s also false that catching all exceptions is how you end up with reliable software. In highly available architectures (e.g. many containers managed by Kubernetes), if you end up in a state where you can’t complete work at all, it’s better to exit the process immediately so you’re quickly removed from load-balancing groups, etc.

General top-level exception handlers are a huge code smell, because catching exceptions you (by definition) didn’t expect is a great way to end up with corrupted data.


The error wasn't ignored; it was logged (and it's an example on a web forum; in reality you'd at least increment a metric too and do other things).

> General top-level exception handlers are a huge code smell

And yet millions of programs have such things, and they work fine. My experience has been that they tend to be more reliable than other programs. E.g. IntelliJ hardly ever tears down the entire process when something goes wrong; it fails gracefully and reports back to HQ, whereas other IDEs I've used hard-crash quite regularly. Much more disruptive.


When dotnet has an unhandled exception, it terminates with abort.


unwrap is NOT idiomatic in Rust


Did you consider rewriting your joke in Rust?


it's never the technology, it's the implementation


cc: @oncall then trigger pagerduty :)


I also love ClickHouse's approach with LowCardinality(String). Flexible, clear semantics, high performance.

