The US government entered a debt spiral a while ago (https://fred.stlouisfed.org/series/A091RC1Q027SBEA), and needs lower interest rates to service its tremendous debt while trying to inflate it away by printing money. That's all it comes down to. For decades fiscal conservatives were warning about it and were laughed at. Now that we're at the end game, the inevitable shitshow will become apparent. You can hate Trump (rightly so), but as much as he contributed to the problem directly, the problem is larger and systemic, and anyone else in office would face the exact same problem now.
Exactly, interest rates must come down because of the government debt burden, which creates a strong incentive to force rates to zero, but we have to pretend the Federal Reserve is independent.
Separately, I think Jerome Powell is one of the worst Fed chairs, as he is most (but not exclusively) responsible for what happened to the housing market, by creating a lock-in effect and focusing on the Fed's CPI basket.
Might be obvious, but there are definitely a lot of biases in the data here. It's unavoidable. E.g. many bugs will not be detected, but they will be removed when the code is rewritten, so code that is refactored more often will show a lower age for fixed bugs. Components/subsystems that are heavily used will have their bugs detected faster. Some subsystems by their very nature can tolerate bugs, while some by necessity need to be more correct (like bpf).
> We have an in-house, Rust-based proxy server. Claude is unable to contribute to it meaningfully outside
I have a great time using Claude Code in Rust projects, so I know it's not about the language per se.
My working model is that since LLMs are basically inference/correlation based, the more you deviate from the mainstream corpus of training data, the more confused the LLM gets, because the LLM doesn't "understand" anything. But if it was trained on a lot of things kind of like your problem, it can match the patterns just fine, and it can generalize over a lot of layers, including programming languages.
Also I've noticed that it can get confused by stupid stuff. E.g. I had two different things named almost the same in two parts of the codebase, and it would constantly stumble by conflating them. Changing one of the names immediately improved it.
So yeah, we've got another potentially powerful tool that requires understanding how it works under the hood to be useful. Kind of like git.
Recently the v8 Rust library changed from mutable handle scopes to pinned scopes. A fairly simple change that I even put in my CLAUDE.md file. But it still generates methods with HandleScopes, then says... oh, I have a different scope, and goes on a random walk refactoring completely unrelated parts of the code. All the while Opus 4.5 burns through tokens. Things work great as long as you are testing on the training set. But that said, it is absolutely brilliant with React and TypeScript.
Well, it's not like it never happened to me to "burn tokens" on some lifetime issue. :D But yeah, if you're working in Rust on something with sharp edges, the LLM will get hurt. I just don't tend to have these in my projects.
An even more basic failure mode: I told it to copy a bit (1k LOC) of blocking code into a new module and convert it to async. It just couldn't do a proper 1:1 logical _copy_. But when I manually ran `cp <src> <dst>` and then told it to convert that file to async and fix issues, it did it 100% correctly. Because fundamentally it's just a non-deterministic pattern generator.
hot take (that shouldn't be?): if your code is super easy for a human to follow, it will be super easy for an LLM to follow. (hint: guess where the training data comes from!)
It's not about memory/CPU/IO, but latency vs throughput. Most software is slow because it ignores latency. If you program serially, waiting for _whatever_, it is going to be slow. If you scatter your data around memory, or read from disk in small chunks, or make tons of tiny queries to the DB serially, your software will spend 99.9% of its time idle, waiting for something to finish. That's it. If you can organize your data linearly in memory, and/or work on batches of it at a time, and/or parallelize stuff, and/or batch your IO, it is going to be fast.
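A minimal sketch of the IO-batching point in Rust (the file name `data.bin` is hypothetical; any large local file works): both loops do identical work, but the first pays a syscall per byte while the second lets BufReader pull 8 KiB at a time.

```rust
use std::fs::File;
use std::io::{BufReader, Read};
use std::time::Instant;

fn main() -> std::io::Result<()> {
    // Hypothetical input; substitute any large file.
    let path = "data.bin";
    let mut byte = [0u8; 1];

    // Unbatched: every read() is a syscall, so we mostly wait.
    let mut file = File::open(path)?;
    let start = Instant::now();
    let mut n = 0u64;
    while file.read(&mut byte)? == 1 {
        n += byte[0] as u64;
    }
    println!("unbuffered: sum={} in {:?}", n, start.elapsed());

    // Batched: BufReader fetches 8 KiB chunks; same logic, far fewer syscalls.
    let mut file = BufReader::new(File::open(path)?);
    let start = Instant::now();
    let mut n = 0u64;
    while file.read(&mut byte)? == 1 {
        n += byte[0] as u64;
    }
    println!("buffered:   sum={} in {:?}", n, start.elapsed());
    Ok(())
}
```

Same principle scales up: one DB query fetching a batch of rows instead of a query per row, one big write instead of many small ones.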
Totalitarians on one side convince their side that it's totally fine and desirable to ignore the law and let millions of illegal immigrants in. Then the totalitarians on the other side convince their side that it is necessary to ignore the law and introduce sweeping surveillance to undo it. Congratulations, both sides cooperated while hating each other, because they are dummies who are easy to play.
Yeah, even alert/warn/info would be an improvement.
I hate the concept of “errors” in general. They’re an excuse to avoid responsibility and ship broken software with known undefined behavior.
The very notion of an error basically means “there was behavior I knew would happen but chose not to handle or do anything about”, which is essentially just negligence.
Ring buffer is a relatively obvious and simple idea. For some reason the Java-OOP crowd keeps thinking that LMAX deserves a Nobel Prize for being neither the first nor the last to use it.
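To the point about simplicity, a usable single-threaded ring buffer is a few dozen lines. A minimal sketch in Rust (nothing LMAX-specific here; the power-of-two capacity is just a common trick that makes the index wrap a cheap bitmask instead of a modulo):

```rust
// Fixed-capacity ring buffer; head/tail are monotonically increasing
// counters, masked down to a slot index on access.
struct RingBuffer<T> {
    buf: Vec<Option<T>>,
    head: usize, // next slot to pop
    tail: usize, // next slot to push
}

impl<T> RingBuffer<T> {
    fn with_capacity(cap: usize) -> Self {
        assert!(cap.is_power_of_two());
        Self { buf: (0..cap).map(|_| None).collect(), head: 0, tail: 0 }
    }

    fn push(&mut self, item: T) -> Result<(), T> {
        if self.tail - self.head == self.buf.len() {
            return Err(item); // full: hand the item back
        }
        let idx = self.tail & (self.buf.len() - 1);
        self.buf[idx] = Some(item);
        self.tail += 1;
        Ok(())
    }

    fn pop(&mut self) -> Option<T> {
        if self.head == self.tail {
            return None; // empty
        }
        let idx = self.head & (self.buf.len() - 1);
        self.head += 1;
        self.buf[idx].take()
    }
}

fn main() {
    let mut rb = RingBuffer::with_capacity(4);
    for i in 0..4 {
        rb.push(i).unwrap();
    }
    assert!(rb.push(99).is_err()); // full
    assert_eq!(rb.pop(), Some(0)); // FIFO order
}
```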
> have to go out of my way
Yeah, that's exactly the annoying part. Can't ever mention a ring buffer without someone bringing up LMAX. "But did you know that some Java developers somewhere once wrote something that didn't completely ruin the hardware's performance?! It's stunning and amazing."
IMO the take-away from LMAX is not ring buffers - it's the knowledge of how much useful work a single CPU core can do. It's a story of playing to the hardware's strengths instead of wrapping yourself up in bullshit excuses. They realized their problem was fundamentally not parallelizable, so they wrote it to run serially as fast as possible, and the resulting performance was much faster than anyone would have guessed if they hadn't done it.
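The shape of that design, sketched loosely in Rust (LMAX used a lock-free ring buffer, the Disruptor, rather than a channel, so this only illustrates the serial-consumer idea, not their implementation): many producers hand events to one thread that owns all the mutable state, so the business logic runs lock-free and the hot state stays in one core's cache.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<u64>();

    // Several producers; they do no state mutation themselves.
    let producers: Vec<_> = (0..4)
        .map(|p| {
            let tx = tx.clone();
            thread::spawn(move || {
                for i in 0..1_000_000u64 {
                    tx.send(p * 1_000_000 + i).unwrap();
                }
            })
        })
        .collect();
    drop(tx); // the iterator below ends once all producer clones are gone

    // Single consumer: ALL business logic runs serially, on one core,
    // with no locks around the state it owns.
    let mut total = 0u64;
    for event in rx {
        total = total.wrapping_add(event);
    }

    for h in producers {
        h.join().unwrap();
    }
    println!("processed total = {total}");
}
```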
I am not sure why this is annoying. Some problems can be solved comprehensively. This is a pretty good example of one. It might be better for us to focus our attention on other aspects of the problem space (the customer/business).