It really makes me upset that we are throwing away decades of battle-tested code just because some people are excited about the language du jour. Between the systemd folks and the Rust folks, it may be time for me to move to *BSD instead of Linux. Unfortunately, I'm very tied to Docker.
That “battle-tested code” is often still an ongoing source of bugs. Maintainers have to deal with the burden of working in a 20+ year-old code base with design and architecture choices that probably weren’t even a great idea back then.
Very few people are forcing “rewrite it in Rust” down anyone’s throat. Sometimes it’s the maintainers themselves who are trying to be forward-thinking and undertake a rewrite (e.g., fish shell); sometimes people port existing projects just to scratch an itch, and it’s others’ decision to start shipping the result (e.g., coreutils). I genuinely fail to see the problem with either approach.
C’s long reign is coming to an end. Some projects and tools are going to want to be ahead of the curve, some are going to be behind it. There is no perfect rate at which this happens, but “it’s battle-tested” is not a reason to keep a project in C indefinitely. If you don’t think {pet project you care about} should still be in C in 50 years, there will be a moment when people rewrite it. It will be immature and not as feature-complete right out of the gate. There will be new bugs. Maybe it happens today, maybe it’s 40 years from now. But the “it’s battle-tested, what’s the rush” argument can and will be used reflexively against both of those timelines.
As long as LLVM (C++, but still) is not rewritten in Rust [0], I don't buy it. C is like JavaScript: it's not perfect, but it's everywhere, and you cannot replace it without a lot of effort and bugfix/regression testing.
Take SQLite for example (25 years old [3]): there are already two rewrites in Rust, [1] and [2], and each one has its own bugs.
And as an end user I'm more inclined to trust the battle-tested original for my prod than its copies. As long as I don't have proof that the rewrite is at least as good as the original, I'll stay with the original. Simpler means more maintainable. That's also why the SQLite maintainers won't rewrite it in any other language [4].
The trade-off of Rust is "you can lose features and have unexpected bugs like any other language, but don't worry, they will be memory-safe bugs".
I'm not saying Rust is bad and you should never rewrite anything in it, but IMHO Rust programmers tend to overestimate the quality of the features they deliver [5], or something along those lines.
systemd has been the de facto standard for over a decade now and is very stable. I have found that even most people who complained about the initial transition are very welcoming of its benefits now.
Depends a bit on how you define systemd. Just found out that the systemd developers don't understand DNS (or IPv6). Interesting problems result from that.
> Just found out that the systemd developers don't understand DNS (or IPv6).
Just according to Github, systemd has over 2,300 contributors. Which ones are you referring to?
And more to the point, what is this supposed to mean? Did you encounter a bug or something? DNS on Linux is sort of famously a tire fire, see for example https://tailscale.com/blog/sisyphean-dns-client-linux ... IPv6 networking is also famously difficult on Linux, with many users still refusing to even leave it enabled, frustratingly for those of us who care about IPv6.
Systemd-resolved invents DNS records (not really something you want to see; it makes debugging DNS issues a nightmare). But worse, it populates those DNS records with IPv6 link-local addresses, which really have no place in DNS.
Then, after a nice long debugging session into why your application behaves so strangely (all the data in DNS is correct, so why doesn't it work?), you find that this issue has been reported before and was rejected as won't-fix, works-as-intended.
Hm, but systemd-resolved mainly doesn't provide DNS services, it provides _name resolution_. Names can be resolved using more sources than just DNS, some of which do support link-locals properly, so it's normal for getaddrinfo() or the other standard name resolution functions to return addresses that aren't in DNS.
i.e. it's not inventing DNS records, because the things returned by getaddrinfo() aren't (exclusively) DNS records.
The debug tool for this is `getent ahosts`. `dig` is certainly useful, but it makes direct DNS queries rather than going via the system's name resolution setup, so it can't tell you what your programs are seeing.
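Concretely, the comparison looks something like this (the hostname is made up):

```
getent ahosts somehost.lan     # resolves via nsswitch.conf, i.e. what getaddrinfo()-based programs see
dig +short somehost.lan AAAA   # raw DNS query to the configured server, bypassing NSS entirely
```

If the first command shows an fe80:: address that the second doesn't, that's the non-DNS sources at work rather than an invented DNS record.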
systemd-resolved responds on port 53. It inserts itself in /etc/resolv.conf as the DNS resolver that is to be used by DNS stub resolvers.
It can do whatever it likes as long as it follows the DNS RFCs when replying to DNS requests.
Redefining recursive DNS resolution as general 'name resolution' is indeed exactly the kind of horror I expect from the systemd project. If systemd-resolved wants to do general name resolution, then just take a different transport protocol (dbus for example) and leave DNS alone.
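(For reference, the stub setup in question: on a typical systemd-resolved machine, /etc/resolv.conf is a symlink to the stub file and contains roughly the following; exact contents vary by version and network configuration.)

```
# /etc/resolv.conf -> /run/systemd/resolve/stub-resolv.conf (typical symlink)
nameserver 127.0.0.53
options edns0 trust-ad
```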
It's not from systemd, though. glibc's NSS stuff has been around since... 1996?, and it had support for lookups over NIS in the same year, so getaddrinfo() (or rather gethostbyname(), since this predates getaddrinfo()!) has never been just DNS.
systemd-resolved normally does use a separate protocol, specifically an NSS plugin (see /etc/nsswitch.conf). The DNS server part is mostly only there as a fallback/compatibility hack for software that tries to implement its own name resolution by reading /etc/hosts and /etc/resolv.conf and doing DNS queries.
I suppose "the DNS compatibility hack should follow DNS RFCs" is a reasonable argument... but applications normally go via the NSS plugin anyway, not via that fallback, so it probably wouldn't have helped you much.
I'm not sure what you are talking about. Our software has a stub resolver that is not the one in glibc. It directly issues DNS requests without going through /etc/nsswitch.conf.
It would have been fine if it was getaddrinfo (and it was done properly), because getaddrinfo gives back socket addresses, and those can carry the scope ID for an IPv6 link-local address. In DNS there is no scope ID, so it will never work on Linux (it would work on Windows, but that's a different story).
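To illustrate (rough sketch, with a made-up hostname): Python's socket.getaddrinfo() goes through the same glibc machinery, and for IPv6 results the returned sockaddr tuple carries the scope ID that a bare AAAA record has no field for.

```python
# Rough sketch: getaddrinfo() can hand back the interface scope ID that an IPv6
# link-local address needs; a DNS AAAA record has no way to carry it.
# "somehost.lan" is a made-up name, not one from this thread.
import socket

for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
        "somehost.lan", 80, type=socket.SOCK_STREAM):
    if family == socket.AF_INET6:
        addr, port, flowinfo, scope_id = sockaddr
        # For fe80::/10 link-local addresses, scope_id identifies the interface;
        # connect() needs it, and it never appears in a DNS answer.
        print(addr, "scope_id =", scope_id)
```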
If you don't like those additional name resolution methods, then turn them off. Resolved gives you full control over that, usually on a per-interface basis.
If you don't like that systemd is broken, then you can turn it off. Yes, that's why people are avoiding systemd. Not so much that the software has bugs, but the attitude of the community.
It's not broken - it's a tradeoff. systemd-resolved is an optional component of systemd, not part of the core. If you don't like the choices it made, you can use another resolver - there are plenty.
I don't think many people are avoiding systemd now - but those who do tend to do so because it non-optionally replaces so much of the system. OP is pointing out that's not the case for systemd-resolved.
It's not a trade-off. Use of /etc/resolv.conf and port 53 is defined by historical use and by a large number of IETF RFCs.
When you violate those, it is broken.
That's why systemd has such a bad reputation. Systemd almost always breaks existing use in unexpected ways. And in the case of DNS, it is a clearly defined protocol, which systemd-resolved breaks. Which you claim is a 'tradeoff'.
When a project ships an optional component that is broken, it is still a broken component.
The sad thing about systemd (including systemd-resolved) is that it is the default on most Linux distributions. So if you write software, you are forced to deal with it, because quite a few users will have it without being aware of the issues.
Yes, violating historical precedent is part of the tradeoff - I see no contradiction. Are you able to identify the positive benefits offered by this approach? If not, we're not really "engineering" so to speak. Just picking favorites.
> The sad thing about systemd (including systemd-resolved) is that it is the default on most Linux distributions. So if you write software, you are forced to deal with it, because quite a few users will have it without being aware of the issues.
I'm well aware - my day job is writing networking software.
That's the main problem with systemd: replacing services that don't need replacing and doing a bad job of it. Its DNS resolver is particularly infamous for its problems.
Sure, those authors chose that license because they don't particularly care about the politics of licenses and picked the most common one in the Rust ecosystem, which is dual MIT/Apache-2.0.
If folks want more Rust projects under licenses they prefer, they should start those projects.
> If folks want more Rust projects under licenses they prefer, they should start those projects.
100% true, but it also hides a powerful fact: our choices aren't limited to doing it ourselves. Listening to others and discussing how to do things as a group is the essence of a community seeking long-term stability and fairness. It's how we got to the special place we are now.
Not everyone can or should start their own open source project. Maybe they're already running another one. Maybe they don't know how to code. The viewpoint of others/users/customers is valid and should not only be listened to but asked for.
I agree that throwing away battle-tested code is wasteful and often not required. Most people are not of the mindset of just throwing things away, but there is a drive to make things better. There are some absolute monoliths, such as the Linux kernel, that will likely never break free of their C shackles, and that's completely okay and acceptable to me.
It is basic knowledge that memory safety bugs are a significant source of vulnerabilities, and by now it is well established that the first developer who can write C without introducing memory safety bugs hasn't been born yet. In other words: if you care about security at all, continuing with the status quo isn't an option.
The C ecosystem has tried to solve the problem with a variety of additional tooling. This has helped a bit, but didn't solve the underlying problem. The C community has demonstrated that it is both unwilling and unable to evolve C into a memory-safe language. This means that writing additional C code is a Really Bad Idea.
Software has to be maintained. Decade-old battle-tested codebases aren't static: they will inevitably require changes, and making changes means writing additional code. This means that your battle-tested C codebase will inevitably see changes, which means it will inevitably see the introduction of new memory safety bugs.
Google's position is that we should simply stop writing new code in C: you avoid the high cost and real risk of a rewrite, and you also stop the never-ending flow of memory safety bugs. This approach works well for large and modular projects, but doing the same in coreutils is a completely different story.
Replacing battle-tested code with fresh code has genuine risks, there's no way around that. The real question is: are we willing to accept those short-term risks for long-term benefits?
And mind you, none of this is Rust-specific. If your application doesn't need the benefits of C, rewriting it in Python or Typescript or C# might make even more sense than rewriting it in Rust. The main argument isn't "Rust is good", but "C is terrible".
Lumen Field in Seattle just installed some Amazon Just Walk Out vendors this year. I'm happy to report you don't need to be logged into Amazon or have an app. I double-clicked my phone to swipe my Apple Pay before I walked in, grabbed a beer, and walked out.
The big issue I have with this experience is that you don't get a clear charge price before you leave. So you have to check a page either some minutes or hours later and hope that the total is correct. Like the article said, I don't love the idea of being charged for 3 overpriced bottles of water when I only took two. I'd rather just settle my transactions in the moment than try to remember what my total was and dispute things later from memory on the occasional times it's wrong.
> you don't get a clear charge price before you leave. So you have to check a page either some minutes or hours later and hope that the total is correct
Oh, I’m very much sure this is a feature. Because, you see, only some percentage of people will actually look at the receipt. Some fraction of them will notice the error. Some fraction of those people will actually be motivated to spend their time on the phone clawing back an extra $8 water. The complement of that small percentage is a lucrative chance to sell the same overpriced water more than once.
Amazon had used roughly 1,000 humans in India, according to some news reports, to help monitor accurate checkouts. The company told CNN it’s “reducing the number of human reviews” while developing the “Just Walk Out” technology. Amazon said besides data associates’ main role in working on the underlying technology, they also “validate a small minority” of shopping visits.
At the very least, this is how it should be done. Having to download and install an app, then log in, then connect payment info, etc... sounds like such a pain I wouldn't even bother.
attention is finite. land is finite. resources are finite. access to qualified doctors is finite. access to food is finite (something we'll realize at the next great famine). access to water is finite. your time living on earth is finite (and shorter the less money you have).
we operate at a scale where that matters nowadays.
I saw a comment in another thread that the AMA recognizes the problem of a deficit in new MDs. According to the comment, Congress provides funding for MD residents, and that is the real bottleneck.
This. You usually get one and only one chance with new people. People hate rejection and they're not going to keep asking.
So a pro tip: if you're starting a new job or something and want to integrate into the social circles, be prepared to drop everything for those first few invitations. The first one is basically mandatory.
I am sure Carmack himself encourages debates and discussions. Lionizing one person can't be expected of every employee (unless that person is also the founder or the company is tiny).
I was one month into my first full-time job when I (unaware of his rank) challenged the CTO in a technical discussion, in a public email exchange. Regardless of the outcome, I was treated like an equal. That one short exchange influenced not only the rest of my career but my entire worldview.
I mean, to some extent, sure. But you also need to respect expertise and experience. So much of what we do is subjective, and neither side is going to have hard data to support their arguments.
If it comes down to someone saying “I’ve been doing this for 30 years, I’ve shipped something very similar 5 times, and we ran into a problem with x each time”, then unless you have similar counter-experience, you should probably just listen.
What happens in tech is you get a very specific kind of junior who wants to have HN-comment arguments at work constantly and needs you to prove every single thing to them. I don’t know, man, it’s a style guide. There’s not going to be hard quantitative evidence to support why we said you shouldn’t reach for macros first.
Ugh. Can we as an industry stop blowing people up like this? It’s a clear sign that the community is filled with people with very little experience.
I remember this guy wanted $20 million to build AGI a year ago (did he get that money?), and people here thought he would go into isolation for a few weeks and come out with AGI because he made some games like that. It’s just embarrassing as a community.
Carmack's best work was between Keen and Quake, and it was mostly optimizations that pushed the limit of what PC graphics could do. He's always been too in-the-weeds to have a C-level title.
He is just a guy who can write game code well and has good PR skills online. I wouldn’t give him a cent if he promised anything in the AI field, no matter how much a bunch of online people gas him up.
He's a guy that knows a lot of math and how to turn that math into code. I don't know if he'd be able to come up with some brand new paradigm for AI but I'd want him on my team and I'd listen to what he has to say.
AI math is not game code math. There are plenty of actual experts in AI who know “how to turn math into code” with years of experience. I would not want this guy, his ego, his lack of social skills, his online fanbase, and his lack of experience in AI to be anywhere near my AI team.
I guess the general stuff is movies, Netflix shows, music, your last short weekend trip, and pretty much everyone has their own personal non work thing, usually attached to a club or group (hiking, photography, whatever).
I guess in that last category sports are commonplace, but it’s more “I’m training for a marathon next month” or “you should come bouldering sometime” rather than following professional sports on tv.
This sounds like it’s particular to your friend group rather than some coarse regional geography. If you toss a rock in Western Europe, you’ve got a better chance of hitting a football fan than someone who wants to go bouldering or train for a marathon.
>If you toss a rock in Western Europe, you’ve got a better chance of hitting a football fan than someone who wants to go bouldering or train for a marathon.
Yes and no. If you HAVE to pick a specific hobby, football gives you better odds than anything else; but it will still only work in a minority of cases, and assuming carries an implication.
A comparison I could make is starting a conversation in the US with 'did you watch Fox News yesterday?'. Out of all channels it's the most watched one, but you still have a good chance of asking a non-viewer, and then you get hit by the negative connotations.
Personal hobbies are a much better topic for various reasons (you don't assume, people will naturally be excited about discussing their own, etc.).
Yes, Henry Ford had Nazi sympathies. But VW was literally founded by the Nazis:
> "Volkswagen was established in 1937 by the German Labour Front (German: Deutsche Arbeitsfront) as part of the Strength Through Joy (German: Kraft durch Freude) program in Berlin" [0]
> "The German Labour Front (German: Deutsche Arbeitsfront, pronounced [ˌdɔʏtʃə ˈʔaʁbaɪtsfʁɔnt]; DAF) was the national labour organization of the Nazi Party, which replaced the various independent trade unions in Germany during the process of Gleichschaltung or Nazification." [1]
It's a pretty dumb piece of ordnance; a gravity-delivered GBU-57 is a physics-bound problem. The dimensions etc. are known, so you can give it the most optimistic assumptions, i.e. solid steel for max penetration, released at an altitude where it reaches max terminal velocity without grid fins deployed, and run that through the NDRC/Young penetration equations. There aren't any super-secret parameters for subterfuge like electronic warfare. Either way, there's public video of the GBU-57 in action (grid fins deployed to hit a traffic cone); defense nerds counted frames and did the napkin math, and it's more or less what's purported: a ~Mach 0.8-1.2 penetrator designed for ~60 m of concrete. IIRC the spherical-cow math for a heavier all-steel, no-grid-fin version (i.e. not accurate) maxes out around Mach 2, which doubles the energy and penetrates ~80 m.
On the other hand, Fordow's construction time is known... as far as I know, many years before FGCC / UHPC and other "advanced" concrete formulas the PRC developed against US penetrators. And Israel probably has the entire blueprint, so who knows. Edit: a quick lookup suggests the GBU-57 was revealed shortly after the guesstimated start of Fordow's construction, so it's possible the Fordow design was updated in anticipation; then again, the B-2 was a known entity, and Iran's engineers could probably guesstimate the maximum size/weight of penetrator the US could deliver on a B-2 before knowing the GBU-57 existed.
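For anyone who wants to redo the napkin math, here's a rough sketch using the commonly published Young/Sandia penetration formula. Every input is an illustrative assumption, not an official GBU-57 figure, and the penetrability index S swings the answer by an order of magnitude depending on the target material:

```python
# Napkin sketch of Young's penetration equation (form for impact velocity V >= 61 m/s):
#   D = 0.000018 * S * N * (m / A)**0.7 * (V - 30.5)
# D: depth (m), S: target penetrability index, N: nose performance coefficient,
# m: penetrator mass (kg), A: cross-sectional area (m^2), V: impact velocity (m/s).
# All numbers below are illustrative assumptions, not published GBU-57 specs.
import math

m = 13_600                   # assumed mass, kg (~30,000 lb class)
d = 0.8                      # assumed body diameter, m
A = math.pi * (d / 2) ** 2   # cross-sectional area, m^2
N = 1.0                      # assumed nose performance coefficient
V = 340.0                    # assumed impact velocity, m/s (~Mach 1 at sea level)

for label, S in (("hard rock / strong concrete", 0.9), ("ordinary soil", 10.0)):
    D = 0.000018 * S * N * (m / A) ** 0.7 * (V - 30.5)
    print(f"{label} (S={S}): ~{D:.0f} m")
```

With those made-up inputs you land at single-digit meters for hard rock/strong concrete and a few tens of meters for soil, which is roughly the range the frame-counting estimates above arrive at.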
What if it has some sort of a booster to increase its kinetic energy just before the hit?
Also, the behavior might improve in an area already weakened by a ventilation shaft or a previous hit (the first bomb turns 40 meters into fine gravel and detonates, weakening quite a large area; the second and third bombs easily go deeper).
I think (1) is unlikely: B-2 bays can't fit much more, and the GBU-57 is mostly metal, with no booster for penetration. (2) is what no one knows, but we (as in the public) also don't know the layout/construction, i.e. the actual depth and bunker design (you can embed sloped concrete/steel layers to deflect penetrators laterally so follow-up drops don't go straight down).
The real weapons-system specs are never disclosed. Even on retired systems, the real capabilities are often still classified because they can provide clues to their replacements' capabilities.
The math is really that hard? I have no idea what the soil or rock is, what happens when the first bomb hits it, the second, and then the third? Does the timing matter? Does the timing matter if it's 5 minutes between? 1 hour between? Seconds between? Does the type of soil or rock compact or loosen when bombed? What's the variation in explosive yield? Does the ground transfer force from a shockwave well or poorly? Does that change after the first one?
For it to be super-linear, an additional meter of concrete / earth / whatever must be easier to penetrate than the one before it, which I would classify as a physical impossibility. This is why linear is the ideal case.
Even if I were to accept the dubious premise that there is enough fractured rock to make a difference, that there is no hampering from rocks falling back into the void, and that it's possible to hit the exact same spot repeatedly without touching the sides, all that would do in big-O terms is increase the constant factor. It would not be super-linear after the second bomb.
If you are talking about bombs that hit side by side, then clearly that is sub-linear: no matter how fractured the rock, it's not easier to push through than air.
An explosion creates a pressure wave. A pressure wave fractures rock. Fractured rock may be easier to pierce than solid rock.
Ergo, if the first bunker buster penetrates to its maximum depth of -20 m and then explodes, fracturing rock within a __ radius, then the second bunker buster travels through that fractured rock, and the second (and so on) may be able to penetrate deeper.
I have no idea about the physics of penetrating fractured vs non-fractured rock, but it's a physically plausible mechanism.
Furthermore, given the multi-minute timeline reported, there's enough time for the bombs to be deployed sequentially.
In the linear case a bomb twice the size goes twice as deep.
Take a bomb, cut it in half, and drop each half separately, one after another, into the same hole: would you expect the cumulative depth to be greater than with the whole bomb, or less? Consider that the whole-bomb case is equivalent to the two halves arriving at the exact same time.
The strike may have been able to achieve greater penetration depth with multiple sequential weapons impacting the same point (i.e. the three seen in satellite imagery).
I'm confident that 'drilling' with multiple bombs was the known approach prior to the attack. The planned approach to Soviet bunkers was to use repeated accurate nuclear strikes to achieve a similar drilling effect on their bunkers.
There appears to be an assumption that the main facility was exposed to blasts from the tunnels, and since that seems like an obvious weakness, I'm wondering why the Iranians wouldn't have blast doors between the tunnels and the facility as a form of redundancy. I am still worried that this is part of an approach to slowly warm Americans up to another war: it's much easier to sell a limited strike as a success, and then 3-6 months later, when the Iranians have recovered, it'll be even easier to sell another strike or a more involved engagement.
There are physical limits to weight, hardness, max explosive energy and max kinetic energy and these are all known. The only way to exceed them would be to drop it from a higher altitude, like space, or give it a nuclear warhead. The US isn’t the only country that has tested bunker busters and the physics involved isn’t that hard. It’s just expensive.
Sure, but you have no firsthand knowledge of that information.
You are told the B2 can carry a certain payload weight.
You are told the B2 has a certain operational ceiling.
You are told the bombs are a certain weight.
You are told the bombs are made from a certain material.
You are told the bombs contain a certain type of explosive.
Everything you know about this device and its capabilities came from an organization that has every motivation to publish specs that are just enough to raise the eyebrows of the people this device is supposed to scare the hell out of, but less than zero motivation to publish specs that speak to its maximum capabilities.
So while your calculations might be accurate for the component values you fed them, those component values themselves are not necessarily accurate, because all you know is what you were told.
You can calculate these things based on wing size and airspeed, and neither is hard to figure out: it’s clearly subsonic, and it’s been seen in public.
While skunkworks are certainly a thing they’re not hiding some Star Trek antigravity device, physics is still physics and physical limits are physical limits. Look at the Otto Celera 500L if you want to see what attacking physical limits looks like. It’s an engineering problem and the fundamentals are well understood. The real magic is in creating the money to pay for it.
> You can calculate these things based on wing size and airspeed
If you can calculate the depth and damage those bombs did based on wing size and airspeed (which, technically, is another parameter you don’t really know but are relying on being told), you ought to be working for the government.
The US military isn't the only entity making airplanes and bunker busters. We don’t need to rely on their figures to know a great deal about what happened. You are assuming they have some order of magnitude hidden capacity which would break the laws of physics, and I’m very confident that they didn’t do that.
Gotcha. So your perspective is there are other entities making airplanes with the capabilities of the B-2 and a bunker buster bomb equivalent to the GBU-57 so much so that you can reliably determine capabilities of those weapon systems…as a layman with just a hand calculator?
That is a $2B aircraft and a $20M piece of ordnance (each). You want to tell us exactly what entity has anything even remotely equivalent? No one but the US could bear to afford it. Maybe China… but if they have it, it’s not common knowledge.
I think you have pretty much dug yourself a hole here on your knowledge and capabilities… you have landed in silliness now. (That pun was definitely intended.)
No amount of money enables an aircraft to violate the laws of physics. Clearly your knowledge of aircraft is limited; otherwise we would have a shared understanding of the physics involved and wouldn’t even be having this argument.
Who is arguing that? I’m not. The only argument I have made is that you do not have all the values you need to plug into your “calculator” to make a BDA.
But perhaps you can figure all of those values you need by just knowing the wingspan and airspeed of the aircraft delivering the payload, if so…I defer to you and this amazing deductive knowledge that you possess.