Ah… I really could not disagree more with that statement. I know we don’t want to trust BigCorp and whatnot, but a single exposed port and an incomplete understanding of what you’re doing is really all it takes to be compromised.
Same applies to Tailscale. A vulnerability in the Tailscale client or coordination plane, or an incomplete understanding of their trust model, is also all it takes. You are adding attack surface, not removing it.
If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.
If you are exposing a handful of hardened services on infrastructure you control, Tailscale adds complexity for no gain. If you are connecting machines across networks you do not control, or want zero-config access to internal services, then I can see its appeal.
Even if you understand what you are doing, you are still exposed to every single security bug in all of the services you host. Most of these self-hosted tools have not been through 1% of the security testing big tech services have.
Now you are exposed to every security bug in Tailscale's client, DERP relays, and coordination plane, plus you have added a trust dependency on infrastructure you do not control. The attack surface did not shrink, it shifted.
This felt like it didn’t do your aim justice: “$X and an incomplete understanding of what you’re doing is all it takes to be compromised” applies to many values of $X, including Tailscale.
Yes, it did. We have literally millions of times as much memory as in 1970 but far fewer than millions of times as many good library developers, so this is probably the right tradeoff.
C++ already killed it: templated code is only instantiated where it is used, so with C++ what ends up in the separate shared library versus in the application using it is effectively a random mix. This makes ABI compatibility incredibly fragile in practice.
And increasingly, many C++ libraries are header-only, meaning they are always statically linked.
Haskell (or GHC at least) is also in a similar situation to Rust as I understand it: no stable ABI. (But I'm not an expert in Haskell, so I could be wrong.)
It's not just about memory. I'd like to have a stable Rust ABI to make safe plugin systems. Large binaries could also be broken down into dynamic libraries to make rebuilds much faster, at the cost of leaving some optimizations on the table. This could be done today with a semi-stable versioned ABI. New app builds would be able to load older libraries.
The main problem with dynamic libraries is when they're shared at the system level. That we can do away with. But they're still very useful at the app level.
> I'd like to have a stable Rust ABI to make safe plugin systems
A stable ABI would allow making more robust Rust-Rust plugin systems, but I wouldn't consider that "safe"; dynamic linking is just fundamentally unsafe.
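To make the point concrete, here is a minimal sketch of what loading a Rust plugin looks like today with the libloading crate; the library path and symbol name are invented for illustration. Even with a perfectly stable ABI, nothing here is checked by the compiler, which is why the unsafe blocks are unavoidable:

```rust
use libloading::{Library, Symbol};

fn main() {
    // Loading and calling into a dynamic library is unsafe by construction:
    // the compiler cannot verify that "plugin_entry" exists, has this exact
    // signature, or was built against compatible type layouts.
    unsafe {
        let lib = Library::new("./plugin.so").expect("failed to load plugin");
        let entry: Symbol<unsafe extern "C" fn() -> u32> =
            lib.get(b"plugin_entry").expect("symbol not found");
        println!("plugin returned {}", entry());
    }
}
```

A stable ABI would remove one class of mismatch (layout/calling convention drift across compiler versions), but the trust boundary at the symbol lookup stays.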
> Large binaries could also be broken down into dynamic libraries and make rebuilds much faster at the cost of leaving some optimizations on the table.
This can already be done within a single project by using the dylib crate type.
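For anyone who hasn't tried it, the setup is just a crate-type change in the library crate's Cargo.toml (crate layout here is illustrative):

```toml
# Cargo.toml of an internal library crate: build it as a Rust dynamic
# library instead of the default rlib. Note the resulting ABI is only
# compatible within a single compiler version, so this splits up one
# project's build; it is not a way to ship a stable library.
[lib]
crate-type = ["dylib"]
```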
You could check that mangled symbols match, and have static tables with hashes of structs/enums to make sure layouts match; a sketch of that handshake is below. That should cover the low-level ABI (though you would still have to trust the compiler that generated the mangling and tables).
A significantly thornier issue is making sure any types with generics match: e.g. if I declare a struct with some generic and some concrete functions, and this struct also has private fields/methods, those private details (currently irrelevant for semver) would affect ABI stability. And the tables mentioned in the previous paragraph might not be enough to ensure compatibility: a behaviour change could break how the data is interpreted.
So at minimum this would redefine what counts as a semver-compatible change to be much more restricted, and it would be harder to have automated checks (like cargo-semver-checks performs). As a Rust developer I would not want this.
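Here is the handshake I had in mind, purely as a sketch: abi_fingerprint and the hash value are invented names, and real tooling would have to derive the hash from the actual mangled symbols and type layouts at build time.

```rust
use libloading::{Library, Symbol};

// Plugin side (in the plugin crate): export a fingerprint that hypothetical
// build tooling derived from the mangled symbol set and the layout hashes
// of all public structs/enums.
#[no_mangle]
pub extern "C" fn abi_fingerprint() -> u64 {
    0x1234_5678_9abc_def0 // placeholder; would be generated at build time
}

// Host side: resolve and compare the fingerprint before trusting any other
// symbol in the loaded library.
unsafe fn check_abi(lib: &Library, expected: u64) -> bool {
    let fp: Symbol<unsafe extern "C" fn() -> u64> =
        match lib.get(b"abi_fingerprint") {
            Ok(sym) => sym,
            Err(_) => return false, // plugin predates the scheme: reject it
        };
    fp() == expected
}
```

Note this only rejects known-mismatched layouts; per the previous paragraph, it cannot catch a behaviour change that reinterprets the same bytes differently.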
How much evidence do we actually have that AI wasn't used for these "real props"?
(Personally I don't care about my ability to tell the difference between what's AI and what's not; I care about my ability to tell the difference between well-crafted and not, and that seems to be functioning fine)
IPv6 is already here if you're not in the US. I moved house last month and consumer ISPs don't offer a (real) IPv4 connection in my country any more; you get an IPv6 connection and your router does MAP-E if you want to send data over IPv4.
I want to echo this comment. I am on MAP-E in Asia and it is very difficult to get a dedicated IPv4 address without paying extra money.
And I want to connect to my machines without some stupid VPN or crappy cloud reverse tunneling service. Not everyone in the world wants to subscribe to some stupid SaaS service just to get functionality that comes by default with IPv6.
I think Silicon Valley is in a thought bubble and for people there IPv4 is plentiful and cheap. So good for them. However, the more these SaaS services delay IPv6 support, the more I pray to any deity out there that I can move off these services permanently.
> The current adoption woes are exactly because IPv6 is so different from IPv4. Everyone who tries it out learns the hard way that most of what they know from IPv4 doesn't apply.
In my experience the differences are just an excuse, and however similar you made the protocol to IPv4 the people who wanted an excuse would still manage to find one. Deploying IPv6 is really not hard, you just have to actually try.
> - I don't have a shortage of IPv4. Maybe my ISP or my VPN host do, I don't know. I have a roomy 10.0.0.0/8 to work with.
That's great until you need to connect to a work/client VPN that decided to also use 10.0.0.0/8.
> - Every host routable from anywhere on the Internet? No thanks. Maybe I've been irreparably corrupted by being behind NAT for too long but I like the idea of a gateway between my well kept garden and the jungle and my network topology being hidden.
Even on IPv4, having normal addresses for all your computers makes life so much nicer. Perhaps-trivial example, but one that matters to me: if two people live in one house and a third person lives in a different house, can they all play a network game together? IPv4 sucks at this.
Landed on 172.16/22 for this reason; however, it's not uncommon for an enterprise to use all 3 private ranges. One place I worked used 192.168 for management, 10 for servers, and 172 for wifi.
Using 2 different ranges has been a pretty common setup for wired and wireless in my experience.
Sure, but opening up one port is a much smaller surface than exposing yourself to a whole cloud hosting company.