For POSIX: I leave Bash as the system shell and shim into Fish only for interactive terminals. This works surprisingly well, and any POSIX env initialisation is inherited. I very rarely need anything complicated enough at the interactive prompt to matter, and I can start a Bash subshell if needed.
Fish is nicer to script in by far, and you can keep Fish scripts isolated with shebang lines while still running Bash scripts (again, with a proper shebang line). The only tricky part is `source` and its equivalents, but I don’t think I’ve ever needed that in my main shell rather than in a throwaway subshell.
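For concreteness, the shim itself is tiny. This is just a sketch of the idea (the exact guards are my own choices; the parent-process check stops `bash` launched from inside fish from bouncing straight back):

    # ~/.bashrc — hand interactive sessions over to fish, keep bash everywhere else
    if [[ $- == *i* ]] && command -v fish >/dev/null 2>&1 \
        && [[ $(ps -o comm= -p $PPID) != *fish* ]]; then
        exec fish
    fi

Non-interactive invocations (scripts, scp, cron) never hit the exec, so all the POSIX init still applies everywhere.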
I often write multi-line commands in my zsh shell, like while-loops. The nice thing is that I can readily put them in a script if needed.
I guess that somewhat breaks with fish: either you use bash -c '...' from the start, or you adopt the fish syntax, which means you need to convert again when you switch to a (bash) script.
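To make the conversion concrete, here is the same throwaway loop in each (file name made up):

    # bash/zsh: can be pasted into a script unchanged
    while read -r host; do ping -c 1 "$host"; done < hosts.txt

    # fish: fine interactively, but needs rewriting before it can live in a bash script
    while read -l host
        ping -c 1 $host
    end < hosts.txt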
I guess my workflow for this is more fragmented. Either I’m prototyping a script (and I edit and test it directly) or I just need a throwaway loop (in which case fish is nicer).
I also don’t trust myself not to screw up anything more complex than running a single command in Bash without the guard rails of something like shellcheck!
I used to do it this way, but having to mentally switch from one to the other became too much of a hassle. Since I realized I only had basic needs, zsh with incremental history search and the like was good enough.
I don't care for mile-long prompts displaying everything under the sun, so zsh is plenty fast.
What do you mean by “fixing this” or it being a design flaw?
I agree with the point about sequential allocation, but that can also be solved by something like a linter. How do you achieve compatibility with old clients without allowing something similar to reserved field numbers to deal with version skew ambiguity?
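(For context, “reserved” here is proto’s mechanism for retiring numbers and names so they can’t be reused and misread by older binaries; a small made-up example:)

    syntax = "proto3";

    message Job {
      reserved 4, 7;            // field numbers retired in earlier versions
      reserved "legacy_flag";   // keep the old field name from being reused too
      string name = 1;
    }

    enum JobState {
      reserved 3;               // a removed enum value stays off-limits
      JOB_STATE_UNSPECIFIED = 0;
      JOB_STATE_RUNNING = 1;
      JOB_STATE_DONE = 2;
    }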
I view an enum more as an abstraction to create subtypes, especially named ones. “Enumerability” is not necessarily required and in some cases is detrimental (if you design software in the way proto wants you to). Whether an enum is “open” or “closed” is a similar decision to something like required vs optional fields enforced by the proto itself (“hard” required being something that was later deprecated).
One option would be to have enums be “closed” and call it a day - but then you can never add new values to a public enum without breaking all downstream software. Sometimes that may be justified, but other times it isn’t strictly necessary (basically it comes down to whether the enum’s API needs static enumerability or not).
IMO the Go way is the most flexible and sane default. Putting aside dedicated keywords etc, the “open by default” design means you can add enum values when necessary. You can still get dynamically closed enums with extra code; statically closed ones are not possible without codegen, though. However, if the default were closed enums, you couldn’t use one where you wanted it open, and you’d have to set it up the way Go does now anyway.
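A rough sketch of what I mean (type and method names are made up): the type is open by construction, and “closed” is something callers opt into dynamically:

    package main

    import "fmt"

    type Color int

    const (
        ColorUnknown Color = iota // zero value doubles as "unrecognized"
        ColorRed
        ColorGreen
    )

    // IsValid is the "extra code" for a dynamic closed check; nothing stops
    // out-of-range values from flowing through the program until someone calls it.
    func (c Color) IsValid() bool {
        return c > ColorUnknown && c <= ColorGreen
    }

    func main() {
        c := Color(42)           // an "open" value: compiles fine
        fmt.Println(c.IsValid()) // false — caught only because we checked
    }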
Not sure what GP had in mind, but I have a few reasons:
Cherry picks are useful for fixing releases or adding changes without having to cut an entirely new release. This is especially true for large monorepos, which may have all sorts of unrelated changes in between. They are a much safer way to “patch” a release, especially if the release process itself is long and you want a limited-scope “emergency” one.
Atomic changes - assuming this is also about releases, it’s because the release processes for the various systems might not be in sync. If a frontend change that uses a new backend feature ships alongside the backend feature itself, you can get version skew issues unless everything rolls out in lock-step and you have strong regional isolation. Cherry picks are a way to work around this, but it’s better not to make these changes “atomic” in the first place.
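Mechanically, the cherry-pick flow from the first point is tiny (branch name and hash made up); the win is that only that one commit goes through the limited-scope release process:

    git checkout release-1.42      # the branch the affected release was cut from
    git cherry-pick abc1234        # pull in just the fix from main
    git push origin release-1.42   # then run only the limited-scope emergency release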
Google’s SRE STPA starts with a similar model. I haven’t read the external document, but my team went through this process internally and we considered the hazardous states and environmental triggers.
I’m struggling to understand the chain of events, because the story starts midway. Is the claim that JUST the 2FA code was enough to pwn everything with no other vulnerabilities? If that’s the case, then that’s a way bigger problem.
Or (given the password database link at the end), is the sequence:
1) various logins are pwned (Google leak or just other logins, but using gmail as the email - if just other things, then password reuse?)
2) attacker has access to password
3) attacker phishes 2FA code for Google
4) attacker gains access to Google account
5) attacker gains access to Google authenticator 2FA codes
6) attacker gains access to stored passwords? (Maybe)
7) attacker gains the 2nd factor (and possibly the first one, via the Chrome password manager?) to a bunch of different accounts. Alternatively, more password reuse?
I guess the key question for me is: was there password reuse, and to what extent, or did this not require it?
Disclaimer: work at Google, not related to security, opinions my own.
No, if they had had the password they wouldn't have needed to do all of that. They could have just logged in, perhaps just needed the 2FA code. However, you say that you gave them both enhanced security codes (I'm guessing this was a gmail backup key), and you also gave them the 2FA SMS code. These are the only two things you need to take over any gmail account, and it doesn't require knowing the password. It's just purely social engineering.
The only question mark is the email from google. It sounds like it was a scam email, so it would be interesting to know whether/how it was spoofed.
And did you have passwords using chrome password manager as well (which were also compromised by the Google account access, and this is how they got access to e.g. Coinbase?), or did they get passwords through some other means and just needed 2FA?
I did have saved passwords in Chrome password manager but they were old. My guess is that the attacker used Google SSO on Coinbase (e.g., "sign in with Google"), which I have used in the past. And then they opened up Google's Authenticator app, signed in as me, and got the auth code for Coinbase.
By enabling cloud-sync, Google has created a massive security vulnerability for the entire industry. A developer can't be certain that auth codes are a true 2nd factor, if the account email is @gmail.com for a given user because that user might be using Google's Authenticator app.
Hmm, I see what you mean, although technically this is still a 2 factor compromise (Google account password + 2FA code). Just having one or the other wouldn’t have done anything. The bigger issue is the contagion from compromising a set of less related two factors (the email account, not the actual login).
Specifically, the most problematic combination is SSO + Google Authenticator. Just @gmail + Authenticator is not enough; you would also need to store passwords in the Google account and sync them.
Although, this is functionally the same as using a completely unrelated password manager and storing authenticator codes there (a fairly common feature) - a password manager compromise leads to a total compromise of everything.
Inbox is the biggest compromise of them all IMO. I realized this a decade ago and use a different email for every account that I have. None of them have anything to do with my name in any way; I use 4 random words to create a new email for any new account that I need. Accidental takeover of any one account does not lead to total takeover of my life :)
No, it sounds like they got him to create backup codes, which (along with the SMS 2FA code, which he also gave them) is all they need to take over the gmail account. Job done.
These don’t necessarily prevent censorship; at best they give you a way to detect it.
DNSSEC gives you the ability to verify the DNS response. It doesn’t protect against a straight up packet sniffer or ISP tampering, it just allows you to detect that it has happened.
DoT/DoH are better: they guarantee you receive the response the resolver wanted you to receive, and that will prevent ISP-level blocks. But the government can just pressure public resolvers to enact the changes at the resolver level (as they are now doing in certain European countries).
You can run your own recursive resolver, and this will actually circumvent most censorship (but not hijacking).
Hijacking is actually quite rare. ISPs usually implement the blocks at their resolver (or the government mandates that public resolvers do). For blocking things more reliably, SNI filtering is already very prevalent and generally a better ROI (since you need a packet sniffer on the path to do either).
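For the "run your own recursive" route, something like unbound makes it a handful of lines. A rough sketch only (file paths and defaults vary by distro):

    # /etc/unbound/unbound.conf — minimal local recursive resolver
    server:
        interface: 127.0.0.1
        access-control: 127.0.0.0/8 allow
        # validate DNSSEC locally instead of trusting an upstream's AD bit
        auto-trust-anchor-file: "/var/lib/unbound/root.key"

Then point /etc/resolv.conf at 127.0.0.1 and queries go straight to the authoritative servers, skipping the ISP or public resolver entirely.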
DNSSEC itself won't help you alone, but the combination of DNSSEC + ODoH/DoT will. Without DNSSEC, your (O)DoH/DoT server can mess with the DNS results as much as your ISP could.
Of course you will need to configure your DNS server/client to do local validation for this, and at most it'll prevent you from falling for scams or other domain foolery.
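If you want to check that validation is actually happening, both of these come with the BIND tools on most systems:

    # delv does its own DNSSEC validation, independent of the upstream resolver
    delv example.com A

    # dig shows whether a validating resolver set the AD (Authenticated Data) flag
    dig +dnssec example.com A | grep -E '^;; flags:.* ad'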
In practice, DNSSEC won't do anything for ordinary Internet users, because it runs between recursive resolvers and authority servers, and ordinary users run neither: they use stub resolvers (essentially, "gethostbyname") --- which is why you DHCP-configure a DNS server when you connect to a network. If you were running a recursive resolver, your DNS server would just be "127.0.0.1".
The parent comment is also correct that the best DNSSEC can do for you, in the case where you're not relying on an upstream DNS server for resolution (in which case your ISP can invisibly defeat DNSSEC), is to tell you that a name has been censored.
And, of course, only a tiny fraction of zones on the Internet are signed, and most of them are irrelevant; the signature rate in the Tranco Top 1000 (which includes most popular names in European areas where DNSSEC is enabled by default and security-theatrically keyed by registrars) is below 10%.
DNS-over-HTTPS, on the other hand, does decisively solve this problem --- it allows you to delegate requests to an off-network resolver your ISP doesn't control, and, unlike with DNSSEC, the channel between you and that resolver is end-to-end secure. It also doesn't require anybody to sign their zone, and has never blown up and taken a huge popular site off the Internet for hours at a time, like DNSSEC has.
Whatever else DNSSEC is, it isn't really a solution for the censorship problem.
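For anyone who wants to see the delegation in action, a DoH query is just an HTTPS request; Cloudflare's JSON endpoint is the easiest one to poke at (other resolvers expose similar ones):

    # the ISP on the path only sees a TLS connection to cloudflare-dns.com
    curl -s -H 'accept: application/dns-json' \
      'https://cloudflare-dns.com/dns-query?name=example.com&type=A'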
Obviously you need to enable local verification for DNSSEC to do anything in the first place, otherwise the DNS server can just lie about the DNSSEC status. If someone is manually configuring a DoH resolver, they probably have a toggle to do DNSSEC validation nearby.
DNSSEC doesn't prevent censorship, but it does make tampering obvious. Moving the point of trust from my ISP to Cloudflare doesn't solve any problems, Cloudflare still has to comply with national law. DoH is what you use to bypass censorship; DNSSEC is what you use to trust these random DNS servers you find on lists on Github somewhere.
A bit over half the websites I visit use signed zones. All banking and government websites I interact with use it. Foreign websites (especially American ones) don't, but because of the ongoing geopolitical bullshit, American websites are tough to trust even when nobody is meddling with my connection, so I'm not losing much there. That's n=1 and Americans will definitely not benefit because of poor adoption, but that only proves how much different kinds of "normal internet user" there are.
I think we're basically on the same page. With respect to who is or isn't signed, I threw this together so we could stop arguing about it in the abstract on HN:
It does say that they collect this information in their “Data and Privacy Policy”. Specifically section 2.2 (Data Collected): https://quad9.net/privacy/policy/
Which policy are you referring to that implies they don’t?
Also, I think you are assuming they store query logs and then aggregate that data later. It is much simpler to maintain an integer counter for monitoring as the queries come in and ingest that into a time-series database (not sure if that’s what they actually do). Maybe it needs to be a bit fancier to handle the cardinality of the DNS-name dimension, but reconstructing this from logs would be much more expensive.
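Roughly what I mean, as a toy sketch in Go (not a claim about how Quad9 actually does it):

    package main

    import (
        "fmt"
        "sync"
    )

    // per-label counters kept in memory as queries arrive; totals get flushed to a
    // time-series store periodically, and no per-query log needs to be retained
    var (
        mu     sync.Mutex
        counts = make(map[string]uint64)
    )

    func onQuery(qname string) {
        mu.Lock()
        counts[qname]++ // the cardinality of distinct names is the awkward part at scale
        mu.Unlock()
    }

    func main() {
        for _, q := range []string{"example.com.", "example.com.", "news.ycombinator.com."} {
            onQuery(q)
        }
        fmt.Println(counts)
    }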
The section you mentioned does not say anything about having counters for labels. It only mentions that they record "[t]he times of the first and most recent instances of queries for each query label".
Well, the counters aren't data collected, they are data derived from the data they do collect. The privacy policy covers collection.
EDIT: I see they went out of their way to say "this is the complete list of everything we count" and they did not include counters by label, so I see your point!
I don't see how that is compatible with 2.2. They don't say anything about counters per label. It describes counters per RR type, and watermarks of the first and most recent timestamps per label, not counts per label.
If an organization is going to be this specific about what they count, it implies that this is everything they count, not that there may also be other junk unmentioned.
This might work for the types you create, but what about all the code written in the language that expects the “proper” structure?
> Types either represent the data or not
This is definitely required, but it's only really the first step. Where types get really useful is when you need to change them later on. The key aspects there are how easily you can change them and how much the language tooling can help.
I don’t think that is implied. It was discovered first, but that doesn’t mean it is necessarily simpler or required less data to discover. Take Newton and Leibniz’s calculus as a clear example: similar discovery time, the same result, but different approaches. Leibniz technically started after Newton, and yet his formulation is the preferred one.
Especially if theory B is equivalent to theory A, then using it as a replacement for theory A seems perfectly fine (well as long as there are other benefits).
In some cases it might be pointless from a scientific standpoint, because the goal there is “not-yet-known” predictions, but viewed through a mathematical lens it seems like a valid area of study.
Maybe the process behind creating theory A is more generalisable towards future scientific discovery, but that would make the process worthwhile, not the theory.
Being pedantic, even if you have to pay for a type of car, you still have no variance from expectation when you know what you are getting. I think that point was more about the variance in driving, driver etc. rather than car type.
Re: enshittification in general, I think the incentives are better aligned for self-driving. E.g. charging riders who leave trash can make the company money whilst also improving the overall experience.
With non-self-driving, you have to rely on user ratings etc. to penalise a specific driver, which seems inherently fuzzier. The company has the conflicting goals of keeping enough drivers (which drives costs down) whilst guaranteeing a certain experience. It is difficult to create a system for drivers to “improve” (e.g. clean their car) and for the company to directly encourage that, whereas in a fully automated system it’s easier to just charge people who litter more.