I usually just `git rebase origin/main -i` after the base branch has been merged there, and this means I need to explicitly drop the merged commits, but I can inspect what's happening.
Add `--update-refs` to your interactive rebase and it gives you an easy marker for how many commits to drop, because it adds an `update-ref` line for the old base branch (sketch below). You can just delete everything up to and including that `update-ref` line, without having to manually pull up a git log of the other branch to remember which commits were already merged.
(Plus, of course, if you have multiple branches stacked, `--update-refs` makes it easier to update all of them if you start from the outermost branch.)
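For reference, the todo list you get looks roughly like this (branch name and commits are hypothetical; the first two picks are the commits that already landed via the base branch):

    pick 1a2b3c4 parser: handle quoted fields
    pick 5d6e7f8 parser: add fuzz tests
    update-ref refs/heads/old-base-branch
    pick 9c8d7e6 cli: wire the parser in

Delete everything up to and including the `update-ref` line, and only the last pick, the genuinely new work, gets replayed onto origin/main.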
I picked up `--update-refs` to not lose my mind working in squash merge repos. I much prefer merge commits. I often make good, well-documented commits and I like having access to the original commits if I need them, so when in a squash merge repo I become a branch hoarder, renaming merged branches with a `zoo/` prefix (to drop them low in sort order, among other reasons it is named `zoo/`).
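For concreteness, the hoarding step is just a rename (hypothetical branch name):

    git branch -m my-feature zoo/my-feature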
I will often keep experiment branches around, and `--update-refs` helps me manage that: if I see commits that would update-ref a `zoo/` branch, I know to drop them from the experiment branch.
All of that discovery of already merged commits would be automatic/cheap in rebases with merge commits. I was very frustrated before discovering `--update-refs`. I'm still often frustrated with squash merging, but keeping a large `zoo/` and having `--update-refs` is extra work that almost replicates the experience of just using merge commits in the first place. I don't know why so many think squash merge workflows are "simpler".
"Context" here is just a string. Debugging means grepping that string in the codebase, and praying that it's unique. You can only come up with so many unique messages along a stack.
You are also not forced to add context. Hell, you can easily leave errors unhandled, without compiler errors or warnings, which even linters won't pick up, due to the asinine variable syntax rules.
I'm not impressed by the careless tossing around of the word "easily" in this thread.
It's quite ridiculous that you're claiming errors can be easily left unhandled while referring to what, a single unfortunate pattern of code that will only realistically happen due to copy-pasting and gets you code that looks obviously wrong? Sigh.
"Easily" doesn't mean "it happens all the time" in this context (e.g. PHP, at least in the olden days).
"Easily" here means that WHEN it happens, it is not usually obvious. That is my experience as a daily go user. It's not the result of copy-pasting, it's just the result of editing code. Real-life code is not a beautiful succession of `op1, op2, op3...`. You have conditions in between, you have for loops that you don't want to exit in some cases (but aggregate errors), you have times where handling an error means not returning it but doing something else, you have retries...
I don't use Rust at work, but enough in hobby/OSS work to say that when an error is not handled, it sticks out much more. To get back on topic of succinctness: you can obviously swallow errors in Rust, but then you need to be juggling error vars, so this immediately catches the eye. In Go, you are juggling error vars all the time, so you need to sift through the whole thing every goddamn time.
> Debugging means grepping that string in the codebase, and praying that it's unique.
This really isn't an issue in practice. The only case where an error wouldn't uniquely identify its call stack is if you were to use the exact same context string within the same function (and also your callees did the same). I've never encountered such a case.
> You are also not forced to add context
Yes, but in my experience Go devs do. Probably because they're having to go to the effort of typing `if err != nil` anyway, and frankly Go code with bare:
    if err != nil {
        return err
    }
sticks out like a sore thumb to any experienced Go dev.
> which even linters won't pick up, due to asinine variable syntax rules.
I have never encountered a case where errcheck failed to detect an unhandled error, but I'd be curious to hear an example.
Now all you have to do is get a Go programmer to write code like this:
    if somethingElse {
        err := baz()
        log.Println(err)
    }
Good luck!
As for your first example,
// if only err2 failed, returns nil!
Yes, that's an accurate description of what the code you wrote does. Like, what? Whatever point you're trying to make still hinges on somebody writing code like that, and nobody who writes Go would.
Now, can this result in bugs in real life? Sure, and it has. Is it a big deal to get a bug once in a blue moon due to this? No, not really.
Go only looks like that in toy examples where you have one method calling a bunch of libraries and services. If you are writing actual logic, the error handling is preferable to exceptions IMO, because no project even uses them correctly.
Now if you complain about slice handling, I'm with you.
Coding repetitive for-loops for everything and putting mind-numbing error handling everywhere makes the line count bloat up like crazy (toy example below). Go is one of the most verbose languages I have seen, and I say this as a guy coding in Go in my daily work.
Evidence is easy - think of a problem and ask an LLM to generate idiomatic examples (leveraging Java streams, functional decomposition, etc.) in Go and Java, with error handling. You will find that more often than not, the Java line count is far smaller.
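To make it concrete, here is a toy, entirely made-up example of the shape most of that code takes: parse some strings, keep the even values, sum them, and check an error at every step.

    package main

    import (
        "fmt"
        "strconv"
    )

    // sumEven parses a list of strings, keeps the even values, and sums them.
    // The task and names are invented purely to show the loop-plus-error-check shape.
    func sumEven(inputs []string) (int, error) {
        total := 0
        for _, s := range inputs {
            n, err := strconv.Atoi(s)
            if err != nil {
                return 0, fmt.Errorf("parsing %q: %w", s, err)
            }
            if n%2 == 0 {
                total += n
            }
        }
        return total, nil
    }

    func main() {
        fmt.Println(sumEven([]string{"2", "3", "10"})) // 12 <nil>
    }

The equivalent streams pipeline in Java is a handful of lines; whether that reads better is the debate, but the line-count gap is real.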
I also code go daily for work, and while what you say is true, it's still far less than what I remember from working with Java, which was constantly wrapping mundane crap in classes and other stuff.
Yeah, well you can write "enterprise 10k patterns crap" in any language. Java projects suffered from the craze of those initial years where every "architect" and their grandmother insisted on patterns.
Idiomatic, Modern Java is written quite differently. Today, Go has a lot of arcane, noisy, complex code too. Ex: many, many k8s Go projects.
Feel free to provide some evidence. Like really, I would be interested in examples, e.g. from the Java stdlib, that are significantly more verbose than another general-purpose language.
I suspect you mean stuff like AbstractFactoryFactory, but you do realize that there is zero need to write anything like that, and that you can (and people do) write bad code in any language?
They are already? See the football debacle in Spain. It's not their fault, legally and technically, but if everything wasn't centralized by them, it wouldn't happen.
Don't you think there are a few things we could say on this subject to bring the debate to good-old-HN level?
(1) LLMs' attention mechanisms are clear enough at a conceptual level:
"the chicken didn't cross the road because it was too wide"...
OK, so LLMs "understand" that the "it" is the road, because QVK etc is enough to "learn" this.
some say this is all you need... I beg to differ:
(2) Human brains are complex but better and better studied. You have one; you should be interested in the hardware.
So: IMHO current LLMs look a lot like the Default Mode Network in our brains. If you read the description there: https://en.wikipedia.org/wiki/Default_mode_network I think you will see, like I do, a striking similarity between the behaviour of LLMs and our DMN's ways.
What a synthetic FPN would be, I have no idea, so here goes:
The bag is very interesting!
I would never say that's all we need, but... I do say that that might be the most important part we need! That is, language is the most distinctive feature our brains have. The DMN and similar shallow "activity scans" don't tell us much. Yes, some animals have some kind of language, they communicate and predict trivial results or remember some events. But this is meaningless compared to the output of a human brain, the difference is abysmal.
I don't think there's anything interesting left in the bag.