The complaint above isn't that gdb is hard, it's that the binary with debug info doesn't correlate very well with the source code after optimisations.
Bugs tend to disappear on me when I change the optimiser flags or run the thing under gdb though. I've also had a program valgrind-clean that segfaults when run without valgrind. It's a confusing world out there.
Debugging optimized binaries is harder than debugging non-optimized ones, and requires more advanced debugging skills and gdb knowledge.
There is a reason why crash dumps on optimized builds used to be called "guru meditation".
One useful skill to get started is learning how to get at data and print it through various levels of complex C++ data structures, which might require leaning on the python API to stay sane.
> Bugs tend to disappear on me when I change the optimiser flags or run the thing under gdb though
A lot of that shouts UB or compiler bug to me. I don't get this in my code unless there is some UB I missed. Unfortunately there are places where UB is unintuitive and may not even be present in newer versions of the language.
Not a fact. It's typical to have debug and release builds, and possibly other flavors.
Of course sometimes all you have is a stack trace from a release build, and then you need to debug that. But if you can, debug the debug build, it's much easier.
Frankly, that's stupid advice. Typically you have at least two build modes, debug and release. Debugging without optimization is easier because the source code maps cleanly to the compiler output. A program without optimizations also behaves the same as one with optimizations, unless you put undefined behaviour into your code (which should be rare for an experienced programmer) or hit a compiler bug (also quite rare).
I'm not sure where you got these ideas, but there are tons of benefits to debug builds like catching out of bounds lookups on vectors the moment they happen and faster compilation.
It's bizarre that you don't realize everyone works this way.
Not when you want to debug it.
I think you're mistaking debug information not lining up with your program for different behavior, but these are not the same thing.
Crashes in production happen with optimized builds. You need to be able to inspect the core and figure out what happened from there, as usually you can't reproduce the scenario.
What about just compiling and running your program after you make a change or get a crash?
> Crashes in production happen with optimized builds. You need to be able to inspect the core and figure out what happened from there, as usually you can't reproduce the scenario.
Right... What's your point? This scenario doesn't overlap with regular iterations. Normal workflow and a crash after a program is distributed are two separate things.
The crash when your program is distributed is the normal workflow. What matters is that your program runs flawlessly in its distributed environment, not that your test suite passes locally.
Testing is but a proxy to achieve the true goal, and is far from perfect.
Your normal workflow is only finding bugs once you've released your program to other people? You don't find any bugs while you work on it? You write a program and if it compiles you immediately assume everything is fine, release it and wait for someone to complain?
Do you realize that this thread was about someone saying that debug builds are a legacy holdover from the 80s?
> Testing is but a proxy to achieve the true goal, and is far from perfect.
What is it exactly that you think you're replying to? All this was about was someone thinking there was no use for debug builds. You're hallucinating some sort of discussion or argument about distributed software, services, and updates; none of it is even relevant to what is being talked about.
Did you get mixed up and think that people saying that debug builds are crucial for iterations means that they were saying no one ever needs to debug an optimized build?
Postmortem debugging on production builds without debug info is just harder (it depends on how much postmortem info your bug report system provides; at least on Windows it still makes sense to create a PDB file and archive it in-house, so you can associate it with the minidump you're getting from the bug report: slightly mismatched debug info is still much better than no debug info at all). A production build crash doesn't mean the symptoms aren't reproducible in a debug build, and investigating the debug build after the bug has been reproduced is a lot more convenient.
Also, a lot more bugs show up and are already fixed during development and never even make it to CI, let alone out into the wild. That's where debug builds and debuggers are most useful: during the initial development phase.
Sometimes I really have the feeling that software development is moving backward in time (shakes head). Debuggers are incredibly powerful tools, use them!
Doesn't change the fact it's completely unrealistic to expect that you can do that with a full debug build.
The industry has moved towards software as a service, and sometimes your service crashes and you need to figure out why.
Even if you want to turn the problem into a regression test that you can run a debug build against, you'll still need to look at the core of the optimized build to figure out what happened to begin with.