
Destructors shouldn't fail: that's why they're implicitly noexcept in C++11. (You can, however, throw inside a destructor so long as you catch before returning.)

In practice, it's not a problem. If you really want to do something fallible on every success path, you can use an IIFE or a named function to isolate everything before the thing you want to do on the success path.
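For instance (a sketch; TempFile, write_all, and commit are hypothetical names):

  // Everything RAII-managed lives inside the lambda; all of its
  // destructors have run by the time the lambda returns.
  auto data = [&] {
    TempFile f("scratch");   // hypothetical RAII wrapper
    write_all(f, input);     // may throw; unwinding cleans f up
    return f.contents();
  }();
  commit(data);  // the fallible success-path step, after cleanup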

What cases do you have in mind?



I ran into something like this (very simplified):

  MyRAIIFile temp_file = ...;
  AddSomeStuffToFile(&temp_file); // threw something
where MyRAIIFile has a destructor:

  ~MyRAIIFile() {
    underlying_file.close(); // also threw something
  }
So, AddSomeStuffToFile threw an exception, and the destructor threw a second exception during stack unwinding, meaning I had two exceptions in flight, which makes the runtime call std::terminate and caused some weirdness. It took many hours to track down this particular problem...

I can see that one correct answer would be to put a try-catch around the .close() call, but that's the wrong place for that logic in my case; I want the caller of the destructor to decide what to do to recover. Even Java's checked exceptions would cause chaos here. Only returning an error in the destructor's return type (with a must-use annotation of course) would force me to handle this situation at compile time... but C++ can't do that.
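(For reference, the swallow-it-in-the-destructor version would be something like:)

  ~MyRAIIFile() {
    try {
      underlying_file.close();
    } catch (...) {
      // Nowhere safe to propagate from here; log and move on.
    }
  }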

Any advice for this situation?


The problem is that your close is fallible in the first place. In general, resource reclaim code should always be infallible. When you deallocate a resource, be it a file descriptor or a chunk of memory, you're returning something to the system. You're providing a gift. The kernel should never refuse this gift.

Linux confuses the issue somewhat. close(2) can report errors, and I'm guessing that when your close() throws an error, it's just propagating something it got from the operating system.

Thing is, close(2) errors aren't really errors. There are three cases: 1) close(2) succeeds; 2) close(2) fails with EBADF; and 3) close(2) fails with some other error. In case #1, there's no problem. In case #2, your program has a logic bug and you should abort, not throw. In case #3, the close operation itself actually succeeded, and the kernel is just reporting some error that occurred during file writeback in the meantime.

Errors in case #3 should be ignored. If you care about file durability, call fsync(2) before close. Catching and propagating IO errors from close(2) ensures nothing, since the kernel is allowed to defer potentially-failing IO operations until after the close!
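For a raw POSIX fd, that policy looks roughly like this (a sketch; durable_close is a made-up name):

  #include <cerrno>
  #include <cstdlib>
  #include <unistd.h>

  void durable_close(int fd) {
    if (fsync(fd) == -1) {
      // Real IO errors surface here, before close; handle them here.
    }
    if (close(fd) == -1 && errno == EBADF)
      std::abort();  // case #2: logic bug, fail fast
    // Any other close(2) error is case #3: the fd is gone; ignore it.
  }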


For case #2, isn't it a bit presumptuous of MyRAIIFile to make the decision to abort the entire program? It would be nice if the destructor could report the error upward to whoever called it, so they can decide whether to log or abort.

When you say "in general, resource reclaim code should always be infallible", that sounds kind of optimistic: as this example shows, cleanup code is fallible, and the question is just where we handle the failure. So should I instead read that statement as "destructors shouldn't report errors"? And if so, is that because of the C++ limitation that destructors can't return a value, or is it fundamentally a best practice unrelated to the language?


> isn't it a bit presumptuous of MyRAIIFile to make the decision to abort the entire program?

No. Closing an invalid file descriptor is a logic bug. It's just as bad as dereferencing an invalid pointer. When you notice one of these, you crash, because continuing means operating in some unknown and potentially dangerous state.


The usual advice is to add a TryClose method to your MyRAIIFile class that can signal failure, and that also keeps the object around in some well-defined state. This doesn't force you to handle the situation properly, but at least it makes it possible to do so.
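A sketch of that pattern, assuming a POSIX fd underneath (the [[nodiscard]] stands in for the must-use annotation mentioned above):

  #include <cerrno>
  #include <system_error>
  #include <unistd.h>

  class MyRAIIFile {
   public:
    // Explicit close that can report failure; the object ends up
    // closed either way.
    [[nodiscard]] std::error_code TryClose() noexcept {
      if (fd_ == -1) return {};  // already closed: well-defined no-op
      int fd = fd_;
      fd_ = -1;
      if (::close(fd) == -1)
        return {errno, std::generic_category()};
      return {};
    }

    ~MyRAIIFile() {
      (void)TryClose();  // last-resort cleanup; errors ignored here
    }

   private:
    int fd_ = -1;
  };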



