I said this in another comment, but look at the leading chess engines. They are already so far above human level of play that having a human override the engine's choice will nearly always lead to a worse position.
> You're not expecting it to always be right, are you?
I think another thing that gets lost in these conversations is that humans already produce things that are "wrong". That's what bugs are. AI will also sometimes create things that have bugs, and that's fine so long as it does so at a lower rate than human software developers.
We already don't expect humans to write absolutely perfect software, so it's unreasonable to expect that AI will do so.
I don't expect any code to be right the first time. I would imagine that if it's intelligent enough to ask the right questions, do the research, and write an implementation, it's intelligent enough to do some debugging.