>take multiple bad audio signals and combine them into one signal that's better
The problem eventually comes down to the fact that "better" is subjective. We're in the murky realm of art here. Should your algorithm keep that fret noise or the squeaking of a vocalist's intake of breath? Are they "noise," or are they part of the performance?
>I know nothing about audio processing
Not wishing to be rude, but this much is very evident. Recording engineers position their microphones with millimetre precision in order to combat phase issues, and that is in an ideal studio scenario. Doing what you suggest is basically impossible.
Maybe I'm overstating it; you could probably do something and it'd be a nice bit of research, but you wouldn't get useful results in the way you're imagining.
> Not wishing to be rude, but this much is very evident. Recording engineers position their microphones with millimetre precision in order to combat phase issues, and that is in an ideal studio scenario. Doing what you suggest is basically impossible.
Actually, that much I know, because I've done some amateur home recording. I know that, for example, when you mic a snare drum with two microphones pointed at each other, you have to flip the polarity ("phase invert") on one of them. I also know my way around the basic processors for audio production (compressor, limiter, EQ, etc.).
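To make the phase problem concrete, here's a toy sketch (plain numpy; the 440 Hz tone and 1 ms delay are just illustrative numbers): a polarity-flipped copy cancels the original exactly when summed, and even a small time offset between two copies of the same signal causes frequency-dependent cancellation, which is what careful mic placement is fighting.

```python
import numpy as np

sr = 44100                               # sample rate (Hz)
t = np.arange(sr) / sr                   # one second of time
tone = np.sin(2 * np.pi * 440 * t)       # 440 Hz test tone

# Polarity flip: summing the two cancels completely.
print(np.max(np.abs(tone + (-tone))))    # 0.0

# The same tone delayed by 1 ms (~34 cm of extra mic distance at the
# speed of sound). The sum is strongly attenuated at this frequency;
# broadband material would get comb filtering instead (some
# frequencies notched, others boosted).
delay = int(0.001 * sr)                  # 44 samples
delayed = np.concatenate([np.zeros(delay), tone[:-delay]])
print(np.max(np.abs(tone + delayed)))    # well below 2.0
```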
What I don't know much about are the more advanced techniques, if they exist, that could realize the idea I'm talking about. The best I can come up with: if you had one audio source that captured the dynamics of a concert (perhaps from a phone far away from the house speakers), and another that captured a clearer yet "smashed" sound (perhaps from a phone close to the house speakers), perhaps you could apply a compressor to the second source keyed on the dynamics of the first. Again, I might be full of crap here.
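For what it's worth, that idea is prototypable. Strictly, what's wanted is keyed expansion rather than compression, since the goal is to put dynamics back into the smashed signal: measure a smoothed amplitude envelope on the distant-but-dynamic source and use it as a gain curve on the close-but-smashed one. A minimal sketch with numpy/scipy, assuming mono WAV files at the same sample rate that are already time-aligned (the filenames and the 50 ms smoothing constant are placeholders, and the alignment is its own hard problem):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

def envelope(x, sr, smooth_ms=50.0):
    """Amplitude envelope: rectify, then one-pole lowpass."""
    a = np.exp(-1.0 / (sr * smooth_ms / 1000.0))
    return lfilter([1.0 - a], [1.0, -a], np.abs(x))

sr, dynamic = wavfile.read("far_phone.wav")    # dynamic but murky (placeholder name)
_,  smashed = wavfile.read("close_phone.wav")  # clear but crushed (placeholder name)

n = min(len(dynamic), len(smashed))
dynamic = dynamic[:n].astype(np.float64)
smashed = smashed[:n].astype(np.float64)

# Gain curve that maps the smashed envelope onto the dynamic one.
eps = 1e-9
gain = envelope(dynamic, sr) / (envelope(smashed, sr) + eps)
gain = np.clip(gain, 0.0, 4.0)        # cap boosts so near-silence doesn't explode
restored = smashed * gain
restored /= np.max(np.abs(restored)) + eps
wavfile.write("restored.wav", sr, (restored * 32767).astype(np.int16))
```

Whether the result sounds like restored dynamics or like pumping depends entirely on the smoothing constants, which loops straight back to the "better is subjective" point above.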
I presume he means better in the sense that you'd try to remove overt noise, e.g. conversations in the background, maybe wind noise. That sounds like it'd be possible to do. Improving a single video from multiple video sources, on the other hand, would surely be impossible. Even with scene reconstruction etc. you're not going to improve the quality of any single video source... (?)
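The textbook starting point for that kind of noise removal is to time-align the recordings and average them: the performance is correlated across all the phones while crowd chatter and wind are not, so averaging reinforces the former and dilutes the latter. A toy sketch, assuming mono numpy arrays at the same sample rate with a single constant offset between them (real phone clocks drift, so one lag wouldn't hold over a whole concert):

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def align_and_average(ref, others, sr, max_lag_s=5.0):
    """Align each recording to `ref` by cross-correlation, then average."""
    max_lag = int(max_lag_s * sr)
    aligned = [ref]
    for x in others:
        corr = correlate(x, ref, mode="full")
        lags = correlation_lags(len(x), len(ref), mode="full")
        plausible = np.abs(lags) <= max_lag          # ignore absurd offsets
        lag = lags[plausible][np.argmax(corr[plausible])]
        # Crude alignment: shift x back by its lag (wraps at the edges,
        # fine for a sketch, not for production).
        aligned.append(np.roll(x, -lag)[: len(ref)])
    n = min(len(a) for a in aligned)
    # Correlated content adds coherently; uncorrelated noise averages down.
    return np.mean([a[:n] for a in aligned], axis=0)
```

Averaging only attenuates noise that's independent across recordings, though; it does nothing for PA distortion every phone heard, and any residual phase misalignment between sources smears the highs, which is the parent's point.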