This looks great! How small of an audio buffer have you been able to get down to? Any plans for an API?
I've been developing a VR spatial sound and music app for a few years with the Unity game engine, bypassing the game engine's audio and instead remote controlling Ambisonic VSTs in REAPER. I can achieve low latency with that approach, but it's a bit limited because all the tracks and routing need to be set up beforehand. There's probably a way to script it in REAPER, but that sounds like an uphill battle. It would be a lot more natural to interface with an audio backend that's organized in terms of audio objects in space.
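For the curious, the remote-control part boils down to sending OSC messages to REAPER's control surface. Here's a rough sketch (the port, track numbers, and values are all made up; it assumes REAPER's OSC control surface is enabled and listening with the default address patterns):

```python
# Sketch: remote-controlling REAPER track parameters over OSC.
# Assumes REAPER's OSC control surface (Preferences > Control/OSC/web) is
# listening on localhost:8000 with the default pattern config, which maps
# normalized floats to /track/@/volume and /track/@/pan.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # host and port are assumptions

def set_track_volume(track: int, volume: float) -> None:
    """Set a track's volume fader, normalized 0.0-1.0."""
    client.send_message(f"/track/{track}/volume", volume)

def set_track_pan(track: int, pan: float) -> None:
    """Pan from 0.0 (hard left) to 1.0 (hard right); 0.5 is center."""
    client.send_message(f"/track/{track}/pan", pan)

# e.g. duck track 3 and pan it as a sound source moves past the listener
set_track_volume(3, 0.4)
set_track_pan(3, 0.75)
```

The catch, as I said, is that track 3 has to already exist with the right routing - you can't conjure a new spatialized source out of thin air this way.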
What I'd like is more flexibility to create and destroy objects on the fly. The VSTs I'm working with don't have any sort of occlusion either, and that would be really nice to play with. Meta has released a baked audio ray-tracing solution for Quest, and that's fun for some situations, but the latency is a bit too much for a satisfying virtual instrument.
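The cheap approximation I'd settle for (this is not how Meta's solution works, and every number here is made up): cast a ray from the listener to the source each frame, and if something's in the way, attenuate and low-pass that source's audio. Something like:

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, sample_rate=48000):
    """One-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    x = np.asarray(x, dtype=float)
    # Standard coefficient for a one-pole filter at the given cutoff
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    y = np.empty_like(x)
    acc = 0.0
    for i in range(len(x)):
        acc += a * (x[i] - acc)
        y[i] = acc
    return y

def occlude(block, blocked, sample_rate=48000):
    """Crude occlusion: if the listener->source ray is blocked,
    darken (800 Hz cutoff) and attenuate (-6 dB) the audio block."""
    if blocked:
        return 0.5 * one_pole_lowpass(block, cutoff_hz=800.0,
                                      sample_rate=sample_rate)
    return block
```

Crossfading the filter in and out over a few blocks would avoid clicks when the ray result flips, but even this crude version would be fun with a virtual instrument.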
Hey, I’ve got the audio buffer down to under 20 ms, but it depends on the complexity of the scene.
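(For reference, latency is buffer size divided by sample rate, so at 48 kHz a 20 ms budget is about 0.020 × 48000 = 960 samples; the exact figures depend on the device's sample rate and block size.)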
What you’re working on sounds really cool - I’ll have a look at it!
It sounds like Audiocube offers the kind of features you need, although it doesn’t have realtime audio input yet (I’m working on it and have it partially working).
Looking at your project, it seems like it would be great to integrate the two somehow - Audiocube would be awesome in VR, but I have no VR dev experience.
There's a library Valve made for spatial audio in games (including VR), Steam Audio. I've played around with it a bit and it's incredible. I'm surprised more games haven't adopted it.
Here's my project for context: https://musicality.computer/vr