I don’t understand what you’re trying to imply here.
Are you seriously suggesting that I chose to downgrade the graphics on the XB1 because I felt like it, and that dozens of other AAA game studios did the same thing?
Our engine was Microsoft-native; by all rights it should have performed much better on the XB1 than on the PS4.
If you’re going to argue, you’ll have to do a lot better than that, since I have many years of lived experience with these platforms.
I didn't take it personally, I just think you're presenting ignorance as fact, and that's frustrating.
Especially when it seemingly comes from nowhere and people keep echoing the same thing, which I know not to be true.
Look, I know people really love virtualisation (I love it too) but it comes with trade-offs; spreading misinformation only serves to mislead people, for... what, exactly?
I understood the parent's perspective: GPU passthrough (i.e. VT-d / AMD-Vi) does hand PCIe lanes from the CPU to the VM at essentially native performance. My comment was stating that graphical fidelity doesn't depend solely on the GPU; there are other components at play, such as textures being sent to the GPU driver. Those textures don't just appear out of thin air: they're read from disk by the CPU and then passed to the GPU. (There's more to it, but on older generations I/O usually involves the CPU.)
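To make that path concrete, here's a rough D3D11-flavoured sketch (nothing from our actual engine, purely illustrative; it assumes a valid ID3D11Device and raw RGBA8 pixel data sitting on disk, and the function name is made up):

    // Disk -> system RAM (CPU work) -> GPU resource (driver copy).
    #include <d3d11.h>
    #include <cstdint>
    #include <fstream>
    #include <iterator>
    #include <vector>

    ID3D11Texture2D* LoadRawRgbaTexture(ID3D11Device* device, const char* path,
                                        UINT width, UINT height)
    {
        // 1) CPU + disk: pull the pixel data into system RAM.
        std::ifstream file(path, std::ios::binary);
        std::vector<uint8_t> pixels((std::istreambuf_iterator<char>(file)),
                                    std::istreambuf_iterator<char>());

        // 2) Describe the GPU resource we want.
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_IMMUTABLE;
        desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

        // 3) Hand the CPU-side buffer to the driver; this is the
        //    CPU->GPU traffic I'm talking about.
        D3D11_SUBRESOURCE_DATA init = {};
        init.pSysMem = pixels.data();
        init.SysMemPitch = width * 4;  // bytes per row of RGBA8

        ID3D11Texture2D* texture = nullptr;
        device->CreateTexture2D(&desc, &init, &texture);  // texture stays nullptr if this fails
        return texture;
    }

None of that touches the GPU's shaders, but all of it costs CPU time, disk I/O and memory bandwidth, which is exactly where a hypervisor can take its cut.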
The problem with VMs is that normal memory accesses take on average about a 5% hit, and I/O takes the heaviest hit: roughly 15% for disk access and about 8% for network throughput (ballpark numbers, but in line with publicly available information).
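If you want to sanity-check the disk figure yourself, the crude version is timing a big sequential read on bare metal and inside the guest and comparing the MiB/s; something like this (the file path is a placeholder, and a proper measurement would also bypass the page cache):

    #include <chrono>
    #include <cstddef>
    #include <fstream>
    #include <iostream>
    #include <vector>

    int main()
    {
        const char* path = "testfile.bin";     // pre-created large test file (placeholder)
        std::vector<char> buffer(1 << 20);     // read in 1 MiB chunks

        auto start = std::chrono::steady_clock::now();
        std::ifstream file(path, std::ios::binary);
        std::size_t total = 0;
        while (file) {
            file.read(buffer.data(), buffer.size());
            total += static_cast<std::size_t>(file.gcount());
        }
        auto end = std::chrono::steady_clock::now();

        double seconds = std::chrono::duration<double>(end - start).count();
        std::cout << (total / (1024.0 * 1024.0)) / seconds << " MiB/s\n";
    }

Run it against the same device in both environments and the delta is your overhead.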
It doesn't even matter what the precise numbers are; it should be telling to some degree that the PS4 was native and the XB1 was virtualised, and the XB1 performed worse despite a more optimised and gamedev-friendly API (Durango speaks DX11) and better hardware.
It couldn't be clearer from the outside that the hypervisor was eating some of the performance.
I guess I should clarify that my point was purely in the abstract and not specific to the Xbox situation.
Of course in reality it depends on the hypervisor and the deployed configuration. Running a database under an ESXi VM with SSDs connected to a passed-through PCIe controller (under x86_64 with hardware-assisted CPU and IO virtualization enabled and correctly activated, interrupts working correctly, etc) gives me performance numbers within the statistical error margin when compared to the same configuration without ESXi in the picture.
I haven’t quantified the GPU performance similarly but others have and the performance hit (again, under different hypervisors) is definitely not what you make it out to be.
My point was that if there’s a specific performance hit, it would be pedantically incorrect to say “virtualizing the GPU is the problem” as compared to saying “the way MS virtualized GPU access caused a noticeable drop in achievable graphics.”
Sorry, I don't think I implied virtualising the GPU is the problem.
I said "the fact that it's a VM has caused performance degradation enough that graphical fidelity was diminished" - this is an important distinction.
To clarify further: the GPU and CPU are a unified package and the request pipeline is shared, so working overtime to send things to RAM affects GPU bandwidth. The overhead of non-GPU memory allocations still hits the GPU, because it eats into that limited shared bandwidth.
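A crude way to picture it: on a unified-memory APU, CPU threads streaming through big buffers pull from the same memory controller the GPU renders out of. A stress loop like this (buffer size and thread count are arbitrary) is the kind of CPU-side traffic I mean:

    #include <cstddef>
    #include <cstring>
    #include <thread>
    #include <vector>

    int main()
    {
        const std::size_t kBytes = 512ull * 1024 * 1024;   // 512 MiB per buffer
        const int kThreads = 4;
        std::vector<char> src(kBytes, 1), dst(kBytes, 0);

        // Each thread streams its own slice; together they saturate the
        // memory controller the GPU also depends on in a unified package.
        std::vector<std::thread> threads;
        for (int i = 0; i < kThreads; ++i)
            threads.emplace_back([&, i] {
                const std::size_t chunk = kBytes / kThreads;
                for (int pass = 0; pass < 8; ++pass)
                    std::memcpy(dst.data() + i * chunk, src.data() + i * chunk, chunk);
            });
        for (auto& t : threads) t.join();
    }

Add hypervisor bookkeeping on top of that and the bandwidth left over for the GPU shrinks further.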
I never checked whether the GPU bandwidth was constrained by the hypervisor, to be fair, because such a thing wasn't possible to test; the only point of comparison is the PS4, which we didn't optimise as heavily as the DX path and which ran on slightly less performant hardware.
As you may understand: there's more to graphical fidelity than just the GPU itself.
CPU<->GPU bandwidth (and GPU memory bandwidth) matters too.
There is a small but not insignificant overhead to these things with virtualisation: VMs don't come for free.