Full-motion, 360-degree tracking doesn't change the card requirements, and you can use two cards if necessary, so the only real barrier is 4K rendering.
Apparently a 1060 can already render a game like Overwatch in 4K, so I think we're already there. You just need to find someone who will take 4K screens out of some phones and put them in a headset.
"You just need to find someone that will take 4k screens out of some phones and put them in a headset."
Ha! With what cable? 4K * 2 * 90Hz blows way past DisplayPort (even DP 1.3). If it were just a matter of gluing two Android phones together, it would be on shelves already...
It may be hard in an objective sense, but it already exists. It's not the hard part of making a high-resolution VR headset because you can buy chips that do it for you.
A single DisplayPort 1.3 cable, supported by all recent GPUs, can push 4K at over 120Hz. Screens that use such data rates already exist too.
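As a rough sanity check on both of those numbers (a back-of-the-envelope sketch only, assuming 24-bit colour and ignoring blanking and protocol overhead):

    # Dual 4K panels at 90Hz vs. a single 4K panel at 120Hz, 24 bits per pixel.
    pixels_4k = 3840 * 2160
    dual_90 = pixels_4k * 2 * 90 * 24 / 1e9      # ~35.8 Gbit/s
    single_120 = pixels_4k * 1 * 120 * 24 / 1e9  # ~23.9 Gbit/s
    dp13_payload = 32.4 * 0.8                    # HBR3: 32.4 Gbit/s raw, 8b/10b -> ~25.9 Gbit/s usable
    print(dual_90, single_120, dp13_payload)

So a single 4K stream at 120Hz roughly fits a DP 1.3 link, while two 4K panels at 90Hz would not fit uncompressed.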
Throughput is not the issue with VR; latency is. What matters is the time from input to final render, not the amount of time it takes to render a frame (though obviously the latter is a lower bound on the former).
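For a sense of scale (just illustrative arithmetic), this is the per-frame budget that the whole input-to-photon path, sensor read and scanout included, has to fit inside:

    for hz in (60, 90, 120):
        print(hz, "Hz ->", round(1000 / hz, 1), "ms per frame")
    # 60Hz -> 16.7 ms, 90Hz -> 11.1 ms, 120Hz -> 8.3 ms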
Most of what I've read claims that games and applications need to explicitly add support for dual/multiple GPUs, as it's not something you get out of the box. And very few games do.
Anecdotally, this seemed to hold true for me, as I initially had dual 1080s in SLI and moved one of them to a second machine with no noticeable difference.
Nope. Fairly low utilization on most of the 8 cores. It's only halving the GPU resources if all of the GPU resources are being used. And that's kind of the point: adding more GPU power won't help for most VR because the games simply won't use it.
So SLI automatically bridges the two (or more) GPUs into a single logical card, with twice everything except RAM. Apps don't have to explicitly support it; they see a GPU with twice (or more) the texture units, cores, etc. of a single underlying card.
If your VR doesn't scale up its performance with more resources, perhaps it's artificially throttling itself to a maximum framerate? Can't push pixels to the headset fast enough? Waiting for sensor data from the headset before rendering?
I'm not sure, really. I've searched on it a bit and don't see a lot of good sources on exactly why it doesn't work well, but the vast majority of user opinion I see is the same.
This is a bit anecdotal, and I don't have expertise in the underlying technology, but I've read mentions of needing specific driver functionality to get better performance out of an SLI or dual-GPU setup, e.g. using a dedicated GPU to render each eye independently rather than trying to render both images together on a bridged virtual card. Allegedly, this takes a bit of extra effort (they'd need to support both AMD LiquidVR and Nvidia VRWorks) and would only benefit a minimal audience, so other aspects of the game get prioritized for development effort.
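Conceptually, the per-eye approach looks roughly like the sketch below. This is a hedged illustration only, not the real LiquidVR/VRWorks API; the function is a made-up placeholder.

    # Conceptual sketch: each eye gets its own GPU and the two renders run in
    # parallel, instead of alternating whole frames across a bridged pair of cards.
    from concurrent.futures import ThreadPoolExecutor

    def render_eye(gpu_id, eye_pose):
        # Placeholder for "record and submit this eye's command buffer on GPU gpu_id".
        return f"image rendered on GPU {gpu_id} for pose {eye_pose}"

    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(render_eye, 0, "left-eye pose")
        right = pool.submit(render_eye, 1, "right-eye pose")
        frame = (left.result(), right.result())  # both images then go to the VR compositor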
There are a couple of implementations that do. E.g., Nvidia's VR Funhouse was a showcase app for what implementing their SDK could do, and it used it. I think Serious Sam VR used it too, but I haven't tried that game.
If all you want is 60FPS, current cards work well enough, though 60FPS has too much latency for VR in my opinion.
Ever since I got a 144Hz monitor, my 980 Ti has struggled to hit 144Hz consistently at 2K. Even though I'm not doing VR, the extra frames are very noticeable, and it's hard to go back to 60Hz displays for anything 3D.
For 4K VR, you don't need to render the entire scene in 4K. Only a few degrees at the center of vision really need to be rendered at full resolution, and the image can be at much lower resolution the farther you get toward the periphery (foveated rendering).
Future headsets can use internal cameras to track the position of the pupils to figure out where that sweet spot needs to be every frame.
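To give a feel for the savings (illustrative numbers only, not what any shipping headset actually uses):

    # Assume a 4K eye buffer, ~110 degree FOV, a ~25 degree foveal region at full
    # resolution, and the periphery at quarter resolution (half in each axis).
    full_pixels = 3840 * 2160
    fovea_fraction = (25 / 110) ** 2
    shaded = full_pixels * fovea_fraction + full_pixels * (1 - fovea_fraction) / 4
    print(round(shaded / full_pixels, 2))  # ~0.29, i.e. roughly a 3-4x pixel saving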
I'm not a VR developer, but my impression from reading about this (or maybe from listening to some John Carmack talks) was the opposite: that VR actually demands much more from graphics cards than non-VR.
This was (going from memory here) due both to the much higher refresh rate that's required and to having to pre-render not only what the user sees but the whole 360-degree sphere they might look around at any moment, plus perhaps some other VR-specific requirements that don't spring to mind at the moment.
Was I dreaming that I read this? Maybe a VR developer or someone more knowledgeable than I could comment?
Rendering for VR is currently more intensive due to the high frame rate and double rendering (once for each eye). However, no one renders a 360-degree sphere, and single-pass stereo rendering, which does the CPU-side work for both eyes together, is already possible. [1] Still, at present, VR rendering is way more intensive than regular rendering, and it hasn't yet benefited from decades of optimization work the way regular rendering has.
The parent post is probably referring to foveated rendering - a research technique which uses eye tracking to render most of the screen in low quality and the area being looked at in higher quality. (Real vision is perceived something like this - try reading text that isn't in the center of your gaze.) Foveated rendering works better for a close screen, so eventually it's possible that the quality of VR rendering will surpass regular rendering.
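A rough idea of what single-pass stereo means (a conceptual sketch only; the helper names are invented, not any engine's real API): the scene is culled and submitted once with both eye views supplied, rather than run through the whole pipeline twice.

    # Conceptual only: one cull and one submission cover both eyes.
    def single_pass_stereo(scene, left_view, right_view, cull, draw_per_eye):
        visible = [obj for obj in scene if cull(obj, left_view, right_view)]
        # One submission writes both eye images (e.g. via instancing / viewport arrays).
        return draw_per_eye(visible, (left_view, right_view))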
The GPU is faster than a human can move. You don't need to render all around.
Even better, if you render just a couple of degrees extra you can double your framerate via interpolation. That way you can have 120Hz head tracking while only rendering the world at 60Hz.
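Some illustrative numbers for why a small margin is enough (assumptions: ~110 degree FOV, a 2160-pixel-wide eye buffer, and a fairly brisk 200 degree/s head turn):

    px_per_degree = 2160 / 110           # ~19.6 pixels per degree of FOV
    degrees_per_frame = 200 / 120        # head movement between 120Hz outputs
    print(degrees_per_frame * px_per_degree)  # ~33 px of shift to cover per frame

So roughly two degrees of extra rendered margin covers the shift needed for the in-between, reprojected frames.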