I mostly agree with that sentiment, with one exception: on the hardware version of the VB you had focus and IPD controls to make the image more comfortable. However, I haven’t found a good way to emulate those in software.
I tried a basic shader that simply moved the two eye images closer together and further apart as an analog to manual IPD adjustment, but it didn’t seem to work. I’ll revisit it.
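For reference, the idea was roughly this. A Python/NumPy sketch of the image-space shift, not the actual shader; the function names and the sign convention are mine:

```python
import numpy as np

def shift_eye_image(img: np.ndarray, shift_px: int) -> np.ndarray:
    """Shift one eye's image horizontally, padding the vacated edge with black."""
    out = np.zeros_like(img)
    if shift_px > 0:
        out[:, shift_px:] = img[:, :-shift_px]
    elif shift_px < 0:
        out[:, :shift_px] = img[:, -shift_px:]
    else:
        out[:] = img
    return out

def apply_ipd_offset(left: np.ndarray, right: np.ndarray, ipd_px: int):
    """Positive ipd_px pushes the eye images apart; negative pulls them together."""
    return shift_eye_image(left, -ipd_px), shift_eye_image(right, ipd_px)
```

In a real shader this would just be a per-eye horizontal UV offset rather than a copy.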
I’m also looking into hooking into the neural network Leia uses for their YouTube video playback feature. With that, though, I imagine they can buffer frames ahead of time, so it remains to be seen what kind of FPS hit we’ll take trying to add stereo depth estimation on top. In theory, though, if it’s playable, it would make depth intensity, focal distance, and even bokeh possibilities. Even if it turns out a little less than perfect, it seems like a fun experiment.
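To sketch what I mean by depth-based synthesis: given a per-pixel depth map from the network, you warp the mono frame horizontally per eye, and both the depth-intensity and focal-distance controls fall straight out of the disparity formula. A hypothetical, naive forward warp (no hole filling, names are mine):

```python
import numpy as np

def synthesize_eye(frame, depth, eye_sign, intensity=8.0, focal=0.5):
    """Warp a mono frame into one eye's view using a depth map in [0, 1].
    Pixels at `focal` depth get zero disparity (they sit on the screen plane);
    `intensity` scales how far everything else pops out or recedes.
    eye_sign = -1 for left, +1 for right.
    """
    h, w = depth.shape
    out = np.zeros_like(frame)
    disparity = (eye_sign * intensity * (depth - focal)).round().astype(int)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip(xs + disparity[y], 0, w - 1)
        out[y, tx] = frame[y]  # forward warp; a real version fills disocclusion holes
    return out
```

Bokeh would then be a depth-dependent blur pass on top of the same map.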
With the 3DS, I’m at least able to expose a control for depth intensity that maps to the hardware’s 3D slider, and the games will respect it.
In the case of something like the Wii and the GameCube, those 3D implementations are post-processing effects and aren’t hindered by how individual games choose to implement their rendering, so I should be able to offer the same depth control by adjusting the geometry shaders that perform the per-eye offsets, without needing to add AI inference on top.
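Roughly, the per-eye offset those shaders apply looks something like this. A Python sketch of the math only, not any emulator’s actual shader code; the formula and parameter names are my assumptions. Vertices at the convergence depth land on the screen plane, and the exposed depth control is just a multiplier on the whole offset:

```python
def stereo_clip_offset(clip_x, clip_w, eye, separation, convergence, intensity=1.0):
    """Offset a clip-space x coordinate for one eye of a stereo pair.
    eye = -1 for left, +1 for right. Vertices whose depth (clip_w) equals
    `convergence` get zero offset; `intensity` is the user-facing depth control
    scaling the effect from flat (0.0) to full separation (1.0).
    """
    return clip_x + eye * intensity * separation * (clip_w - convergence)
```

Dialing `intensity` down to 0.0 collapses both eyes to the same image, which is exactly the behavior you’d want from a slider-style control.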
Lots to explore!