Today I took a photo with the Hydrogen, but then I realized the photo was taken in 2D instead of 4V.
I really wanted that photo in 3D, so I tried the 2D-to-4V conversion. Surprisingly, the conversion produced better 3D than the depth I'd expect from native 4V capture.
So my suggestion: add a setting for native 4V/3D photos that also runs the 2D-to-4V post-conversion for distant objects, while keeping the real depth the sensors can calculate for close objects. If something is close enough that the processor can easily find disparity points, use the real depth, but refine it with algorithms (e.g., with people's hair, disparity points will be erratic, but an algorithm can detect that the hair is part of a person's head).
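To illustrate the idea, here is a minimal sketch (not RED's actual pipeline) of blending the two depth sources per pixel. The array names, the confidence map, and the threshold are all hypothetical: where the sensor disparity is confident (close, well-textured objects) it keeps the measured depth, and elsewhere (distant objects, erratic regions like hair) it falls back to the converted depth.

```python
import numpy as np

def fuse_depth(stereo_depth, stereo_conf, mono_depth, conf_threshold=0.6):
    """Fuse sensor (stereo) depth with 2D-to-3D converted depth.

    Hypothetical inputs: stereo_depth and mono_depth in metres,
    stereo_conf in [0, 1]. Pixels whose disparity confidence clears
    the threshold keep the measured depth; the rest use the
    conversion's estimated depth.
    """
    use_stereo = stereo_conf >= conf_threshold
    return np.where(use_stereo, stereo_depth, mono_depth)

# Toy 2x2 scene: the right column is distant / low-confidence,
# so it takes the converted depth instead of the noisy sensor value.
stereo = np.array([[1.0, 9.0], [1.2, 9.0]])
conf   = np.array([[0.9, 0.2], [0.8, 0.1]])
mono   = np.array([[1.1, 5.0], [1.3, 6.0]])
fused = fuse_depth(stereo, conf, mono)
```

A real implementation would also smooth the boundary between the two sources and use segmentation (person/head masks) to clean up erratic regions, as described above.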
I would add a button to run that post-conversion on demand: you initially keep the original 4V photo, and you press the button to improve the depth only if you're not happy with the native camera results.