Hey @adigital,
I don’t know of any app like that, but your feedback has been heard.
The hard part isn’t the eye-tracking in this case, it’s the novel view synthesis. This is one of the few situations where real-time content like games would actually be easier than photos or videos, because a game already has accurate scene information for distant views and can render them on demand. Photos and videos, by contrast, would require either synthesizing the distant views ahead of time or in real time, or falling back to pure stereo and abandoning the goal of additional views altogether. That fallback creates a major timing problem: switching the location of the view has to be significantly faster when there isn’t another view waiting to the left or right of the current one for the user to land on.
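To make the distinction concrete, here’s a toy sketch (all names are hypothetical, nothing from our actual pipeline): a game can render the exact viewpoint the eye tracker reports, while photo/video playback can only snap to whichever views were synthesized ahead of time.

```python
# Toy sketch of eye-tracked view selection (hypothetical names, not our pipeline).
from bisect import bisect_left

def game_view(render_fn, tracked_x):
    """Games: render the exact viewpoint the tracker reports.
    The engine has full scene information for any x."""
    return render_fn(tracked_x)

def captured_view(precomputed, tracked_x):
    """Photos/videos: snap to the nearest pre-synthesized view.
    The sparser the views, the bigger the jump the user sees."""
    xs = sorted(precomputed)
    i = bisect_left(xs, tracked_x)
    nearest = min(xs[max(0, i - 1):i + 1], key=lambda x: abs(x - tracked_x))
    return precomputed[nearest]

# Pure stereo is the degenerate case: only two views, so any head motion
# past the pair forces an abrupt swap -- the timing problem described above.
stereo_only = {-1.0: "left_image", +1.0: "right_image"}
print(captured_view(stereo_only, tracked_x=0.4))  # -> "right_image"
```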
Generally, Leia’s focus is Lightfield content, as that’s what we’re best at. Though we’ve recently released some stereo-focused features at the request of enthusiasts, only a very tiny fraction of our user base uses enthusiast features, so we have to weigh that against what the majority of users will enjoy. But again, your feedback has been heard. When we have the chance, I’ll ensure the team explores what’s possible.
We are always improving our algorithms, so it could get better automatically in the future.
To give some context on what causes issues for the algorithm, a few things could be the problem:
- There are alignment issues in the stereo image. Even if they’re not visible to the human eye or in a stereoscope, the algorithm works at the pixel level and can “see” things we can’t (see the alignment sketch after this list).
- The camera is very different from our ground-truth training data. We’ve primarily trained our algorithms on content shot on the RED Hydrogen One, plus paid stereo data sources that use a variety of cameras and rigs, though only photos that have been verified as having perfect alignment. If the camera has a wide baseline, the content has a large amount of disparity, or the content has little to no disparity, our algorithm doesn’t do a great job of estimating depth correctly (the disparity check sketched further down is the kind of estimate that goes wrong here).
- The image contains repeating patterns like fence posts or tiles. The algorithm is bad at identifying such patterns and makes mistakes because of them.
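On the first bullet: here’s a rough illustration (my own example, not Leia’s actual check) of how a pixel-level alignment problem can be measured even when a stereoscope would never show it. A median vertical offset of even a pixel or two between the eyes is enough to confuse a depth algorithm.

```python
# Rough sketch of a pixel-level alignment check (not Leia's actual pipeline).
# Matches features between the two eyes and reports the median vertical offset,
# which should be ~0 px in a well-aligned stereo pair.
import cv2
import numpy as np

def vertical_misalignment(left_path, right_path):
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)
    kp_l, des_l = orb.detectAndCompute(left, None)
    kp_r, des_r = orb.detectAndCompute(right, None)

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_l, des_r)

    # Vertical offset per match; horizontal offset is legitimate disparity.
    dy = [kp_r[m.trainIdx].pt[1] - kp_l[m.queryIdx].pt[1] for m in matches]
    return float(np.median(dy))

# e.g. vertical_misalignment("left.jpg", "right.jpg") -> 1.8 px of shift/tilt
```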
As mentioned above, not all alignment issues are visible to the eye. The FinePix W3 is actually known for having poor alignment in many cases (it’s very inconsistent between individual cameras; yours may have very good or very poor alignment). Because of this, many W3 users run their MPOs through StereoPhoto Maker’s automatic alignment. When I A/B tested SPM’s alignment, I personally found a pretty large difference in how well Lume Pad processed MPOs from my W3, so I recommend giving that a try if you haven’t already.

That said, if the content has a LOT of disparity, there’s no easy way to translate it into a 4V image with a clean parallax effect without massive view jumps, so when the algorithm tries, it will usually end up flattening the image and creating artifacts. High-disparity content should probably be shown in ST mode or on another 3D display better optimized for it.
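If you want a quick way to flag content that’s likely too “deep” for a clean 4V conversion before you bother converting it, a crude check like this can estimate the disparity range (my own sketch with arbitrary numbers, not a shipped feature):

```python
# Crude sketch (arbitrary numbers, not a shipped feature): estimate a stereo
# pair's disparity range to flag content probably better left in ST mode.
import cv2
import numpy as np

def disparity_spread_percent(left_path, right_path):
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=128, blockSize=15)
    disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px
    valid = disp[disp > 0]

    # Spread between near and far content, as a percentage of image width.
    spread_px = np.percentile(valid, 95) - np.percentile(valid, 5)
    return 100.0 * spread_px / left.shape[1]

# Assumed rule of thumb: past a few percent of image width, expect the
# flattening/artifacts described above; consider ST mode instead.
```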
In the rare cases when people tell me the Lume Pad makes them uncomfortable, when I ask, they usually admit to having astigmatism or another eye-health issue. If that’s not the case for the people you’re demoing to, I’d be curious how they feel about other 3D displays, such as polarized 3D glasses-based systems.
What content are you showing them to start? I usually start with the home screen showing the immersive wallpaper: it’s a very simple effect without much disparity, so people’s eyes can adjust before I show them photos, games, or video that might have much more disparity. I don’t show people high-disparity stereo content unless they explicitly ask to see something with more depth, as I’ve found most people can’t handle very high-disparity content (that’s the exclusive realm of enthusiasts like us).
Yes, it depends on your IPD, but in general people should start about 30cm-36cm from the display, centered on it. From there, it’s much more comfortable to tilt the device to see other view zones and to move it back.
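For the curious, the IPD dependence falls out of simple geometry: adjacent view zones need to land on your two eyes, so the comfortable distance scales with IPD. The view-separation angle below is an assumption I picked to illustrate the math, not a published Lume Pad spec:

```python
# Toy geometry (the view-separation angle is assumed, not a published spec):
# adjacent view zones should land on the two eyes, so distance = IPD / tan(angle).
import math

def viewing_distance_cm(ipd_mm, view_separation_deg):
    return (ipd_mm / 10.0) / math.tan(math.radians(view_separation_deg))

# With an assumed ~11.2 degree separation between adjacent view zones, a
# 60-70mm IPD lands right around the 30cm-36cm starting distance above.
for ipd_mm in (60, 65, 70):
    print(ipd_mm, "mm IPD ->", round(viewing_distance_cm(ipd_mm, 11.2), 1), "cm")
```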