LVF file with depth map

Hello!

The Leia Video Format (LVF) documentation states:

Leia Video Format images (abbreviated as LVF) are MP4 images with metadata that may contain a depth map and/or one or more video tracks known as “views”

I have a video (mp4) and a depth map (also mp4). How can I combine them in an LVF file so that the LumePad2 uses this particular depth map?

If there is no command line (ffmpeg) or tool yet, is there a sample video I can look into?

I found this thread regarding converting LVF to SBS:

Here, @Nima explains that the stream “abl” is used for the left image and “abr” for the right image. Is there a stream identifier for depth maps?

Or is there any other way to use custom depth maps for videos?

Many regards,
Jens Duttke

As you said yourself, the LVF format is close to SBS, but instead of the L and R images sitting next to each other, they overlap in a way.

I can see it being possible to make an SBS-to-LVF converter, but not one for depth maps. If the aforementioned is ever developed, you could try converting to SBS first and then to LVF.

Hey Jens, though we have built the LVF format with a plan to eventually support depth maps, we currently don’t have the ability to utilize that feature if you were to put a depth track into the file.

Is there a specific reason you’d like to put a depth map into an LVF? We have a plan to allow depth maps to be uploaded to Immersity AI to generate SBS from monoRGB + Depth in the near future.

My idea is to use the algorithms developed and optimized for the Lume Pad 2.

I wrote a Python script that generates an SBS video from the input videos (color + depth map). However, there are various parameters that affect the result. While I can increase the values for VR headsets (to achieve a stronger 3D effect) since each eye gets its own image, this quickly leads to ghosting on the Lume Pad 2.
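For context, the core of such a script is usually a simple depth-image-based rendering step: shift each pixel horizontally by a disparity derived from the depth map, once per eye, then place the two views side by side. Below is a minimal sketch of that idea, assuming a normalized depth map (1.0 = near) and a hypothetical `max_disparity` tuning parameter; the hole filling that a production script would need after occlusions is deliberately omitted.

```python
import numpy as np

def rgbd_to_sbs(rgb, depth, max_disparity=16):
    """Naive depth-image-based rendering sketch:
    rgb:   (H, W, 3) uint8 color frame
    depth: (H, W) float depth map in [0, 1], 1.0 = nearest
    Returns an (H, 2W, 3) side-by-side stereo frame.
    """
    h, w, _ = rgb.shape
    # Map depth to an integer per-pixel disparity in pixels.
    disparity = (depth * max_disparity).astype(np.int32)
    left = np.zeros_like(rgb)
    right = np.zeros_like(rgb)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            # Shift pixels in opposite directions for the two eyes.
            left[y, min(w - 1, x + d)] = rgb[y, x]
            right[y, max(0, x - d)] = rgb[y, x]
    # Note: disoccluded pixels stay black here; a real script
    # would inpaint or stretch neighboring background pixels.
    return np.concatenate([left, right], axis=1)
```

The `max_disparity` value is exactly the kind of parameter that works on a VR headset but causes ghosting on a lenticular display.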

Therefore, I was hoping that what is stated in the support document on the website is accurate, so that I could rely on your years of experience in this field and simply let the Lume Pad do the work. :wink:

Many regards,
Jens Duttke

Hey Jens,

What you’re doing with SBS seems to be the correct workflow. There’s no native system to generate stereo from RGB+D in real-time on Lume Pad 2.

You need to decrease the disparity and/or decrease the contrast in your content until you can get the maximum depth that you’d like to achieve without cross-talk.
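In code terms, that tuning amounts to two scalar knobs applied before rendering the stereo pair: scale the disparity map down, and compress the image contrast toward its mean. A small sketch of that pre-processing step follows; the function name and the default scale factors are illustrative, not Leia-recommended values.

```python
import numpy as np

def reduce_crosstalk(frame, disparity, disparity_scale=0.6, contrast_scale=0.85):
    """Tame content for an auto-stereoscopic display (illustrative sketch):
    frame:     (H, W[, 3]) uint8 image
    disparity: per-pixel disparity map (float, in pixels)
    Returns (contrast-reduced frame, scaled disparity map).
    """
    # Shrink disparity amplitude uniformly.
    scaled_disp = disparity * disparity_scale
    # Pull pixel values toward the global mean to flatten contrast.
    f = frame.astype(np.float32)
    mean = f.mean()
    flattened = mean + (f - mean) * contrast_scale
    return np.clip(flattened, 0, 255).astype(np.uint8), scaled_disp
```

Lowering `disparity_scale` trades 3D pop for less ghosting, so iterating on these two values while viewing on the device is the practical workflow.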

Because VR headsets have total view isolation and no crosstalk, you can achieve much higher disparities on those devices than auto-stereoscopic devices.

Just keep tweaking the SBS content till it looks great on Lume Pad 2.

Oh, I see. I thought this was possible because the Leia Player can also convert 2D videos to 3D on the device. I assumed this was done using an algorithm (AI) that generates a depth map for the 2D images and then converts them into an SBS video, all of which works impressively fast.

But then I guess I’ll need to tweak my Python script a bit more until it delivers results similar to the LumePad 2 itself :slight_smile:

Many regards,
Jens Duttke