Acer SpatialLabs View Pro 27

Acer has announced a 3D monitor called SpatialLabs View Pro 27.

It is a 27" PC monitor with 4K resolution and a 160 Hz refresh rate. It can switch between 2D and 3D, supports head tracking and motion parallax, has an optional detachable hood, and includes Acer Immerse Audio for spatial surround sound.

I was able to see it at the Acer booth during the Metacenter Global Week conference in Orlando. Here’s a video I took of the display there so you can see what it looks like on camera.

Please keep all discussion about this product on this thread.


Is Leia actively working with Acer on this device or is it just Acer doing their own thing with licensed 3D tech?

That’s a question that you should reach out to Acer to get an answer to.

That said, we plan to have a future update of our SDK and some of our Windows apps (i.e. SR Hub, SR Media Player) work with the upcoming device.

Where could I find more information about Leia Windows apps?


Will it have the TrueGame features, especially the Ultra 3D mode? That is a long-anticipated product for Nvidia 3D Vision users. There are thousands of us who want a proper new replacement for 3D gaming to update our setups.

For Acer software and products, you need to ask Acer.

So, do you confirm that the TrueGame features are not part of the Simulated Reality ecosystem, but Acer-exclusive?

Acer TrueGame is definitely not Leia or Dimenco technology.

That doesn’t mean we don’t have our own equivalents or won’t make a competing offering. But they did not use our software for that application/service specifically.

I hope it relies on real polygon depth (an added second virtual camera) instead of AI.

None of the solutions I know of for games use real-time AI for 3D conversion. They either use a second camera (known by many as geometry 3D) or they use the game’s internal depth map (known by many as depth 3D).

Both of those are using “real polygons” for depth. The big difference is whether they’re rasterizing a discrete individual view per eye with the polygons or they’re synthesizing novel stereo views from a single albedo using the depth provided from the polygons.
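To make the "geometry 3D" (second virtual camera) approach concrete, here is a minimal sketch of how a stereo pair of off-axis projection matrices can be built. The function name and parameters are illustrative, not any vendor's API; real engines vary in conventions (handedness, clip-space depth range), so treat this as a sketch under those assumptions:

```python
# Sketch of "geometry 3D": render the scene twice from two horizontally
# offset virtual cameras with asymmetric (off-axis) frusta, so both eyes
# converge on the same plane. Illustrative only.
import numpy as np

def stereo_projections(fov_y_deg, aspect, near, far,
                       interaxial=0.065, convergence=2.0):
    """Return (left, right) 4x4 off-axis projection matrices and the
    per-eye horizontal camera offsets in world units."""
    top = near * np.tan(np.radians(fov_y_deg) / 2.0)
    half_w = top * aspect
    # Shift each frustum horizontally so the eyes converge at `convergence`.
    shift = (interaxial / 2.0) * near / convergence
    mats = []
    for eye in (-1.0, +1.0):            # -1 = left eye, +1 = right eye
        l = -half_w + eye * shift
        r = half_w + eye * shift
        m = np.zeros((4, 4))
        m[0, 0] = 2 * near / (r - l)
        m[0, 2] = (r + l) / (r - l)     # asymmetric horizontal skew
        m[1, 1] = near / top
        m[2, 2] = -(far + near) / (far - near)
        m[2, 3] = -2 * far * near / (far - near)
        m[3, 2] = -1.0
        mats.append(m)
    offsets = (-interaxial / 2.0, +interaxial / 2.0)
    return mats[0], mats[1], offsets
```

Because each eye rasterizes the full scene, occlusions and view-dependent effects come out correct, at roughly twice the rendering cost.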

The second virtual camera gives the most real depth, but some games use tricks like applying shadows to the final 2D image instead of to the polygons, making shadowed areas appear at a fixed near distance while areas without shadows keep real depth. The solution is to use the depth map, but that produces halos and other issues that make the 3D less realistic, so it should only be used for games whose tricks break real 3D depth under the second-camera approach.
So, for these games, a solution could be a hybrid that combines the dual-camera approach with the depth-map approach, avoiding both the dirty tricks developers use in such games and the artifacts of the depth-map approach.
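For anyone curious where the "halos" in depth 3D come from, here is a toy sketch of depth-image-based rendering: each pixel of a single rendered image is shifted horizontally by a disparity derived from the depth map, and the pixels revealed behind foreground objects have no data, producing holes at object edges. All names and parameters are made up for illustration:

```python
# Minimal sketch of "depth 3D" (depth-image-based rendering). A single
# grayscale image plus a depth map is warped into a new eye view; the
# disoccluded pixels are left as holes -- the source of the halo
# artifacts discussed above. Illustrative only.
import numpy as np

def synthesize_eye(color, depth, max_disparity=8):
    """color: (H, W) grayscale image; depth: (H, W) in [0, 1], 1 = near.
    Returns the warped view and a boolean mask of unfilled holes."""
    h, w = color.shape
    out = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    disparity = np.round(depth * max_disparity).astype(int)
    # Paint far-to-near so nearer pixels overwrite farther ones.
    for d in range(0, max_disparity + 1):
        ys, xs = np.nonzero(disparity == d)
        nx = xs + d
        ok = nx < w
        out[ys[ok], nx[ok]] = color[ys[ok], xs[ok]]
        filled[ys[ok], nx[ok]] = True
    holes = ~filled   # disocclusions: no source pixel lands here
    return out, holes
```

Real implementations inpaint or stretch neighboring pixels into those holes, which is exactly where the smearing and halo artifacts around object silhouettes come from.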