Blender SDK Camera Rig Questions

I have a couple of questions about the Blender rig.

I'm sure a lot of these are things I'm misunderstanding or am wrong about, but I thought I'd compile them:

Are there any plans to add Eevee support?
Never mind, I just needed to update the settings for all 5 scenes.

I noticed that even when I change my render samples to a lower number, renders always seem to use 32 samples. Is this hard-coded somewhere?
Never mind, I just needed to update the settings for all 5 scenes.

I noticed that there's no visualization for the “near” clipping plane. It could be helpful to have that enabled by default.

I noticed that the rig flips out when the focal/convergence plane is too close to the camera. I'm not sure if there's a minimum distance hard-coded somewhere? It would be nice to be able to control that, or just have it lowered if possible.

It would be cool to have independent controls for FOV and the “far plane”. Right now it's all baked into the focal length control, so you can't easily modify one without the other. (I don't know if what I'm asking for here makes sense in terms of the physics of the rig; maybe it's not possible to modify these things independently, but I feel like the Unity camera rig allows for it, if memory serves.)

When it comes to enabling depth of field, you have to manually update the individual settings for each of the 5 cameras in the rig. It would be nice if they used drivers that referenced the “2D camera”, so that any time you change the depth of field options on the visualizer camera, the rig cameras update as well.
(The documentation is somewhat clear that these settings need to be manually synchronized.)

When I set up world nodes with an environment map (HDRI), the map is not used, neither for the background nor for the lighting; I have to manually add additional lights to the scene for the final render to be lit. Is this something I've misconfigured, or a known limitation?
Never mind, I just needed to update the settings for all 5 scenes.

I noticed that even when I have Film > Transparent disabled, the output renders still have a transparent background rather than rendering the world environment. So I end up having to add a skybox or a textured sphere / other geometry if I want the background filled with pixels. Is this just a limitation of the current camera rig, or am I missing something obvious?
Never mind, I just needed to update the settings for all 5 scenes.
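Since three of the items above came down to "update the settings for all 5 scenes", here's a minimal sketch of a script that copies a few settings from a reference scene to the rest, so you only change them once. Which settings to copy (and the idea that the rig's scenes should all share them) are my assumptions; adjust the list to taste and run it from Blender's Text Editor:

```python
# Settings to mirror from the reference scene to the rig's other scenes.
# Each entry is (property group on the scene, attribute name).
SETTINGS_TO_SYNC = [
    ("render", "film_transparent"),  # Film > Transparent
    ("render", "engine"),            # e.g. switching to Eevee
    ("cycles", "samples"),           # render samples (Cycles only)
]

def sync_settings(source_scene, target_scenes, settings=SETTINGS_TO_SYNC):
    """Copy each (group, attr) setting from source_scene to every target scene."""
    for scene in target_scenes:
        for group, attr in settings:
            src_group = getattr(source_scene, group, None)
            dst_group = getattr(scene, group, None)
            if src_group is not None and dst_group is not None and hasattr(src_group, attr):
                setattr(dst_group, attr, getattr(src_group, attr))
        # Share the world datablock too, so the HDRI lights every render.
        scene.world = source_scene.world

def main():
    import bpy  # only available inside Blender
    source = bpy.context.scene
    targets = [s for s in bpy.data.scenes if s is not source]
    sync_settings(source, targets)

# In Blender's Text Editor: main()
```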

I haven't found a good way to export the camera rig into an existing project. So far the only approach that seems to work is importing my existing project assets into a copy of the camera rig file. This isn't always ideal, as Blender's export/import and “append” processes sometimes leave out things like instanced objects. (Append is a pain because you have to repeat it for each collection/asset; it doesn't allow multi-select. Maybe newer versions have fixed this, I need to check; I know there's a new asset library, so maybe that solves it.)

It would be cool if there was a plugin script that could “inject” the camera rig into the current project. (Note: the docs mention this as well.)
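Until an official add-on exists, a rough sketch of that "inject" idea could use `bpy.ops.wm.append` to pull the rig's collection out of its .blend file. The file path and the collection name `CameraRig` are assumptions; point them at the actual rig file and its top-level collection:

```python
import os

def build_append_args(blend_path, collection_name):
    """Build the filepath/directory/filename arguments that
    bpy.ops.wm.append expects when appending a collection
    from inside another .blend file."""
    directory = os.path.join(blend_path, "Collection") + os.sep
    return {
        "filepath": os.path.join(directory, collection_name),
        "directory": directory,
        "filename": collection_name,
    }

def inject_camera_rig(blend_path, collection_name="CameraRig"):
    import bpy  # only available inside Blender
    bpy.ops.wm.append(**build_append_args(blend_path, collection_name))

# In Blender's Text Editor, something like:
# inject_camera_rig("/path/to/camera_rig.blend")
```

Note that appending the collection this way pulls its cameras along with it, but the per-camera scene setup would still need to be recreated, which is probably why the docs suggest a proper add-on.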

Another minor annoyance is that sometimes the control widgets get lost; it could be helpful to set them to render on top (I think that's an option in the viewport display settings, I need to confirm).

Oh, and if you select the rig armature and hit “;” (semicolon) to focus the view on the controls, the view gets blasted out to the middle of nowhere and you have to find your way back. So currently the only way to interact with them is to carefully navigate near them by eye, or to search the hierarchy to select them. It's not unusable, but it did take some getting used to. (A quick workaround is to enter Edit Mode, focus the view on one of the bones in the armature, then exit back to Object Mode.)

OK, I just re-read the manual, and most of these items are addressed in the documentation:

The possibility of an add-on in the future is mentioned.
The fact that each camera has its own scene/world settings explains the missing HDRI (I think; I need to test to confirm).

I may be able to figure out how to add my own “driver” to sync the DoF settings, but the docs mention that they have to be manually replicated.

And the docs mention that the setup could benefit from some additional scripting to make each camera render at 1/4 resolution, so I might look into implementing that.
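As a starting point for that scripting, here's a minimal sketch. I'm assuming "1/4 resolution" means a quarter of the pixel count (i.e. halving each dimension), and that every scene in the rig file should get the reduced size; filter the scenes by name if your rig lays things out differently:

```python
def quarter_resolution(width, height):
    """Halve each dimension, giving 1/4 the pixel count of the base render."""
    return max(width // 2, 1), max(height // 2, 1)

def apply_quarter_resolution(base_width=1920, base_height=1080):
    import bpy  # only available inside Blender
    w, h = quarter_resolution(base_width, base_height)
    for scene in bpy.data.scenes:
        # Assumption: all of the rig's scenes render at the reduced size;
        # skip or special-case the 2D/compositing scene if needed.
        scene.render.resolution_x = w
        scene.render.resolution_y = h
        scene.render.resolution_percentage = 100

# In Blender's Text Editor: apply_quarter_resolution()
```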

Here's a version of the camera rig that has “driver” connections set up so that all of the Camera > Depth of Field settings update for all 4 render cameras any time you change one of the settings on the 2D Reference camera:
[image]

Note: the only field that isn't automatically synced is “Focus Object”, so either leave it set to “Convergence Plane” or, if you change it, you'll still need to change it manually for all 5 cameras in the rig.
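For anyone who wants to recreate those driver connections on their own copy, here's a sketch of doing it with `driver_add`. The object names (“2D Camera” and the four render cameras) and the exact list of DoF properties are assumptions; I've limited it to float properties, since “Focus Object” is a datablock pointer and can't be driven this way:

```python
# Depth of Field properties that are plain floats and can be driven.
DOF_FLOAT_PROPS = [
    "dof.focus_distance",
    "dof.aperture_fstop",
    "dof.aperture_rotation",
    "dof.aperture_ratio",
]

def dof_driver_paths():
    """Return the camera-data paths we intend to drive (no bpy needed)."""
    return list(DOF_FLOAT_PROPS)

def add_dof_drivers(source_cam_name="2D Camera", rig_cam_names=()):
    import bpy  # only available inside Blender
    source_data = bpy.data.objects[source_cam_name].data
    for name in rig_cam_names:
        cam_data = bpy.data.objects[name].data
        for path in dof_driver_paths():
            fcurve = cam_data.driver_add(path)
            drv = fcurve.driver
            drv.type = 'AVERAGE'  # single variable -> just copies its value
            var = drv.variables.new()
            var.type = 'SINGLE_PROP'
            var.targets[0].id_type = 'CAMERA'
            var.targets[0].id = source_data
            var.targets[0].data_path = path

# In Blender, with whatever your four render cameras are called:
# add_dof_drivers("2D Camera", ["Camera.L", "Camera.R", "Camera.T", "Camera.B"])
```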