Volinga Suite - User Manual
What’s new?
The brand new 0.3.1 version of our UE plugin is out!
We added support for a brand new technology: 3D Gaussian Splatting!
- ✨ Added support for 3D Gaussian Splatting!
- ⚙️ Real-time update of Volinga NeRF Actor properties.
- ⚡ Performance improvements.
- 📹 Multi-camera and multi-viewport rendering.
- 🎨 NeRF Postprocessing options.
- 🎮 Unreal 5.3 support.
- 📷 Improved nDisplay compatibility: Inner/Outer frustum support.
- ↺ A new degree of freedom: Rotate your NeRF inside the cube!
1. What is Volinga Suite
Volinga Suite is a groundbreaking tool that empowers creators to effortlessly create and render NeRFs in real time using Unreal Engine. This innovative suite comprises three key components: the Volinga Renderer, the Volinga Exporter, and the Volinga Creator. The Volinga Renderer is a powerful tool that renders NeRF models in real time, providing a seamless and immersive experience for users. The Volinga Exporter enables creators to easily export their NeRF models to the NVOL file format, making it easy to share and collaborate with others. Finally, the Volinga Creator is a user-friendly interface that streamlines the process of creating NeRF models.
2. Volinga Renderer
2.1 What is Volinga Renderer
Volinga Renderer is software developed by Volinga that enables the real-time rendering of NeRFs. It is powered by the NVOL file format. Volinga Renderer is part of the Volinga Suite, which also includes Volinga Creator and Volinga Exporter.
2.2 Plugin for Unreal Engine
Volinga Renderer can be integrated into Unreal Engine using the plugin provided by Volinga. We will go through the installation and use of this plugin.
2.3 Installation of the plugin
To install the Volinga plugin for Unreal Engine, locate the engine's installation path on your system (the default is “C:\Program Files\Epic Games\UE_5.X\Engine”) and copy the “VolingaRenderer” folder into the engine's “Plugins” folder.
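The copy step above can be scripted; here is a minimal sketch in Python. The paths and the helper function are our own placeholders, not part of Volinga — adjust them to your engine version and to wherever you extracted the plugin download.

```python
import shutil
from pathlib import Path

# Placeholder paths -- adjust to your Unreal Engine version and to
# the location of your extracted VolingaRenderer download.
engine_dir = Path(r"C:\Program Files\Epic Games\UE_5.3\Engine")
plugin_src = Path(r"C:\Downloads\VolingaRenderer")

def install_plugin(plugin_src: Path, engine_dir: Path) -> Path:
    """Copy the plugin folder into the engine's Plugins directory."""
    dest = engine_dir / "Plugins" / plugin_src.name
    shutil.copytree(plugin_src, dest, dirs_exist_ok=True)
    return dest

# install_plugin(plugin_src, engine_dir)
```

After the copy, the plugin appears under the engine's “Plugins” folder and can be enabled from the project's Plugins window as described below.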
If the plugin is correctly installed, we will see it in the “Plugins” section of our Unreal Engine project:
The plugin can now be enabled by clicking the checkbox to the left of the Volinga logo. A warning message will appear to inform us that this is a beta-version plugin:
Now, the Volinga Renderer plugin for Unreal Engine is ready to use.
2.4 Volinga NeRF Actor
Volinga NeRF Actor is the core actor of the plugin. It acts as a placeholder for NeRFs. A Volinga NeRF Actor is composed of a cubic mesh, which limits the region where the NeRF will be rendered, and an NVOL asset, which holds the NeRF. Using a Volinga NeRF Actor is as easy as dragging an NVOL asset into the viewport:
The Volinga NeRF Actor also supports Unreal's gizmos for translation and rotation, and the scale gizmo can be used to crop the NeRF (whenever the NeRF's Unbound property is disabled):
Volinga NeRF Actor allows for compositing NeRFs and 3D objects in a seamless way:
By modifying the properties under NeRF Settings of the Volinga NeRF Actor, we can adjust the behavior of Volinga Renderer. There are two different kinds of NeRFs that a Volinga NeRF Actor can hold:
2.4.1 NeRFacto
NeRFacto is the default method for real-data captures of static scenes included in the NeRFStudio framework (https://docs.nerf.studio/). Volinga provides a modified version that runs in real time.
The properties that can be modified when using a NeRFacto-based NVOL are the following:
- NVOL: This property allows us to select the NVOL (i.e. NeRF Scene) we want to be rendered. We can select any of the NVOL assets existing in our project.
- Unbound: This property allows you to set the cubic boundaries as crop limits.
- ICVFXCamera: If you set this field to a camera in the scene, Volinga will render the NeRF from that camera's point of view when you press Play In Editor.
- Rotation Offset: This property allows you to rotate the NeRF with regard to the cube center.
- Location Offset: This property modifies the position of NeRF origin with regard to the cube center.
- Scale Offset: This property allows you to modify the scale of the NeRF.
- Lock Scales: This property will lock the current scale of the bounding box to the NeRF scale. In this way, the cropped section is kept constant when rescaling the NeRF.
- Enable Dynamic resolution: This property enables the use of dynamic resolution. If enabled, Volinga Renderer will reduce the render resolution to reach the target FPS set by the user. If disabled, Volinga Renderer will render at the viewport resolution.
- Movement Target FPS: This is the target frame rate when the camera in the viewport is moving. To trigger this target FPS, hold down the right mouse button.
- Static Target FPS: This is the target frame rate when the camera in the viewport is still. This is also the default target frame rate.
- Resolution Multiplier: This property enables the user to manually reduce the render resolution to increase the frame rate. It has to be set between 0 and 1.0. This property has no effect if Dynamic Resolution is enabled.
- Render Method: This property allows the user to select between the default render method (Quality optimized) and a new inference mode optimized for higher frame rates (Speed optimized). The latter can increase rendering speed by up to 33%, but may produce artifacts when moving the camera in some NeRFs.
- Ray Samples: This property sets the number of samples evaluated by the NeRF along each camera ray. A larger number increases rendering quality at the cost of frame rate, while a smaller number reduces quality but increases the frame rate.
- Tint: Allows the NeRF to be tinted with a solid color.
- Saturation: Set color saturation.
- Shadows: Set the dark tones of the NeRF.
- Gamma: Set the middle tones of the NeRF.
- Lights: Set the light tones of the NeRF.
- Hue: Set the hue tint.
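To illustrate how the Resolution Multiplier interacts with the viewport size, here is a minimal sketch. The function name and the lower clamp value are our own assumptions, not part of the plugin's API; the plugin simply scales the render resolution by the multiplier.

```python
def render_resolution(viewport_w: int, viewport_h: int,
                      multiplier: float) -> tuple[int, int]:
    """Sketch of the Resolution Multiplier behaviour.

    The multiplier must lie between 0 and 1.0; at 1.0 the NeRF
    renders at full viewport resolution. The lower clamp (0.1)
    is our own choice to avoid a zero-sized image.
    """
    m = min(max(multiplier, 0.1), 1.0)
    return (round(viewport_w * m), round(viewport_h * m))

# Halving the resolution of a 1080p viewport:
# render_resolution(1920, 1080, 0.5) -> (960, 540)
```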
2.4.2 3D Gaussian Splatting
3D Gaussian Splatting is a method developed by INRIA and the Max Planck Institute (https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/). It provides high-quality reconstruction and high rendering performance. The properties that can be modified when using a 3D Gaussian Splatting-based NVOL are the following:
- NVOL: This property allows us to select the NVOL (i.e. NeRF Scene) we want to be rendered. We can select any of the NVOL assets existing in our project.
- Unbound: This property allows you to set the cubic boundaries as crop limits.
- ICVFXCamera: If you set this field to a camera in the scene, Volinga will render the NeRF from that camera's point of view when you press Play In Editor.
- Rotation Offset: This property allows you to rotate the NeRF with regard to the cube center.
- Location Offset: This property modifies the position of NeRF origin with regard to the cube center.
- Scale Offset: This property allows you to modify the scale of the NeRF.
- Lock Scales: This property will lock the current scale of the bounding box to the NeRF scale. In this way, the cropped section is kept constant when rescaling the NeRF.
- Lower alpha threshold: Removes NeRF pixels whose alpha channel is lower than the threshold by making them fully transparent. Used to remove artifacts.
- Upper alpha threshold: Makes NeRF pixels whose alpha channel is higher than the threshold fully opaque. Used to remove undesired transparency.
- Tint: Allows the NeRF to be tinted with a solid color.
- Saturation: Set color saturation.
- Shadows: Set the dark tones of the NeRF.
- Gamma: Set the middle tones of the NeRF.
- Lights: Set the light tones of the NeRF.
- Hue: Set the hue tint.
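The two alpha thresholds can be sketched per pixel as follows. This is our own illustration of the behaviour described above, not the plugin's actual implementation:

```python
def apply_alpha_thresholds(alpha: float, lower: float,
                           upper: float) -> float:
    """Sketch of the lower/upper alpha threshold behaviour.

    Pixels with alpha below `lower` become fully transparent
    (removing faint artifacts); pixels above `upper` become fully
    opaque (removing undesired transparency). Values in between
    are left untouched.
    """
    if alpha < lower:
        return 0.0
    if alpha > upper:
        return 1.0
    return alpha

# apply_alpha_thresholds(0.05, 0.1, 0.9) -> 0.0  (artifact removed)
# apply_alpha_thresholds(0.95, 0.1, 0.9) -> 1.0  (forced opaque)
```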
2.5 Editor Preview
NeRF rendering can consume significant resources. Therefore, the Volinga plugin allows you to disable it when working on other parts of the level.
To disable/enable Editor Preview 💚, we only need to click on the button with the Volinga icon next to the simulation button.
Whenever Editor Preview 💚 is enabled, the button will be displayed in green. If it is disabled, it will be displayed in gray.
2.6 Volinga and Disguise’s RenderStream
To use Volinga Renderer together with Disguise’s RenderStream, we recommend using the native integration, which removes the need to use Unreal Engine and improves performance. You can learn more about it here.
2.7 Volinga and Pixotope
Volinga can be used together with Pixotope using our custom plugin.
2.8 Volinga and Nuke Server
Volinga can be used in NukeX through the Nuke Server plugin. Using a Level Sequencer and the Unreal Reader node, we can animate cameras in Unreal and bring the renders into Nuke.
3. Volinga Exporter
Volinga Exporter is a tool provided by Volinga to convert .ckpt files trained using NeRFStudio and .ply files created using 3D Gaussian Splatting into NVOL files. In the case of NeRFStudio, Volinga Exporter only supports the Volinga model, which is an external method. You can add Volinga to your existing NeRFStudio installation using:
pip install git+https://github.com/Volinga/volinga-model
Or you can follow the instructions at https://github.com/Volinga/volinga-model. Once you have added the Volinga method, you can train a new NeRF:
ns-train volinga --data /path/to/your/data --vis viewer
Once the training is done, you can find your checkpoint file in the outputs/path-to-your-data/volinga
folder. Then, you can drag it to Volinga Suite to export it to NVOL.
In the case of 3D Gaussian Splatting, you can create .ply files by following the instructions at https://github.com/graphdeco-inria/gaussian-splatting, and convert them into an NVOL file following the same process.
4. Volinga Creator
To train NeRFs using Volinga Creator you can just drag and drop the training images (.jpg, .jpeg, .png, .tif, .tiff) or a training video (.mp4, .mov).
After filling in the file name and the #hashtag, you can upload the media files and wait for your NVOL to be generated.
4.1 Best practices for capturing scenes
When capturing videos or images to create NeRFs, the capture process has a great influence on output quality. We strongly recommend following this guide developed by Jonathan Stephens and Jared Heinly:
We also recommend watching the episode of the podcast “Computer Vision Decoded” where they explain this guide in detail: https://www.youtube.com/watch?v=AQfRdr_gZ8g&t=4s
5. Volinga Desktop
Volinga Desktop brings the complete power of Volinga Suite to your computer. At this moment, Volinga Desktop provides two different modules: Volinga Installer and Volinga Exporter.
5.1 Volinga Installer
Volinga Installer provides one-click installation for the different Volinga plugins:
When a plugin is selected to be installed, a dialog window will pop up to select the installation path. For each of the plugins, the installation path will be different:
Unreal Engine: You will have to select the Unreal Engine installation folder, which is usually under “C:\Program Files\Epic Games\UE_5.X\Engine”:
Pixotope: You will have to select the Pixotope Engine installation folder, which is usually under “C:\Program Files\Pixotope\{Pixotope Version}\Pixotope Engine”.
Disguise: You will have to select the RenderStream projects folder, usually located under “C:\Users\{current-user}\Documents\RenderStream Projects”, replacing {current-user} with your user name.
Once the installation is finished, your plugin will be ready to be used!
5.2 Volinga Creator
In the Creator section you can create new NVOLs, either online (as on the web) or locally, if you have a graphics card that meets the minimum specifications.
To create a new NVOL, simply drag and drop a dataset onto the "+" button, or click the same button to select the files manually. The allowed datasets are the following:
- JPG, PNG or TIFF images. If the training is online, the maximum number of images is 2000. Currently only images with a colour depth of 8 bits are accepted.
- Videos in MP4 or MOV format. The maximum number of videos in online trainings is 1, and their size cannot exceed 5 GB. In local trainings you can add as many videos as you want, but they must be recorded with the same camera.
- PLY or CKPT files.
- COLMAP datasets. To add COLMAP datasets you must follow this file and folder structure:
  - colmap
    - images
      - image1.jpg
      - image2.jpg
      - …
    - sparse
      - 0
        - cameras.bin
        - images.bin
        - points3D.bin
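As a quick sanity check before uploading a COLMAP dataset, a sketch like the following verifies the folder layout. This helper is our own, not part of Volinga; it assumes .jpg images and the standard COLMAP .bin reconstruction files.

```python
from pathlib import Path

def check_colmap_layout(root: str) -> bool:
    """Verify the expected COLMAP dataset structure:

        colmap/
          images/   (the .jpg training images)
          sparse/0/ (the COLMAP .bin reconstruction files)
    """
    base = Path(root)
    images = base / "images"
    sparse0 = base / "sparse" / "0"
    return (images.is_dir()
            and any(images.glob("*.jpg"))
            and sparse0.is_dir()
            and any(sparse0.glob("*.bin")))
```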
Once you have done this, you must select the type of training you want to do, NeRF or 3DGS, and whether you want to do it locally or online.
NOTE 1: Local training is only available for 3DGS.
NOTE 2: NVOLs generated locally by non-Enterprise clients will be generated with a watermark.
Next, configure a name for the NVOL, optionally add tags, and then click the Process button.
Advanced parameters
When starting a local training session, a window will first be displayed to select the advanced training parameters that can be configured:
The functionality of each of these parameters is specified below:
- save-iterations: the iterations of the model to save, besides the last one. You must specify them using spaces to separate each iteration.
- iterations: the number of iterations used to train the model. Usually, the longer you train, the higher the quality of the model, but beyond around 30000 iterations the improvement is usually not noticeable.
- resolution: it represents the factor to downscale the training images. For example, if your training images are at a 4K resolution and you select 2 as resolution, the images will be reduced to 2K during training. You can also specify an exact pixel value, such as 1280, which will cause the image to resize to that width while preserving the aspect ratio.
- frames per video (only valid if you drop videos): the number of frames extracted from each video. The default number is 300; the more frames you extract, the better quality you will get, but the longer it will take to train. More frames will also require more VRAM on your GPU.
- data_device: the device where the images will be stored during training. CUDA will result in faster training but will require more GPU memory. CPU will result in slower training (since images have to be transferred to the GPU on each iteration) but will reduce GPU memory consumption.
- Save files for SIBR: if checked, the file structure for the original SIBR viewer will be generated.
NOTE: we recommend not to modify the following parameters unless you feel very confident about it:
- scaling_lr: learning rate for the scale matrix.
- position_lr_init: initial learning rate for Gaussian positions.
- position_lr_final: final learning rate for Gaussian positions.
- percent_dense: the extent of the scene (in percent) that a Gaussian must exceed before being split.
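The resolution parameter described above can be read two ways: as a downscale factor or as an exact target width. Here is a sketch of that behaviour based on our reading of the description; the cutoff of 8 between the two interpretations is an assumption, not the trainer's documented rule.

```python
def training_image_size(width: int, height: int,
                        resolution: float) -> tuple[int, int]:
    """Interpret the `resolution` advanced parameter.

    Small values (<= 8 here, an assumption) act as a downscale
    factor; larger values are treated as an exact target width,
    with the height scaled to preserve the aspect ratio.
    """
    if resolution <= 8:
        return (round(width / resolution), round(height / resolution))
    target_w = round(resolution)
    return (target_w, round(height * target_w / width))

# A 4K (3840x2160) image with resolution=2 becomes 1920x1080;
# with resolution=1280 it becomes 1280x720.
```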
Once you have set all the parameters, press the Start button and wait for the process to finish. You can see the progress of the training in the gallery itself:
Continue Training and Retrain
One of the possibilities offered by Volinga Desktop is to retrain already-trained datasets or to continue a training where you left off.
Retrain
To retrain an NVOL, simply open the context menu and click on the "Retrain" option. This will load the dataset into the Creator and you can then click on Process and Start to start training with the selected advanced parameters.
The Retrain process takes advantage of the COLMAP data calculated in the original training to save training time and to be able to make quick tests with different settings.
NOTE: this option is not valid if you want to re-train a dataset with additional images; for that, you will have to start a training from scratch, adding all the desired images or videos to the Creator interface.
Continue training
To continue training from where you left off, simply open the NVOL context menu and click Continue training. This will load the NVOL into the Creator and you can then click Process, select the desired final iterations, and click Start to continue training with the selected advanced parameters.
When continuing a training session, the original advanced parameters will be loaded, but they can be modified before the training continues.
Queue trainings
To queue trainings, simply start a new training with one of the methods mentioned above (Creator, Retrain or Continue training). The new process will be added to the list of pending tasks, which appears in the gallery itself.
6. Known Limitations and Bugs
- A project that uses Volinga plugin cannot be packaged as a shipping .exe.
- Only one NeRF can be rendered at a time. If a level has more than one VolingaNerfActor in place, only one of them will render the NeRF.
- Dragging &amp; dropping an NVOL directly from the file explorer into the Unreal viewport causes undefined behavior. Instead, drag &amp; drop it from the explorer into the Content Browser, and then drag it from the Content Browser into the viewport.
- NeRFs are not relightable.