Volinga Suite - User Manual

What’s new?

The brand new 0.3.1 version of our UE plugin is out!

We added support for a brand new technology: 3D Gaussian Splatting!

1. What is Volinga Suite

Volinga Suite is a groundbreaking tool that empowers creators to effortlessly create and render NeRFs in real time using Unreal Engine. This innovative suite comprises three key components: the Volinga Renderer, the Volinga Exporter, and the Volinga Creator. The Volinga Renderer is a powerful tool that renders NeRF models in real time, providing a seamless and immersive experience for users. The Volinga Exporter enables creators to easily export their NeRF models to the NVOL file format, making it easy to share and collaborate with others. Finally, the Volinga Creator is a user-friendly interface that streamlines the process of creating NeRF models.

NVOL is a new standard file format that stores NeRFs in a fast and efficient way.

2. Volinga Renderer

2.1 What is Volinga Renderer

Volinga Renderer is software developed by Volinga that enables the real-time rendering of NeRFs. It is powered by the NVOL file format. Volinga Renderer is part of the Volinga Suite, which also comprises Volinga Creator and Volinga Exporter.

2.2 Plugin for Unreal Engine

Volinga Renderer can be integrated into Unreal Engine using the plugin provided by Volinga. This section walks through the installation and use of the plugin.

To use the plugin, we recommend at least an NVIDIA RTX 3060 GPU. Nevertheless, the Volinga Renderer also works on NVIDIA RTX 2000-series GPUs.

2.3 Installation of the plugin

This section describes the manual installation of the plugin. Alternatively, you can install the plugin with a single click using Volinga Installer (see section 5.1).

To install the Volinga plugin for UE, locate the Unreal Engine installation path on your system (the default is “C:\Program Files\Epic Games\UE_5.X\Engine”) and copy the “VolingaRenderer” folder into the engine’s “Plugins” folder.

If the plugin is correctly installed, we will see it in the “Plugins” section of our Unreal Engine project:

The plugin can now be enabled by clicking the checkbox to the left of the Volinga logo. A warning message will appear informing us that the plugin is in beta:

The Volinga Renderer plugin for Unreal Engine is now ready to use.

Pro tip: To enhance your experience while using Volinga Renderer, we recommend using the FXAA anti-aliasing method. This can be configured in the project settings under the “Engine - Rendering” section.
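As a sketch, the same choice can be pinned in the project's configuration file instead of the UI. The cvar name and values below are stock Unreal Engine 5 settings, not Volinga-specific; 1 selects FXAA:

```ini
; Config/DefaultEngine.ini -- a minimal sketch using the stock UE5 cvar
; 0 = None, 1 = FXAA, 2 = TAA, 3 = MSAA, 4 = TSR
[/Script/Engine.RendererSettings]
r.AntiAliasingMethod=1
```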

2.4 Volinga NeRF Actor

Volinga NeRF Actor is the core actor of the plugin. It acts as a placeholder for NeRFs. A Volinga NeRF Actor consists of a cubic mesh, which limits the region where the NeRF will be rendered, and an NVOL asset, which holds the NeRF. Using a Volinga NeRF Actor is as easy as dragging an NVOL asset into the viewport:

If several Volinga NeRF Actors are present in the same level, only the one that was added first will be rendered.

A Volinga NeRF Actor supports Unreal’s gizmos for translation and rotation, and the scale gizmo can be used to crop the NeRF (whenever the NeRF’s Unbound property is disabled):

NeRF rendering can be resource-intensive. Therefore, you can disable it while working in the editor using the Editor Preview 💚 button.

Volinga NeRF Actor allows for compositing NeRFs and 3D objects in a seamless way:

By modifying the properties under NeRF Settings of the Volinga NeRF Actor, we can adjust the behavior of Volinga Renderer. There are two different kinds of NeRFs that a Volinga NeRF Actor can hold:

2.4.1 NeRFacto

NeRFacto is the default method for real-data captures of static scenes included in the NeRFStudio framework (https://docs.nerf.studio/). Volinga provides a modified version that runs in real time.

The properties that can be modified when using a NeRFacto-based NVOL are the following:

Pro tip: To import an NVOL file into Unreal Engine, we just need to drag the file and drop it into the content drawer.
Pro tip: If you are lost in NeRF hallucinations, you can also use Unbound to find the center of the scene.
Pro tip: Don't set this option if you're using nDisplay.

Pro tip: Sometimes NeRFs are too large, and the Sky Sphere can be seen when looking at farther regions. An easy solution is to reduce the Scale Offset.

2.4.2 3D Gaussian Splatting

3D Gaussian Splatting is a method developed by INRIA and the Max Planck Institute (https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/). It provides high-quality reconstruction and high rendering performance. The properties that can be modified when using a 3D Gaussian Splatting-based NVOL are the following:

Pro tip: To import an NVOL file into Unreal Engine, we just need to drag the file and drop it into the content drawer.
3D Gaussian Splatting NVOL files can be considerably large, so they may take a few seconds to load into Unreal Engine.
Pro tip: Don't set this option if you're using nDisplay.

2.5 Editor Preview

NeRF rendering can be resource-intensive. Therefore, the Volinga plugin allows you to disable it when working on other parts of the level.

To enable or disable Editor Preview 💚, simply click the button with the Volinga icon next to the simulation button.

Whenever Editor Preview 💚 is enabled, the button is displayed in green. If it is disabled, it is displayed in gray.

2.6 Volinga and Disguise’s RenderStream

To use Volinga Renderer together with Disguise’s RenderStream, we recommend using the native integration, which removes the need for Unreal Engine and improves performance. You can learn more about it here.

2.7 Volinga and Pixotope

Volinga can be used together with Pixotope using our custom plugin.

2.8 Volinga and Nuke Server

Volinga can be used in NukeX via the Nuke Server plugin. Using a Level Sequencer in Unreal and the UnrealReader node in Nuke, we can animate cameras and bring the renders into Nuke.

If you want to render a video using Movie Render Queue, or you are using Volinga in Nuke through Nuke Server, we recommend disabling dynamic resolution and setting the resolution multiplier to one to obtain the best-quality output.
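Dynamic resolution can also be turned off via a console variable; `r.DynamicRes.OperationMode` is a stock Unreal cvar (0 disables it), while the resolution multiplier is the plugin setting mentioned above and is configured on the plugin side. A minimal sketch:

```ini
; Config/DefaultEngine.ini -- a minimal sketch using a stock Unreal cvar
; 0 disables dynamic resolution
[SystemSettings]
r.DynamicRes.OperationMode=0
```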

3. Volinga Exporter

Volinga Exporter is a tool provided by Volinga to convert .ckpt files trained using NeRFStudio, and .ply files created using 3D Gaussian Splatting, into NVOL files. In the case of NeRFStudio, Volinga Exporter only supports the Volinga model, which is an external method. You can add Volinga to your existing NeRFStudio installation using:

pip install git+https://github.com/Volinga/volinga-model

Alternatively, follow the instructions at https://github.com/Volinga/volinga-model. Once you have added the Volinga method, you can train a new NeRF:

ns-train volinga --data /path/to/your/data --vis viewer

Once the training is done, you can find your checkpoint file in the outputs/path-to-your-data/volinga folder. Then, you can drag it into Volinga Suite to export it to NVOL.

In the case of 3D Gaussian Splatting, you can create .ply files following the instructions provided at https://github.com/graphdeco-inria/gaussian-splatting. You can then convert them into NVOL files following the same process.
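As a rough sketch of that workflow (paths are placeholders; `train.py` and its `-s`/`-m` flags come from the graphdeco-inria repository, and the exact iteration count in the output folder name depends on your training settings):

```shell
# Sketch only: train a 3D Gaussian Splatting scene with the INRIA code,
# then hand the resulting .ply to Volinga Exporter.
git clone --recursive https://github.com/graphdeco-inria/gaussian-splatting
cd gaussian-splatting
python train.py -s /path/to/your/colmap/dataset -m /path/to/output
# The trained file ends up under /path/to/output/point_cloud/iteration_*/point_cloud.ply
```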

NVOL files created from .ply files can be used for commercial purposes only if the user has been granted a commercial license by INRIA and the Max Planck Institute and has an Indie or Enterprise subscription plan.

4. Volinga Creator

To train NeRFs using Volinga Creator, just drag and drop the training images (.jpg, .jpeg, .png, .tif, .tiff) or a training video (.mp4, .mov).
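As an illustration of the accepted formats, here is a hypothetical helper (not part of Volinga) that checks whether a file would be accepted as training media:

```python
from pathlib import Path

# Hypothetical helper, not part of Volinga Creator: it simply mirrors the
# accepted upload formats listed above.
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".tif", ".tiff"}
VIDEO_EXTS = {".mp4", ".mov"}

def is_supported_upload(filename: str) -> bool:
    """Return True if the file matches a format Volinga Creator accepts."""
    return Path(filename).suffix.lower() in IMAGE_EXTS | VIDEO_EXTS

print(is_supported_upload("capture.MOV"))  # True
print(is_supported_upload("capture.avi"))  # False
```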

After filling in the file name and the #hashtag, you can upload the media files and wait for your NVOL to be generated.

NVOLs created using Volinga creator can be used commercially if they are created under an Indie or an Enterprise subscription plan. There is no need for extra licensing.
If you are using Volinga Creator in a mobile browser, you will have the option to record a video directly from your camera. However, we recommend against this method, since the quality of video recorded from the browser camera is low. Uploading a video from your camera roll will result in better quality.

4.1 Best practices for capturing scenes

When capturing videos or images to create NeRFs, the capture process has a great influence on the output quality. We strongly recommend following this guide developed by Jonathan Stephens and Jared Heinly:

We also recommend watching the episode of the podcast “Computer Vision Decoded” where they explain this guide in detail: https://www.youtube.com/watch?v=AQfRdr_gZ8g&t=4s

5. Volinga Desktop

Volinga Desktop brings the complete power of Volinga Suite to your computer. At the moment, Volinga Desktop provides two modules: Volinga Installer and Volinga Exporter.

5.1 Volinga Installer

Volinga Installer provides one-click installation for the different Volinga plugins:

When a plugin is selected for installation, a dialog window will pop up to select the installation path. The installation path differs for each plugin:

Unreal Engine: You will have to select Unreal Engine installation folder, which is usually under the path: “C:\Program Files\Epic Games\UE_5.X\Engine”:

Pixotope: You will have to select the Pixotope Engine installation folder, which is usually under the path: “C:\Program Files\Pixotope\{Pixotope Version}\Pixotope Engine”.


RenderStream: You will have to select the RenderStream projects folder, usually located under:
“C:\Users\{current-user}\Documents\RenderStream Projects”

Replace {current-user} with the name of your user.

Once the installation is finished, your plugin will be ready to be used!

5.2 Volinga Creator

In the Creator section you can create new NVOLs, either online (as on the web) or locally if you have a graphics card that meets the minimum specifications.

To create a new NVOL, simply drag and drop a dataset onto the “+” button, or click the same button to select the files manually. The supported datasets are the following:

Once you have done this, you must select the type of training you want, NeRF or 3DGS, and whether you want to run it locally or online.

NOTE 1: Local training is only available for 3DGS.

NOTE 2: NVOLs generated locally by non-Enterprise clients will be generated with a watermark.

Next, configure a name for the NVOL and, if desired, add tags; then click the Process button.

Advanced parameters

When performing a local training session, a window will be displayed before it starts so you can configure the advanced training parameters:

The functionality of each of these parameters is specified below:

NOTE: we recommend not modifying the following parameters unless you are very confident about them:

Once you have set all the parameters, press the Start button and wait for the process to finish. You can see the progress of the training in the gallery itself:

Continue Training and Retrain

One of the features offered by Volinga Desktop is the ability to retrain already-trained datasets or to continue a training where you left off.


To retrain an NVOL, simply open the context menu and click the “Retrain” option. This loads the dataset into the Creator; you can then click Process and Start to begin training with the selected advanced parameters.

The Retrain process reuses the COLMAP data computed in the original training, saving training time and enabling quick tests with different settings.
NOTE: this option is not valid if you want to retrain a dataset with additional images; in that case you must start a new training from scratch, adding all the desired images or videos in the Creator interface.

Continue training

To continue training from where you left off, simply open the NVOL context menu and click Continue training. This loads the NVOL into the Creator; you can then click Process, select the desired final iterations, and click Start to continue training with the selected advanced parameters.

When continuing a training session, the original advanced parameters will be loaded, but they can be modified before resuming.

Queue trainings

To queue trainings, simply start a new training with one of the methods mentioned above (Creator, Retrain, or Continue training); the new process will be added to the list of pending tasks, which appears in the gallery itself.

6. Known Limitations and Bugs