The Visual Computer
https://doi.org/10.1007/s00371-024-03660-4
RESEARCH
Polynomial for real-time rendering of neural radiance fields
Liping Zhu1 · Haibo Zhou1 · Silin Wu1 · Tianrong Cheng1 · Hongjun Sun1
1 Beijing Key Laboratory of Petroleum Data Mining, China University of Petroleum (Beijing), Beijing 102249, China
Corresponding author: Haibo Zhou, 1910055855@qq.com
Liping Zhu: zhuliping@cup.edu.cn · Silin Wu: 2022211284@student.cup.edu.cn · Tianrong Cheng: 2422184962@qq.com · Hongjun Sun: sunhj68@cup.edu.cn
Accepted: 17 September 2024
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2024
Abstract
In neural radiance fields (NeRF), generating highly realistic rendering results requires extensive sampling of rays and online
query of multilayer perceptrons. However, this results in slow rendering speeds. Previous research has addressed this issue
by designing faster evaluation of neural scene representations or precomputing scene properties to reduce rendering time.
In this paper, we propose a real-time rendering method called PNeRF. PNeRF utilizes continuous polynomial functions to
approximate spatial volume density and color information. Additionally, we separate the view direction information from
the rendering equation, leading to a new expression for the volume rendering equation. By taking the starting coordinates
of the observation viewpoint and the observation direction vector as inputs to the neural network, we obtain the rendering
result for the corresponding observation ray. Thus, the rendering for each ray only requires a single forward inference of
the neural network. To further improve rendering speed, we design a six-axis spherical method that stores the rendering results indexed by the starting coordinates of the observation viewpoint and the observation direction vector, which significantly accelerates rendering while maintaining quality and requiring minimal storage space. Experiments on the LLFF dataset confirm that our method improves rendering speed while preserving rendering quality with a small storage footprint, indicating its potential as an effective solution for efficient real-time rendering.
Keywords Real-time rendering · Continuous polynomial functions · Volume rendering · Six-axis spherical
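As a minimal, self-contained sketch of the intuition behind the polynomial approximation (our own illustration under simplifying assumptions, not the implementation described in this paper): if the volume density along a ray is represented by a low-order polynomial in the ray parameter t, its optical depth is again a polynomial with closed-form coefficients, so the transmittance can be evaluated at any depth without per-sample network queries.

import numpy as np

# Illustration only: a polynomial density profile along one ray, sigma(t) = sum_k a_k t^k.
# Its antiderivative (the optical depth) is also a polynomial, so the transmittance
# T(t) = exp(-integral_0^t sigma(s) ds) has a closed form.

def optical_depth_coeffs(sigma_coeffs):
    # Term-by-term integration: a_k t^k -> a_k / (k + 1) t^(k + 1).
    return np.array([a / (k + 1) for k, a in enumerate(sigma_coeffs)])

def transmittance(t, sigma_coeffs):
    depth_coeffs = optical_depth_coeffs(sigma_coeffs)
    powers = t ** np.arange(1, len(depth_coeffs) + 1)
    return np.exp(-np.dot(depth_coeffs, powers))

# Example: quadratic density sigma(t) = 0.5 + 0.1 t + 0.02 t^2.
print(transmittance(2.0, np.array([0.5, 0.1, 0.02])))

The same term-by-term reasoning applies to a polynomial color profile, which is what makes a per-ray closed-form evaluation attractive.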
1 Introduction
Research on 3D scenes supports a wide range of applications [1–13]. Recent work has explored implicit, coordinate-based neural networks as 3D representations, opening up promising new avenues for neural rendering. Examples of such
approaches include neural volumes [14] and NeRF [15],
which can simulate the 3D properties of scene objects from
a set of calibrated images. Specifically, NeRF represents the
scene as a continuous volume function, enabling high-quality
rendering of novel views from arbitrary viewing angles,
including non-Lambertian effects. The NeRF function is
parameterized by an MLP that maps continuous 3D positions to
corresponding volume densities and view-dependent color
information. The success of NeRF has spurred a wealth
of subsequent research that addresses some of its limitations,
such as handling dynamic scenes [16,17], scene editing [18,
19], and relighting [20,21].
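Concretely, for a camera ray $\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}$ with origin $\mathbf{o}$ and viewing direction $\mathbf{d}$, NeRF composites the pixel color with the standard volume rendering integral [15]:
$$C(\mathbf{r})=\int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt,\qquad T(t)=\exp\!\left(-\int_{t_n}^{t}\sigma(\mathbf{r}(s))\,ds\right),$$
where $\sigma$ is the volume density, $\mathbf{c}$ the view-dependent color, and $T(t)$ the accumulated transmittance; in practice the integral is estimated by quadrature over discrete samples along each ray.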
A common challenge for NeRF-based methods is slow rendering, which stems primarily from the high sampling requirements and the expensive neural network queries: rendering a single ray requires querying the network at every sampled 5D coordinate along it. Various approaches have been explored to improve the computational efficiency of NeRF [22–25] and have made some progress on rendering speed, but they still fall far short of the demands of interactive rendering.
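To make this cost concrete, the following schematic ray renderer (with a placeholder mlp standing in for the trained network; not an actual NeRF implementation) shows that every ray incurs n_samples network queries before the usual quadrature of the volume rendering integral, which is exactly the per-ray work that a single forward inference per ray would remove.

import numpy as np

# Schematic cost of standard NeRF rendering (placeholder mlp, not a real model):
# every ray needs n_samples network queries, so an H x W image costs H * W * n_samples queries.

def mlp(points, view_dir):
    # Placeholder for the NeRF MLP: maps 5D inputs to (density, RGB).
    sigma = np.ones(len(points))                  # fake densities
    rgb = np.full((len(points), 3), 0.5)          # fake colors
    return sigma, rgb

def render_ray(origin, direction, t_near=0.0, t_far=1.0, n_samples=128):
    t = np.linspace(t_near, t_far, n_samples)
    points = origin + t[:, None] * direction      # 3D samples along the ray
    sigma, rgb = mlp(points, direction)           # n_samples MLP queries per ray
    delta = np.diff(t, append=t_far)
    alpha = 1.0 - np.exp(-sigma * delta)          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)   # composited pixel color

print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0])))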
In this work, we propose a real-time rendering method
with minimal storage cost. Our approach represents volume