EUROGRAPHICS 2013 / I. Navazo, P. Poulin (Guest Editors)
Volume 32 (2013), Number 2
Rendering Gigaray Light Fields
C. Birklbauer, S. Opelt and O. Bimber
Institute of Computer Graphics, Johannes Kepler University Linz, Austria
Figure 1: A 2.54 gigaray, 360◦ panoramic light field (spatial resolution: 17,885×1,260 pixels, angular resolution: 11×11,
7.61 GB) at two different focus settings (top: far; center: near), and close-ups in native resolution (bottom), rendered at 8-43
fps (full aperture - smallest aperture, at a render resolution of 1280×720) using off-the-shelf graphics hardware.
Abstract
We present a caching framework with a novel probability-based prefetching and eviction strategy applied to atomic
cache units that enables interactive rendering of gigaray light fields. Further, we describe two new use cases that
are supported by our framework: panoramic light fields, including a robust imaging technique and an appropriate
parameterization scheme for real-time rendering and caching; and light-field-cached volume rendering, which
supports interactive exploration of large volumetric datasets using light-field rendering. We consider applications
such as light-field photography and the visualization of large image stacks from modern scanning microscopes.
Categories and Subject Descriptors (according to ACM CCS): I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture—I.3.3 [Computer Graphics]: Picture/Image Generation—Display algorithms
1. Introduction and Contributions
With increasing resolution of imaging sensors, light-field
photography [Ng05] is becoming ever more practical and
might ultimately replace classical 2D imaging in photography. In contrast to common digital photography, light-field
photography enables, for instance, digital refocusing, perspective changes, and depth-reconstruction as a post-process
(i.e., after capturing) or 3D viewing via stereoscopic and
autostereoscopic displays. A variety of light-field cameras
are already commercially available (e.g., compact light-field
cameras such as Lytro and Raytrix, and high-resolution camera arrays such as Point Grey), and first light-field display
prototypes have also been presented [LWH∗ 11, JMY∗ 07].
To benefit from the great potential of light fields, however,
their spatial resolution must be in the same megapixel order
as the resolution of modern digital images. The additional
angular resolution must also be adequately high to prevent
sampling artifacts (in particular for synthetic refocusing).
The demand for increasing light-field resolution will require
billions of rays (gigarays) to be recorded and stored as gigabytes rather than megabytes of data that must be processed
and rendered in real-time with limited graphics memory. In
this paper, we make three contributions:
1) We describe a light-field caching framework that
makes real-time rendering of gigaray light fields possible.
It uses a probabilistic strategy that determines the likelihood
of atomic cache units (light-field pages of several kilobytes)
being required in future frames as a decision metric for loading, discarding, and additional prefetching. It integrates page
probabilities computed by dead reckoning for estimated future camera parameters with probabilities of pages within
the same neighborhood of the current camera parameters.
We show that our approach generally outperforms the classical caching strategy Least Recently Used (LRU), as well
as existing prefetching methods that apply caching on the
level of entire perspective images rather than on the level of
atomic page units. In addition to direct rendering of gigaray
light fields recorded, for instance, with high-resolution handheld cameras (e.g., [DLD12]) or with cameras used in arrays
or with mechanical gantries, we describe two new applications that our framework supports: panoramic light fields
and light-field-cached volume rendering.
2) We present a first approach to constructing and
rendering panoramic light fields (i.e., large field-of-view
gigaray light fields computed from overlapping, lower-resolution sub-light-field recordings). By converting overlapping sub-light-fields into individual focal stacks from
which a panoramic focal stack is computed, we remove the
need for a precise reconstruction of scene depth or estimation of camera poses. Figure 1 shows the first panoramic
light field. Real-time rendering of such large panoramic light
fields is made possible by our light-field caching framework.
3) We integrate our light-field caching into a volume
renderer by using idle times for filling a cache-managed gigaray light field. New images are then composed from high-resolution light-field rendering and from volume rendering
– depending on the state of the light-field cache. The image quality our method achieves when exploring large volumetric datasets interactively is considerably higher than the
low level of detail produced by a volume renderer at the
same frame rate. We focus on visualizing data from modern scanning microscopes that produce image stacks with a
megapixel lateral resolution and with possibly many hundreds to thousands of slices in axial direction.
2. Related Work
Caching is applied whenever data size and bandwidth limits
make direct access and on-demand transfer of data impossible. This often occurs when viewing high-resolution 2D images (e.g., [KUDC07]), large image stacks (e.g., [TMJ07]),
or extensive volumes (e.g., [CNLE09]). Since light fields
are often represented as a collection of perspective images,
caching on a per-image basis is an option. Prediction models,
such as dead reckoning, have already been used for determining the set of light-field perspective images required for
rendering [BEA∗ 08] or for streaming [RKG07]. Using entire images as pages to be cached is, however, inefficient for
large light fields – especially in situations in which smaller
parts of many different light-field perspectives are required
for rendering (e.g., reduced field of view or varying aperture
settings). Our approach manages atomic cache units (i.e.,
light-field pages that are much smaller than the full image
size) and supports a novel probability-based prefetching and
eviction strategy that combines dead reckoning and heuristic
parameter variations. This both optimizes caching and data transfer from main memory to graphics memory, but requires an
appropriate ray sampling and page testing that operates on
a page level. We explain how ray sampling and page testing can be computed for different light-field parameterizations, such as conventional two-plane and spherical, as well
as for a novel panoramic parameterization. This enables efficient rendering of gigaray light fields and makes two additional applications possible: high-quality interactive exploration of large volumetric datasets and visualization of
panoramic light fields.
Light-field compression (e.g. as in [RKG07]) is another
option to overcome bandwidth limitations. As compression
is complementary to caching, it is an optional extension of
the methods presented in this paper.
Conventional panorama imaging techniques assemble
large image panoramas with a wide field of view from several smaller sub-images. Such techniques are an integral part
of digital photography often supported by modern camera
hardware. We introduce first techniques for imaging and visualizing panoramic light fields. To ensure robust registration and blending of sub-light-fields without the need for
precise depth reconstruction or camera pose estimation, we
apply common panorama techniques, such as those presented in [BL07, UES01], as a basis for our approach. The
main difference from regular image panoramas, however, is
that we must additionally encode directional information in a
consistent way. With a focus on generating only two horizontal perspectives, this has been achieved in [PBP01], where a
stereo panorama is created either with the help of special optics or with a regular camera rotating around one axis. Generally, this would allow considering more than two perspectives. Yet, this has not been investigated. Concentric mosaics
[SH99] are panoramas that contain more than two perspectives. They are captured by slit cameras moving in concentric circles. By recording multiple concentric mosaics at various distances from the rotation axis, novel views within the
captured area (i.e., a horizontal 2D disk) can be computed.
This idea was extended to the vertical axis in [LZWS00],
which enables rendering of novel views within the resulting 3D cylindrical space. In contrast to concentric mosaics,
our panoramic light fields are captured with regular (mobile)
light-field cameras in exactly the same way image panoramas are recorded with conventional cameras (i.e., via rotational movement of the camera). Neither a precise, mechanically adjusted camera movement, nor specialized optics are
required in our case. Furthermore, we introduce a novel parameterization scheme for cylindrical panorama light fields
that is used for ray sampling and page testing. Thus, real-time rendering of gigaray panoramic light fields is enabled
by the light-field caching approach presented in this paper.
Since light-field rendering is a low-cost image-based technique, it has been used in combination with volume rendering before. Approaches such as that presented in [RSTK08]
convert large volumetric datasets into a light-field representation in an offline preprocessing step. The light-field data
is then used for rendering the volumetric content at interactive frame rates that cannot be supported by online volume rendering. For another application, we integrated our
light-field caching method into a volume renderer to support
interactive viewing of large volumetric datasets. In contrast
to previous approaches, we apply light-field caching, and
use the volume renderer’s idle times to fill and update the
cache dynamically on the fly, based on interaction predictions made by our probability-based prefetching and eviction strategy. Using volume caching methods (such as texture bricking with octree space partitioning), current GPUimplemented volume renderers achieve fast frame rates and
adequate image quality if the data required for rendering one
frame fits entirely into the graphics memory. For large, but
mainly opaque and sparse volumes, real-time rates are possible, as shown in [CNLE09]. However, highly detailed and transparent volumes (as created by scanning microscopes, for example) are significantly larger, and complex rendering
methods (e.g., when simulating physically based light transport with Monte Carlo ray tracing) make real-time rates difficult to achieve for these cases. Since lateral and axial resolutions of scanning microscopes are increasing continuously,
we believe that a dynamically updated light-field cache and
integrated image-based rendering as part of a volume renderer can be beneficial to the interactive exploration of such
data sets.
3. Light-Field Caching
We have developed a CUDA-based light-field renderer extended by a software-managed cache. In a preprocessing
step, the light-field data is split into atomic pages of pixel
blocks that are stored in the main memory or on hard disk,
and are managed by the cache. Similar to virtual memory
and virtual texturing, our system maintains an indirection
structure (i.e., an index table) that represents the current state
of the cache. It is used by the rendering CUDA-kernel to
determine whether the required light-field data is currently
available in the graphics memory and where it is located.
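For illustration only, the indirection structure can be thought of as a per-page table that maps page indices to cache slots. The following minimal Python sketch is not the authors' CUDA implementation; the (u, s) page indexing and the use of -1 to mark a page fault are our assumptions:

import numpy as np

class PageTable:
    """Minimal sketch of a cache indirection (index) table."""
    def __init__(self, num_u, num_s):
        # One entry per light-field page; stores the cache slot or -1 (page fault).
        self.slots = np.full((num_u, num_s), -1, dtype=np.int32)

    def lookup(self, u, s):
        # Returns the cache slot holding page (u, s), or -1 on a page fault,
        # in which case the renderer falls back to the low-resolution light field.
        return int(self.slots[u, s])

    def bind(self, u, s, slot):
        self.slots[u, s] = slot

    def evict(self, u, s):
        self.slots[u, s] = -1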
In this section, we describe our caching framework (including ray sampling, page testing, and caching strategies),
which is based on a conventional two-plane light-field parameterization as applied to data recorded by high-resolution
camera arrays or by mechanical (horizontally / vertically
shifting) camera gantries. In sections 4 and 5, we show how
this framework makes possible two novel applications that
require different parameterizations. For other light-field parameterizations, only the methods for ray sampling and page
testing must be redefined, while our caching strategies remain the same.
3.1. Ray Sampling and Page Testing
Ray sampling for a two-plane parameterization of a light
field is straightforward, and mainly follows the approach described by [IMG00] (cf. figure 2a for an example in flatland):
Consider the two parallel planes U and S (which correspond
to the U,V and S, T planes in three-dimensional space), the
focal plane F, the image plane I, and the position of the rendering camera c. The projection f in F of each pixel i in I
is computed first. The intersections s of the rays from f to
samples in U that are within the defined aperture a are determined next. Note that the aperture is a 2D disk in U that is always centered around the projection of i in U. The integral of the color responses at light-field coordinates u, s is the resulting color for the pixel i. Note that while c, I, a, and F
can be freely chosen during rendering, U and S are defined
by the light-field camera used.
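In flatland (matching figure 2a), this sampling loop can be sketched as follows. The sketch is a simplification, not the paper's exact implementation: planes are horizontal lines at given z positions, the aperture is an interval, rays are weighted equally rather than with the aperture filter, and light_field(u_idx, s) is assumed to return an interpolated color sample.

def render_pixel_flatland(light_field, c, i_pos, z_F, z_U, z_S, aperture, u_coords):
    """c and i_pos are (x, z) positions of the camera and of pixel i on the image
    line; z_F, z_U, z_S are the z positions of the focal, U, and S lines;
    u_coords are the recorded perspective positions on U."""
    # Project pixel i through c onto the focal line F.
    t_f = (z_F - c[1]) / (i_pos[1] - c[1])
    f_x = c[0] + t_f * (i_pos[0] - c[0])
    # The projection of i onto U defines the aperture center.
    t_u = (z_U - c[1]) / (i_pos[1] - c[1])
    center_u = c[0] + t_u * (i_pos[0] - c[0])
    color, weight = 0.0, 0.0
    for u_idx, u_x in enumerate(u_coords):
        if abs(u_x - center_u) > aperture:        # sample outside the synthetic aperture
            continue
        # Intersect the ray from f through u with the S line.
        t_s = (z_S - z_F) / (z_U - z_F)
        s_x = f_x + t_s * (u_x - f_x)
        color += light_field(u_idx, s_x)
        weight += 1.0
    return color / max(weight, 1.0)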
Computing the pages required for caching is inverse to
sampling rays for rendering a new image from the light field.
In our example, the light-field pages that we manage during
caching are patches of equal size in the S plane. If, for instance, we index the cameras of a camera array in the U
plane, the pixels of the corresponding images are indexed in
the S plane. In this case, each page contains an image portion of a corresponding camera. Thus, we can index pages
with light-field coordinates u, s – but retrieve entire image
patches instead of individual pixel values. The pages required for rendering (i.e., those that should be available in
the graphics memory) can be determined as follows (cf. figure 2b for an example in flatland): For every sample u in
U, we first project each patch s in S from u onto F. The resulting projection f is then projected from c onto U. If the
projection in U overlaps (fully or partially) with the aperture
a centered at u and is contained within the frustum of the
rendering camera, the page at coordinates u, s is required for
rendering with the corresponding parameter set c, I, a, and
F. We explain further below under which conditions these
pages are loaded into the cache, since this depends on the
applied caching strategy.
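The page test can be sketched in flatland as well (cf. figure 2b). As above, this is a simplified illustration under our own assumptions: pages are x-intervals on the S line, the frustum is reduced to an x-interval on the focal line F, and the test returns the set of (u, page) index pairs required for rendering.

def required_pages_flatland(u_coords, page_bounds, z_U, z_S, z_F, c, aperture,
                            frustum_x_min, frustum_x_max):
    """page_bounds[p] = (s_min, s_max) on the S line; c is the (x, z) camera
    position. Returns the set of (u_index, page_index) pairs needed for rendering."""
    required = set()
    for u_idx, u_x in enumerate(u_coords):
        for p_idx, (s_min, s_max) in enumerate(page_bounds):
            # Project the patch corners from u onto F.
            t_f = (z_F - z_U) / (z_S - z_U)
            f_lo = u_x + t_f * (s_min - u_x)
            f_hi = u_x + t_f * (s_max - u_x)
            f_min, f_max = min(f_lo, f_hi), max(f_lo, f_hi)
            # Project the patch on F from the rendering camera c onto U.
            t_u = (z_U - c[1]) / (z_F - c[1])
            p_min = c[0] + t_u * (f_min - c[0])
            p_max = c[0] + t_u * (f_max - c[0])
            p_min, p_max = min(p_min, p_max), max(p_min, p_max)
            # Overlap with the aperture centered at u, and with the frustum on F.
            in_aperture = not (p_max < u_x - aperture or p_min > u_x + aperture)
            in_frustum = not (f_max < frustum_x_min or f_min > frustum_x_max)
            if in_aperture and in_frustum:
                required.add((u_idx, p_idx))
    return required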
Figure 2: Ray sampling (a) and page testing (b) for two-plane parameterization: U, S are the planes of the light field;
c is the camera position; I is the image plane; F is the focal
plane; i is the rendered pixel; u, s are the ray (a) or patch (b)
coordinates; f is the pixel (a) or patch (b) projection; a is
the aperture; red area is the overlap with the aperture.
To reduce visual image degradation caused by missing
pages, we store a complete but low-resolution fallback light field in our cache that is indexed in case of page faults (see
supplementary material for details).
Note that we can compute all projections efficiently by means of ray-plane intersections because the number of projected points (i.e., sample points or patch corners) is relatively small.
3.2. Caching Strategies
In section 3.1, we explained how the pages required for rendering an image for a given set of virtual camera parameters (e.g., position, aperture, focus, field-of-view) are determined and how these images are rendered. Since our light fields are so large that not all pages can be loaded into the available graphics memory at one time, we need a caching strategy that minimizes (i) visual image degradation due to page faults and (ii) the time-demanding uploads of missing pages into the graphics memory. We implemented three caching strategies for determining which pages are to be loaded into the cache and which pages are to be discarded if the cache is full: dead reckoning on a perspective-image level similar to existing approaches [BEA∗ 08, RKG07] (in the following referred to as perspective dead reckoning), as well as classical LRU and our new probability-based prefetching and eviction method, both of which operate on an atomic light-field page level. We compare all methods with on-demand loading (i.e., no caching) in section 6. Note that our cache occupies a fixed fraction of the graphics memory.
3.2.1. Least Recently Used
We implemented a CUDA-accelerated version of LRU as a reference caching strategy. It is one of the most common caching strategies, and can be adapted well to light-field rendering. For each frame, the required pages are computed (section 3.1) and compared with the pages in the cache to decide which data is additionally required. If no free cache slots are available, LRU always replaces the least recently used pages.
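As a reference point, a page-level LRU cache can be sketched as follows. This is a plain host-side Python sketch under our own assumptions (opaque page ids, integer cache slots); the actual implementation is CUDA-accelerated and updates the GPU index table.

from collections import OrderedDict

class LRUPageCache:
    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.pages = OrderedDict()                 # page id -> cache slot, in LRU order

    def request(self, page_id):
        """Returns (slot, evicted_page_id) for a required page, loading it if missing."""
        if page_id in self.pages:
            self.pages.move_to_end(page_id)        # mark as most recently used
            return self.pages[page_id], None
        if len(self.pages) < self.num_slots:       # a free slot is still available
            slot, evicted = len(self.pages), None
        else:                                      # evict the least recently used page
            evicted, slot = self.pages.popitem(last=False)
        self.pages[page_id] = slot
        return slot, evicted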
3.2.2. Probability-Based Prefetching and Eviction
Figure 3: Probability-based prefetching and eviction: dead reckoning (a), parameter neighborhood (b), and probability integration (c).
We investigated a novel probabilistic approach that combines dead reckoning and heuristic parameter variations as an alternative decision metric for loading, discarding, and additional prefetching.
At a constant frame rate, it applies dead reckoning to recorded sets of previously adjusted virtual camera parameters at past frames to estimate the parameter sets at future frames. This is illustrated in figure 3a, where f = 0 is the parameter set for the current frame, and f = 1...n are the parameter sets at the n future frames. Camera parameters at future frames are estimated as follows:
a_f = a_0 + v_0 · t_f + 1/2 · c_0 · t_f²,    (1)
where a_f is an individual camera parameter at future frame f, a_0 is the parameter at the current frame, v_0 and c_0 are the most recent velocity and acceleration of parameter changes at the current frame, and t_f is the look-ahead time span from the current to the f-th future frame (i.e., t_f = f / framerate).
Note that this estimation is repeated for the entire set of
camera parameters. For each estimated future parameter set,
we compute a probability that decreases with the look-ahead frame span, as illustrated in figure 3a. We apply an exponential probability fall-off function in our case: p(f) = 1/2^f. Each parameter set at the current and at future frames allows indexing the corresponding pages, as explained in section 3.1, and the computed probability is assigned to these pages. Thus, each parameter set (current and
future estimations) leads to page-specific probabilities, as illustrated on the right-hand side of figure 3a.
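A minimal sketch of this dead-reckoning step is given below. It is illustrative only: parameter sets are dictionaries keyed by parameter name, pages_for stands for the page test of section 3.1, and summing a page's contributions over the look-ahead frames is our reading of the later probability integration.

def predict_parameter(a0, v0, c0, f, framerate):
    """Equation (1): extrapolate one camera parameter f frames ahead."""
    t_f = f / framerate
    return a0 + v0 * t_f + 0.5 * c0 * t_f ** 2

def dead_reckoning_page_probabilities(params0, velocities, accelerations,
                                      n_future, framerate, pages_for):
    """Assigns p(f) = 1/2**f to every page required by the estimated parameter
    set at frame f (f = 0 is the current frame) and accumulates per page."""
    probabilities = {}
    for f in range(n_future + 1):
        predicted = {name: predict_parameter(params0[name], velocities[name],
                                             accelerations[name], f, framerate)
                     for name in params0}
        p = 0.5 ** f                               # exponential fall-off per frame
        for page in pages_for(predicted):
            probabilities[page] = probabilities.get(page, 0.0) + p
    return probabilities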
Additionally, we compute separate probabilities for the neighborhood of each parameter at the current frame (f = 0), as illustrated in figure 3b. Here, a_0 = 0 represents one individual camera parameter that has been adjusted at the current frame, and a_0 = −m, ..., −3, −2, −1, 1, 2, 3, ..., m are the parameter values in the neighborhood at fixed step widths in both directions. For each currently adjusted parameter and its 2m neighbors, we compute probabilities that fall off exponentially with the distance in steps between the neighbor and the current parameter adjustment: p(a_0) = 1/2^|a_0|.
These probabilities are assigned to corresponding pages that can again be indexed as described in section 3.1. While dead reckoning derives probabilities of pages that belong to potential future parameter sets on the basis of previous parameter adjustments, the parameter neighborhood estimates probabilities of pages for parameters that are similar to the adjusted parameters without considering previous adjustments. To emphasize application-specific navigation behaviors, we further weight all probabilities with constant, heuristically determined weights. For each page, we integrate all of these probabilities and finally normalize the integrals of all pages. Note that since the adjusted parameters at the current frame are always assigned the highest probabilities, they will remain the highest values among the
final (integrated and normalized) page probabilities (cf. figure 3c). Instead of prefetching only pages that are missing at
present and evicting pages that have been least recently used
(as explained for LRU in section 3.2.1), we additionally upload estimated future pages (prioritized with their probability) and consider pages with the lowest integral probabilities
as the first candidates to be discarded.
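For illustration, the integration and the resulting prefetch/eviction order might look as follows. The weights w_dr and w_nb stand in for the heuristic, application-specific weights mentioned above; their values and the dictionary-based page representation are our assumptions.

def integrate_page_probabilities(dead_reckoning_probs, neighborhood_probs,
                                 w_dr=1.0, w_nb=0.5):
    """Weights and sums the per-page probabilities from both estimators,
    then normalizes over all pages."""
    pages = set(dead_reckoning_probs) | set(neighborhood_probs)
    combined = {p: w_dr * dead_reckoning_probs.get(p, 0.0)
                   + w_nb * neighborhood_probs.get(p, 0.0) for p in pages}
    total = sum(combined.values()) or 1.0
    return {p: v / total for p, v in combined.items()}

def prefetch_and_eviction_order(page_probs, cached_pages):
    """Missing pages are uploaded highest-probability first; cached pages with
    the lowest integrated probability are the first eviction candidates."""
    prefetch = sorted((p for p in page_probs if p not in cached_pages),
                      key=page_probs.get, reverse=True)
    evict = sorted(cached_pages, key=lambda p: page_probs.get(p, 0.0))
    return prefetch, evict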
To implement perspective dead reckoning as in existing approaches [BEA∗ 08, RKG07], we disable the parameter neighborhood probabilities and index, load and discard
(based on the computed dead reckoning probabilities only)
entire perspective images rather than light-field pages.
4. Panoramic Light Fields
In this section, we present a first approach to constructing and rendering panoramic light fields (i.e., large field-of-view light fields computed from overlapping sub-light-field recordings). Figure 1 illustrates the first panoramic
light field. Real-time rendering of such large panoramic light
fields is made possible by our light-field caching framework
(section 3).
4.1. Capturing and Construction
We capture overlapping sub-light-fields of a scene in the
course of a rotational movement of a mobile light-field camera and convert each sub-light-field into a focal stack using synthetic aperture reconstruction [IMG00]. For light-field cameras that directly deliver a focal stack, this step is
not necessary. For experimental proof of concept, we used
a Raytrix R11, whose output was converted into a classical two-plane light-field representation that can be indexed
with u, v, s,t coordinates. Next, we compute an all-in-focus
image for each focal stack by extracting and composing
the highest-frequency image content throughout all focal
stack slices. The registration and blending parameters are
then computed for the resulting (overlapping) all-in-focus
images. For this purpose, we apply conventional panorama
stitching techniques (based on [BL07, UES01]), such as
SURF feature extraction, pairwise feature matching and RANSAC outlier detection, bundle adjustment, wave correction, exposure compensation, and computation of blending seams. The registration and blending parameters derived for the all-in-focus images are then applied to all corresponding slices of the focal stacks. For composition, we use a four-dimensional (three rotations and focal length) motion model for registration and multi-band blending. The result is a registered and seamlessly blended panoramic focal stack that can be converted into a light field with linear view synthesis, as described in [LD10]. This process is summarized in figure 4.
Figure 4: Panoramic light-field construction from a set of overlapping sub-light-fields that are captured in the course of a circular movement of the light-field camera.
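The all-in-focus step of this pipeline can be illustrated with a small numpy sketch that keeps, per pixel, the focal-stack slice with the strongest local high-frequency response. The Laplacian-based contrast measure is our assumption; the paper does not specify the exact operator used.

import numpy as np

def all_in_focus(focal_stack):
    """focal_stack: array of shape (num_slices, height, width) with one grayscale
    image per focus setting. Returns an all-in-focus image by selecting, per pixel,
    the slice with the highest local contrast."""
    stack = np.asarray(focal_stack, dtype=np.float64)
    contrast = np.empty_like(stack)
    for k, slice_k in enumerate(stack):
        # Absolute discrete Laplacian as a simple local-contrast (sharpness) measure.
        lap = (np.roll(slice_k, 1, axis=0) + np.roll(slice_k, -1, axis=0) +
               np.roll(slice_k, 1, axis=1) + np.roll(slice_k, -1, axis=1) - 4.0 * slice_k)
        contrast[k] = np.abs(lap)
    best = np.argmax(contrast, axis=0)            # index of the sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]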
The advantage of choosing an intermediate focal stack
representation is that conventional image panorama techniques can be used for robust computation of a panoramic
light field without precise reconstruction of scene depth or
estimation of camera poses, which both depend on dense
scene features. Only a few scene features in the overlapping
areas of the all-in-focus images are necessary to achieve the
same registration and blending quality as for regular image
panoramas. However, the focal stack of a scene covers no
more than a 3D subset of the full 4D light field. This can lead
to artifacts at occlusion boundaries and prevents the correct
reconstruction of anisotropic reflections. Therefore, our current approach is limited to Lambertian scenes with modest
depth discontinuities as in [LD10].
Applying linear view synthesis to create a panoramic
light field from a panoramic focal stack requires a multiple-viewpoint circular projection instead of a single-viewpoint
projection for rendering. This is similar to the rendering of
omnidirectional stereo panoramas, as in [PBP01]. The following subsection explains the ray sampling and page testing methods that are suitable for rendering and caching in
this case.
4.2. Ray Sampling and Page Testing
In contrast to the two-plane parameterization (section 3.1),
the parameterization for our panoramic light fields is not
symmetric in both directions. It requires a multi-view circular projection in one direction and a multi-view perspective
projection in the other (cf. figure 5a,b). This results in a 3D
cylindrical viewing space, as illustrated in figure 5.
The perspective of the rendering camera is defined by a
circle C of radius r in the U/S plane (i.e., a plane that intersects the cylinders horizontally, cf. figure 5a), a sampling
direction along the angle α from C, and a height h in the V /T
plane (i.e., a plane that intersects the cylinders vertically, cf.
figure 5b) and not by a position in 3D space as in the two-plane case. While α and h define the horizontal and vertical
perspectives, r constrains the maximum horizontal parallax
(a large r corresponds to a strong horizontal parallax). Furthermore, the image surface IJ, the parameter surfaces UV
and ST , and the focal surface FG are now concentric cylinders. The field of view in both directions can still be defined
on FG.
Ray-sampling for rendering panoramic images from
a panoramic light field with given camera parameters
(α, r, h, a, IJ, FG) is done as follows (cf., figures 5a,b): For
all points in C, we consider the rays that leave C at the same
angle α (defining the horizontal perspective) and that pass
through all points along the J direction of IJ (the selected
height h defines the vertical perspective). For all of these
rays, we compute the intersections f , g on FG. From these
intersections, we compute rays that intersect UV at angles β
and at coordinates v that belong to valid horizontal and vertical perspectives (i.e., those computed by linear view synthesis as explained in section 4.1). The intersection of these rays
on ST results in the final ray indices: β, v, s,t. Note that for
our circular projection, horizontal perspectives are defined
by angles rather than by positions. Therefore, β instead of u
must be used for sampling. However, we still compute the
corresponding u coordinates for each β to simplify our aperture test. Since points on UV are not coplanar, we use a 3D
sphere as an aperture representation a instead of the 2D disc used for the two-plane parameterization. Its radius defines
the aperture opening. Those rays with intersections at u, v
that are contained by a are integrated to compute the color
of pixel i, j on the panoramic image surface IJ.
The pages that are required for caching can be determined
as follows (cf. figure 5c,d): Analogously to the two-plane
parameterization, our pages are patches on ST and represent
an image portion of a panoramic light field perspective, indexed by β, v. Accordingly, we index a patch within a light-field perspective on ST with the coordinate pair s, t. For every light-field perspective β, v, we first project each patch
on ST onto FG as follows: In the U/S plane (cf. figure 5c),
we compute a circular projection from coordinates u at constant coordinate v in constant direction β over all of the pixel
columns of the patch. Thus, one pixel column is projected
from exactly one u, v coordinate in direction β from U. Note
that v and β are constant because they are given for each
light-field perspective (defining the light field’s horizontal
and vertical perspectives). In the V /T plane (cf. figure 5d),
each of these pixel columns is projected perspectively from
its corresponding u, v coordinates onto FG, which leads to
a vertical line at coordinate f on FG. Repeating this for all
pixel columns of a patch results in the area projection f , g of
the patch on FG.
Since we require a circular projection in the U/S plane,
the aperture test for a patch also differs from that of the two-plane parameterization. We can carry out this test separately
for each of the patch’s projected pixel columns at coordinate
f on FG: First, the projected pixel column is projected perspectively onto UV from the point in C that can be connected
with coordinate f in the direction of the horizontal perspective angle α of the rendering camera. We then test whether
this projection overlaps (fully or partially) with the aperture
a that is centered at the point u, v (i.e., the initial origin for
projecting the pixel column onto FG). Since all our cylinder surfaces and C are concentric, the aperture tests for all
projected pixel columns of the same patch lead to the same
result. Thus, one aperture test for a single, arbitrary pixel
column per patch is sufficient. Only if this aperture test is positive and the patch projection f, g falls within the viewing frustum of the rendering camera (defined on FG) is the page β, v, s, t required for rendering with the chosen parameter set α, r, h, a, IJ, FG of the virtual panoramic camera. Otherwise, it is not.
Figure 5: Ray sampling (a,b) and page testing (c,d) for parameterization of panoramic light fields: UV, ST are the cylindrical parameter surfaces of the light field; the perspective of the rendering camera is defined by the circle C with radius r at height h and sampling direction α; IJ is the image surface; FG is the focal surface; i, j is the rendered pixel; β, v, s, t are the ray (a,b) or the patch (c,d) indices; u is the coordinate in U corresponding to β, f (a) or β, s (b); f, g are the ray projection coordinates; a is the aperture; the red area is the overlap with the aperture.
Figure 6: Visualization of a drosophila (4,096×4,096×61 volume resolution, 2.86 GB – 6,016×6,016×31×31 light-field resolution, 34.8 gigarays, 97.17 GB, rendered using a spherical parameterization): full-resolution volume rendering at 0.4 fps (a), volume rendering preview at 25 fps (b), our method at 25 fps (c). Color-coded contributions of different sources during rotation (d-g): full-resolution volume (green), volume preview (red), full-resolution light field (gray), and fallback light field (blue). The visible seams are the result of the microscope's scanning process, not of the visualization.
Analogously to the two-plane parameterization, we use
ray-cylinder intersections to project points and patches efficiently. Given this ability to compute the pages required for
our panoramic light-field parameterization, we use the exact
same caching strategies as explained in section 3.2.
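Since the projections in this parameterization reduce to ray-cylinder intersections, a small sketch of that primitive illustrates the computation. The choice of the z-axis as the common cylinder axis is our assumption for the example.

import math

def ray_cylinder_intersection(origin, direction, radius):
    """Intersects a ray with an infinite cylinder of the given radius centered on
    the z-axis (the common axis of the concentric surfaces U, S, F, and I).
    Returns the smallest positive ray parameter t, or None if there is no hit."""
    ox, oy = origin[0], origin[1]
    dx, dy = direction[0], direction[1]
    a = dx * dx + dy * dy
    b = 2.0 * (ox * dx + oy * dy)
    c = ox * ox + oy * oy - radius * radius
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None                 # ray parallel to the axis, or missing the cylinder
    sqrt_disc = math.sqrt(disc)
    for t in sorted(((-b - sqrt_disc) / (2.0 * a), (-b + sqrt_disc) / (2.0 * a))):
        if t > 0.0:
            return t
    return None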
5. Light-Field-Cached Volume Rendering
Advances in imaging technology lead to ever larger image
data sets. Modern scanning microscopes, for instance, produce image stacks with a megapixel lateral resolution and
with possibly many hundreds to thousands of slices in axial direction. This trend will continue, resulting in very large
volumetric data sets that are difficult to explore interactively
because the complexity of volume rendering is proportional
to the spatial and lateral resolution of the data.
Light-field rendering is a fast and simple image-based rendering method that requires precomputed or precaptured image data. For volume rendering, each expensively computed
image is discarded after the viewing parameters change, while the renderer is idle if the viewing parameters do not change and the visualization need not be updated.
In this section, we present a combination of light-field and
volume rendering to enable high-quality interactive explorations of large volumetric data sets. We use the idle times
of the volume renderer for filling our light-field cache. The
final images are then composed from both light-field rendering and volume rendering, depending on the state of the
light-field cache. Our method leads to better image quality
than the low level of detail that can be achieved by a volume renderer at the same frame rate. Figure 6 illustrates an
example, and section 6 presents quantitative measures. The
following subsections explain how we integrate our lightfield caching into a volume renderer.
5.1. Dynamic Page Updates
Since in our case volume rendering is largely independent from light-field rendering, an arbitrary volume renderer
can be used in combination with light-field caching. If the
volume renderer already supports its own acceleration strategies, this does not conflict with our approach but increases
the overall rendering performance. Many volume renderers
switch to a fast preview mode with a lower level of detail
(LOD) to achieve interactive frame rates during user interactions. We, however, use our cache-managed light-field rendering to support a much higher LOD at the same frame
rate. We apply probability-based prefetching and eviction
for caching, as explained in section 3.2.2. A two-plane lightfield parameterization is adequate if the data set is viewed
mainly in axial direction (i.e., if an image stack is explored
top down). To enable surround navigation we support an
additional spherical light-field parameterization. We explain
ray sampling and page testing for this case in section 5.2.
When the user stops interacting with the volume (i.e., at
constant viewing parameters), the volume renderer first computes and displays a full-resolution image for the current
rendering camera. When otherwise idle, it renders pages of
the cached light-field data structure in the background until the user starts to interact. For this purpose, we use virtual perspective cameras that are uniformly distributed on a
bounding sphere U which encloses the volume. These cameras point towards the volume center and have a field of view
that includes the bounding box of the volume. The sampling
rate that we use for rendering each page (i.e., the ray casting resolution of the volume renderer) is chosen such that it
matches the sampling rate of the rendering camera at a common reference plane inside the volume. The focal plane F
of the rendering camera can be used for this. The page probabilities, as computed in section 3.2.2, determine the proper
order in which the pages are rendered into the cache. Pages
with a high probability are rendered first.
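One simple way to distribute the virtual light-field cameras approximately uniformly on the bounding sphere U is a Fibonacci-spiral layout. This particular construction is our choice for illustration; the paper only requires a uniform distribution of cameras pointing at the volume center.

import math

def fibonacci_sphere_cameras(n, center, radius):
    """Returns n approximately uniformly distributed camera positions on a sphere
    of the given radius around the volume center; each camera looks at the center."""
    golden = math.pi * (3.0 - math.sqrt(5.0))
    cameras = []
    for k in range(n):
        z = 1.0 - 2.0 * (k + 0.5) / n             # z in (-1, 1)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = k * golden
        direction = (r * math.cos(phi), r * math.sin(phi), z)
        position = tuple(center[i] + radius * direction[i] for i in range(3))
        cameras.append(position)
    return cameras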
For fast image rendering during user interaction (i.e.,
while changing the viewing parameters), we compute the
per-pixel LODs that are achieved by light-field rendering
(using the pages of the full-resolution light field and, if necessary, the pages of the fallback light field as explained in the
supplementary material) and by the fast preview of the volume renderer. The final image is then assembled from pixels
of the sources that have the highest LOD per pixel (either the
full-resolution / fallback light field or the volume rendering
preview). Figure 6 illustrates an example. More implementation details and results are provided in the supplementary
material.
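A numpy sketch of that per-pixel selection is given below, assuming per-source LOD maps are already available; the array shapes and the simple argmax selection are our assumptions.

import numpy as np

def compose_final_image(images, lod_maps):
    """images: list of candidate images of shape (H, W, 3) from the full-resolution
    light field, the fallback light field, and the volume-rendering preview;
    lod_maps: list of per-pixel LOD maps of shape (H, W). Picks, per pixel, the
    source with the highest LOD."""
    stacked = np.stack(images)                    # (num_sources, H, W, 3)
    lods = np.stack(lod_maps)                     # (num_sources, H, W)
    best = np.argmax(lods, axis=0)                # index of the best source per pixel
    rows, cols = np.indices(best.shape)
    return stacked[best, rows, cols]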
5.2. Ray Sampling and Page Testing
Page testing and ray sampling for a spherical parameterization are similar to their two-plane parameterization counterparts (section 3.1), as illustrated in figures 7a,b. The differences are that samples on U are located on a spherical surface instead of a plane, and that S for all u are non-coplanar
but tangential to a spherical surface that is parallel to U.
Since point and patch samples projected on U are also not
coplanar, we use a 3D sphere (with a radius r that defines
the aperture) instead of a 2D disc to determine the overlap
of the samples with a. Points or patches that are inside the
sphere (fully or partially) are covered by the aperture. Except
for these differences, the methods for ray sampling and page
testing are identical to the methods described in section 3.1.
Projections are efficiently computed with ray-plane and raysphere intersections, and again the computed page indices
are required for the applied caching strategy (section 3.2).
An additional clustering of U to accelerate ray sampling and
page testing is explained in the supplementary material.
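The 3D aperture test mentioned above can be sketched as follows. The corner-based patch test is an approximation of "fully or partially covered" introduced for illustration; the paper does not specify the overlap test at this granularity.

def aperture_covers(point, aperture_center, aperture_radius):
    """A projected sample point (or patch corner) is covered if it lies inside the
    aperture sphere centered at the current sample position on U."""
    d2 = sum((p - c) ** 2 for p, c in zip(point, aperture_center))
    return d2 <= aperture_radius ** 2

def patch_overlaps_aperture(projected_corners, aperture_center, aperture_radius):
    """Approximate corner-based test: the patch is treated as overlapping the
    aperture if any of its projected corners lies inside the aperture sphere."""
    return any(aperture_covers(c, aperture_center, aperture_radius)
               for c in projected_corners)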
6. Results
Without caching (i.e., for on-demand loading of light-field
data), the light-field pages that are required for a frame must
be uploaded into the graphics memory before rendering.
Thus, the achieved frame rate is inversely proportional to
the upload time plus the render time. The more pages to be
uploaded, the lower the frame rate. Performance measurements of light-field caching on a Quad-Core 2.67GHz with
NVIDIA GeForce GTX 580 (1.54GHz, 3GB graphics memory, of which 1GB was used as cache) are presented in the
captions of figures 1, 6, and 8.
Figure 7: Ray sampling (a) and page testing (b) for spherical parameterization: U is the spherical parameter surface of the light field; S is the tangential light-field plane; c is the camera position; I is the image plane; F is the focal plane; i is the rendered pixel; u, s are the ray (a) or patch (b) coordinates; f is the pixel (a) or patch (b) projection; a is the aperture; the red area is the overlap with the aperture.
For the chosen cache size,
pages of 200 KB were found to be optimal (see supplementary material for details). Note that the measured frame rates
depend on the adjusted aperture size – which we varied between a full aperture (i.e., sampling rays from all perspectives until the cache / graphics memory is entirely filled) and
the smallest possible aperture (i.e., sampling rays from one
perspective).
Caching requires carrying out additional strategy computations for each frame, which costs extra time. While for
LRU, only the missing pages for the current frame are identified and loaded before rendering, our probability-based
prefetching and eviction method furthermore predicts future
pages while loading the pages for the current frame. In contrast to LRU, it thus allows uploading additional pages in
parallel to rendering. For comparing the caching strategies,
we enforce the overall frame rate to not fall below a constant minimum. Therefore, the page testing and upload time
slots must be constrained to a fixed maximum duration that
is available for the prefetching of additional pages. If not all
required pages can be uploaded in time, the image that is
rendered from the available pages is incomplete and suffers
from visual degradation when sampled from a partial light
field. Thus, instead of determining frame rates for a constant image quality, we measure visual degradation for a constant frame rate.
Figure 8: A 1.44 gigaray light field of a synthetic scene (spatial resolution: 2,048×2,048, angular resolution: 19×19, 4.23 GB) rendered with our caching approach at 5-40 fps (full aperture - smallest aperture, at a render resolution of 1280×720, using a two-plane parameterization). Top row: Visual degradation due to missing light-field data during rendering. The plots indicate the amount of missing light-field data in % (y-axis) at different frames (x-axis). The red color illustrates the amount of missing light-field data at local image regions for the current frame. Bottom row: Close-ups showing focus errors and missing pixels due to incomplete light-field data. The columns present different caching strategies: ground truth (a,f), on-demand loading (b,g), perspective dead reckoning (c,h), LRU (d,i), and our approach (e,j).
Figure 9: Measured visual degradation (y-axis: missing ray contributions in %) for a sequence of frames (x-axis) during identical navigation traces for (a) the two-plane parameterization with the light field shown in figure 8, (b) the panoramic parameterization with the light field shown in figure 1, and (c) the spherical parameterization with the light field shown in figure 6. The same maximum strategy computation and upload times were chosen in all cases. While on-demand loading (green) quickly becomes unusable, caching reduces the number of degraded pixels for a minimum constant frame rate. Our probability-based approach (black) generally leads to less image degradation than perspective dead reckoning (blue), or than LRU (red) that operates on an atomic page level.
Figure 10: Measured SSIM index (y-axis) when comparing light-field-cached volume rendering (red) or the volume rendering preview (blue) with the full-resolution volume rendering (ground truth) for a sequence of frames (x-axis: rendering duration in seconds) during an identical navigation trace at the same average frame rate of 25 fps. An SSIM index of 1.0 indicates a perfect match. For visual reference: the SSIM indices when comparing figures 6b,c with figure 6a are 0.79 and 0.96, respectively.
Figures 8 and 9 present visual degradation measurements for the three caching strategies (LRU, perspective
dead reckoning, and our probability-based method), and for
on-demand loading (i.e., without any caching applied). Visual degradation is computed as the normalized average of
the missing ray contributions (using a triangle weight function for aperture filtering, as in [IMG00]) that are required
for rendering all pixels of a frame. For each parameterization, these measurements were taken for identical navigation
traces and with identical maximum computation and upload
times for all strategies. Without caching (i.e., for on-demand
loading) a large number of light-field pages may have to be
uploaded before rendering high-resolution (spatial and angular) light fields. Pages that already exist in the graphics
memory are simply overwritten, even if they could be reused. Any caching strategy will improve this situation. For
the same frame rate, our probability-based strategy generally
leads to less visual degradation than perspective dead reckoning, or than LRU that operates on an atomic page level.
Timings of the strategies’ different computation phases are
provided in the supplementary material.
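For completeness, the degradation measure can be sketched as follows, with the triangle weight taken as 1 − d/a for a ray at distance d from the aperture center of radius a; this reading of the aperture filter from [IMG00] and the per-ray input format are our assumptions.

def visual_degradation(rays):
    """rays: iterable of (distance_to_aperture_center, aperture_radius, missing)
    tuples for all rays required by one frame. Returns the weighted fraction of
    missing ray contributions, in percent."""
    missing_weight, total_weight = 0.0, 0.0
    for distance, aperture_radius, missing in rays:
        weight = max(0.0, 1.0 - distance / aperture_radius)   # triangle filter
        total_weight += weight
        if missing:
            missing_weight += weight
    return 100.0 * missing_weight / total_weight if total_weight > 0.0 else 0.0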
Figure 10 illustrates for the z-stack shown in figure 6 that
our light-field-cached volume rendering leads to better image quality than the low-level-of-detail preview of a volume
renderer at the same average frame rate (25 fps in this example, on the same hardware as above). We measured the
Structural SIMilarity (SSIM) index [WBSS04] between images rendered by our method (LFCVR), or by the volume
renderer at 25 fps (preview), and the corresponding full-resolution volume rendering at 0.4 fps for a sequence of
frames during identical navigation traces (using a spherical
parameterization). For volume rendering, we used the Visualization Toolkit (VTK). Any faster volume renderer will
lead to a quicker filling of the light-field cache and therefore to a faster transition from volume rendering to light-field rendering. Light-field-cached volume rendering only
becomes inefficient if the volume renderer approaches the
quality and performance of light-field rendering.
7. Limitations and Future Work
In our current implementation of light-field caching, we assume that light fields fit entirely into the main memory.
We are planning to introduce a second cache level that allows uploading light-field pages dynamically from hard disk.
Currently, we consider only two levels of detail (the full-resolution light field and the fallback light field). MIP mapping in combination with dynamic page sizes can reduce
aliasing during rendering and leads to a better usage of the
available cache size and memory bandwidth. A more advanced, non-linear parameter prediction will also lead to
improvements. Furthermore, a hierarchical clustering of the
light-field data in combination with compression will result
in more efficient probability calculations and page uploads.
Uniform and insufficient sampling of light-field perspectives (i.e., samples on U) leads to sampling artifacts (i.e.,
ghosting) in the rendered images. On-demand creation and
dynamic management of new and unstructured light-field
perspectives based on the user interaction can reduce these
artifacts. This would be particularly valuable for light-field-cached volume rendering. A faster volume renderer (e.g.,
running on a second GPU) can lead to an additional speed-up
and a quicker transition from volume rendering to light-field
rendering. However, light-field-cached volume rendering is
useful only for navigation tasks. If other rendering parameters (such as the transfer function) are changed, then the
light-field cache must be reset and newly filled. Thus, for
frequent changes in rendering settings other than the viewing parameters, light-field-cached volume rendering will not
be more efficient than standard volume rendering.
For imaging panoramic light fields, reliable direct registration of sub-light-fields instead of registering focal stacks
computed from them would ease our current confinement
to Lambertian scenes with modest depth discontinuities and
would better model the actual capturing process. This, however, might be less robust since it relies on precise registration of sub-light-fields that requires dense and robust scene
features. We will investigate this in the future.
8. Summary and Conclusion
This paper makes three contributions: First, we have described a caching framework that enables interactive rendering of gigaray light fields. By applying a new probability-based prefetching and eviction strategy that combines dead reckoning and heuristic parameter variations based on atomic cache units, it not only supports an efficient transfer of light-field data to the graphics board for local rendering, but can also be beneficial for interactive light-field streaming over a network. It outperforms LRU as a classical caching strategy, as well as perspective dead reckoning approaches that have been applied to light fields before.
Second, we have presented a robust panoramic light-field
imaging technique, the very first panoramic light field, and
an appropriate parameterization scheme for rendering and
caching such light fields. Third, we have shown that integrating light-field caching into a volume renderer achieves
high and detailed image quality when exploring large volumetric datasets interactively.
We have considered applications, such as light-field photography and imaging, in which interest is growing, and the
visualization of increasingly large image stacks from evolving scanning microscopes.
Acknowledgments
We thank the Carl Zeiss AG and the Raytrix GmbH for
providing data. This project is funded by the Austrian
Science Fund (FWF) under contract number P 24907N23.
References
[BEA∗08] Blasco J., Escrivà M., Abad F., Quirós R., Camahort E., Vivó R.: A generalized light-field API and management system. In Proc. International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision (2008), pp. 87–94.
[BL07] Brown M., Lowe D. G.: Automatic panoramic image stitching using invariant features. Int. J. Comput. Vision 74, 1 (2007), 59–73.
[CNLE09] Crassin C., Neyret F., Lefebvre S., Eisemann E.: GigaVoxels: ray-guided streaming for efficient and detailed voxel rendering. In Proc. Symposium on Interactive 3D Graphics and Games (2009), pp. 15–22.
[DLD12] Davis A., Levoy M., Durand F.: Unstructured light fields. Computer Graphics Forum (2012), 305–314.
[IMG00] Isaksen A., McMillan L., Gortler S. J.: Dynamically reparameterized light fields. ACM Trans. Graph. (2000), 297–306.
[JMY∗07] Jones A., McDowall I., Yamada H., Bolas M., Debevec P.: Rendering for an interactive 360° light field display. In ACM Trans. Graph. (2007).
[KUDC07] Kopf J., Uyttendaele M., Deussen O., Cohen M. F.: Capturing and viewing gigapixel images. ACM Trans. Graph. (2007).
[LD10] Levin A., Durand F.: Linear view synthesis using a dimensionality gap light field prior. In Proc. IEEE Computer Vision and Pattern Recognition (2010), pp. 1831–1838.
[LWH∗11] Lanman D., Wetzstein G., Hirsch M., Heidrich W., Raskar R.: Polarization fields: dynamic light field display using multi-layer LCDs. ACM Trans. Graph. 3 (2011), 1–9.
[LZWS00] Li J., Zhou K., Wang Y., Shum H.: A novel image-based rendering system with a longitudinally aligned camera array. Computer Graphics Forum (2000), 107–114.
[Ng05] Ng R.: Fourier slice photography. In ACM Trans. Graph. (2005), pp. 735–744.
[PBP01] Peleg S., Ben-Ezra M., Pritch Y.: Omnistereo: panoramic stereo imaging. IEEE Trans. Pattern Analysis and Machine Intelligence 23, 3 (2001), 279–290.
[RKG07] Ramanathan P., Kalman M., Girod B.: Rate-distortion optimized interactive light field streaming. IEEE Trans. Multimedia 9 (2007), 813–825.
[RSTK08] Rezk-Salama C., Todt S., Kolb A.: Raycasting of light field galleries from volumetric data. Computer Graphics Forum 27, 3 (2008), 839–846.
[SH99] Shum H., He L.: Rendering with concentric mosaics. In ACM Trans. Graph. (1999), pp. 299–306.
[TMJ07] Trotts I., Mikula S., Jones E. G.: Interactive visualization of multiresolution image stacks in 3D. NeuroImage 35, 3 (2007), 1038–1043.
[UES01] Uyttendaele M., Eden A., Szeliski R.: Eliminating ghosting and exposure artifacts in image mosaics. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (2001), pp. 509–516.
[WBSS04] Wang Z., Bovik A. C., Sheikh H. R., Simoncelli E. P.: Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Processing 13, 4 (2004), 600–612.