# GPU Volume Rendering - Graphics


1. ## GPU Volume Rendering

I'm reading the GPU Gems I Chapter 39 article "Volume Rendering
Techniques". For those that have the book, there is a paragraph on
page 672 I do not understand:

"When a data set is stored as a set of 2D texture slices, the proxy
polygons are simply rectangles aligned with the slices. Despite being
faster, this approach has several disadvantages. First, it requires
three times more memory, because the data slices need to be replicated
along each principal direction."

Why does it take three times more memory? If you had a 128x128x128
volume, if you were using 2D slices instead of a 3D texture, wouldn't
you just have a 128 element array of 128x128 2D texture slices? It
seems the memory is the same as a 128x128x128 3D texture. And I don't
understand what "the data slices need to be replicated along each
principal direction" means.

2. ## Re: GPU Volume Rendering

<dragonslayer008@hotmail.com> wrote in message
> I'm reading the GPU Gems I Chapter 39 article "Volume Rendering
> Techniques". For those that have the book, there is a paragraph on
> page 672 I do not understand:
>
> "When a data set is stored as a set of 2D texture slices, the proxy
> polygons are simply rectangles aligned with the slices. Despite being
> faster, this approach has several disadvantages. First, it requires
> three times more memory, because the data slices need to be replicated
> along each principal direction."
>
> Why does it take three times more memory? If you had a 128x128x128
> volume, if you were using 2D slices instead of a 3D texture, wouldn't
> you just have a 128 element array of 128x128 2D texture slices? It
> seems the memory is the same as a 128x128x128 3D texture. And I don't
> understand what "the data slices need to be replicated along each
> principal direction" means.

Suppose your slices are z-slices, so your rectangles occur with constant z.
When the view direction is in the z-direction, everything renders fine. Now
move the camera so that the view direction is in the x-direction. You are
looking at the rectangles "edge on", which does not render very well. You
can add more rectangles that are perpendicular to the x-direction, but then
you do not have any 2D textures to go with them. This is where the 3D
texture comes in, because you can have rectangles perpendicular to x,
rectangles perpendicular to y, and rectangles perpendicular to z, all
rectangles assigned *3D texture coordinates*.

In the 2D approach, you can construct a set of textures for the x-direction
rectangles from the z-direction textures (and the same for the y-direction
rectangles). "Principal direction" in that context means coordinate axis
directions.
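A quick way to see the factor of three is to build the three axis-aligned slice stacks from one volume. A small NumPy sketch (an 8^3 volume standing in for the 128^3 example; the variable names are my own):

```python
import numpy as np

# Hypothetical 8^3 volume standing in for the 128^3 data set, indexed [z, y, x].
n = 8
volume = np.arange(n ** 3, dtype=np.float32).reshape(n, n, n)

# One stack of 2D slices per principal direction.  Each stack holds the
# full volume, just sliced along a different axis, so the three stacks
# together take three times the memory of the single 3D texture.
z_stack = volume                                            # z_stack[k] == volume[k, :, :]
y_stack = np.ascontiguousarray(volume.transpose(1, 0, 2))   # y_stack[j] == volume[:, j, :]
x_stack = np.ascontiguousarray(volume.transpose(2, 0, 1))   # x_stack[i] == volume[:, :, i]

total = z_stack.nbytes + y_stack.nbytes + x_stack.nbytes
print(total // volume.nbytes)  # -> 3
```

The copies are unavoidable on hardware that only samples 2D textures: each stack has to be uploaded as its own set of contiguous slice images.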

Even the 3D-texture approach is not without problems, especially for medical
images where the spacing in the z-direction tends to be larger than in the
x- and y-directions. The transition of view direction from mostly
z-facing to mostly x-facing (or y-facing) is noticeable.

--
Dave Eberly
http://www.geometrictools.com

3. ## Re: GPU Volume Rendering

> Suppose your slices are z-slices, so your rectangles occur with constant z.
> When the view direction is in the z-direction, everything renders fine. Now
> move the camera so that the view direction is in the x-direction. You are
> looking at the rectangles "edge on", which does not render very well. You
> can add more rectangles that are perpendicular to the x-direction, but then
> you do not have any 2D textures to go with them.
>
> In the 2D approach, you can construct a set of textures for the x-direction
> rectangles from the z-direction textures (and the same for the y-direction
> rectangles). "Principal direction" in that context means coordinate axis
> directions.

Thanks for your reply. What if the viewer is looking at an angle,
though--how does the 2D approach work? From my understanding, the
rectangles are parallel to the view plane, so they won't line up with
any of the three principal axes.

4. ## Re: GPU Volume Rendering

On Aug 29, 12:27 pm, dragonslayer...@hotmail.com wrote:
> > Suppose your slices are z-slices, so your rectangles occur with constant z.
> > When the view direction is in the z-direction, everything renders fine. Now
> > move the camera so that the view direction is in the x-direction. You are
> > looking at the rectangles "edge on", which does not render very well. You
> > can add more rectangles that are perpendicular to the x-direction, but then
> > you do not have any 2D textures to go with them.

>
> > In the 2D approach, you can construct a set of textures for the x-direction
> > rectangles from the z-direction textures (and the same for the y-direction
> > rectangles). "Principal direction" in that context means coordinate axis
> > directions.

>
> Thanks for your reply. What if the viewer is looking at an angle,
> though--how does the 2D approach work? From my understanding, the
> rectangles are parallel to the view plane, so they won't line up with
> any of the three principal axes.

Do not waste too much of your time on obsolete volume rendering
techniques; once you understand the texture-mapping VR techniques, move
on to volumetric ray tracing/casting. It provides dramatically better
rendering quality; the trick is to make it run fast on the GPU.
Multi-core CPUs currently outperform GPUs in this regard, but I believe
it is a matter of time before GPUs beat CPUs at ray tracing.
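For anyone who has not seen it, the core of a ray caster is just a front-to-back compositing loop along each ray. A toy CPU sketch in Python (the transfer function, step size, and nearest-neighbour sampling are all made-up simplifications, not any particular renderer's API):

```python
import numpy as np

def cast_ray(volume, origin, direction, n_steps=64, step=0.02):
    """March one ray through a scalar volume with coordinates in [0, 1]^3,
    compositing samples front to back."""
    color, alpha = 0.0, 0.0
    pos = np.array(origin, dtype=np.float64)
    d = np.array(direction, dtype=np.float64)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        # Nearest-neighbour sample; a real renderer would interpolate trilinearly.
        idx = np.round(pos * (np.array(volume.shape) - 1)).astype(int)
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            break  # left the volume
        density = float(volume[tuple(idx)])
        # Toy transfer function: density maps directly to colour and opacity.
        a = density * 0.1
        color += (1.0 - alpha) * a * density
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:  # early ray termination
            break
        pos += d * step
    return color, alpha
```

The early-termination test is one of the reasons ray casting can compete on speed: fully occluded samples are never fetched at all, which the slice-compositing approaches cannot avoid.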


5. ## Re: GPU Volume Rendering

On Sep 1, 2:35 pm, "stefanba...@yahoo.com" <stefanba...@yahoo.com>
wrote:
> On Aug 29, 12:27 pm, dragonslayer...@hotmail.com wrote:
>
> > > Suppose your slices are z-slices, so your rectangles occur with constant z.
> > > When the view direction is in the z-direction, everything renders fine. Now
> > > move the camera so that the view direction is in the x-direction. You are
> > > looking at the rectangles "edge on", which does not render very well. You
> > > can add more rectangles that are perpendicular to the x-direction, but then
> > > you do not have any 2D textures to go with them.

>
> > > In the 2D approach, you can construct a set of textures for the x-direction
> > > rectangles from the z-direction textures (and the same for the y-direction
> > > rectangles). "Principal direction" in that context means coordinate axis
> > > directions.

>
> > Thanks for your reply. What if the viewer is looking at an angle,
> > though--how does the 2D approach work? From my understanding, the
> > rectangles are parallel to the view plane, so they won't line up with
> > any of the three principal axes.

>
> Do not waste too much of your time on obsolete volume rendering
> techniques; once you understand the texture-mapping VR techniques,
> move on to volumetric ray tracing/casting.

What about a marching cubes approach? Shading can also be added fairly
easily on the polygons from the volumetric data (i.e., using the
gradient of the volume as surface normals).

> It provides dramatically better
> rendering quality; the trick is to make it run fast on the GPU.
> Multi-core CPUs currently outperform GPUs in this regard, but I
> believe it is a matter of time before GPUs beat CPUs at ray tracing.

If the entire volume must be visible at once (even occluded portions),
then yes, ray tracing is probably the way to go. If not, I would say
general techniques for optimizing 3D geometry rendering will give much
better performance.
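On the shading point: the usual way to get per-vertex normals for marching-cubes polygons is the central-difference gradient of the volume, negated and normalised. A minimal sketch (the function name and the [x, y, z] indexing convention are my own):

```python
import numpy as np

def gradient_normal(volume, x, y, z):
    """Shading normal at an interior voxel via central differences.
    For density data the surface normal points against the gradient."""
    g = np.array([
        volume[x + 1, y, z] - volume[x - 1, y, z],
        volume[x, y + 1, z] - volume[x, y - 1, z],
        volume[x, y, z + 1] - volume[x, y, z - 1],
    ], dtype=np.float64) * 0.5
    length = np.linalg.norm(g)
    return -g / length if length > 0 else g
```

For example, in a volume whose density increases along +x, the returned normal points along -x, i.e. out of the dense region, which is what you want for lighting.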
