
Depth-buffer (or z-buffer) Method

The depth-buffer method is a fast and simple technique for identifying visible surfaces. It is also referred to as the z-buffer method, because object depth is usually measured from the view plane along the z-axis of the viewing system. The z-buffer algorithm compares surface depths at each pixel position (x, y) on the view plane. We make the following assumptions:

- The plane of projection is the z = 0 plane

- Orthographic parallel projection is used

For each pixel position (x, y) on the view plane, the surface with the smallest z-coordinate at that position is visible. For illustration, Figure 3 shows three surfaces S1, S2, and S3, of which S1 has the smallest z-value at position (x, y). Hence S1 is visible at that position, and its surface intensity value at (x, y) is stored in the refresh buffer.

 

[Figure 3: Three surfaces S1, S2, and S3 overlapping pixel position (x, y) on the view plane]

Here the projection is orthographic and the projection plane is taken to be the xy-plane. Hence each point (x, y, z) on a polygon surface corresponds to the orthographic projection point (x, y) on the projection plane. Thus, for each pixel position (x, y) on the view plane, object depths are compared by comparing z-values, as shown in Figure 3.

Implementing the z-buffer algorithm requires two buffer areas, each a 2-D array:

1) Depth buffer, z-buffer(i, j): stores, for every (x, y) position on the view plane, the smallest z-value found among the surfaces processed so far.

2) Refresh buffer, COLOR(i, j): stores the intensity value for each position.

We summarize the steps of the depth-buffer algorithm as follows:

Given: a list of polygons {P1, P2, ..., Pn}.

Step 1: Initialize all positions (x, y) in the depth buffer to 1.0 (the maximum depth), and initialize the refresh buffer to the background intensity; that is, z-buffer(x, y) := 1.0 and COLOR(x, y) := background color.

Step 2: Each polygon surface listed in the polygon table is then processed (scan-converted), one scan line at a time, computing the depth (z-value) at each (x, y) pixel position. The computed depth is then compared with the value previously stored in the depth buffer at that position to determine visibility.

If the calculated z-depth is less than the value stored in the depth buffer, the new depth value is stored in the depth buffer, and the surface intensity at that position is determined and placed in the same (x, y) location in the refresh buffer:

 

If z-depth < z-buffer(x,y), then set

    z-buffer(x,y) = z-depth;
    COLOR(x,y) = Isurf(x,y);    // where Isurf(x,y) is the projected intensity value
                                // of the polygon surface Pi at pixel position (x,y)

After all surfaces have been processed, the depth buffer holds the depth values of the visible surfaces, and the refresh buffer holds the corresponding intensity values for those surfaces.

In pseudo-code, the depth-buffer algorithm can be summarized as follows:

Given: a list of polygons {P1, P2, ..., Pn}

Output: a COLOR array that holds the intensity of the visible polygon surfaces.

Initialize:

            z-buffer(x,y) := 1.0;    { maximum depth, as in Step 1 }
            COLOR(x,y) := background color.

      Begin
            For (each polygon P in the polygon list) do {
                For (each pixel (x,y) that intersects P) do {
                    Calculate z-depth of P at (x,y)
                    If (z-depth < z-buffer(x,y)) then {
                        z-buffer(x,y) = z-depth;
                        COLOR(x,y) = intensity of P at (x,y);
                    }
                }
            }
      display COLOR array.
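
The pseudo-code above translates almost directly into a runnable form. The following is a minimal Python sketch of the same loop, given as an illustration rather than the text's own code; the names WIDTH, HEIGHT, BACKGROUND, and fragments_of are hypothetical, and depths are assumed to be normalized to the range [0, 1].

    # Minimal sketch of the depth-buffer loop (illustration only).
    WIDTH, HEIGHT = 640, 480
    BACKGROUND = (0, 0, 0)

    def render(polygons, fragments_of):
        # Initialize the depth buffer to the maximum depth (1.0) and the
        # refresh (color) buffer to the background color.
        z_buffer = [[1.0] * WIDTH for _ in range(HEIGHT)]
        color = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]
        for p in polygons:
            # fragments_of(p) is assumed to yield the scan-converted pixels
            # of polygon p as tuples (x, y, z, c).
            for x, y, z, c in fragments_of(p):
                if z < z_buffer[y][x]:        # closer than the stored depth?
                    z_buffer[y][x] = z        # keep the new depth
                    color[y][x] = c           # and the surface intensity
        return color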

Calculation of the depth value z for a surface position (x, y):

We know that, for any polygon face, the equation of the plane has the form:

A.x + B.y + C.z + D = 0   --------------------(1)

where A, B, C, and D are known to us.

To compute the depth value z, we solve the plane equation (1) for z:

z = (-A.x - B.y - D)/C   --------(2)
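
As a small illustration (not from the original text), equation (2) can be evaluated directly for any plane with C ≠ 0; the function name depth_at below is hypothetical:

    # Sketch: solve A.x + B.y + C.z + D = 0 for z at pixel (x, y),
    # as in equation (2); requires C != 0.
    def depth_at(A, B, C, D, x, y):
        return (-A * x - B * y - D) / C

    # For example, for the plane 2x + 3y + 4z - 24 = 0,
    # depth_at(2, 3, 4, -24, 1, 2) evaluates to 4.0.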

Consider the polygon in Figure 4, intersected by scan lines at y and y - 1 on the y-axis.

[Figure 4: A polygon intersected by scan lines at y and y - 1]

 

Now, if equation (2) evaluates to depth z at position (x, y), then at the next position (x+1, y) along the scan line the depth zH can be obtained as:

 zH =[-A.(x+1) -B.y-D]/C   --------------(3)

Subtracting equation (3) from equation (2), we get:

 z-zH =A/C

 zH =z-A/C   -----------------(4)

The ratio A/C is constant for each surface, so succeeding depth values across a scan line can be obtained from the preceding value with a single subtraction. On each scan line, we begin by calculating the depth on the left edge of the polygon that intersects that scan line, and then compute the depth at each successive position across the scan line from equation (4) until we reach the right edge of the polygon.
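
As a small sketch of this incremental scheme (an illustration, not the original text; the function name depths_across_span is hypothetical), the plane equation is evaluated in full only once, at the left edge, and every following pixel needs just one subtraction:

    # Incremental depths across one scan line, following equation (4):
    # z at x+1 equals z at x minus the constant A/C.
    def depths_across_span(A, B, C, D, x_left, x_right, y):
        z = (-A * x_left - B * y - D) / C   # full evaluation once, at the left edge
        step = A / C                        # constant for the whole surface
        zs = [z]
        for _ in range(x_left, x_right):
            z -= step                       # z_(x+1) = z_x - A/C
            zs.append(z)
        return zs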

Similarly, if we are processing downward, the vertical line at x intersects the (y-1)th scan line at the point (x, y-1). Hence, from equation (2), the depth zv is obtained as:

zv=[-A.x-B.(y-1) -D]/C

=([-A.x-B.y-D]/C)+B/C

=z+B/C  ----------------(5)

Starting at the top vertex, we can recursively compute the x position down the left edge of the polygon by the relation x' = x - 1/m, where m is the slope of the edge, as in Figure 5. Using this x position, the depth z' at (x', y-1) on the (y-1)th scan line is obtained as:

z'=[-A.x'-B.(y-1) -D]/C

=[-A.(x-1/m) -B.(y-1) -D]/C

=z+(A/m+B)/C   ---------------------(6)

Because m = ∞ for a vertical edge, equation (6) reduces to equation (5) in that case.
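
A small sketch of this edge recursion follows (again an illustration, not from the original text; step_down_left_edge is a hypothetical name): moving from scan line y to y-1 along a left edge of slope m updates x by -1/m and z by the constant (A/m + B)/C, and a vertical edge falls back to the B/C step of equation (5).

    # One step down a left polygon edge, following equations (5) and (6).
    def step_down_left_edge(A, B, C, x, z, m=None):
        if m is None:                      # treat None as a vertical edge (m = infinity)
            return x, z + B / C            # equation (5): x is unchanged
        return x - 1.0 / m, z + (A / m + B) / C   # equation (6)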


 

Figure 5: Intersection positions on successive scan lines along a left polygon edge

Hence, when moving down a vertical line from one scan line to the next, we can obtain succeeding depth values from the preceding values with a single addition, using equation (5):

zv = z + B/C.

In summary, the above calculations give:

- Moving left to right across a scan line, succeeding depth values are obtained from the preceding values with a single subtraction: z' = z - A/C.

- When processing down from one scan line to the next along a polygon edge, succeeding depth values are obtained from the preceding values with a single addition: z' = z + (A/m + B)/C. Conversely, when processing up, succeeding depth values are obtained from the preceding values with a single subtraction: z' = z - (A/m + B)/C.

Figure 6 summarizes the above calculations.


Figure 6: Successive depth values while processing left to right or processing up across scan lines
