Octree Textures on the GPU

Posted by Hemprasad Y. Badgujar on February 24, 2013


Sylvain Lefebvre, Samuel Hornus, and Fabrice Neyret

Texture mapping is a very effective and efficient technique for enriching the appearance of polygonal models with details. Textures store not only color information, but also normals for bump mapping and various shading attributes to create appealing surface effects. However, texture mapping usually requires parameterizing a mesh by associating a 2D texture coordinate with every mesh vertex. Distortions and seams are often introduced by this difficult process, especially on complex meshes.

The 2D parameterization can be avoided by defining the texture inside a volume enclosing the object. Debry et al. (2002) and Benson and Davis (2002) have shown how 3D hierarchical data structures, named octree textures, can be used to efficiently store color information along a mesh surface without texture coordinates. This approach has two advantages. First, color is stored only where the surface intersects the volume, thus reducing memory requirements. Figures 37-1 and 37-2 illustrate this idea. Second, the surface is regularly sampled, and the resulting texture does not suffer from any distortions. In addition to mesh painting, any application that requires storing information on a complex surface can benefit from this approach.

Figure 37-1 An Octree Texture Surrounding a 3D Model

Figure 37-2 Unparameterized Mesh Textured with an Octree Texture

This chapter details how to implement octree textures on today’s GPUs. The octree is directly stored in texture memory and accessed from a fragment program. We discuss the trade-offs between performance, storage efficiency, and rendering quality. After explaining our implementation in Section 37.1, we demonstrate it on two different interactive applications:

  • A surface-painting application (Section 37.2). In particular, we discuss the different possibilities for filtering the resulting texture (Section 37.2.3). We also show how a texture defined in an octree can be converted into a standard texture, possibly at runtime (Section 37.2.4).
  • A nonphysical simulation of liquid flowing along a surface (Section 37.3). The simulation runs entirely on the GPU.

37.1 A GPU-Accelerated Hierarchical Structure: The N³-Tree

37.1.1 Definition

An octree is a regular hierarchical data structure. The first node of the tree, the root, is a cube. Each node has either eight children or no children. The eight children form a 2x2x2 regular subdivision of the parent node. A node with children is called an internal node. A node without children is called a leaf. Figure 37-3 shows an octree surrounding a 3D model where the nodes that have the bunny’s surface inside them have been refined and empty nodes have been left as leaves.

Figure 37-3 An Octree Surrounding a 3D Model

In an octree, the resolution in each dimension increases by two at each subdivision level. Thus, to reach a resolution of 256x256x256, eight levels are required (2^8 = 256). Depending on the application, one might prefer to divide each edge by an arbitrary number N rather than 2. We therefore define a more generic structure called an N³-tree. In an N³-tree, each node has N³ children. The octree is an N³-tree with N = 2. A larger value of N reduces the tree depth required to reach a given resolution, but it tends to waste memory because the surface is less closely matched by the tree.
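The depth/resolution trade-off above can be sketched in a few lines of C++ (illustrative only; the chapter's own code is Cg). Each subdivision level multiplies the per-axis resolution by N, so the required depth is the smallest d with N^d >= resolution:

```cpp
#include <cassert>

// Number of subdivision levels an N^3-tree needs so that N^depth reaches
// the target per-axis resolution.
int levelsForResolution(int n, int resolution) {
    int depth = 0;
    long long r = 1;            // per-axis resolution of a depth-`depth` grid
    while (r < resolution) {
        r *= n;                 // each level multiplies the resolution by n
        ++depth;
    }
    return depth;
}
```

For an octree (N = 2), 256 per axis needs 8 levels; with N = 4 the same resolution needs only 4 levels, at the cost of looser fitting around the surface.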

37.1.2 Implementation

To implement a hierarchical tree on a GPU, we need to define how to store the structure in texture memory and how to access the structure from a fragment program.

A simple approach to implement an octree on a CPU is to use pointers to link the tree nodes together. Each internal node contains an array of pointers to its children. A child can be another internal node or a leaf. A leaf contains only a data field.
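The pointer-based CPU structure described above can be sketched as follows (a minimal C++ illustration; the field names are ours, not the chapter's):

```cpp
#include <array>
#include <cassert>
#include <memory>

// A leaf carries only data (an RGB color here); an internal node owns an
// array of pointers to its eight children. Empty children are null.
struct Rgb { unsigned char r, g, b; };

struct Node {
    std::array<std::unique_ptr<Node>, 8> children; // 2x2x2 subdivision
    bool isLeaf = false;
    Rgb data{};                                    // valid only for leaves
};
```

The GPU version replaces each `unique_ptr` with an index into a texture, as the next paragraphs explain.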

Our implementation on the GPU follows a similar approach. Pointers simply become indices within a texture. They are encoded as RGB values. The content of the leaves is directly stored as an RGB value within the parent node’s array of pointers. We use the alpha channel to distinguish between a pointer to a child and the content of a leaf. Our approach relies on dependent texture lookups (or texture indirections). This requires the hardware to support an arbitrary number of dependent texture lookups, which is the case for GeForce FX and GeForce 6 Series GPUs.

The following sections detail our GPU implementation of the N3-tree. For clarity, the figures illustrate the 2D equivalent of an octree (a quadtree).

Storage


We store the tree in an 8-bit RGBA 3D texture called the indirection pool. Each “pixel” of the indirection pool is called a cell.

The indirection pool is subdivided into indirection grids. An indirection grid is a cube of NxNxN cells (a 2x2x2 grid for an octree). Each node of the tree is represented by an indirection grid. It corresponds to the array of pointers in the CPU implementation described earlier.

A cell of an indirection grid can be empty or can contain one of the following:

  • Data, if the corresponding child is a leaf
  • The index of an indirection grid, if the corresponding child is another internal node

Figure 37-4 illustrates our tree storage representation.

Figure 37-4 Storage in Texture Memory (2D Case)

We note S = Su × Sv × Sw the number of indirection grids stored in the indirection pool and R = (N × Su) × (N × Sv) × (N × Sw) the resolution in cells of the indirection pool.

Data values and indices of children are both stored as RGB triples. The alpha channel is used as a flag to determine the cell content (alpha = 1 indicates data; alpha = 0.5 indicates index; alpha = 0 indicates empty cell). The root of the tree is always stored at (0, 0, 0) within the indirection pool.
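The alpha-flag convention just described can be made concrete with a small C++ sketch (illustrative; the thresholds mirror the ones the chapter's Cg lookup uses):

```cpp
#include <cassert>

// Classify a cell of the indirection pool by its alpha channel:
// alpha = 1 -> the RGB triple is leaf data, alpha = 0.5 -> it is the index
// of a child indirection grid, alpha = 0 -> the cell is unused.
enum class CellKind { Empty, Index, Data };

CellKind classifyCell(float alpha) {
    if (alpha > 0.9f) return CellKind::Data;
    if (alpha < 0.1f) return CellKind::Empty;
    return CellKind::Index;
}
```

Using thresholds (0.1 and 0.9) rather than exact comparisons is deliberate: 8-bit channels and shader arithmetic do not reproduce 0.5 exactly.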

Accessing the Structure: Tree Lookup

Once the tree is stored in texture memory, we need to access it from a fragment program. As with standard 3D textures, the tree defines a texture within the unit cube. We want to retrieve the value stored in the tree at a point M ∈ [0, 1]^3. The tree lookup starts from the root and successively visits the nodes containing the point M until a leaf is reached.

Let I_D be the index of the indirection grid of the node visited at depth D. The tree lookup is initialized with I_0 = (0, 0, 0), which corresponds to the tree root. When we are at depth D, we know the index I_D of the current node's indirection grid. We now explain how we retrieve I_{D+1} from I_D.

The lookup point M is inside the node visited at depth D. To decide what to do next, we need to read from the indirection grid I_D the value stored at the location corresponding to M. To do so, we need to compute the coordinates of M within the node.

At depth D, a complete tree produces a regular grid of resolution N^D × N^D × N^D within the unit cube. We call this grid the depth-D grid. Each node of the tree at depth D corresponds to a cell of this grid. In particular, M is within the cell corresponding to the node visited at depth D. The coordinates of M within this cell are given by frac(M × N^D). We use these coordinates to read the value from the indirection grid I_D. The lookup coordinates within the indirection pool are thus computed as:

  P = (frac(M × N^D) + I_D) / S
We then retrieve the RGBA value stored at P in the indirection pool. Depending on the alpha value, either we will return the RGB color if the child is a leaf, or we will interpret the RGB values as the index of the child's indirection grid (I_{D+1}) and continue to the next tree depth. Figure 37-5 summarizes this entire process for the 2D case (quadtree).

Figure 37-5 Example of a Tree Lookup

The lookup ends when a leaf is reached. In practice, our fragment program also stops after a fixed number of texture lookups: on most hardware, it is only possible to implement loop statements with a fixed number of iterations (however, early exit is possible on GeForce 6 Series GPUs). The application is in charge of limiting the tree depth with respect to the maximum number of texture lookups done within the fragment program. The complete tree lookup code is shown in Listing 37-1.

Example 37-1. The Tree Lookup Cg Code

float4 tree_lookup(uniform sampler3D IndirPool, // Indirection Pool
                   uniform float3 invS,         // 1 / S
                   uniform float N,
                   float3 M)                    // Lookup coordinates
{
  float4 I = float4(0.0, 0.0, 0.0, 0.0);
  float3 MND = M;
  // MAX_DEPTH is a fixed, application-defined iteration count
  for (float i = 0; i < MAX_DEPTH; i += 1.0)  // fixed # of iterations
  {
    float3 P;
    // compute lookup coords. within current node
    P = (MND + floor(0.5 + I.xyz * 255.0)) * invS;
    // access indirection pool
    if (I.w < 0.9)                      // already in a leaf?
      I = (float4)tex3D(IndirPool, P);  // no, continue to next depth
#ifdef DYN_BRANCHING // early exit if hardware supports dynamic branching
    if (I.w > 0.9)    // a leaf has been reached
      break;
    if (I.w < 0.1)    // empty cell
      break;
#endif
    // compute pos within next depth grid
    MND = MND * N;
  }
  return (I);
}
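As a companion to Listing 37-1, the same traversal can be sketched on the CPU for the 2D case (a quadtree, N = 2), with a plain array standing in for the indirection pool. The pool contents and grid numbering below are illustrative, not taken from the chapter:

```cpp
#include <algorithm>
#include <cassert>

// Cells mirror the GPU layout: RGB holds either leaf data or a child grid
// index, and alpha flags the content (1 = data, 0.5 = index, 0 = empty).
struct Cell { float r, g, b, a; };

constexpr int N = 2;        // quadtree: 2x2 indirection grids
constexpr int S = 2;        // two grids in the pool: the root and one child
Cell pool[S][N][N] = {};    // pool[grid][cy][cx]; zero-init means "empty"

void buildExamplePool() {
    pool[0][0][0] = {1.0f, 0, 0, 0.5f}; // root cell (0,0): index of grid 1
    pool[0][1][1] = {0, 0, 1.0f, 1.0f}; // root cell (1,1): blue leaf
    pool[1][0][0] = {1.0f, 0, 0, 1.0f}; // grid 1, cell (0,0): red leaf
}

Cell treeLookup(float x, float y, int maxDepth) {
    int grid = 0;                              // the root is always grid 0
    for (int d = 0; d < maxDepth; ++d) {
        int cx = std::min(int(x * N), N - 1);  // cell of (x,y) in this grid
        int cy = std::min(int(y * N), N - 1);
        Cell c = pool[grid][cy][cx];
        if (c.a > 0.9f || c.a < 0.1f)          // leaf data, or empty cell
            return c;
        grid = int(c.r + 0.5f);                // red channel: child grid index
        x = x * N - cx;                        // coords within the child node,
        y = y * N - cy;                        // i.e. frac(M * N^(d+1))
    }
    return pool[grid][0][0];
}
```

The descent step `x = x * N - cx` is exactly the frac(M × N^D) computation from the text, performed incrementally.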
Further Optimizations

In our tree lookup algorithm, as we explained earlier, the computation of P requires a frac instruction. In our implementation, however, as shown in Listing 37-1, we actually avoid computing the frac by relying on the cyclic behavior of the texture units (repeat mode). We leave the detailed explanation to an appendix located on the book's CD.

We compute P as

  P = (M × N^D + D_D) / S

where D_D is an integer within the range [0, S[.

We store D_D instead of directly storing the I_D values. Please refer to the appendix on the CD for the code to compute D_D.

Encoding Indices

The indirection pool is an 8-bit 3D RGBA texture. This means that we can encode only 256 different values per channel. This gives us an addressing space of 24 bits (3 indices of 8 bits), which makes it possible to encode octrees large enough for most applications.

Within a fragment program, a texture lookup into an 8-bit texture returns a value in [0, 1]. However, we need to encode integers. Using a floating-point texture to do so would require more memory and would reduce performance. Instead, we map integers to [0, 1] with a fixed precision of 1/255 and simply multiply the fetched floating-point value by 255 to recover the integer. Note that on hardware without fixed-precision registers, we need to compute floor(0.5 + 255 × v) to avoid rounding errors.
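The encode/decode pair just described can be written out directly (a C++ sketch of the arithmetic; the shader side of this is the floor(0.5 + 255 * v) seen in Listing 37-1):

```cpp
#include <cassert>
#include <cmath>

// Store an integer index in [0, 255] as a normalized 8-bit channel value,
// and recover it with rounding to absorb floating-point error.
float encodeIndex(int index)     { return index / 255.0f; }
int   decodeIndex(float fetched) { return (int)std::floor(0.5f + 255.0f * fetched); }
```

The 0.5 bias makes the round-trip exact for every index from 0 to 255, even though 255.0f * (i / 255.0f) is not exactly i in floating point.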

37.2 Application 1: Painting on Meshes

In this section we use the GPU-accelerated octree structure presented in the previous section to create a surface-painting application. Thanks to the octree, the mesh does not need to be parameterized. This is especially useful with complex meshes such as trees, hairy monsters, or characters.

The user will be able to paint on the mesh using a 3D brush, similar to the brush used in 2D painting applications. In this example, the painting resolution is homogeneous along the surface, although multiresolution painting would be an easy extension if desired.

37.2.1 Creating the Octree

We start by computing the bounding box of the object to be painted. The object is then rescaled so that its largest dimension maps to [0, 1]. The same scaling is applied to all three dimensions because we want the painting resolution to be the same in every dimension. After this process, the mesh fits entirely within the unit box.

The user specifies the desired resolution of the painting. This determines the depth of the leaves of the octree that contain colors. For instance, if the user selects a resolution of 512^3, the leaves containing colors will be at depth 9.

The tree is created by subdividing the nodes intersecting the surface until all the leaves either are empty or are at the selected depth (color leaves). To check whether a tree node intersects the geometry, we rely on the box defining the boundary of the node. This process is depicted in Figure 37-6. We use the algorithm shown in Listing 37-2.

Figure 37-6 Building an Octree Around a Mesh Surface

This algorithm uses our GPU octree texture API. The links between nodes (indices in the indirection grids) are set up by the createChild() call. The values stored in tree leaves are set up by calling setChildAsEmpty() and setChildColor(), which also set the appropriate alpha value.

Example 37-2. Recursive Algorithm for Octree Creation

void createNode(depth, polygons, box)
{
   for all children (i, j, k) within (N, N, N)
   {
      if (depth + 1 == painting depth)       // painting depth reached?
         setChildColor(i, j, k, white)       // child is a color leaf at depth+1
      else
      {
         childbox = computeSubBox(i, j, k, box)
         if (childbox intersect polygons)
         {
            child = createChild(i, j, k)
            // recurse
            createNode(depth + 1, polygons, childbox)
         }
         else
            setChildAsEmpty(i, j, k)
      }
   }
}

37.2.2 Painting

In our application, the painting tool is drawn as a small sphere moving along the surface of the mesh. This sphere is defined by a painting center P_center and a painting radius P_radius. The behavior of the brush is similar to that of brushes in 2D painting tools.

When the user paints, the leaf nodes intersecting the painting tool are updated. The new color is computed as a weighted sum of the previous color and the painting color. The weight is such that the painting opacity decreases as the distance from P_center increases.
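The brush blend can be sketched as follows (a C++ illustration; the linear falloff is our assumption, since the chapter only states that opacity decreases with distance from P_center):

```cpp
#include <cassert>
#include <cmath>

// Weight of a paint stroke at distance `dist` from the brush center:
// full opacity at the center, fading linearly to zero at the brush radius.
float brushWeight(float dist, float radius, float opacity) {
    if (dist >= radius) return 0.0f;          // outside the brush sphere
    return opacity * (1.0f - dist / radius);  // assumed linear falloff
}

// Weighted sum of the previous color and the painting color, per channel.
float blendChannel(float oldC, float paintC, float w) {
    return oldC * (1.0f - w) + paintC * w;
}
```

Each color channel of an affected leaf is run through `blendChannel` with the weight computed from the leaf's distance to the brush center.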

To minimize the amount of data to be sent to the GPU as painting occurs, only the modified leaves are updated in texture memory. This corresponds to a partial update of the indirection pool texture (under OpenGL, we use glTexSubImage3D). The modifications are tracked on a copy of the tree stored in CPU memory.

37.2.3 Rendering

To render the textured mesh, we need to access the octree from the fragment program, using the tree lookup defined in Section 37.1.2.

The untransformed coordinates of the vertices are stored as 3D texture coordinates. These 3D texture coordinates are interpolated during the rasterization of the triangles. Therefore, within the fragment program, we know the 3D point of the mesh surface being projected in the fragment. By using these coordinates as texture coordinates for the tree lookup, we retrieve the color stored in the octree texture.

However, this produces the equivalent of a standard texture lookup in “nearest” mode. Linear interpolation and mipmapping are often mandatory for high-quality results. In the following section, we discuss how to implement these techniques for octree textures.

Linear Interpolation

Linear interpolation of the texture can be obtained by extending the standard 2D linear interpolation. Because the octree texture is a volume texture, eight samples are required for linear interpolation, as shown in Figure 37-7.

Figure 37-7 Linear Interpolation Using Eight Samples

However, we store information only where the surface intersects the volume. Some of the samples involved in the 3D linear interpolation are not on the surface and have no associated color information. Consider a sample at coordinates (i, j, k) within the maximum-depth grid (recall that the depth-D grid is the regular grid produced by a complete octree at depth D). The seven other samples involved in the 3D linear interpolation are at coordinates (i+1, j, k), (i, j+1, k), (i, j, k+1), (i, j+1, k+1), (i+1, j, k+1), (i+1, j+1, k), and (i+1, j+1, k+1). However, some of these samples may not be included in the tree, because they are too far from the surface. This leads to rendering artifacts, as shown in Figure 37-8.

Figure 37-8 Fixing Artifacts Caused by Straightforward Linear Interpolation

We remove these artifacts by modifying the tree creation process. We make sure that all of the samples necessary for linear interpolation are included in the tree. This can be done easily by enlarging the box used to check whether a tree node intersects the geometry. The box is built in such a way that it includes the previous samples in each dimension. Indeed, the sample at (i, j, k) must be added if one of the previous samples (for example, the one at (i-1, j-1, k-1)) is in the tree. This is illustrated in Figure 37-9.
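The box enlargement can be sketched as follows (a C++ illustration under our reading of the text: the box grows by one cell of the maximum-depth grid in each negative direction, so a node is created whenever the previous samples' cells touch the surface):

```cpp
#include <cassert>
#include <cmath>

struct Box { float min[3], max[3]; };

// Grow a node's intersection box by one cell of the deepest grid in the
// negative directions, so the previous samples in each dimension are covered.
Box enlargeForInterpolation(Box b, int maxDepthResolution) {
    float cell = 1.0f / maxDepthResolution; // cell size of the deepest grid
    for (int k = 0; k < 3; ++k)
        b.min[k] -= cell;                   // include the previous sample
    return b;
}
```

The enlarged box is then used in place of the node's exact bounds in the intersection test of Listing 37-2.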

Figure 37-9 Modifying the Tree Creation to Remove Linear Interpolation Artifacts

In our demo, we use the same depth for all color leaves. Of course, the octree structure makes it possible to store color information at different depths. However, doing so complicates linear interpolation. For more details, refer to Benson and Davis 2002.

Mipmapping


When a textured mesh becomes small on the screen, multiple samples of the texture fall into the same pixel. Without a proper filtering algorithm, this leads to aliasing. Most GPUs implement the mipmapping algorithm on standard 2D textures. We extend this algorithm to our GPU octree textures.

We define the mipmap levels as follows. The finest level (level 0) corresponds to the leaves of the initial octree. A coarser level is built from the previous one by merging the leaves in their parent node. The node color is set to the average color of its leaves, and the leaves are suppressed, as shown in Figure 37-10. The octree depth is therefore reduced by one at each mipmapping level. The coarsest level has only one root node, containing the average color of all the leaves of the initial tree.
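The merge step can be sketched as follows (a C++ illustration; averaging over only the non-empty children is our assumption, since the chapter just says the node takes the average color of its leaves):

```cpp
#include <cassert>
#include <cmath>

struct Color { float r, g, b; };

// Build one coarser mipmap level's node color: the average of the colors of
// its non-empty leaf children. Returns black if every child is empty.
Color averageChildren(const Color* children, const bool* nonEmpty, int count) {
    Color sum{0, 0, 0};
    int n = 0;
    for (int i = 0; i < count; ++i) {
        if (!nonEmpty[i]) continue;
        sum.r += children[i].r;
        sum.g += children[i].g;
        sum.b += children[i].b;
        ++n;
    }
    if (n == 0) return sum;
    sum.r /= n; sum.g /= n; sum.b /= n;
    return sum;
}
```

Applying this bottom-up, one level at a time, yields exactly the sequence of trees in Figure 37-10, down to a single root holding the global average.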

Figure 37-10 An Example of a Tree with Mipmapping

Storing one tree per mipmapping level would be expensive. Instead, we create a second 3D texture, called the LOD pool. The LOD pool has one cell per indirection grid of the indirection pool (see Figure 37-10, bottom row). Its resolution is thus Su × Sv × Sw (see "Storage" in Section 37.1.2). Each node of the initial tree becomes a leaf at a given mipmapping level. The LOD pool stores the color taken by the nodes when they are used as leaves in a mipmapping level.

To texture the mesh at a specific mipmapping level, we stop the tree lookup at the corresponding depth and look up the node’s average color in the LOD pool. The appropriate mipmapping level can be computed within the fragment program using partial-derivative instructions.

37.2.4 Converting the Octree Texture to a Standard 2D Texture

Our ultimate goal is to use octree textures as a replacement for 2D textures, thus completely removing the need for a 2D parameterization. However, the octree texture requires explicit programming of the texture filtering. This leads to long fragment programs. On recent GPUs, performance is still high enough for texture-authoring applications, where a single object is displayed. But for applications displaying complex scenes, such as games or simulators, rendering performance may be too low. Moreover, GPUs are extremely efficient at displaying filtered standard 2D texture maps.

Being able to convert an octree texture into a standard 2D texture is therefore important. We would like to perform this conversion dynamically: this makes it possible to select the best representation at runtime. For example, an object near the viewpoint would use the linearly interpolated octree texture and switch to the corresponding filtered standard 2D texture when it moves farther away. The advantage is that filtering of the 2D texture is natively handled by the GPU. Thus, the extra cost of the octree texture is incurred only when details are visible.

In the following discussion, we assume that the mesh is already parameterized. We describe how we create a 2D texture map from an octree texture.

To produce the 2D texture map, we render the triangles using their 2D (u, v) coordinates instead of their 3D (x, y, z) coordinates. The triangles are textured with the octree texture, using the 3D coordinates of the mesh vertices as texture coordinates for the tree lookup. The result is shown in Figure 37-11.

Figure 37-11 Converting the Octree into a Standard 2D Texture

However, this approach produces artifacts. When the 2D texture is applied to the mesh with filtering, the background color bleeds inside the texture. This happens because samples outside of the 2D triangles are used by the linear interpolation for texture filtering. It is not sufficient to add only a few border pixels: more and more pixels outside of the triangles are used by coarser mipmapping levels. These artifacts are shown in Figure 37-12.

Figure 37-12 Artifacts Resulting from Straightforward Conversion

To suppress these artifacts, we compute a new texture in which the colors are extrapolated outside of the 2D triangles. To do so, we use a simplified GPU variant of the extrapolation method known as push-pull. This method has been used for the same purpose in Sander et al. 2001.

We first render the 2D texture map as described previously. The background is set with an alpha value of 0. The triangles are rendered using an alpha value of 1. We then ask the GPU to automatically generate the mipmapping levels of the texture. Then we collapse all the mipmapping levels into one texture, interpreting the alpha value as a transparency coefficient. This is done with the Cg code shown in Listing 37-3.

Finally, new mipmapping levels are generated by the GPU from this new texture. Figures 37-13 and 37-14 show the result of this process.

Figure 37-13 Color Extrapolation

Figure 37-14 Artifacts Removed Due to Color Extrapolation

Example 37-3. Color Extrapolation Cg Code

PixelOut main(V2FI IN,
              uniform sampler2D Tex) // texture with mipmapping levels
{
  PixelOut OUT;
  float4 res = float4(0.0, 0.0, 0.0, 0.0);
  float alpha = 0.0;
  // start with coarsest level
  float sz = TEX_SIZE;
  // for all mipmapping levels
  for (float i = 0.0; i <= TEX_SIZE_LOG2; i += 1.0)
  {
    // texture lookup at this level
    float2 MIP = float2(sz / TEX_SIZE, 0.0);
    float4 c = (float4)tex2D(Tex, IN.TCoord0, MIP.xy, MIP.yx);
    // blend with previous
    res = c + res * (1.0 - c.w);
    // go to finer level
    sz /= 2.0;
  }
  // done - return normalized color (alpha == 1)
  OUT.COL = float4(res.xyz / res.w, 1.0);
  return OUT;
}


37.3 Application 2: Surface Simulation

We have seen with the previous application that octree structures are useful for storing color information along a mesh surface. But octree structures on GPUs are also useful for simulation purposes. In this section, we present how we use an octree structure on the GPU to simulate liquid flowing along a mesh.

We do not go through the details of the simulation itself, because that is beyond the scope of this chapter. We concentrate instead on how we use the octree to make available all the information required by the simulation.

The simulation is done by a cellular automaton residing on the surface of the object. To perform the simulation, we need to attach a 2D density map to the mesh surface. The next simulation step is computed by updating the value of each pixel with respect to the density of its neighbors. This is done by rendering into the next density map using the previous density map and neighboring information as input.

Because physical simulation is very sensitive to distortions, using a standard 2D parameterization to associate the mesh surface to the density map would not produce good results in general. Moreover, computation power could be wasted if some parts of the 2D density map were not used. Therefore, we use an octree to avoid the parameterization.

The first step is to create an octree around the mesh surface (see Section 37.2.1). We do not directly store density within the octree: the density needs to be updated using a render-to-texture operation during the simulation and should therefore be stored in a 2D texture map. Instead of density, each leaf of the octree contains the index of a pixel within the 2D density map. Recall that the leaves of the octree store three 8-bit values (in RGB channels). To be able to use a density map larger than 256×256, we combine the values of the blue and green channels to form a 16-bit index.
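Combining the two channels into a 16-bit index can be sketched as follows (a C++ illustration; which of the two channels holds the high byte is our assumption, since the chapter only says the blue and green values are combined):

```cpp
#include <cassert>

// Pack a 16-bit density-map index into two 8-bit channels and back.
// Here green is assumed to hold the high byte and blue the low byte.
void packIndex(int index, unsigned char& g, unsigned char& b) {
    g = (unsigned char)(index >> 8);   // high byte
    b = (unsigned char)(index & 0xFF); // low byte
}

int unpackIndex(unsigned char g, unsigned char b) {
    return (int(g) << 8) | int(b);
}
```

This extends the addressable density map from 256 entries per channel to 65,536 entries, enough for a 256x256 map.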

During simulation, we also need to access the density of the neighbors. A set of 2D RGB textures, called neighbor textures, is used to encode neighboring information. Let I be an index stored within a leaf L of the octree. Let Dmap be the density map and N a neighbor texture. The Cg call tex2D(Dmap, I) returns the density associated with leaf L. The call tex2D(N, I) gives the index within the density map corresponding to a neighbor (in 3D space) of the leaf L. Therefore, tex2D(Dmap, tex2D(N, I)) gives us the density of the neighbor of L.
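The double indirection tex2D(Dmap, tex2D(N, I)) can be mimicked on the CPU with plain arrays (a C++ sketch; `density` stands in for the density map and `neighbor` for one neighbor texture):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Density of one 3D neighbor of the cell with index i: first the neighbor
// texture maps i to the neighbor's density-map index, then the density map
// is read at that index -- exactly Dmap[N[I]].
float neighborDensity(const std::vector<float>& density,
                      const std::vector<int>& neighbor, int i) {
    return density[neighbor[i]];
}
```

One such lookup per neighbor texture gathers all the neighbor densities a cell needs for its automaton update.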

To encode the full 3D neighborhood information, 26 textures would be required (a leaf of the tree can have up to 26 neighbors in 3D). However, fewer neighbors are required in practice. Because the octree is built around a 2D surface, the average number of neighbors is likely to be closer to 9.

Once these textures have been created, the simulation can run on the density map. Rendering is done by texturing the mesh with the density map. The octree is used to retrieve the density stored in a given location of the mesh surface. Results of the simulation are shown in Figure 37-15. The user can interactively add liquid on the surface. Videos are available on the book’s CD.

Figure 37-15 Liquid Flowing Along Mesh Surfaces

37.4 Conclusion

We have presented a complete GPU implementation of octree textures. These structures offer an efficient and convenient way of storing undistorted data along a mesh surface. This can be color data, as in the mesh-painting application, or data for dynamic texture simulation, as in the flowing liquid simulation. Rendering can be done efficiently on modern hardware, and we have provided solutions for filtering to avoid texture aliasing. Nevertheless, because 2D texture maps are preferable in some situations, we have shown how an octree texture can be dynamically converted into a 2D texture without artifacts.

Octrees are very generic data structures, widely used in computer science. They are a convenient way of storing information on unparameterized meshes, and more generally in space. Many other applications, such as volume rendering, can benefit from their hardware implementation.

We hope that you will discover many further uses for and improvements to the techniques presented in this chapter! Please see the book's website for updates to the source code and additional information.

37.5 References

Benson, D., and J. Davis. 2002. “Octree Textures.” ACM Transactions on Graphics (Proceedings of SIGGRAPH 2002) 21(3), pp. 785–790.

Debry, D., J. Gibbs, D. Petty, and N. Robins. 2002. “Painting and Rendering Textures on Unparameterized Models.” ACM Transactions on Graphics (Proceedings of SIGGRAPH 2002) 21(3), pp. 763–768.

Sander, P., J. Snyder, S. Gortler, and H. Hoppe. 2001. “Texture Mapping Progressive Meshes.” In Proceedings of SIGGRAPH 2001, pp. 409–416.



OpenGL Lesson 2: Transformations and Timers

Posted by Hemprasad Y. Badgujar on October 1, 2012



Our last program was kind of lame. Aren’t we supposed to be doing 3D programming? It looked pretty 2D. Let’s make things a bit more interesting. We’ll make the shapes rotate in 3D.

To do this, we’ll have to understand a little about transformations in OpenGL. To understand them, imagine a bird flying around the scene. It starts out at the origin, facing the negative z direction. The bird can move, rotate, and even grow or shrink. Whenever we specify points to OpenGL using glVertex, OpenGL interprets them relative to our bird. So, if we shrink the bird by a factor of 2 and then move it 2 units to the right, from its perspective, then the point (0, 4, 0) relative to the bird is actually at (1, 2, 0) in world coordinates. If, instead, we rotate the bird 90 degrees about the x-axis and move it 2 units up, the point (0, 0, -1) relative to the bird is (0, -1, -2) in world coordinates. This is shown in the picture below, with my bird that I made out of silly putty. Note that to see it better, we’re viewing the scene from the side.

[Transformations diagram]

At this point, you may be thinking, “This is stupid. Why don’t we just specify all of the points directly?” Just hang on. This will become clear later in the course of this lesson.
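The first bird example above can be checked with a few lines of C++ (the helper name `birdToWorld` is ours, purely for illustration). A point given relative to the bird is first moved 2 units right in the bird's frame, then scaled by 1/2, because the transforms are applied to the point in the reverse of the order they were applied to the bird:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// World coordinates of a point specified relative to a bird that was shrunk
// by a factor of 2 and then moved 2 units right in its own (scaled) frame.
Vec3 birdToWorld(Vec3 p) {
    p.x += 2.0f;                                     // move right, in bird units
    return Vec3{p.x * 0.5f, p.y * 0.5f, p.z * 0.5f}; // shrink by a factor of 2
}
```

Running `birdToWorld({0, 4, 0})` gives (1, 2, 0), matching the text.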

We’re going to start with the code from the last lesson, with some of the comments removed. First of all, instead of using -5 for the z coordinates of all of the points, let’s just translate our bird 5 units forward, and then use 0 for their z coordinates. We translate by using a call to glTranslatef, with the amount that we want to translate in the x, y, and z directions.

    glLoadIdentity(); //Reset the drawing perspective
    glTranslatef(0.0f, 0.0f, -5.0f); //Move forward 5 units

    glBegin(GL_QUADS); //Begin quadrilateral coordinates

    //Trapezoid
    glVertex3f(-0.7f, -1.5f, 0.0f);
    glVertex3f(0.7f, -1.5f, 0.0f);
    glVertex3f(0.4f, -0.5f, 0.0f);
    glVertex3f(-0.4f, -0.5f, 0.0f);

    glEnd(); //End quadrilateral coordinates

    glBegin(GL_TRIANGLES); //Begin triangle coordinates

    //Pentagon
    glVertex3f(0.5f, 0.5f, 0.0f);
    glVertex3f(1.5f, 0.5f, 0.0f);
    glVertex3f(0.5f, 1.0f, 0.0f);

    glVertex3f(0.5f, 1.0f, 0.0f);
    glVertex3f(1.5f, 0.5f, 0.0f);
    glVertex3f(1.5f, 1.0f, 0.0f);

    glVertex3f(0.5f, 1.0f, 0.0f);
    glVertex3f(1.5f, 1.0f, 0.0f);
    glVertex3f(1.0f, 1.5f, 0.0f);

    //Triangle
    glVertex3f(-0.5f, 0.5f, 0.0f);
    glVertex3f(-1.0f, 1.5f, 0.0f);
    glVertex3f(-1.5f, 0.5f, 0.0f);

    glEnd(); //End triangle coordinates

If we compile and run the program with these changes, it works the same, which is what we want.

I glossed over the meaning of the call to glLoadIdentity() in the last lesson. It resets our bird, so that it is back at the origin, facing in the negative z direction.

Now let’s use some more translating, so that whenever we specify points for a shape, they are relative to the shape’s center.

    glLoadIdentity(); //Reset the drawing perspective
    glTranslatef(0.0f, 0.0f, -5.0f); //Move forward 5 units

    glPushMatrix(); //Save the transformations performed thus far
    glTranslatef(0.0f, -1.0f, 0.0f); //Move to the center of the trapezoid


    glBegin(GL_QUADS); //Begin quadrilateral coordinates

    glVertex3f(-0.7f, -0.5f, 0.0f);
    glVertex3f(0.7f, -0.5f, 0.0f);
    glVertex3f(0.4f, 0.5f, 0.0f);
    glVertex3f(-0.4f, 0.5f, 0.0f);

    glEnd(); //End quadrilateral coordinates


    glPopMatrix(); //Undo the move to the center of the trapezoid
    glPushMatrix(); //Save the current state of transformations
    glTranslatef(1.0f, 1.0f, 0.0f); //Move to the center of the pentagon


    glBegin(GL_TRIANGLES); //Begin triangle coordinates

    glVertex3f(-0.5f, -0.5f, 0.0f);
    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(-0.5f, 0.0f, 0.0f);

    glVertex3f(-0.5f, 0.0f, 0.0f);
    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(0.5f, 0.0f, 0.0f);

    glVertex3f(-0.5f, 0.0f, 0.0f);
    glVertex3f(0.5f, 0.0f, 0.0f);
    glVertex3f(0.0f, 0.5f, 0.0f);

    glEnd(); //End triangle coordinates


    glPopMatrix(); //Undo the move to the center of the pentagon
    glPushMatrix(); //Save the current state of transformations
    glTranslatef(-1.0f, 1.0f, 0.0f); //Move to the center of the triangle


    glBegin(GL_TRIANGLES); //Begin triangle coordinates

    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(0.0f, 0.5f, 0.0f);
    glVertex3f(-0.5f, -0.5f, 0.0f);

    glEnd(); //End triangle coordinates


    glPopMatrix(); //Undo the move to the center of the triangle

Again, if we compile and run these changes, the program works the same.

There are two new and important functions used in this code: glPushMatrix() and glPopMatrix(). We use them to save and restore the state of our bird: glPushMatrix saves its state, and glPopMatrix restores it. Note that, like glBegin and glEnd, each call to glPushMatrix must have a corresponding call to glPopMatrix. We have to save the state of our bird using glPushMatrix in order to undo the move to the center of each shape.

We can save more than one bird state at a time. In fact, we have a stack of saved states: every time we call glPushMatrix, we add a state to the top of the stack, and every time we call glPopMatrix, we restore and remove the state at the top of the stack. The stack is guaranteed to hold at least 32 transformation states.

glPushMatrix and glPopMatrix are so named because OpenGL uses matrices to represent the state of our bird. For now, you don’t have to worry about how exactly the matrices work.

And now, we’ll actually change what our program does. Let’s rotate all of the shapes by 30 degrees, and shrink the pentagon to 70% of its original size.

float _angle = 30.0f;

//Draws the 3D scene
void drawScene() {
    //Clear information from last draw
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
    glLoadIdentity(); //Reset the drawing perspective
    glTranslatef(0.0f, 0.0f, -5.0f); //Move forward 5 units

    glPushMatrix(); //Save the transformations performed thus far
    glTranslatef(0.0f, -1.0f, 0.0f); //Move to the center of the trapezoid
    glRotatef(_angle, 0.0f, 0.0f, 1.0f); //Rotate about the z-axis


    glBegin(GL_QUADS); //Begin quadrilateral coordinates

    glVertex3f(-0.7f, -0.5f, 0.0f);
    glVertex3f(0.7f, -0.5f, 0.0f);
    glVertex3f(0.4f, 0.5f, 0.0f);
    glVertex3f(-0.4f, 0.5f, 0.0f);

    glEnd(); //End quadrilateral coordinates


    glPopMatrix(); //Undo the move to the center of the trapezoid
    glPushMatrix(); //Save the current state of transformations
    glTranslatef(1.0f, 1.0f, 0.0f); //Move to the center of the pentagon
    glRotatef(_angle, 0.0f, 1.0f, 0.0f); //Rotate about the y-axis
    glScalef(0.7f, 0.7f, 0.7f); //Scale by 0.7 in the x, y, and z directions


    glBegin(GL_TRIANGLES); //Begin triangle coordinates

    glVertex3f(-0.5f, -0.5f, 0.0f);
    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(-0.5f, 0.0f, 0.0f);

    glVertex3f(-0.5f, 0.0f, 0.0f);
    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(0.5f, 0.0f, 0.0f);

    glVertex3f(-0.5f, 0.0f, 0.0f);
    glVertex3f(0.5f, 0.0f, 0.0f);
    glVertex3f(0.0f, 0.5f, 0.0f);

    glEnd(); //End triangle coordinates


    glPopMatrix(); //Undo the move to the center of the pentagon
    glPushMatrix(); //Save the current state of transformations
    glTranslatef(-1.0f, 1.0f, 0.0f); //Move to the center of the triangle
    glRotatef(_angle, 1.0f, 2.0f, 3.0f); //Rotate about the the vector (1, 2, 3)


    glBegin(GL_TRIANGLES); //Begin triangle coordinates

    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(0.0f, 0.5f, 0.0f);
    glVertex3f(-0.5f, -0.5f, 0.0f);

    glEnd(); //End triangle coordinates

    glPopMatrix(); //Undo the move to the center of the triangle

    glutSwapBuffers(); //Send the 3D scene to the screen
}

Now, our program looks like this:

Transformation screenshot

We introduced a new variable, _angle, which stores the number of degrees by which we want to rotate our shapes. We also use two new functions. glRotatef rotates our bird: the call glRotatef(_angle, 0.0f, 0.0f, 1.0f) rotates our bird by _angle degrees about the z-axis, while glRotatef(_angle, 1.0f, 2.0f, 3.0f) rotates it by _angle degrees about the vector (1, 2, 3). We also call glScalef(0.7f, 0.7f, 0.7f), which shrinks our bird to 70% of its original size in the x, y, and z directions. If we were to call glScalef(2.0f, 1.0f, 1.0f) instead, we would double its size in the horizontal direction, from its own perspective.

It is important to note that glTranslatef, glRotatef, and glScalef may not be called inside a glBegin/glEnd block.

Now, let’s change the camera angle so that we look 10 degrees to the left.

float _cameraAngle = 10.0f;

//Draws the 3D scene
void drawScene() {

    glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
    glLoadIdentity(); //Reset the drawing perspective
    glRotatef(-_cameraAngle, 0.0f, 1.0f, 0.0f); //Rotate the camera
    glTranslatef(0.0f, 0.0f, -5.0f); //Move forward 5 units

Our program looks like this:

Rotated camera screenshot

Observe that we use a special trick to change the camera angle: we just rotated the entire scene by 10 degrees in the opposite direction. This is a useful technique that you’ll use a lot in 3D programming.

Before we move on to timers, I’d like to explain glMatrixMode. If we call glMatrixMode(GL_MODELVIEW), we switch to setting transformations for the points in the scene. If we call glMatrixMode(GL_PROJECTION), like we did in handleResize, we switch to setting a special transformation that is applied to our points in addition to the normal transformations. Take a look at handleResize: we switched to the projection matrix mode, called glLoadIdentity() to reset all of its transformations, and called gluPerspective. gluPerspective performs a special transformation that gives our points “perspective”. Don’t worry about how exactly it works; you just have to know that we use GL_PROJECTION to set up our perspective and GL_MODELVIEW for everything else.

GL_PROJECTION is sometimes described as the transformation for the camera, but this isn’t exactly accurate, because light sources aren’t affected by the transformations in “projection” mode. It’s a bad idea to use it for setting the camera.

Now that we changed the camera angle, it’s harder to see everything, so let’s just change _cameraAngle to 0.


And now, let’s add some motion using GLUT timers. The basic idea behind timers is that we want some piece of code to execute every so often. In this case, let’s rotate the shapes by 2 degrees every 25 milliseconds. Here’s how we do it.

void update(int value) {
    _angle += 2.0f;
    if (_angle > 360) {
        _angle -= 360;
    }

    glutPostRedisplay(); //Tell GLUT that the scene has changed

    //Tell GLUT to call update again in 25 milliseconds
    glutTimerFunc(25, update, 0);
}

Here’s our update function. First, we increase the angle by 2. If it gets above 360 degrees, we subtract 360, which doesn’t change the angle that the variable indicates. We don’t strictly have to do that, but it’s better to keep angles small because of issues related to float precision; I won’t go into detail about that here. Then, we call glutPostRedisplay(), which tells GLUT that the scene has changed and makes sure that GLUT redraws it. Finally, we call glutTimerFunc(25, update, 0), which tells GLUT to call update again in 25 milliseconds.

The value parameter is something that GLUT passes to our update function. It is the same as the last parameter we passed to glutTimerFunc for that function, so it will always be 0. We don’t need to use the parameter, so we just ignore it.

    glutTimerFunc(25, update, 0); //Add a timer

We add another call to glutTimerFunc to our main function, so that GLUT calls it for the first time 25 milliseconds after the program starts.

That’s it. Give the program a go. Download the source code, compile the program, and run it. Marvel at our accomplishment; we now have rotating shapes.

Posted in OpenGL | Leave a Comment »

OpenGL program Lesson 1: Basic Shapes

Posted by Hemprasad Y. Badgujar on October 1, 2012

Lesson 1: Basic Shapes

Try it Out

Let’s take a look at our first OpenGL program. Download the “basic shapes” program, and compile and run it (details on how to do that can be found in “Part 0: Getting OpenGL Set Up”). Take a look at it, and hit ESC when you’re done. It should look like the following image:


Overview of How the Program Works

How does the program work? The basic idea is that we tell OpenGL the 3D coordinates of all of the vertices of our shapes. OpenGL uses the standard x and y axes, with the positive x direction pointing toward the right and the positive y direction pointing upward. However, in 3D we need another dimension, the z dimension. The positive z direction points out of the screen.

Axes

How does OpenGL use these 3D coordinates? It simulates the way that our eyes work. Take a look at the following picture.

Eye

OpenGL converts all of the 3D points to pixel coordinates before it draws anything. To do this, it draws a line from each point in the scene to your eye and takes the intersection of that line with the screen rectangle, as in the above picture. So, when OpenGL wants to draw a triangle, it converts the three vertices into pixel coordinates and draws a “2D” triangle using those coordinates.

The user’s “eye” is always at the origin and looking in the negative z direction. Of course, OpenGL doesn’t draw anything that is behind the “eye”. (After all, it isn’t the all-seeing eye of Sauron.)

How far away is the screen rectangle from your eye? Actually, it doesn’t matter. No matter how far away the screen rectangle is, a given 3D point will map to the same pixel coordinates. All that matters is the angle that your eye can see.

Going Through the Source Code

All of this stuff about pixel coordinates is great and all, but as programmers, we want to see some code. Take a look at main.cpp.

The first thing you’ll notice is the license indicating that the code, like all of my code on this site, is completely free. That’s right, F-R-E-E. You can even use it in commercial projects.

The second thing you’ll notice is that it’s heavily commented, so much so that it’s a bit of an eyesore. That’s because this is the first lesson. Other lessons will not be so heavily commented, but they’ll still have comments.

Let’s go through the file and see if we can understand what it’s doing.

#include <stdlib.h> //Needed for "exit" function

//Include OpenGL header files, so that we can use OpenGL
#ifdef __APPLE__
#include <GLUT/glut.h>
#include <OpenGL/OpenGL.h>
#else
#include <GL/glut.h>
#endif

First, we include our header files. Pretty standard stuff for C++. If we’re using a Mac, we want our program to include GLUT/glut.h and OpenGL/OpenGL.h; otherwise, we include GL/glut.h.

using namespace std;

We’ll have this line near the top of main.cpp in all of our programs. It just makes it so that we don’t have to type std:: a lot; for example, so we can use cout instead of std::cout.

//Called when a key is pressed
void handleKeypress(unsigned char key, //The key that was pressed
                    int x, int y) {    //The current mouse coordinates
    switch (key) {
        case 27: //Escape key
            exit(0); //Exit the program
    }
}

This function handles any keys pressed by the user. For now, all that it does is quit the program when the user presses ESC, by calling exit. The function is passed the x and y coordinates of the mouse, but we don’t need them.

//Initializes 3D rendering
void initRendering() {
    //Makes 3D drawing work when something is in front of something else
    glEnable(GL_DEPTH_TEST);
}
The initRendering function initializes our rendering parameters. For now, it doesn’t do much. We’ll pretty much always want to call glEnable(GL_DEPTH_TEST) when we initialize rendering. The call makes sure that an object is hidden by any object in front of it that has already been drawn, which is what we want.

Note that glEnable, like every OpenGL function, begins with “gl”.

//Called when the window is resized
void handleResize(int w, int h) {
    //Tell OpenGL how to convert from coordinates to pixel values
    glViewport(0, 0, w, h);

    glMatrixMode(GL_PROJECTION); //Switch to setting the camera perspective

    //Set the camera perspective
    glLoadIdentity(); //Reset the camera
    gluPerspective(45.0,                  //The camera angle
                   (double)w / (double)h, //The width-to-height ratio
                   1.0,                   //The near z clipping coordinate
                   200.0);                //The far z clipping coordinate
}

The handleResize function is called whenever the window is resized. w and h are the new width and height of the window. The content of handleResize will not change much in our other projects, so you don’t have to worry about it too much.

There are a couple of things to notice. When we pass 45.0 to gluPerspective, we’re telling OpenGL the angle that the user’s eye can see. The 1.0 indicates not to draw anything with a z coordinate of greater than -1. This is so that when something is right next to our eye, it doesn’t fill up the whole screen. The 200.0 tells OpenGL not to draw anything with a z coordinate less than -200. We don’t care very much about stuff that’s really far away.

So, why does gluPerspective begin with “glu” instead of “gl”? That’s because technically, it’s a GLU (GL Utility) function. In addition to “gl” and “glu”, some functions we call will begin with “glut” (GL Utility Toolkit). We won’t really worry about the difference among OpenGL, GLU, and GLUT.

//Draws the 3D scene
void drawScene() {
    //Clear information from last draw

The drawScene function is where the 3D drawing actually occurs. First, we call glClear to clear information from the last time we drew. In most every OpenGL program, you’ll want to do this.

    glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
    glLoadIdentity(); //Reset the drawing perspective

For now, we’ll ignore this. It’ll make sense after the next lesson, which covers transformations.

    glBegin(GL_QUADS); //Begin quadrilateral coordinates

    glVertex3f(-0.7f, -1.5f, -5.0f);
    glVertex3f(0.7f, -1.5f, -5.0f);
    glVertex3f(0.4f, -0.5f, -5.0f);
    glVertex3f(-0.4f, -0.5f, -5.0f);

    glEnd(); //End quadrilateral coordinates

Here, we begin the substance of our program. This part draws the trapezoid. To draw a trapezoid, we call glBegin(GL_QUADS) to tell OpenGL that we want to start drawing quadrilaterals. Then, we specify the four 3D coordinates of the vertices of the trapezoid, in order, using calls to glVertex3f. When we call glVertex3f, we are specifying three (that’s where the “3” comes from) float (that’s where the “f” comes from) coordinates. Then, since we’re done drawing quadrilaterals, we call glEnd(). Note that every call to glBegin must have a matching call to glEnd.

All of the “f”s after the vertex coordinates force the compiler to treat the numbers as floats. Technically, I don’t think that they’re necessary, but I’m going to be using them everywhere.

    glBegin(GL_TRIANGLES); //Begin triangle coordinates

    glVertex3f(0.5f, 0.5f, -5.0f);
    glVertex3f(1.5f, 0.5f, -5.0f);
    glVertex3f(0.5f, 1.0f, -5.0f);

    glVertex3f(0.5f, 1.0f, -5.0f);
    glVertex3f(1.5f, 0.5f, -5.0f);
    glVertex3f(1.5f, 1.0f, -5.0f);

    glVertex3f(0.5f, 1.0f, -5.0f);
    glVertex3f(1.5f, 1.0f, -5.0f);
    glVertex3f(1.0f, 1.5f, -5.0f);

Now, we draw the pentagon. To draw it, we split it up into three triangles, which is pretty standard for OpenGL. We start by calling glBegin(GL_TRIANGLES) to tell OpenGL that we want to draw triangles. Then, we tell it the coordinates of the vertices of the triangles.

OpenGL automatically puts the coordinates together in groups of three. Each group of three coordinates represents one triangle.

    glVertex3f(-0.5f, 0.5f, -5.0f);
    glVertex3f(-1.0f, 1.5f, -5.0f);
    glVertex3f(-1.5f, 0.5f, -5.0f);

Finally, we draw the triangle. We haven’t called glEnd() to tell OpenGL that we’re done drawing triangles yet, so it knows that we’re still giving it triangle coordinates.

    glEnd(); //End triangle coordinates

Now, we’re done drawing triangles, so we call glEnd().

Note that we could have drawn the above four triangles using four calls to glBegin(GL_TRIANGLES) and four accompanying calls to glEnd(). However, this makes the program slower, and you shouldn’t do it.

There are other things we can pass to glBegin in addition to GL_TRIANGLES and GL_QUADS, but triangles and quadrilaterals are the most common things to draw.

    glutSwapBuffers(); //Send the 3D scene to the screen

This line makes OpenGL actually move the scene to the window. We’ll call it whenever we’re done drawing a scene.

int main(int argc, char** argv) {
    //Initialize GLUT
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(400, 400); //Set the window size

    //Create the window
    glutCreateWindow("Basic Shapes -");
    initRendering(); //Initialize rendering

This is the program’s main function. We start by initializing GLUT. Again, something similar will appear in all of our programs, so you don’t have to worry too much about it. In the call to glutInitWindowSize, we set the window to be 400×400. When we call glutCreateWindow, we tell it what title we want for the window. Then, we call initRendering, the function that we wrote to initialize OpenGL rendering.

    //Set handler functions for drawing, keypresses, and window resizes
    glutDisplayFunc(drawScene);
    glutKeyboardFunc(handleKeypress);
    glutReshapeFunc(handleResize);

Now, we point GLUT to the functions that we wrote to handle keypresses and drawing and resizing the window. One important thing to note: we’re not allowed to draw anything except inside the drawScene function that we explicitly give to GLUT, or inside functions that drawScene calls (or functions that they call, etc.).

    glutMainLoop(); //Start the main loop.  glutMainLoop doesn't return.
    return 0; //This line is never reached

Next, we call glutMainLoop, which tells GLUT to do its thing. That is, we tell GLUT to capture key and mouse input, to draw the scene when it has to by calling our drawScene function, and to do some other stuff.

glutMainLoop, like a defective boomerang, never returns. GLUT just takes care of the rest of our program’s execution. After the call, we have return 0 so that the compiler doesn’t complain about the main function not returning anything, but the program will never get to that line.

And that’s how our first OpenGL program works. You may want to try the exercises to get more familiar with what you just learned.

Posted in OpenGL, Project Related | Leave a Comment »

Getting OpenGL Set Up on Windows

Posted by Hemprasad Y. Badgujar on October 1, 2012


Getting OpenGL Set Up on Windows

 What exactly is OpenGL? It’s a way to draw stuff in 3D. It can also be used for 2D drawing, but this site doesn’t focus on that. There are better tools for straight 2D drawing, such as SDL and Allegro.

The graphics card is where the 3D computation happens. The purpose of OpenGL is to communicate with the graphics card about your 3D scene.

So why not talk to the graphics card directly? Each graphics card is a little different. In a sense, they all speak different “languages”. To talk to them all, you can either learn all of their languages, or find a “translator” that knows all of their languages and talk to the translator, so that you only have to know one language. OpenGL serves as a “translator” for graphics cards.

This lesson will explain how to get OpenGL and GLUT set up on Windows. We’ll use the Visual C++ Express 2005 IDE to edit, compile, and run our programs. Visual C++ Express is a free IDE. To use it, you will have to register, which is free, within 30 days. You can use another IDE if you prefer, but getting it set up will be a little different.

Downloading and Installing

First, download and install the necessary software using the following instructions:

  1. Download Visual C++ Express and the Microsoft Platform SDK from the Microsoft website. Note that when you download the SDK, it may say something about Windows Server. Don’t worry about that; it’ll install just fine on any modern version of Windows.
  2. Install Visual C++ and the SDK.
  3. Download the OpenGL installer from here and the GLUT binary from here.
  4. Run the OpenGL installer.
  5. Extract GLUT to the directory of your choice. You can do this by creating a new directory, locating and opening the ZIP file using Windows Explorer, and copying the files to the new directory using copy-paste. Alternatively, you can use a free program like WinZip to extract GLUT.
  6. In the directory to which you extracted GLUT, make two folders, one called “include” and one called “lib”. In the “include” folder, create another folder called “GL”, and move glut.h to that folder. Move all of the other files that you extracted for GLUT into the “lib” folder.
  7. Run Visual C++ Express. Go to Tools -> Options, then Projects and Solutions -> VC++ Directories. Note where it says “Show directories for”. You’ll want to change the directories for include files by adding “x\include”, “y\include”, and “z\Include” and to change the directories for library files by adding “x\lib”, “y\lib”, and “z\Lib”, where “x” is the folder where you installed OpenGL, “y” is the folder where you extracted GLUT, and “z” is the folder where you installed the Microsoft Platform SDK.
  8. Change your PATH environment variable as follows: go to the control panel, and go to System. Go to the “Advanced” tab and click on “Environment Variables”. Find the “PATH” (or “Path”) variable. Change it by adding “;x\lib;y\lib;z\Lib” (without the quotes) to the end of it, where again, “x“, “y“, and “z” are the folders where you installed OpenGL, GLUT, and the Microsoft Platform SDK. Make sure there are no spaces before or after the semicolons.
  9. Reboot your computer, so that Windows will recognise the changes to the PATH environment variable.

Compiling and Running the Test Program

To make sure that everything was set up correctly, we’re going to see if we can get a test program to work.

  1. Download this test program and extract it somewhere on your computer.
  2. Run Visual C++ Express. Go to File -> New -> Project From Existing Code.
  3. Click next to indicate that you are making a Visual C++ project.
  4. Set the project file location to the folder to which you extracted the test program. Enter in a name for your project (such as “cube”) and click next.
  5. Change the project type from “Windows application project” to “Console application project”, and click next.
  6. Click next, then click finish to finish creating the project.
  7. Go to Project -> Properties. Click on Configuration Properties. Click the “Configuration Manager” button in the upper-right corner. Change the “Active solution configuration” from “Debug” to “Release”. Click close, then click OK.
  8. In Project -> Properties, go to Configuration Properties -> General. Where it shows the output directory as “Release”, backspace the word “Release”, and click OK. This makes Visual C++ put the executable in the same directory as the source code, so when our program needs to open a file, it looks for it in that directory. In this case, the program will have to load in an image file called “vtr.bmp”.
  9. Go to Build -> Build project_name to build your project.
  10. There should be two warnings about ignoring /INCREMENTAL. You don’t have to, but if you want, you can fix them as follows. In Project -> Properties, go to Configuration Properties -> Linker, and change “Enable Incremental Linking” from “Yes (/Incremental)” to “No (/Incremental:No)”.
  11. Run the program by going to Debug -> Start Without Debugging. If all goes well, the test program should run.

Note that you’ll have to set up a project every time you want to work on a program from my site, so you’ll have to repeat steps 1 – 11 above.

I’d like to point out a couple of things about the program. First of all, notice that the project has a file called “Makefile”. It’s not used on Windows; it’s only needed for Linux, Mac OS X, and other UNIX-based operating systems. But Visual C++ will automatically ignore the file, so you don’t have to worry about removing it from the project.

Secondly, take a look at main.cpp. Notice that the include directives for OpenGL appear after the #include directives for the normal C++ include files. If we had them in the opposite order, because of the Windows download that I recommended for GLUT, you’d get a compiler error. So make sure that the include files for the standard C++ stuff appear before the include files for OpenGL.

Whew! We’ve finished setting up OpenGL. Now we’re ready to learn how to program in 3D.



Posted in OpenGL | 1 Comment »
