Ok, so I do realise it's been a LONG time coming, but I have some great news: I finally understand basic 3D concepts, in addition to vertex/fragment shaders and how they work!


And to prove it, I have the code for lesson 5 up and running on GitHub. Now, the way I see it, rendering a 3D scene onto the display surface (the screen) is very much like taking a photograph. Imagine with me: if we wanted to take a photo of a box with our camera, what would we do?

The first thing we would do is set up the world, scene, or whatever you want to call it. This means we need to take everything we want to display on screen and place it in its correct position in the world. It is the same in OpenGL: we take the model vertices, textures, colours and everything else that represents the model and load them into Vertex Buffer Objects (VBOs for short), which, to the graphics card, represent the world.

Next up, we move the model to its correct place and rotate it by some angle until we get its correct placement in the world. In OpenGL this is done by multiplying the translation (position) and rotation matrices by every single vertex in the model, to move the model to the place we want in the world. These two steps constitute us building the

**Model Matrix** (*or how the world is modeled*)

Later, after the world is set up, we start setting up our shot. Usually, this means deciding where the camera is, its orientation and where exactly it points. On the OpenGL end I've done this through a call to gluLookAt, which simply takes as input 3D vectors specifying (in exact match to the real-world scenario) where the camera is, where it points and its orientation (known in graphics programming as the up-vector). This latter step constitutes the

**View Matrix** (*or where we're looking and how we're looking at it*)

In OpenGL, the first two constitute the ModelView matrix.

Last, but certainly not least, having set up the scene and chosen our camera, we turn our attention to the camera lens. We adjust what type of lens it is (wide/narrow) and its range (what distance it focuses on). This, again, maps almost directly to a call to gluPerspective, which takes the camera's field-of-view angle, the width/height ratio and the near and far cutoff distances. This final step constitutes the

**Projection Matrix** (*or, literally, how the 3D model is projected/perceived onto a 2D surface*)

When it comes to the shaders, it is fairly simple from there on. Given each vertex, and attributes for the translation/rotation of the model, the vertex shader simply computes the screen position of the vertex by multiplying all those matrices together.

On a side note, the Modern OpenGL Wikibook does the computation of the rotation/translation matrix in the C code for the drawing. I moved the construction of the matrices and their multiplication into the shaders, so that this piece of computation runs on the GPU, which is generally much better at this sort of floating-point calculation.

So, there you have it, we are finally in 3D! Next up, we will be adding some more texture (no pun intended) to the work, by adding in model textures! Till then!
