Introduction

[figure: triangle]

Shaders are the modern way to work with vertices. They are small functions that give us great flexibility to transform vertices through a standard pipeline.

Why shaders?

The main reason is that they are designed as small functions that let us transform vertices in a very simple way, just turning a single vertex input into a single vertex output. Although this way of transforming vertices introduces some complexity, at the same time it gives great flexibility and defines a standard pipeline in the graphics processor:

  1. Vertex processor: in this stage we transform our vertices.
  2. Geometry processor: so far we do not do anything in this stage; however, it is the one responsible for knowing the topology, and it can emit results using several vertices at the same time.
  3. Clipper: simple, it just clips the primitives to the normalized box we defined in the previous tutorial.
  4. Rasterizer: it renders all the previously defined geometry on the screen.
    1. Fragment processor: transforms the color of each pixel, e.g. by sampling a texture.

Shader structure

Since the pipeline is well defined, it prescribes how to write a shader. A shader will have:

  • in variables, e.g.: in type variable_name;.
  • out variables, e.g.: out type variable_name;.
  • uniform variables, e.g.: uniform type variable_name;. They do not change while a draw call is processed, i.e., every vertex of the draw sees the same value for that variable. This allows us to apply a transformation to the coordinates of all vertices based on an external value.
  • application code which transforms the vertex.

where type can be any of the already known types (int, float, bool) or the GLSL containers: vecN, bvecN, ivecN, uvecN or dvecN, where N is the number of components of the vector, from 2 to 4. The following represents a shader skeleton:

#version version_number
in type input_variable_name;
out type output_variable_name;
  
void main()
{
  output_variable_name = f(input_variable_name);
}
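For example, a minimal concrete instance of this skeleton could be a pass-through shader that simply forwards a color attribute (the variable names here are made up for illustration):

```glsl
#version 330
in vec4 inColor;    // input attribute, one value per vertex
out vec4 vertColor; // output, handed to the next pipeline stage

void main()
{
    vertColor = inColor; // here f is just the identity function
}
```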


Development

Vertex shader

This is the vertex shader of our application:

// Vertex shader. This will modify the vertex attributes
static const char* pVS = "                                                    \n\
#version 330                                                                  \n\
                                                                              \n\
layout (location = 0) in vec3 Position;                                       \n\
                                                                              \n\
uniform float glScale;                                                        \n\
                                                                              \n\
void main()                                                                   \n\
{                                                                             \n\
    gl_Position = vec4(glScale * Position.x, Position.y, Position.z, 1.0);    \n\
}";

We have as input a vector of three components, i.e. the vertex position, which is transformed into a vector of four components by modifying its width (the x coordinate). layout (location = 0) creates a binding between the attribute name and the attribute buffer. The remaining piece to explain is gl_Position: it is the predefined output of the vertex shader and contains the transformed position.


Fragment shader

// Fragment shader. This will modify the pixel colors
static const char* pFS = "                                                    \n\
#version 330                                                                  \n\
                                                                              \n\
out vec4 FragColor;                                                           \n\
                                                                              \n\
void main()                                                                   \n\
{                                                                             \n\
    FragColor = vec4(0.0, 0.0, 1.0, 0.8);                                     \n\
}";

Have a look at the previous code: no matter what color input we receive, we set the same color values for every fragment. Note that the alpha component (0.8 here) only has a visible effect when blending is enabled with glEnable(GL_BLEND).

Binding shaders with the application

We now have our shaders, but how are they going to be executed? Well, first of all we need to know that shaders are always there: if we do not define them, some default code is executed anyway. So it is not necessary to specify when to execute our shader; we just need to tell the application that it is there:

void compileShaders()
{
    GLuint shaderProgram = glCreateProgram();
    if (shaderProgram == 0) {
        fprintf(stderr, "Error creating shader program\n");
        exit(1);
    }
    // Create and attach each shader processor to the program object
    addShader(shaderProgram, pVS, GL_VERTEX_SHADER);
    addShader(shaderProgram, pFS, GL_FRAGMENT_SHADER);
    GLint Success = 0;
    GLchar ErrorLog[1024] = { 0 };
    glLinkProgram(shaderProgram);
    // Verify shaders
    glGetProgramiv(shaderProgram, GL_LINK_STATUS, &Success);
    if (Success == 0) {
        glGetProgramInfoLog(shaderProgram, sizeof(ErrorLog), NULL, ErrorLog);
        fprintf(stderr, "Error linking shader program: '%s'\n", ErrorLog);
        exit(1);
    }
    glValidateProgram(shaderProgram);
    glGetProgramiv(shaderProgram, GL_VALIDATE_STATUS, &Success);
    if (!Success) {
        glGetProgramInfoLog(shaderProgram, sizeof(ErrorLog), NULL, ErrorLog);
        fprintf(stderr, "Invalid shader program: '%s'\n", ErrorLog);
        exit(1);
    }
    // Install the program object as part of current rendering state
    glUseProgram(shaderProgram);
    // Links the shader uniform variable with the one defined here, which is
    // incremented in the render function
    gScaleLocation = glGetUniformLocation(shaderProgram, "glScale");
    if (gScaleLocation == -1) {
        fprintf(stderr, "Error getting the location of uniform 'glScale'\n");
        exit(1);
    }
}
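The addShader helper called above is not shown in this snippet. A minimal sketch of what it is expected to do (the name and signature come from the call sites; the body below is an assumption, not the tutorial's actual code) could be:

```c
// Assumed implementation of the addShader helper used by compileShaders:
// create a shader object of the given type, compile the GLSL source and
// attach it to the program object so it can be linked afterwards.
static void addShader(GLuint shaderProgram, const char* shaderText, GLenum shaderType)
{
    GLuint shaderObj = glCreateShader(shaderType);
    if (shaderObj == 0) {
        fprintf(stderr, "Error creating shader type %d\n", shaderType);
        exit(1);
    }

    const GLchar* sources[1] = { shaderText };
    GLint lengths[1] = { (GLint)strlen(shaderText) };
    glShaderSource(shaderObj, 1, sources, lengths);
    glCompileShader(shaderObj);

    GLint success;
    glGetShaderiv(shaderObj, GL_COMPILE_STATUS, &success);
    if (!success) {
        GLchar infoLog[1024];
        glGetShaderInfoLog(shaderObj, sizeof(infoLog), NULL, infoLog);
        fprintf(stderr, "Error compiling shader type %d: '%s'\n", shaderType, infoLog);
        exit(1);
    }

    glAttachShader(shaderProgram, shaderObj);
}
```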


And yes, you are right: once we have a shader program we just need to call glUseProgram(shaderProgram) to apply our shaders. The previous lines are necessary, but they do not have any relationship with vertices or colors; they just compile, link and verify the shaders.


Move the triangle!

As mentioned in the previous tutorial, the render of the scene only takes place once unless we resize the window, so in order to be able to watch the transformation we need to register another GLUT callback: glutIdleFunc(RenderSceneCB);. This registers the same "main renderer" we already registered with glutDisplayFunc(RenderSceneCB);, but now the function will be called continuously. So, by adding some logic with memory, e.g. a value incremented each time the function is called, we will be able to appreciate some movement. Now our renderer looks like this:

static void RenderSceneCB()
{
    // Cleans the color buffer
    glClear(GL_COLOR_BUFFER_BIT);
    static float scale = 0.0f;
    scale += 0.01f;
    glUniform1f(gScaleLocation, sinf(scale));
    // Note: glColor3f belongs to the legacy fixed-function pipeline; it has
    // no effect here because our fragment shader outputs a constant color
    glColor3f(0.5, 0.0, 0.3);
    // Activates the first array
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    // Draw a triangle
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray(0);
    glutSwapBuffers(); 
}

where we feed our uniform variable through gScaleLocation: scale is incremented by 0.01 each time the function is called, so depending on your hardware capabilities the change will be faster or slower. Since we upload sinf(scale) rather than scale itself, the value stays in [-1, 1] and the triangle oscillates.


All code is available here.