
72. Shaders – what are they?

June 1, 2013

This is the first of a series of posts on shaders – what they are, how you can use them – and maybe, how you can write your own. You should know something about meshes and vertices first, so if you don’t, please read my earlier posts on those topics.

Early in my career, I was preparing an explanation of a very complex proposal for a big company board. The chairman took me aside and said

Put it on one page in kindergarten language

which was good advice. This explanation is going to take more than one page, but I’m going to make it as simple as I can, with no apologies for any kindergarten language, if it helps you and me understand what is going on.

I’ve read a few explanations of what shaders are, and they all talk about pipelines, vectors, rasterisation, scissor tests and the like. Well, I’d like a really simple explanation, like a politician’s famous description of the internet as a series of tubes. He didn’t talk about packets, nodes, IP addresses, etc.

So here goes.

Pipes

OpenGL is like a long pipe. You put your instructions in one end, and pixels come out of the other end to form a 3D picture on your screen. Inside the pipe is a chain of complex processes that we will not pretend to understand.

So let’s just focus on the pipe.

There are two places where holes have been cut in the pipe, so you can see the information flowing past, and you can actually reach in and change that information. This information is at a low level, down in the detail.

This is rather like running a normal computer program – which goes through a pipe of its own, called compilation – and being allowed to change some of the low level machine code at some point. If you’ve ever seen machine code, you’ll know that while it is extremely powerful and very fast, you have to know what you are doing or you can really mess up.

The two holes in the OpenGL pipe are a bit like that. They let you change the information stream, but they assume you know what you are doing, and the code is neither as simple nor as forgiving as Codea – make any mistake, and your screen is simply blank.

We’ll be bold and peek in the holes anyway. But first, just a little more about what the pipe does, so we can understand what we see in the holes.

Vertices to pixels

In cartoon animation, skilled artists don’t draw every single frame. They draw key frames, and get other people (or, these days, computers) to fill in the frames in between, a process called tweening.

Similarly, in 3D graphics, we have to specify the position and colour of our drawings, but we only need to do it for a sample set of points, called vertices. Effectively, we create a wireframe. OpenGL then interpolates all the pixels between these points. It groups vertices into triangles, because a triangle is the simplest shape, and uses the three corner values to calculate all the values inside each triangle. For example, if one corner of a triangle is red and another is blue, the pixels along that edge shade smoothly from red to blue. It’s as simple as that.

To do this, OpenGL needs to know the x,y,z position of each vertex, and its colour, or, if you are pasting a texture image over the wireframe, which point of the image goes on top of that vertex.

So each vertex has three key pieces of information:

  • x,y,z position
  • colour (if you’ve set it)
  • texture mapping (which point in the texture goes here)

OpenGL can then interpolate this information to calculate the position and colour of every single pixel inside each triangle.
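If you’re wondering how a shader sees this information, each piece arrives as an “attribute” of the vertex. Here’s a rough sketch of the declarations, using the names Codea’s built-in shaders use (you’ll meet color and texCoord again below):

    attribute vec4 position;   // the x,y,z position (plus a fourth value used by the maths)
    attribute vec4 color;      // the red, green, blue and alpha values
    attribute vec2 texCoord;   // the x,y position on the texture image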

OpenGL does a whole heap of other very complex things with long names, of course, but all we care about is that we provide a set of vertices with information, and OpenGL interpolates them into colours for each pixel.

Back to the holes in the pipe.

Vertex shaders

The first hole in the pipe is cut at the point where the mesh has been split into individual vertices, and all the information for each vertex has been collected together. OpenGL is just about to pass all that information to the interpolation process.

So when we look into the hole, we just see a single vertex. Like I said, we are working at a low level, here. The vertex knows its x,y,z position, a colour (if you’ve set one), and its position on the texture image, and not much else.

There is some code in the hole, but all it seems to do is take some of this information and assign it to other variables without changing any of it, which may seem rather pointless.

In fact, the exact code is

    vColor = color;        // pass the vertex colour straight through
    vTexCoord = texCoord;  // pass the texture position straight through

But this is deliberate, because it gives you the chance to make some changes. So you can take the vertex data, write code that fiddles with it, and pass on the results.

And the code you put in this hole is called a vertex shader.
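To put those two lines in context, here is (roughly) what the whole of Codea’s standard vertex shader looks like. Don’t worry about the unfamiliar keywords yet – the uniform, attribute and varying variables are just the information flowing in and out, and modelViewProjection is the matrix that converts a 3D position into a position on your screen:

    uniform mat4 modelViewProjection;  // converts 3D positions to screen positions

    attribute vec4 position;  // this vertex's x,y,z position
    attribute vec4 color;     // this vertex's colour
    attribute vec2 texCoord;  // this vertex's position on the texture

    varying lowp vec4 vColor;      // passed on, to be interpolated
    varying highp vec2 vTexCoord;  // passed on, to be interpolated

    void main()
    {
        vColor = color;
        vTexCoord = texCoord;
        gl_Position = modelViewProjection * position;
    }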

What would you want to change about a vertex? Well, a vertex is mainly about position, so for example, you could make a mirror image (ie flipped left to right) by flipping the x coordinate. Or you could create an exploding object by making the x,y,z coordinates fly apart in a certain way over a series of frames.
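As a taste, here is a minimal sketch of that mirror image, assuming the standard shader above. All it does differently is flip the sign of the x coordinate:

    uniform mat4 modelViewProjection;
    attribute vec4 position;
    attribute vec4 color;
    attribute vec2 texCoord;
    varying lowp vec4 vColor;
    varying highp vec2 vTexCoord;

    void main()
    {
        vColor = color;
        vTexCoord = texCoord;
        // flip the sign of x to mirror the image left to right
        gl_Position = modelViewProjection * vec4(-position.x, position.y, position.z, position.w);
    }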

You do have some limitations to work within. First, your code works on one vertex at a time, and it doesn’t have much information to work with – only what affects that one vertex. So it doesn’t know anything about its neighbouring vertices, for example.

Second, the code is written in its own language, based on C, which has only a limited set of built-in functions.

Third, if there is an error, you get nothing, which can make debugging a little frustrating (although at least you can’t break anything; failure is perfectly safe).

But we’ll come back to these later.

Fragment shaders

The second hole in the pipe is cut at the point where the vertex information has been interpolated across every pixel in the mesh.

So all of that has just happened before reaching our hole. Looking in, we see a single pixel. We are really down in the weeds now.

Once again, there is code here. And all this code does is take the interpolated colour and texture position, and use them to work out the colour of this pixel. This takes just two lines of code.

    lowp vec4 col = texture2D(texture, vTexCoord) * vColor;  // look up the texture colour and tint it
    gl_FragColor = col;                                       // this becomes the pixel's colour

This looks a bit strange, but you’ll get used to it.

The first line is equivalent to myImage:get(x,y) in Codea. It gets the colour at one point on the image called texture, at the x,y position specified by vTexCoord, and puts it in a variable called col.
And, if you have set colours for the vertices, it will also apply the interpolated colour (vColor) here. For now, don’t worry about why it multiplies by the colour.

The second line simply assigns that colour to something called gl_FragColor.
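Again, to put those two lines in context, here is (roughly) the whole of Codea’s standard fragment shader. The varying variables are the interpolated values arriving from the vertex shader:

    precision highp float;

    uniform lowp sampler2D texture;  // the texture image

    varying lowp vec4 vColor;        // the interpolated colour for this pixel
    varying highp vec2 vTexCoord;    // the interpolated texture position for this pixel

    void main()
    {
        lowp vec4 col = texture2D(texture, vTexCoord) * vColor;
        gl_FragColor = col;
    }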

So again, this code doesn’t do much. But, as with the vertex shader, the idea is that if we want, we can mess with the pixel colour. And it turns out we can do this in many interesting ways. In fact, nearly all the shaders built into Codea are of this type.
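For example, here is a little sketch that turns the texture grey. The weights are the standard values for how bright red, green and blue look to the eye:

    precision highp float;

    uniform lowp sampler2D texture;
    varying lowp vec4 vColor;
    varying highp vec2 vTexCoord;

    void main()
    {
        lowp vec4 col = texture2D(texture, vTexCoord) * vColor;
        // weight red, green and blue by how bright each looks to the eye
        lowp float grey = dot(col.rgb, vec3(0.299, 0.587, 0.114));
        gl_FragColor = vec4(grey, grey, grey, col.a);
    }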

And any code we write for this hole is called a fragment shader (a fragment is just another word for a pixel).

So:

  • vertex shaders affect individual vertices, and
  • fragment shaders affect individual pixels.
At this point, it is still a complete mystery how they do this, but the following posts will have examples which should help you understand.
