Pagefault: We write about things, mostly programming related

"gland" - Declarative OpenGL in D

In a previous post, I put down some ideas on how to utilize D’s metaprogramming capabilities in order to reduce the amount of fairly unnecessary boilerplate involved in writing OpenGL code. (I should note that I was heavily inspired by tomaka’s glium over in Rust land.)

Since then, I’ve been hard at work abusing the basic idea laid out there, as well as implementing things I merely hinted at in my last post, like instancing and element buffer object support.

But that’s enough talking, let’s see some examples of the code in use and why one might want this at all!

Using a texture as a render target

Let’s imagine the steps in a (relatively) common scenario where one wants to use a texture as a render target.

  1. Create the target texture of the desired size and texture format.
  2. Create the Framebuffer object, pointing at the created texture’s handle, optionally attaching a depth texture attachment at this stage, requiring yet another texture to be created.

Doesn’t look so difficult, right? (Anyone who has touched OpenGL, or insert-graphics-api-here, will probably think otherwise.)

The point is that in OpenGL this requires an unnecessarily large amount of code for most simpler use cases; my goal is to achieve a leaner API that still allows the flexibility of using OpenGL.

With our API

We’re going to start by defining the shaders we want to use for this process: very simple pass-through shaders, where the first pair is for the triangle we’re drawing and the second is for the render target with its texture.

Shader Definitions

immutable char* tr_vs = q{
    #version 330 core

    layout (location = 0) in vec2 position;
    layout (location = 1) in vec3 colour;
    out vec3 v_colour;

    void main() {
        gl_Position = vec4(position, 0.0, 1.0);
        v_colour = colour;
    }
};

immutable char* tr_fs = q{
    #version 330 core

    in vec3 v_colour;
    out vec4 f_colour;

    void main() {
        f_colour = vec4(v_colour, 1.0);
    }
};

immutable char* tex_vs = q{
    #version 330 core

    layout (location = 0) in vec2 position;
    layout (location = 1) in vec2 uv;
    out vec2 tex_coord;

    void main() {
        gl_Position = vec4(position, 0.0, 1.0);
        tex_coord = uv;
    }
};

immutable char* tex_fs = q{
    #version 330 core

    in vec2 tex_coord;
    uniform sampler2D diffuse;
    out vec4 f_colour;

    void main() {
        f_colour = texture(diffuse, tex_coord); // texture2D is unavailable in 330 core
    }
};

Shader and Uniform Types

Now that we’ve defined the code to be used for our shaders, let’s move on to our next step: defining the types of the shaders we will be using.

alias TriangleShader = Shader!(
    [ShaderType.VertexShader, ShaderType.FragmentShader], [
        AttribTuple("position", 0),
        AttribTuple("colour", 1)
    ]
);

struct TextureUniform {
    @TextureUnit(0)
    Texture2D* diffuse;
} // TextureUniform

alias TextureShader = Shader!(
    [ShaderType.VertexShader, ShaderType.FragmentShader], [
        AttribTuple("position", 0),
        AttribTuple("uv", 1)
    ], TextureUniform
);

Let’s look at this code snippet a bit closer, as now we’re getting into what the API is actually doing.

Through the use of alias, we’ve given our shader types names we can refer to them by, as typing out the actual template parameters everywhere we use them would be a pain. The Shader struct is templated on two parameters: the first is a list of the shader types that will make up the shader program, and the second is a list of the vertex attributes the shader program accepts.

Following those two parameters, the template optionally accepts a type to be used as the struct through which uniform data is passed, in this case TextureUniform. Its only member here is a Texture2D*, annotated with the texture unit it should be bound to when used.

The texture member name diffuse is used when the shader is compiled: glGetUniformLocation is called for every member of the uniform data structure to get the actual location id to pass the data to when it is time to draw and pass uniforms. If glGetUniformLocation returns -1 during compilation, the process asserts and halts, telling the user which member wasn’t found as a uniform in the given shader program.
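The member walk itself can be sketched with D’s compile-time reflection. This is not gland’s actual implementation; the helper and its names are assumptions for illustration, with a plain int standing in for the Texture2D*:

```d
// hypothetical stand-in for gland's @TextureUnit annotation
struct TextureUnit { uint unit; }

struct TextureUniform {
    @TextureUnit(0)
    int diffuse; // stand-in for Texture2D*
}

// collect the member names that would be fed to glGetUniformLocation
string[] uniformNames(T)() {
    string[] names;
    foreach (name; __traits(allMembers, T))
        names ~= name;
    return names;
}

void main() {
    // in the real API this loop runs at shader compilation time,
    // asserting if glGetUniformLocation returns -1 for a name
    assert(uniformNames!TextureUniform() == ["diffuse"]);
}
```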

Vertex Type Definitions

Now that we’ve defined the shaders and the uniform to pass the texture to the GPU with, lets move on to defining what will be our vertex data.

struct Vertex2f3f {
    float[2] position;
    float[3] colour;
} // Vertex2f3f

@(DrawType.DrawArrays)
struct TriangleData {

    @(DrawHint.StaticDraw)
    @(BufferTarget.ArrayBuffer)
    @VertexCountProvider
    Vertex2f3f[] vertices;

} // TriangleData

// this alias is here to make it less annoying to refer to
alias TriangleVao = VertexArrayT!TriangleData;

struct Vertex2f2f {
    float[2] position;
    float[2] uv;
} // Vertex2f2f

@(DrawType.DrawArrays)
struct FramebufferData {

    @(DrawHint.StaticDraw)
    @(BufferTarget.ArrayBuffer)
    @VertexCountProvider 
    Vertex2f2f[] vertices;

} // FramebufferData

// ditto
alias FrameVao = VertexArrayT!FramebufferData;

The API walks over all members of the structure, checking for the presence of certain UDAs: it finds the @DrawHint annotation and uses it as the draw hint, and similarly the @BufferTarget annotation tells it to bind the buffer as a GL_ARRAY_BUFFER.

The @VertexCountProvider annotation tells the API which member’s data length to use as the vertex count when drawing with OpenGL.

So passing in data like:

Vertex2f3f[3] tri_vertices = [
    Vertex2f3f([0.0f, 0.5f], [1.0f, 0.0f, 0.0f]),
    Vertex2f3f([-0.5f, -0.5f], [0.0f, 1.0f, 0.0f]),
    Vertex2f3f([0.5f, -0.5f], [0.0f, 0.0f, 1.0f])
];

Would result in a count of 3 being passed to glDrawArrays. (when drawing finally happens)
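One way that count can be recovered is by locating the @VertexCountProvider member with compile-time reflection. The sketch below redeclares the annotations as minimal stand-ins; it is an assumption about the mechanism, not gland’s actual code:

```d
import std.traits : hasUDA;

// minimal stand-ins for gland's annotations
struct VertexCountProvider {}
enum DrawHint { StaticDraw }
enum BufferTarget { ArrayBuffer }

struct Vertex2f3f {
    float[2] position;
    float[3] colour;
}

struct TriangleData {
    @(DrawHint.StaticDraw)
    @(BufferTarget.ArrayBuffer)
    @VertexCountProvider
    Vertex2f3f[] vertices;
}

// find the annotated member and return its length as the draw count
size_t vertexCount(T)(ref T data) {
    foreach (name; __traits(allMembers, T)) {
        static if (hasUDA!(__traits(getMember, T, name), VertexCountProvider))
            return __traits(getMember, data, name).length;
    }
    assert(0, "no @VertexCountProvider member found");
}

void main() {
    Vertex2f3f[3] tri_vertices = [
        Vertex2f3f([0.0f, 0.5f], [1.0f, 0.0f, 0.0f]),
        Vertex2f3f([-0.5f, -0.5f], [0.0f, 1.0f, 0.0f]),
        Vertex2f3f([0.5f, -0.5f], [0.0f, 0.0f, 1.0f])
    ];
    auto data = TriangleData(tri_vertices[]);
    assert(vertexCount(data) == 3); // the count glDrawArrays would receive
}
```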

One of the most important parts here is the annotation on the data structs themselves, such as @(DrawType.DrawArrays). This informs the API what is valid to annotate the members with: essentially a rudimentary form of type-checking inside our API, which is aware of which combinations of attributes are valid and which are not. (all at compile-time!)

For example, there is also @(DrawType.DrawElements), which requires at least one element buffer object, as well as an @ElementCountProvider to tell it how many elements there are to draw.
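A rudimentary version of that compile-time checking can be sketched with a static assert. Again, the annotation types and the validate template below are minimal stand-ins of my own, not gland’s actual definitions:

```d
import std.traits : getUDAs, hasUDA;

// minimal stand-ins for gland's annotations
enum DrawType { DrawArrays, DrawElements }
struct VertexCountProvider {}
struct ElementCountProvider {}

// true if any member of T carries the given annotation
enum bool hasAnnotatedMember(T, UDA) = {
    bool found = false;
    foreach (name; __traits(allMembers, T))
        found = found || hasUDA!(__traits(getMember, T, name), UDA);
    return found;
}();

// reject invalid annotation combinations at compile time
template validate(T) {
    enum dt = getUDAs!(T, DrawType)[0];
    static assert(dt != DrawType.DrawElements
            || hasAnnotatedMember!(T, ElementCountProvider),
        T.stringof ~ ": DrawElements requires an @ElementCountProvider member");
    enum validate = true;
}

@(DrawType.DrawArrays)
struct ArraysData {
    @VertexCountProvider
    float[] vertices;
}

@(DrawType.DrawElements)
struct BadElementsData { // no @ElementCountProvider, so validate rejects it
    float[] vertices;
}

void main() {
    static assert(validate!ArraysData);
    static assert(!__traits(compiles, validate!BadElementsData));
}
```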

Where the magic happens

Now that we’ve defined our data structures, let’s get on with using them to actually put something on screen!

Creating our Rendering device

The createDevice function accepts three parameters in the form of function pointers: the first two fetch the window dimensions on demand, and the third is called when it is time to present.

// w defined as the window
auto device = Renderer.createDevice(&w.width, &w.height, &w.present);
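As a sketch of what the device does with these callbacks, it can simply store them and defer to them. The struct below is a guess at the shape, using delegates since &w.width on a member function yields a delegate; the names and layout are assumptions, not gland’s actual Device:

```d
// hypothetical sketch of a device deferring to window-provided callbacks
struct Device {
    int delegate() widthFn;
    int delegate() heightFn;
    void delegate() presentFn;

    @property int width() { return widthFn(); }
    @property int height() { return heightFn(); }
    void present() { presentFn(); } // calls back into the window to swap buffers
}

struct Window {
    int w = 640, h = 480;
    int width() { return w; }
    int height() { return h; }
    void present() { /* swap buffers here */ }
}

void main() {
    Window win;
    auto device = Device(&win.width, &win.height, &win.present);
    assert(device.width == 640);
    assert(device.height == 480);
}
```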

Creating our Framebuffer

Now, creating the Framebuffer we will render our data to!

int texture_w = device.width / 8;
int texture_h = device.height / 8;

TextureParams tex_params = {
    internal_format : InternalTextureFormat.RGB,
    pixel_format : PixelFormat.RGB
};

Texture2D framebuffer_texture;
auto texture_result = Texture2D.create(
    framebuffer_texture,
    null, // initialized with no data
    texture_w, texture_h,
    tex_params
);

// create a framebuffer from this texture
SimpleFrameBuffer fbo;
auto fbo_result = framebuffer_texture.asSurface(fbo, false); // false means "no depth attachment"

Compiling our Shaders

Given that we now have our texture, the next step is creating the shader programs from the definitions we wrote earlier, which are used to generate the OpenGL code in this step.

// load shader for drawing textured thing
TextureShader tx_shader;
auto ts_result = TextureShader.compile(tx_shader, &tex_vs, &tex_fs);

// load graphics and stuff
TriangleShader tr_shader;
auto trs_result = TriangleShader.compile(tr_shader, &tr_vs, &tr_fs);

// omitting error checking for now

Defining our vertex data

Now that we’ve defined all the types necessary to do the actual drawing, we can move on to defining the vertex data we will use with them to put things on screen! (as well as uploading the data to the GPU)

// declare triangle vertex data
Vertex2f3f[3] tri_vertices = [
    Vertex2f3f([0.0f, 0.5f], [1.0f, 0.0f, 0.0f]), // triangle top
    Vertex2f3f([-0.5f, -0.5f], [0.0f, 1.0f, 0.0f]), // triangle left
    Vertex2f3f([0.5f, -0.5f], [0.0f, 0.0f, 1.0f]) // triangle right
];

// now, upload vertices
auto tri_data = TriangleData(tri_vertices);
auto vao = TriangleVao.upload(tri_data, DrawPrimitive.Triangles);

// declare fbo rect vertex data
Vertex2f2f[6] rect_vertices = [
    Vertex2f2f([-1.0f, -1.0f], [0.0f, 0.0f]), // top left
    Vertex2f2f([1.0f, -1.0f], [1.0f, 0.0f]), // top right
    Vertex2f2f([1.0f, 1.0f], [1.0f, 1.0f]), // bottom right

    Vertex2f2f([-1.0f, -1.0f], [0.0f, 0.0f]), // top left
    Vertex2f2f([-1.0f, 1.0f], [0.0f, 1.0f]), // bottom left
    Vertex2f2f([1.0f, 1.0f], [1.0f, 1.0f]) // bottom right
];

// also, rect vertices
auto frame_data = FramebufferData(rect_vertices);
auto rect_vao = FrameVao.upload(frame_data, DrawPrimitive.Triangles);

What happens here is that, given the information specified earlier, the API generates the OpenGL code to put the data on the GPU. Changing what data you want to upload is as simple as changing the vertex type definitions at the top and following the compiler errors, avoiding the normally extremely error-prone process of hand-editing OpenGL calls to match a new data definition.
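The generated attribute setup can be sketched as follows: compile-time reflection over the vertex struct yields, for each field, the component count and byte offset that glVertexAttribPointer needs. The helper and its names are illustrative assumptions, not gland’s actual code:

```d
import std.traits : FieldNameTuple;

struct Vertex2f3f {
    float[2] position;
    float[3] colour;
}

// what would be fed to glVertexAttribPointer for one attribute
struct AttribLayout {
    string name;    // matched against the shader's attribute list
    int components; // the "size" parameter
    size_t offset;  // the byte offset into the vertex struct
}

// derive the layout of every field at compile time
AttribLayout[] layoutOf(V)() {
    AttribLayout[] result;
    foreach (name; FieldNameTuple!V) {
        alias FT = typeof(__traits(getMember, V, name));
        result ~= AttribLayout(name, cast(int) FT.length,
                __traits(getMember, V, name).offsetof);
    }
    return result;
}

void main() {
    auto layout = layoutOf!Vertex2f3f();
    assert(layout == [
        AttribLayout("position", 2, 0),
        AttribLayout("colour", 3, 8) // 2 floats * 4 bytes before it
    ]);
    // the stride parameter would simply be the vertex struct's size
    assert(Vertex2f3f.sizeof == 20);
}
```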

If you want to peer into this beast, I’d recommend reading over the last post to get a grasp of the basic idea, then looking at this code.

Drawing to screen

There we go, now we’re ready to draw things to screen!

Most of the heavy lifting involved here is done behind the scenes by employing D’s abilities for compile-time reflection over types, and compile-time metaprogramming.

But the end result is an API that lets you define the types of your data and the methods used to draw them, with the normal boilerplate mostly automated away.

while (window.isAlive) {

    // handle window events
    window.handleEvents();

    // check if it's time to quit
    if (window.isKeyDown(SDL_SCANCODE_ESCAPE)) {
        window.quit();
    }

    // default state, holds OpenGL state to be used in draw call 
    DrawParams params = {};

    // render using triangle shader and vertices to texture
    fbo.draw(tr_shader, vao, params);

    // now render given texture, woo!
    auto uniform_data = TextureUniform(&framebuffer_texture);
    device.draw(tx_shader, rect_vao, params, uniform_data);

    device.present();

}

End Result

Triangle Rendered into Smaller Framebuffer

There’s a lot I’ve left largely unexplained in this post, as I just wanted to showcase a bit what it is like to write graphics code using this API.

More may be explained in greater detail in later posts, but for those interested, I’d point you towards the example showcased in this post, which should be easily runnable by running dub in the render-to-texture folder.

Or, the main repository if you want to dig through the code, it’s all out there!

The API is slightly messy right now and relies on the availability of Vertex Array Objects, so it has poor chances of working with OpenGL versions older than 3.3, which was the goal I initially set for myself.

I hope you enjoyed the article as a showcase of D’s metaprogramming abilities, please do comment if anything is unnecessarily confusing!

Thanks for reading :)