A number of people have asked me about how I apply textures to quads generated using the greedy meshing approach (video: https://www.youtube.com/watch?v=0OZxZZCea8I, source: https://github.com/roboleary/GreedyMesh). There are a few ways to do this, but I thought it might be useful for me to share my approach.
I texture greedily meshed quads by applying an offset and a modulus in the fragment shader. I assume the use of a texture atlas – since it’s a widely used approach for texturing voxel environments – and it’s an efficient way to work. For example, the default texture atlas in an Adventure Box world is of size 16×16 and looks like this:
This atlas is divided into 16×16 individual block textures (with lots of space at the bottom for user-defined voxel types!). This means that if we apply this atlas to a quad, one voxel face should be covered by texture coordinates spanning 1/16th of the atlas in both dimensions. In other words, to texture a single voxel face we copy pixels from the image starting at an offset on each axis and extending 1/16th of the atlas width and height from that offset. This, logically, gives us a single tile – since each tile is 1/16th of the atlas on both axes!
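As a quick sanity check, the UV range covered by a single tile can be computed like this (a Python sketch of the arithmetic, not engine code; the 16×16 atlas size is taken from the example above):

```python
# UV span of one tile in a 16x16 texture atlas.
ATLAS_TILES = 16               # tiles per axis in the atlas
TILE_SIZE = 1.0 / ATLAS_TILES  # each tile covers 1/16 of UV space

def tile_uv_range(offset):
    """Return the (start, end) UV interval for a tile at integer `offset`."""
    start = offset * TILE_SIZE
    return (start, start + TILE_SIZE)

# LAVA sits at offset 6 on the X axis:
print(tile_uv_range(6))   # (0.375, 0.4375)
```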
For example, in the atlas above LAVA is found at location 6 on the X axis and 0 on the Y axis.
So for a LAVA quad we need that pair of offsets, (6,0). We also need to know how large the quad is – how many LAVA voxel faces we’re texturing. For example, the greedy mesher might generate a quad of, say, 5×10 LAVA voxel faces merged together, which in the end should look like this:
To successfully texture this quad we need to send the tile offset in the atlas (6,0) and the quad dimensions (5 voxels by 10 voxels) to the graphics card, and then calculate the correct texture coordinates for each pixel in the fragment shader.
The tile offsets are the same at all vertices in the quad, since all vertices are associated with one type of voxel. However, the quad dimensions are sent as the texture coordinates, and they’re different at each vertex. If we imagine that the quad has just 4 vertices (which, granted, it doesn’t – but to keep things simple) we’ll be sending data like this on each vertex buffer:
The numbers on the left are the atlas offsets. The numbers on the right are the texture coordinates multiplied by the quad size… Essentially, the quad dimensions cunningly disguised as over-sized texture coordinates.
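Concretely, the per-vertex data for the 5×10 LAVA quad could be assembled like this (a Python sketch; the field ordering is my own assumption, not the engine’s actual buffer layout):

```python
# Per-vertex data for a 5x10 LAVA quad: the atlas offset (identical at
# every vertex) plus the quad dimensions disguised as over-sized texcoords.
atlas_offset = (6, 0)   # LAVA's tile in the atlas
quad_size = (5, 10)     # merged faces on each axis

# One (xOffset, yOffset, u, v) tuple per corner of the quad.
vertices = [
    (atlas_offset[0], atlas_offset[1], 0,            0),
    (atlas_offset[0], atlas_offset[1], quad_size[0], 0),
    (atlas_offset[0], atlas_offset[1], 0,            quad_size[1]),
    (atlas_offset[0], atlas_offset[1], quad_size[0], quad_size[1]),
]
# Every value fits in a byte, so each vertex costs only 4 bytes here.
print(vertices)
```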
Note that we have no need for any floating point buffers here unless the quad becomes larger than 256 voxels on any axis. This approach limits chunk dimensions to 256 on all axes. We could use floating point buffers if we wanted to get around this limitation, but by sending in values as floats we would be buffering up 4 bytes for each coordinate. If we just send a single byte we reduce the amount of memory we’re using in that buffer by 75% – and buffer size has a direct impact on performance. Personally, I just send in the atlas offsets as bytes and ensure that neither the atlas nor the chunk dimensions exceed 256. Actually, if the atlas is no bigger than 16×16, we can send in both atlas offsets as a single packed byte – though we’d then have to unpack the byte in the shader, which would also impact performance.
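Packing both atlas offsets into one byte works because each offset in a 16×16 atlas fits in 4 bits. Here’s a sketch of the pack/unpack round trip (Python, purely illustrative – in practice the unpacking would happen shader-side):

```python
def pack_offset(x, y):
    """Pack two 4-bit atlas offsets (each 0-15) into a single byte."""
    assert 0 <= x < 16 and 0 <= y < 16
    return (y << 4) | x

def unpack_offset(b):
    """Recover the (x, y) atlas offsets from a packed byte."""
    return (b & 0x0F, b >> 4)

packed = pack_offset(6, 0)             # LAVA at (6, 0)
print(packed, unpack_offset(packed))   # 6 (6, 0)
```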
These kinds of details are performance optimizations, but in voxel engines you really want dense buffers – graphics cards are touchy, and you can easily tank your frame rate by using bloated buffers that cause unwanted paging of memory on the card.
Anyway, let’s assume we send in 4 separate bytes on each vertex – one for each of the above values. For each voxel face (light grey in the image above), we want the pixels taken from the atlas to be in the range ((atlasOffset * TILE_SIZE), ((atlasOffset * TILE_SIZE) + TILE_SIZE)), where TILE_SIZE = 1/[the number of tiles in the atlas on the X axis].
So taking just the X axis as an example (the same logic applies to the Y axis, but to keep things simple I’ll focus on one axis), we know the tile offset on the X axis in the atlas is 6. So it makes sense that the texture coordinate function on the X axis should be of the form:
x = (6 * TILE_SIZE) + ?
This means that every location on the quad will be textured with some pixel that’s located at a position equal to or further along than (6 * TILE_SIZE) on the X axis in the atlas.
What we then want is that every point on the quad is textured with a pixel located at a position equal to or further along than (6 * TILE_SIZE) on the X axis and less than ((6 * TILE_SIZE) + TILE_SIZE).
To make this happen we can look at the second set of values. If they’re not pre-multiplied by the tile size (and thus, bytes rather than floats) they’ll look like this at each point on the quad during fragment shading:
This means that we have the information necessary to know when the texture should repeat – whenever we hit a whole number in the texture coordinate interpolation. So we need to calculate the texture coordinates within the tile based on these values. We can do this by taking a modulus of the coordinate. In this example I haven’t premultiplied the quad dimensions by the tile size (since that would result in a float buffer), but if I had, we would have this (assuming a tile size of 1/16 = 0.0625):
In this latter case the function to generate the correct coordinates is a little simpler – because we don’t need to multiply by the tile size in the shader. So for example, if we’re in the fragment shader and we’re shading the area between 0 and 0.3125 on the X axis, we’ll be generating texture coordinates something like:
x = (6 * TILE_SIZE) + mod(0, TILE_SIZE)
x = (6 * TILE_SIZE) + mod(0.0625, TILE_SIZE)
x = (6 * TILE_SIZE) + mod(0.125, TILE_SIZE)
x = (6 * TILE_SIZE) + mod(0.1875, TILE_SIZE)
x = (6 * TILE_SIZE) + mod(0.25, TILE_SIZE)
x = (6 * TILE_SIZE) + mod(0.3125, TILE_SIZE)
The number of steps taken will depend on, among other things, the screen resolution – but the important thing to note is that those modulus functions will return the same loop of values, despite the fact that the X texture coordinate increases steadily across the quad. In fact, each of the above formulae equate to:
x = (6 * TILE_SIZE) + 0
This makes sense, because the values listed are on the leading edge of each voxel face in the quad. The fact that the values between each of these steps will steadily increase from 0 to 0.0625 (the tile size) is what creates a looping texture and solves the problem.
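We can check this numerically: stepping the premultiplied coordinate across the quad, the modulus wraps back to zero at every face boundary, so the result always lands at the start of the same tile (a Python sketch of the shader arithmetic, using the LAVA offset of 6):

```python
TILE_SIZE = 1.0 / 16   # 0.0625
x_offset = 6           # LAVA's X offset in the atlas

# Premultiplied texcoords at the leading edge of each voxel face
# across the 5-face-wide quad.
for i in range(6):
    coord = i * TILE_SIZE                       # 0, 0.0625, 0.125, ...
    x = x_offset * TILE_SIZE + coord % TILE_SIZE
    # The modulus is 0 at every face boundary, so x always equals
    # the left edge of the LAVA tile.
    print(x)   # 0.375 each time
```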
On the other hand, if we just pass byte values with the quad dimensions as texture coordinates – so we’re sending the actual quad size (5, 10) rather than the quad size multiplied by the tile size (0.3125, 0.625) – we need to slightly modify this function to multiply the texture coordinates by TILE_SIZE in the shader:
x = (xOffset * TILE_SIZE) + mod((xTextureCoordinate * TILE_SIZE), TILE_SIZE);
This might perhaps be more clearly rendered as:
x = (xOffset * TILE_SIZE) + mod(xTextureCoordinate, 1.0) * TILE_SIZE;
So you can see that the texture coordinate is always looping between 0 and 1, then being multiplied by the TILE_SIZE, resulting in a value always looping between 0 and TILE_SIZE, and added to a fixed offset in the atlas, resulting in texture coordinates always looping over the same tile in the atlas.
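Putting it all together, the whole per-fragment calculation is just a couple of operations per axis. Here’s a CPU-side Python sketch of what the fragment shader computes (the name `atlas_tile_coord` is mine, not from the engine):

```python
TILE_SIZE = 1.0 / 16   # the atlas is 16 tiles wide

def atlas_tile_coord(offset, tex_coord):
    """Map an un-premultiplied texcoord (0..quadSize) into the tile at
    integer `offset`, looping once per voxel face."""
    return offset * TILE_SIZE + (tex_coord % 1.0) * TILE_SIZE

# LAVA at X offset 6, sampled across a 5-face-wide quad:
print(atlas_tile_coord(6, 0.0))    # 0.375   (left edge of the tile)
print(atlas_tile_coord(6, 2.5))    # 0.40625 (middle of the third face)
print(atlas_tile_coord(6, 4.999))  # just under 0.4375 (right edge)
```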
Exactly the same approach applies to the Y axis (though watch out for flippy Y-axis coordinates if you’re expecting to read the atlas from the top down! : )
Best of luck with your voxel project!