Texture magnification antialiasing

From vegard.wiki. Latest revision as of 09:37, 9 April 2020.

Nearest neighbour texture filtering. Notice the uneven pixel sizes e.g. of the stars on the left.
Linear texture filtering. This is too blurry to preserve a "pixel art" look.
Pixel-aware texture filtering using the algorithm below. Pixels retain a crisp look while also being much more uniform in size.

There are several uses for upscaling pixel art; here I consider the following two cases:

  1. fitting a smaller game screen (e.g. 160x144) to a modern resolution (e.g. 1280x1080)
  2. preserving a "pixel art" look for textured 3D models

The main problem with upscaling pixel art arises when the upscaled size is not an integer multiple of the original size. If you use nearest filtering, your pixel sizes will be uneven: upscaling by a factor of 1.5, for example, makes every other source pixel one screen pixel wide and the rest two wide. This can be particularly jarring in animations, where e.g. part of a sprite appears to change size as it moves across the screen.[TODO: Add images.] You can't use (bi)linear filtering either, because your pixels will be smeared out into something that no longer looks like pixel art at all (specifically, the problem is that pixel values are interpolated across the whole upscaled pixel rather than just at pixel borders).

One solution for case 1 above is to only upscale to the nearest integer ratio. This will add a border to the image, however, and depending on the actual ratio (and the actual sizes!) this may be highly undesirable (e.g. it wastes a lot of screen space or does not magnify the image sufficiently). For case 2 this doesn't work at all, since the scale factor is not uniform across the screen.
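The nearest-integer-ratio approach for case 1 can be sketched as follows (a hypothetical helper of my own, written in GLSL style for consistency with the shaders below; `small` and `big` are the source and target resolutions in pixels):

```glsl
// Largest integer scale of `small` that still fits inside `big`,
// plus the offset needed to centre the scaled image (the border).
ivec2 integerFit(ivec2 small, ivec2 big, out ivec2 offset)
{
    int scale = min(big.x / small.x, big.y / small.y); // integer division rounds down
    ivec2 scaled = small * scale;
    offset = (big - scaled) / 2;
    return scaled;
}
```

For example, fitting 160x144 into 1280x1080 gives a scale of min(8, 7) = 7, i.e. a 1120x1008 image with borders of 80 pixels horizontally and 36 pixels vertically.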

What we really want is a filtering algorithm that antialiases only the thin line between the (low-res) pixels. As far as I know, there is no magnification filter in OpenGL that can do this, but we can do it ourselves in a shader.

Naïve implementation

Fragment shader

The first thing we need to do is to calculate the size (in full-scale pixel units) of a single low-res pixel. For case 1 above, where pixel sizes are uniform, we can just use small_screen_size / big_screen_size. For case 2, we can use fwidth(): fwidth(uv) returns abs(dFdx(uv)) + abs(dFdy(uv)), i.e. approximately how much uv changes between one fragment and its horizontal and vertical neighbours. With uv measured in texel units, this is exactly the number of texels covered per screen pixel.

We now need to test if a fragment is within half a pixel of the border of its texel (i.e. within half a pixel of the top, bottom, left, or right in texture space); in that case, we need to blend with the neighbouring texel, with a blend factor that depends on how close we are. The reason for using half a pixel instead of one pixel is that the 1-pixel border is shared between two texels: by checking half a pixel on the right in the left texel and half a pixel on the left in the right texel, we effectively get a 1-pixel border where they meet.[TODO: Add illustration.]

// tex is the texture to sample
// uv is the texture coordinate ranging from 0 to small_screen_size

// case 2 (3D models): derive the texel size from the screen-space derivatives
vec2 border = .5 * fwidth(uv);
// case 1 (uniform scale): vec2 border = .5 * small_screen_size / big_screen_size;

vec2 uvf = fract(uv);

// the main color of the texel, assuming we're not at a border
vec4 col = texelFetch(tex, ivec2(uv), 0);

// blend horizontally; the mix factor ramps from .5 at the texel edge
// to 1 at the inner edge of the half-pixel border, so the two texels
// meet at a 50/50 blend and the result is continuous across the seam
vec4 xcol = col;
if (uvf.x < border.x)
    xcol = mix(texelFetchOffset(tex, ivec2(uv), 0, ivec2(-1, 0)), col, .5 + .5 * uvf.x / border.x);
else if (1. - uvf.x < border.x)
    xcol = mix(texelFetchOffset(tex, ivec2(uv), 0, ivec2(1, 0)), col, .5 + .5 * (1. - uvf.x) / border.x);

// blend vertically, in the same way
vec4 ycol = col;
if (uvf.y < border.y)
    ycol = mix(texelFetchOffset(tex, ivec2(uv), 0, ivec2(0, -1)), col, .5 + .5 * uvf.y / border.y);
else if (1. - uvf.y < border.y)
    ycol = mix(texelFetchOffset(tex, ivec2(uv), 0, ivec2(0, 1)), col, .5 + .5 * (1. - uvf.y) / border.y);

Finally, to account for the corner regions where four texels meet, we simply average the two values we obtained in each direction:

output_color = mix(xcol, ycol, .5);

Note that texelFetch bypasses the sampler's filtering state, so the shader above works regardless of the texture's filter settings. If you sample with texture() instead, the sampler must not apply any hardware magnification filtering: GL_TEXTURE_MAG_FILTER should be GL_NEAREST. For GL_TEXTURE_MIN_FILTER it might be fine to use linear interpolation or even mipmapping.

Using hardware bilinear filtering

The algorithm above uses up to three single-texel texture fetches and blends them in the shader. It is also possible to implement it using built-in hardware bilinear filtering. In this case, the resulting fragment is still a combination of up to 4 texels, but the uv coordinate for the texture lookup is modified so that the hardware only interpolates within the 1-pixel boundary between texels.

This algorithm is implemented in https://www.shadertoy.com/view/ldlSzS and the technique is called "anti-aliased point sampled magnified textures".

[TODO: Show fragment shader code for this.]
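A minimal sketch of this technique (my own reconstruction, not the shader from the link above): uv is again in texel units as in the naïve implementation, small_screen_size is the texture size in texels, and the sampler is assumed to use GL_LINEAR for GL_TEXTURE_MAG_FILTER:

```glsl
// Snap uv to the centre of its texel except within a roughly
// one-fragment-wide band around each texel boundary; the hardware
// bilinear filter then only interpolates inside that band.
vec2 seam = floor(uv + .5);   // nearest texel boundary
vec2 dudv = fwidth(uv);       // texels covered per fragment
vec2 uv2 = seam + clamp((uv - seam) / dudv, -.5, .5);
output_color = texture(tex, uv2 / small_screen_size);
```

Outside the band, (uv - seam) / dudv saturates to ±.5, so uv2 lands exactly on a texel centre and the bilinear lookup degenerates to point sampling; inside it, the hardware does the same blend as the shader above, but gets the corner (diagonal) texels right for free.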

See also