Computer graphics

Texture mapping

Bilinear filtering

Bilinear filtering provides good results when objects are close to the camera. However, it does not work well when an object is far away and many texels map to the same pixel. For distant objects it would be necessary to average all contributing texels, which has to be precomputed, for instance, by using image pyramids (mipmaps).
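As a rough illustration, the sketch below samples a texture with bilinear filtering; the NumPy texture representation, the texel-center convention, and the clamp-to-edge behaviour are assumptions of the example.

```python
import numpy as np

def sample_bilinear(texture, u, v):
    """Sample a (H, W) texture at normalized coordinates (u, v) by blending
    the four nearest texels with bilinear weights."""
    h, w = texture.shape[:2]
    # Map normalized coordinates to texel space, centered on texel centers.
    x = u * w - 0.5
    y = v * h - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    # Clamp to the texture borders (one possible wrap mode).
    xs = np.clip([x0, x0 + 1], 0, w - 1)
    ys = np.clip([y0, y0 + 1], 0, h - 1)
    t00, t10 = texture[ys[0], xs[0]], texture[ys[0], xs[1]]
    t01, t11 = texture[ys[1], xs[0]], texture[ys[1], xs[1]]
    # Interpolate horizontally, then vertically.
    top = (1 - fx) * t00 + fx * t10
    bottom = (1 - fx) * t01 + fx * t11
    return (1 - fy) * top + fy * bottom

# Example: a 2x2 checkerboard sampled at its center averages all four texels.
tex = np.array([[0.0, 1.0], [1.0, 0.0]])
print(sample_bilinear(tex, 0.5, 0.5))  # 0.5
```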

Perspective correct interpolation

Pixel space is related to 3-D homogeneous space by a linear matrix multiplication. Consequently, linear interpolation of pixel-space coordinates corresponds to linear interpolation of the 3-D homogeneous-space coordinates.

  1. Construct an array of values for each vertex of the polygon after multiplication by the projection matrix, including a 1 at the end.
  2. Perform clipping.
  3. Perform the perspective division on all elements of the vector (divide by W). The last term becomes 1/W.
  4. “Interpolate all values linearly down polygon edges and across scanlines internal to the polygon”.
  5. At each pixel, divide the resulting values by the interpolated 1/W.
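A minimal sketch of this recipe in Python, assuming two already-projected vertices carrying a single attribute each and skipping the clipping step:

```python
def perspective_correct_interpolate(attr_a, w_a, attr_b, w_b, t):
    """Interpolate one attribute between two vertices in a perspective-correct
    way, given the clip-space W of each vertex and a screen-space parameter t."""
    # Step 3: perspective division of the attribute, keeping 1/W.
    a_over_w, b_over_w = attr_a / w_a, attr_b / w_b
    inv_w_a, inv_w_b = 1.0 / w_a, 1.0 / w_b
    # Step 4: linear interpolation in screen space.
    attr_over_w = (1 - t) * a_over_w + t * b_over_w
    inv_w = (1 - t) * inv_w_a + t * inv_w_b
    # Step 5: divide by the interpolated 1/W to recover the attribute.
    return attr_over_w / inv_w

# Halfway across the screen-space segment, the vertex with the smaller W
# (closer to the camera) dominates the result.
print(perspective_correct_interpolate(0.0, 1.0, 1.0, 4.0, 0.5))  # 0.2
```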

Visible surface determination

Painter’s algorithm

Primitives are sorted by Z coordinate in camera space and rendered from back to front. Interpenetrating polygons need to be split. Can be sped up by BSP trees.
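A minimal sketch of the back-to-front ordering, assuming each primitive carries a representative camera-space depth and a draw callback; splitting interpenetrating polygons and BSP acceleration are left out.

```python
def painters_algorithm(primitives, draw):
    """Render primitives back to front by sorting on camera-space depth.

    `primitives` is a list of (depth, payload) pairs; larger depths are drawn
    first so that nearer primitives overwrite them."""
    for depth, payload in sorted(primitives, key=lambda p: p[0], reverse=True):
        draw(payload)

# Example: the farthest quad is drawn first, the nearest last.
scene = [(2.0, "near quad"), (10.0, "far quad"), (5.0, "middle quad")]
painters_algorithm(scene, draw=print)
```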

Visible surface ray tracing

Traces a ray through each pixel of the image plane and finds the closest intersection with the objects in the scene.
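As a sketch of the idea, the fragment below finds the closest hit of a single ray against a list of spheres; the sphere-only scene and the absence of an acceleration structure are simplifications of the example.

```python
import math

def closest_sphere_hit(origin, direction, spheres):
    """Return (t, sphere) for the nearest intersection of the ray
    origin + t * direction with a list of (center, radius) spheres,
    or None if the ray misses everything."""
    best = None
    for center, radius in spheres:
        # Solve |origin + t * direction - center|^2 = radius^2 for t.
        oc = [o - c for o, c in zip(origin, center)]
        a = sum(d * d for d in direction)
        b = 2.0 * sum(d * x for d, x in zip(direction, oc))
        c = sum(x * x for x in oc) - radius * radius
        disc = b * b - 4 * a * c
        if disc < 0:
            continue
        t = (-b - math.sqrt(disc)) / (2 * a)  # nearer root
        if t > 0 and (best is None or t < best[0]):
            best = (t, (center, radius))
    return best

# A ray along -Z hits the nearer of two spheres first (t = 3).
print(closest_sphere_hit((0, 0, 0), (0, 0, -1),
                         [((0, 0, -10), 1.0), ((0, 0, -4), 1.0)]))
```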

Depth buffering

Keeps track of the current depth associated with each pixel. These values are interpolated during rasterization. Can be done through a Z-buffer or a W-buffer.
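A rough sketch of the per-pixel test, assuming depths already interpolated by the rasterizer and a smaller-is-closer convention:

```python
import numpy as np

def depth_test(depth_buffer, color_buffer, x, y, depth, color):
    """Write `color` at pixel (x, y) only if `depth` is closer than the value
    currently stored in the depth buffer."""
    if depth < depth_buffer[y, x]:
        depth_buffer[y, x] = depth
        color_buffer[y, x] = color

# Two fragments land on the same pixel; the nearer one wins.
depths = np.full((4, 4), np.inf)
colors = np.zeros((4, 4), dtype=int)
depth_test(depths, colors, 1, 1, depth=0.8, color=1)
depth_test(depths, colors, 1, 1, depth=0.3, color=2)
print(colors[1, 1])  # 2
```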

Z-buffer

Perspective-correct interpolation of Z values concentrates precision near the camera (using linearly interpolated Z values would give uniform precision). The effective resolution of a Z-buffer depends on the ratio Zfar / Znear.
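One way to see that dependence, assuming a standard perspective projection in which the stored depth has the form \(a + b/z_{\text{eye}}\) (the exact constants depend on the API convention), is the following.

\[\begin{align*} & z_{\text{buf}} = a + \frac{b}{z_{\text{eye}}}, \qquad b \propto \frac{Z_{\text{far}} Z_{\text{near}}}{Z_{\text{far}} - Z_{\text{near}}} \\\\ \therefore {} & \Delta z_{\text{eye}} \approx \frac{z_{\text{eye}}^{2}}{|b|} \, \Delta z_{\text{buf}} \end{align*}\]

A fixed depth-buffer step therefore covers an eye-space interval that grows quadratically with distance, and the effect worsens as Zfar / Znear increases.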

W-buffer

W is defined in terms of the Z coordinate in camera space, and therefore its value is independent of Znear. A W-buffer is the best choice if Znear needs to be made very small.

Quaternions

Quaternions are the 3-D analogue of complex numbers: just as unit complex numbers represent rotations in 2-D, unit quaternions represent rotations in 3-D.

A quaternion is defined as the following sum.

\[q = q_{0} + \mathbf{q} = q_{0} + iq_{1} + jq_{2} + kq_{3}\]

Therefore, a quaternion can be represented by a 4-tuple of real numbers.

The quaternion units satisfy the following special products.

\[\begin{align*} & i^{2} = j^{2} = k^{2} = ijk = -1 \\\\ & ij = k = -ji \\\\ & jk = i = -kj \\\\ & ki = j = -ik \end{align*}\]

After grouping the intermediate results of the multiplication of two quaternions \(p\) and \(q\), we get the following formula.

\[pq = p_{0}q_{0} - \mathbf{p} \cdot \mathbf{q} + p_{0}\mathbf{q} + q_{0}\mathbf{p} + \mathbf{p} \times \mathbf{q}\]

where \(p_{0}q_{0} - \mathbf{p} \cdot \mathbf{q}\) is a scalar and \(p_{0}\mathbf{q} + q_{0}\mathbf{p} + \mathbf{p} \times \mathbf{q}\) is a vector.

The conjugate \(q^{*}\) of \(q\) is given by \(q^{*} = q_{0} - \mathbf{q}\).

The norm of a quaternion \(q\), denoted by \(\|q\|\), is \(\sqrt{q q^{*}}\).

The norm of a product is the product of the norms, \(\|pq\| = \|p\|\|q\|\).

Every non-zero quaternion q has a multiplicative inverse \(q^{-1}\), such that \(q^{-1}q = qq^{-1} = 1\).

A closed formula for the inverse can be derived as follows.

\[\begin{align*} & q^{-1}qq^{*} = q^{-1} \|q\|^{2} = q^{*} \\\\ \therefore {} & q^{-1} = \frac{q^{*}}{\|q\|^{2}} \end{align*}\]

A rotation in \(\mathbb{R}^{3}\) can be represented by a \(3 \times 3\) orthogonal matrix with determinant 1, which acts as a rotation operator on \(\mathbb{R}^{3}\). Quaternions are an alternative form of this rotation operator.

Quaternions (which live in \(\mathbb{R}^{4}\)) can operate on vectors in \(\mathbb{R}^{3}\) by treating every vector in \(\mathbb{R}^{3}\) as a pure quaternion, that is, a quaternion whose scalar part is zero.

Rotating a pure quaternion \(v\) by a unit quaternion \(q\) is performed by computing \(qvq^{*}\). The result is guaranteed to be a pure quaternion.
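A small numeric sketch of these operations (the product, the conjugate, and the rotation \(qvq^{*}\)), using plain tuples \((q_{0}, q_{1}, q_{2}, q_{3})\):

```python
import math

def qmul(p, q):
    """Quaternion product pq: scalar part p0*q0 - p.q, vector part p0*q + q0*p + p x q."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + q0*p1 + (p2*q3 - p3*q2),
            p0*q2 + q0*p2 + (p3*q1 - p1*q3),
            p0*q3 + q0*p3 + (p1*q2 - p2*q1))

def qconj(q):
    """Conjugate q* = q0 - q (negate the vector part)."""
    return (q[0], -q[1], -q[2], -q[3])

def rotate(v, axis, angle):
    """Rotate the 3-D vector v by `angle` radians around the unit `axis`
    using q v q* with the unit quaternion q = (cos(a/2), sin(a/2) * axis)."""
    s = math.sin(angle / 2.0)
    q = (math.cos(angle / 2.0), s * axis[0], s * axis[1], s * axis[2])
    v_pure = (0.0, v[0], v[1], v[2])        # embed v as a pure quaternion
    r = qmul(qmul(q, v_pure), qconj(q))     # q v q*
    return r[1:]                            # the scalar part is (numerically) zero

# Rotating the x axis by 90 degrees around z gives (approximately) the y axis.
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```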

Shadows

Shadows are regions of a scene that are not completely visible to the light sources. They are one of the most important cues about the spatial relationships among objects in a scene.

Most common shadow algorithms are restricted to direct light and point or directional light sources. Area light sources are usually approximated by several point lights.

The terms umbra and penumbra are used to mean complete shadow and partial shadow, respectively.

Planar shadows

Works by projecting the polygonal models onto a plane and painting this projection as a shadow.

Shadow mapping

Shadow mapping is an image-based algorithm which uses a depth buffer. It can be applied to any surface that can be rasterized. It is usually implemented in graphics hardware by using the texture sub-system.

It works by generating a depth map (shadow map) of the scene from the point of view of each light source.

Each fragment visible to the camera also needs to be mapped into the light space of each light source in order to check whether it is reached by that light source.

Shadow mapping is prone to aliasing (both during the construction and during access) and self-shadowing (which requires a bias factor to be used when testing).

The expression for obtaining the texture coordinates for a vertex of an object is the following.

\[(SP_{L}LM)X_{O}\]
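Reading \(S\) as the scale-and-bias matrix, \(P_{L}\) as the light's projection, \(L\) as the light's view transform, \(M\) as the model matrix, and \(X_{O}\) as the object-space vertex (an interpretation assumed here), the per-fragment test might be sketched as follows.

```python
import numpy as np

def shadow_test(shadow_map, S, P_L, L, M, x_object, bias=1e-3):
    """Return True if the object-space point is in shadow: take it to light
    space via (S P_L L M) X_O, do the homogeneous divide to get texture
    coordinates and a depth, and compare that depth (with a bias) against
    the stored shadow-map depth."""
    x = S @ P_L @ L @ M @ np.append(x_object, 1.0)
    x = x / x[3]                                  # perspective division
    u, v, depth = x[0], x[1], x[2]
    h, w = shadow_map.shape
    i = int(np.clip(v * h, 0, h - 1))
    j = int(np.clip(u * w, 0, w - 1))
    return depth - bias > shadow_map[i, j]

# Toy example: identity transforms and a map whose stored depth is 0.5;
# a point at light-space depth 0.9 is shadowed, one at 0.4 is lit.
I = np.eye(4)
sm = np.full((2, 2), 0.5)
print(shadow_test(sm, I, I, I, I, np.array([0.25, 0.25, 0.9])))  # True
print(shadow_test(sm, I, I, I, I, np.array([0.25, 0.25, 0.4])))  # False
```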

Percentage closer filtering

In this method, the shadow test is performed against an area of the shadow map.
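A rough sketch of the idea, assuming the light-space texel coordinates and depth have already been computed as above and using a square window of neighbouring texels:

```python
import numpy as np

def pcf_shadow(shadow_map, i, j, depth, radius=1, bias=1e-3):
    """Percentage closer filtering: run the shadow test against every texel
    in a (2*radius + 1)^2 window around (i, j) and return the lit fraction,
    which gives soft shadow edges instead of a binary result."""
    h, w = shadow_map.shape
    lit, total = 0, 0
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            ii = int(np.clip(i + di, 0, h - 1))
            jj = int(np.clip(j + dj, 0, w - 1))
            lit += depth - bias <= shadow_map[ii, jj]
            total += 1
    return lit / total

# At a shadow boundary the result is a fraction between 0 (fully shadowed)
# and 1 (fully lit); here one third of the window passes the test.
sm = np.array([[0.2, 0.2, 0.9],
               [0.2, 0.2, 0.9],
               [0.2, 0.2, 0.9]])
print(pcf_shadow(sm, 1, 1, depth=0.5))
```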

Volumetric shadows

For each light source, shadow polygons are created. Then, starting with a counter set to the number of shadow volumes containing the eye, rays are traced from the eye towards the target. We add 1 for each front-facing shadow polygon crossed and subtract 1 for each back-facing one. If the counter is zero when the ray reaches the target, the object is lit; if it is greater than zero, the object is in shadow.
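A minimal sketch of the counting rule, assuming the crossings along the eye ray have already been classified as front- or back-facing shadow polygons:

```python
def in_shadow(crossings, volumes_containing_eye=0):
    """Apply the shadow-volume counting rule along one eye ray.

    `crossings` lists, in order from the eye towards the target, whether each
    shadow polygon crossed is "front" or "back" facing. The target is lit
    only if the counter ends at zero."""
    counter = volumes_containing_eye
    for facing in crossings:
        counter += 1 if facing == "front" else -1
    return counter > 0

# Entering a shadow volume (front face) without leaving it: in shadow.
print(in_shadow(["front"]))          # True
# Entering and leaving the volume before reaching the target: lit.
print(in_shadow(["front", "back"]))  # False
```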

Radiosity

Shadows are determined by the form factors among the elements of the scene.

Ray tracing

Trace rays from the surface point to each light source and check whether there are any intersections.

Light map

Light maps are data structures used to store the brightness of surfaces in a virtual scene. The brightness is pre-calculated and stored in texture maps for later use. Light maps provide good-quality global illumination at a relatively low computational cost.

Relief mapping

Fine depth and surface details are hard to model explicitly, so relief mapping is used to fake them. Normal mapping defines normals through a texture, while depth mapping adds depth to a surface. Relief mapping is based on a per-fragment intersection between a ray and a height field.

Finding the intersection of the ray with the height field starts with a linear search that determines the bounds for a faster and more precise binary search.
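A sketch of that two-stage search in one dimension, assuming callables that give the height-field depth and the ray depth at a parameter t in [0, 1]; the step counts are arbitrary.

```python
def ray_heightfield_intersect(height_at, ray_depth_at, steps=32, refinements=8):
    """Find the ray parameter t in [0, 1] where the ray first dips below the
    height field (both depths are assumed to increase into the surface)."""
    prev_t, hit_t = 0.0, None
    # Linear search: march along the ray until it goes below the surface.
    for i in range(1, steps + 1):
        t = i / steps
        if ray_depth_at(t) >= height_at(t):
            hit_t = t
            break
        prev_t = t
    if hit_t is None:
        return None
    # Binary search between the last point above and the first point below.
    lo, hi = prev_t, hit_t
    for _ in range(refinements):
        mid = 0.5 * (lo + hi)
        if ray_depth_at(mid) >= height_at(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy example: a flat height field at depth 0.5 and a ray whose depth equals t.
print(ray_heightfield_intersect(lambda t: 0.5, lambda t: t))  # ~0.5
```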

Impostors

Impostors are an efficient way of adding a large number of simple models to a virtual scene without rendering a large number of polygons. A quad is rotated around its center so that it always faces the camera. Relief mapping might be used to improve the photorealism of the rendered texture.

This article from NVIDIA has more details on this.

Global illumination

In global illumination, the shading of a surface point is computed taking into account the other elements of the scene.

A light ray may hit several surfaces before it reaches the viewer. This better approximation has a higher cost.

Global illumination algorithms

Global illumination algorithms are sometimes described by a regular expression involving L (the light source), S (a specular reflection), D (a diffuse reflection), and E (the eye).

Recursive ray tracing

Handles multiple inter-reflections between shiny surfaces, refraction, and shadows. Does not consider multiple diffuse reflections.

Produces high-quality results for specular surfaces.

Expressed as LS*E | LDS*E.

Radiosity

Handles multiple reflections between diffuse surfaces, which includes color bleeding.

Produces high-quality results for diffuse environments.

Expressed as LD*E.

Two-pass (radiosity and ray tracing)

Combines both approaches.

Expressed as L(S|D)*E.
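Since these notations really are regular expressions over the alphabet {L, S, D, E}, they can be checked directly with a regular-expression engine; the path strings below are illustrative examples.

```python
import re

# Paths are written from the light (L) to the eye (E).
patterns = {
    "recursive ray tracing": re.compile(r"^(LS*E|LDS*E)$"),
    "radiosity": re.compile(r"^LD*E$"),
    "two-pass": re.compile(r"^L(S|D)*E$"),
}

for path in ["LSSE", "LDSE", "LDDE", "LDSDE"]:
    handled = [name for name, pattern in patterns.items() if pattern.match(path)]
    print(path, "->", handled or ["none of the above"])
```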

Surface reconstruction from point clouds

There are several incentives for the use of point clouds.

  • Modelling is time-consuming.
  • Models are becoming more detailed.
  • Art archiving.
  • Forensic studies.

Algebraic methods which try to fit a single and simple surface to the data points are only suitable for very small datasets.

Geometric methods often rely on Delaunay triangulation. These methods are very sensitive to noise and to point-cloud density.

Implicit methods construct a function \(f\) whose zero isosurface approximates the surface of the original object. In these methods objects are represented as equations, which requires isosurface-extraction algorithms in order to render them with the conventional graphics pipeline, and derivatives to compute their normals.

Radial basis functions provide a general approach that approximates the object through the solution of a linear system.
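A small sketch of that idea in one variable, using Gaussian radial basis functions centred at the data points (the kernel and its width are choices of the example); surface reconstruction applies the same machinery in 3-D with signed off-surface constraints.

```python
import numpy as np

def fit_rbf(centers, values, width=1.0):
    """Solve the linear system Phi w = values so that
    f(x) = sum_i w_i * exp(-(x - c_i)^2 / width^2) interpolates the data."""
    diffs = centers[:, None] - centers[None, :]
    phi = np.exp(-(diffs ** 2) / width ** 2)
    return np.linalg.solve(phi, values)

def eval_rbf(weights, centers, x, width=1.0):
    """Evaluate the fitted radial-basis expansion at x."""
    return np.exp(-((x - centers) ** 2) / width ** 2) @ weights

# Interpolate samples of an implicit function: zero at "surface" points and
# signed values off the surface.
centers = np.array([0.0, 1.0, 2.0, 3.0])
values = np.array([0.0, 1.0, 0.0, -1.0])
w = fit_rbf(centers, values)
print(eval_rbf(w, centers, 1.0))  # ~1.0, reproducing the constraint
```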