Bilinear filtering provides good results when objects are close to the camera. However, it does not work well when the object is far away and multiple texels map to the same pixel. For distant objects it would be necessary to average all the contributing texels; this averaging has to be precomputed, for instance by using image pyramids (mipmaps).
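A rough sketch of how such a pyramid is consulted (the names and the isotropic-footprint assumption are illustrative, not from the notes): the level is chosen so that roughly one texel of that level covers one pixel.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch: choosing a pyramid level from a pixel's footprint in
// texel units (the footprint would come from the rasterizer's derivatives
// of the texture coordinates).
int selectMipLevel(float texelsPerPixelX, float texelsPerPixelY, int levelCount) {
    // Each pyramid level halves the resolution, so the level at which one
    // texel covers roughly one pixel is log2 of the footprint.
    float footprint = std::max(texelsPerPixelX, texelsPerPixelY);
    int level = (int)std::floor(std::log2(std::max(footprint, 1.0f)));
    return std::min(level, levelCount - 1);
}
```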

Pixel space is related to 3-D homogeneous space through a linear matrix multiplication. Consequently, the linear interpolation of pixel space coordinates is related to the linear interpolation of 3-D homogeneous space coordinates.

- Construct an array of values for each vertex of the polygon after multiplication by the projection matrix, including a 1 at the end.
- Perform clipping.
- Perform the perspective division on all elements of the vector (divide by W). The last term becomes 1/W.
- “Interpolate all values linearly down polygon edges and across scanlines internal to the polygon”.
- At each pixel, divide the resulting values by the interpolated 1/W.
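A minimal sketch of the last two steps for a single attribute along one scanline, assuming the endpoints already hold the divided values (names are illustrative):

```cpp
// Endpoints of a scanline span after the perspective divide: each carries
// the attribute premultiplied by 1/W and the 1/W term itself.
struct Endpoint {
    float sOverW;   // attribute value divided by W at the vertex
    float oneOverW; // the 1/W term kept at the end of the vertex array
};

// Both s/W and 1/W may be interpolated linearly in screen space; the
// perspective-correct attribute is recovered per pixel by dividing them.
float attributeAt(const Endpoint& a, const Endpoint& b, float t) {
    float sOverW   = a.sOverW   + t * (b.sOverW   - a.sOverW);
    float oneOverW = a.oneOverW + t * (b.oneOverW - a.oneOverW);
    return sOverW / oneOverW;
}
```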

In the painter's algorithm, primitives are sorted by Z coordinate in camera space and rendered from back to front. Interpenetrating polygons need to be split. The sorting can be sped up with BSP trees.
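A minimal sketch of the ordering step, assuming each primitive carries a representative camera-space depth (e.g. its centroid):

```cpp
#include <algorithm>
#include <vector>

struct Primitive { float depth; /* vertex data omitted */ };

void paintersAlgorithm(std::vector<Primitive>& prims) {
    // Sort so the farthest primitive comes first and is drawn first;
    // nearer primitives then overwrite it in the framebuffer.
    std::sort(prims.begin(), prims.end(),
              [](const Primitive& a, const Primitive& b) { return a.depth > b.depth; });
    // for (const Primitive& p : prims) rasterize(p); // rasterize() assumed
}
```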

Ray casting traces a ray through each pixel of the image plane and finds the closest intersection with the objects in the scene.
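A minimal sketch of the closest-hit loop, with the scene reduced to an illustrative intersection callback:

```cpp
#include <limits>
#include <optional>

// Ray, Hit and the intersect() callback are placeholders for whatever
// scene representation is actually used.
struct Ray { float ox, oy, oz, dx, dy, dz; };
struct Hit { float t; int object; };

std::optional<Hit> closestHit(const Ray& ray, int objectCount,
                              float (*intersect)(const Ray&, int)) {
    Hit best{std::numeric_limits<float>::infinity(), -1};
    for (int i = 0; i < objectCount; ++i) {
        float t = intersect(ray, i); // returns infinity on a miss
        if (t > 0.0f && t < best.t) best = {t, i};
    }
    if (best.object < 0) return std::nullopt;
    return best;
}
```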

Depth buffering keeps track of the current depth associated with each pixel; these values are interpolated during rasterization. It can be implemented with a Z-buffer or a W-buffer.
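A minimal sketch of the per-fragment depth test, assuming interpolated depths and a buffer initialized to the far-plane value:

```cpp
#include <vector>

struct DepthBuffer {
    int width;
    std::vector<float> depth; // initialized to the far-plane value

    // Returns true if the fragment at (x, y) with depth z is visible,
    // updating the stored depth; returns false if it is occluded.
    bool testAndSet(int x, int y, float z) {
        float& stored = depth[y * width + x];
        if (z >= stored) return false; // occluded: keep the old fragment
        stored = z;                    // visible: record the new depth
        return true;
    }
};
```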

Perspective-correct interpolation of Z values provides more precision near the
camera (using linearly-interpolated Z values results in uniform precision). The
resolution of a Z-buffer depends on the ratio Z_{far} /
Z_{near}.

W is defined in terms of the Z coordinate in camera space and therefore its value is independent of Z_{near}. A W-buffer is the best choice if one needs to make Z_{near} very small.
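A small numeric sketch of this behaviour, assuming the standard perspective mapping d = f(z - n)/(z(f - n)) from eye-space distance z to stored depth:

```cpp
#include <cstdio>

// With a perspective projection the stored Z-buffer depth is hyperbolic in
// the eye-space distance z, so precision piles up near the camera.
int main() {
    const float n = 0.1f, f = 1000.0f; // large Z_far / Z_near ratio
    for (float z : {0.1f, 0.2f, 1.0f, 10.0f, 100.0f, 1000.0f})
        std::printf("z = %7.1f  ->  d = %f\n", z, f * (z - n) / (z * (f - n)));
    // Half of the [0, 1] depth range is already spent on z < 2n; the effect
    // worsens as the Z_far / Z_near ratio grows.
    return 0;
}
```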

Quaternions are the 3-D analogue of complex numbers, which play the same role for rotations in 2-D.

A quaternion is defined as the following sum.

q = q_{0} + **q** = q_{0} + **i**q_{1} + **j**q_{2} + **k**q_{3}

Therefore, a quaternion can be represented by a 4-tuple of real numbers.

The quaternion units obey some special products.

**i**^{2} = **j**^{2} = **k**^{2} = **ijk** = -1

**ij** = **k** = -**ji**

**jk** = **i** = -**kj**

**ki** = **j** = -**ik**

After grouping the intermediate results of the multiplication of two quaternions p and q, we get the following formula.

pq = p_{0}q_{0} - **p**·**q** + p_{0}**q** + q_{0}**p** + **p**×**q**

Here, p_{0}q_{0} - **p**·**q** is the scalar part and p_{0}**q** + q_{0}**p** + **p**×**q** is the vector part.

The complex conjugate q* of q is given by q* = q_{0} - **q**.

The norm of a quaternion q, denoted by |q|, is given by |q|^{2} = qq* = q_{0}^{2} + q_{1}^{2} + q_{2}^{2} + q_{3}^{2}.

The norm of a product is the product of the norms, |pq| = |p||q|.

Every non-zero quaternion q has a multiplicative inverse q^{-1}, such that qq^{-1} = q^{-1}q = 1.

A closed formula for the inverse follows from the norm: since qq* = |q|^{2}, dividing both sides by |q|^{2} gives q^{-1} = q*/|q|^{2}.

A rotation in R^{3} can be represented by a 3x3 orthogonal matrix with
determinant 1. This matrix is a rotation operator in R^{3}. Quaternions
are an alternative form of the rotation operator in R^{3}.

Quaternions (which are in R^{4}) can operate on vectors from R^{3} by treating every vector in R^{3} as a pure quaternion, that is, a quaternion whose scalar part is zero.

Rotating a vector **v** around a unit axis **u** by an angle θ is performed by q**v**q^{-1}, where q = cos(θ/2) + **u** sin(θ/2) and **v** is treated as a pure quaternion. The result q**v**q^{-1} is guaranteed to be a pure quaternion.
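A minimal sketch of this rotation operator, assuming a unit axis so that the inverse reduces to the conjugate:

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };

// Quaternion product, as derived above.
Quat mul(const Quat& p, const Quat& q) {
    return { p.w * q.w - p.x * q.x - p.y * q.y - p.z * q.z,
             p.w * q.x + q.w * p.x + p.y * q.z - p.z * q.y,
             p.w * q.y + q.w * p.y + p.z * q.x - p.x * q.z,
             p.w * q.z + q.w * p.z + p.x * q.y - p.y * q.x };
}

// Rotate the vector (vx, vy, vz) around the unit axis (ux, uy, uz) by angle a.
Quat rotate(float vx, float vy, float vz,
            float ux, float uy, float uz, float a) {
    Quat q    {std::cos(a / 2), ux * std::sin(a / 2),
               uy * std::sin(a / 2), uz * std::sin(a / 2)};
    Quat qInv {q.w, -q.x, -q.y, -q.z}; // conjugate = inverse for a unit q
    Quat v    {0.0f, vx, vy, vz};      // embed the vector as a pure quaternion
    return mul(mul(q, v), qInv);       // scalar part of the result is zero
}
```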

Shadows are regions of a scene not completely visible by the light sources. They are one of the most important clues about the spatial relationship among objects in a scene.

Most common shadow algorithms are restricted to direct light and point or directional light sources. Area light sources are usually approximated by several point lights.

The terms *umbra* and *penumbra* are used to mean complete shadow and partial
shadow, respectively.

Projected (planar) shadows work by projecting the polygonal model onto a plane and painting this projection as a shadow.

Shadow mapping is an image-based algorithm which uses a depth buffer. It can be applied to any surface that can be rasterized. It is usually implemented in graphics hardware by using the texture sub-system.

It works by generating a depth map (shadow map) of the scene from the point of view of each light source.

Each fragment visible to the camera also needs to be mapped to the light space of each light source, in order to check whether it was reached by that light.

Shadow mapping is prone to aliasing (both during the construction and during access) and self-shadowing (which requires a bias factor to be used when testing).

The texture coordinates for a vertex **v** of an object are obtained with an expression of the form T = S P_{l} V_{l} M **v**, where M is the model transform, V_{l} and P_{l} are the view and projection matrices of the light, and S is the scale-and-bias matrix that maps coordinates from [-1, 1] to [0, 1].

In percentage-closer filtering, the shadow test is performed against an area of the shadow map rather than a single texel, and the boolean results are averaged to soften shadow edges.
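A hedged sketch of such an area test, here a 3×3 filter over an illustrative ShadowMap structure (the bias parameter addresses the self-shadowing mentioned above):

```cpp
#include <vector>

struct ShadowMap {
    int size;
    std::vector<float> depth; // depth as seen from the light

    // Returns a light visibility factor in [0, 1] for a fragment whose
    // light-space coordinates are (s, t, fragDepth), s and t in texels.
    float visibility(int s, int t, float fragDepth, float bias) const {
        int lit = 0, samples = 0;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int x = s + dx, y = t + dy;
                if (x < 0 || y < 0 || x >= size || y >= size) continue;
                ++samples;
                // The bias pushes the comparison away from self-shadowing.
                if (fragDepth - bias <= depth[y * size + x]) ++lit;
            }
        return samples ? (float)lit / samples : 1.0f;
    }
};
```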

For each light source, shadow polygons are created. Then, starting with a counter set to the number of shadow volumes containing the eye, rays are traced from the eye towards the target. The counter is incremented for each front-facing shadow polygon crossed and decremented for each back-facing one. If the counter is zero at the target, the object is lit; if it is greater than zero, the object is in shadow.
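A minimal sketch of the counting test along one eye ray, assuming the crossings with shadow polygons have already been found and sorted:

```cpp
#include <vector>

// One crossing of the eye ray with a shadow polygon, sorted by t.
struct Crossing { bool frontFacing; float t; };

bool inShadow(const std::vector<Crossing>& crossings, float targetT,
              int volumesContainingEye) {
    int counter = volumesContainingEye; // eye starts inside this many volumes
    for (const Crossing& c : crossings) {
        if (c.t >= targetT) break;          // only crossings before the target
        counter += c.frontFacing ? 1 : -1;  // enter / leave a shadow volume
    }
    return counter > 0; // zero means lit, positive means in shadow
}
```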

Shadows are determined by the form factors among the elements of the scene.

Rays are traced from the surface point to each light source, checking whether there are any intersections along the way.
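A minimal sketch of this occlusion test, with the scene reduced to an assumed anyHit() callback:

```cpp
struct Vec3 { float x, y, z; };

// anyHit() is an assumed helper that reports whether some object intersects
// the ray segment (0, maxT); unlike ray casting, the closest hit is not needed.
bool lit(const Vec3& point, const Vec3& lightPos,
         bool (*anyHit)(const Vec3& origin, const Vec3& dir, float maxT)) {
    Vec3 dir {lightPos.x - point.x, lightPos.y - point.y, lightPos.z - point.z};
    // In practice the origin is offset by a small epsilon along the normal
    // to avoid self-intersection with the surface being shaded.
    return !anyHit(point, dir, 1.0f); // maxT = 1 since dir is unnormalized
}
```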

Light maps are data structures used to store the brightness of surfaces in a virtual scene. The lighting is pre-calculated and stored in texture maps for later use. Light maps provide good-quality global illumination at a relatively low computational cost.

Depth and surface details are hard to model, so relief mapping is used to fake these fine details. Normal mapping is used to define normals through a texture. Depth mapping is used to add depth to a surface. Relief mapping is based on a per-fragment intersection between a ray and a height-field.

Finding the intersection of the ray and the height-field starts with a linear search in order to determine the boundaries for a faster and more precise binary search.
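A minimal sketch of that two-phase search, assuming a height() lookup into the depth texture and a ray parameterized from the surface (t = 0) to maximum depth (t = 1):

```cpp
// Finds the ray / height-field intersection parameter; height() is an
// assumed depth-texture lookup, and the ray is assumed to span the full
// depth range so its depth at parameter t is simply t.
float intersectHeightField(float (*height)(float s, float t),
                           float sStart, float tStart,
                           float sEnd, float tEnd) {
    const int linearSteps = 32, binarySteps = 8;
    float lo = 0.0f, hi = 1.0f;
    // Linear search: find the first step at which the ray is below the
    // height-field, bracketing the intersection between lo and hi.
    for (int i = 1; i <= linearSteps; ++i) {
        float t = (float)i / linearSteps;
        float s0 = sStart + t * (sEnd - sStart);
        float t0 = tStart + t * (tEnd - tStart);
        if (t >= height(s0, t0)) { hi = t; break; }
        lo = t;
    }
    // Binary search: refine inside the bracketed interval.
    for (int i = 0; i < binarySteps; ++i) {
        float mid = 0.5f * (lo + hi);
        float s0 = sStart + mid * (sEnd - sStart);
        float t0 = tStart + mid * (tEnd - tStart);
        if (mid >= height(s0, t0)) hi = mid; else lo = mid;
    }
    return 0.5f * (lo + hi);
}
```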

Impostors are an efficient way of adding a large number of simple models to a virtual scene without rendering a large number of polygons. A quad is rotated around its center so that it always faces the camera. Relief mapping might be used to improve the photorealism of the rendered texture.
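A minimal sketch of orienting such a quad, assuming a world up vector of (0, 1, 0):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Computes the right/up axes of a quad centered at `center` that faces the
// camera; the four corners are center ± right ± up (scaled by the quad size).
void billboardAxes(Vec3 center, Vec3 camera, Vec3& right, Vec3& up) {
    Vec3 toCamera = normalize({camera.x - center.x, camera.y - center.y,
                               camera.z - center.z});
    right = normalize(cross({0.0f, 1.0f, 0.0f}, toCamera)); // assumed world up
    up = cross(toCamera, right);
}
```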

This article from NVIDIA has more details on this.

In global illumination, the shading of a surface point is computed taking into account the other elements of the scene.

A light ray may hit several surfaces before it reaches the viewer. This better approximation has a higher cost.

Global illumination algorithms are sometimes described by a regular expression involving L (the light source), S (a specular reflection), D (a diffuse reflection), and E (the eye). For example, LDS^{*}E denotes paths that leave the light, undergo one diffuse reflection followed by any number of specular reflections, and reach the eye.

Ray tracing handles multiple inter-reflections between shiny surfaces, refraction, and shadows. It does not consider multiple diffuse reflections.

Produces high-quality results for specular surfaces.

Expressed as LS^{*}E | LDS^{*}E.

Radiosity handles multiple reflections between diffuse surfaces, including color bleeding.

Produces high-quality results for diffuse environments.

Expressed as LD^{*}E.

Hybrid methods combine both approaches.

Expressed as L(S|D)^{*}E.

There are several incentives for the use of point clouds.

- Modelling is time-consuming.
- Models are becoming more detailed.
- Art archiving.
- Forensic studies.

**Algebraic methods**, which try to fit a single, simple surface to the data points, are only suitable for very small datasets.

**Geometric methods** often rely on Delaunay triangulation. These methods are very sensitive to noise and to point cloud density.

**Implicit methods** construct a function whose zero isosurface approximates the surface of the original object. In these methods objects are represented as equations, which requires isosurface-extraction algorithms in order to render them with the conventional graphics pipeline, and derivatives to compute their normals.

**Radial basis functions** provide a general approach that approximates the object through the solution of a linear system.
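A hedged sketch of such a fit, assuming the Eigen library and an illustrative kernel φ(r) = r^{3}: the implicit function is f(x) = Σ_i w_i φ(|x - p_i|), with prescribed values f(p_i) = v_i, so the weights come from a dense linear solve with A_{ij} = φ(|p_i - p_j|).

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <vector>

struct Point { double x, y, z; };

// Illustrative radial kernel; other choices (Gaussian, thin-plate) also work.
static double phi(double r) { return r * r * r; }

static double dist(const Point& a, const Point& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Solve the dense linear system A w = v with A_ij = phi(|p_i - p_j|).
Eigen::VectorXd fitWeights(const std::vector<Point>& pts,
                           const Eigen::VectorXd& values) {
    const int n = (int)pts.size();
    Eigen::MatrixXd A(n, n);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) A(i, j) = phi(dist(pts[i], pts[j]));
    return A.colPivHouseholderQr().solve(values);
}
```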