Daniel W. VanArsdale

This is a selection of basic facts, theorems and
methods that utilize homogeneous coordinates. Some of these
results have been previously published but are rarely seen in
textbooks. Others come from my article "Homogeneous
Transformation Matrices for Computer Graphics," *Computers
& Graphics*, vol. 18, no. 2, 1994. The
presentation is at an undergraduate level, but the reader should
have had some exposure to analytic projective geometry and a
course in linear algebra. However, beware: many facts change when
you pass from vector spaces and Cartesian coordinates to
homogeneous coordinates. For example, "flats," unlike subspaces,
need not contain the origin, and a point is a flat. An
eigenvector becomes an invariant point, and its eigenvalue is
not fixed.

This is a companion site to Homogeneous Transformation Matrices (HTM), a cookbook presentation of matrices that effect familiar geometric transformations. Many of the conventions and notation used here are introduced in HTM, so you should go there first and read sections I and II. Included here are methods sufficient to derive all the matrix formulas in HTM (referred to as M1, M2, . . ., M16). The references and links at the end here are identical to those in HTM.

__TOPICS__

1. Some Basic Facts

2. Duality

3. Oriented Content

4. Rank of the Product of Two Matrices

5. Affine Collineations

6. Null Space

7. Intersections

8. Invariant Flats

9. Axis-Center Form

10. Singular Transformations

11. Three Dependent Points

12. A Perspective Matrix

13. References and Links

**1.** __Some Basic Facts__

**F1**. If points P_{1},
. . ., P_{n} are independent in *R^{n}*,
any point X has a representation of the form X = c_{1}P_{1} + . . . + c_{n}P_{n}.

**F2**. If the n
x s (s <= n) matrix h has independent columns there exists an
s x n matrix H, the **left inverse** of h, such that
Hh = I_{s}, I_{s} the s x s identity
matrix. Similarly the r x n (r <= n) matrix P with independent
rows has a **right inverse**. [Shilov, p. 98]

**F3**. Let P =
[P_{1}; . . .; P_{r}] be an independent r
x n point matrix. (1) The single point Q is contained in
range(P) if and only if there exist constants k_{1}, .
. ., k_{r} such that Q = k_{1}P_{1} + .
. . + k_{r}P_{r}, which is Q = KP, with K the 1
x r matrix (k_{1}, . . ., k_{r}). (2) For an r'
x n point matrix Q, range(Q) c range(P) if and only if Q = KP for
some r' x r matrix K.

**F4**. If the n x n point matrix P = [P_{1}; . . .; P_{n}] is nonsingular, then T = P^{-1}Q is the unique matrix such that PT = Q, that is, P_{i}T = Q_{i} for each row i.

**F5.** In *R^{n}*
an ideal point U is normalized if UU^{t} = 1. For normalized
ideal points U and V, UV^{t} = cos a, where a is the angle
between their directions; in particular U and V are orthogonal
when UV^{t} = 0.

**2.** __Duality__

Duality in plane projective geometry is described as follows by Coxeter:

"The principle of duality (in two dimensions) asserts that every definition remains significant, and every theorem remains true, when we interchange the words *point* and *line* . . . To establish this principle we merely have to observe that the axioms imply their own duals." (Coxeter, p. 15)

In the analytic projective geometry of arbitrary dimension (rank n), duality permits the interchange of *point* and *hyperplane*, and of *join* and *intersection*.

Example:

*Say P and Q are distinct points, not both on hyperplane h.
Then the point X = (Qh)P - (Ph)Q is the intersection of the
line through P and Q with h*.

This follows since by its form X is on the line through points P
and Q, and Xh = 0. The dual is:

*Say p and q are distinct hyperplanes, not both containing
point H. Then the hyperplane x = (qH)p - (pH)q is the join of
the intersection of p and q with H.*
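
The formula is easy to check numerically. A minimal sketch in Python (NumPy assumed), using HTM's conventions as I read them: points are row vectors whose first coordinate is the homogeneous one, hyperplanes are columns, and incidence means the product Ph vanishes.

```python
import numpy as np

# Points P = (0,0), Q = (2,2) and the line h: x = 1 in the
# homogeneous plane R^3, coordinates ordered (w, x, y).
P = np.array([1.0, 0.0, 0.0])
Q = np.array([1.0, 2.0, 2.0])
h = np.array([-1.0, 1.0, 0.0])

# X = (Qh)P - (Ph)Q lies on line PQ by its form, and Xh = 0 puts it on h.
X = (Q @ h) * P - (P @ h) * Q
print(X / X[0])   # -> [1. 1. 1.]: line PQ (y = x) meets x = 1 at the point (1, 1)
```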

Facts F3 and F3D are dual, as are Theorems 5 and 5D.

**3.** __Oriented Content__

A student's first encounter with homogeneous coordinates is often the following:

Theorem. In the Cartesian plane, distinct points
P_{1}= (x_{1}, y_{1}), P_{2}= (x_{2},
y_{2}), P_{3}= (x_{3}, y_{3})
are collinear if and only if det [1, x_{1}, y_{1};
1, x_{2}, y_{2}; 1, x_{3}, y_{3}]
= 0.

In the 3 x 3 matrix above the rows (1, x_{1},
y_{1}), (1, x_{2}, y_{2}), (1, x_{3},
y_{3}) are the normalized homogeneous coordinates of
points P_{1}, P_{2}, P_{3}. This
interpretation is not developed in the texts I have examined,
nor do linear algebra textbooks give the following:

**Theorem 1: **In *R^{n}* the
oriented content of the simplex formed by the n ordered
independent ordinary points P_{1}, . . ., P_{n}, taken in
normalized homogeneous coordinates, is det [P_{1}; . . .; P_{n}] / (n-1)!.

Orientation here matches what we expect in one,
two and three dimensions. For example, in the plane the
determinant will be positive if P_{1}, P_{2} and
P_{3} form a counterclockwise triangle. When the points
are dependent (collinear in *R ^{3}*) the
determinant is zero.
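
In the plane the theorem reads: for normalized points (1, x, y), the oriented area of triangle P_{1}P_{2}P_{3} is det[P_{1}; P_{2}; P_{3}]/2. A quick numeric sketch:

```python
import numpy as np

# Rows are normalized homogeneous points (1, x, y) of the triangle
# (0,0), (1,0), (0,1), listed counterclockwise.
P = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])

area    = np.linalg.det(P) / 2.0           # +0.5: counterclockwise orientation
area_cw = np.linalg.det(P[[0, 2, 1]]) / 2.0  # swap two vertices: -0.5
```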

**4.** __The Rank of the Product of Two Matrices__

In matrix theory the rank of a matrix A is the maximum number of independent rows (or columns) that can be selected. This is equal to the dimension of the subspace spanned by the rows of A. The following theorem appears in every basic textbook on matrices (e.g. Ayres, p. 43):

Theorem: Let A be an r x n matrix and B be an n x
s matrix. Then

(1) rank (AB) <= rank (A)

(2) rank (AB) <= rank (B).

Interpreting the components of the matrices as homogeneous coordinates we have the following more specific theorem (VanArsdale).

**Theorem 2: **Let A
be an r x n matrix and B be an n x s matrix. Then

(1) rank (AB) = rank (A) - rank [range(A) ^ null(B)]

(2) rank (AB) = rank (B) - rank [range(B^{t}) ^
null(A^{t})].

To prove (1), using concepts from linear algebra,
extend a basis Y_{1}, . . ., Y_{j}
of range(A) ^ null(B) to range(A), producing Y_{1},
. . ., Y_{j}, Y_{j+1}, . . ., Y_{j+k}.
Then Y_{j+1}B, . . ., Y_{j+k}B is a basis for
range(AB). Thus rank[range(A)] = rank(A) = rank[range(A) ^ null
(B)] + rank(AB) as required. The second part of the theorem
follows after transposing the first. Here B^{t }is the
transpose of B.

Theorem 2 gives a geometric interpretation
of how much rank(AB) falls short of rank(A), and we see at
once that if the flats represented by A and by null(B) are disjoint the
two ranks are equal.
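
A small numeric instance of Theorem 2(1): below, range(A) meets null(B) in a rank 1 flat, so rank(AB) is exactly one less than rank(A).

```python
import numpy as np

B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])        # null(B) is spanned by the point (0, 0, 1)
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])   # range(A) contains (0, 0, 1)

rank = np.linalg.matrix_rank
# rank(AB) = rank(A) - rank[range(A) ^ null(B)] = 2 - 1 = 1
print(rank(A @ B), rank(A), rank(B))   # -> 1 2 2
```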

**5.** __Affine Collineations__

An affine
collineation (affinity) maps ideal points to ideal points.
Geometrically this means parallel lines are mapped to parallel
lines. Most familiar linear transformations are affine: for
example rotation, dilation, translation, reflection and shear. A
nonsingular matrix T represents an affinity if and only if its
first column equals [k; 0; . . .; 0], k /= 0. If k = 1, T is **normalized**
and T maps ordinary normalized points to ordinary normalized
points.

An affine collineation on *R ^{n}*
is determined by its mapping of n independent ordinary points (Snapper & Troyer, p. 93). But it is
useful to allow ideal points as follows.

**Theorem 3: **Let
P_{1}, . . ., P_{n} and Q_{1}, . . ., Q_{n}
be two sets of points in *R^{n}* such that: (1)
for each i, P_{i} and Q_{i} are either both ordinary
normalized points or both ideal points; (2) the P_{i} are
independent, as are the Q_{i}; (3) for some fixed k, P_{k}
is ordinary and, for each ideal P_{j}, P_{k} + P_{j} is
ordinary normalized, likewise Q_{k} + Q_{j}. Then there is a
unique affinity f with P_{i}f = Q_{i} for all i, and f is
represented by the normalized matrix T = P^{-1}Q, where
P = [P_{1}; . . .; P_{n}] and Q = [Q_{1}; . . .; Q_{n}].

Proof: The mapping conditions on f specify
a unique affinity since the P_{i} and the P_{k} + P_{j}
constitute n independent ordinary points, likewise their images
Q_{i} and Q_{k} + Q_{j}. And
matrix T effects these mappings since PT = Q (see F4).
Finally we need to show that T is affine and normalized. Since
the first (homogeneous) columns of P and Q are equal, Pw = Qw,
so w = P^{-1}Qw = Tw, which says the first column of T is
w = (1; 0; . . .; 0). (Stolfi,
p. 158, treats the case of all ordinary points.)

The conditions in Theorem 3 mean that if we use
ideal points P_{j} and P_{j}f = Q_{j} to
specify an affine transformation we must choose representations
of P_{j} and Q_{j} that work when added to some
ordinary point. Usually the ordinary point P_{k} in the
theorem will be an invariant point on the axis of f. For a
simple example, the reflection f about the y-axis in *R^{3}*
maps (0,0,1)f = (0,0,1), (1,0,1)f = (1,0,1) and
(0,1,0)f = (0, -1, 0) = Q_{3}. Here adding the ideal point
(0,1,0) or its image (0,-1,0) to the invariant ordinary point
(1,0,1) produces an ordinary normalized point, as required.
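
As a numeric sketch of Theorem 3, stack the three points of the reflection example as rows of P and their images as rows of Q; then T = P^{-1}Q:

```python
import numpy as np

# Rows: ideal point (0,0,1), ordinary point (1,0,1), ideal point (0,1,0),
# and their images under the reflection.
P = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
Q = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, -1.0, 0.0]])

T = np.linalg.solve(P, Q)   # T = P^{-1} Q, so that PT = Q
# T equals diag(1, -1, 1); its first column is (1; 0; 0),
# so T is a normalized affinity, as Theorem 3 promises.
print(T)
```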

Let the rank n - 2 ordinary flat S be represented
by the point matrix P = [P_{1}; . . .; P_{n-2}]
and by hyperplane matrix g = [g_{1}, g_{2}] = P^{h},
g oriented and orthonormalized. Then the rotation f by angle b
about axis S has the representation T = [P; g^{N}]^{-1
}[P; Rg^{N}] where R = [cos b, sin b; -sin b,
cos b] (M15 in HTM).
This can be verified by noting (see F4): [P;
g^{N}]T = [P; Rg^{N}], thus P_{i}T = P_{i}
for i = 1, . . ., n-2 and

[g_{1}^{N};
g_{2}^{N}] T = g^{N}T = Rg^{N}
= [(cos b) g_{1}^{N}+ (sin b) g_{2}^{N};
(-sin b) g_{1}^{N }+ (cos b) g_{2}^{N}]

Thus T has mapped n - 2 + 2 = n independent
points correctly, and since f is affine, by Theorem 3 this
ensures T represents f. If one lacks confidence about
rotating the ideal points g_{1}^{N} and g_{2}^{N}, their
coordinates can both be added to an ordinary invariant point on
axis S to return to the familiar. Theorem 3 can also be used to
derive equations M3, M5, M6, and M9 in HTM.
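
The construction can be tested numerically. A sketch for the z-axis in *R^{4}* (coordinates ordered with the homogeneous one first), where P and g^{N} are especially simple:

```python
import numpy as np

b  = 0.3                                        # rotation angle
P  = np.array([[1.0, 0, 0, 0],                  # ordinary point on the axis (the origin)
               [0.0, 0, 0, 1]])                 # ideal point of the z-axis
gN = np.array([[0.0, 1, 0, 0],                  # normalized normals of the hyperplanes
               [0.0, 0, 1, 0]])                 #   x = 0 and y = 0 containing the axis
R  = np.array([[ np.cos(b), np.sin(b)],
               [-np.sin(b), np.cos(b)]])

M = np.vstack([P, gN])
T = np.linalg.solve(M, np.vstack([P, R @ gN]))  # [P; gN] T = [P; R gN]

X = np.array([1.0, 1.0, 0.0, 5.0]) @ T          # rotate the point (1, 0, 5)
print(X)                                        # approximately (1, cos b, sin b, 5)
```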

**6.** __Null Space__

In HTM we introduced the notation P^{h}
for the independent hyperplane matrix that represents the same
flat (by intersections) that a given point matrix P represents
(by unions). Procedure B in HTM provides a method to calculate P^{h
}by elementary column operations. It is also useful to be
able to reverse this process and construct an independent point
matrix, Q, that represents the same flat that a given hyperplane
matrix g represents. This is simply a point representation of
the null space of g, Q = null(g), or Q = g^{P}.
Procedure B can be used to calculate g^{P} as follows.

Procedure D: For a hyperplane matrix g, find an independent point matrix

Q = g^{P}such that range(Q) = null(g).

Step 1. Transpose g (getting g^{T}).

Step 2. Use Procedure B to find (g^{T})^{h}

Step 3. Transpose (g^{T})^{h}to find Q = g^{P}

Step 4. (Optional). Normalize the rows of Q.
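
Procedure D is, in effect, a null-space computation. A sketch that substitutes an SVD null-space routine for Procedure B's elementary column operations (the routine is mine, not HTM's):

```python
import numpy as np

def null_points(g):
    """g^P: an independent point matrix whose rows span null(g),
    for an n x s hyperplane (column) matrix g."""
    _, _, Vt = np.linalg.svd(g.T)
    r = np.linalg.matrix_rank(g)
    return Vt[r:]                # the last n - r right singular vectors

g = np.array([[-1.0],            # the single hyperplane (line) x = 1
              [ 1.0],            # in the homogeneous plane R^3
              [ 0.0]])
Q = null_points(g)               # 2 x 3 point matrix for the same line
print(Q @ g)                     # -> zeros: every row of Q lies on g
```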

**7.** __Intersections__

Homogeneous coordinates are at their best in
finding the intersection of two flats. To perform the
calculations involved it is necessary to make two simple
modifications to the two procedures (B and D) that convert point
representations to hyperplane representations (P^{h}),
and vice versa (h^{P}).

Now to find the intersection of any two flats S_{1} and S_{2}, given by point matrices Q_{1} and Q_{2}, we first modify the procedures. Procedure B (and D) modifications:

(1) Allow the input matrix P (or h) to have dependent rows (or columns).

(2) Allow the input and output of "improper" matrices: (i) a point matrix with no rows, representing the null flat, and (ii) a hyperplane matrix with no columns, representing all of *R^{n}*.

**Theorem 4.** Q_{1} ^ Q_{2} = [Q_{1}^{h}, Q_{2}^{h}]^{P}

This works in a space of any dimension, for flats
of any dimension, for parallel flats, and for intersections of
any dimension including the null flat for disjoint inputs.
Coding requirements are simple: just reduction to a standard
form by elementary column operations, with the dimension n a
variable. If the intersection is a single point that is all you
will get, since as specified the procedure for h^{P} produces a
matrix with independent rows. This obvious and extremely general
method was probably known long before 1968 (Hodge & Pedoe, p. 189), but it gets
little or no attention in basic texts and packaged programs.
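
A sketch of Theorem 4 for two lines of the plane, again with SVD null-space routines standing in for Procedures B and D (the helper names are mine, not HTM's):

```python
import numpy as np

def to_hyperplanes(Q):            # Q^h: independent columns h with Qh = 0
    _, _, Vt = np.linalg.svd(Q)
    return Vt[np.linalg.matrix_rank(Q):].T

def to_points(g):                 # g^P: independent rows P with Pg = 0
    return to_hyperplanes(g.T).T

Q1 = np.array([[1.0, 0.0, 0.0],   # the line y = x: through (0,0)
               [1.0, 1.0, 1.0]])  #   and (1,1)
Q2 = np.array([[1.0, 1.0, 0.0],   # the line x = 1: through (1,0)
               [0.0, 0.0, 1.0]])  #   and the ideal y-direction

g = np.hstack([to_hyperplanes(Q1), to_hyperplanes(Q2)])
X = to_points(g)                  # Q1 ^ Q2 = [Q1^h, Q2^h]^P
print(X / X[0, 0])                # -> [[1. 1. 1.]]: the point (1, 1)
```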

The capability to find intersections and unions
makes the following two constructions in three dimensional
Euclidean space (*R ^{4 }*) almost a matter of
definition.

In *R ^{4}*, given two skew lines l

In *R ^{4}*, given two ordinary skew
lines l

**8.** __Invariant Flats__

Say a projective transformation f on *R^{n}*
is represented by the homogeneous matrix T. A point
P is **invariant** under f if Pf = P, that is, if PT = cP for
some constant c /= 0. Thus an invariant point is an eigenvector
of T, but its eigenvalue c is not fixed: it changes when T is
rescaled.

A flat, considered as a set of points S, is **invariant
**under a mapping f if f is a bijection on S. If each point of
S is itself invariant the flat is said to be **point-wise**
invariant (P-invariant). All points on
a P-invariant flat have the same eigenvalue, and conversely all
invariant points with equal eigenvalues under a nonsingular
projective transformation constitute a flat. An axis of f is a P-invariant
proper flat that is not strictly contained in a P-invariant
flat. I call the common eigenvalue of points on an axis the P-eigenvalue of the axis to
distinguish this number from the eigenvalue of hyperplanes.

A **bundle** of hyperplanes S consists of all hyperplanes that contain some flat of points (the **core flat** of the bundle). A bundle of hyperplanes is invariant under a mapping by f if f defines a bijection on S. Note that unless a bundle consists of a single hyperplane, there can be no projective mapping of an entire bundle by f unless f is a collineation, because if there is a single null point of f it will lie on some hyperplane of the bundle. This break in duality arises because we have defined projectivities as transformations of points. For nonsingular projectivities (collineations) hyperplanes are mapped dually to points.

If each hyperplane of bundle S is itself invariant the bundle is said to be hyperplane-wise invariant (h-invariant). All hyperplanes of an h-invariant bundle have the same eigenvalue, and conversely all invariant hyperplanes with equal eigenvalues under a projective transformation constitute a bundle. A **center** of f is an h-invariant proper bundle (i.e. its core flat is proper) that does not strictly contain an h-invariant bundle.

It helps to visualize bundles by their core flats. Thus a center of f has a core flat that does not properly contain the core flat of an h-invariant bundle. We can redefine the "center" of a projective transformation as the core flat of its center. This breaks strict duality but appears in the literature.

Example 1: A rotation T in

**9.** __Axis-Center Form__

Most familiar geometric transformations have an axis. The next theorem, which is followed by its dual, is very useful for constructing homogeneous matrices representing such transformations.

**Theorem 5.
**Say a projective transformation f on R^{n} has an axis S of
rank r and h is any independent n x s hyperplane matrix
representation of S, s = n - r. Then f has a matrix
representation of the form

**T = I + hC**

where C is some independent s x n point matrix. If f is
a collineation, C represents a center of f. (VanArsdale)

Proof: Since the common eigenvalue of
points on axis S is nonzero, any matrix representation T of f
can be scaled so this P-eigenvalue is 1. Then any invariant
point Q with eigenvalue 1 must be on S since S is an axis, and
we have Q(T - I) = 0 if and only if Qh = 0. Interpreting T - I
as a hyperplane matrix we get (T - I)^{P} = h^{P}
and so by F3D, T - I = hC, C some s x
n point matrix. Now rank(C) >= rank(hC) = rank(T - I) =
rank(h) = s, and since C has s rows these must be independent.

It remains to show that if f is a collineation,
range(C) is a center of f. Every hyperplane g of R^{n} is in the domain of a collineation f, and
those that contain C are invariant under f with eigenvalue 1
since Tg = (I + hC)g = g using Cg = 0. And any invariant
hyperplane g with eigenvalue 1 contains range(C), for Tg = [I +
hC]g = g implies hCg = 0 which requires Cg = 0 since h has a
left inverse. This shows there is no h-invariant proper subflat
of range(C), hence C represents a center of f.
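
For a concrete instance of Theorem 5, take a translation: its axis is the hyperplane at infinity w and its center the ideal point in the direction of translation (an elation, since the center lies on the axis). A sketch:

```python
import numpy as np

w = np.array([[1.0], [0.0], [0.0]])   # axis: the hyperplane at infinity
C = np.array([[0.0, 3.0, 4.0]])       # center: ideal point, direction (3, 4)

T = np.eye(3) + w @ C                 # T = I + hC
X = np.array([1.0, 1.0, 1.0]) @ T     # translate the point (1, 1)
print(X)                              # -> (1, 4, 5): the point (1, 1) moved by (3, 4)
assert C @ w == 0                     # center on axis: f is an elation
```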

**Theorem 5D.** Say a projective
transformation f on R^{n} has a center S
of rank s and C is any independent s x n point matrix
representation of S. Then f has a matrix representation of the
form

**T = I + hC**

where h is some independent hyperplane matrix. If
f is a collineation, h represents an axis of f.

Proof. Since the common eigenvalue of hyperplanes
containing center S is nonzero, any matrix representation T of f
can be scaled so this h-eigenvalue is 1. Then any invariant
hyperplane g with eigenvalue 1 must contain C since C is a
center, and we have (T - I)g = 0 if and only if Cg = 0.
Interpreting T - I as a point matrix we get range(T - I) =
range(C) and so by F3, T - I = hC, h some n x
s hyperplane matrix. Now rank(h) >= rank(hC) = rank(T - I)
= rank(C) = s, and since h has s columns these must be
independent.

It remains to show that if f is a collineation, h^{P
}is an axis of f. Every point P of R^{n} is in the domain of a collineation f, and
those in h^{P} are invariant under f with eigenvalue 1
since PT = P(I + hC) = P using Ph = 0. And any invariant point P
with eigenvalue 1 lies in h^{P}, for PT = P[I + hC] = P
implies PhC = 0 which requires Ph = 0 since C has a right
inverse. This shows there is no P-invariant flat that properly
contains h^{P}, hence h represents an axis of f.

Say by geometrical considerations we know a projective transformation f has an axis of rank r and a corresponding center (or null space) of rank s = n - r, and we wish to construct a homogeneous matrix representation T of f. We can choose any fixed n x s hyperplane matrix h to represent the axis, and for the center (or null space) of f we can choose any fixed s x n point matrix C. By Theorem 5 we know that f is represented by a matrix of the form T = I + hC' where C' has the same range as C. Thus C' = MC where M is some nonsingular s x s matrix. So

**T = I + hMC**

and further conditions on f may allow us to solve
for matrix M. When rank(h) = rank(C) = 1, the "matrix" M is a
scalar. This "axis-center" method can be used to derive
matrices M1, M2, M7, M8, M10, M12, M13 and M14 in HTM. In
the next section we illustrate its use in deriving a matrix for
singular projection (M1).

**10.** __Singular Transformations__

If a projective transformation f on R^{n} maps at least one point to null (i.e. there
is at least one point not in the domain of f), then we call f singular, and it is represented by
a singular n x n matrix T.

Since matrix T is of rank r < n, there also exists an n x s matrix g, s = n - r, with independent columns such that Tg = 0 (Ayres, p. 78). Matrix g may be interpreted as a hyperplane matrix, and then null(g) is the range of T, range(T): for any point P in *R^{n}*, (PT)g = P(Tg) = 0, so range(T) is contained in null(g), and the two flats have equal rank r.

Suppose instead of beginning with a transformation matrix T we are given S_{r}, a flat of rank r to serve as the axis of a projection f, and a complementary flat S_{s} of rank s = n - r to serve as its null space.

Let space S_{s} be represented by the independent s x n point matrix C and the axis S_{r} by the independent n x s hyperplane matrix h; complementarity makes the s x s matrix Ch nonsingular. By the axis-center method of the previous section, T = I + hMC for some nonsingular M. Requiring that the null space map to null gives CT = C + ChMC = 0, so M = -(Ch)^{-1} and

T = I - h(Ch)^{-1}C
for a matrix representation of a general
projection. When the axis h is a single hyperplane this reduces
to T = I - hC/Ch, an attractive formula that I have not been
able to find published prior to 1994 (VanArsdale). Please
inform me if you know of a prior source for this. More
complicated equivalent expressions do appear. Hodge and Pedoe give one
using Grassmann coordinates (p. 309).

We have, in effect, defined a projection as a projective transformation with complementary null space and axis. Other equivalent definitions appear in the literature [Hodge and Pedoe, p. 283; Halmos, p. 73].
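
A numeric sketch of T = I - hC/Ch in the plane: projection onto the line y = 0 from the center point (0, 1).

```python
import numpy as np

h = np.array([[0.0], [0.0], [1.0]])   # axis: the line y = 0
C = np.array([[1.0, 0.0, 1.0]])       # center (null point): the point (0, 1)

T = np.eye(3) - (h @ C) / (C @ h)     # T = I - hC/Ch
P = np.array([1.0, 2.0, 3.0])         # the point (2, 3)

print(P @ T)                          # -> (-2, 2, 0): the point (-1, 0) on the axis
print(C @ T)                          # -> zeros: the center maps to null
```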

**11.** __Three Dependent Points__

In linear algebra vectors V_{1}, .
. ., V_{r} are dependent if constants c_{1}, . .
., c_{r} exist, not all zero, such that c_{1}V_{1}
+ . . . + c_{r}V_{r} = 0. To actually find the
constants c_{i} requires the solution of a system
of homogeneous linear equations, which is easy enough to
do. But there is usually no convenient explicit expression for
these constants, nor a geometrical interpretation of them. In
projective space, *R ^{n}*, we have for homogeneous
coordinates:

**Theorem 6. **If three
points P_{1}, P_{2}, P_{3} are dependent
and h_{1}, h_{2} are any two hyperplanes then

(d_{21}d_{32} - d_{22}d_{31})P_{1}
+ (d_{31}d_{12} - d_{32}d_{11})P_{2}
+ (d_{11}d_{22} - d_{12}d_{21})P_{3}
= 0

where d_{ij} = P_{i}h_{j}.

To prove this we can write the lengthy dependence relation as the symbolic determinant

D = det [P_{1}, P_{2}, P_{3};
P_{1}h_{1}, P_{2}h_{1}, P_{3}h_{1};
P_{1}h_{2}, P_{2}h_{2}, P_{3}h_{2}
].

Here the first row, [P_{1}, P_{2},
P_{3}], consists of three points each with n homogeneous
coordinates, so D cannot be evaluated as a number. But
when we consider each of the n coordinates in turn, D becomes
a legitimate determinant, and each of these n determinants will
be zero. Since the points are dependent, one is a linear
combination of the others, say P_{3} = c_{1}P_{1}
+ c_{2}P_{2}. Substituting this for P_{3}
everywhere in D produces a determinant with dependent
columns for each of the n coordinates of the points.
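
Theorem 6 is easy to test numerically; a sketch with an arbitrary dependent triple and two arbitrary hyperplanes:

```python
import numpy as np

P1 = np.array([1.0, 0.0, 0.0])
P2 = np.array([1.0, 2.0, 2.0])
P3 = 2*P1 + 3*P2                      # a dependent triple
h1 = np.array([1.0, 1.0, 0.0])        # two arbitrary hyperplanes
h2 = np.array([0.0, 0.0, 1.0])

d = lambda P, h: P @ h                # d_ij = P_i h_j
c1 = d(P2, h1)*d(P3, h2) - d(P2, h2)*d(P3, h1)
c2 = d(P3, h1)*d(P1, h2) - d(P3, h2)*d(P1, h1)
c3 = d(P1, h1)*d(P2, h2) - d(P1, h2)*d(P2, h1)

print(c1*P1 + c2*P2 + c3*P3)          # -> [0. 0. 0.], with (c1, c2, c3) = (-4, -6, 2)
```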

One can write corollaries, duals and extensions of Theorem 6.

**Corollary
6A. **If normalized ordinary points P_{1},
P_{2}, P_{3} are dependent and h is any
hyperplane then (d_{2}-d_{3})P_{1}
+ (d_{3}-d_{1})P_{2} + (d_{1}-d_{2})P_{3}
= 0 where d_{i} = P_{i}h.

This follows from Theorem 6 by letting h_{1}
= h and h_{2} = w, the hyperplane at infinity, for
then P_{i}h_{2} = d_{i2} = 1. Here
the d_{i} have a geometrical interpretation for h
ordinary and normalized: d_{i} = P_{i}h is
the directed distance from hyperplane h to point P_{i}.

**Corollary
6B. **If the normalized ordinary
hyperplanes h_{1}, h_{2}, h_{3} are
dependent and X is any point on the ideal line L through their
normals, then

sin(a_{2} - a_{3})h_{1} + sin(a_{3}
- a_{1})h_{2} + sin(a_{1} - a_{2})h_{3}
= 0

where a_{i} is the directed angle between
X and h_{i}^{N} on L.

This can be proved by writing a dual of Theorem
6. The symbolic determinant will be: D = [h_{1}, h_{2},
h_{3}; Xh_{1}, Xh_{2}, Xh_{3};
Yh_{1}, Yh_{2}, Yh_{3}] where X and Y
are arbitrary points. To prove Corollary 6B take X as chosen
there, and point Y orthogonal to X on L (i.e. XY^{t} =
0, see F5). Then Xh_{i} = cos a_{i}
= d_{i1} and Yh_{i} = sin a_{i} = d_{i2}
for i = 1, 2, 3. The first term of D will then be (d_{21}d_{32}
- d_{22}d_{31})h_{1} = [cos a_{2}
sin a_{3} - sin a_{2} cos a_{3}] h_{1}
= sin(a_{3} - a_{2}) h_{1}, and so on
as in Corollary 6B, with a change of signs to get positive
cycling of the subscripts.

When considering ideal points in *R ^{n }*and
the angles between them, it may be more suitable to use vectors
(with n - 1 components) and their dot products.

**Corollary 6C.**
Say the vectors V_{1}, V_{2}, V_{3} are
dependent. Then c_{1}V_{1} + c_{2}V_{2}
+ c_{3}V_{3} = 0 where c_{1}
= (d_{21}d_{32} - d_{22}d_{31}),
c_{2} = (d_{31}d_{12} - d_{32}d_{11}),
c_{3} = (d_{11}d_{22} - d_{12}d_{21}),
using d_{ij} = V_{i}.V_{j} (dot
product).

This follows from the symbolic determinant D = [V_{1},
V_{2}, V_{3}; d_{11}, d_{21}, d_{31};
d_{12}, d_{22}, d_{32}].

**12.** __A Perspective Matrix__

If a collineation f has a hyperplane as an axis
it is called a **perspective**. From Theorem 5 we know f has
a corresponding center point, and a matrix representation of the
form T = I + hC where hyperplane h is the axis and C is the
center. If point C is on h (Ch = 0), f is called an **elation**.
If C is not on h then f is called an **homology**. Say P and
Q are distinct points and Pf = Q. Then neither P nor Q is on h
since f is one-to-one and all points on h are invariant. Now if
Ph = Qh then f is an elation, for PT = P + (Ph)C = Q implies Ph +
(Ph)Ch = Qh which, using Ph = Qh, requires Ch = 0. So if f
is an homology, Ph /= Qh.

We will construct a matrix for an homology given
its axis and two successive mappings. In *R^{n}*
the invariant axis of perspective f fixes the image of n - 1
independent points on the axis. When f is an homology, two
additional points (off the axis) and their images will determine
n + 1 maps, and hence f. But these points cannot be chosen
freely. For transforming P_{1}f = P_{2} we can take the
center on the line through P_{1} and P_{2}, giving

**(A)**
T = I + h[-P_{1}/(P_{1}h) + xP_{2}],
x any nonzero constant,

since then P_{1}T = P_{1} - P_{1} + x(P_{1}h)P_{2}
= x(P_{1}h)P_{2}, a nonzero multiple of P_{2}.
x any nonzero constant.

Now if we also transform P_{2}T = P_{3},
clearly the points P_{1}, P_{2}, P_{3}
must all lie on the same line through C. This dependence of
three points provides an application for the explicit
expressions of dependence in the previous section.

**Theorem 8. **Say the homology f
has hyperplane h as an axis, and for distinct ordinary points P_{1},
P_{2}, P_{3}: P_{1}f = P_{2}
and P_{2}f = P_{3} . Use the notation d_{i}
= P_{i}h. Then f is represented by the matrix

T = I + h( -P_{1}/d_{1} + xP_{2})
where x = [d_{3}(d_{1} - d_{2})] / [d_{1}d_{2}(d_{2}-d_{3})].

Proof: Since P_{1}, P_{2}, P_{3
}are collinear we can apply Corollary 6A, using the axis h of f as
the arbitrary hyperplane in the corollary. So (d_{2}-d_{3})P_{1}
+ (d_{3}-d_{1})P_{2} + (d_{1} -
d_{2})P_{3} = 0, d_{i} as above.
Thus:

**(B)**
P_{1} + [(d_{3} - d_{1})/(d_{2}
- d_{3})] P_{2} = kP_{3}

where k is a constant we need not be concerned
with as long as it is not zero. Since f is an homology, from the
above discussion we know d_{1} /= d_{2} and d_{2}
/= d_{3}.

Now applying T of (A) to P_{2}
gives P_{2}T = P_{2} + d_{2}[-P_{1}/d_{1}
+ xP_{2}], or

**(C)**
P_{2}T = -(d_{2}/d_{1})[P_{1} -
(d_{1}/d_{2})(1 + d_{2}x)P_{2}]

We wish the expression in the brackets in (C) to
equal a multiple of P_{3} as in (B). So solving P_{1}
- (d_{1}/d_{2})(1 + d_{2}x)P_{2}
= P_{1} + [(d_{3} - d_{1})/(d_{2}
- d_{3})]P_{2} for x gives x = d_{3}(d_{1}
- d_{2}) / [d_{1}d_{2}(d_{2} - d_{3})]
as in Theorem 8. Note that generally the homology of Theorem 8
will not be affine, hence Theorem 3
above for affinities does not apply.
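
A numeric sketch of Theorem 8: axis y = 0 and the collinear points (0,1) -> (0,2) -> (0,4), for which the formula gives x = 1 and T doubles the y coordinate:

```python
import numpy as np

h  = np.array([[0.0], [0.0], [1.0]])  # axis: the line y = 0
P1 = np.array([[1.0, 0.0, 1.0]])      # the point (0, 1)
P2 = np.array([[1.0, 0.0, 2.0]])      # (0, 2) = P1 f
P3 = np.array([[1.0, 0.0, 4.0]])      # (0, 4) = P2 f

d1, d2, d3 = ((P @ h).item() for P in (P1, P2, P3))
x = d3*(d1 - d2) / (d1*d2*(d2 - d3))  # -> 1.0
T = np.eye(3) + h @ (-P1/d1 + x*P2)   # Theorem 8's matrix

# Here T = diag(1, 1, 2), and the images come out exactly:
print(P1 @ T, P2 @ T)                 # -> [[1. 0. 2.]] [[1. 0. 4.]]
```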

**13.** __References and Links__

Ayres, F. Jr., *Matrices*,
Schaum's Outline Series, New York, 1962.

Coxeter, H.S.M., *The
Real Projective Plane* (2nd ed.), Cambridge, 1961.

Halmos, P.R., *Finite-Dimensional
Vector Spaces*, (2nd ed.), Van Nostrand, New York, 1958.

Hodge, W.V.D & Pedoe,
D., *Methods of Algebraic Geometry* (Vol. 1),
Cambridge Univ. Press, 1968.

Laub, A.J. & Shiflett,
G.R., A linear algebra approach to the analysis of rigid body
displacement from initial and final position data. *J. Appl.
Mech*. 49, 213-216, 1982.

Pedoe, D., *Geometry*,
Dover, New York, 1988.

Roberts, L.G., *Homogeneous
Matrix Representation and Manipulation of N-dimensional
Constructs*. MIT Lincoln Laboratory, MS 1405, May 1965.

Semple, J.G. & Kneebone,
G.T., *Algebraic Projective Geometry*, Clarendon
Press, Oxford, 1952.

Shilov, G.E., *Linear
Algebra*, Dover Publications, New York, 1977.

Snapper, E. & Troyer,
R.J., *Metric Affine Geometry. *Academic Press,
1971.

Stolfi, J., *Oriented
Projective Geometry*, Academic Press, 1991.

VanArsdale, D.,
Homogeneous Transformation Matrices for Computer Graphics, *Computers
& Graphics*, vol. 18, no. 2, 177-191, 1994.

Sixteen homogeneous matrices for familiar geometric transformations plus examples. Companion to this site.

Transformation of Coordinates

Uses coordinates to prove some classical theorems in plane
projective geometry.

Math Forum - Projective geometry

Internal links to articles on projective geometry at various
levels. Useful online resource.

Geometric transformations

Elementary 2D and 3D transformations, including affine, shear,
and rotation.


Corrections, references, comments or questions on
this article are appreciated, but please no unrelated homework
requests.

email Daniel W. VanArsdale: barnowl@silcom.com

First uploaded Oct. 2, 2000. Sections 8-10 revised October 26, 2007. Hit counter added 11/15/2019.