[vox-tech] linear algebra: equivalent matrices

Peter Jay Salzman p at dirac.org
Wed Dec 7 08:41:16 PST 2005


Posted to vox-tech since this is a CS topic.  I'd like to verify some things
which I think are true.


Consider the set of all square matrices of order n (i.e. n x n matrices).
The determinant det(M) induces an equivalence relation on that set; each
equivalence class consists of the matrices sharing a common determinant.
The relation is defined by:

   A ~ B  iff  det(A) == det(B)                (1)
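
As a quick numerical sanity check of (1) in numpy (the two matrices here
are just illustrative; any pair with equal determinants would do):

   import numpy as np

   A = np.array([[2.0, 0.0],
                 [0.0, 3.0]])
   B = np.array([[ 1.0, 2.0],
                 [-1.0, 4.0]])   # det = 1*4 - 2*(-1) = 6

   # Both determinants equal 6, so A ~ B under relation (1), even
   # though A and B are otherwise very different matrices.
   print(np.isclose(np.linalg.det(A), np.linalg.det(B)))   # True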


Now, like vectors, matrices are always expressed in a basis, whether we
explicitly say so or not.  So when we write the components of M, we should
really write M_b, where b labels the basis we chose to express M in.  We
can re-express M_b in a different basis, say as M_a, by a similarity
(change-of-basis) transformation:

   M_a = S^{-1} M_b S

where S is an invertible change-of-basis matrix; when S is orthogonal, the
change of basis is a rotation (possibly combined with a reflection).  No
matter what basis we express M in, det(M) remains constant, since

   det(S^{-1} M_b S) = det(S)^{-1} det(M_b) det(S) = det(M_b)

Therefore, we get a second equivalence relation on the set of n x n
matrices, based on whether we can rotate one matrix into another.  The
equivalence relation is defined by:

   A ~ B  iff  A = S^{-1} B S                   (2)

for _some_ orthogonal matrix S, which plays the role of the change of
basis.  If M_b is symmetric, the spectral theorem guarantees an orthogonal
S that makes S^{-1} M_b S diagonal; the columns of that S are the
orthonormal eigenvectors of M_b.  (For a general matrix, diagonalization
may require a non-orthogonal S, or may not be possible at all.)
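
Here's a small numpy sketch of that diagonalization claim, using a
symmetric matrix so the spectral theorem applies (the matrix itself is
just an example):

   import numpy as np

   M_b = np.array([[2.0, 1.0],
                   [1.0, 3.0]])   # symmetric

   # eigh returns the eigenvalues and an orthogonal matrix S whose
   # columns are orthonormal eigenvectors of M_b.
   eigenvalues, S = np.linalg.eigh(M_b)

   M_a = S.T @ M_b @ S            # S^{-1} = S^T for orthogonal S
   print(np.round(M_a, 10))       # diagonal; entries are the eigenvalues
   print(np.isclose(np.linalg.det(M_a), np.linalg.det(M_b)))   # True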



Big finale:

Relation (2) is finer than relation (1): A ~ B under (2) implies
det(A) = det(B), so every equivalence class of (2) sits inside a single
class of (1).  The natural map from the classes of (2) onto the classes of
(1) is therefore surjective (an epimorphism of sets), but it is not a
bijection: matrices with equal determinants need not be similar.  For
example, the 2x2 identity and the shear [[1, 1], [0, 1]] both have
determinant 1, yet the only matrix similar to the identity is the identity
itself, since S^{-1} I S = I for every S.  Restricting S to determinant +1
("proper" rotations) doesn't change this, so the two sets of equivalence
classes are not isomorphic.
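
The counterexample is easy to check numerically; this sketch uses a
random orthogonal S, but the conclusion holds for any invertible S:

   import numpy as np

   I = np.eye(2)
   shear = np.array([[1.0, 1.0],
                     [0.0, 1.0]])

   # Equal determinants: same class under relation (1).
   print(np.linalg.det(I), np.linalg.det(shear))      # 1.0 1.0

   # But S^{-1} I S = I for any invertible S, so no choice of S can
   # turn I into the shear: different classes under relation (2).
   S, _ = np.linalg.qr(np.random.randn(2, 2))         # random orthogonal S
   print(np.allclose(np.linalg.inv(S) @ I @ S, I))    # True, always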

What this is really saying is that if you view the columns of a matrix as
the edges of a parallelepiped, the (n-dimensional) volume of that
parallelepiped, which is |det(M)|, stays the same no matter what basis you
choose to express the matrix in.
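
For instance, in 3 dimensions the volume from the scalar triple product
matches |det(M)|, and an orthogonal change of basis leaves it alone
(again just a numpy sketch with an illustrative matrix):

   import numpy as np

   # Edge vectors of a parallelepiped, stored as the columns of M.
   M = np.array([[1.0, 0.0, 1.0],
                 [0.0, 2.0, 0.0],
                 [0.0, 0.0, 3.0]])

   # Volume via the scalar triple product |a . (b x c)| ...
   a, b, c = M[:, 0], M[:, 1], M[:, 2]
   vol = abs(np.dot(a, np.cross(b, c)))
   print(np.isclose(vol, abs(np.linalg.det(M))))      # True

   # ... and it survives an orthogonal change of basis.
   S, _ = np.linalg.qr(np.random.randn(3, 3))         # random orthogonal S
   M_rot = S.T @ M @ S
   print(np.isclose(abs(np.linalg.det(M_rot)), vol))  # True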


How accurate is all this?  I'm interested in the lingo as well as the
ideas.

Thx!
Pete


PS- Whether a change of basis is written S^{-1} M_b S or S M_b S^{-1} is a
matter of convention; it depends on how your favorite linear algebra
author defines his/her rotation matrices.

