[vox-tech] linear algebra: equivalent matrices
Peter Jay Salzman
p at dirac.org
Wed Dec 7 12:56:08 PST 2005
On Wed 07 Dec 05, 3:05 PM, Aaron A. King <aaron.king at umich.edu> said:
> Not sure if I'm understanding your question aright, Peter, but I think you're
> asking if the equivalence relation (1) is isomorphic to the equivalence
> relation (2). That is not the case. If you view a matrix as defining a
> parallelepiped in n-space, then the determinant measures the volume thereof.
> Now it is easy to have two parallelepipeds which are not congruent but which
> have the same volume.
>
> The relation (2) defines a set of equivalence classes which is much finer than
> the set defined by (1). To put it another way, the determinant is only one
> of many matrix invariants. The full set of invariants under the relation (2)
> can be summarized in the Jordan form of the matrix.
>
> Apologies for pontificating, especially if I've answered a question you never
> asked.
>
> Cheers (and congrats on the Ph.D.!),
>
> Aaron
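Aaron's point can be checked numerically. Here's a sketch in Python with numpy (the matrices are my own examples, not from the thread): two matrices can share a determinant -- or even share det, trace, and eigenvalues -- and still fail to be similar.

```python
import numpy as np

# Same determinant (volume 6), different eigenvalues: similar matrices
# must share eigenvalues, so A and B cannot be similar.
A = np.diag([2.0, 3.0])
B = np.diag([1.0, 6.0])
print(np.linalg.det(A), np.linalg.det(B))            # 6.0 6.0
print(np.linalg.eigvals(A), np.linalg.eigvals(B))    # [2. 3.] vs [1. 6.]

# Even matching eigenvalues isn't enough.  I and J share trace,
# determinant, and eigenvalues (1, 1), but S^{-1} I S = I for every
# invertible S, so the only matrix similar to the identity is the
# identity itself.  Their Jordan forms differ: I is already diagonal,
# while J is a nontrivial Jordan block.
I = np.eye(2)
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(np.trace(I) == np.trace(J), np.linalg.det(I) == np.linalg.det(J))
```

This is what "the set of invariants is summarized in the Jordan form" buys you: det and trace alone can't distinguish I from J, but the Jordan form can.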
Don't apologize -- this is exactly what I wanted to know!
OK, so then it's not true that all matrices with the same determinant are
related by a rotation.  I was going to ask about trace too.  By the cyclic
property of the trace, Tr[XY] = Tr[YX],

    Tr[M_b]
    = Tr[S^{-1} M_a S]
    = Tr[M_a S S^{-1}]
    = Tr[M_a]

so trace, like determinant, is invariant under relation (2).  But two
matrices can share a trace without being similar, so the trace relation
defines coarser classes and can't be equivalent to relation (2) either.
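The cyclic property is easy to verify numerically; a quick numpy check (the random matrix and rotation are my own choices):

```python
import numpy as np

rng = np.random.default_rng(42)
M_a = rng.standard_normal((3, 3))

# QR factorization of a random matrix yields an orthogonal S
# (a rotation/reflection), as relation (2) requires.
S, _ = np.linalg.qr(rng.standard_normal((3, 3)))
M_b = np.linalg.inv(S) @ M_a @ S

# Trace survives the change of basis: Tr[S^{-1} M_a S] = Tr[M_a].
print(np.trace(M_a), np.trace(M_b))
```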
Thanks for chiming in!
Pete
> On Wednesday 07 December 2005 01:50 pm, Peter Jay Salzman wrote:
> | > Peter Jay Salzman wrote:
> | > >
> | > > Consider the set of all n x n square matrices.  The determinant of
> | > > M, det(M), induces an equivalence relation on that set, defined by:
> | > >
> | > > A ~ B iff det(A) == det(B) (1)
> | > >
> | > >
> | > > Now, like vectors, matrices are always expressed in a basis, whether we
> | > > explicitly say so or not.  So when we write the components of M, we
> | > > should really write M_b, where b represents the basis we chose to
> | > > express M in.
> | > > We can express M_b in a different basis, say M_a, by a rotation
> | > > operation:
> | > >
> | > > M_a = S^{-1} M_b S
> | > >
> | > > where S is an orthogonal "rotation matrix".  However, no matter what
> | > > basis we express M in, det(M) remains constant.  Therefore, we get an
> | > > equivalence relation on the set of n x n matrices based on whether we
> | > > can rotate one matrix into another.  The relation is defined by:
> | > >
> | > > A ~ B iff A = S^{-1} B S (2)
> | > >
> | > > for _some_ orthogonal matrix S, which determines the basis for M.  When
> | > > M_b is symmetric, there is a rotation matrix S that makes M_b diagonal;
> | > > its columns are the eigenvectors of M_b.
> | > >
> | > >
> | > >
> | > > Big finale:
> | > >
> | > > The equivalence classes defined by relation (1) are epimorphic to the
> | > > equivalence classes defined by relation (2). If we place a restriction
> | > > on S that it must have a determinant of +1 ("proper" rotations), then
> | > > the two sets of equivalence classes are isomorphic.
> | > >
> | > > What this is really saying is that, when viewed as the sides of a
> | > > parallelepiped, a matrix will always enclose the same volume no matter
> | > > what basis you choose to express it in.
> | > >
> | > >
> | > > How accurate is all this?  I'm interested in the lingo as well as the
> | > > ideas.
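One caveat on the diagonalization step in the original post: the eigenvector recipe is guaranteed to give an orthogonal (rotation) S only when M_b is symmetric (more generally, normal). A sketch of the symmetric case in Python with numpy (the matrix is my own example):

```python
import numpy as np

# For symmetric M_b, eigh returns an orthogonal eigenvector matrix S,
# so S^{-1} = S^T and S^T M_b S is diagonal.
M_b = np.array([[2.0, 1.0],
                [1.0, 3.0]])
eigenvalues, S = np.linalg.eigh(M_b)

D = S.T @ M_b @ S
print(np.round(D, 10))            # diagonal matrix of the eigenvalues
print(np.linalg.det(M_b), np.round(np.linalg.det(D), 10))  # det unchanged
```

For a non-symmetric matrix the eigenvector matrix need not be orthogonal, and a defective matrix (like the Jordan block above) has no diagonal form at all.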