I've got the basic elements in place. I can create new matrices of any size, determine when a matrix is a vector (and distinguish between row and column vectors), and determine when a matrix is a scalar. I can add, subtract, and multiply two matrices; negate a matrix, transpose it, or scalar-multiply it; and test two matrices for equality.
The next big operation to support is the determinant. However, it is composed of smaller operations that I would need to implement first. Also, there is more than one way to do it:
- The classic approach is Laplace (cofactor) expansion. This would be relatively easy to implement: I would need a function to compute cofactors, which in turn requires computing minors. The expansion is recursive and would be inefficient for large matrices (sketched after this list).
- A better idea would be LU decomposition. This would require me to write Gaussian row-operation functions, a row-reduction algorithm, and a function to take the product of the diagonal entries of the triangular factor (adjusting the sign for any row swaps); also sketched below.
- Also good would be QR decomposition, which would give me a good head start on eigendecomposition later on. This would require implementing either Gram-Schmidt orthonormalization, Householder reflections, or Givens rotations. Gram-Schmidt would probably not be too difficult since I already have vector dot products (but no vector 2-norms yet); it's sketched below as well.
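To make the trade-offs concrete, here's a rough sketch of the cofactor expansion. It works on plain lists of lists rather than my actual matrix class, and the function names are purely for illustration:

```python
# Minimal sketch of Laplace (cofactor) expansion along the first row.
# Matrices are plain lists of lists here; names are hypothetical, not my
# library's API.

def minor(m, i, j):
    """Submatrix of m with row i and column j removed."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]

def det_laplace(m):
    """Determinant by recursive cofactor expansion along row 0."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        cofactor = (-1) ** j * det_laplace(minor(m, 0, j))
        total += m[0][j] * cofactor
    return total
```

The recursion makes the cost grow factorially with the matrix size, which is why this is only really useful for small matrices.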
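And a sketch of the Gaussian-elimination route (essentially computing the U factor of an LU decomposition with partial pivoting), again on plain nested lists, just to map out what the row operations would need to do:

```python
# Minimal sketch of the determinant via Gaussian elimination, i.e. row-reducing
# to upper-triangular form and multiplying the diagonal entries. Standalone
# illustration only, not my matrix class's API.

def det_lu(m):
    a = [row[:] for row in m]        # work on a copy
    n = len(a)
    sign = 1.0
    for k in range(n):
        # Partial pivoting: bring up the row with the largest pivot.
        p = max(range(k, n), key=lambda r: abs(a[r][k]))
        if a[p][k] == 0.0:
            return 0.0               # singular matrix
        if p != k:
            a[k], a[p] = a[p], a[k]
            sign = -sign             # each row swap flips the sign
        # Eliminate entries below the pivot.
        for r in range(k + 1, n):
            factor = a[r][k] / a[k][k]
            for c in range(k, n):
                a[r][c] -= factor * a[k][c]
    det = sign
    for k in range(n):
        det *= a[k][k]               # product of the diagonal of U
    return det
```

This is O(n^3) instead of factorial, which is the whole appeal over cofactor expansion.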
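Finally, a sketch of classical Gram-Schmidt for QR, using nothing beyond dot products and the 2-norm derived from them, which is why the missing 2-norm is the only real prerequisite:

```python
# Minimal sketch of QR via classical Gram-Schmidt. The input is a list of
# columns (assumed linearly independent); Q's columns come out orthonormal
# and R collects the projection coefficients. Illustrative only.

import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def qr_gram_schmidt(a):
    """Return (q_columns, r) such that the original columns equal Q * R."""
    n = len(a)
    q = []
    r = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = a[j][:]
        for i in range(len(q)):
            r[i][j] = dot(q[i], a[j])
            v = [x - r[i][j] * y for x, y in zip(v, q[i])]
        r[j][j] = math.sqrt(dot(v, v))           # the 2-norm I still need to add
        q.append([x / r[j][j] for x in v])       # assumes r[j][j] != 0
    return q, r
```

Since R is triangular, the determinant would just be the product of its diagonal (up to sign), and having Q around is what makes this a good stepping stone toward eigendecomposition.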
I'm not planning to do any of the rest of this today, or even next week, because of the final cleanup work I need to do on my thesis source code. I have to clean some of the code up (adding a lot of sorely needed comments) and work out a few kinks. Ideally, this won't take me too long.