First and foremost, I've finally added some methods to convert between types. There are now methods for convert_to_number_matrix, convert_to_complex_matrix, and convert_to_pmc_matrix. Every type implements all of these methods, even when the conversion would be a no-op. This way, if you have a PMC that "does 'matrix'", you can cast it to whichever type you want to deal with without worrying too much about the cost. These methods always return a new matrix, so you can keep the matrix in its original form and also have a new copy to play with. In the case where the matrix is already in the target format, I return a clone.
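As a quick sketch of how that looks from NQP (untested, and the variable names and fill values here are just illustrative, but the conversion method names are the real ones):

INIT { pir::load_bytecode("pla_nqp.pbc"); }

my $P := PMCMatrix2D.new();
$P.fill(1.5, 2, 2);

# Each call returns a new matrix; $P itself is left alone.
my $N := $P.convert_to_number_matrix();   # NumMatrix2D copy of the data
my $Z := $P.convert_to_complex_matrix();  # ComplexMatrix2D copy of the data
my $Q := $P.convert_to_pmc_matrix();      # already a PMCMatrix2D, so this is just a clone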
Unfortunately, these conversion operations can be a little expensive when you're actually converting between types. The problem is that the data for the matrices is stored internally in a very dense format. For the Number and Complex matrices, the data is stored in the layout required by the BLAS library routines. For the number matrix, the values are packed together in a single large buffer. For complex matrices, the real and imaginary parts are likewise stored together, in alternating positions. Converting one to the other is not easy, since I have to allocate a completely new buffer and copy each element over individually. So too many conversion operations can get expensive quickly.
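As a rough illustration (the exact layout is a C-level internal detail, so read this loosely): a 2x2 number matrix holds its values in one flat buffer, something like [1.0, 2.0, 3.0, 4.0], while the same data as a complex matrix occupies [1.0, 0.0, 2.0, 0.0, 3.0, 0.0, 4.0, 0.0], with each real value followed by its imaginary part. Going in either direction means allocating the second buffer and walking the elements one at a time.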
Using these new conversion methods, I have updated some existing routines, like gemm(), the method which performs the GEMM operation from the BLAS library. You can now pass matrices of any type to this method, and it will perform conversions internally to make sure the types match. Here's a fun little NQP example that shows off some of the capabilities of this library today:
INIT { pir::load_bytecode("pla_nqp.pbc"); }

my $A := NumMatrix2D.new();
$A.initialize_from_args(2, 2, 1, 2.0, "3", 4);
$A.transpose();
pir::say("A: " ~ $A);

my $B := ComplexMatrix2D.new();
$B.initialize_from_array(2, 2, [1, 2.0, "3+3i", "7+5i"]);
$B.conjugate();
pir::say("B: " ~ $B);

my $C := PMCMatrix2D.new();
$C.fill(4.4, 2, 2);
pir::say("C: " ~ $C);

my $D := $B.gemm(0.5, $A, $B, "1+2i", $C);
pir::say("D: " ~ $D);
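For reference, the BLAS GEMM operation computes alpha*A*B + beta*C. Assuming the arguments above map to gemm(alpha, A, B, beta, C), that last call computes D = 0.5*(A*B) + (1+2i)*C, with the number matrix A converted to a complex matrix internally so the types match.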
You can try it yourself, or you can take my word for it that the result is correct. I verified it this morning in Octave, and the results are the same (though the Octave script to produce this result is considerably shorter).
PLA is finally starting to get the kind of basic functionality and test coverage that I've been hoping for. With a few more finishing touches on this base, I'm going to start adding new functionality like LAPACK bindings. Specifically, I'm hoping to add in support for some common matrix decompositions, matrix reductions, inverses and eigenvalues. I'm also hoping to get started on the module for Rakudo sometime soon.