Unlike C/C++, SPL includes several matrix-specific operators:
Operator | Function   | Description
*^       | MMULT      | matrix multiply
\^       | MDIV       | matrix solve
/^       | MRDIV      | matrix right division
^^       | MPOW       | matrix power
'        | TRANSPOSE  | matrix transpose (postfix)
~^       | TRANSPOSEC | matrix conjugate transpose (postfix)
For example, if:
a = {{0, 1, 2},
     {1, 0, 1},
     {2, 2, 1}};
b = {1,
     2,
     3};
then:
c = a *^ b == {8,
               4,
               9}
a \^ c == {1,
           2,
           3}
b' == {{1, 2, 3}}    // 1 x 3 Array
b' *^ b == {14}      // 1 x 1 Array
sum(b*b) == 14       // Scalar
Consider the following over-determined system of equations:
x + 4y + 7z = 30
2x + 5y + 8z = 36
3x + 6y = 15
x + 2y + z = 2
To solve these equations, we define the following variables:
A = {{1, 4, 7},
     {2, 5, 8},
     {3, 6, 0},
     {1, 2, 1}}
x = {30,
     36,
     15,
      2}
The \^ operator solves the system using the method of least squares:
b = A \^ x
b == {-1.8,
       3.2,
       2.8}
y = A *^ b
y == {30.6,
      34.8,
      13.8,
       7.4}
norm(x-y) == 5.6921
The solution b minimizes the squared error, i.e. no other vector gives a smaller residual norm(x - A *^ b).
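As a minimal sketch of that property (b2 is a hypothetical candidate vector differing from b only in its last element), any other candidate produces a larger residual:
b2 = {-1.8, 3.2, 3.0}
norm(x - A *^ b2) == 6.0795    // larger than 5.6921, the residual of the least squares solution b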
In general, if A, b, and x are matrices such that A *^ x = b, then A \^ b returns the matrix x.
For A \^ b, where A is square, the system is solved using LU decomposition. This is usually numerically more stable than directly calculating the inverse matrix, i.e.
x = inv(A) *^ b.
If matrix A is not square, the system is considered a least squares problem and is solved by QR decomposition. The resulting matrix is the best solution in the least squares sense.
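As a minimal sketch (reusing a and c = a *^ b = {8, 4, 9} from the first example), both forms below return the same solution for a square system, but the first does so without explicitly forming the inverse:
a \^ c == {1, 2, 3}         // solved via LU decomposition
inv(a) *^ c == {1, 2, 3}    // same result, but computes the explicit inverse first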
The /^ operator performs matrix right division such that B /^ A numerically approximates B *^ inv(A). The actual algorithm calculates (A' \^ B')', where ' is the transpose operator.
A = {{0, 1, 2},
     {1, 0, 1},
     {2, 2, 1}}
x = {{3, 5, 4},
     {1, 9, 2},
     {3, 2, 1}}
B = x *^ A
B /^ A == {{3, 5, 4},
           {1, 9, 2},
           {3, 2, 1}}
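As a minimal sketch of that formula, applying the transpose-and-solve form directly to the same A and B reproduces the result of the right division:
(A' \^ B')' == {{3, 5, 4},
                {1, 9, 2},
                {3, 2, 1}}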
For a square, non-singular matrix A and a scalar p, A^^p raises A to the p-th power. If p is a positive integer, the result is computed by repeated matrix multiplication; if p is a negative integer, the same calculation is performed using the inverse of A.
If p is a real or complex scalar, the operation is performed using eigenvectors and eigenvalues.
If A is a scalar and p a square, non-singular matrix, the matrix power is also calculated using eigenvectors and eigenvalues.
If both A and p are matrices, an error occurs.
A = {{0, 1, 2},
     {1, 0, 1},
     {2, 2, 1}}
A^^3 == {{10, 11, 17},
         { 9,  8, 10},
         {18, 18, 19}}
A*^A*^A == {{10, 11, 17},
            { 9,  8, 10},
            {18, 18, 19}}
A^^-1.5 == {{1.8956-1.2288i, 1.8956-0.2288i, 2.3216+1.0049i},
            {1.3078+0.3562i, 1.3078-0.6438i, 1.6017+0.0637i},
            {2.6155+0.7124i, 2.6155+0.7124i, 3.2033-0.8726i}}
3^^A == {{13.2331, 12.8997, 16.0229},
         { 8.9891,  9.3225, 10.9444},
         {17.9783, 17.9783, 22.2222}}
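A negative integer exponent is not shown above; as a minimal sketch with the same (non-singular) A, it reduces to repeated multiplication by the inverse:
A^^-1 == inv(A)                 // single factor of the inverse
A^^-2 == inv(A) *^ inv(A)       // repeated multiplication using inv(A)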
The rows and columns of a matrix can be swapped with the postfix ' transpose operator.
A = {{0, 1, 2},
     {1, 0, 1},
     {2, 2, 1}}
A' == {{0, 1, 2},
       {1, 0, 2},
       {2, 1, 1}}
A~^ is equivalent to conj(A') and is identical to the ' operator for real matrices.
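As a minimal sketch (z is a hypothetical complex matrix; the i suffix for imaginary literals is assumed from the A^^-1.5 result above), the two postfix operators differ only in the sign of the imaginary parts:
z = {{1+2i, 3},
     {5i,   4}}
z' == {{1+2i, 5i},
       {3,    4}}      // transpose only
z~^ == {{1-2i, -5i},
        {3,     4}}    // conjugate transpose, same as conj(z')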