Performs general least squares fitting of basis functions using singular value decomposition.
SVDDIV(A, y, tol)
(coef, coefstd) = SVDDIV(A, y, tol)
A - An input array, the basis function design matrix.

y - An input series, the Y output data.

tol - Optional. A real, the threshold at which to zero out singular values. If not specified, all singular values are used.
A series, the fitted coefficients.
(coef, coefstd) = SVDDIV(A, y, tol) returns the fitted coefficients and the standard deviation of the coefficients.
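As a sketch of how the two return values relate to the SVD factors, consider the following NumPy illustration. The coefficient variance estimate var(c[j]) = sum over k of (V[j,k]/w[k])^2 is the standard formula from Numerical Recipes and is assumed here for illustration; it is not a statement of DADiSP's exact implementation.

```python
import numpy as np

# Straight-line fit y = c0 + c1*x, illustrating both return values.
x = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([np.ones_like(x), x])   # design matrix: columns 1, x
y = np.array([1.1, 2.9, 5.2, 6.9])

# SVD-based solve: A = U W V^T, coef = V (1/W) U^T y.
U, w, Vt = np.linalg.svd(A, full_matrices=False)
coef = Vt.T @ ((U.T @ y) / w)

# Coefficient standard deviations (Numerical Recipes formula,
# assuming unit measurement errors): var(c[j]) = sum_k (V[j,k]/w[k])^2.
V = Vt.T
coefstd = np.sqrt(((V / w) ** 2).sum(axis=1))
```

For this data the fitted coefficients are approximately {1.07, 1.97}, i.e. y = 1.07 + 1.97*x in least squares terms.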
x = {1, -1, -1.5, -3};
y = {6, 2, 2.25, 6};
c1 = polyfit(y, x, 2);
c2 = svddiv(ravel(ones(length(x),1), x, x^2), y);
c1 == c2 == {3, 2, 1};
The first method uses POLYFIT to fit a quadratic polynomial to the data points. As shown, SVDDIV produces the same result. The fitted equation becomes:

y = 3 + 2*x + x^2
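The same computation can be sketched outside DADiSP. A minimal NumPy illustration, taking the data points as x = 1, -1, -1.5, -3, which the quadratic y = 3 + 2*x + x^2 passes through exactly:

```python
import numpy as np

x = np.array([1.0, -1.0, -1.5, -3.0])
y = np.array([6.0, 2.0, 2.25, 6.0])

# Design matrix: columns are the basis functions 1, x, x^2
# evaluated at the known x values.
A = np.column_stack([np.ones_like(x), x, x**2])

# SVD-based least squares solve: c = V (1/W) U^T y.
U, w, Vt = np.linalg.svd(A, full_matrices=False)
c = Vt.T @ ((U.T @ y) / w)
# c is approximately [3, 2, 1]
```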
In this example, we wish to find the coefficients of the relation:

y = c1*x + c2*sin(x) + c3*exp(x*x)

W1: gnorm(100,.01)
W2: 5*W1 + 3*sin(W1) - 2*exp(W1*W1)
W3: ravel(W1, sin(W1), exp(W1*W1))
W4: svddiv(W3, W2)

W4 == {5, 3, -2}, the coefficients of W2.

Notice that although each term is not necessarily linear, the equation as a whole is a linear combination of terms. SVDDIV solves the set of simultaneous equations to yield:

c1 = 5, c2 = 3, c3 = -2
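A comparable NumPy sketch for a linear combination of nonlinear basis functions, with illustrative coefficients 5, 3, -2 (the random data here is a stand-in for gnorm, not DADiSP's output):

```python
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.normal(0.0, 0.5, 100)               # stand-in for a gnorm series
w2 = 5*w1 + 3*np.sin(w1) - 2*np.exp(w1*w1)   # known coefficients 5, 3, -2

# Design matrix: one column per basis function, like the RAVEL call.
w3 = np.column_stack([w1, np.sin(w1), np.exp(w1*w1)])

# Least squares solve via SVD recovers the coefficients.
U, s, Vt = np.linalg.svd(w3, full_matrices=False)
w4 = Vt.T @ ((U.T @ w2) / s)
# w4 is approximately [5, 3, -2]
```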
Consider the problem of fitting a set of data points to a model that is a linear combination of functions, called basis functions:

y(x) = c1*X1(x) + c2*X2(x) + ... + cM*XM(x)
The basis functions themselves need not be linear; only the dependence on the coefficients ck is linear.

By employing the method of least squares, we wish to find the coefficients that minimize the sum of the squares of the differences between the data points and the fitted results:

err(c) = sum(i=1..N) (y[i] - (c1*X1(x[i]) + ... + cM*XM(x[i])))^2
where N is the number of data points and M is the number of basis functions. We define the NxM design matrix A, whose elements are:

A[i,j] = Xj(x[i]),  i = 1..N, j = 1..M

Each column of the design matrix is a basis function evaluated at the known x values.
We also define the output vector y of length N as:

y = {y[1], y[2], ..., y[N]}
With these matrix definitions, we restate the least squares criterion as finding the coefficient vector c that minimizes:

|A *^ c - y|^2

This quantity is minimized to 0 when:

A *^ c = y
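The minimization claim can be checked numerically. In this NumPy sketch, `lstsq` stands in for the least squares solve; the data is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 3))    # illustrative design matrix (N=10, M=3)
y = rng.normal(size=10)         # illustrative output vector

# Least squares solution minimizing |A c - y|^2.
c, *_ = np.linalg.lstsq(A, y, rcond=None)

def sq_resid(c_):
    """Squared residual |A c - y|^2."""
    return np.sum((A @ c_ - y) ** 2)

# The solution beats any perturbed coefficient vector.
for _ in range(5):
    assert sq_resid(c) <= sq_resid(c + 0.01 * rng.normal(size=3))
```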
SVDDIV solves the overdetermined set of simultaneous equations as specified by A and y. Given the matrix equation:
A *^ c = y
SVDDIV calculates:
c = V *^ (1/W) *^ (transpose(U) *^ y)
where
A = U *^ W *^ transpose(V)
as calculated by SVD. If TOL is specified, singular values less than TOL are eliminated and SVDDIV essentially calculates a least squares fit. A typical value for TOL is EPS. See [1] for further details.
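The TOL mechanism can be sketched in NumPy by zeroing the reciprocals of small singular values, which is what keeps the solve stable even for a rank-deficient design matrix. A hedged illustration of the technique, not DADiSP's implementation:

```python
import numpy as np

def svd_solve(A, y, tol=0.0):
    # Sketch of c = V *^ (1/W) *^ (transpose(U) *^ y),
    # with 1/w zeroed for singular values w <= tol.
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    w_inv = np.zeros_like(w)
    keep = w > tol
    w_inv[keep] = 1.0 / w[keep]
    return Vt.T @ (w_inv * (U.T @ y))

A = np.array([[1.0,  0.0, 1.0],
              [0.0,  1.0, 1.0],
              [1.0,  1.0, 2.0],
              [1.0, -1.0, 0.0]])   # third column = first + second (rank 2)
y = A @ np.array([1.0, 2.0, 3.0])

c = svd_solve(A, y, tol=1e-10)    # negligible singular values zeroed out
```

With the tiny singular value truncated, the result is the minimum-norm least squares solution; with tol=0 the division by a near-zero singular value would swamp the answer with round-off.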
Although computationally intensive, SVDDIV is particularly useful when the least squares set of equations is ill-conditioned and susceptible to round-off errors.
See LINFIT for a least squares method that offers faster QR decomposition, direct normal equations, or SVD, and for a discussion of the normal equations of the least squares problem.
See DADiSP/MatrixXL to significantly optimize SVDDIV.
[1] Press, Flannery, Teukolsky, Vetterling, Numerical Recipes in C, Cambridge University Press, 1988, pp. 407-552.