public class RowMatrix extends Object implements DistributedMatrix, Logging
param: rows rows stored as an RDD[Vector]
param: nRows number of rows. A non-positive value means unknown, and then the number of rows will be determined by the number of records in the RDD rows.
param: nCols number of columns. A non-positive value means unknown, and then the number of columns will be determined by the size of the first row.
Constructor and Description 

RowMatrix(RDD<Vector> rows)
Alternative constructor leaving matrix dimensions to be determined automatically.

RowMatrix(RDD<Vector> rows,
long nRows,
int nCols) 
Modifier and Type  Method and Description 

CoordinateMatrix 
columnSimilarities()
Compute all cosine similarities between columns of this matrix using the brute-force
approach of computing normalized dot products.

CoordinateMatrix 
columnSimilarities(double threshold)
Compute similarities between columns of this matrix using a sampling approach.

MultivariateStatisticalSummary 
computeColumnSummaryStatistics()
Computes column-wise summary statistics.

Matrix 
computeCovariance()
Computes the covariance matrix, treating each row as an observation.

Matrix 
computeGramianMatrix()
Computes the Gramian matrix A^T A.
Matrix 
computePrincipalComponents(int k)
Computes the top k principal components.

SingularValueDecomposition<RowMatrix,Matrix> 
computeSVD(int k,
boolean computeU,
double rCond)
Computes singular value decomposition of this matrix.

RowMatrix 
multiply(Matrix B)
Multiply this matrix by a local matrix on the right.

long 
numCols()
Gets or computes the number of columns.

long 
numRows()
Gets or computes the number of rows.

RDD<Vector> 
rows() 
Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface Logging:
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public long numCols()
Specified by:
numCols in interface DistributedMatrix

public long numRows()
Specified by:
numRows in interface DistributedMatrix
public Matrix computeGramianMatrix()
Computes the Gramian matrix A^T A.
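As a minimal NumPy sketch (not the Spark API): the Gramian A^T A of a row-oriented matrix can be accumulated in a single pass as a sum of per-row outer products, which is why it is cheap when the number of columns n is small. The `gramian` helper below is illustrative, not an MLlib function.

```python
import numpy as np

# Hedged sketch: accumulate the Gramian A^T A one row at a time,
# mimicking a single pass over a row-distributed matrix.
def gramian(rows):
    n = len(rows[0])
    G = np.zeros((n, n))
    for a in rows:               # each entry plays the role of one RDD record
        a = np.asarray(a, dtype=float)
        G += np.outer(a, a)      # rank-1 update a a^T
    return G

rows = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
G = gramian(rows)
A = np.array(rows)
assert np.allclose(G, A.T @ A)   # agrees with the direct product
```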
public SingularValueDecomposition<RowMatrix,Matrix> computeSVD(int k, boolean computeU, double rCond)
At most k largest non-zero singular values and associated vectors are returned. If there are k such values, then the dimensions of the return will be:
- U is a RowMatrix of size m x k that satisfies U' * U = eye(k),
- s is a Vector of size k, holding the singular values in descending order,
- V is a Matrix of size n x k that satisfies V' * V = eye(k).
We assume n is smaller than m. The singular values and the right singular vectors are derived from the eigenvalues and the eigenvectors of the Gramian matrix A' * A. U, the matrix storing the left singular vectors, is computed via matrix multiplication as U = A * (V * S^-1), if requested by the user. The actual method to use is determined automatically based on the cost:
- If n is small (n < 100) or k is large compared with n (k > n / 2), we compute the Gramian matrix first and then compute its top eigenvalues and eigenvectors locally on the driver. This requires a single pass with O(n^2) storage on each executor and on the driver, and O(n^2 k) time on the driver.
- Otherwise, we compute (A' * A) * v in a distributed way and send it to ARPACK's DSAUPD to compute (A' * A)'s top eigenvalues and eigenvectors on the driver node. This requires O(k) passes, O(n) storage on each executor, and O(n k) storage on the driver.
Several internal parameters are set to default values. The reciprocal condition number rCond is set to 1e-9. All singular values smaller than rCond * sigma(0) are treated as zeros, where sigma(0) is the largest singular value. The maximum number of Arnoldi update iterations for ARPACK is set to 300 or k * 3, whichever is larger. The numerical tolerance for ARPACK's eigendecomposition is set to 1e-10.
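The Gramian route described above can be sketched in NumPy (not the Spark API): take the eigendecomposition of A' * A, the square roots of its eigenvalues are the singular values, its eigenvectors are V, and U is recovered as A * V * S^-1. The matrix below is an arbitrary example.

```python
import numpy as np

# Hedged sketch of computeSVD's local Gramian path:
# right singular vectors and singular values come from eig(A' * A),
# and U = A * V * S^-1 when the left singular vectors are requested.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]])  # m=4, n=2

G = A.T @ A                        # Gramian, n x n
evals, evecs = np.linalg.eigh(G)   # ascending eigenvalues of a symmetric matrix
order = np.argsort(evals)[::-1]    # reorder to descending
s = np.sqrt(evals[order])          # singular values
V = evecs[:, order]                # right singular vectors, n x k
U = A @ V / s                      # left singular vectors: A * V * S^-1

# Sanity checks against the properties stated above
assert np.allclose(U.T @ U, np.eye(2), atol=1e-8)   # U' * U = eye(k)
assert np.allclose(V.T @ V, np.eye(2), atol=1e-8)   # V' * V = eye(k)
assert np.allclose(U * s @ V.T, A)                  # A = U S V'
```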
Parameters:
k - number of leading singular values to keep (0 < k <= n). It might return less than k if there are numerically zero singular values or there are not enough Ritz values converged before the maximum number of Arnoldi update iterations is reached (in case that matrix A is ill-conditioned).
computeU - whether to compute U
rCond - the reciprocal condition number. All singular values smaller than rCond * sigma(0) are treated as zero, where sigma(0) is the largest singular value.

public Matrix computeCovariance()
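Treating each row as an observation, the covariance matrix can be formed from the Gramian and the column means, which suits a single distributed pass. A NumPy sketch (not the Spark API) of that identity:

```python
import numpy as np

# Hedged sketch: sample covariance with rows as observations, built from
# the Gramian and column means:
#   cov = (A'A - m * mean mean') / (m - 1)
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 8.0]])
m = A.shape[0]                    # number of observations (rows)
mean = A.mean(axis=0)             # column means
G = A.T @ A                       # Gramian A' * A
cov = (G - m * np.outer(mean, mean)) / (m - 1)

# Matches NumPy's reference implementation (rows as observations)
assert np.allclose(cov, np.cov(A, rowvar=False))
```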
public Matrix computePrincipalComponents(int k)
Parameters:
k - number of top principal components.

public MultivariateStatisticalSummary computeColumnSummaryStatistics()
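A NumPy sketch (not the Spark API) of the column-wise statistics a MultivariateStatisticalSummary exposes; the dictionary keys mirror the summary's accessor names (mean, variance, numNonzeros, max, min), and MLlib accumulates these in a single pass over the rows rather than materializing the matrix.

```python
import numpy as np

# Hedged sketch: per-column summary statistics computed eagerly.
A = np.array([[1.0, 0.0, 3.0], [4.0, 5.0, 0.0], [7.0, 8.0, 9.0]])

summary = {
    "mean": A.mean(axis=0),
    "variance": A.var(axis=0, ddof=1),          # sample variance
    "numNonzeros": np.count_nonzero(A, axis=0),
    "max": A.max(axis=0),
    "min": A.min(axis=0),
}
assert np.allclose(summary["mean"], [4.0, 13 / 3, 4.0])
```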
public RowMatrix multiply(Matrix B)
Multiply this matrix by a local matrix on the right.
Parameters:
B - a local matrix whose number of rows must match the number of columns of this matrix
Returns:
a RowMatrix representing the product, which preserves partitioning

public CoordinateMatrix columnSimilarities()
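The brute-force column-similarity computation can be sketched in NumPy (not the Spark API): scale every column to unit norm, then every cosine similarity is a plain dot product, i.e. an entry of N' * N. Only the strict upper triangle is kept, since the matrix is symmetric with a unit diagonal.

```python
import numpy as np

# Hedged sketch: all pairwise cosine similarities between columns
# via normalized dot products (the brute-force approach).
A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [2.0, 0.0, 2.0]])

norms = np.linalg.norm(A, axis=0)
N = A / norms                     # columns scaled to unit norm
S = N.T @ N                       # S[i, j] = cosine similarity of columns i, j

# Keep only entries above the diagonal, as a CoordinateMatrix would
pairs = {(i, j): S[i, j] for i in range(A.shape[1])
         for j in range(i + 1, A.shape[1])}
assert np.allclose(np.diag(S), 1.0)
```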
public CoordinateMatrix columnSimilarities(double threshold)
The threshold parameter is a trade-off knob between estimate quality and computational cost.
Setting a threshold of 0 guarantees deterministic correct results, but comes at exactly the same cost as the brute-force approach. Setting the threshold to positive values incurs strictly less computational cost than the brute-force approach; however, the similarities computed will be estimates.
The sampling guarantees relative-error correctness for those pairs of columns that have similarity greater than the given similarity threshold.
To describe the guarantee, we set some notation: Let A be the smallest in magnitude non-zero element of this matrix. Let B be the largest in magnitude non-zero element of this matrix. Let L be the maximum number of non-zeros per row.
For example, for {0,1} matrices: A=B=1. Another example, for the Netflix matrix: A=1, B=5.
For those column pairs that are above the threshold, the computed similarity is correct to within 20% relative error with probability at least 1 - (0.981)^(10/B).
The shuffle size is bounded by the *smaller* of the following two expressions:
- O(n log(n) L / (threshold * A))
- O(m L^2)
The latter is the cost of the brute-force approach, so for non-zero thresholds, the cost is always cheaper than the brute-force approach.
Parameters:
threshold - Set to 0 for deterministic guaranteed correctness. Similarities above this threshold are estimated with the cost vs estimate quality trade-off described above.
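To make the shuffle bound concrete, the sketch below evaluates both expressions for assumed example values. The constants hidden by O(.) are dropped, so the numbers compare orders of magnitude only; `shuffle_bounds` and its arguments are illustrative, not an MLlib API.

```python
import math

# Hedged sketch: compare the two shuffle-size bounds for a tall, sparse
# matrix. The effective cost is the smaller of the two.
def shuffle_bounds(m, n, L, A, threshold):
    sampled = n * math.log(n) * L / (threshold * A)  # O(n log(n) L / (threshold * A))
    brute = m * L ** 2                               # O(m L^2), the brute-force cost
    return sampled, brute

# Example: 10 million rows, 10 thousand columns, at most 50 non-zeros per row
sampled, brute = shuffle_bounds(m=10**7, n=10**4, L=50, A=1.0, threshold=0.5)
assert sampled < brute   # here the sampling bound is the smaller one
```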