java.lang.Object
org.apache.spark.mllib.linalg.distributed.RowMatrix
All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, DistributedMatrix, scala.Serializable

public class RowMatrix extends Object implements DistributedMatrix, org.apache.spark.internal.Logging
Represents a row-oriented distributed Matrix with no meaningful row indices.

param: rows rows stored as an RDD[Vector]
param: nRows number of rows. A non-positive value means unknown, in which case the number of rows will be determined by the number of records in the RDD rows.
param: nCols number of columns. A non-positive value means unknown, in which case the number of columns will be determined by the size of the first row.

  • Constructor Details

    • RowMatrix

      public RowMatrix(RDD<Vector> rows, long nRows, int nCols)
    • RowMatrix

      public RowMatrix(RDD<Vector> rows)
      Alternative constructor leaving matrix dimensions to be determined automatically.
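      For example, a small RowMatrix can be built from an RDD of dense vectors; this sketch assumes an existing SparkContext named sc:

        import org.apache.spark.mllib.linalg.Vectors
        import org.apache.spark.mllib.linalg.distributed.RowMatrix

        // sc is assumed to be an existing SparkContext
        val rows = sc.parallelize(Seq(
          Vectors.dense(1.0, 2.0, 3.0),
          Vectors.dense(4.0, 5.0, 6.0),
          Vectors.dense(7.0, 8.0, 9.0),
          Vectors.dense(10.0, 11.0, 12.0)
        ))

        // Dimensions left unspecified: they are determined from the data on demand
        val mat = new RowMatrix(rows)
        val m = mat.numRows()   // 4
        val n = mat.numCols()   // 3

        // Or pass the dimensions explicitly when they are already known
        val sized = new RowMatrix(rows, 4L, 3)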
  • Method Details

    • rows

      public RDD<Vector> rows()
    • numCols

      public long numCols()
      Gets or computes the number of columns.
      Specified by:
      numCols in interface DistributedMatrix
    • numRows

      public long numRows()
      Gets or computes the number of rows.
      Specified by:
      numRows in interface DistributedMatrix
    • computeGramianMatrix

      public Matrix computeGramianMatrix()
      Computes the Gramian matrix A^T A.

      Returns:
      the Gramian matrix A^T A as a local matrix of size n x n
      Note:
      This cannot be computed on matrices with more than 65535 columns.
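      A minimal sketch of the call (assuming an existing SparkContext sc); for this 2 x 2 example the Gramian is easy to verify by hand:

        import org.apache.spark.mllib.linalg.Vectors
        import org.apache.spark.mllib.linalg.distributed.RowMatrix

        // sc is assumed to be an existing SparkContext
        val mat = new RowMatrix(sc.parallelize(Seq(
          Vectors.dense(1.0, 0.0),
          Vectors.dense(0.0, 2.0)
        )))

        // A^T A is returned as a local n x n matrix; here it is [[1, 0], [0, 4]]
        val gram = mat.computeGramianMatrix()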
    • computeSVD

      public SingularValueDecomposition<RowMatrix,Matrix> computeSVD(int k, boolean computeU, double rCond)
      Computes singular value decomposition of this matrix. Denote this matrix by A (m x n). This will compute matrices U, S, V such that A ~= U * S * V', where S contains the leading k singular values, and U and V contain the corresponding singular vectors.

      At most k largest non-zero singular values and associated vectors are returned. If there are k such values, then the dimensions of the return will be:
      - U is a RowMatrix of size m x k that satisfies U' * U = eye(k),
      - s is a Vector of size k, holding the singular values in descending order,
      - V is a Matrix of size n x k that satisfies V' * V = eye(k).

      We assume n is smaller than m, though this is not strictly required. The singular values and the right singular vectors are derived from the eigenvalues and the eigenvectors of the Gramian matrix A' * A. U, the matrix storing the left singular vectors, is computed via matrix multiplication as U = A * (V * S^-1^), if requested by the user. The actual method to use is determined automatically based on the cost:
      - If n is small (n < 100) or k is large compared with n (k > n / 2), we compute the Gramian matrix first and then compute its top eigenvalues and eigenvectors locally on the driver. This requires a single pass with O(n^2^) storage on each executor and on the driver, and O(n^2^ k) time on the driver.
      - Otherwise, we compute (A' * A) * v in a distributed way and send it to ARPACK's DSAUPD to compute (A' * A)'s top eigenvalues and eigenvectors on the driver node. This requires O(k) passes, O(n) storage on each executor, and O(n k) storage on the driver.

      Several internal parameters are set to default values. The reciprocal condition number rCond is set to 1e-9. All singular values smaller than rCond * sigma(0) are treated as zeros, where sigma(0) is the largest singular value. The maximum number of Arnoldi update iterations for ARPACK is set to 300 or k * 3, whichever is larger. The numerical tolerance for ARPACK's eigen-decomposition is set to 1e-10.

      Parameters:
      k - number of leading singular values to keep (0 < k <= n). Fewer than k values may be returned if there are numerically zero singular values or if not enough Ritz values converge before the maximum number of Arnoldi update iterations is reached (in case the matrix A is ill-conditioned).
      computeU - whether to compute U
      rCond - the reciprocal condition number. All singular values smaller than rCond * sigma(0) are treated as zero, where sigma(0) is the largest singular value.
      Returns:
      SingularValueDecomposition(U, s, V). U = null if computeU = false.

      Note:
      The conditions that decide which method to use internally and the default parameters are subject to change.
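      A sketch of a typical call, assuming an existing SparkContext sc and arbitrary illustration data:

        import org.apache.spark.mllib.linalg.{Matrix, Vector, Vectors}
        import org.apache.spark.mllib.linalg.distributed.RowMatrix

        // sc is assumed to be an existing SparkContext
        val mat = new RowMatrix(sc.parallelize(Seq(
          Vectors.dense(3.0, 1.0, 1.0),
          Vectors.dense(-1.0, 3.0, 1.0),
          Vectors.dense(1.0, 1.0, 5.0),
          Vectors.dense(2.0, 0.0, 1.0)
        )))

        // Keep the top 2 singular values and also compute U
        val svd = mat.computeSVD(2, computeU = true, rCond = 1e-9)
        val U: RowMatrix = svd.U   // m x k distributed matrix (null if computeU = false)
        val s: Vector = svd.s      // k singular values in descending order
        val V: Matrix = svd.V      // n x k local matrix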
    • computeCovariance

      public Matrix computeCovariance()
      Computes the covariance matrix, treating each row as an observation.

      Returns:
      a local dense matrix of size n x n

      Note:
      This cannot be computed on matrices with more than 65535 columns.
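      For illustration (assuming an existing SparkContext sc), each row below is one observation of three variables:

        import org.apache.spark.mllib.linalg.Vectors
        import org.apache.spark.mllib.linalg.distributed.RowMatrix

        // sc is assumed to be an existing SparkContext
        val observations = sc.parallelize(Seq(
          Vectors.dense(1.0, 10.0, 100.0),
          Vectors.dense(2.0, 20.0, 200.0),
          Vectors.dense(3.0, 30.0, 300.0)
        ))

        // Local dense 3 x 3 covariance matrix of the columns
        val cov = new RowMatrix(observations).computeCovariance()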
    • computePrincipalComponentsAndExplainedVariance

      public scala.Tuple2<Matrix,Vector> computePrincipalComponentsAndExplainedVariance(int k)
      Computes the top k principal components and a vector of proportions of variance explained by each principal component. Rows correspond to observations and columns correspond to variables. The principal components are stored in a local matrix of size n-by-k. Each column corresponds to one principal component, and the columns are in descending order of component variance. The row data do not need to be "centered" first; it is not necessary for the mean of each column to be 0. However, if the number of columns is more than 65535, then the data must be "centered".

      Parameters:
      k - number of top principal components.
      Returns:
      a matrix of size n-by-k, whose columns are principal components, and a vector of values which indicate how much variance each principal component explains
    • computePrincipalComponents

      public Matrix computePrincipalComponents(int k)
      Computes the top k principal components only.

      Parameters:
      k - number of top principal components.
      Returns:
      a matrix of size n-by-k, whose columns are principal components
      See Also:
      computePrincipalComponentsAndExplainedVariance(int)
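      A sketch showing both PCA variants and a common follow-up projection, assuming an existing SparkContext sc and arbitrary example data:

        import org.apache.spark.mllib.linalg.Vectors
        import org.apache.spark.mllib.linalg.distributed.RowMatrix

        // sc is assumed to be an existing SparkContext
        val mat = new RowMatrix(sc.parallelize(Seq(
          Vectors.dense(1.0, 2.0, 3.0),
          Vectors.dense(2.0, 4.0, 5.0),
          Vectors.dense(4.0, 3.0, 1.0),
          Vectors.dense(5.0, 1.0, 2.0)
        )))

        // Principal components together with the proportion of variance each explains
        val (pc, explained) = mat.computePrincipalComponentsAndExplainedVariance(2)

        // Principal components only (n x k local matrix)
        val pcOnly = mat.computePrincipalComponents(2)

        // Project the rows into the 2-dimensional principal subspace
        val projected = mat.multiply(pcOnly)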
    • computeColumnSummaryStatistics

      public MultivariateStatisticalSummary computeColumnSummaryStatistics()
      Computes column-wise summary statistics.
      Returns:
      a MultivariateStatisticalSummary of the columns of this matrix (mean, variance, count, number of nonzeros, max, min, and L1/L2 norms)
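      For example (assuming an existing SparkContext sc), the returned summary exposes per-column statistics such as the mean and variance:

        import org.apache.spark.mllib.linalg.Vectors
        import org.apache.spark.mllib.linalg.distributed.RowMatrix

        // sc is assumed to be an existing SparkContext
        val mat = new RowMatrix(sc.parallelize(Seq(
          Vectors.dense(1.0, 10.0),
          Vectors.dense(3.0, 30.0),
          Vectors.dense(5.0, 50.0)
        )))

        val summary = mat.computeColumnSummaryStatistics()
        summary.mean          // column means, here [3.0, 30.0]
        summary.variance      // column variances
        summary.numNonzeros   // number of non-zero entries per column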
    • multiply

      public RowMatrix multiply(Matrix B)
      Multiply this matrix by a local matrix on the right.

      Parameters:
      B - a local matrix whose number of rows must match the number of columns of this matrix
      Returns:
      a RowMatrix representing the product, which preserves partitioning
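      A small sketch, assuming an existing SparkContext sc; B is built with Matrices.dense, which takes values in column-major order:

        import org.apache.spark.mllib.linalg.{Matrices, Vectors}
        import org.apache.spark.mllib.linalg.distributed.RowMatrix

        // sc is assumed to be an existing SparkContext; mat is 2 x 3
        val mat = new RowMatrix(sc.parallelize(Seq(
          Vectors.dense(1.0, 2.0, 3.0),
          Vectors.dense(4.0, 5.0, 6.0)
        )))

        // Local 3 x 2 matrix with columns (7, 8, 9) and (10, 11, 12)
        val B = Matrices.dense(3, 2, Array(7.0, 8.0, 9.0, 10.0, 11.0, 12.0))

        // Distributed 2 x 2 product; the partitioning of mat is preserved
        val product = mat.multiply(B)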
    • columnSimilarities

      public CoordinateMatrix columnSimilarities()
      Compute all cosine similarities between columns of this matrix using the brute-force approach of computing normalized dot products.

      Returns:
      An n x n sparse upper-triangular matrix of cosine similarities between columns of this matrix.
    • columnSimilarities

      public CoordinateMatrix columnSimilarities(double threshold)
      Compute similarities between columns of this matrix using a sampling approach.

      The threshold parameter is a trade-off knob between estimate quality and computational cost.

      Setting a threshold of 0 guarantees deterministic correct results, but comes at exactly the same cost as the brute-force approach. Setting the threshold to positive values incurs strictly less computational cost than the brute-force approach; however, the similarities computed will be estimates.

      The sampling guarantees relative-error correctness for those pairs of columns that have similarity greater than the given similarity threshold.

      To describe the guarantee, we set some notation: Let A be the smallest in magnitude non-zero element of this matrix. Let B be the largest in magnitude non-zero element of this matrix. Let L be the maximum number of non-zeros per row.

      For example, for {0,1} matrices: A=B=1. Another example: for the Netflix matrix, A=1, B=5.

      For those column pairs that are above the threshold, the computed similarity is correct to within 20% relative error with probability at least 1 - (0.981)^10/B^.

      The shuffle size is bounded by the *smaller* of the following two expressions:

      - O(n log(n) L / (threshold * A))
      - O(m L^2^)

      The latter is the cost of the brute-force approach, so for non-zero thresholds, the cost is always cheaper than the brute-force approach.

      Parameters:
      threshold - Set to 0 for deterministic guaranteed correctness. Similarities above this threshold are estimated with the cost vs estimate quality trade-off described above.
      Returns:
      An n x n sparse upper-triangular matrix of cosine similarities between columns of this matrix.
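      A sketch of both overloads on a small example matrix, assuming an existing SparkContext sc; the threshold 0.1 is an arbitrary illustration:

        import org.apache.spark.mllib.linalg.Vectors
        import org.apache.spark.mllib.linalg.distributed.RowMatrix

        // sc is assumed to be an existing SparkContext
        val mat = new RowMatrix(sc.parallelize(Seq(
          Vectors.dense(1.0, 1.0, 0.0),
          Vectors.dense(1.0, 0.0, 1.0),
          Vectors.dense(0.0, 1.0, 1.0)
        )))

        // Exact cosine similarities via the brute-force approach
        val exact = mat.columnSimilarities()

        // Sampling-based estimates; pairs with similarity above 0.1 carry the
        // relative-error guarantee described above
        val approx = mat.columnSimilarities(0.1)

        // Entries of the resulting CoordinateMatrix are (i, j, similarity) with i < j
        exact.entries.collect()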
    • tallSkinnyQR

      public QRDecomposition<RowMatrix,Matrix> tallSkinnyQR(boolean computeQ)
      Compute QR decomposition for RowMatrix. The implementation is designed to optimize the QR decomposition (factorization) for a RowMatrix of tall-and-skinny shape. Reference: Paul G. Constantine, David F. Gleich. "Tall and skinny QR factorizations in MapReduce architectures".

      Parameters:
      computeQ - whether to compute Q
      Returns:
      QRDecomposition(Q, R), Q = null if computeQ = false.
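      A sketch of a typical call on a tall-and-skinny example, assuming an existing SparkContext sc:

        import org.apache.spark.mllib.linalg.{Matrix, Vectors}
        import org.apache.spark.mllib.linalg.distributed.RowMatrix

        // sc is assumed to be an existing SparkContext; many rows, few columns
        val mat = new RowMatrix(sc.parallelize(Seq(
          Vectors.dense(1.0, 2.0),
          Vectors.dense(3.0, 4.0),
          Vectors.dense(5.0, 6.0),
          Vectors.dense(7.0, 8.0)
        )))

        val qr = mat.tallSkinnyQR(computeQ = true)
        val Q: RowMatrix = qr.Q   // m x n distributed matrix with orthonormal columns (null if computeQ = false)
        val R: Matrix = qr.R      // n x n upper-triangular local matrix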