Class MLUtils

Object
org.apache.spark.mllib.util.MLUtils

public class MLUtils extends Object
Helper methods to load, save, and pre-process data used in MLlib.
  • Constructor Details

    • MLUtils

      public MLUtils()
  • Method Details

    • convertVectorColumnsToML

      public static Dataset<Row> convertVectorColumnsToML(Dataset<?> dataset, String... cols)
      Converts vector columns in an input Dataset from the old org.apache.spark.mllib.linalg.Vector type to the new org.apache.spark.ml.linalg.Vector type under the spark.ml package.
      Parameters:
      dataset - input dataset
      cols - a list of vector columns to be converted. New vector columns will be ignored. If unspecified, all old vector columns will be converted except nested ones.
      Returns:
      the input DataFrame with old vector columns converted to the new vector type
    • convertVectorColumnsFromML

      public static Dataset<Row> convertVectorColumnsFromML(Dataset<?> dataset, String... cols)
      Converts vector columns in an input Dataset from the new org.apache.spark.ml.linalg.Vector type under the spark.ml package back to the old org.apache.spark.mllib.linalg.Vector type.
      Parameters:
      dataset - input dataset
      cols - a list of vector columns to be converted. Old vector columns will be ignored. If unspecified, all new vector columns will be converted except nested ones.
      Returns:
      the input DataFrame with new vector columns converted to the old vector type
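      Example: a minimal Java round trip through both conversions. The column name "features" and the single-row sample data are illustrative assumptions, not part of the API.

      import java.util.Arrays;

      import org.apache.spark.mllib.linalg.VectorUDT;
      import org.apache.spark.mllib.linalg.Vectors;
      import org.apache.spark.mllib.util.MLUtils;
      import org.apache.spark.sql.Dataset;
      import org.apache.spark.sql.Row;
      import org.apache.spark.sql.RowFactory;
      import org.apache.spark.sql.SparkSession;
      import org.apache.spark.sql.types.Metadata;
      import org.apache.spark.sql.types.StructField;
      import org.apache.spark.sql.types.StructType;

      public class VectorConversionExample {
        public static void main(String[] args) {
          SparkSession spark = SparkSession.builder().appName("VectorConversion").getOrCreate();

          // A DataFrame with one old-style (mllib) vector column named "features".
          StructType schema = new StructType(new StructField[]{
            new StructField("features", new VectorUDT(), false, Metadata.empty())
          });
          Dataset<Row> df = spark.createDataFrame(
            Arrays.asList(RowFactory.create(Vectors.dense(1.0, 2.0, 3.0))), schema);

          // Old mllib vectors -> new ml vectors, then back again.
          Dataset<Row> mlDf = MLUtils.convertVectorColumnsToML(df, "features");
          Dataset<Row> mllibDf = MLUtils.convertVectorColumnsFromML(mlDf, "features");

          mlDf.printSchema();     // "features" now uses the spark.ml vector UDT
          mllibDf.printSchema();  // "features" is back to the spark.mllib vector UDT
          spark.stop();
        }
      }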
    • convertMatrixColumnsToML

      public static Dataset<Row> convertMatrixColumnsToML(Dataset<?> dataset, String... cols)
      Converts matrix columns in an input Dataset from the old org.apache.spark.mllib.linalg.Matrix type to the new org.apache.spark.ml.linalg.Matrix type under the spark.ml package.
      Parameters:
      dataset - input dataset
      cols - a list of matrix columns to be converted. New matrix columns will be ignored. If unspecified, all old matrix columns will be converted except nested ones.
      Returns:
      the input DataFrame with old matrix columns converted to the new matrix type
    • convertMatrixColumnsFromML

      public static Dataset<Row> convertMatrixColumnsFromML(Dataset<?> dataset, String... cols)
      Converts matrix columns in an input Dataset from the new org.apache.spark.ml.linalg.Matrix type under the spark.ml package back to the old org.apache.spark.mllib.linalg.Matrix type.
      Parameters:
      dataset - input dataset
      cols - a list of matrix columns to be converted. Old matrix columns will be ignored. If unspecified, all new matrix columns will be converted except nested ones.
      Returns:
      the input DataFrame with new matrix columns converted to the old matrix type
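      Example: the matrix analogue of the vector round trip above; the column name "m" and the sample matrix are illustrative assumptions.

      import java.util.Arrays;

      import org.apache.spark.mllib.linalg.Matrices;
      import org.apache.spark.mllib.linalg.MatrixUDT;
      import org.apache.spark.mllib.util.MLUtils;
      import org.apache.spark.sql.Dataset;
      import org.apache.spark.sql.Row;
      import org.apache.spark.sql.RowFactory;
      import org.apache.spark.sql.SparkSession;
      import org.apache.spark.sql.types.Metadata;
      import org.apache.spark.sql.types.StructField;
      import org.apache.spark.sql.types.StructType;

      public class MatrixConversionExample {
        public static void main(String[] args) {
          SparkSession spark = SparkSession.builder().appName("MatrixConversion").getOrCreate();

          // A DataFrame with one old-style (mllib) matrix column named "m".
          StructType schema = new StructType(new StructField[]{
            new StructField("m", new MatrixUDT(), false, Metadata.empty())
          });
          Dataset<Row> df = spark.createDataFrame(
            Arrays.asList(RowFactory.create(Matrices.dense(2, 2, new double[]{1, 2, 3, 4}))),
            schema);

          // Old mllib matrices -> new ml matrices, then back again.
          Dataset<Row> mlDf = MLUtils.convertMatrixColumnsToML(df, "m");
          Dataset<Row> mllibDf = MLUtils.convertMatrixColumnsFromML(mlDf, "m");

          spark.stop();
        }
      }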
    • loadLibSVMFile

      public static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc, String path, int numFeatures, int minPartitions)
      Loads labeled data in the LIBSVM format into an RDD[LabeledPoint]. The LIBSVM format is a text-based format used by LIBSVM and LIBLINEAR. Each line represents a labeled sparse feature vector using the following format:
      label index1:value1 index2:value2 ...
      where the indices are one-based and in ascending order. This method parses each line into an org.apache.spark.mllib.regression.LabeledPoint, converting the feature indices to zero-based.
      Parameters:
      sc - Spark context
      path - file or directory path in any Hadoop-supported file system URI
      numFeatures - number of features, which will be determined from the input data if a nonpositive value is given. This is useful when the dataset is already split into multiple files and you want to load them separately, because some features may not be present in certain files, which would lead to inconsistent feature dimensions.
      minPartitions - min number of partitions
      Returns:
      labeled data stored as an RDD[LabeledPoint]
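      Example: a short Java sketch of the full-signature loader. The input path is an assumed example path.

      import org.apache.spark.SparkConf;
      import org.apache.spark.SparkContext;
      import org.apache.spark.mllib.regression.LabeledPoint;
      import org.apache.spark.mllib.util.MLUtils;
      import org.apache.spark.rdd.RDD;

      public class LoadLibSVMExample {
        public static void main(String[] args) {
          SparkContext sc = new SparkContext(new SparkConf().setAppName("LoadLibSVM"));

          // Nonpositive numFeatures: let Spark determine the feature dimension from the data.
          RDD<LabeledPoint> data =
              MLUtils.loadLibSVMFile(sc, "data/sample_libsvm_data.txt", -1, 4);

          System.out.println("Loaded " + data.count() + " labeled points");
          sc.stop();
        }
      }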
    • loadLibSVMFile

      public static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc, String path, int numFeatures)
      Loads labeled data in the LIBSVM format into an RDD[LabeledPoint], with the default number of partitions.
      Parameters:
      sc - Spark context
      path - file or directory path in any Hadoop-supported file system URI
      numFeatures - number of features, which will be determined from the input data if a nonpositive value is given
      Returns:
      labeled data stored as an RDD[LabeledPoint]
    • loadLibSVMFile

      public static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc, String path)
      Loads binary labeled data in the LIBSVM format into an RDD[LabeledPoint], with number of features determined automatically and the default number of partitions.
      Parameters:
      sc - Spark context
      path - file or directory path in any Hadoop-supported file system URI
      Returns:
      labeled data stored as an RDD[LabeledPoint]
    • saveAsLibSVMFile

      public static void saveAsLibSVMFile(RDD<LabeledPoint> data, String dir)
      Save labeled data in LIBSVM format.
      Parameters:
      data - an RDD of LabeledPoint to be saved
      dir - directory to save the data
      See Also:
      • org.apache.spark.mllib.util.MLUtils.loadLibSVMFile
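      Example: saving a small RDD in LIBSVM format from Java; the output directory is an assumed example path.

      import java.util.Arrays;

      import org.apache.spark.api.java.JavaSparkContext;
      import org.apache.spark.mllib.linalg.Vectors;
      import org.apache.spark.mllib.regression.LabeledPoint;
      import org.apache.spark.mllib.util.MLUtils;

      public class SaveLibSVMExample {
        public static void main(String[] args) {
          JavaSparkContext jsc = new JavaSparkContext("local[*]", "SaveLibSVM");

          LabeledPoint p1 = new LabeledPoint(1.0, Vectors.dense(0.0, 1.5));
          LabeledPoint p2 = new LabeledPoint(0.0, Vectors.sparse(2, new int[]{1}, new double[]{2.5}));

          // Writes one "label index1:value1 ..." line per point, with one-based indices.
          MLUtils.saveAsLibSVMFile(jsc.parallelize(Arrays.asList(p1, p2)).rdd(), "out/libsvm-data");

          jsc.stop();
        }
      }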
    • loadVectors

      public static RDD<Vector> loadVectors(SparkContext sc, String path, int minPartitions)
      Loads vectors saved using RDD[Vector].saveAsTextFile.
      Parameters:
      sc - Spark context
      path - file or directory path in any Hadoop-supported file system URI
      minPartitions - min number of partitions
      Returns:
      vectors stored as an RDD[Vector]
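      Example: reloading vectors previously written with saveAsTextFile; the input path is an assumed example path.

      import org.apache.spark.SparkConf;
      import org.apache.spark.SparkContext;
      import org.apache.spark.mllib.linalg.Vector;
      import org.apache.spark.mllib.util.MLUtils;
      import org.apache.spark.rdd.RDD;

      public class LoadVectorsExample {
        public static void main(String[] args) {
          SparkContext sc = new SparkContext(new SparkConf().setAppName("LoadVectors"));

          // Each input line is a textual vector such as "[1.0,2.0,3.0]",
          // as produced by RDD[Vector].saveAsTextFile.
          RDD<Vector> vectors = MLUtils.loadVectors(sc, "out/vectors", 4);

          System.out.println("Loaded " + vectors.count() + " vectors");
          sc.stop();
        }
      }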
    • loadVectors

      public static RDD<Vector> loadVectors(SparkContext sc, String path)
      Loads vectors saved using RDD[Vector].saveAsTextFile with the default number of partitions.
      Parameters:
      sc - Spark context
      path - file or directory path in any Hadoop-supported file system URI
      Returns:
      vectors stored as an RDD[Vector]
    • loadLabeledPoints

      public static RDD<LabeledPoint> loadLabeledPoints(SparkContext sc, String path, int minPartitions)
      Loads labeled points saved using RDD[LabeledPoint].saveAsTextFile.
      Parameters:
      sc - Spark context
      path - file or directory path in any Hadoop-supported file system URI
      minPartitions - min number of partitions
      Returns:
      labeled points stored as an RDD[LabeledPoint]
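      Example: reloading labeled points previously written with saveAsTextFile; the input path is an assumed example path.

      import org.apache.spark.SparkConf;
      import org.apache.spark.SparkContext;
      import org.apache.spark.mllib.regression.LabeledPoint;
      import org.apache.spark.mllib.util.MLUtils;
      import org.apache.spark.rdd.RDD;

      public class LoadLabeledPointsExample {
        public static void main(String[] args) {
          SparkContext sc = new SparkContext(new SparkConf().setAppName("LoadLabeledPoints"));

          // Each input line is a textual labeled point such as "(1.0,[0.5,2.0])",
          // as produced by RDD[LabeledPoint].saveAsTextFile.
          RDD<LabeledPoint> points = MLUtils.loadLabeledPoints(sc, "out/labeled-points", 4);

          System.out.println("Loaded " + points.count() + " labeled points");
          sc.stop();
        }
      }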
    • loadLabeledPoints

      public static RDD<LabeledPoint> loadLabeledPoints(SparkContext sc, String dir)
      Loads labeled points saved using RDD[LabeledPoint].saveAsTextFile with the default number of partitions.
      Parameters:
      sc - Spark context
      dir - file or directory path in any Hadoop-supported file system URI
      Returns:
      labeled points stored as an RDD[LabeledPoint]
    • kFold

      public static <T> scala.Tuple2<RDD<T>,RDD<T>>[] kFold(RDD<T> rdd, int numFolds, int seed, scala.reflect.ClassTag<T> evidence$1)
      Returns a k-element array of pairs of RDDs, where k = numFolds. In each pair, the first element contains the training data (the complement of the validation data) and the second element contains the validation data, a unique 1/k-th of the full data.
      Parameters:
      rdd - input RDD to be split into folds
      numFolds - number of folds
      seed - random seed
      evidence$1 - class tag for the element type T (supplied implicitly in Scala)
      Returns:
      an array of (training, validation) RDD pairs
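      Example: calling the RDD-based kFold from Java. Scala supplies the evidence$1 ClassTag implicitly; from Java it must be passed explicitly. The input path is an assumed example path.

      import org.apache.spark.SparkConf;
      import org.apache.spark.SparkContext;
      import org.apache.spark.mllib.regression.LabeledPoint;
      import org.apache.spark.mllib.util.MLUtils;
      import org.apache.spark.rdd.RDD;

      import scala.Tuple2;
      import scala.reflect.ClassTag;
      import scala.reflect.ClassTag$;

      public class KFoldExample {
        public static void main(String[] args) {
          SparkContext sc = new SparkContext(new SparkConf().setAppName("KFold"));
          RDD<LabeledPoint> data = MLUtils.loadLibSVMFile(sc, "data/sample_libsvm_data.txt");

          // The explicit ClassTag stands in for Scala's implicit evidence parameter.
          ClassTag<LabeledPoint> tag = ClassTag$.MODULE$.apply(LabeledPoint.class);
          Tuple2<RDD<LabeledPoint>, RDD<LabeledPoint>>[] folds = MLUtils.kFold(data, 3, 42, tag);

          for (Tuple2<RDD<LabeledPoint>, RDD<LabeledPoint>> fold : folds) {
            System.out.println("train=" + fold._1().count() + " validation=" + fold._2().count());
          }
          sc.stop();
        }
      }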
    • kFold

      public static <T> scala.Tuple2<RDD<T>,RDD<T>>[] kFold(RDD<T> rdd, int numFolds, long seed, scala.reflect.ClassTag<T> evidence$2)
      Version of kFold() taking a Long seed.
      Parameters:
      rdd - input RDD to be split into folds
      numFolds - number of folds
      seed - random seed
      evidence$2 - class tag for the element type T (supplied implicitly in Scala)
      Returns:
      an array of (training, validation) RDD pairs
    • kFold

      public static scala.Tuple2<RDD<Row>,RDD<Row>>[] kFold(Dataset<Row> df, int numFolds, String foldColName)
      Version of kFold() that splits a DataFrame by a user-supplied fold column instead of by random sampling.
      Parameters:
      df - input DataFrame
      numFolds - number of folds
      foldColName - name of an integer column holding each row's fold ID, expected to be in the range [0, numFolds)
      Returns:
      an array of (training, validation) pairs of RDD[Row]
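      Example: a sketch of the fold-column variant, assuming (as described above) that the fold column holds integer fold IDs in [0, numFolds). The column name "fold" and the generated data are illustrative choices.

      import org.apache.spark.mllib.util.MLUtils;
      import org.apache.spark.rdd.RDD;
      import org.apache.spark.sql.Dataset;
      import org.apache.spark.sql.Row;
      import org.apache.spark.sql.SparkSession;

      import scala.Tuple2;

      import static org.apache.spark.sql.functions.expr;

      public class KFoldByColumnExample {
        public static void main(String[] args) {
          SparkSession spark = SparkSession.builder().appName("KFoldByColumn").getOrCreate();

          // Assign each row a deterministic fold ID in [0, 3).
          Dataset<Row> df = spark.range(100).withColumn("fold", expr("cast(id % 3 as int)"));

          Tuple2<RDD<Row>, RDD<Row>>[] folds = MLUtils.kFold(df, 3, "fold");
          System.out.println("Number of folds: " + folds.length);
          spark.stop();
        }
      }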
    • appendBias

      public static Vector appendBias(Vector vector)
      Returns a new vector with 1.0 (bias) appended to the input vector.
      Parameters:
      vector - input vector
      Returns:
      a new vector with 1.0 appended as its last element
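      Example: a minimal, self-contained illustration (no Spark context needed).

      import org.apache.spark.mllib.linalg.Vector;
      import org.apache.spark.mllib.linalg.Vectors;
      import org.apache.spark.mllib.util.MLUtils;

      public class AppendBiasExample {
        public static void main(String[] args) {
          Vector v = Vectors.dense(1.0, 2.0);

          // Appends a constant 1.0 as the last element: [1.0, 2.0] -> [1.0, 2.0, 1.0]
          Vector withBias = MLUtils.appendBias(v);
          System.out.println(withBias);  // [1.0,2.0,1.0]
        }
      }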
    • convertVectorColumnsToML

      public static Dataset<Row> convertVectorColumnsToML(Dataset<?> dataset, scala.collection.Seq<String> cols)
      Converts vector columns in an input Dataset from the old org.apache.spark.mllib.linalg.Vector type to the new org.apache.spark.ml.linalg.Vector type under the spark.ml package.
      Parameters:
      dataset - input dataset
      cols - a list of vector columns to be converted. New vector columns will be ignored. If unspecified, all old vector columns will be converted except nested ones.
      Returns:
      the input DataFrame with old vector columns converted to the new vector type
    • convertVectorColumnsFromML

      public static Dataset<Row> convertVectorColumnsFromML(Dataset<?> dataset, scala.collection.Seq<String> cols)
      Converts vector columns in an input Dataset from the new org.apache.spark.ml.linalg.Vector type under the spark.ml package back to the old org.apache.spark.mllib.linalg.Vector type.
      Parameters:
      dataset - input dataset
      cols - a list of vector columns to be converted. Old vector columns will be ignored. If unspecified, all new vector columns will be converted except nested ones.
      Returns:
      the input DataFrame with new vector columns converted to the old vector type
    • convertMatrixColumnsToML

      public static Dataset<Row> convertMatrixColumnsToML(Dataset<?> dataset, scala.collection.Seq<String> cols)
      Converts matrix columns in an input Dataset from the old org.apache.spark.mllib.linalg.Matrix type to the new org.apache.spark.ml.linalg.Matrix type under the spark.ml package.
      Parameters:
      dataset - input dataset
      cols - a list of matrix columns to be converted. New matrix columns will be ignored. If unspecified, all old matrix columns will be converted except nested ones.
      Returns:
      the input DataFrame with old matrix columns converted to the new matrix type
    • convertMatrixColumnsFromML

      public static Dataset<Row> convertMatrixColumnsFromML(Dataset<?> dataset, scala.collection.Seq<String> cols)
      Converts matrix columns in an input Dataset from the new org.apache.spark.ml.linalg.Matrix type under the spark.ml package back to the old org.apache.spark.mllib.linalg.Matrix type.
      Parameters:
      dataset - input dataset
      cols - a list of matrix columns to be converted. Old matrix columns will be ignored. If unspecified, all new matrix columns will be converted except nested ones.
      Returns:
      the input DataFrame with new matrix columns converted to the old matrix type
    • org$apache$spark$internal$Logging$$log_

      public static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
    • org$apache$spark$internal$Logging$$log__$eq

      public static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
      Note: both org$apache$spark$internal$Logging$$... methods are synthetic accessors generated from Spark's internal Logging trait; they are not intended to be called from user code.