org.apache.spark.mllib.util
Class MLUtils

Object
  extended by org.apache.spark.mllib.util.MLUtils

public class MLUtils
extends Object

Helper methods to load, save, and pre-process data used in MLlib.


Constructor Summary
MLUtils()
           
 
Method Summary
static Vector appendBias(Vector vector)
          Returns a new vector with 1.0 (bias) appended to the input vector.
static double EPSILON()
           
static
<T> scala.Tuple2<RDD<T>,RDD<T>>[]
kFold(RDD<T> rdd, int numFolds, int seed, scala.reflect.ClassTag<T> evidence$1)
          :: Experimental :: Returns a k-element array of pairs of RDDs, where the first element of each pair contains the training data (the complement of the validation data) and the second element contains the validation data (a unique 1/k of the input).
static RDD<LabeledPoint> loadLabeledData(SparkContext sc, String dir)
          Deprecated. Use RDD.saveAsTextFile(java.lang.String) for saving and loadLabeledPoints(org.apache.spark.SparkContext, java.lang.String, int) for loading instead.
static RDD<LabeledPoint> loadLabeledPoints(SparkContext sc, String dir)
          Loads labeled points saved using RDD[LabeledPoint].saveAsTextFile with the default number of partitions.
static RDD<LabeledPoint> loadLabeledPoints(SparkContext sc, String path, int minPartitions)
          Loads labeled points saved using RDD[LabeledPoint].saveAsTextFile.
static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc, String path)
          Loads binary labeled data in the LIBSVM format into an RDD[LabeledPoint], with number of features determined automatically and the default number of partitions.
static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc, String path, boolean multiclass)
           
static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc, String path, boolean multiclass, int numFeatures)
           
static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc, String path, boolean multiclass, int numFeatures, int minPartitions)
           
static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc, String path, int numFeatures)
          Loads labeled data in the LIBSVM format into an RDD[LabeledPoint], with the default number of partitions.
static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc, String path, int numFeatures, int minPartitions)
          Loads labeled data in the LIBSVM format into an RDD[LabeledPoint].
static RDD<Vector> loadVectors(SparkContext sc, String path)
          Loads vectors saved using RDD[Vector].saveAsTextFile with the default number of partitions.
static RDD<Vector> loadVectors(SparkContext sc, String path, int minPartitions)
          Loads vectors saved using RDD[Vector].saveAsTextFile.
static void saveAsLibSVMFile(RDD<LabeledPoint> data, String dir)
          Save labeled data in LIBSVM format.
static void saveLabeledData(RDD<LabeledPoint> data, String dir)
          Deprecated. Use RDD.saveAsTextFile(java.lang.String) for saving and loadLabeledPoints(org.apache.spark.SparkContext, java.lang.String, int) for loading instead.
 
Methods inherited from class Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

MLUtils

public MLUtils()
Method Detail

EPSILON

public static double EPSILON()

loadLibSVMFile

public static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc,
                                               String path,
                                               int numFeatures,
                                               int minPartitions)
Loads labeled data in the LIBSVM format into an RDD[LabeledPoint]. The LIBSVM format is a text-based format used by LIBSVM and LIBLINEAR. Each line represents a labeled sparse feature vector using the following format:
label index1:value1 index2:value2 ...
where the indices are one-based and in ascending order. This method parses each line into an org.apache.spark.mllib.regression.LabeledPoint, converting the feature indices to zero-based.

Parameters:
sc - Spark context
path - file or directory path in any Hadoop-supported file system URI
numFeatures - number of features, which will be determined from the input data if a nonpositive value is given. This is useful when the dataset is already split into multiple files and you want to load them separately, because some features may not be present in certain files, which would lead to inconsistent feature dimensions.
minPartitions - min number of partitions
Returns:
labeled data stored as an RDD[LabeledPoint]
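The one-based-to-zero-based index conversion described above can be sketched in plain Java. The SparsePoint class below is a hypothetical stand-in for mllib's LabeledPoint, and the parser is an illustration of the documented format, not Spark's implementation:

```java
import java.util.Arrays;

public class LibSVMLineDemo {
    // Minimal container standing in for mllib's LabeledPoint (hypothetical).
    static class SparsePoint {
        final double label;
        final int[] indices;   // zero-based
        final double[] values;
        SparsePoint(double label, int[] indices, double[] values) {
            this.label = label; this.indices = indices; this.values = values;
        }
    }

    // Parse one LIBSVM-format line: "label index1:value1 index2:value2 ...".
    // File indices are one-based; they are converted to zero-based here,
    // as the loader's documentation describes.
    static SparsePoint parseLibSVMLine(String line) {
        String[] tokens = line.trim().split("\\s+");
        double label = Double.parseDouble(tokens[0]);
        int[] indices = new int[tokens.length - 1];
        double[] values = new double[tokens.length - 1];
        for (int i = 1; i < tokens.length; i++) {
            String[] pair = tokens[i].split(":");
            indices[i - 1] = Integer.parseInt(pair[0]) - 1; // one-based -> zero-based
            values[i - 1] = Double.parseDouble(pair[1]);
        }
        return new SparsePoint(label, indices, values);
    }

    public static void main(String[] args) {
        SparsePoint p = parseLibSVMLine("1.0 1:0.5 3:2.0");
        System.out.println(p.label);                     // 1.0
        System.out.println(Arrays.toString(p.indices));  // [0, 2]
        System.out.println(Arrays.toString(p.values));   // [0.5, 2.0]
    }
}
```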

loadLibSVMFile

public static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc,
                                               String path,
                                               boolean multiclass,
                                               int numFeatures,
                                               int minPartitions)

loadLibSVMFile

public static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc,
                                               String path,
                                               int numFeatures)
Loads labeled data in the LIBSVM format into an RDD[LabeledPoint], with the default number of partitions.

Parameters:
sc - Spark context
path - file or directory path in any Hadoop-supported file system URI
numFeatures - number of features, which will be determined from the input data if a nonpositive value is given
Returns:
labeled data stored as an RDD[LabeledPoint]

loadLibSVMFile

public static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc,
                                               String path,
                                               boolean multiclass,
                                               int numFeatures)

loadLibSVMFile

public static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc,
                                               String path,
                                               boolean multiclass)

loadLibSVMFile

public static RDD<LabeledPoint> loadLibSVMFile(SparkContext sc,
                                               String path)
Loads binary labeled data in the LIBSVM format into an RDD[LabeledPoint], with number of features determined automatically and the default number of partitions.

Parameters:
sc - Spark context
path - file or directory path in any Hadoop-supported file system URI
Returns:
labeled data stored as an RDD[LabeledPoint]

saveAsLibSVMFile

public static void saveAsLibSVMFile(RDD<LabeledPoint> data,
                                    String dir)
Save labeled data in LIBSVM format.

Parameters:
data - an RDD of LabeledPoint to be saved
dir - directory to save the data

See Also:
loadLibSVMFile(org.apache.spark.SparkContext, java.lang.String, int, int)
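Saving is the inverse of loading: zero-based feature indices are written back out one-based. A minimal sketch of the per-point formatting, with plain arrays standing in for mllib's sparse Vector (an illustration, not Spark's code):

```java
public class LibSVMFormatDemo {
    // Format a label plus zero-based sparse (index, value) pairs as one
    // LIBSVM line. Indices are emitted one-based, matching the format that
    // loadLibSVMFile reads back.
    static String toLibSVMLine(double label, int[] indices, double[] values) {
        StringBuilder sb = new StringBuilder(String.valueOf(label));
        for (int i = 0; i < indices.length; i++) {
            sb.append(' ').append(indices[i] + 1).append(':').append(values[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toLibSVMLine(1.0, new int[]{0, 2}, new double[]{0.5, 2.0}));
        // 1.0 1:0.5 3:2.0
    }
}
```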

loadVectors

public static RDD<Vector> loadVectors(SparkContext sc,
                                      String path,
                                      int minPartitions)
Loads vectors saved using RDD[Vector].saveAsTextFile.

Parameters:
sc - Spark context
path - file or directory path in any Hadoop-supported file system URI
minPartitions - min number of partitions
Returns:
vectors stored as an RDD[Vector]

loadVectors

public static RDD<Vector> loadVectors(SparkContext sc,
                                      String path)
Loads vectors saved using RDD[Vector].saveAsTextFile with the default number of partitions.

Parameters:
sc - Spark context
path - file or directory path in any Hadoop-supported file system URI
Returns:
vectors stored as an RDD[Vector]

loadLabeledPoints

public static RDD<LabeledPoint> loadLabeledPoints(SparkContext sc,
                                                  String path,
                                                  int minPartitions)
Loads labeled points saved using RDD[LabeledPoint].saveAsTextFile.

Parameters:
sc - Spark context
path - file or directory path in any Hadoop-supported file system URI
minPartitions - min number of partitions
Returns:
labeled points stored as an RDD[LabeledPoint]

loadLabeledPoints

public static RDD<LabeledPoint> loadLabeledPoints(SparkContext sc,
                                                  String dir)
Loads labeled points saved using RDD[LabeledPoint].saveAsTextFile with the default number of partitions.

Parameters:
sc - Spark context
dir - directory path in any Hadoop-supported file system URI
Returns:
labeled points stored as an RDD[LabeledPoint]

loadLabeledData

public static RDD<LabeledPoint> loadLabeledData(SparkContext sc,
                                                String dir)
Deprecated. Use RDD.saveAsTextFile(java.lang.String) for saving and loadLabeledPoints(org.apache.spark.SparkContext, java.lang.String, int) for loading instead.

Load labeled data from a file. The expected format is "L, f1 f2 ...", where L is the label and f1, f2, ... are the feature values, all as Doubles.

Parameters:
sc - SparkContext
dir - Directory to the input data files.
Returns:
An RDD of LabeledPoint. Each labeled point has two elements: the first element is the label, and the second element represents the feature values (an array of Double).
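The deprecated "L, f1 f2 ..." format can be handled with plain string parsing. This is a sketch of the format as described above, not the deprecated loader's actual code:

```java
import java.util.Arrays;

public class LabeledDataLineDemo {
    // Parse the label from one "L, f1 f2 ..." line: the double before the comma.
    static double parseLabel(String line) {
        return Double.parseDouble(line.split(",", 2)[0].trim());
    }

    // Parse the features: space-separated doubles after the comma.
    static double[] parseFeatures(String line) {
        String[] parts = line.split(",", 2)[1].trim().split("\\s+");
        double[] features = new double[parts.length];
        for (int i = 0; i < parts.length; i++) {
            features[i] = Double.parseDouble(parts[i]);
        }
        return features;
    }

    public static void main(String[] args) {
        String line = "1.0, 0.5 2.0 -3.0";
        System.out.println(parseLabel(line));                     // 1.0
        System.out.println(Arrays.toString(parseFeatures(line))); // [0.5, 2.0, -3.0]
    }
}
```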


saveLabeledData

public static void saveLabeledData(RDD<LabeledPoint> data,
                                   String dir)
Deprecated. Use RDD.saveAsTextFile(java.lang.String) for saving and loadLabeledPoints(org.apache.spark.SparkContext, java.lang.String, int) for loading instead.

Save labeled data to a file. The format used is "L, f1 f2 ...", where L is the label and f1, f2, ... are the feature values, all as Doubles.

Parameters:
data - An RDD of LabeledPoints containing data to be saved.
dir - Directory to save the data.


kFold

public static <T> scala.Tuple2<RDD<T>,RDD<T>>[] kFold(RDD<T> rdd,
                                                      int numFolds,
                                                      int seed,
                                                      scala.reflect.ClassTag<T> evidence$1)
:: Experimental :: Returns a k-element array of pairs of RDDs, where the first element of each pair contains the training data (the complement of the validation data) and the second element contains the validation data (a unique 1/k of the input), with k = numFolds.

Parameters:
rdd - the RDD to split into folds
numFolds - number of folds (k)
seed - random seed for the split
evidence$1 - implicit ClassTag for T, supplied automatically by the Scala compiler
Returns:
an array of (training, validation) RDD pairs
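The fold structure can be sketched with plain lists standing in for RDDs. The seeded uniform fold assignment below illustrates the contract (each element lands in exactly one validation fold; the training set is its complement), not Spark's exact sampling strategy:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class KFoldDemo {
    // Sketch of a k-fold split: assign every element to one of numFolds
    // validation folds, then for each fold f emit the pair
    // (training = elements not in fold f, validation = elements in fold f).
    static <T> List<List<T>[]> kFold(List<T> data, int numFolds, long seed) {
        Random rng = new Random(seed);
        int[] fold = new int[data.size()];
        for (int i = 0; i < data.size(); i++) {
            fold[i] = rng.nextInt(numFolds);
        }
        List<List<T>[]> result = new ArrayList<>();
        for (int f = 0; f < numFolds; f++) {
            List<T> train = new ArrayList<>();
            List<T> validation = new ArrayList<>();
            for (int i = 0; i < data.size(); i++) {
                (fold[i] == f ? validation : train).add(data.get(i));
            }
            @SuppressWarnings("unchecked")
            List<T>[] pair = new List[] { train, validation };
            result.add(pair);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> data = Arrays.asList(1, 2, 3, 4, 5, 6);
        for (List<Integer>[] pair : kFold(data, 3, 42L)) {
            System.out.println("train=" + pair[0] + " validation=" + pair[1]);
        }
    }
}
```

With a uniform random assignment the folds are only approximately equal in size; over a large RDD each validation fold holds roughly 1/k of the data.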

appendBias

public static Vector appendBias(Vector vector)
Returns a new vector with 1.0 (bias) appended to the input vector.

Parameters:
vector - input vector
Returns:
a new vector with 1.0 appended as the last element
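Appending a constant 1.0 feature lets a linear model learn its intercept as an ordinary weight. A sketch of the operation with a dense double[] standing in for mllib's Vector:

```java
import java.util.Arrays;

public class AppendBiasDemo {
    // Return a copy of the input with a trailing 1.0 bias term appended;
    // the input array is not modified.
    static double[] appendBias(double[] v) {
        double[] out = Arrays.copyOf(v, v.length + 1);
        out[v.length] = 1.0;
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(appendBias(new double[]{0.5, -2.0})));
        // [0.5, -2.0, 1.0]
    }
}
```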