public class SequenceFileRDDFunctions<K,V>
extends Object
implements org.apache.spark.internal.Logging, scala.Serializable
| Constructor and Description |
| --- |
| SequenceFileRDDFunctions(RDD<scala.Tuple2<K,V>> self, Class<? extends org.apache.hadoop.io.Writable> _keyWritableClass, Class<? extends org.apache.hadoop.io.Writable> _valueWritableClass, scala.Function1<K,org.apache.hadoop.io.Writable> evidence$1, scala.reflect.ClassTag<K> evidence$2, scala.Function1<V,org.apache.hadoop.io.Writable> evidence$3, scala.reflect.ClassTag<V> evidence$4) |
| Modifier and Type | Method and Description |
| --- | --- |
| void | saveAsSequenceFile(String path, scala.Option<Class<? extends org.apache.hadoop.io.compress.CompressionCodec>> codec) Output the RDD as a Hadoop SequenceFile using the Writable types we infer from the RDD's key and value types. |
Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.internal.Logging:
$init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize
public SequenceFileRDDFunctions(RDD<scala.Tuple2<K,V>> self,
                                Class<? extends org.apache.hadoop.io.Writable> _keyWritableClass,
                                Class<? extends org.apache.hadoop.io.Writable> _valueWritableClass,
                                scala.Function1<K,org.apache.hadoop.io.Writable> evidence$1,
                                scala.reflect.ClassTag<K> evidence$2,
                                scala.Function1<V,org.apache.hadoop.io.Writable> evidence$3,
                                scala.reflect.ClassTag<V> evidence$4)
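In practice this constructor is seldom called directly: Spark's implicit conversions enrich an RDD of pairs with these functions whenever the key and value types can be mapped to Writables. Below is a minimal, hedged sketch of that usual path; the app name, master URL, and output directory are illustrative assumptions, not part of this API page.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SequenceFileSketch {
  def main(args: Array[String]): Unit = {
    // Illustrative local configuration; not taken from this page.
    val sc = new SparkContext(
      new SparkConf().setAppName("sequencefile-sketch").setMaster("local[*]"))

    // Int and String are inferred as IntWritable and Text, so the implicit
    // conversion to SequenceFileRDDFunctions applies without ceremony.
    val pairs = sc.parallelize(Seq(1 -> "one", 2 -> "two", 3 -> "three"))
    pairs.saveAsSequenceFile("/tmp/sequencefile-sketch") // hypothetical path

    sc.stop()
  }
}
```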
public void saveAsSequenceFile(String path,
                               scala.Option<Class<? extends org.apache.hadoop.io.compress.CompressionCodec>> codec)

Output the RDD as a Hadoop SequenceFile using the Writable types we infer from the RDD's key and value types. The path can be on any Hadoop-supported file system.

Parameters:
path - (undocumented)
codec - (undocumented)
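To exercise the codec parameter, wrap a Hadoop CompressionCodec class in Some; the default None writes uncompressed output. A hedged sketch, assuming GzipCodec is on the classpath and using an illustrative path, with a read-back via SparkContext.sequenceFile to show the round trip:

```scala
import org.apache.hadoop.io.compress.GzipCodec
import org.apache.spark.{SparkConf, SparkContext}

object SaveWithCodecSketch {
  def main(args: Array[String]): Unit = {
    // Master URL and paths are illustrative assumptions for a local run.
    val sc = new SparkContext(
      new SparkConf().setAppName("codec-sketch").setMaster("local[*]"))

    val pairs = sc.parallelize(Seq("a" -> 1L, "b" -> 2L))
    // Pass the codec as Some(classOf[...]); omitting it writes uncompressed.
    pairs.saveAsSequenceFile("/tmp/sequencefile-gzip", Some(classOf[GzipCodec]))

    // Read the SequenceFile back; Writables are converted to Scala types.
    val restored = sc.sequenceFile[String, Long]("/tmp/sequencefile-gzip")
    restored.collect().foreach(println)

    sc.stop()
  }
}
```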