Package org.apache.spark.ml.regression

Class AFTSurvivalRegressionModel

Object
  org.apache.spark.ml.PipelineStage
    org.apache.spark.ml.Transformer
      org.apache.spark.ml.Model<M>
        org.apache.spark.ml.PredictionModel<FeaturesType,M>
          org.apache.spark.ml.regression.RegressionModel<Vector,AFTSurvivalRegressionModel>
            org.apache.spark.ml.regression.AFTSurvivalRegressionModel

All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, Params, HasAggregationDepth, HasFeaturesCol, HasFitIntercept, HasLabelCol, HasMaxBlockSizeInMB, HasMaxIter, HasPredictionCol, HasTol, PredictorParams, AFTSurvivalRegressionParams, Identifiable, MLWritable

public class AFTSurvivalRegressionModel
extends RegressionModel<Vector,AFTSurvivalRegressionModel>
implements AFTSurvivalRegressionParams, MLWritable

Model produced by AFTSurvivalRegression.
Nested Class Summary

Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
Method Summary

final IntParam aggregationDepth()
    Param for suggested depth for treeAggregate (>= 2).
Param<String> censorCol()
    Param for censor column name.
Vector coefficients()
AFTSurvivalRegressionModel copy(ParamMap extra)
    Creates a copy of this instance with the same UID and some extra params.
final BooleanParam fitIntercept()
    Param for whether to fit an intercept term.
double intercept()
static AFTSurvivalRegressionModel load(String path)
final DoubleParam maxBlockSizeInMB()
    Param for maximum memory in MB for stacking input data into blocks.
final IntParam maxIter()
    Param for maximum number of iterations (>= 0).
int numFeatures()
    Returns the number of features the model was trained on.
double predict(Vector features)
    Predict label for the given features.
Vector predictQuantiles(Vector features)
final DoubleArrayParam quantileProbabilities()
    Param for quantile probabilities array.
Param<String> quantilesCol()
    Param for quantiles column name.
static MLReader<AFTSurvivalRegressionModel> read()
double scale()
AFTSurvivalRegressionModel setQuantileProbabilities(double[] value)
AFTSurvivalRegressionModel setQuantilesCol(String value)
final DoubleParam tol()
    Param for the convergence tolerance for iterative algorithms (>= 0).
String toString()
Dataset<Row> transform(Dataset<?> dataset)
    Transforms dataset by reading from PredictionModel.featuresCol(), calling predict, and storing the predictions as a new column PredictionModel.predictionCol().
StructType transformSchema(StructType schema)
    Check transform validity and derive the output schema from the input schema.
String uid()
    An immutable unique ID for the object and its derivatives.
MLWriter write()
    Returns an MLWriter instance for this ML instance.

Methods inherited from class org.apache.spark.ml.PredictionModel:
featuresCol, labelCol, predictionCol, setFeaturesCol, setPredictionCol

Methods inherited from class org.apache.spark.ml.Transformer:
transform, transform, transform

Methods inherited from class org.apache.spark.ml.PipelineStage:
params

Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface org.apache.spark.ml.regression.AFTSurvivalRegressionParams:
getCensorCol, getQuantileProbabilities, getQuantilesCol, hasQuantilesCol, validateAndTransformSchema

Methods inherited from interface org.apache.spark.ml.param.shared.HasAggregationDepth:
getAggregationDepth

Methods inherited from interface org.apache.spark.ml.param.shared.HasFeaturesCol:
featuresCol, getFeaturesCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasFitIntercept:
getFitIntercept

Methods inherited from interface org.apache.spark.ml.param.shared.HasLabelCol:
getLabelCol, labelCol

Methods inherited from interface org.apache.spark.ml.param.shared.HasMaxBlockSizeInMB:
getMaxBlockSizeInMB

Methods inherited from interface org.apache.spark.ml.param.shared.HasMaxIter:
getMaxIter

Methods inherited from interface org.apache.spark.ml.param.shared.HasPredictionCol:
getPredictionCol, predictionCol

Methods inherited from interface org.apache.spark.internal.Logging:
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logBasedOnLevel, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, MDC, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

Methods inherited from interface org.apache.spark.ml.util.MLWritable:
save

Methods inherited from interface org.apache.spark.ml.param.Params:
clear, copyValues, defaultCopy, defaultParamMap, estimateMatadataSize, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn

Methods inherited from interface org.apache.spark.ml.PredictorParams:
validateAndTransformSchema
Method Details

read

public static MLReader<AFTSurvivalRegressionModel> read()

load

public static AFTSurvivalRegressionModel load(String path)
censorCol

Description copied from interface: AFTSurvivalRegressionParams
Param for censor column name. The value of this column can be 0 or 1. If the value is 1, the event has occurred (uncensored); otherwise the observation is censored.
Specified by:
censorCol in interface AFTSurvivalRegressionParams
Returns:
(undocumented)
quantileProbabilities

Description copied from interface: AFTSurvivalRegressionParams
Param for quantile probabilities array. Values of the quantile probabilities array should be in the range (0, 1), and the array should be non-empty.
Specified by:
quantileProbabilities in interface AFTSurvivalRegressionParams
Returns:
(undocumented)
 
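The constraint above (a non-empty array with every probability strictly between 0 and 1) can be checked with a small helper. This validator is an illustrative sketch, not part of the Spark API:

```python
def valid_quantile_probabilities(probs):
    """Check the documented constraint on quantileProbabilities:
    the array must be non-empty and every value must lie in (0, 1)."""
    return len(probs) > 0 and all(0.0 < p < 1.0 for p in probs)
```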
quantilesCol

Description copied from interface: AFTSurvivalRegressionParams
Param for quantiles column name. If set, this column will contain the quantiles corresponding to quantileProbabilities.
Specified by:
quantilesCol in interface AFTSurvivalRegressionParams
Returns:
(undocumented)
 
maxBlockSizeInMB

Description copied from interface: HasMaxBlockSizeInMB
Param for maximum memory in MB for stacking input data into blocks. Data is stacked within partitions; if a block would exceed the remaining data size in a partition, it is adjusted to that data size. The default of 0.0 means an optimal value is chosen automatically, depending on the specific algorithm. Must be >= 0.
Specified by:
maxBlockSizeInMB in interface HasMaxBlockSizeInMB
Returns:
(undocumented)
 
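The adjustment described above can be sketched as: a requested block size is capped at the data remaining in the partition, and 0.0 delegates the choice to the algorithm. This is a hypothetical helper illustrating the documented behavior, not Spark internals:

```python
def effective_block_size_mb(requested_mb, remaining_mb, auto_choice_mb=1.0):
    """Sketch of the documented maxBlockSizeInMB behavior.

    A requested size of 0.0 means "choose an optimal value" (auto_choice_mb
    stands in for that algorithm-specific choice); otherwise the block is
    shrunk to the data remaining in the partition if the request exceeds it.
    """
    if requested_mb == 0.0:
        requested_mb = auto_choice_mb  # hypothetical algorithm-chosen value
    return min(requested_mb, remaining_mb)
```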
aggregationDepth

Description copied from interface: HasAggregationDepth
Param for suggested depth for treeAggregate (>= 2).
Specified by:
aggregationDepth in interface HasAggregationDepth
Returns:
(undocumented)
 
fitIntercept

Description copied from interface: HasFitIntercept
Param for whether to fit an intercept term.
Specified by:
fitIntercept in interface HasFitIntercept
Returns:
(undocumented)
 
tol

Description copied from interface: HasTol
Param for the convergence tolerance for iterative algorithms (>= 0).
Specified by:
tol in interface HasTol
maxIter

Description copied from interface: HasMaxIter
Param for maximum number of iterations (>= 0).
Specified by:
maxIter in interface HasMaxIter
Returns:
(undocumented)
 
uid

Description copied from interface: Identifiable
An immutable unique ID for the object and its derivatives.
Specified by:
uid in interface Identifiable
Returns:
(undocumented)
 
coefficients

public Vector coefficients()

intercept

public double intercept()

scale

public double scale()

numFeatures

public int numFeatures()
Description copied from class: PredictionModel
Returns the number of features the model was trained on. If unknown, returns -1.
Overrides:
numFeatures in class PredictionModel<Vector,AFTSurvivalRegressionModel>
 
setQuantileProbabilities

public AFTSurvivalRegressionModel setQuantileProbabilities(double[] value)

setQuantilesCol

public AFTSurvivalRegressionModel setQuantilesCol(String value)

predictQuantiles

public Vector predictQuantiles(Vector features)
predict

Description copied from class: PredictionModel
Predict label for the given features. This method is used to implement transform() and output PredictionModel.predictionCol().
Specified by:
predict in class PredictionModel<Vector,AFTSurvivalRegressionModel>
Parameters:
features - (undocumented)
Returns:
(undocumented)
 
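As a sketch of what the prediction computes: an AFT model predicts on the scale of the survival time via the exponentiated linear predictor, exp(coefficients · features + intercept). The following pure-Python rendering is illustrative only, with hypothetical names, and is not Spark's code:

```python
import math

def aft_predict(features, coefficients, intercept):
    """Sketch: an AFT prediction is the exponentiated linear predictor."""
    linear = sum(c * f for c, f in zip(coefficients, features)) + intercept
    return math.exp(linear)

# With all-zero coefficients the prediction collapses to exp(intercept).
baseline = aft_predict([1.0, 2.0], [0.0, 0.0], 0.0)
```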
transform

Description copied from class: PredictionModel
Transforms dataset by reading from PredictionModel.featuresCol(), calling predict, and storing the predictions as a new column PredictionModel.predictionCol().
Overrides:
transform in class PredictionModel<Vector,AFTSurvivalRegressionModel>
Parameters:
dataset - input dataset
Returns:
transformed dataset with PredictionModel.predictionCol() of type Double
 
transformSchema

Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks that do not depend on other parameters are handled by Param.validate().
Typical implementations should first verify schema changes and parameter validity, including complex parameter interaction checks.
Overrides:
transformSchema in class PredictionModel<Vector,AFTSurvivalRegressionModel>
Parameters:
schema - (undocumented)
Returns:
(undocumented)
 
copy

Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
Specified by:
copy in interface Params
Specified by:
copy in class Model<AFTSurvivalRegressionModel>
Parameters:
extra - (undocumented)
Returns:
(undocumented)
 
write

Description copied from interface: MLWritable
Returns an MLWriter instance for this ML instance.
Specified by:
write in interface MLWritable
Returns:
(undocumented)
 
toString

Specified by:
toString in interface Identifiable
Overrides:
toString in class Object