spark.svmLinear {SparkR} | R Documentation

Fits a linear SVM model against a SparkDataFrame, similar to `svm` in the e1071 package. Currently only binary classification with a linear kernel is supported. Users can print the model, make predictions with it, and save the model to a given path.

```
spark.svmLinear(data, formula, ...)

## S4 method for signature 'SparkDataFrame,formula'
spark.svmLinear(data, formula, regParam = 0, maxIter = 100, tol = 1e-06,
  standardization = TRUE, threshold = 0, weightCol = NULL,
  aggregationDepth = 2, handleInvalid = c("error", "keep", "skip"))

## S4 method for signature 'LinearSVCModel'
predict(object, newData)

## S4 method for signature 'LinearSVCModel'
summary(object)

## S4 method for signature 'LinearSVCModel,character'
write.ml(object, path, overwrite = FALSE)
```

`data` |
SparkDataFrame for training. |

`formula` |
A symbolic description of the model to be fitted. Currently only a few formula operators are supported, including '~', '.', ':', '+', and '-'. |

`...` |
additional arguments passed to the method. |

`regParam` |
The regularization parameter. Only supports L2 regularization currently. |

`maxIter` |
The maximum number of iterations. |

`tol` |
Convergence tolerance of iterations. |

`standardization` |
Whether to standardize the training features before fitting the model. Model coefficients are always returned on the original scale, so standardization is transparent to users. Note that with or without standardization, the model should converge to the same solution when no regularization is applied. |
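The back-transform behind "coefficients are returned on the original scale" can be sketched in plain base R. This uses `lm` for simplicity rather than Spark's SVM optimizer, and all variable names are illustrative; the point is only the algebra: a model fitted on standardized features maps to original-scale coefficients via `slope / sd(x)` and an adjusted intercept.

```r
# Sketch: fit on a standardized feature, then map coefficients back to
# the original scale (base R illustration, not Spark's optimizer).
set.seed(1)
x <- rnorm(100, mean = 50, sd = 10)
y <- 2 * x + 3 + rnorm(100, sd = 0.1)

# fit on the standardized feature z = (x - mean) / sd
z <- (x - mean(x)) / sd(x)
b_std <- coef(lm(y ~ z))

# back-transform: y = a + b*z = (a - b*mean/sd) + (b/sd)*x
slope_orig <- unname(b_std["z"] / sd(x))
intercept_orig <- unname(b_std["(Intercept)"] - b_std["z"] * mean(x) / sd(x))

# fitting directly on the original scale gives the same coefficients
b_orig <- coef(lm(y ~ x))
```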

`threshold` |
The threshold applied to the linear model's raw prediction in binary classification. This threshold can be any real number: Inf makes all predictions 0.0, and -Inf makes all predictions 1.0. |
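The thresholding rule can be sketched in base R: the label is 1.0 when the raw linear prediction exceeds the threshold, else 0.0, which also explains the Inf and -Inf edge cases. The weights and `predict_label` helper below are hypothetical, for illustration only.

```r
# Sketch of thresholding the raw linear prediction w . x + b.
# predict_label is a hypothetical helper, not part of SparkR.
predict_label <- function(x, w, b, threshold = 0) {
  raw <- sum(w * x) + b          # raw margin of the linear model
  if (raw > threshold) 1.0 else 0.0
}

w <- c(0.8, -0.5)                # hypothetical coefficients
b <- 0.1                         # hypothetical intercept

p_default <- predict_label(c(2, 1), w, b)        # raw = 1.2 > 0  -> 1.0
p_inf     <- predict_label(c(2, 1), w, b, Inf)   # nothing exceeds Inf -> 0.0
p_neg_inf <- predict_label(c(2, 1), w, b, -Inf)  # everything exceeds -Inf -> 1.0
```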

`weightCol` |
The weight column name. |

`aggregationDepth` |
The depth for treeAggregate (greater than or equal to 2). If the feature dimension or the number of partitions is large, this parameter can be increased. This is an expert parameter; the default value should be good for most cases. |

`handleInvalid` |
How to handle invalid data (unseen labels or NULL values) in features and label column of string type. Supported options: "skip" (filter out rows with invalid data), "error" (throw an error), "keep" (put invalid data in a special additional bucket, at index numLabels). Default is "error". |

`object` |
a LinearSVCModel fitted by `spark.svmLinear`. |

`newData` |
a SparkDataFrame for testing. |

`path` |
The directory where the model is saved. |

`overwrite` |
Whether to overwrite the output path if it already exists. Default is FALSE, which means an exception is thrown if the path exists. |

`spark.svmLinear`

returns a fitted linear SVM model.

`predict`

returns the predicted values based on a LinearSVCModel.

`summary`

returns summary information of the fitted model, which is a list. The list includes `coefficients` (coefficients of the fitted model), `numClasses` (number of classes), and `numFeatures` (number of features).

spark.svmLinear since 2.2.0

predict(LinearSVCModel) since 2.2.0

summary(LinearSVCModel) since 2.2.0

write.ml(LinearSVCModel, character) since 2.2.0

```
## Not run:
sparkR.session()
t <- as.data.frame(Titanic)
training <- createDataFrame(t)
model <- spark.svmLinear(training, Survived ~ ., regParam = 0.5)
summary <- summary(model)

# fitted values on training data
fitted <- predict(model, training)

# save fitted model to input path
path <- "path/to/model"
write.ml(model, path)

# can also read back the saved model and predict
# Note that summary does not work on a loaded model
savedModel <- read.ml(path)
summary(savedModel)
## End(Not run)
```

[Package *SparkR* version 2.4.0 Index]