Class: BinaryLogisticRegressionSummary

eclairjs/ml/classification.BinaryLogisticRegressionSummary

Binary logistic regression results for a given model.

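A BinaryLogisticRegressionSummary is normally not constructed directly; it is obtained from a fitted LogisticRegressionModel whose label column is binary. Below is a minimal sketch, assuming Nashorn-style require paths, an existing training DataFrame named trainingDF, and a summary() accessor on the model mirroring Spark's API (all of these are assumptions, not confirmed by this page):

var LogisticRegression = require('eclairjs/ml/classification/LogisticRegression');

var lr = new LogisticRegression()
    .setMaxIter(10)
    .setRegParam(0.01);

// trainingDF: an existing module:eclairjs/sql.DataFrame with "label" and "features" columns (assumed).
var model = lr.fit(trainingDF);
// Assumed to return a BinaryLogisticRegressionSummary when the label is binary.
var summary = model.summary();

The snippets under the individual methods below assume a summary obtained this way.
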
Constructor

new BinaryLogisticRegressionSummary()

Source:

Extends

module:eclairjs/ml/classification.LogisticRegressionSummary

Methods

areaUnderROC() → {float}

Computes the area under the receiver operating characteristic (ROC) curve. Note: This ignores instance weights (setting all to 1.0) from LogisticRegression.weightCol. This will change in later Spark versions.
Source:
Returns:
Type
float

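For example, reading the scalar AUC value (a sketch; summary is obtained as in the constructor section above, and print is Nashorn's console output):

var auc = summary.areaUnderROC();
print("area under ROC: " + auc);
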
featuresCol() → {string}

Field in "predictions" which gives the features of each instance as a vector.
Inherited From:
Source:
Returns:
Type
string

fMeasureByThreshold() → {module:eclairjs/sql.DataFrame}

Returns a DataFrame with two fields, (threshold, F-Measure), giving the F-measure-by-threshold curve with beta = 1.0. Note: This ignores instance weights (setting all to 1.0) from LogisticRegression.weightCol. This will change in later Spark versions.
Source:
Returns:
Type
module:eclairjs/sql.DataFrame

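This DataFrame is often used to choose the decision threshold that maximizes F-measure. A sketch, assuming the returned DataFrame supports orderBy, col, and take as in Spark's DataFrame API, and that Row values are read with get(index):

var fm = summary.fMeasureByThreshold();
// Sort descending by the F-Measure field and keep the top row.
var best = fm.orderBy(fm.col("F-Measure").desc()).take(1)[0];
print("best threshold: " + best.get(0) + ", F-measure: " + best.get(1));
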
labelCol() → {string}

Field in "predictions" which gives the true label of each instance.
Inherited From:
Source:
Returns:
Type
string

objectiveHistory() → {Array.<float>}

Objective function (scaled loss + regularization) at each training iteration.
Source:
Returns:
Type
Array.<float>

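A quick way to check convergence is to print the objective value at each iteration. A sketch, assuming the method returns a plain JavaScript array as documented above:

var history = summary.objectiveHistory();
history.forEach(function (loss, i) {
    print("iteration " + i + ": objective = " + loss);
});
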
pr() → {module:eclairjs/sql.DataFrame}

Returns the precision-recall curve, which is a DataFrame containing two fields, (recall, precision), with (0.0, 1.0) prepended to it. Note: This ignores instance weights (setting all to 1.0) from LogisticRegression.weightCol. This will change in later Spark versions.
Source:
Returns:
Type
module:eclairjs/sql.DataFrame

precisionByThreshold() → {module:eclairjs/sql.DataFrame}

Returns a DataFrame with two fields (threshold, precision), giving the precision-by-threshold curve. Every probability obtained when transforming the dataset is used as a threshold when computing the precision. Note: This ignores instance weights (setting all to 1.0) from LogisticRegression.weightCol. This will change in later Spark versions.
Source:
Returns:
Type
module:eclairjs/sql.DataFrame

predictions() → {module:eclairjs/sql.DataFrame}

DataFrame output by the model's `transform` method.
Inherited From:
Source:
Returns:
Type
module:eclairjs/sql.DataFrame

probabilityCol() → {string}

Field in "predictions" which gives the calibrated probability of each instance as a vector.
Inherited From:
Source:
Returns:
Type
string

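The inherited column-name getters pair naturally with predictions(): they identify which columns of that DataFrame to read without hard-coding names. A sketch, assuming DataFrame#select accepts column-name strings and show() prints rows, as in Spark:

var df = summary.predictions();
df.select(summary.labelCol(), summary.probabilityCol(), summary.featuresCol()).show();
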
recallByThreshold() → {module:eclairjs/sql.DataFrame}

Returns a DataFrame with two fields (threshold, recall), giving the recall-by-threshold curve. Every probability obtained when transforming the dataset is used as a threshold when computing the recall. Note: This ignores instance weights (setting all to 1.0) from LogisticRegression.weightCol. This will change in later Spark versions.
Source:
Returns:
Type
module:eclairjs/sql.DataFrame

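Because precisionByThreshold() and recallByThreshold() share the threshold field, the two curves can be combined into a single view. A sketch, assuming DataFrame#join accepts a column name to join on, as in Spark:

var precision = summary.precisionByThreshold();
var recall = summary.recallByThreshold();
// One row per threshold carrying both metrics.
precision.join(recall, "threshold").show();
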
roc() → {module:eclairjs/sql.DataFrame}

Returns the receiver operating characteristic (ROC) curve, which is a DataFrame having two fields (FPR, TPR), with (0.0, 0.0) prepended and (1.0, 1.0) appended to it. Note: This ignores instance weights (setting all to 1.0) from LogisticRegression.weightCol. This will change in later Spark versions.
Source:
Returns:
Type
module:eclairjs/sql.DataFrame

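The curve points can also be pulled back to the driver, for example to export them for plotting. A sketch, assuming collect() returns an array of Rows readable with get(index):

summary.roc().collect().forEach(function (row) {
    print("FPR=" + row.get(0) + " TPR=" + row.get(1));
});
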
totalIterations() → {integer}

Number of training iterations until termination.
Source:
Returns:
Type
integer