trait Evaluator[P, L, E] extends AnyRef
Abstract Value Members

  abstract def evaluate(predictions: RDD[P], labels: RDD[L]): E
Concrete Value Members

  final def !=(arg0: AnyRef): Boolean
  final def !=(arg0: Any): Boolean
  final def ##(): Int
  final def ==(arg0: AnyRef): Boolean
  final def ==(arg0: Any): Boolean
  final def asInstanceOf[T0]: T0
  def clone(): AnyRef
  final def eq(arg0: AnyRef): Boolean
  def equals(arg0: Any): Boolean
  def evaluate(predictions: PipelineDataset[P], labels: PipelineDataset[L]): E
  def evaluate(predictions: RDD[P], labels: PipelineDataset[L]): E
  def evaluate(predictions: PipelineDataset[P], labels: RDD[L]): E
  def finalize(): Unit
  final def getClass(): Class[_]
  def hashCode(): Int
  final def isInstanceOf[T0]: Boolean
  final def ne(arg0: AnyRef): Boolean
  final def notify(): Unit
  final def notifyAll(): Unit
  final def synchronized[T0](arg0: ⇒ T0): T0
  def toString(): String
  final def wait(): Unit
  final def wait(arg0: Long, arg1: Int): Unit
  final def wait(arg0: Long): Unit
Inherited from AnyRef
Inherited from Any
An Evaluator is an object whose evaluate method takes a collection of predictions and a collection of labels (of the same length and order) and returns an Evaluation, which is specific to the domain (binary classification, multi-label classification, etc.). The Evaluation is typically a set of summary statistics designed to capture the performance of a machine learning pipeline.

Because evaluation typically happens at the end of a pipeline, the cartesian product of {RDD, PipelineDataset} is supported for both arguments.
Type Parameters

  P  Type of the predictions.
  L  Type of the labels.
  E  Type of the evaluation.
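As an illustration, a minimal sketch of an Evaluator that computes classification accuracy might look like the following. Only the abstract (RDD, RDD) overload needs to be implemented; the PipelineDataset overloads are provided by the trait. AccuracyEvaluation and AccuracyEvaluator are hypothetical names introduced here for the example, and the sketch assumes Spark is on the classpath:

```scala
import org.apache.spark.rdd.RDD

// Hypothetical evaluation type: a single summary statistic.
case class AccuracyEvaluation(accuracy: Double)

// A minimal Evaluator that measures the fraction of predictions
// equal to their corresponding label. Because predictions and labels
// are required to have the same length and order, zip pairs them up
// element-by-element.
class AccuracyEvaluator[T] extends Evaluator[T, T, AccuracyEvaluation] {
  override def evaluate(predictions: RDD[T], labels: RDD[T]): AccuracyEvaluation = {
    val total = predictions.count()
    val correct = predictions.zip(labels).filter { case (p, l) => p == l }.count()
    AccuracyEvaluation(correct.toDouble / total)
  }
}
```

Note that RDD.zip additionally requires both RDDs to have the same number of partitions and the same number of elements per partition; an implementation that cannot guarantee this would need to key the two collections by index and join instead.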