F1Metric
- class mmocr.evaluation.metrics.F1Metric(num_classes, key='labels', mode='micro', cared_classes=[], ignored_classes=[], collect_device='cpu', prefix=None)[source]
Compute F1 scores.
- Parameters
num_classes (int) – Number of labels.
key (str) – The key name of the predicted and ground truth labels. Defaults to ‘labels’.
mode (str or list[str]) – Options are:
- ‘micro’: Calculate metrics globally by counting the total true positives, false negatives and false positives.
- ‘macro’: Calculate metrics for each label, and find their unweighted mean.
If mode is a list, then the metrics in mode will be calculated separately. Defaults to ‘micro’.
cared_classes (list[int]) – The indices of the labels that participate in the metric computation. If both cared_classes and ignored_classes are empty, all classes will be taken into account. Defaults to []. Note: cared_classes and ignored_classes cannot be specified together.
ignored_classes (list[int]) – The index set of labels that are ignored when computing metrics. If both cared_classes and ignored_classes are empty, all classes will be taken into account. Defaults to []. Note: cared_classes and ignored_classes cannot be specified together.
collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.
prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.
- Return type
None
Warning
Only non-negative integer labels are involved in computing. All negative ground truth labels will be ignored.
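A minimal construction sketch based on the signature above; the three-class setup and the cared set are illustrative values, not part of the API:
>>> from mmocr.evaluation.metrics import F1Metric
>>> # Report both micro- and macro-averaged F1, restricted to classes 0 and 1.
>>> # cared_classes and ignored_classes must not be specified together.
>>> metric = F1Metric(num_classes=3, mode=['micro', 'macro'], cared_classes=[0, 1])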
- process(data_batch, data_samples)[source]
Process one batch of data_samples. The processed results should be stored in self.results, which will be used to compute the metrics when all batches have been processed.
- Parameters
data_batch (Sequence[Dict]) – A batch of ground truths.
data_samples (Sequence[Dict]) – A batch of outputs from the model.
- Return type
None
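A sketch of the process/evaluate flow under stated assumptions: the nested pred_instances / gt_instances dictionaries carrying a labels tensor mirror MMOCR’s KIE data-sample convention, and the key names of the returned scores depend on the configured prefix, so both should be treated as illustrative rather than definitive.
>>> import torch
>>> from mmocr.evaluation.metrics import F1Metric
>>> metric = F1Metric(num_classes=3, mode='micro')
>>> # Assumed layout: each sample exposes its predicted and ground truth
>>> # labels under the configured key ('labels' by default) inside
>>> # 'pred_instances' and 'gt_instances'.
>>> data_samples = [
...     dict(
...         pred_instances=dict(labels=torch.LongTensor([0, 1, 2, 1])),
...         gt_instances=dict(labels=torch.LongTensor([0, 1, 1, 1])),
...     ),
... ]
>>> # data_batch is not inspected in this sketch, so an empty list is passed.
>>> metric.process(data_batch=[], data_samples=data_samples)
>>> # evaluate() is inherited from mmengine's BaseMetric; it aggregates
>>> # self.results and returns a dict of F1 scores (exact key names depend
>>> # on the prefix).
>>> scores = metric.evaluate(size=len(data_samples))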