# F1Metric

class `mmocr.evaluation.metrics.F1Metric(num_classes, key='labels', mode='micro', cared_classes=[], ignored_classes=[], collect_device='cpu', prefix=None)`[source]

Compute F1 scores.

Parameters
• num_classes (int) – Number of labels.

• key (str) – The key name of the predicted and ground truth labels. Defaults to ‘labels’.

• mode (str or list[str]) – F1 score computation mode. Options are:

  - ‘micro’: Calculate metrics globally by counting the total true positives, false negatives and false positives.

  - ‘macro’: Calculate metrics for each label, and find their unweighted mean.

  If mode is a list, the metrics in mode will be calculated separately. Defaults to ‘micro’.

• cared_classes (list[int]) – The indices of the labels participating in the metric computation. If both cared_classes and ignored_classes are empty, all classes will be taken into account. Defaults to []. Note: cared_classes and ignored_classes cannot be specified together.

• ignored_classes (list[int]) – The index set of labels that are ignored when computing metrics. If both cared_classes and ignored_classes are empty, all classes will be taken into account. Defaults to []. Note: cared_classes and ignored_classes cannot be specified together.

• collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

• prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.
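The interplay between cared_classes and ignored_classes documented above can be sketched as a small helper. This is an illustration only, not part of the mmocr API; the name resolve_cared_labels is hypothetical:

```python
def resolve_cared_labels(num_classes, cared_classes=(), ignored_classes=()):
    """Toy sketch of reconciling cared/ignored class lists.

    Mirrors the documented constraints: the two lists cannot be
    specified together, and empty lists mean "use all classes".
    """
    assert not (cared_classes and ignored_classes), \
        'cared_classes and ignored_classes cannot be specified together'
    if cared_classes:
        assert all(0 <= c < num_classes for c in cared_classes)
        return sorted(cared_classes)
    if ignored_classes:
        assert all(0 <= c < num_classes for c in ignored_classes)
        # Keep every class except the ignored ones.
        return sorted(set(range(num_classes)) - set(ignored_classes))
    return list(range(num_classes))
```

For example, with `num_classes=5` and `ignored_classes=[1, 3]`, the cared label set resolves to `[0, 2, 4]`.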

Return type

None

Warning

Only non-negative integer labels are involved in the computation. All negative ground truth labels will be ignored.
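To make the two modes and the warning above concrete, here is a minimal, self-contained sketch of micro and macro F1 over flat label lists. It is an illustration of the definitions, not the class's actual implementation:

```python
from collections import defaultdict


def f1_scores(preds, gts, num_classes, mode=('micro',)):
    """Toy micro/macro F1 over flat prediction/ground-truth label lists.

    Negative ground-truth labels are skipped, mirroring the warning above.
    """
    tp = defaultdict(int)  # true positives per class
    fp = defaultdict(int)  # false positives per class
    fn = defaultdict(int)  # false negatives per class
    for p, g in zip(preds, gts):
        if g < 0:  # ignore negative ground-truth labels
            continue
        if p == g:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    results = {}
    if 'micro' in mode:
        # Global counts first, then a single F1.
        t, f_p, f_n = sum(tp.values()), sum(fp.values()), sum(fn.values())
        results['micro_f1'] = 2 * t / max(2 * t + f_p + f_n, 1)
    if 'macro' in mode:
        # Per-class F1, then an unweighted mean.
        f1s = []
        for c in range(num_classes):
            denom = 2 * tp[c] + fp[c] + fn[c]
            f1s.append(2 * tp[c] / denom if denom else 0.0)
        results['macro_f1'] = sum(f1s) / num_classes
    return results
```

For `preds=[0, 0, 1, 1]` and `gts=[0, 1, 1, 1]`, micro F1 pools all counts into 0.75, while macro F1 averages the per-class scores 2/3 and 4/5 into roughly 0.733, showing how the two modes can disagree.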

`compute_metrics(results)`[source]

Compute the metrics from processed results.

Parameters

results (list[Dict]) – The processed results of each batch.

Returns

The F1 scores. The keys are the names of the metrics, and the values are the corresponding results. Possible keys are ‘micro_f1’ and ‘macro_f1’.

Return type

dict[str, float]

`process(data_batch, data_samples)`[source]

Process one batch of data_samples. The processed results should be stored in self.results, which will be used to compute the metrics when all batches have been processed.

Parameters
• data_batch (Sequence[Dict]) – A batch of ground truth samples.

• data_samples (Sequence[Dict]) – A batch of outputs from the model.

Return type

None
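The process / compute_metrics contract can be illustrated with a deliberately simplified stand-in class. The dict layout of data_samples below (pred/gt sub-dicts holding plain lists) is an assumption made for illustration, not the real mmocr data sample format:

```python
class MiniF1Metric:
    """Hypothetical, simplified stand-in showing the process /
    compute_metrics flow (NOT the actual mmocr implementation)."""

    def __init__(self, num_classes, key='labels'):
        self.num_classes = num_classes
        self.key = key
        self.results = []  # filled by process(), consumed by compute_metrics()

    def process(self, data_batch, data_samples):
        # Store only what compute_metrics needs for each sample.
        # The 'pred'/'gt' layout here is assumed for illustration.
        for sample in data_samples:
            self.results.append(
                (sample['pred'][self.key], sample['gt'][self.key]))

    def compute_metrics(self, results):
        tp = fp = fn = 0
        for preds, gts in results:
            for p, g in zip(preds, gts):
                if g < 0:  # negative ground-truth labels are ignored
                    continue
                if p == g:
                    tp += 1
                else:
                    fp += 1
                    fn += 1
        denom = 2 * tp + fp + fn
        return {'micro_f1': 2 * tp / denom if denom else 0.0}


metric = MiniF1Metric(num_classes=2)
metric.process(None, [{'pred': {'labels': [0, 1]}, 'gt': {'labels': [0, 0]}}])
scores = metric.compute_metrics(metric.results)
```

In real usage the evaluator loop calls process once per batch and compute_metrics once after all batches, which is why intermediate state lives in self.results.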