mmocr.apis

mmocr.apis.disable_text_recog_aug_test(cfg, set_types=None)[source]

Remove aug_test from test pipeline for text recognition.

Parameters
  • cfg (mmcv.Config) – Input config.

  • set_types (list[str]) – Type of dataset source. Should be None or a sublist of [‘test’, ‘val’].

mmocr.apis.init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None)[source]

Initialize a detector from config file.

Parameters
  • config (str or mmcv.Config) – Config file path or the config object.

  • checkpoint (str, optional) – Checkpoint path. If left as None, the model will not load any weights.

  • cfg_options (dict) – Options to override some settings in the used config.

Returns

The constructed detector.

Return type

nn.Module
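
Example

A minimal sketch of initializing a detector; the config path below is only illustrative and should be replaced with a real file:

>>> from mmocr.apis import init_detector
>>> model = init_detector(
>>>     'configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py',
>>>     checkpoint=None,  # no weights are loaded when left as None
>>>     device='cpu')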

mmocr.apis.init_random_seed(seed=None, device='cuda')[source]

Initialize random seed. If the seed is None, it will be replaced by a random number, and then broadcasted to all processes.

Parameters
  • seed (int, Optional) – The seed.

  • device (str) – The device where the seed will be put on.

Returns

Seed to be used.

Return type

int

mmocr.apis.model_inference(model, imgs, ann=None, batch_mode=False, return_data=False)[source]

Inference image(s) with the detector.

Parameters
  • model (nn.Module) – The loaded detector.

  • imgs (str/ndarray or list[str/ndarray] or tuple[str/ndarray]) – Either image files or loaded images.

  • batch_mode (bool) – If True, use batch mode for inference.

  • ann (dict) – Annotation info for key information extraction.

  • return_data (bool) – Whether to return postprocessed data.

Returns

Predicted results.

Return type

result (dict)
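
Example

A minimal sketch chaining init_detector and model_inference; the config, checkpoint and image paths are placeholders. For detection models, the result dict carries the boundary_result key described under mmocr.core.evaluation.eval_hmean below:

>>> from mmocr.apis import init_detector, model_inference
>>> model = init_detector('dbnet_config.py', 'dbnet.pth', device='cpu')
>>> result = model_inference(model, 'demo_text_det.jpg')
>>> print(result['boundary_result'])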

mmocr.apis.replace_image_to_tensor(cfg, set_types=None)[source]

Replace ‘ImageToTensor’ with ‘DefaultFormatBundle’.

mmocr.core

evaluation

mmocr.core.evaluation.compute_f1_score(preds, gts, ignores=[])[source]

Compute the F1-score of prediction.

Parameters
  • preds (Tensor) – The predicted probability NxC map with N and C being the sample number and class number respectively.

  • gts (Tensor) – The ground truth vector of size N.

  • ignores – The index set of classes that are ignored when reporting results. Note: all samples still participate in the computation.
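
Example

A minimal sketch with N=3 samples and C=2 classes; the exact layout of the returned scores is an assumption:

>>> import torch
>>> from mmocr.core.evaluation import compute_f1_score
>>> preds = torch.tensor([[0.9, 0.1],   # predicted class 0
>>>                       [0.2, 0.8],   # predicted class 1
>>>                       [0.6, 0.4]])  # predicted class 0
>>> gts = torch.tensor([0, 1, 1])
>>> f1 = compute_f1_score(preds, gts)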

mmocr.core.evaluation.eval_hmean(results, img_infos, ann_infos, metrics={'hmean-iou'}, score_thr=0.3, rank_list=None, logger=None, **kwargs)[source]

Evaluation in hmean metric.

Parameters
  • results (list[dict]) – Each dict corresponds to one image, containing the following keys: boundary_result

  • img_infos (list[dict]) – Each dict corresponds to one image, containing the following keys: filename, height, width

  • ann_infos (list[dict]) – Each dict corresponds to one image, containing the following keys: masks, masks_ignore

  • score_thr (float) – Score threshold of prediction map.

  • metrics (set{str}) – Hmean metric set, should be one or all of {‘hmean-iou’, ‘hmean-ic13’}

Returns

The hmean evaluation results.

Return type

dict[str, float]

mmocr.core.evaluation.eval_hmean_ic13(det_boxes, gt_boxes, gt_ignored_boxes, precision_thr=0.4, recall_thr=0.8, center_dist_thr=1.0, one2one_score=1.0, one2many_score=0.8, many2one_score=1.0)[source]

Evaluate hmean of text detection using the icdar2013 standard.

Parameters
  • det_boxes (list[list[list[float]]]) – List of arrays of shape (n, 2k). Each element is the det_boxes for one img. k>=4.

  • gt_boxes (list[list[list[float]]]) – List of arrays of shape (m, 2k). Each element is the gt_boxes for one img. k>=4.

  • gt_ignored_boxes (list[list[list[float]]]) – List of arrays of (l, 2k). Each element is the ignored gt_boxes for one img. k>=4.

  • precision_thr (float) – Precision threshold of the iou of one (gt_box, det_box) pair.

  • recall_thr (float) – Recall threshold of the iou of one (gt_box, det_box) pair.

  • center_dist_thr (float) – Distance threshold of one (gt_box, det_box) center point pair.

  • one2one_score (float) – Reward when one gt matches one det_box.

  • one2many_score (float) – Reward when one gt matches many det_boxes.

  • many2one_score (float) – Reward when many gts match one det_box.

Returns

Tuple of dicts which encodes the hmean for the dataset and all images.

Return type

hmean (tuple[dict])

mmocr.core.evaluation.eval_hmean_iou(pred_boxes, gt_boxes, gt_ignored_boxes, iou_thr=0.5, precision_thr=0.5)[source]

Evaluate hmean of text detection using IOU standard.

Parameters
  • pred_boxes (list[list[list[float]]]) – Text boxes for an img list. Each box has 2k (>=8) values.

  • gt_boxes (list[list[list[float]]]) – Ground truth text boxes for an img list. Each box has 2k (>=8) values.

  • gt_ignored_boxes (list[list[list[float]]]) – Ignored ground truth text boxes for an img list. Each box has 2k (>=8) values.

  • iou_thr (float) – Iou threshold when one (gt_box, det_box) pair is matched.

  • precision_thr (float) – Precision threshold when one (gt_box, det_box) pair is matched.

Returns

Tuple of dicts indicating the hmean for the dataset and all images.

Return type

hmean (tuple[dict])
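
Example

A minimal sketch for a single image whose one prediction exactly matches the one ground truth box (coordinates chosen for illustration):

>>> from mmocr.core.evaluation import eval_hmean_iou
>>> pred_boxes = [[[0, 0, 10, 0, 10, 10, 0, 10]]]
>>> gt_boxes = [[[0, 0, 10, 0, 10, 10, 0, 10]]]
>>> gt_ignored_boxes = [[]]
>>> dataset_res, img_res = eval_hmean_iou(
>>>     pred_boxes, gt_boxes, gt_ignored_boxes)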

mmocr.core.evaluation.eval_ner_f1(results, gt_infos)[source]

Evaluate for ner task.

Parameters
  • results (list) – Predict results of entities.

  • gt_infos (list[dict]) – Ground-truth information which contains text and label.

Returns

Precision, recall and f1-score of the total and of each category.

Return type

class_info (dict)

mmocr.core.evaluation.eval_ocr_metric(pred_texts, gt_texts)[source]

Evaluate the text recognition performance with the metrics word accuracy and 1-N.E.D. See https://rrc.cvc.uab.es/?ch=14&com=tasks for details.

Parameters
  • pred_texts (list[str]) – Text strings of prediction.

  • gt_texts (list[str]) – Text strings of ground truth.

Returns

Metric dict for text recognition, including:
  • word_acc: Accuracy at the word level.

  • word_acc_ignore_case: Accuracy at the word level, ignoring letter case.

  • word_acc_ignore_case_symbol: Accuracy at the word level, ignoring letter case and symbols (the default metric for academic evaluation).

  • char_recall: Recall at the character level, ignoring letter case and symbols.

  • char_precision: Precision at the character level, ignoring letter case and symbols.

  • 1-N.E.D: 1 - normalized_edit_distance.

Return type

dict[str, float]
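
Example

A minimal sketch; the metric keys are those listed above:

>>> from mmocr.core.evaluation import eval_ocr_metric
>>> metrics = eval_ocr_metric(['Hello!', 'wrold'], ['hello', 'world'])
>>> print(metrics['word_acc_ignore_case_symbol'])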

mmocr.utils

class mmocr.utils.Registry(name, build_func=None, parent=None, scope=None)[source]

A registry to map strings to classes.

Registered objects can be built from the registry.

Example

>>> MODELS = Registry('models')
>>> @MODELS.register_module()
>>> class ResNet:
>>>     pass
>>> resnet = MODELS.build(dict(type='ResNet'))

Please refer to https://mmcv.readthedocs.io/en/latest/understand_mmcv/registry.html for advanced usage.

Parameters
  • name (str) – Registry name.

  • build_func (func, optional) – Build function to construct an instance from the Registry. build_from_cfg() is used if neither parent nor build_func is specified. If parent is specified and build_func is not given, build_func will be inherited from parent. Default: None.

  • parent (Registry, optional) – Parent registry. The class registered in children registry could be built from parent. Default: None.

  • scope (str, optional) – The scope of registry. It is the key to search for children registry. If not specified, scope will be the name of the package where class is defined, e.g. mmdet, mmcls, mmseg. Default: None.

get(key)[source]

Get the registry record.

Parameters

key (str) – The class name in string format.

Returns

The corresponding class.

Return type

class

static infer_scope()[source]

Infer the scope of registry.

The name of the package where registry is defined will be returned.

Example

>>> # in mmdet/models/backbone/resnet.py
>>> MODELS = Registry('models')
>>> @MODELS.register_module()
>>> class ResNet:
>>>     pass
The scope of ``ResNet`` will be ``mmdet``.

Returns

The inferred scope name.

Return type

str

register_module(name=None, force=False, module=None)[source]

Register a module.

A record will be added to self._module_dict, whose key is the class name or the specified name, and value is the class itself. It can be used as a decorator or a normal function.

Example

>>> backbones = Registry('backbone')
>>> @backbones.register_module()
>>> class ResNet:
>>>     pass
>>> backbones = Registry('backbone')
>>> @backbones.register_module(name='mnet')
>>> class MobileNet:
>>>     pass
>>> backbones = Registry('backbone')
>>> class ResNet:
>>>     pass
>>> backbones.register_module(ResNet)

Parameters
  • name (str | None) – The module name to be registered. If not specified, the class name will be used.

  • force (bool, optional) – Whether to override an existing class with the same name. Default: False.

  • module (type) – Module class to be registered.

static split_scope_key(key)[source]

Split scope and key.

The first scope will be split from key.

Examples

>>> Registry.split_scope_key('mmdet.ResNet')
'mmdet', 'ResNet'
>>> Registry.split_scope_key('ResNet')
None, 'ResNet'

Returns

The former element is the first scope of the key, which can be None. The latter is the remaining key.

Return type

tuple[str | None, str]

class mmocr.utils.StringStrip(strip=True, strip_pos='both', strip_str=None)[source]

Remove the leading and/or trailing characters based on the string argument passed.

Parameters
  • strip (bool) – Whether to remove characters from both the left and right of the string. Default: True.

  • strip_pos (str) – The position to strip from, one of (‘both’, ‘left’, ‘right’). Default: ‘both’.

  • strip_str (str|None) – A string specifying the set of characters to be removed from the left and right part of the string. If None, all leading and trailing whitespaces are removed from the string. Default: None.
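
Example

A minimal sketch, assuming a StringStrip instance is called directly on a string:

>>> from mmocr.utils import StringStrip
>>> strip = StringStrip(strip_pos='right', strip_str='\n')
>>> strip('label\n')
'label'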

mmocr.utils.build_from_cfg(cfg, registry, default_args=None)[source]

Build a module from config dict.

Parameters
  • cfg (dict) – Config dict. It should at least contain the key “type”.

  • registry (Registry) – The registry to search the type from.

  • default_args (dict, optional) – Default initialization arguments.

Returns

The constructed object.

Return type

object
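
Example

A minimal sketch reusing the Registry example above; default_args fills in arguments absent from the cfg dict:

>>> from mmocr.utils import Registry, build_from_cfg
>>> MODELS = Registry('models')
>>> @MODELS.register_module()
>>> class ResNet:
>>>     def __init__(self, depth=50):
>>>         self.depth = depth
>>> resnet = build_from_cfg(dict(type='ResNet'), MODELS,
>>>                         default_args=dict(depth=18))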

mmocr.utils.collect_env()[source]

Collect the information of the running environments.

mmocr.utils.convert_annotations(image_infos, out_json_name)[source]

Convert the annotation into coco style.

Parameters
  • image_infos (list) – The list of image information dicts

  • out_json_name (str) – The output json filename

Returns

The coco style dict

Return type

out_json (dict)

mmocr.utils.drop_orientation(img_file)[source]

Check whether the image has orientation information. If so, remove the orientation by converting the image to PNG and return the new filename; otherwise return the original filename.

Parameters

img_file (str) – The image path

Returns

The converted image filename with proper postfix

mmocr.utils.get_root_logger(log_file=None, log_level=20)[source]

Use get_logger method in mmcv to get the root logger.

The logger will be initialized if it has not been initialized. By default a StreamHandler will be added. If log_file is specified, a FileHandler will also be added. The name of the root logger is the top-level package name, e.g., “mmocr”.

Parameters
  • log_file (str | None) – The log filename. If specified, a FileHandler will be added to the root logger.

  • log_level (int) – The root logger level. Note that only the process of rank 0 is affected, while other processes will set the level to “Error” and be silent most of the time.

Returns

The root logger.

Return type

logging.Logger

mmocr.utils.is_2dlist(x)[source]

Check whether x is a 2d-list ([[1], []]) or a 1d empty list ([]).

Notice:

The reason a 1d empty list is accepted is that some arguments from the gt annotation file or model predictions may be empty, though usually they should be 2d-lists.

mmocr.utils.is_3dlist(x)[source]

Check whether x is a 3d-list ([[[1], []]]), a 2d empty list ([[], []]) or a 1d empty list ([]).

Notice:

The reason 1d or 2d empty lists are accepted is that some arguments from the gt annotation file or model predictions may be empty, though usually they should be 3d-lists.

mmocr.utils.is_not_png(img_file)[source]

Check whether img_file is not a PNG image.

Parameters

img_file (str) – The input image file name

Returns

The bool flag indicating whether it is not png

mmocr.utils.is_on_same_line(box_a, box_b, min_y_overlap_ratio=0.8)[source]

Check if two boxes are on the same line by their y-axis coordinates.

Two boxes are on the same line if they overlap vertically, and the length of the overlapping line segment is greater than min_y_overlap_ratio * the height of either of the boxes.

Parameters
  • box_a (list), box_b (list) – Two bounding boxes to be checked

  • min_y_overlap_ratio (float) – The minimum vertical overlapping ratio allowed for boxes in the same line

Returns

The bool flag indicating if they are on the same line
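
Example

A minimal sketch with two quads (x1, y1, ..., x4, y4 layout assumed) that share almost the same vertical extent; under the rule above their 9-pixel overlap exceeds 0.8 times either height:

>>> from mmocr.utils import is_on_same_line
>>> box_a = [0, 0, 10, 0, 10, 10, 0, 10]
>>> box_b = [12, 1, 22, 1, 22, 11, 12, 11]
>>> is_on_same_line(box_a, box_b)
True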

mmocr.utils.list_from_file(filename, encoding='utf-8')[source]

Load a text file and parse the content as a list of strings. The trailing “\r” and “\n” of each line will be removed.

Note

This will be replaced by mmcv’s version after it supports encoding.

Parameters
  • filename (str) – Filename.

  • encoding (str) – Encoding used to open the file. Default utf-8.

Returns

A list of strings.

Return type

list[str]

mmocr.utils.list_to_file(filename, lines)[source]

Write a list of strings to a text file.

Parameters
  • filename (str) – The output filename. It will be created/overwritten.

  • lines (list(str)) – Data to be written.
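
Example

A minimal round-trip sketch with list_from_file above; 'labels.txt' is a throwaway file:

>>> from mmocr.utils import list_to_file, list_from_file
>>> list_to_file('labels.txt', ['line1', 'line2'])
>>> list_from_file('labels.txt')
['line1', 'line2']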

mmocr.utils.revert_sync_batchnorm(module)[source]

Helper function to convert all SyncBatchNorm layers in the model to BatchNormXd layers.

Adapted from @kapily’s work: (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547)

Parameters

module (nn.Module) – The module containing SyncBatchNorm layers.

Returns

The converted module with BatchNormXd layers.

Return type

module_output
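
Example

A minimal sketch for running a SyncBatchNorm model on CPU or a single GPU:

>>> import torch.nn as nn
>>> from mmocr.utils import revert_sync_batchnorm
>>> model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.SyncBatchNorm(8))
>>> model = revert_sync_batchnorm(model)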

mmocr.utils.stitch_boxes_into_lines(boxes, max_x_dist=10, min_y_overlap_ratio=0.8)[source]

Stitch fragmented boxes of words into lines.

Note: part of its logic is inspired by @Johndirr (https://github.com/faustomorales/keras-ocr/issues/22)

Parameters
  • boxes (list) – List of ocr results to be stitched

  • max_x_dist (int) – The maximum horizontal distance between the closest edges of neighboring boxes in the same line

  • min_y_overlap_ratio (float) – The minimum vertical overlapping ratio allowed for any pairs of neighboring boxes in the same line

Returns

List of merged boxes and texts

Return type

merged_boxes (list[dict])
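
Example

A minimal sketch, assuming each OCR result is a dict with a quad ‘box’ and its ‘text’; the 5-pixel gap between the closest edges is within max_x_dist, so the boxes are expected to merge into one line:

>>> from mmocr.utils import stitch_boxes_into_lines
>>> boxes = [
>>>     dict(box=[0, 0, 30, 0, 30, 10, 0, 10], text='Hello'),
>>>     dict(box=[35, 0, 70, 0, 70, 10, 35, 10], text='world'),
>>> ]
>>> lines = stitch_boxes_into_lines(boxes, max_x_dist=10)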

mmocr.models

common_backbones

class mmocr.models.common.backbones.UNet(in_channels=3, base_channels=64, num_stages=5, strides=(1, 1, 1, 1, 1), enc_num_convs=(2, 2, 2, 2, 2), dec_num_convs=(2, 2, 2, 2), downsamples=(True, True, True, True), enc_dilations=(1, 1, 1, 1, 1), dec_dilations=(1, 1, 1, 1), with_cp=False, conv_cfg=None, norm_cfg={'type': 'BN'}, act_cfg={'type': 'ReLU'}, upsample_cfg={'type': 'InterpConv'}, norm_eval=False, dcn=None, plugins=None, init_cfg=[{'type': 'Kaiming', 'layer': 'Conv2d'}, {'type': 'Constant', 'layer': ['_BatchNorm', 'GroupNorm'], 'val': 1}])[source]

UNet backbone. U-Net: Convolutional Networks for Biomedical Image Segmentation. https://arxiv.org/pdf/1505.04597.pdf

Parameters
  • in_channels (int) – Number of input image channels. Default: 3.

  • base_channels (int) – Number of base channels of each stage. The output channels of the first stage. Default: 64.

  • num_stages (int) – Number of stages in encoder, normally 5. Default: 5.

  • strides (Sequence[int 1 | 2]) – Strides of each stage in encoder. len(strides) is equal to num_stages. Normally the stride of the first stage in encoder is 1. If strides[i]=2, it uses stride convolution to downsample in the corresponding encoder stage. Default: (1, 1, 1, 1, 1).

  • enc_num_convs (Sequence[int]) – Number of convolutional layers in the convolution block of the corresponding encoder stage. Default: (2, 2, 2, 2, 2).

  • dec_num_convs (Sequence[int]) – Number of convolutional layers in the convolution block of the corresponding decoder stage. Default: (2, 2, 2, 2).

  • downsamples (Sequence[int]) – Whether to use MaxPool to downsample the feature map after the first stage of the encoder (stages: [1, num_stages)). If the corresponding encoder stage uses stride convolution (strides[i]=2), it will never use MaxPool to downsample, even if downsamples[i-1]=True. Default: (True, True, True, True).

  • enc_dilations (Sequence[int]) – Dilation rate of each stage in encoder. Default: (1, 1, 1, 1, 1).

  • dec_dilations (Sequence[int]) – Dilation rate of each stage in decoder. Default: (1, 1, 1, 1).

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.

  • conv_cfg (dict | None) – Config dict for convolution layer. Default: None.

  • norm_cfg (dict | None) – Config dict for normalization layer. Default: dict(type=’BN’).

  • act_cfg (dict | None) – Config dict for activation layer in ConvModule. Default: dict(type=’ReLU’).

  • upsample_cfg (dict) – The upsample config of the upsample module in decoder. Default: dict(type=’InterpConv’).

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • dcn (bool) – Use deformable convolution in convolutional layer or not. Default: None.

  • plugins (dict) – plugins for convolutional layers. Default: None.

Notice:

The input image size should be divisible by the whole downsample rate of the encoder. More detail of the whole downsample rate can be found in UNet._check_input_divisible.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

train(mode=True)[source]

Convert the model into training mode while keeping the normalization layers frozen.
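
Example

A minimal forward sketch; with the default five stages the whole downsample rate is 16, so the 64x64 input below satisfies the divisibility notice. The output layout (a list of decoder feature maps) is an assumption:

>>> import torch
>>> from mmocr.models.common.backbones import UNet
>>> unet = UNet(in_channels=3, base_channels=16)
>>> outs = unet(torch.rand(1, 3, 64, 64))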

class mmocr.models.common.losses.DiceLoss(eps=1e-06)[source]
forward(pred, target, mask=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmocr.models.common.losses.FocalLoss(gamma=2, weight=None, ignore_index=-100)[source]

Multi-class Focal loss implementation.

Parameters
  • gamma (float) – The larger the gamma, the smaller the loss weight of easier samples.

  • weight (float) – A manual rescaling weight given to each class.

  • ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient.

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
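
Example

A minimal sketch, assuming the forward pass takes raw logits of shape (N, C) and integer class targets of shape (N,):

>>> import torch
>>> from mmocr.models.common.losses import FocalLoss
>>> criterion = FocalLoss(gamma=2)
>>> logits = torch.randn(4, 10)
>>> targets = torch.randint(0, 10, (4,))
>>> loss = criterion(logits, targets)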

textdet_dense_heads

class mmocr.models.textdet.dense_heads.DBHead(in_channels, with_bias=False, downsample_ratio=1.0, loss={'type': 'DBLoss'}, postprocessor={'text_repr_type': 'quad', 'type': 'DBPostprocessor'}, init_cfg=[{'type': 'Kaiming', 'layer': 'Conv'}, {'type': 'Constant', 'layer': 'BatchNorm', 'val': 1.0, 'bias': 0.0001}], train_cfg=None, test_cfg=None, **kwargs)[source]

The class for DBNet head.

This was partially adapted from https://github.com/MhLiao/DB

Parameters
  • in_channels (int) – The number of input channels of the db head.

  • with_bias (bool) – Whether add bias in Conv2d layer.

  • downsample_ratio (float) – The downsample ratio of ground truths.

  • loss (dict) – Config of loss for dbnet.

  • postprocessor (dict) – Config of postprocessor for dbnet.

forward(inputs)[source]
Parameters

inputs (Tensor) – Shape (batch_size, hidden_size, h, w).

Returns

A tensor of the same shape as input.

Return type

Tensor

class mmocr.models.textdet.dense_heads.DRRGHead(in_channels, k_at_hops=(8, 4), num_adjacent_linkages=3, node_geo_feat_len=120, pooling_scale=1.0, pooling_output_size=(4, 3), nms_thr=0.3, min_width=8.0, max_width=24.0, comp_shrink_ratio=1.03, comp_ratio=0.4, comp_score_thr=0.3, text_region_thr=0.2, center_region_thr=0.2, center_region_area_thr=50, local_graph_thr=0.7, loss={'type': 'DRRGLoss'}, postprocessor={'link_thr': 0.85, 'type': 'DRRGPostprocessor'}, train_cfg=None, test_cfg=None, init_cfg={'mean': 0, 'override': {'name': 'out_conv'}, 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

The class for DRRG head: Deep Relational Reasoning Graph Network for Arbitrary Shape Text Detection.

Parameters
  • k_at_hops (tuple(int)) – The number of i-hop neighbors, i = 1, 2.

  • num_adjacent_linkages (int) – The number of linkages when constructing adjacent matrix.

  • node_geo_feat_len (int) – The length of embedded geometric feature vector of a component.

  • pooling_scale (float) – The spatial scale of rotated RoI-Align.

  • pooling_output_size (tuple(int)) – The output size of RRoI-Aligning.

  • nms_thr (float) – The locality-aware NMS threshold of text components.

  • min_width (float) – The minimum width of text components.

  • max_width (float) – The maximum width of text components.

  • comp_shrink_ratio (float) – The shrink ratio of text components.

  • comp_ratio (float) – The reciprocal of aspect ratio of text components.

  • comp_score_thr (float) – The score threshold of text components.

  • text_region_thr (float) – The threshold for text region probability map.

  • center_region_thr (float) – The threshold for text center region probability map.

  • center_region_area_thr (int) – The threshold for filtering small-sized text center region.

  • local_graph_thr (float) – The threshold to filter identical local graphs.

  • loss (dict) – The config of loss that DRRGHead uses.

  • postprocessor (dict) – Config of postprocessor for Drrg.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(inputs, gt_comp_attribs)[source]
Parameters
  • inputs (Tensor) – Shape of \((N, C, H, W)\).

  • gt_comp_attribs (list[ndarray]) – The padded text component attributes. Shape: (num_component, 8).

Returns

Returns (pred_maps, (gcn_pred, gt_labels)).

  • pred_maps (Tensor): Prediction map with shape \((N, C_{out}, H, W)\).
  • gcn_pred (Tensor): Prediction from GCN module, with shape \((N, 2)\).
  • gt_labels (Tensor): Ground-truth label with shape \((N, 8)\).

Return type

tuple

get_boundary(edges, scores, text_comps, img_metas, rescale)[source]

Compute text boundaries via post processing.

Parameters
  • edges (ndarray) – The edge array of shape \((N, 2)\), each row is a pair of text component indices that makes up an edge in the graph.

  • scores (ndarray) – The edge score array.

  • text_comps (ndarray) – The text components.

  • img_metas (list[dict]) – The image meta infos.

  • rescale (bool) – Rescale boundaries to the original image resolution.

Returns

The result dict containing key boundary_result.

Return type

dict

single_test(feat_maps)[source]
Parameters

feat_maps (Tensor) – Shape of \((N, C, H, W)\).

Returns

Returns (edge, score, text_comps).

  • edge (ndarray): The edge array of shape \((N, 2)\) where each row is a pair of text component indices that makes up an edge in graph.
  • score (ndarray): The score array of shape \((N,)\), corresponding to the edge above.
  • text_comps (ndarray): The text components of shape \((N, 9)\) where each row corresponds to one box and its score: (x1, y1, x2, y2, x3, y3, x4, y4, score).

Return type

tuple

class mmocr.models.textdet.dense_heads.FCEHead(in_channels, scales, fourier_degree=5, nms_thr=0.1, loss={'num_sample': 50, 'type': 'FCELoss'}, postprocessor={'alpha': 1.0, 'beta': 2.0, 'num_reconstr_points': 50, 'score_thr': 0.3, 'text_repr_type': 'poly', 'type': 'FCEPostprocessor'}, train_cfg=None, test_cfg=None, init_cfg={'mean': 0, 'override': [{'name': 'out_conv_cls'}, {'name': 'out_conv_reg'}], 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

The class for implementing FCENet head.

FCENet(CVPR2021): Fourier Contour Embedding for Arbitrary-shaped Text Detection

Parameters
  • in_channels (int) – The number of input channels.

  • scales (list[int]) – The scale of each layer.

  • fourier_degree (int) – The maximum Fourier transform degree k.

  • nms_thr (float) – The threshold of nms.

  • loss (dict) – Config of loss for FCENet.

  • postprocessor (dict) – Config of postprocessor for FCENet.

forward(feats)[source]
Parameters

feats (list[Tensor]) – Each tensor has the shape of \((N, C_i, H_i, W_i)\).

Returns

Each pair of tensors corresponds to the classification result and regression result computed from the input tensor with the same index. They have the shapes of \((N, C_{cls,i}, H_i, W_i)\) and \((N, C_{out,i}, H_i, W_i)\).

Return type

list[[Tensor, Tensor]]

get_boundary(score_maps, img_metas, rescale)[source]

Compute text boundaries via post processing.

Parameters
  • score_maps (Tensor) – The text score map.

  • img_metas (dict) – The image meta info.

  • rescale (bool) – Rescale boundaries to the original image resolution if true, and keep the score_maps resolution if false.

Returns

A dict where boundary results are stored in boundary_result.

Return type

dict

class mmocr.models.textdet.dense_heads.HeadMixin(loss, postprocessor)[source]

Base head class for text detection, including loss calculation and postprocessing.

Parameters
  • loss (dict) – Config to build loss.

  • postprocessor (dict) – Config to build postprocessor.

get_boundary(score_maps, img_metas, rescale)[source]

Compute text boundaries via post processing.

Parameters
  • score_maps (Tensor) – The text score map.

  • img_metas (dict) – The image meta info.

  • rescale (bool) – Rescale boundaries to the original image resolution if true, and keep the score_maps resolution if false.

Returns

A dict where boundary results are stored in boundary_result.

Return type

dict

loss(pred_maps, **kwargs)[source]

Compute the loss for scene text detection.

Parameters

pred_maps (Tensor) – The input score maps of shape \((N, C, H, W)\).

Returns

The dict for losses.

Return type

dict

resize_boundary(boundaries, scale_factor)[source]

Rescale boundaries via scale_factor.

Parameters
  • boundaries (list[list[float]]) – The boundary list. Each boundary has \(2k+1\) elements with \(k>=4\).

  • scale_factor (ndarray) – The scale factor of size \((4,)\).

Returns

The scaled boundaries.

Return type

list[list[float]]

class mmocr.models.textdet.dense_heads.PANHead(in_channels, out_channels, downsample_ratio=0.25, loss={'type': 'PANLoss'}, postprocessor={'text_repr_type': 'poly', 'type': 'PANPostprocessor'}, train_cfg=None, test_cfg=None, init_cfg={'mean': 0, 'override': {'name': 'out_conv'}, 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

The class for PANet head.

Parameters
  • in_channels (list[int]) – A list of 4 numbers of input channels.

  • out_channels (int) – Number of output channels.

  • downsample_ratio (float) – Downsample ratio.

  • loss (dict) – Configuration dictionary for loss type. Supported loss types are “PANLoss” and “PSELoss”.

  • postprocessor (dict) – Config of postprocessor for PANet.

  • train_cfg (dict) – Deprecated.

  • test_cfg (dict) – Deprecated.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(inputs)[source]
Parameters

inputs (list[Tensor] | Tensor) – Each tensor has the shape of \((N, C_i, W, H)\), where \(\sum_iC_i=C_{in}\) and \(C_{in}\) is input_channels.

Returns

A tensor of shape \((N, C_{out}, W, H)\) where \(C_{out}\) is output_channels.

Return type

Tensor

class mmocr.models.textdet.dense_heads.PSEHead(in_channels, out_channels, downsample_ratio=0.25, loss={'type': 'PSELoss'}, postprocessor={'text_repr_type': 'poly', 'type': 'PSEPostprocessor'}, train_cfg=None, test_cfg=None, init_cfg=None, **kwargs)[source]

The class for PSENet head.

Parameters
  • in_channels (list[int]) – A list of 4 numbers of input channels.

  • out_channels (int) – Number of output channels.

  • downsample_ratio (float) – Downsample ratio.

  • loss (dict) – Configuration dictionary for loss type. Supported loss types are “PANLoss” and “PSELoss”.

  • postprocessor (dict) – Config of postprocessor for PSENet.

  • train_cfg (dict) – Deprecated.

  • test_cfg (dict) – Deprecated.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

class mmocr.models.textdet.dense_heads.TextSnakeHead(in_channels, out_channels=5, downsample_ratio=1.0, loss={'type': 'TextSnakeLoss'}, postprocessor={'text_repr_type': 'poly', 'type': 'TextSnakePostprocessor'}, train_cfg=None, test_cfg=None, init_cfg={'mean': 0, 'override': {'name': 'out_conv'}, 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

The class for TextSnake head: TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes.

Parameters
  • in_channels (int) – Number of input channels.

  • out_channels (int) – Number of output channels.

  • downsample_ratio (float) – Downsample ratio.

  • loss (dict) – Configuration dictionary for loss type.

  • postprocessor (dict) – Config of postprocessor for TextSnake.

  • train_cfg – Deprecated.

  • test_cfg – Deprecated.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(inputs)[source]
Parameters

inputs (Tensor) – Shape \((N, C_{in}, H, W)\), where \(C_{in}\) is in_channels. \(H\) and \(W\) should be the same as the input of backbone.

Returns

A tensor of shape \((N, 5, H, W)\).

Return type

Tensor

textdet_necks

class mmocr.models.textdet.necks.FPEM_FFM(in_channels, conv_out=128, fpem_repeat=2, align_corners=False, init_cfg={'distribution': 'uniform', 'layer': 'Conv2d', 'type': 'Xavier'})[source]

This code is from https://github.com/WenmuZhou/PAN.pytorch.

Parameters
  • in_channels (list[int]) – A list of 4 numbers of input channels.

  • conv_out (int) – Number of output channels.

  • fpem_repeat (int) – Number of FPEM layers before FFM operations.

  • align_corners (bool) – The interpolation behaviour in FFM operation, used in torch.nn.functional.interpolate().

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(x)[source]
Parameters

x (list[Tensor]) – A list of four tensors of shape \((N, C_i, H_i, W_i)\), representing C2, C3, C4, C5 features respectively. \(C_i\) should match the number in in_channels.

Returns

Four tensors of shape \((N, C_{out}, H_0, W_0)\) where \(C_{out}\) is conv_out.

Return type

list[Tensor]

class mmocr.models.textdet.necks.FPNC(in_channels, lateral_channels=256, out_channels=64, bias_on_lateral=False, bn_re_on_lateral=False, bias_on_smooth=False, bn_re_on_smooth=False, conv_after_concat=False, init_cfg=None)[source]

FPN-like fusion module in Real-time Scene Text Detection with Differentiable Binarization.

This was partially adapted from https://github.com/MhLiao/DB and https://github.com/WenmuZhou/DBNet.pytorch.

Parameters
  • in_channels (list[int]) – A list of numbers of input channels.

  • lateral_channels (int) – Number of channels for lateral layers.

  • out_channels (int) – Number of output channels.

  • bias_on_lateral (bool) – Whether to use bias on lateral convolutional layers.

  • bn_re_on_lateral (bool) – Whether to use BatchNorm and ReLU on lateral convolutional layers.

  • bias_on_smooth (bool) – Whether to use bias on smoothing layer.

  • bn_re_on_smooth (bool) – Whether to use BatchNorm and ReLU on smoothing layer.

  • conv_after_concat (bool) – Whether to add a convolution layer after the concatenation of predictions.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(inputs)[source]
Parameters

inputs (list[Tensor]) – Each tensor has the shape of \((N, C_i, H_i, W_i)\). It usually expects 4 tensors (C2-C5 features) from ResNet.

Returns

A tensor of shape \((N, C_{out}, H_0, W_0)\) where \(C_{out}\) is out_channels.

Return type

Tensor

class mmocr.models.textdet.necks.FPNF(in_channels=[256, 512, 1024, 2048], out_channels=256, fusion_type='concat', init_cfg={'distribution': 'uniform', 'layer': 'Conv2d', 'type': 'Xavier'})[source]

FPN-like fusion module in Shape Robust Text Detection with Progressive Scale Expansion Network.

Parameters
  • in_channels (list[int]) – A list of number of input channels.

  • out_channels (int) – The number of output channels.

  • fusion_type (str) – Type of the final feature fusion layer. Available options are “concat” and “add”.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(inputs)[source]
Parameters

inputs (list[Tensor]) – Each tensor has the shape of \((N, C_i, H_i, W_i)\). It usually expects 4 tensors (C2-C5 features) from ResNet.

Returns

A tensor of shape \((N, C_{out}, H_0, W_0)\) where \(C_{out}\) is out_channels.

Return type

Tensor

class mmocr.models.textdet.necks.FPN_UNet(in_channels, out_channels, init_cfg={'distribution': 'uniform', 'layer': ['Conv2d', 'ConvTranspose2d'], 'type': 'Xavier'})[source]

The class for implementing DRRG and TextSnake U-Net-like FPN.

DRRG: Deep Relational Reasoning Graph Network for Arbitrary Shape Text Detection.

TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes.

Parameters
  • in_channels (list[int]) – Number of input channels at each scale. The length of the list should be 4.

  • out_channels (int) – The number of output channels.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(x)[source]
Parameters

x (list[Tensor] | tuple[Tensor]) – A list of four tensors of shape \((N, C_i, H_i, W_i)\), representing C2, C3, C4, C5 features respectively. \(C_i\) should match the number in in_channels.

Returns

Shape \((N, C, H, W)\) where \(H=4H_0\) and \(W=4W_0\).

Return type

Tensor

textdet_detectors

class mmocr.models.textdet.detectors.DBNet(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None, show_score=False, init_cfg=None)[source]

The class for implementing DBNet text detector: Real-time Scene Text Detection with Differentiable Binarization.

[https://arxiv.org/abs/1911.08947].

class mmocr.models.textdet.detectors.DRRG(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None, show_score=False, init_cfg=None)[source]

The class for implementing DRRG text detector. Deep Relational Reasoning Graph Network for Arbitrary Shape Text Detection.

[https://arxiv.org/abs/2003.07493]

forward_train(img, img_metas, **kwargs)[source]
Parameters
  • img (Tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.

  • img_metas (list[dict]) – A List of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details of the values of these keys see mmdet.datasets.pipelines.Collect.

Returns

A dictionary of loss components.

Return type

dict[str, Tensor]

simple_test(img, img_metas, rescale=False)[source]

Test function without test-time augmentation.

Parameters
  • img (torch.Tensor) – Images with shape (N, C, H, W).

  • img_metas (list[dict]) – List of image information.

  • rescale (bool, optional) – Whether to rescale the results. Defaults to False.

Returns

BBox results of each image and classes.

The outer list corresponds to each image. The inner list corresponds to each class.

Return type

list[list[np.ndarray]]

class mmocr.models.textdet.detectors.FCENet(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None, show_score=False, init_cfg=None)[source]

The class for implementing FCENet text detector. FCENet(CVPR2021): Fourier Contour Embedding for Arbitrary-shaped Text Detection.

[https://arxiv.org/abs/2104.10442]

simple_test(img, img_metas, rescale=False)[source]

Test function without test-time augmentation.

Parameters
  • img (torch.Tensor) – Images with shape (N, C, H, W).

  • img_metas (list[dict]) – List of image information.

  • rescale (bool, optional) – Whether to rescale the results. Defaults to False.

Returns

BBox results of each image and classes.

The outer list corresponds to each image. The inner list corresponds to each class.

Return type

list[list[np.ndarray]]

class mmocr.models.textdet.detectors.OCRMaskRCNN(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None, text_repr_type='quad', show_score=False, init_cfg=None)[source]

Mask RCNN tailored for OCR.

get_boundary(results)[source]

Convert segmentation into text boundaries.

Parameters

results (tuple) – The result tuple. The first element is segmentation while the second is its scores.

Returns

A result dict containing ‘boundary_result’.

Return type

dict

simple_test(img, img_metas, proposals=None, rescale=False)[source]

Test without augmentation.

class mmocr.models.textdet.detectors.PANet(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None, show_score=False, init_cfg=None)[source]

The class for implementing PANet text detector:

Efficient and Accurate Arbitrary-Shaped Text Detection with Pixel Aggregation Network [https://arxiv.org/abs/1908.05900].

class mmocr.models.textdet.detectors.PSENet(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None, show_score=False, init_cfg=None)[source]

The class for implementing PSENet text detector: Shape Robust Text Detection with Progressive Scale Expansion Network.

[https://arxiv.org/abs/1806.02559].

class mmocr.models.textdet.detectors.SingleStageTextDetector(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None, init_cfg=None)[source]

The class for implementing single stage text detector.

forward_train(img, img_metas, **kwargs)[source]
Parameters
  • img (Tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.

  • img_metas (list[dict]) – A list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys, see mmdet.datasets.pipelines.Collect.

Returns

A dictionary of loss components.

Return type

dict[str, Tensor]

simple_test(img, img_metas, rescale=False)[source]

Test function without test-time augmentation.

Parameters
  • img (torch.Tensor) – Images with shape (N, C, H, W).

  • img_metas (list[dict]) – List of image information.

  • rescale (bool, optional) – Whether to rescale the results. Defaults to False.

Returns

BBox results of each image and classes.

The outer list corresponds to each image. The inner list corresponds to each class.

Return type

list[list[np.ndarray]]

class mmocr.models.textdet.detectors.TextDetectorMixin(show_score)[source]

Base class for text detector, only to show results.

Parameters

show_score (bool) – Whether to show text instance score.

show_result(img, result, score_thr=0.5, bbox_color='green', text_color='green', thickness=1, font_scale=0.5, win_name='', show=False, wait_time=0, out_file=None)[source]

Draw result over img.

Parameters
  • img (str or Tensor) – The image to be displayed.

  • result (dict) – The results to draw over img.

  • score_thr (float, optional) – Minimum score of bboxes to be shown. Default: 0.5.

  • bbox_color (str or tuple or Color) – Color of bbox lines.

  • text_color (str or tuple or Color) – Color of texts.

  • thickness (int) – Thickness of lines.

  • font_scale (float) – Font scales of texts.

  • win_name (str) – The window name.

  • wait_time (int) – Value of waitKey param. Default: 0.

  • show (bool) – Whether to show the image. Default: False.

  • out_file (str or None) – The filename to write the image. Default: None.

class mmocr.models.textdet.detectors.TextSnake(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None, show_score=False, init_cfg=None)[source]

The class for implementing TextSnake text detector: TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes.

[https://arxiv.org/abs/1807.01544]

textdet_losses

class mmocr.models.textdet.losses.DBLoss(alpha=1, beta=1, reduction='mean', negative_ratio=3.0, eps=1e-06, bbce_loss=False)[source]

The class for implementing DBNet loss.

This is partially adapted from https://github.com/MhLiao/DB.

Parameters
  • alpha (float) – The binary loss coef.

  • beta (float) – The threshold loss coef.

  • reduction (str) – The way to reduce the loss.

  • negative_ratio (float) – The negative/positive ratio for OHEM.

  • eps (float) – Epsilon in the threshold loss function.

  • bbce_loss (bool) – Whether to use balanced bce for probability loss. If False, dice loss will be used instead.

bitmasks2tensor(bitmasks, target_sz)[source]

Convert Bitmasks to tensor.

Parameters
  • bitmasks (list[BitmapMasks]) – The BitmapMasks list. Each item is for one img.

  • target_sz (tuple(int, int)) – The target tensor of size \((H, W)\).

Returns

The list of kernel tensors. Each element stands for one kernel level.

Return type

list[Tensor]

forward(preds, downsample_ratio, gt_shrink, gt_shrink_mask, gt_thr, gt_thr_mask)[source]

Compute DBNet loss.

Parameters
  • preds (Tensor) – The output tensor with size \((N, 3, H, W)\).

  • downsample_ratio (float) – The downsample ratio for the ground truths.

  • gt_shrink (list[BitmapMasks]) – The mask list with each element being the shrunk text mask for one img.

  • gt_shrink_mask (list[BitmapMasks]) – The effective mask list with each element being the shrunk effective mask for one img.

  • gt_thr (list[BitmapMasks]) – The mask list with each element being the threshold text mask for one img.

  • gt_thr_mask (list[BitmapMasks]) – The effective mask list with each element being the threshold effective mask for one img.

Returns

The dict for dbnet losses with “loss_prob”, “loss_db” and “loss_thresh”.

Return type

dict

class mmocr.models.textdet.losses.DRRGLoss(ohem_ratio=3.0)[source]

The class for implementing DRRG loss. This is partially adapted from https://github.com/GXYM/DRRG licensed under the MIT license.

DRRG: Deep Relational Reasoning Graph Network for Arbitrary Shape Text Detection.

Parameters

ohem_ratio (float) – The negative/positive ratio in ohem.

balance_bce_loss(pred, gt, mask)[source]

Balanced Binary-CrossEntropy Loss.

Parameters
  • pred (Tensor) – Shape of \((1, H, W)\).

  • gt (Tensor) – Shape of \((1, H, W)\).

  • mask (Tensor) – Shape of \((1, H, W)\).

Returns

Balanced bce loss.

Return type

Tensor

bitmasks2tensor(bitmasks, target_sz)[source]

Convert Bitmasks to tensor.

Parameters
  • bitmasks (list[BitmapMasks]) – The BitmapMasks list. Each item is for one img.

  • target_sz (tuple(int, int)) – The target tensor of size \((H, W)\).

Returns

The list of kernel tensors. Each element stands for one kernel level.

Return type

list[Tensor]

forward(preds, downsample_ratio, gt_text_mask, gt_center_region_mask, gt_mask, gt_top_height_map, gt_bot_height_map, gt_sin_map, gt_cos_map)[source]

Compute Drrg loss.

Parameters
  • preds (tuple(Tensor)) – The first is the prediction map with shape \((N, C_{out}, H, W)\). The second is prediction from GCN module, with shape \((N, 2)\). The third is ground-truth label with shape \((N, 8)\).

  • downsample_ratio (float) – The downsample ratio.

  • gt_text_mask (list[BitmapMasks]) – Text mask.

  • gt_center_region_mask (list[BitmapMasks]) – Center region mask.

  • gt_mask (list[BitmapMasks]) – Effective mask.

  • gt_top_height_map (list[BitmapMasks]) – Top height map.

  • gt_bot_height_map (list[BitmapMasks]) – Bottom height map.

  • gt_sin_map (list[BitmapMasks]) – Sinusoid map.

  • gt_cos_map (list[BitmapMasks]) – Cosine map.

Returns

A loss dict with loss_text, loss_center, loss_height, loss_sin, loss_cos, and loss_gcn.

Return type

dict

gcn_loss(gcn_data)[source]

CrossEntropy Loss from gcn module.

Parameters

gcn_data (tuple(Tensor, Tensor)) – The first is the prediction with shape \((N, 2)\) and the second is the gt label with shape \((m, n)\) where \(m * n = N\).

Returns

CrossEntropy loss.

Return type

Tensor

class mmocr.models.textdet.losses.FCELoss(fourier_degree, num_sample, ohem_ratio=3.0)[source]

The class for implementing FCENet loss.

FCENet(CVPR2021): Fourier Contour Embedding for Arbitrary-shaped Text Detection

Parameters
  • fourier_degree (int) – The maximum Fourier transform degree k.

  • num_sample (int) – The sampling point number of the regression loss. If it is too small, FCENet tends to overfit.

  • ohem_ratio (float) – the negative/positive ratio in OHEM.

forward(preds, _, p3_maps, p4_maps, p5_maps)[source]

Compute FCENet loss.

Parameters
  • preds (list[list[Tensor]]) – The outer list indicates images in a batch, and the inner list indicates the classification prediction map (with shape \((N, C, H, W)\)) and regression map (with shape \((N, C, H, W)\)).

  • p3_maps (list[ndarray]) – List of level 3 ground truth target maps with shape \((C, H, W)\).

  • p4_maps (list[ndarray]) – List of level 4 ground truth target maps with shape \((C, H, W)\).

  • p5_maps (list[ndarray]) – List of level 5 ground truth target maps with shape \((C, H, W)\).

Returns

A loss dict with loss_text, loss_center, loss_reg_x and loss_reg_y.

Return type

dict

fourier2poly(real_maps, imag_maps)[source]

Transform Fourier coefficient maps to polygon maps.

Parameters
  • real_maps (tensor) – A map composed of the real parts of the Fourier coefficients, whose shape is (-1, 2k+1)

  • imag_maps (tensor) – A map composed of the imag parts of the Fourier coefficients, whose shape is (-1, 2k+1)

Returns

x_maps (tensor): A map composed of the x value of the polygon represented by n sample points (xn, yn), whose shape is (-1, n).

y_maps (tensor): A map composed of the y value of the polygon represented by n sample points (xn, yn), whose shape is (-1, n).

class mmocr.models.textdet.losses.PANLoss(alpha=0.5, beta=0.25, delta_aggregation=0.5, delta_discrimination=3, ohem_ratio=3, reduction='mean', speedup_bbox_thr=- 1)[source]

The class for implementing PANet loss. This was partially adapted from https://github.com/WenmuZhou/PAN.pytorch.

PANet: Efficient and Accurate Arbitrary-Shaped Text Detection with Pixel Aggregation Network.

Parameters
  • alpha (float) – The kernel loss coef.

  • beta (float) – The aggregation and discriminative loss coef.

  • delta_aggregation (float) – The constant for aggregation loss.

  • delta_discrimination (float) – The constant for discriminative loss.

  • ohem_ratio (float) – The negative/positive ratio in ohem.

  • reduction (str) – The way to reduce the loss.

  • speedup_bbox_thr (int) – Speed up if speedup_bbox_thr > 0 and < bbox num.

aggregation_discrimination_loss(gt_texts, gt_kernels, inst_embeds)[source]

Compute the aggregation and discriminative losses.

Parameters
  • gt_texts (Tensor) – The ground truth text mask of size \((N, 1, H, W)\).

  • gt_kernels (Tensor) – The ground truth text kernel mask of size \((N, 1, H, W)\).

  • inst_embeds (Tensor) – The text instance embedding tensor of size \((N, 1, H, W)\).

Returns

A tuple of aggregation loss and discriminative loss before reduction.

Return type

(Tensor, Tensor)

bitmasks2tensor(bitmasks, target_sz)[source]

Convert Bitmasks to tensor.

Parameters
  • bitmasks (list[BitmapMasks]) – The BitmapMasks list. Each item is for one img.

  • target_sz (tuple(int, int)) – The target tensor of size \((H, W)\).

Returns

The list of kernel tensors. Each element stands for one kernel level.

Return type

list[Tensor]

forward(preds, downsample_ratio, gt_kernels, gt_mask)[source]

Compute PANet loss.

Parameters
  • preds (Tensor) – The output tensor of size \((N, 6, H, W)\).

  • downsample_ratio (float) – The downsample ratio between preds and the input img.

  • gt_kernels (list[BitmapMasks]) – The kernel list with each element being the text kernel mask for one img.

  • gt_mask (list[BitmapMasks]) – The effective mask list with each element being the effective mask for one img.

Returns

A loss dict with loss_text, loss_kernel, loss_aggregation and loss_discrimination.

Return type

dict

ohem_batch(text_scores, gt_texts, gt_mask)[source]

OHEM sampling for a batch of imgs.

Parameters
  • text_scores (Tensor) – The text scores of size \((H, W)\).

  • gt_texts (Tensor) – The gt text masks of size \((H, W)\).

  • gt_mask (Tensor) – The gt effective mask of size \((H, W)\).

Returns

The sampled mask of size \((H, W)\).

Return type

Tensor

ohem_img(text_score, gt_text, gt_mask)[source]

Sample the top-k maximal negative samples and all positive samples.

Parameters
  • text_score (Tensor) – The text score of size \((H, W)\).

  • gt_text (Tensor) – The ground truth text mask of size \((H, W)\).

  • gt_mask (Tensor) – The effective region mask of size \((H, W)\).

Returns

The sampled pixel mask of size \((H, W)\).

Return type

Tensor

class mmocr.models.textdet.losses.PSELoss(alpha=0.7, ohem_ratio=3, reduction='mean', kernel_sample_type='adaptive')[source]

The class for implementing PSENet loss. This is partially adapted from https://github.com/whai362/PSENet.

PSENet: Shape Robust Text Detection with Progressive Scale Expansion Network.

Parameters
  • alpha (float) – Text loss coefficient, and \(1-\alpha\) is the kernel loss coefficient.

  • ohem_ratio (float) – The negative/positive ratio in ohem.

  • reduction (str) – The way to reduce the loss. Available options are “mean” and “sum”.

forward(score_maps, downsample_ratio, gt_kernels, gt_mask)[source]

Compute PSENet loss.

Parameters
  • score_maps (tensor) – The output tensor with size of \((N, 6, H, W)\).

  • downsample_ratio (float) – The downsample ratio between score_maps and the input img.

  • gt_kernels (list[BitmapMasks]) – The kernel list with each element being the text kernel mask for one img.

  • gt_mask (list[BitmapMasks]) – The effective mask list with each element being the effective mask for one img.

Returns

A loss dict with loss_text and loss_kernel.

Return type

dict

class mmocr.models.textdet.losses.TextSnakeLoss(ohem_ratio=3.0)[source]

The class for implementing TextSnake loss. This is partially adapted from https://github.com/princewang1994/TextSnake.pytorch.

TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes.

Parameters

ohem_ratio (float) – The negative/positive ratio in ohem.

bitmasks2tensor(bitmasks, target_sz)[source]

Convert Bitmasks to tensor.

Parameters
  • bitmasks (list[BitmapMasks]) – The BitmapMasks list. Each item is for one img.

  • target_sz (tuple(int, int)) – The target tensor of size \((H, W)\).

Returns

The list of kernel tensors. Each element stands for one kernel level.

Return type

list[Tensor]

forward(pred_maps, downsample_ratio, gt_text_mask, gt_center_region_mask, gt_mask, gt_radius_map, gt_sin_map, gt_cos_map)[source]
Parameters
  • pred_maps (Tensor) – The prediction map of shape \((N, 5, H, W)\), where each dimension is the map of “text_region”, “center_region”, “sin_map”, “cos_map”, and “radius_map” respectively.

  • downsample_ratio (float) – Downsample ratio.

  • gt_text_mask (list[BitmapMasks]) – Gold text masks.

  • gt_center_region_mask (list[BitmapMasks]) – Gold center region masks.

  • gt_mask (list[BitmapMasks]) – Gold general masks.

  • gt_radius_map (list[BitmapMasks]) – Gold radius maps.

  • gt_sin_map (list[BitmapMasks]) – Gold sin maps.

  • gt_cos_map (list[BitmapMasks]) – Gold cos maps.

Returns

A loss dict with loss_text, loss_center, loss_radius, loss_sin and loss_cos.

Return type

dict

textdet_postprocess

class mmocr.models.textdet.postprocess.DBPostprocessor(text_repr_type='poly', mask_thr=0.3, min_text_score=0.3, min_text_width=5, unclip_ratio=1.5, max_candidates=3000, **kwargs)[source]

Decoding predictions of DbNet to instances. This is partially adapted from https://github.com/MhLiao/DB.

Parameters
  • text_repr_type (str) – The boundary encoding type ‘poly’ or ‘quad’.

  • mask_thr (float) – The mask threshold value for binarization.

  • min_text_score (float) – The threshold value for converting binary map to shrink text regions.

  • min_text_width (int) – The minimum width of boundary polygon/box predicted.

  • unclip_ratio (float) – The unclip ratio for text regions dilation.

  • max_candidates (int) – The maximum candidate number.

class mmocr.models.textdet.postprocess.DRRGPostprocessor(link_thr, **kwargs)[source]

Merge text components and construct boundaries of text instances.

Parameters

link_thr (float) – The edge score threshold.

class mmocr.models.textdet.postprocess.FCEPostprocessor(fourier_degree, num_reconstr_points, text_repr_type='poly', alpha=1.0, beta=2.0, score_thr=0.3, nms_thr=0.1, **kwargs)[source]

Decoding predictions of FCENet to instances.

Parameters
  • fourier_degree (int) – The maximum Fourier transform degree k.

  • num_reconstr_points (int) – The points number of the polygon reconstructed from predicted Fourier coefficients.

  • text_repr_type (str) – Boundary encoding type ‘poly’ or ‘quad’.

  • scale (int) – The down-sample scale of the prediction.

  • alpha (float) – The parameter to calculate final scores: \(Score_{final} = Score_{\text{text region}}^{\alpha} \cdot Score_{\text{text center region}}^{\beta}\).

  • beta (float) – The parameter to calculate final score.

  • score_thr (float) – The threshold used to filter out the final candidates.

  • nms_thr (float) – The threshold of nms.

class mmocr.models.textdet.postprocess.PANPostprocessor(text_repr_type='poly', min_text_confidence=0.5, min_kernel_confidence=0.5, min_text_avg_confidence=0.85, min_text_area=16, **kwargs)[source]

Convert scores to quadrangles via post processing in PANet. This is partially adapted from https://github.com/WenmuZhou/PAN.pytorch.

Parameters
  • text_repr_type (str) – The boundary encoding type ‘poly’ or ‘quad’.

  • min_text_confidence (float) – The minimal text confidence.

  • min_kernel_confidence (float) – The minimal kernel confidence.

  • min_text_avg_confidence (float) – The minimal text average confidence.

  • min_text_area (int) – The minimal text instance region area.

class mmocr.models.textdet.postprocess.PSEPostprocessor(text_repr_type='poly', min_kernel_confidence=0.5, min_text_avg_confidence=0.85, min_kernel_area=0, min_text_area=16, **kwargs)[source]

Decoding predictions of PSENet to instances. This is partially adapted from https://github.com/whai362/PSENet.

Parameters
  • text_repr_type (str) – The boundary encoding type ‘poly’ or ‘quad’.

  • min_kernel_confidence (float) – The minimal kernel confidence.

  • min_text_avg_confidence (float) – The minimal text average confidence.

  • min_kernel_area (int) – The minimal text kernel area.

  • min_text_area (int) – The minimal text instance region area.

class mmocr.models.textdet.postprocess.TextSnakePostprocessor(text_repr_type='poly', min_text_region_confidence=0.6, min_center_region_confidence=0.2, min_center_area=30, disk_overlap_thr=0.03, radius_shrink_ratio=1.03, **kwargs)[source]

Decoding predictions of TextSnake to instances. This was partially adapted from https://github.com/princewang1994/TextSnake.pytorch.

Parameters
  • text_repr_type (str) – The boundary encoding type ‘poly’ or ‘quad’.

  • min_text_region_confidence (float) – The confidence threshold of text region in TextSnake.

  • min_center_region_confidence (float) – The confidence threshold of text center region in TextSnake.

  • min_center_area (int) – The minimal text center region area.

  • disk_overlap_thr (float) – The radius overlap threshold for merging disks.

  • radius_shrink_ratio (float) – The shrink ratio of ordered disks radii.

textrecog_recognizer

class mmocr.models.textrecog.recognizer.ABINet(preprocessor=None, backbone=None, encoder=None, decoder=None, iter_size=1, fuser=None, loss=None, label_convertor=None, train_cfg=None, test_cfg=None, max_seq_len=40, pretrained=None, init_cfg=None)[source]

Implementation of Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition.

[https://arxiv.org/pdf/2103.06495.pdf]

forward_train(img, img_metas)[source]
Parameters
  • img (tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.

  • img_metas (list[dict]) – A list of image info dict where each dict contains: ‘img_shape’, ‘filename’, and may also contain ‘ori_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet.datasets.pipelines.Collect.

Returns

A dictionary of loss components.

Return type

dict[str, tensor]

simple_test(img, img_metas, **kwargs)[source]

Test function without test time augmentation.

Parameters
  • img (torch.Tensor) – Image input tensor.

  • img_metas (list[dict]) – List of image information.

Returns

Text label result of each image.

Return type

list[str]

class mmocr.models.textrecog.recognizer.BaseRecognizer(init_cfg=None)[source]

Base class for text recognition.

abstract aug_test(imgs, img_metas, **kwargs)[source]

Test function with test time augmentation.

Parameters
  • imgs (list[tensor]) – Tensor should have shape NxCxHxW, which contains all images in the batch.

  • img_metas (list[list[dict]]) – The metadata of images.

abstract extract_feat(imgs)[source]

Extract features from images.

forward(img, img_metas, return_loss=True, **kwargs)[source]

Calls either forward_train() or forward_test() depending on whether return_loss is True.

Note that img and img_meta are single-nested (i.e. tensor and list[dict]).
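
The dispatch can be sketched as below; this is a simplified stand-in with placeholder returns, not MMOCR's actual implementation:

    class RecognizerSketch:
        """Minimal sketch of BaseRecognizer's return_loss dispatch."""

        def forward_train(self, img, img_metas, **kwargs):
            return {'loss_ce': 0.0}                   # placeholder loss dict

        def forward_test(self, img, img_metas, **kwargs):
            return [{'text': 'hello', 'score': 1.0}]  # placeholder result

        def forward(self, img, img_metas, return_loss=True, **kwargs):
            # Training path returns losses; test path returns predictions.
            if return_loss:
                return self.forward_train(img, img_metas, **kwargs)
            return self.forward_test(img, img_metas, **kwargs)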

forward_test(imgs, img_metas, **kwargs)[source]
Parameters
  • imgs (tensor | list[tensor]) – Tensor should have shape NxCxHxW, which contains all images in the batch.

  • img_metas (list[dict] | list[list[dict]]) – The outer list indicates images in a batch.

abstract forward_train(imgs, img_metas, **kwargs)[source]
Parameters
  • imgs (tensor) – Tensors with shape (N, C, H, W). Typically these should be mean centered and std scaled.

  • img_metas (list[dict]) – List of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details of the values of these keys, see mmdet.datasets.pipelines.Collect.

  • kwargs (keyword arguments) – Specific to concrete implementation.

show_result(img, result, gt_label='', win_name='', show=False, wait_time=0, out_file=None, **kwargs)[source]

Draw result on img.

Parameters
  • img (str or tensor) – The image to be displayed.

  • result (dict) – The results to draw on img.

  • gt_label (str) – Ground truth label of img.

  • win_name (str) – The window name.

  • wait_time (int) – Value of waitKey param. Default: 0.

  • show (bool) – Whether to show the image. Default: False.

  • out_file (str or None) – The output filename. Default: None.

Returns

The image with the result drawn on it, returned only when show is False and out_file is not specified.

Return type

img (tensor)

train_step(data, optimizer)[source]

The iteration step during training.

This method defines an iteration step during training, except for the back propagation and optimizer update, which are done by an optimizer hook. Note that in some complicated cases or models (e.g. GAN), the whole process (including the back propagation and optimizer update) is also defined by this method.

Parameters
  • data (dict) – The outputs of dataloader.

  • optimizer (torch.optim.Optimizer | dict) – The optimizer of runner is passed to train_step(). This argument is unused and reserved.

Returns

It should contain at least 3 keys: loss, log_vars and num_samples.

  • loss is a tensor for back propagation, which is a weighted sum of multiple losses.

  • log_vars contains all the variables to be sent to the logger.

  • num_samples indicates the batch size used for averaging the logs (Note: for the DDP model, num_samples refers to the batch size for each GPU).

Return type

dict
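
A hedged sketch of how such a return dict might be assembled from a loss dictionary; parse_losses_sketch is a hypothetical helper, not MMOCR's internal one:

    import torch

    def parse_losses_sketch(losses):
        # Reduce each loss to a scalar, then sum them into the total `loss`.
        log_vars = {k: v.mean() for k, v in losses.items()}
        loss = sum(log_vars.values())
        log_vars = {k: v.item() for k, v in log_vars.items()}
        log_vars['loss'] = loss.item()
        return loss, log_vars

    losses = {'loss_ce': torch.tensor(0.7), 'loss_aux': torch.tensor(0.1)}
    loss, log_vars = parse_losses_sketch(losses)
    outputs = dict(loss=loss, log_vars=log_vars, num_samples=8)  # batch of 8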

val_step(data, optimizer)[source]

The iteration step during validation.

This method shares the same signature as train_step(), but is used during val epochs. Note that the evaluation after training epochs is not implemented by this method, but by an evaluation hook.

class mmocr.models.textrecog.recognizer.CRNNNet(preprocessor=None, backbone=None, encoder=None, decoder=None, loss=None, label_convertor=None, train_cfg=None, test_cfg=None, max_seq_len=40, pretrained=None, init_cfg=None)[source]

CTC-loss based recognizer.

class mmocr.models.textrecog.recognizer.EncodeDecodeRecognizer(preprocessor=None, backbone=None, encoder=None, decoder=None, loss=None, label_convertor=None, train_cfg=None, test_cfg=None, max_seq_len=40, pretrained=None, init_cfg=None)[source]

Base class for encode-decode recognizer.

aug_test(imgs, img_metas, **kwargs)[source]

Test function with test time augmentation.

Parameters
  • imgs (list[tensor]) – Tensor should have shape NxCxHxW, which contains all images in the batch.

  • img_metas (list[list[dict]]) – The metadata of images.

extract_feat(img)[source]

Directly extract features from the backbone.

forward_train(img, img_metas)[source]
Parameters
  • img (tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.

  • img_metas (list[dict]) – A list of image info dict where each dict contains: ‘img_shape’, ‘filename’, and may also contain ‘ori_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet.datasets.pipelines.Collect.

Returns

A dictionary of loss components.

Return type

dict[str, tensor]

simple_test(img, img_metas, **kwargs)[source]

Test function without test time augmentation.

Parameters
  • img (torch.Tensor) – Image input tensor.

  • img_metas (list[dict]) – List of image information.

Returns

Text label result of each image.

Return type

list[str]

class mmocr.models.textrecog.recognizer.NRTR(preprocessor=None, backbone=None, encoder=None, decoder=None, loss=None, label_convertor=None, train_cfg=None, test_cfg=None, max_seq_len=40, pretrained=None, init_cfg=None)[source]

Implementation of NRTR

class mmocr.models.textrecog.recognizer.RobustScanner(preprocessor=None, backbone=None, encoder=None, decoder=None, loss=None, label_convertor=None, train_cfg=None, test_cfg=None, max_seq_len=40, pretrained=None, init_cfg=None)[source]

Implementation of RobustScanner: Dynamically Enhancing Positional Clues for Robust Text Recognition <https://arxiv.org/pdf/2007.07542.pdf>.

class mmocr.models.textrecog.recognizer.SARNet(preprocessor=None, backbone=None, encoder=None, decoder=None, loss=None, label_convertor=None, train_cfg=None, test_cfg=None, max_seq_len=40, pretrained=None, init_cfg=None)[source]

Implementation of SAR <https://arxiv.org/abs/1811.00751>.

class mmocr.models.textrecog.recognizer.SATRN(preprocessor=None, backbone=None, encoder=None, decoder=None, loss=None, label_convertor=None, train_cfg=None, test_cfg=None, max_seq_len=40, pretrained=None, init_cfg=None)[source]

Implementation of SATRN <https://arxiv.org/abs/1910.04396>.

class mmocr.models.textrecog.recognizer.SegRecognizer(preprocessor=None, backbone=None, neck=None, head=None, loss=None, label_convertor=None, train_cfg=None, test_cfg=None, pretrained=None, init_cfg=None)[source]

Base class for segmentation based recognizer.

aug_test(imgs, img_metas, **kwargs)[source]

Test function with test time augmentation.

Parameters
  • imgs (list[tensor]) – Tensor should have shape NxCxHxW, which contains all images in the batch.

  • img_metas (list[list[dict]]) – The metadata of images.

extract_feat(img)[source]

Directly extract features from the backbone.

forward_train(img, img_metas, gt_kernels=None)[source]
Parameters
  • img (tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.

  • img_metas (list[dict]) – A list of image info dict where each dict contains: ‘img_shape’, ‘filename’, and may also contain ‘ori_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet.datasets.pipelines.Collect.

Returns

A dictionary of loss components.

Return type

dict[str, tensor]

simple_test(img, img_metas, **kwargs)[source]

Test function without test time augmentation.

Parameters
  • img (torch.Tensor) – Image input tensor.

  • img_metas (list[dict]) – List of image information.

Returns

Text label result of each image.

Return type

list[str]

textrecog_backbones

class mmocr.models.textrecog.backbones.NRTRModalityTransform(input_channels=3, init_cfg=[{'type': 'Kaiming', 'layer': 'Conv2d'}, {'type': 'Uniform', 'layer': 'BatchNorm2d'}])[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmocr.models.textrecog.backbones.ResNet31OCR(base_channels=3, layers=[1, 2, 5, 3], channels=[64, 128, 256, 256, 512, 512, 512], out_indices=None, stage4_pool_cfg={'kernel_size': (2, 1), 'stride': (2, 1)}, last_stage_pool=False, init_cfg=[{'type': 'Kaiming', 'layer': 'Conv2d'}, {'type': 'Uniform', 'layer': 'BatchNorm2d'}])[source]
Implement ResNet backbone for text recognition, modified from ResNet <https://arxiv.org/pdf/1512.03385.pdf>.

Parameters
  • base_channels (int) – Number of channels of input image tensor.

  • layers (list[int]) – List of BasicBlock number for each stage.

  • channels (list[int]) – List of out_channels of Conv2d layer.

  • out_indices (None | Sequence[int]) – Indices of output stages.

  • stage4_pool_cfg (dict) – Dictionary to construct and configure pooling layer in stage 4.

  • last_stage_pool (bool) – If True, add MaxPool2d layer to last stage.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmocr.models.textrecog.backbones.ResNetABI(in_channels=3, stem_channels=32, base_channels=32, arch_settings=[3, 4, 6, 6, 3], strides=[2, 1, 2, 1, 1], out_indices=None, last_stage_pool=False, init_cfg=[{'type': 'Xavier', 'layer': 'Conv2d'}, {'type': 'Constant', 'val': 1, 'layer': 'BatchNorm2d'}])[source]

Implement ResNet backbone for text recognition, modified from ResNet <https://arxiv.org/pdf/1512.03385.pdf> and https://github.com/FangShancheng/ABINet.

Parameters
  • in_channels (int) – Number of channels of input image tensor.

  • stem_channels (int) – Number of stem channels.

  • base_channels (int) – Number of base channels.

  • arch_settings (list[int]) – List of BasicBlock number for each stage.

  • strides (Sequence[int]) – Strides of the first block of each stage.

  • out_indices (None | Sequence[int]) – Indices of output stages. If not specified, only the last stage will be returned.

  • last_stage_pool (bool) – If True, add MaxPool2d layer to last stage.

forward(x)[source]
Parameters

x (Tensor) – Image tensor of shape \((N, 3, H, W)\).

Returns

Feature tensor. Its shape depends on ResNetABI’s config. It can be a list of feature outputs at specific layers if out_indices is specified.

Return type

Tensor or list[Tensor]

class mmocr.models.textrecog.backbones.ShallowCNN(input_channels=1, hidden_dim=512, init_cfg=[{'type': 'Kaiming', 'layer': 'Conv2d'}, {'type': 'Uniform', 'layer': 'BatchNorm2d'}])[source]

Implement Shallow CNN block for SATRN.

SATRN: On Recognizing Texts of Arbitrary Shapes with 2D Self-Attention.

Parameters
  • input_channels (int) – Number of channels of the input image tensor \(D_i\).

  • hidden_dim (int) – Size of hidden layers of the model \(D_m\).

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(x)[source]
Parameters

x (Tensor) – Input image feature \((N, D_i, H, W)\).

Returns

A tensor of shape \((N, D_m, H/4, W/4)\).

Return type

Tensor
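
Assuming MMOCR is installed, the documented \(H/4, W/4\) down-sampling can be checked directly; the input sizes below are arbitrary:

    import torch
    from mmocr.models.textrecog.backbones import ShallowCNN

    model = ShallowCNN(input_channels=1, hidden_dim=512)
    x = torch.rand(2, 1, 32, 100)     # (N, D_i, H, W)
    out = model(x)
    print(out.shape)                  # torch.Size([2, 512, 8, 25])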

class mmocr.models.textrecog.backbones.VeryDeepVgg(leaky_relu=True, input_channels=3, init_cfg=[{'type': 'Xavier', 'layer': 'Conv2d'}, {'type': 'Uniform', 'layer': 'BatchNorm2d'}])[source]

Implement VGG-VeryDeep backbone for text recognition, modified from VGG-VeryDeep

Parameters
  • leaky_relu (bool) – Whether to use LeakyReLU.

  • input_channels (int) – Number of channels of input image tensor.

forward(x)[source]
Parameters

x (Tensor) – Images of shape \((N, C, H, W)\).

Returns

The feature Tensor of shape \((N, 512, H/32, W/4+1)\).

Return type

Tensor

textrecog_necks

class mmocr.models.textrecog.necks.FPNOCR(in_channels, out_channels, last_stage_only=True, init_cfg=None)[source]

FPN-like Network for segmentation based text recognition.

Parameters
  • in_channels (list[int]) – Number of input channels \(C_i\) for each scale.

  • out_channels (int) – Number of output channels \(C_{out}\) for each scale.

  • last_stage_only (bool) – If True, output last stage only.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(inputs)[source]
Parameters

inputs (list[Tensor]) – A list of n tensors. Each tensor has the shape of \((N, C_i, H_i, W_i)\). It usually expects 4 tensors (C2-C5 features) from ResNet.

Returns

A tuple of n-1 tensors. Each has the shape of \((N, C_{out}, H_{n-2-i}, W_{n-2-i})\). If last_stage_only=True (default), the size of the tuple is 1 and only the last element will be returned.

Return type

tuple(Tensor)

textrecog_heads

class mmocr.models.textrecog.heads.SegHead(in_channels=128, num_classes=37, upsample_param=None, init_cfg=None)[source]

Head for segmentation based text recognition.

Parameters
  • in_channels (int) – Number of input channels \(C\).

  • num_classes (int) – Number of output classes \(C_{out}\).

  • upsample_param (dict | None) – Config dict for interpolation layer. Default: dict(scale_factor=1.0, mode='nearest')

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(out_neck)[source]
Parameters

out_neck (list[Tensor]) – A list of tensor of shape \((N, C_i, H_i, W_i)\). The network only uses the last one (out_neck[-1]).

Returns

A tensor of shape \((N, C_{out}, kH, kW)\) where \(k\) is determined by upsample_param.

Return type

Tensor

textrecog_convertors

class mmocr.models.textrecog.convertors.ABIConvertor(dict_type='DICT90', dict_file=None, dict_list=None, with_unknown=True, max_seq_len=40, lower=False, start_end_same=True, **kwargs)[source]

Convert between text, index and tensor for encoder-decoder based pipeline. Modified from AttnConvertor to get closer to ABINet’s original implementation.

Parameters
  • dict_type (str) – Type of dict, should be one of {‘DICT36’, ‘DICT90’}.

  • dict_file (None|str) – Character dict file path. If not none, higher priority than dict_type.

  • dict_list (None|list[str]) – Character list. If not none, higher priority than dict_type, but lower than dict_file.

  • with_unknown (bool) – If True, add UKN token to class.

  • max_seq_len (int) – Maximum sequence length of label.

  • lower (bool) – If True, convert original string to lower case.

  • start_end_same (bool) – Whether use the same index for start and end token or not. Default: True.

str2tensor(strings)[source]

Convert text-string into tensor. Different from mmocr.models.textrecog.convertors.AttnConvertor, the targets field returns target index no longer than max_seq_len (EOS token included).

Parameters

strings (list[str]) – For instance, [‘hello’, ‘world’]

Returns

A dict with two tensors.

  • targets (list[Tensor]): [torch.Tensor([1,2,3,3,4,8]), torch.Tensor([5,4,6,3,7,8])]
  • padded_targets (Tensor): Tensor of shape (bsz, max_seq_len).

Return type

dict

class mmocr.models.textrecog.convertors.AttnConvertor(dict_type='DICT90', dict_file=None, dict_list=None, with_unknown=True, max_seq_len=40, lower=False, start_end_same=True, **kwargs)[source]

Convert between text, index and tensor for encoder-decoder based pipeline.

Parameters
  • dict_type (str) – Type of dict, should be one of {‘DICT36’, ‘DICT90’}.

  • dict_file (None|str) – Character dict file path. If not none, higher priority than dict_type.

  • dict_list (None|list[str]) – Character list. If not none, higher priority than dict_type, but lower than dict_file.

  • with_unknown (bool) – If True, add UKN token to class.

  • max_seq_len (int) – Maximum sequence length of label.

  • lower (bool) – If True, convert original string to lower case.

  • start_end_same (bool) – Whether use the same index for start and end token or not. Default: True.

str2tensor(strings)[source]

Convert text-string into tensor.

Parameters

strings (list[str]) – For instance, [‘hello’, ‘world’]

Returns

A dict with two tensors:

  • targets (list[Tensor]): [torch.Tensor([1,2,3,3,4]), torch.Tensor([5,4,6,3,7])]

  • padded_targets (Tensor): Tensor of shape (bsz, max_seq_len).

Return type

dict

tensor2idx(outputs, img_metas=None)[source]

Convert output tensor to text-index.

Parameters
  • outputs (tensor) – Model outputs with size: N * T * C.

  • img_metas (list[dict]) – Each dict contains one image info.

Returns

indexes (list[list[int]]): [[1,2,3,3,4], [5,4,6,3,7]].

scores (list[list[float]]): [[0.9,0.8,0.95,0.97,0.94], [0.9,0.9,0.98,0.97,0.96]].

class mmocr.models.textrecog.convertors.BaseConvertor(dict_type='DICT90', dict_file=None, dict_list=None)[source]

Convert between text, index and tensor for text recognize pipeline.

Parameters
  • dict_type (str) – Type of dict, should be either ‘DICT36’ or ‘DICT90’.

  • dict_file (None|str) – Character dict file path. If not none, the dict_file is of higher priority than dict_type.

  • dict_list (None|list[str]) – Character list. If not none, the list is of higher priority than dict_type, but lower than dict_file.

idx2str(indexes)[source]

Convert indexes to text strings.

Parameters

indexes (list[list[int]]) – [[1,2,3,3,4], [5,4,6,3,7]].

Returns

[‘hello’, ‘world’].

Return type

strings (list[str])

num_classes()[source]

Number of output classes.

str2idx(strings)[source]

Convert strings to indexes.

Parameters

strings (list[str]) – [‘hello’, ‘world’].

Returns

[[1,2,3,3,4], [5,4,6,3,7]].

Return type

indexes (list[list[int]])
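
The str2idx/idx2str round trip can be illustrated with a small self-contained sketch that uses a hypothetical character dict rather than DICT36/DICT90:

    # Hypothetical character dict; MMOCR's DICT36/DICT90 differ.
    chars = list('abcdefghijklmnopqrstuvwxyz')
    char2idx = {c: i for i, c in enumerate(chars)}

    def str2idx(strings):
        return [[char2idx[c] for c in s] for s in strings]

    def idx2str(indexes):
        return [''.join(chars[i] for i in idx) for idx in indexes]

    assert idx2str(str2idx(['hello', 'world'])) == ['hello', 'world']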

str2tensor(strings)[source]

Convert text-string to input tensor.

Parameters

strings (list[str]) – [‘hello’, ‘world’].

Returns

[torch.Tensor([1,2,3,3,4]), torch.Tensor([5,4,6,3,7])].

Return type

tensors (list[torch.Tensor])

tensor2idx(output)[source]

Convert model output tensor to character indexes and scores.

Parameters

output (tensor) – The model outputs with size: N * T * C.

Returns

indexes (list[list[int]]): [[1,2,3,3,4], [5,4,6,3,7]].

scores (list[list[float]]): [[0.9,0.8,0.95,0.97,0.94], [0.9,0.9,0.98,0.97,0.96]].

class mmocr.models.textrecog.convertors.CTCConvertor(dict_type='DICT90', dict_file=None, dict_list=None, with_unknown=True, lower=False, **kwargs)[source]

Convert between text, index and tensor for CTC loss-based pipeline.

Parameters
  • dict_type (str) – Type of dict, should be either ‘DICT36’ or ‘DICT90’.

  • dict_file (None|str) – Character dict file path. If not none, the file is of higher priority than dict_type.

  • dict_list (None|list[str]) – Character list. If not none, the list is of higher priority than dict_type, but lower than dict_file.

  • with_unknown (bool) – If True, add UKN token to class.

  • lower (bool) – If True, convert original string to lower case.

str2tensor(strings)[source]

Convert text-string to ctc-loss input tensor.

Parameters

strings (list[str]) – [‘hello’, ‘world’].

Returns

A dict with three tensors:

  • tensors (list[tensor]): [torch.Tensor([1,2,3,3,4]), torch.Tensor([5,4,6,3,7])].

  • flatten_targets (tensor): torch.Tensor([1,2,3,3,4,5,4,6,3,7]).

  • target_lengths (tensor): torch.IntTensor([5, 5]).

Return type

dict
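
The three target formats relate to each other as in this short sketch (index values are hypothetical):

    import torch

    tensors = [torch.tensor([1, 2, 3, 3, 4]), torch.tensor([5, 4, 6, 3, 7])]
    flatten_targets = torch.cat(tensors)       # tensor([1,2,3,3,4,5,4,6,3,7])
    target_lengths = torch.IntTensor([len(t) for t in tensors])  # [5, 5]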

tensor2idx(output, img_metas, topk=1, return_topk=False)[source]

Convert model output tensor to index-list.

Parameters
  • output (tensor) – The model outputs with size: N * T * C.

  • img_metas (list[dict]) – Each dict contains one image info.

  • topk (int) – The highest k classes to be returned.

  • return_topk (bool) – Whether to return topk or just top1.

Returns

indexes (list[list[int]]): [[1,2,3,3,4], [5,4,6,3,7]].

scores (list[list[float]]): [[0.9,0.8,0.95,0.97,0.94], [0.9,0.9,0.98,0.97,0.96]].

If return_topk is True, indexes_topk (list[list[list[int]]]) and scores_topk (list[list[list[float]]]) are returned instead, with the innermost lists of length topk.

class mmocr.models.textrecog.convertors.SegConvertor(dict_type='DICT36', dict_file=None, dict_list=None, with_unknown=True, lower=False, **kwargs)[source]

Convert between text, index and tensor for segmentation based pipeline.

Parameters
  • dict_type (str) – Type of dict, should be either ‘DICT36’ or ‘DICT90’.

  • dict_file (None|str) – Character dict file path. If not none, the file is of higher priority than dict_type.

  • dict_list (None|list[str]) – Character list. If not none, the list is of higher priority than dict_type, but lower than dict_file.

  • with_unknown (bool) – If True, add UKN token to class.

  • lower (bool) – If True, convert original string to lower case.

tensor2str(output, img_metas=None)[source]

Convert model output tensor to string labels.

Parameters
  • output (tensor) – Model outputs with size: N * C * H * W.

  • img_metas (list[dict]) – Each dict contains one image info.

Returns

texts (list[str]): Decoded text labels.

scores (list[list[float]]): Decoded chars scores.

textrecog_encoders

class mmocr.models.textrecog.encoders.ABIVisionModel(encoder={'type': 'TransformerEncoder'}, decoder={'type': 'ABIVisionDecoder'}, init_cfg={'layer': 'Conv2d', 'type': 'Xavier'}, **kwargs)[source]

A wrapper of visual feature encoder and language token decoder that converts visual features into text tokens.

Implementation of VisionEncoder in ABINet.

Parameters
  • encoder (dict) – Config for image feature encoder.

  • decoder (dict) – Config for language token decoder.

  • init_cfg (dict) – Specifies the initialization method for model layers.

forward(feat, img_metas=None)[source]
Parameters

feat (Tensor) – Images of shape (N, E, H, W).

Returns

A dict with keys feature, logits and attn_scores.

  • feature (Tensor): Shape (N, T, E). Raw visual features for language decoder.
  • logits (Tensor): Shape (N, T, C). The raw logits for characters. C is the number of characters.
  • attn_scores (Tensor): Shape (N, T, H, W). Intermediate result for vision-language aligner.

Return type

dict

class mmocr.models.textrecog.encoders.BaseEncoder(init_cfg=None)[source]

Base Encoder class for text recognition.

forward(feat, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmocr.models.textrecog.encoders.ChannelReductionEncoder(in_channels, out_channels, init_cfg={'layer': 'Conv2d', 'type': 'Xavier'})[source]

Change the channel number with a 1x1 convolutional layer.

Parameters
  • in_channels (int) – Number of input channels.

  • out_channels (int) – Number of output channels.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(feat, img_metas=None)[source]
Parameters
  • feat (Tensor) – Image features with the shape of \((N, C_{in}, H, W)\).

  • img_metas (None) – Unused.

Returns

A tensor of shape \((N, C_{out}, H, W)\).

Return type

Tensor
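
The behaviour is that of a plain 1x1 convolution, sketched below with arbitrary shapes:

    import torch
    import torch.nn as nn

    # A 1x1 convolution changes only the channel dimension.
    reduce = nn.Conv2d(in_channels=512, out_channels=128, kernel_size=1)
    feat = torch.rand(2, 512, 8, 32)       # (N, C_in, H, W)
    out = reduce(feat)
    assert out.shape == (2, 128, 8, 32)    # (N, C_out, H, W)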

class mmocr.models.textrecog.encoders.NRTREncoder(n_layers=6, n_head=8, d_k=64, d_v=64, d_model=512, d_inner=256, dropout=0.1, init_cfg=None, **kwargs)[source]

Transformer Encoder block with self attention mechanism.

Parameters
  • n_layers (int) – The number of sub-encoder-layers in the encoder (default=6).

  • n_head (int) – The number of heads in the multiheadattention models (default=8).

  • d_k (int) – Total number of features in key.

  • d_v (int) – Total number of features in value.

  • d_model (int) – The number of expected features in the decoder inputs (default=512).

  • d_inner (int) – The dimension of the feedforward network model (default=256).

  • dropout (float) – Dropout layer on attn_output_weights.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(feat, img_metas=None)[source]
Parameters
  • feat (Tensor) – Backbone output of shape \((N, C, H, W)\).

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

The encoder output tensor. Shape \((N, T, C)\).

Return type

Tensor

class mmocr.models.textrecog.encoders.SAREncoder(enc_bi_rnn=False, enc_do_rnn=0.0, enc_gru=False, d_model=512, d_enc=512, mask=True, init_cfg=[{'type': 'Xavier', 'layer': 'Conv2d'}, {'type': 'Uniform', 'layer': 'BatchNorm2d'}], **kwargs)[source]

Implementation of the encoder module in SAR <https://arxiv.org/abs/1811.00751>.

Parameters
  • enc_bi_rnn (bool) – If True, use bidirectional RNN in encoder.

  • enc_do_rnn (float) – Dropout probability of RNN layer in encoder.

  • enc_gru (bool) – If True, use GRU, else LSTM in encoder.

  • d_model (int) – Dim \(D_i\) of channels from backbone.

  • d_enc (int) – Dim \(D_m\) of encoder RNN layer.

  • mask (bool) – If True, mask padding in RNN sequence.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(feat, img_metas=None)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

A tensor of shape \((N, D_m)\).

Return type

Tensor

class mmocr.models.textrecog.encoders.SatrnEncoder(n_layers=12, n_head=8, d_k=64, d_v=64, d_model=512, n_position=100, d_inner=256, dropout=0.1, init_cfg=None, **kwargs)[source]

Implement encoder for SATRN, see SATRN <https://arxiv.org/abs/1910.04396>.

Parameters
  • n_layers (int) – Number of attention layers.

  • n_head (int) – Number of parallel attention heads.

  • d_k (int) – Dimension of the key vector.

  • d_v (int) – Dimension of the value vector.

  • d_model (int) – Dimension \(D_m\) of the input from previous model.

  • n_position (int) – Length of the positional encoding vector. Must be greater than max_seq_len.

  • d_inner (int) – Hidden dimension of feedforward layers.

  • dropout (float) – Dropout rate.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(feat, img_metas=None)[source]
Parameters
  • feat (Tensor) – Feature tensor of shape \((N, D_m, H, W)\).

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

A tensor of shape \((N, T, D_m)\).

Return type

Tensor

class mmocr.models.textrecog.encoders.TransformerEncoder(n_layers=2, n_head=8, d_model=512, d_inner=2048, dropout=0.1, max_len=256, init_cfg=None)[source]

Implement transformer encoder for text recognition, modified from <https://github.com/FangShancheng/ABINet>.

Parameters
  • n_layers (int) – Number of attention layers.

  • n_head (int) – Number of parallel attention heads.

  • d_model (int) – Dimension \(D_m\) of the input from previous model.

  • d_inner (int) – Hidden dimension of feedforward layers.

  • dropout (float) – Dropout rate.

  • max_len (int) – Maximum output sequence length \(T\).

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(feature)[source]
Parameters

feature (Tensor) – Feature tensor of shape \((N, D_m, H, W)\).

Returns

Features of shape \((N, D_m, H, W)\).

Return type

Tensor

textrecog_decoders

class mmocr.models.textrecog.decoders.ABILanguageDecoder(d_model=512, n_head=8, d_inner=2048, n_layers=4, max_seq_len=40, dropout=0.1, detach_tokens=True, num_chars=90, use_self_attn=False, pad_idx=0, init_cfg=None, **kwargs)[source]

Transformer-based language model responsible for spell correction. Implementation of the language model in ABINet.

Parameters
  • d_model (int) – Hidden size of input.

  • n_head (int) – Number of multi-attention heads.

  • d_inner (int) – Hidden size of feedforward network model.

  • n_layers (int) – The number of similar decoding layers.

  • max_seq_len (int) – Maximum text sequence length \(T\).

  • dropout (float) – Dropout rate.

  • detach_tokens (bool) – Whether to block the gradient flow at input tokens.

  • num_chars (int) – Number of text characters \(C\).

  • use_self_attn (bool) – If True, use self attention in decoder layers, otherwise cross attention will be used.

  • pad_idx (bool) – The index of the token indicating the end of output, which is used to compute the length of output. It is usually the index of <EOS> or <PAD> token.

  • init_cfg (dict) – Specifies the initialization method for model layers.

forward_train(feat, logits, targets_dict, img_metas)[source]
Parameters

logits (Tensor) – Raw language logitis. Shape (N, T, C).

Returns

A dict with keys feature and logits.

  • feature (Tensor): Shape (N, T, E). Raw textual features for vision language aligner.

  • logits (Tensor): Shape (N, T, C). The raw logits for characters after spell correction.

class mmocr.models.textrecog.decoders.ABIVisionDecoder(in_channels=512, num_channels=64, attn_height=8, attn_width=32, attn_mode='nearest', max_seq_len=40, num_chars=90, init_cfg={'layer': 'Conv2d', 'type': 'Xavier'}, **kwargs)[source]

Converts visual features into text characters.

Implementation of VisionEncoder in ABINet.

Parameters
  • in_channels (int) – Number of channels \(E\) of input vector.

  • num_channels (int) – Number of channels of hidden vectors in mini U-Net.

  • attn_height (int) – Height \(H\) of input image features.

  • attn_width (int) – Width \(W\) of input image features.

  • attn_mode (str) – Upsampling mode for torch.nn.Upsample in mini U-Net.

  • max_seq_len (int) – Maximum text sequence length \(T\).

  • num_chars (int) – Number of text characters \(C\).

  • init_cfg (dict) – Specifies the initialization method for model layers.

forward_train(feat, out_enc=None, targets_dict=None, img_metas=None)[source]
Parameters

feat (Tensor) – Image features of shape (N, E, H, W).

Returns

A dict with keys feature, logits and attn_scores.

  • feature (Tensor): Shape (N, T, E). Raw visual features for language decoder.
  • logits (Tensor): Shape (N, T, C). The raw logits for characters.
  • attn_scores (Tensor): Shape (N, T, H, W). Intermediate result for vision-language aligner.

Return type

dict

class mmocr.models.textrecog.decoders.BaseDecoder(init_cfg=None, **kwargs)[source]

Base decoder class for text recognition.

forward(feat, out_enc, targets_dict=None, img_metas=None, train_mode=True)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmocr.models.textrecog.decoders.CRNNDecoder(in_channels=None, num_classes=None, rnn_flag=False, init_cfg={'layer': 'Conv2d', 'type': 'Xavier'}, **kwargs)[source]

Decoder for CRNN.

Parameters
  • in_channels (int) – Number of input channels.

  • num_classes (int) – Number of output classes.

  • rnn_flag (bool) – Use RNN or CNN as the decoder.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward_test(feat, out_enc, img_metas)[source]
Parameters

feat (Tensor) – A Tensor of shape \((N, H, 1, W)\).

Returns

The raw logit tensor. Shape \((N, W, C)\) where \(C\) is num_classes.

Return type

Tensor

forward_train(feat, out_enc, targets_dict, img_metas)[source]
Parameters

feat (Tensor) – A Tensor of shape \((N, H, 1, W)\).

Returns

The raw logit tensor. Shape \((N, W, C)\) where \(C\) is num_classes.

Return type

Tensor

class mmocr.models.textrecog.decoders.NRTRDecoder(n_layers=6, d_embedding=512, n_head=8, d_k=64, d_v=64, d_model=512, d_inner=256, n_position=200, dropout=0.1, num_classes=93, max_seq_len=40, start_idx=1, padding_idx=92, init_cfg=None, **kwargs)[source]

Transformer Decoder block with self attention mechanism.

Parameters
  • n_layers (int) – Number of attention layers.

  • d_embedding (int) – Language embedding dimension.

  • n_head (int) – Number of parallel attention heads.

  • d_k (int) – Dimension of the key vector.

  • d_v (int) – Dimension of the value vector.

  • d_model (int) – Dimension \(D_m\) of the input from previous model.

  • d_inner (int) – Hidden dimension of feedforward layers.

  • n_position (int) – Length of the positional encoding vector. Must be greater than max_seq_len.

  • dropout (float) – Dropout rate.

  • num_classes (int) – Number of output classes \(C\).

  • max_seq_len (int) – Maximum output sequence length \(T\).

  • start_idx (int) – The index of <SOS>.

  • padding_idx (int) – The index of <PAD>.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

Warning

This decoder will not predict the final class which is assumed to be <PAD>. Therefore, its output size is always \(C - 1\). <PAD> is also ignored by loss as specified in mmocr.models.textrecog.recognizer.EncodeDecodeRecognizer.

forward_train(feat, out_enc, targets_dict, img_metas)[source]
Parameters
  • feat (None) – Unused.

  • out_enc (Tensor) – Encoder output of shape \((N, T, D_m)\) where \(D_m\) is d_model.

  • targets_dict (dict) – A dict with the key padded_targets, a tensor of shape \((N, T)\). Each element is the index of a character.

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

The raw logit tensor. Shape \((N, T, C)\).

Return type

Tensor

static get_subsequent_mask(seq)[source]

For masking out the subsequent info.
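
A common way to build such a mask is a lower-triangular boolean matrix; this sketches the idea and is not necessarily NRTRDecoder's exact code:

    import torch

    def get_subsequent_mask_sketch(seq):
        # seq: (N, T). Position i may only attend to positions <= i.
        n = seq.size(1)
        return torch.tril(torch.ones(n, n, dtype=torch.bool, device=seq.device))

    print(get_subsequent_mask_sketch(torch.zeros(1, 3)))
    # tensor([[ True, False, False],
    #         [ True,  True, False],
    #         [ True,  True,  True]])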

class mmocr.models.textrecog.decoders.ParallelSARDecoder(num_classes=37, enc_bi_rnn=False, dec_bi_rnn=False, dec_do_rnn=0.0, dec_gru=False, d_model=512, d_enc=512, d_k=64, pred_dropout=0.0, max_seq_len=40, mask=True, start_idx=0, padding_idx=92, pred_concat=False, init_cfg=None, **kwargs)[source]

Implementation of the Parallel Decoder module in SAR <https://arxiv.org/abs/1811.00751>.

Parameters
  • num_classes (int) – Output class number \(C\).

  • channels (list[int]) – Network layer channels.

  • enc_bi_rnn (bool) – If True, use bidirectional RNN in encoder.

  • dec_bi_rnn (bool) – If True, use bidirectional RNN in decoder.

  • dec_do_rnn (float) – Dropout of RNN layer in decoder.

  • dec_gru (bool) – If True, use GRU, else LSTM in decoder.

  • d_model (int) – Dim of channels from backbone \(D_i\).

  • d_enc (int) – Dim of encoder RNN layer \(D_m\).

  • d_k (int) – Dim of channels of attention module.

  • pred_dropout (float) – Dropout probability of prediction layer.

  • max_seq_len (int) – Maximum sequence length for decoding.

  • mask (bool) – If True, mask padding in feature map.

  • start_idx (int) – Index of start token.

  • padding_idx (int) – Index of padding token.

  • pred_concat (bool) – If True, concat glimpse feature from attention with holistic feature and hidden state.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

Warning

This decoder will not predict the final class which is assumed to be <PAD>. Therefore, its output size is always \(C - 1\). <PAD> is also ignored by loss as specified in mmocr.models.textrecog.recognizer.EncodeDecodeRecognizer.

forward_test(feat, out_enc, img_metas)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • out_enc (Tensor) – Encoder output of shape \((N, D_m, H, W)\).

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

A raw logit tensor of shape \((N, T, C-1)\).

Return type

Tensor

forward_train(feat, out_enc, targets_dict, img_metas)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • out_enc (Tensor) – Encoder output of shape \((N, D_m, H, W)\).

  • targets_dict (dict) – A dict with the key padded_targets, a tensor of shape \((N, T)\). Each element is the index of a character.

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

A raw logit tensor of shape \((N, T, C-1)\).

Return type

Tensor

class mmocr.models.textrecog.decoders.ParallelSARDecoderWithBS(beam_width=5, num_classes=37, enc_bi_rnn=False, dec_bi_rnn=False, dec_do_rnn=0, dec_gru=False, d_model=512, d_enc=512, d_k=64, pred_dropout=0.0, max_seq_len=40, mask=True, start_idx=0, padding_idx=0, pred_concat=False, init_cfg=None, **kwargs)[source]

Parallel Decoder module with beam-search in SAR.

Parameters

beam_width (int) – Width for beam search.

forward_test(feat, out_enc, img_metas)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • out_enc (Tensor) – Encoder output of shape \((N, D_m, H, W)\).

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

A raw logit tensor of shape \((N, T, C-1)\).

Return type

Tensor

class mmocr.models.textrecog.decoders.PositionAttentionDecoder(num_classes=None, rnn_layers=2, dim_input=512, dim_model=128, max_seq_len=40, mask=True, return_feature=False, encode_value=False, init_cfg=None)[source]

Position attention decoder for RobustScanner.

See RobustScanner: Dynamically Enhancing Positional Clues for Robust Text Recognition <https://arxiv.org/pdf/2007.07542.pdf>.

Parameters
  • num_classes (int) – Number of output classes \(C\).

  • rnn_layers (int) – Number of RNN layers.

  • dim_input (int) – Dimension \(D_i\) of input vector feat.

  • dim_model (int) – Dimension \(D_m\) of the model. Should also be the same as encoder output vector out_enc.

  • max_seq_len (int) – Maximum output sequence length \(T\).

  • mask (bool) – Whether to mask input features according to img_meta['valid_ratio'].

  • return_feature (bool) – Return feature or logits as the result.

  • encode_value (bool) – Whether to use the output of encoder out_enc as value of attention layer. If False, the original feature feat will be used.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

Warning

This decoder will not predict the final class which is assumed to be <PAD>. Therefore, its output size is always \(C - 1\). <PAD> is also ignored by loss as specified in mmocr.models.textrecog.recognizer.EncodeDecodeRecognizer.

forward_test(feat, out_enc, img_metas)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • out_enc (Tensor) – Encoder output of shape \((N, D_m, H, W)\).

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

A raw logit tensor of shape \((N, T, C-1)\) if return_feature=False. Otherwise it would be the hidden feature before the prediction projection layer, whose shape is \((N, T, D_m)\).

Return type

Tensor

forward_train(feat, out_enc, targets_dict, img_metas)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • out_enc (Tensor) – Encoder output of shape \((N, D_m, H, W)\).

  • targets_dict (dict) – A dict with the key padded_targets, a tensor of shape \((N, T)\). Each element is the index of a character.

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

A raw logit tensor of shape \((N, T, C-1)\) if return_feature=False. Otherwise it will be the hidden feature before the prediction projection layer, whose shape is \((N, T, D_m)\).

Return type

Tensor

class mmocr.models.textrecog.decoders.RobustScannerDecoder(num_classes=None, dim_input=512, dim_model=128, max_seq_len=40, start_idx=0, mask=True, padding_idx=None, encode_value=False, hybrid_decoder=None, position_decoder=None, init_cfg=None)[source]

Decoder for RobustScanner.

See RobustScanner: Dynamically Enhancing Positional Clues for Robust Text Recognition <https://arxiv.org/pdf/2007.07542.pdf>.

Parameters
  • num_classes (int) – Number of output classes \(C\).

  • dim_input (int) – Dimension \(D_i\) of input vector feat.

  • dim_model (int) – Dimension \(D_m\) of the model. Should also be the same as encoder output vector out_enc.

  • max_seq_len (int) – Maximum output sequence length \(T\).

  • start_idx (int) – The index of <SOS>.

  • mask (bool) – Whether to mask input features according to img_meta['valid_ratio'].

  • padding_idx (int) – The index of <PAD>.

  • encode_value (bool) – Whether to use the output of encoder out_enc as value of attention layer. If False, the original feature feat will be used.

  • hybrid_decoder (dict) – Configuration dict for hybrid decoder.

  • position_decoder (dict) – Configuration dict for position decoder.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

Warning

This decoder will not predict the final class which is assumed to be <PAD>. Therefore, its output size is always \(C - 1\). <PAD> is also ignored by loss as specified in mmocr.models.textrecog.recognizer.EncodeDecodeRecognizer.

forward_test(feat, out_enc, img_metas)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • out_enc (Tensor) – Encoder output of shape \((N, D_m, H, W)\).

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

The output logit sequence tensor of shape \((N, T, C-1)\).

Return type

Tensor

forward_train(feat, out_enc, targets_dict, img_metas)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • out_enc (Tensor) – Encoder output of shape \((N, D_m, H, W)\).

  • targets_dict (dict) – A dict with the key padded_targets, a tensor of shape \((N, T)\). Each element is the index of a character.

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

A raw logit tensor of shape \((N, T, C-1)\).

Return type

Tensor

class mmocr.models.textrecog.decoders.SequenceAttentionDecoder(num_classes=None, rnn_layers=2, dim_input=512, dim_model=128, max_seq_len=40, start_idx=0, mask=True, padding_idx=None, dropout=0, return_feature=False, encode_value=False, init_cfg=None)[source]

Sequence attention decoder for RobustScanner.

See RobustScanner: Dynamically Enhancing Positional Clues for Robust Text Recognition <https://arxiv.org/pdf/2007.07542.pdf>.

Parameters
  • num_classes (int) – Number of output classes \(C\).

  • rnn_layers (int) – Number of RNN layers.

  • dim_input (int) – Dimension \(D_i\) of input vector feat.

  • dim_model (int) – Dimension \(D_m\) of the model. Should also be the same as encoder output vector out_enc.

  • max_seq_len (int) – Maximum output sequence length \(T\).

  • start_idx (int) – The index of <SOS>.

  • mask (bool) – Whether to mask input features according to img_meta['valid_ratio'].

  • padding_idx (int) – The index of <PAD>.

  • dropout (float) – Dropout rate.

  • return_feature (bool) – Return feature or logits as the result.

  • encode_value (bool) – Whether to use the output of encoder out_enc as value of attention layer. If False, the original feature feat will be used.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

Warning

This decoder will not predict the final class which is assumed to be <PAD>. Therefore, its output size is always \(C - 1\). <PAD> is also ignored by loss as specified in mmocr.models.textrecog.recognizer.EncodeDecodeRecognizer.

forward_test(feat, out_enc, img_metas)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • out_enc (Tensor) – Encoder output of shape \((N, D_m, H, W)\).

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

The output logit sequence tensor of shape \((N, T, C-1)\).

Return type

Tensor

forward_test_step(feat, out_enc, decode_sequence, current_step, img_metas)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • out_enc (Tensor) – Encoder output of shape \((N, D_m, H, W)\).

  • decode_sequence (Tensor) – Shape \((N, T)\). The tensor that stores history decoding result.

  • current_step (int) – Current decoding step.

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

Shape \((N, C-1)\). The logit tensor of predicted tokens at current time step.

Return type

Tensor
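
The bookkeeping around forward_test_step amounts to a greedy loop that feeds the growing decode_sequence back in at each step; step_fn below is a random stand-in for the real decoder call:

    import torch

    N, T, C = 2, 5, 37          # batch, max sequence length, classes

    def step_fn(decode_sequence, current_step):
        # Stand-in for forward_test_step(); returns per-step logits (N, C-1).
        return torch.randn(N, C - 1)

    decode_sequence = torch.zeros(N, T, dtype=torch.long)
    for t in range(T):
        logits = step_fn(decode_sequence, t)
        decode_sequence[:, t] = logits.argmax(dim=-1)  # greedy pick at step t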

forward_train(feat, out_enc, targets_dict, img_metas)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • out_enc (Tensor) – Encoder output of shape \((N, D_m, H, W)\).

  • targets_dict (dict) – A dict with the key padded_targets, a tensor of shape \((N, T)\). Each element is the index of a character.

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

A raw logit tensor of shape \((N, T, C-1)\) if return_feature=False. Otherwise it would be the hidden feature before the prediction projection layer, whose shape is \((N, T, D_m)\).

Return type

Tensor

class mmocr.models.textrecog.decoders.SequentialSARDecoder(num_classes=37, enc_bi_rnn=False, dec_bi_rnn=False, dec_gru=False, d_k=64, d_model=512, d_enc=512, pred_dropout=0.0, mask=True, max_seq_len=40, start_idx=0, padding_idx=92, pred_concat=False, init_cfg=None, **kwargs)[source]

Implementation of the Sequential Decoder module in SAR <https://arxiv.org/abs/1811.00751>.

Parameters
  • num_classes (int) – Output class number \(C\).

  • enc_bi_rnn (bool) – If True, use bidirectional RNN in encoder.

  • dec_bi_rnn (bool) – If True, use bidirectional RNN in decoder.

  • dec_do_rnn (float) – Dropout of RNN layer in decoder.

  • dec_gru (bool) – If True, use GRU, else LSTM in decoder.

  • d_k (int) – Dim of conv layers in attention module.

  • d_model (int) – Dim of channels from backbone \(D_i\).

  • d_enc (int) – Dim of encoder RNN layer \(D_m\).

  • pred_dropout (float) – Dropout probability of prediction layer.

  • max_seq_len (int) – Maximum sequence length during decoding.

  • mask (bool) – If True, mask padding in feature map.

  • start_idx (int) – Index of start token.

  • padding_idx (int) – Index of padding token.

  • pred_concat (bool) – If True, concat glimpse feature from attention with holistic feature and hidden state.

forward_test(feat, out_enc, img_metas)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • out_enc (Tensor) – Encoder output of shape \((N, D_m, H, W)\).

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

A raw logit tensor of shape \((N, T, C-1)\).

Return type

Tensor

forward_train(feat, out_enc, targets_dict, img_metas=None)[source]
Parameters
  • feat (Tensor) – Tensor of shape \((N, D_i, H, W)\).

  • out_enc (Tensor) – Encoder output of shape \((N, D_m, H, W)\).

  • targets_dict (dict) – A dict with the key padded_targets, a tensor of shape \((N, T)\). Each element is the index of a character.

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

A raw logit tensor of shape \((N, T, C-1)\).

Return type

Tensor

textrecog_losses

class mmocr.models.textrecog.losses.ABILoss(enc_weight=1.0, dec_weight=1.0, fusion_weight=1.0, num_classes=37, **kwargs)[source]

Implementation of ABINet multiloss that allows mixing different types of losses with weights.

Parameters
  • enc_weight (float) – The weight of encoder loss. Defaults to 1.0.

  • dec_weight (float) – The weight of decoder loss. Defaults to 1.0.

  • fusion_weight (float) – The weight of fuser (aligner) loss. Defaults to 1.0.

  • num_classes (int) – Number of unique output language tokens.

Returns

A dictionary whose key/value pairs are the losses of three modules.

forward(outputs, targets_dict, img_metas=None)[source]
Parameters
  • outputs (dict) – The output dictionary with at least one of out_enc, out_dec and out_fusers specified.

  • targets_dict (dict) – The target dictionary containing the key padded_targets, which represents target sequences in shape (batch_size, sequence_length).

Returns

A loss dictionary with loss_visual, loss_lang and loss_fusion. Each should either be the loss tensor or 0 if the output of its corresponding module is not given.
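
A sketch of the weighted mixing; the loss values are hypothetical, and mapping the enc/dec/fusion weights onto loss_visual/loss_lang/loss_fusion is an assumption based on the descriptions above:

    import torch

    enc_weight, dec_weight, fusion_weight = 1.0, 1.0, 1.0   # defaults above

    loss_visual = torch.tensor(0.8)   # hypothetical encoder (vision) loss
    loss_lang = torch.tensor(0.5)     # hypothetical decoder (language) loss
    loss_fusion = torch.tensor(0.3)   # hypothetical fuser (aligner) loss

    losses = dict(
        loss_visual=enc_weight * loss_visual,
        loss_lang=dec_weight * loss_lang,
        loss_fusion=fusion_weight * loss_fusion,
    )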

class mmocr.models.textrecog.losses.CELoss(ignore_index=-1, reduction='none', ignore_first_char=False)[source]

Implementation of loss module for encoder-decoder based text recognition method with CrossEntropy loss.

Parameters
  • ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient.

  • reduction (str) – Specifies the reduction to apply to the output, should be one of the following: (‘none’, ‘mean’, ‘sum’).

  • ignore_first_char (bool) – Whether to ignore the first token in target (usually the start token). If True, the last token of the output sequence will also be removed to be aligned with the target length.

forward(outputs, targets_dict, img_metas=None)[source]
Parameters
  • outputs (Tensor) – A raw logit tensor of shape \((N, T, C)\).

  • targets_dict (dict) – A dict with a key padded_targets, which is a tensor of shape \((N, T)\). Each element is the index of a character.

  • img_metas (None) – Unused.

Returns

A loss dict with the key loss_ce.

Return type

dict
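
A sketch of the underlying cross-entropy computation; the flattening and reduction='mean' here are illustrative only (the module's default reduction is 'none'):

    import torch
    import torch.nn.functional as F

    N, T, C = 2, 5, 10
    outputs = torch.randn(N, T, C)                  # raw logits
    padded_targets = torch.randint(0, C, (N, T))    # character indexes

    loss_ce = F.cross_entropy(
        outputs.reshape(-1, C),       # (N*T, C)
        padded_targets.reshape(-1),   # (N*T,)
        ignore_index=-1,              # default from the signature above
        reduction='mean',
    )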

class mmocr.models.textrecog.losses.CTCLoss(flatten=True, blank=0, reduction='mean', zero_infinity=False, **kwargs)[source]

Implementation of loss module for CTC-loss based text recognition.

Parameters
  • flatten (bool) – If True, use flattened targets, else padded targets.

  • blank (int) – Blank label. Default 0.

  • reduction (str) – Specifies the reduction to apply to the output, should be one of the following: (‘none’, ‘mean’, ‘sum’).

  • zero_infinity (bool) – Whether to zero infinite losses and the associated gradients. Default: False. Infinite losses mainly occur when the inputs are too short to be aligned to the targets.

forward(outputs, targets_dict, img_metas=None)[source]
Parameters
  • outputs (Tensor) – A raw logit tensor of shape \((N, T, C)\).

  • targets_dict (dict) –

    A dict with 3 keys target_lengths, flatten_targets and targets.

    • target_lengths (Tensor): A tensor of shape \((N)\). Each item is the length of a word.
    • flatten_targets (Tensor): Used if self.flatten=True (default). A tensor of shape (sum(targets_dict[‘target_lengths’])). Each item is the index of a character.
    • targets (Tensor): Used if self.flatten=False. A tensor of shape \((N, T)\). Empty slots are padded with self.blank.

  • img_metas (dict) – A dict that contains meta information of input images. Preferably with the key valid_ratio.

Returns

The loss dict with key loss_ctc.

Return type

dict
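
A sketch of the loss computation with PyTorch's nn.CTCLoss, using the flattened target layout described above; shapes and index values are hypothetical:

    import torch
    import torch.nn as nn

    N, T, C = 2, 12, 8           # batch, input length, classes (0 = blank)
    logits = torch.randn(N, T, C)
    log_probs = logits.log_softmax(-1).permute(1, 0, 2)  # CTCLoss wants (T, N, C)

    flatten_targets = torch.tensor([1, 2, 3, 3, 4, 5, 4, 6, 3, 7])
    target_lengths = torch.tensor([5, 5])
    input_lengths = torch.full((N,), T, dtype=torch.long)

    ctc = nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False)
    loss_ctc = ctc(log_probs, flatten_targets, input_lengths, target_lengths)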

class mmocr.models.textrecog.losses.SARLoss(ignore_index=0, reduction='mean', **kwargs)[source]

Implementation of the loss module in SAR <https://arxiv.org/abs/1811.00751>.

Parameters
  • ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient.

  • reduction (str) – Specifies the reduction to apply to the output, should be one of the following: (“none”, “mean”, “sum”).

Warning

SARLoss assumes that the first input token is always <SOS>.

class mmocr.models.textrecog.losses.SegLoss(seg_downsample_ratio=0.5, seg_with_loss_weight=True, ignore_index=255, **kwargs)[source]

Implementation of loss module for segmentation based text recognition method.

Parameters
  • seg_downsample_ratio (float) – Downsample ratio of segmentation map.

  • seg_with_loss_weight (bool) – If True, set weight for segmentation loss.

  • ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient.

forward(out_neck, out_head, gt_kernels)[source]
Parameters
  • out_neck (None) – Unused.

  • out_head (Tensor) – The output from head whose shape is \((N, C, H, W)\).

  • gt_kernels (BitmapMasks) – The ground truth masks.

Returns

A loss dictionary with the key loss_seg.

Return type

dict

class mmocr.models.textrecog.losses.TFLoss(ignore_index=-1, reduction='none', flatten=True, **kwargs)[source]

Implementation of loss module for transformer.

Parameters
  • ignore_index (int, optional) – The character index to be ignored in loss computation.

  • reduction (str) – Type of reduction to apply to the output, should be one of the following: (“none”, “mean”, “sum”).

  • flatten (bool) – Whether to flatten the vectors for loss computation.

Warning

TFLoss assumes that the first input token is always <SOS>.

textrecog_backbones

class mmocr.models.textrecog.backbones.NRTRModalityTransform(input_channels=3, init_cfg=[{'type': 'Kaiming', 'layer': 'Conv2d'}, {'type': 'Uniform', 'layer': 'BatchNorm2d'}])[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmocr.models.textrecog.backbones.ResNet31OCR(base_channels=3, layers=[1, 2, 5, 3], channels=[64, 128, 256, 256, 512, 512, 512], out_indices=None, stage4_pool_cfg={'kernel_size': (2, 1), 'stride': (2, 1)}, last_stage_pool=False, init_cfg=[{'type': 'Kaiming', 'layer': 'Conv2d'}, {'type': 'Uniform', 'layer': 'BatchNorm2d'}])[source]
Implement ResNet backbone for text recognition, modified from

ResNet

Parameters
  • base_channels (int) – Number of channels of input image tensor.

  • layers (list[int]) – List of BasicBlock number for each stage.

  • channels (list[int]) – List of out_channels of Conv2d layer.

  • out_indices (None | Sequence[int]) – Indices of output stages.

  • stage4_pool_cfg (dict) – Dictionary to construct and configure pooling layer in stage 4.

  • last_stage_pool (bool) – If True, add MaxPool2d layer to last stage.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmocr.models.textrecog.backbones.ResNetABI(in_channels=3, stem_channels=32, base_channels=32, arch_settings=[3, 4, 6, 6, 3], strides=[2, 1, 2, 1, 1], out_indices=None, last_stage_pool=False, init_cfg=[{'type': 'Xavier', 'layer': 'Conv2d'}, {'type': 'Constant', 'val': 1, 'layer': 'BatchNorm2d'}])[source]

Implement ResNet backbone for text recognition, modified from `ResNet.

<https://arxiv.org/pdf/1512.03385.pdf>`_ and https://github.com/FangShancheng/ABINet

Parameters
  • in_channels (int) – Number of channels of input image tensor.

  • stem_channels (int) – Number of stem channels.

  • base_channels (int) – Number of base channels.

  • arch_settings (list[int]) – List of BasicBlock number for each stage.

  • strides (Sequence[int]) – Strides of the first block of each stage.

  • out_indices (None | Sequence[int]) – Indices of output stages. If not specified, only the last stage will be returned.

  • last_stage_pool (bool) – If True, add MaxPool2d layer to last stage.

forward(x)[source]
Parameters

x (Tensor) – Image tensor of shape \((N, 3, H, W)\).

Returns

Feature tensor. Its shape depends on ResNetABI’s config. It can be a list of feature outputs at specific layers if out_indices is specified.

Return type

Tensor or list[Tensor]

class mmocr.models.textrecog.backbones.ShallowCNN(input_channels=1, hidden_dim=512, init_cfg=[{'type': 'Kaiming', 'layer': 'Conv2d'}, {'type': 'Uniform', 'layer': 'BatchNorm2d'}])[source]

Implement Shallow CNN block for SATRN.

SATRN: On Recognizing Texts of Arbitrary Shapes with 2D Self-Attention.

Parameters
  • base_channels (int) – Number of channels of input image tensor \(D_i\).

  • hidden_dim (int) – Size of hidden layers of the model \(D_m\).

  • init_cfg (dict or list[dict], optional) – Initialization configs.

forward(x)[source]
Parameters

x (Tensor) – Input image feature \((N, D_i, H, W)\).

Returns

A tensor of shape \((N, D_m, H/4, W/4)\).

Return type

Tensor

class mmocr.models.textrecog.backbones.VeryDeepVgg(leaky_relu=True, input_channels=3, init_cfg=[{'type': 'Xavier', 'layer': 'Conv2d'}, {'type': 'Uniform', 'layer': 'BatchNorm2d'}])[source]

Implement VGG-VeryDeep backbone for text recognition, modified from VGG-VeryDeep.

Parameters
  • leaky_relu (bool) – Use leakyRelu or not.

  • input_channels (int) – Number of channels of input image tensor.

forward(x)[source]
Parameters

x (Tensor) – Images of shape \((N, C, H, W)\).

Returns

The feature Tensor of shape \((N, 512, H/32, W/4+1)\).

Return type

Tensor
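
A minimal sketch illustrating the shape formula above (the input size is illustrative):

import torch
from mmocr.models.textrecog.backbones import VeryDeepVgg

backbone = VeryDeepVgg(leaky_relu=True, input_channels=3)
x = torch.rand(1, 3, 32, 100)  # (N, C, H, W)
out = backbone(x)              # per the formula: (1, 512, 32/32, 100/4+1) = (1, 512, 1, 26)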

textrecog_layers

class mmocr.models.textrecog.layers.Adaptive2DPositionalEncoding(d_hid=512, n_height=100, n_width=100, dropout=0.1, init_cfg=[{'type': 'Xavier', 'layer': 'Conv2d'}])[source]
Implement the adaptive 2D positional encoder for SATRN, see SATRN. Modified from https://github.com/Media-Smart/vedastr (licensed under the Apache License, Version 2.0).

Parameters
  • d_hid (int) – Dimensions of hidden layer.

  • n_height (int) – Max height of the 2D feature output.

  • n_width (int) – Max width of the 2D feature output.

  • dropout (float) – Dropout rate.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
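
A minimal sketch; the layer adds positional information to a feature map whose channel dimension equals d_hid, so the output keeps the input shape (sizes are illustrative):

import torch
from mmocr.models.textrecog.layers import Adaptive2DPositionalEncoding

pos_enc = Adaptive2DPositionalEncoding(d_hid=512, n_height=100, n_width=100)
feat = torch.rand(1, 512, 8, 25)  # (N, d_hid, H, W) with H <= n_height, W <= n_width
out = pos_enc(feat)               # same shape as feat, now position-aware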

class mmocr.models.textrecog.layers.BasicBlock(inplanes, planes, stride=1, dilation=1, downsample=None, use_conv1x1=False, style='pytorch', with_cp=False)[source]
class mmocr.models.textrecog.layers.BidirectionalLSTM(nIn, nHidden, nOut)[source]
forward(input)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmocr.models.textrecog.layers.Bottleneck(inplanes, planes, stride=1, downsample=False)[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmocr.models.textrecog.layers.DotProductAttentionLayer(dim_model=None)[source]
forward(query, key, value, mask=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmocr.models.textrecog.layers.PositionAwareLayer(dim_model, rnn_layers=2)[source]
forward(img_feature)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmocr.models.textrecog.layers.RobustScannerFusionLayer(dim_model, dim=-1, init_cfg=None)[source]
forward(x0, x1)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

kie_extractors

class mmocr.models.kie.extractors.SDMGR(backbone, neck=None, bbox_head=None, extractor={'featmap_strides': [1], 'roi_layer': {'output_size': 7, 'type': 'RoIAlign'}, 'type': 'mmdet.SingleRoIExtractor'}, visual_modality=False, train_cfg=None, test_cfg=None, class_list=None, init_cfg=None, openset=False)[source]

The implementation of the paper: Spatial Dual-Modality Graph Reasoning for Key Information Extraction. https://arxiv.org/abs/2103.14470.

Parameters
  • visual_modality (bool) – Whether to use the visual modality.

  • class_list (None | str) – Mapping file of class index to class name. If None, class index will be shown in show_results, else class name.

extract_feat(img, gt_bboxes)[source]

Directly extract features from the backbone+neck.

forward_test(img, img_metas, relations, texts, gt_bboxes, rescale=False)[source]

Parameters
  • imgs (list[Tensor]) – The outer list indicates test-time augmentations; each inner Tensor has shape NxCxHxW and contains all images in the batch.

  • img_metas (list[list[dict]]) – The outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch.

forward_train(img, img_metas, relations, texts, gt_bboxes, gt_labels)[source]
Parameters
  • img (tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.

  • img_metas (list[dict]) – A list of image info dict where each dict contains: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details of the values of these keys, please see mmdet.datasets.pipelines.Collect.

  • relations (list[tensor]) – Relations between bboxes.

  • texts (list[tensor]) – Texts in bboxes.

  • gt_bboxes (list[tensor]) – Each item is the truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.

  • gt_labels (list[tensor]) – Class indices corresponding to each box.

Returns

A dictionary of loss components.

Return type

dict[str, tensor]

show_result(img, result, boxes, win_name='', show=False, wait_time=0, out_file=None, **kwargs)[source]

Draw result on img.

Parameters
  • img (str or tensor) – The image to be displayed.

  • result (dict) – The results to draw on img.

  • boxes (list) – Bbox of img.

  • win_name (str) – The window name.

  • wait_time (int) – Value of waitKey param. Default: 0.

  • show (bool) – Whether to show the image. Default: False.

  • out_file (str or None) – The output filename. Default: None.

Returns

The image with results drawn on it, returned only when neither show nor out_file is set.

Return type

img (tensor)

kie_heads

class mmocr.models.kie.heads.SDMGRHead(num_chars=92, visual_dim=64, fusion_dim=1024, node_input=32, node_embed=256, edge_input=5, edge_embed=256, num_gnn=2, num_classes=26, loss={'type': 'SDMGRLoss'}, bidirectional=False, train_cfg=None, test_cfg=None, init_cfg={'mean': 0, 'override': {'name': 'edge_embed'}, 'std': 0.01, 'type': 'Normal'})[source]
forward(relations, texts, x=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

kie_losses

class mmocr.models.kie.losses.SDMGRLoss(node_weight=1.0, edge_weight=1.0, ignore=-100)[source]

The implementation of the loss for key information extraction proposed in the paper: Spatial Dual-Modality Graph Reasoning for Key Information Extraction.

https://arxiv.org/abs/2103.14470.

forward(node_preds, edge_preds, gts)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

mmocr.datasets

class mmocr.datasets.BaseDataset(ann_file, loader, pipeline, img_prefix='', test_mode=False)[source]

Custom dataset for text detection, text recognition, and their downstream tasks.

  1. The text detection annotation format is as follows. The annotations field is optional for testing (below is one line of the annotation file, with its line-json-str converted to a dict for readability):

    {
        "file_name": "sample.jpg",
        "height": 1080,
        "width": 960,
        "annotations": [
            {
                "iscrowd": 0,
                "category_id": 1,
                "bbox": [357.0, 667.0, 804.0, 100.0],
                "segmentation": [[361, 667, 710, 670, 72, 767, 357, 763]]
            }
        ]
    }

  2. The two text recognition annotation formats are as follows. The x1,y1,x2,y2,x3,y3,x4,y4 fields are used for online crop augmentation during training:

    format1: sample.jpg hello
    format2: sample.jpg 20 20 100 20 100 40 20 40 hello

Parameters
  • ann_file (str) – Annotation file path.

  • pipeline (list[dict]) – Processing pipeline.

  • loader (dict) – Dictionary to construct loader to load annotation infos.

  • img_prefix (str, optional) – Image prefix to generate full image path.

  • test_mode (bool, optional) – If set True, try…except will be turned off in __getitem__.

evaluate(results, metric=None, logger=None, **kwargs)[source]

Evaluate the dataset.

Parameters
  • results (list) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.

Returns

Evaluation results.

Return type

dict[str, float]

format_results(results, **kwargs)[source]

Placeholder to format result to dataset-specific output.

pre_pipeline(results)[source]

Prepare results dict for pipeline.

prepare_test_img(img_info)[source]

Get testing data from pipeline.

Parameters

img_info (int) – Index of data.

Returns

Testing data after pipeline with new keys introduced by the pipeline.

Return type

dict

prepare_train_img(index)[source]

Get training data and annotations from pipeline.

Parameters

index (int) – Index of data.

Returns

Training data and annotation after pipeline with new keys introduced by the pipeline.

Return type

dict
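
A hedged construction sketch (the annotation path and image prefix are illustrative; the loader config mirrors the HardDiskLoader and LineJsonParser classes documented below):

from mmocr.datasets import BaseDataset

dataset = BaseDataset(
    ann_file='instances_training.txt',  # illustrative path, one json-str per line (format 1 above)
    loader=dict(
        type='HardDiskLoader',
        repeat=1,
        parser=dict(
            type='LineJsonParser',
            keys=['file_name', 'height', 'width', 'annotations'])),
    pipeline=[],       # a real setup would list processing transforms here
    img_prefix='imgs/',
    test_mode=False)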

class mmocr.datasets.CustomFormatBundle(keys=[], call_super=True, visualize={'boundary_key': None, 'flag': False})[source]

Custom formatting bundle.

It formats common fields such as ‘img’ and ‘proposals’ as done in DefaultFormatBundle, while other fields such as ‘gt_kernels’ and ‘gt_effective_region_mask’ will be formatted to DC as follows:

  • gt_kernels: to DataContainer (cpu_only=True)

  • gt_effective_mask: to DataContainer (cpu_only=True)

Parameters
  • keys (list[str]) – Fields to be formatted to DC only.

  • call_super (bool) – If True, format common fields by DefaultFormatBundle, else format fields in keys above only.

  • visualize (dict) – If flag=True, visualize gt mask for debugging.

class mmocr.datasets.DBNetTargets(shrink_ratio=0.4, thr_min=0.3, thr_max=0.7, min_short_size=8)[source]

Generate gt shrunk text, gt threshold map, and their effective region masks to learn DBNet: Real-time Scene Text Detection with Differentiable Binarization [https://arxiv.org/abs/1911.08947]. This was partially adapted from https://github.com/MhLiao/DB.

Parameters
  • shrink_ratio (float) – The area shrunk ratio between text kernels and their text masks.

  • thr_min (float) – The minimum value of the threshold map.

  • thr_max (float) – The maximum value of the threshold map.

  • min_short_size (int) – The minimum size of polygon below which the polygon is invalid.

draw_border_map(polygon, canvas, mask)[source]

Generate threshold map for one polygon.

Parameters
  • polygon (ndarray) – The polygon boundary ndarray.

  • canvas (ndarray) – The generated threshold map.

  • mask (ndarray) – The generated threshold mask.

find_invalid(results)[source]

Find invalid polygons.

Parameters

results (dict) – The dict containing gt_mask.

Returns

The indicators for ignoring polygons.

Return type

ignore_tags (list[bool])

generate_targets(results)[source]

Generate the gt targets for DBNet.

Parameters

results (dict) – The input result dictionary.

Returns

The output result dictionary.

Return type

results (dict)

generate_thr_map(img_size, polygons)[source]

Generate threshold map.

Parameters
  • img_size (tuple(int)) – The image size (h,w)

  • polygons (list(ndarray)) – The polygon list.

Returns

  • thr_map (ndarray): The generated threshold map.

  • thr_mask (ndarray): The effective mask of the threshold map.

ignore_texts(results, ignore_tags)[source]

Ignore gt masks and gt_labels while padding gt_masks_ignore in results given ignore_tags.

Parameters
  • results (dict) – Result for one image.

  • ignore_tags (list[int]) – Indicate whether to ignore its corresponding ground truth text.

Returns

Results after filtering.

Return type

results (dict)

invalid_polygon(poly)[source]

Judge whether the input polygon is invalid: it is invalid if its area is smaller than 1, or if the shorter side of its minimum bounding box is smaller than min_short_size.

Parameters

poly (ndarray) – The polygon boundary point sequence.

Returns

Whether the polygon is invalid.

Return type

True/False (bool)
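
DBNetTargets is used as a step in a detection training pipeline; a hedged config sketch with the documented defaults (surrounding transforms elided):

train_pipeline = [
    # ... loading and augmentation transforms ...
    dict(
        type='DBNetTargets',
        shrink_ratio=0.4,
        thr_min=0.3,
        thr_max=0.7),
    # ... formatting transforms, e.g. CustomFormatBundle ...
]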

class mmocr.datasets.FCENetTargets(fourier_degree=5, resample_step=4.0, center_region_shrink_ratio=0.3, level_size_divisors=(8, 16, 32), level_proportion_range=((0, 0.4), (0.3, 0.7), (0.6, 1.0)))[source]

Generate the ground truth targets of FCENet: Fourier Contour Embedding for Arbitrary-Shaped Text Detection.

[https://arxiv.org/abs/2104.10442]

Parameters
  • fourier_degree (int) – The maximum Fourier transform degree k.

  • resample_step (float) – The step size for resampling the text center line (TCL). It’s better not to exceed half of the minimum width.

  • center_region_shrink_ratio (float) – The shrink ratio of text center region.

  • level_size_divisors (tuple(int)) – The downsample ratio on each level.

  • level_proportion_range (tuple(tuple(int))) – The range of text sizes assigned to each level.

cal_fourier_signature(polygon, fourier_degree)[source]

Calculate Fourier signature from input polygon.

Parameters
  • polygon (ndarray) – The input polygon.

  • fourier_degree (int) – The maximum Fourier degree K.

Returns

An array of shape (2k+1, 2) containing the real and imaginary parts of 2k+1 Fourier coefficients.

Return type

fourier_signature (ndarray)

clockwise(c, fourier_degree)[source]

Make sure the polygon reconstructed from Fourier coefficients c is in the clockwise direction.

Parameters

c (ndarray(complex)) – The Fourier coefficients.

Returns

The polygon in clockwise point order.

Return type

new_polygon (list[float])

generate_center_region_mask(img_size, text_polys)[source]

Generate text center region mask.

Parameters
  • img_size (tuple) – The image size of (height, width).

  • text_polys (list[list[ndarray]]) – The list of text polygons.

Returns

The text center region mask.

Return type

center_region_mask (ndarray)

generate_fourier_maps(img_size, text_polys)[source]

Generate Fourier coefficient maps.

Parameters
  • img_size (tuple) – The image size of (height, width).

  • text_polys (list[list[ndarray]]) – The list of text polygons.

Returns

  • fourier_real_map (ndarray): The real part maps of the Fourier coefficients.

  • fourier_image_map (ndarray): The imaginary part maps of the Fourier coefficients.

generate_level_targets(img_size, text_polys, ignore_polys)[source]

Generate ground truth target on each level.

Parameters
  • img_size (list[int]) – Shape of input image.

  • text_polys (list[list[ndarray]]) – A list of ground truth polygons.

  • ignore_polys (list[list[ndarray]]) – A list of ignored polygons.

Returns

A list of ground truth targets on each level.

Return type

level_maps (list(ndarray))

generate_targets(results)[source]

Generate the ground truth targets for FCENet.

Parameters

results (dict) – The input result dictionary.

Returns

The output result dictionary.

Return type

results (dict)

normalize_polygon(polygon)[source]

Normalize one polygon so that its start point is rightmost.

Parameters

polygon (list[float]) – The origin polygon.

Returns

The polygon with its start point at the rightmost position.

Return type

new_polygon (list[float])

poly2fourier(polygon, fourier_degree)[source]

Perform Fourier transformation to generate Fourier coefficients c_k from the polygon.

Parameters
  • polygon (ndarray) – An input polygon.

  • fourier_degree (int) – The maximum Fourier degree K.

Returns

Fourier coefficients.

Return type

c (ndarray(complex))

resample_polygon(polygon, n=400)[source]

Resample one polygon with n points on its boundary.

Parameters
  • polygon (list[float]) – The input polygon.

  • n (int) – The number of resampled points.

Returns

The resampled polygon.

Return type

resampled_polygon (list[float])
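
Like DBNetTargets, this transform is registered as a pipeline step; a hedged config sketch with the documented defaults:

train_pipeline = [
    # ... loading and augmentation transforms ...
    dict(
        type='FCENetTargets',
        fourier_degree=5,
        level_size_divisors=(8, 16, 32),
        level_proportion_range=((0, 0.4), (0.3, 0.7), (0.6, 1.0))),
    # ... formatting transforms ...
]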

class mmocr.datasets.HardDiskLoader(ann_file, parser, repeat=1)[source]

Load annotation file from hard disk to RAM.

Parameters

ann_file (str) – Annotation file path.

class mmocr.datasets.IcdarDataset(ann_file, pipeline, classes=None, data_root=None, img_prefix='', seg_prefix=None, proposal_file=None, test_mode=False, filter_empty_gt=True, select_first_k=-1)[source]
evaluate(results, metric='hmean-iou', logger=None, score_thr=0.3, rank_list=None, **kwargs)[source]

Evaluate the hmean metric.

Parameters
  • results (list[dict]) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.

  • rank_list (str) – json file used to save eval result of each image after ranking.

Returns

The evaluation results.

Return type

dict[dict[str, float]]

load_annotations(ann_file)[source]

Load annotation from COCO style annotation file.

Parameters

ann_file (str) – Path of annotation file.

Returns

Annotation info from COCO api.

Return type

list[dict]

class mmocr.datasets.KIEDataset(ann_file=None, loader=None, dict_file=None, img_prefix='', pipeline=None, norm=10.0, directed=False, test_mode=True, **kwargs)[source]
Parameters
  • ann_file (str) – Annotation file path.

  • pipeline (list[dict]) – Processing pipeline.

  • loader (dict) – Dictionary to construct loader to load annotation infos.

  • img_prefix (str, optional) – Image prefix to generate full image path.

  • test_mode (bool, optional) – If True, try…except will be turned off in __getitem__.

  • dict_file (str) – Character dict file path.

  • norm (float) – Norm to map value from one range to another.

compute_relation(boxes)[source]

Compute relation between every two boxes.

evaluate(results, metric='macro_f1', metric_options={'macro_f1': {'ignores': []}}, **kwargs)[source]

Evaluate the dataset.

Parameters
  • results (list) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.

Returns

Evaluation results.

Return type

dict[str, float]

list_to_numpy(ann_infos)[source]

Convert bboxes, relations, texts and labels to ndarray.

pad_text_indices(text_inds)[source]

Pad text index to same length.

pre_pipeline(results)[source]

Prepare results dict for pipeline.

prepare_train_img(index)[source]

Get training data and annotations from pipeline.

Parameters

index (int) – Index of data.

Returns

Training data and annotation after pipeline with new keys introduced by the pipeline.

Return type

dict

class mmocr.datasets.LineJsonParser(keys=[])[source]

Parse json-string of one line in annotation file to dict format.

Parameters

keys (list[str]) – Keys in both json-string and result dict.

class mmocr.datasets.LineStrParser(keys=['filename', 'text'], keys_idx=[0, 1], separator=' ', **kwargs)[source]

Parse string of one line in annotation file to dict format.

Parameters
  • keys (list[str]) – Keys in result dict.

  • keys_idx (list[int]) – Value index in sub-string list for each key above.

  • separator (str) – Separator used to split the string into a list of sub-strings.
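
A minimal sketch of both parsers on the recognition annotation formats shown earlier, assuming the get_item(lines, index) interface shared by the loader parsers:

from mmocr.datasets import LineJsonParser, LineStrParser

str_parser = LineStrParser(keys=['filename', 'text'], keys_idx=[0, 1], separator=' ')
info = str_parser.get_item(['sample.jpg hello'], 0)
# info == {'filename': 'sample.jpg', 'text': 'hello'}

json_parser = LineJsonParser(keys=['filename', 'text'])
info = json_parser.get_item(['{"filename": "sample.jpg", "text": "hello"}'], 0)
# info == {'filename': 'sample.jpg', 'text': 'hello'}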

class mmocr.datasets.LmdbLoader(ann_file, parser, repeat=1)[source]

Load annotation file with lmdb storage backend.

class mmocr.datasets.NerDataset(ann_file, loader, pipeline, img_prefix='', test_mode=False)[source]

Custom dataset for named entity recognition tasks.

Parameters
  • ann_file (str) – Annotation file path.

  • loader (dict) – Dictionary to construct loader to load annotation infos.

  • pipeline (list[dict]) – Processing pipeline.

  • test_mode (bool, optional) – If True, try…except will be turned off in __getitem__.

evaluate(results, metric=None, logger=None, **kwargs)[source]

Evaluate the dataset.

Parameters
  • results (list) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.

Returns

A dict containing the following keys: 'acc', 'recall', 'f1-score'.

Return type

info (dict)

prepare_train_img(index)[source]

Get training data and annotations after pipeline.

Parameters

index (int) – Index of data.

Returns

Training data and annotation after pipeline with new keys introduced by pipeline.

Return type

dict

class mmocr.datasets.OCRDataset(ann_file, loader, pipeline, img_prefix='', test_mode=False)[source]
evaluate(results, metric='acc', logger=None, **kwargs)[source]

Evaluate the dataset.

Parameters
  • results (list) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.

Returns

Evaluation results.

Return type

dict[str, float]

pre_pipeline(results)[source]

Prepare results dict for pipeline.

class mmocr.datasets.OCRSegDataset(ann_file, loader, pipeline, img_prefix='', test_mode=False)[source]
pre_pipeline(results)[source]

Prepare results dict for pipeline.

prepare_train_img(index)[source]

Get training data and annotations from pipeline.

Parameters

index (int) – Index of data.

Returns

Training data and annotation after pipeline with new keys introduced by the pipeline.

Return type

dict

class mmocr.datasets.OpensetKIEDataset(ann_file, loader, dict_file, img_prefix='', pipeline=None, norm=10.0, link_type='one-to-one', edge_thr=0.5, test_mode=True, key_node_idx=1, value_node_idx=2, node_classes=4)[source]

Openset KIE classifies the nodes (i.e. text boxes) into bg/key/value categories, and additionally learns key-value relationship among nodes.

Parameters
  • ann_file (str) – Annotation file path.

  • loader (dict) – Dictionary to construct loader to load annotation infos.

  • dict_file (str) – Character dict file path.

  • img_prefix (str, optional) – Image prefix to generate full image path.

  • pipeline (list[dict]) – Processing pipeline.

  • norm (float) – Norm to map value from one range to another.

  • link_type (str) – one-to-one | one-to-many | many-to-one | many-to-many. For many-to-many, one key box can have many values and vice versa.

  • edge_thr (float) – Score threshold for a valid edge.

  • test_mode (bool, optional) – If True, try…except will be turned off in __getitem__.

  • key_node_idx (int) – Index of key in node classes.

  • value_node_idx (int) – Index of value in node classes.

  • node_classes (int) – Number of node classes.

compute_openset_f1(preds, gts)[source]

Compute openset macro-f1 and micro-f1 score.

Parameters
  • preds (list[dict]) – List of prediction results, including keys: filename, pairs, etc.

  • gts (list[dict]) – List of ground-truth infos, including keys: filename, pairs, etc.

Returns

Evaluation result with keys: node_openset_micro_f1, node_openset_macro_f1, edge_openset_f1.

Return type

dict

decode_gt(filename)[source]

Decode ground truth.

Assemble boxes and labels into bboxes.

decode_pred(result)[source]

Decode prediction.

Assemble boxes and predicted labels into bboxes, and convert edges into matrix.

evaluate(results, metric='openset_f1', metric_options=None, **kwargs)[source]

Evaluate the dataset.

Parameters
  • results (list) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.

Returns

Evaluation results.

Return type

dict[str, float]

list_to_numpy(ann_infos)[source]

Convert bboxes, relations, texts and labels to ndarray.

pre_pipeline(results)[source]

Prepare results dict for pipeline.

class mmocr.datasets.TextDetDataset(ann_file, loader, pipeline, img_prefix='', test_mode=False)[source]
evaluate(results, metric='hmean-iou', score_thr=0.3, rank_list=None, logger=None, **kwargs)[source]

Evaluate the dataset.

Parameters
  • results (list) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • score_thr (float) – Score threshold for prediction map.

  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.

  • rank_list (str) – json file used to save eval result of each image after ranking.

Returns

Evaluation results.

Return type

dict[str, float]

prepare_train_img(index)[source]

Get training data and annotations from pipeline.

Parameters

index (int) – Index of data.

Returns

Training data and annotation after pipeline with new keys introduced by the pipeline.

Return type

dict

class mmocr.datasets.UniformConcatDataset(datasets, separate_eval=True, pipeline=None, force_apply=False, **kwargs)[source]

A wrapper of ConcatDataset which supports dataset pipeline assignment and replacement.

Parameters
  • datasets (list[dict] | list[list[dict]]) – A list of datasets cfgs.

  • separate_eval (bool) – Whether to evaluate the results separately if it is used as validation dataset. Defaults to True.

  • pipeline (None | list[dict] | list[list[dict]]) – If None, each dataset in datasets use its own pipeline; If list[dict], it will be assigned to the dataset whose pipeline is None in datasets; If list[list[dict]], pipeline of dataset which is None in datasets will be replaced by the corresponding pipeline in the list.

  • force_apply (bool) – If True, apply the pipeline above to each dataset even if it has its own pipeline. Default: False.
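
A hedged config sketch of the pipeline-assignment behavior (train1, train2, loader_cfg and train_pipeline are illustrative names):

# Both sub-dataset configs leave `pipeline` as None, so the shared
# train_pipeline is assigned to each of them.
train1 = dict(type='OCRDataset', ann_file='train1.txt', loader=loader_cfg, pipeline=None)
train2 = dict(type='OCRDataset', ann_file='train2.txt', loader=loader_cfg, pipeline=None)

data = dict(
    train=dict(
        type='UniformConcatDataset',
        datasets=[train1, train2],
        pipeline=train_pipeline))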

mmocr.datasets.build_dataloader(dataset, samples_per_gpu, workers_per_gpu, num_gpus=1, dist=True, shuffle=True, seed=None, runner_type='EpochBasedRunner', persistent_workers=False, **kwargs)[source]

Build PyTorch DataLoader.

In distributed training, each GPU/process has a dataloader. In non-distributed training, there is only one dataloader for all GPUs.

Parameters
  • dataset (Dataset) – A PyTorch dataset.

  • samples_per_gpu (int) – Number of training samples on each GPU, i.e., batch size of each GPU.

  • workers_per_gpu (int) – How many subprocesses to use for data loading for each GPU.

  • num_gpus (int) – Number of GPUs. Only used in non-distributed training.

  • dist (bool) – Distributed training/test or not. Default: True.

  • shuffle (bool) – Whether to shuffle the data at every epoch. Default: True.

  • seed (int, Optional) – Seed to be used. Default: None.

  • runner_type (str) – Type of runner. Default: EpochBasedRunner

  • persistent_workers (bool) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the worker Dataset instances alive. Only valid when PyTorch>=1.7.0. Default: False.

  • kwargs – any keyword argument to be used to initialize DataLoader

Returns

A PyTorch dataloader.

Return type

DataLoader
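
A hedged usage sketch, assuming cfg is a loaded mmcv.Config whose data.train section describes a dataset:

from mmocr.datasets import build_dataloader, build_dataset

dataset = build_dataset(cfg.data.train)  # cfg: a loaded mmcv.Config (assumed)
data_loader = build_dataloader(
    dataset,
    samples_per_gpu=8,
    workers_per_gpu=4,
    num_gpus=1,
    dist=False,
    shuffle=True,
    seed=42)
for data_batch in data_loader:
    ...  # feed batches to the model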

datasets

class mmocr.datasets.base_dataset.BaseDataset(ann_file, loader, pipeline, img_prefix='', test_mode=False)[source]

Custom dataset for text detection, text recognition, and their downstream tasks.

  1. The text detection annotation format is as follows. The annotations field is optional for testing (below is one line of the annotation file, with its line-json-str converted to a dict for readability):

    {
        "file_name": "sample.jpg",
        "height": 1080,
        "width": 960,
        "annotations": [
            {
                "iscrowd": 0,
                "category_id": 1,
                "bbox": [357.0, 667.0, 804.0, 100.0],
                "segmentation": [[361, 667, 710, 670, 72, 767, 357, 763]]
            }
        ]
    }

  2. The two text recognition annotation formats are as follows. The x1,y1,x2,y2,x3,y3,x4,y4 fields are used for online crop augmentation during training:

    format1: sample.jpg hello
    format2: sample.jpg 20 20 100 20 100 40 20 40 hello

Parameters
  • ann_file (str) – Annotation file path.

  • pipeline (list[dict]) – Processing pipeline.

  • loader (dict) – Dictionary to construct loader to load annotation infos.

  • img_prefix (str, optional) – Image prefix to generate full image path.

  • test_mode (bool, optional) – If set True, try…except will be turned off in __getitem__.

evaluate(results, metric=None, logger=None, **kwargs)[source]

Evaluate the dataset.

Parameters
  • results (list) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.

Returns

Evaluation results.

Return type

dict[str, float]

format_results(results, **kwargs)[source]

Placeholder to format result to dataset-specific output.

pre_pipeline(results)[source]

Prepare results dict for pipeline.

prepare_test_img(img_info)[source]

Get testing data from pipeline.

Parameters

img_info (int) – Index of data.

Returns

Testing data after pipeline with new keys introduced by the pipeline.

Return type

dict

prepare_train_img(index)[source]

Get training data and annotations from pipeline.

Parameters

index (int) – Index of data.

Returns

Training data and annotation after pipeline with new keys introduced by the pipeline.

Return type

dict

class mmocr.datasets.icdar_dataset.IcdarDataset(ann_file, pipeline, classes=None, data_root=None, img_prefix='', seg_prefix=None, proposal_file=None, test_mode=False, filter_empty_gt=True, select_first_k=-1)[source]
evaluate(results, metric='hmean-iou', logger=None, score_thr=0.3, rank_list=None, **kwargs)[source]

Evaluate the hmean metric.

Parameters
  • results (list[dict]) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.

  • rank_list (str) – json file used to save eval result of each image after ranking.

Returns

The evaluation results.

Return type

dict[dict[str, float]]

load_annotations(ann_file)[source]

Load annotation from COCO style annotation file.

Parameters

ann_file (str) – Path of annotation file.

Returns

Annotation info from COCO api.

Return type

list[dict]

class mmocr.datasets.ocr_dataset.OCRDataset(ann_file, loader, pipeline, img_prefix='', test_mode=False)[source]
evaluate(results, metric='acc', logger=None, **kwargs)[source]

Evaluate the dataset.

Parameters
  • results (list) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.

Returns

Evaluation results.

Return type

dict[str, float]

pre_pipeline(results)[source]

Prepare results dict for pipeline.

class mmocr.datasets.ocr_seg_dataset.OCRSegDataset(ann_file, loader, pipeline, img_prefix='', test_mode=False)[source]
pre_pipeline(results)[source]

Prepare results dict for pipeline.

prepare_train_img(index)[source]

Get training data and annotations from pipeline.

Parameters

index (int) – Index of data.

Returns

Training data and annotation after pipeline with new keys introduced by the pipeline.

Return type

dict

class mmocr.datasets.text_det_dataset.TextDetDataset(ann_file, loader, pipeline, img_prefix='', test_mode=False)[source]
evaluate(results, metric='hmean-iou', score_thr=0.3, rank_list=None, logger=None, **kwargs)[source]

Evaluate the dataset.

Parameters
  • results (list) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • score_thr (float) – Score threshold for prediction map.

  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.

  • rank_list (str) – json file used to save eval result of each image after ranking.

Returns

Evaluation results.

Return type

dict[str, float]

prepare_train_img(index)[source]

Get training data and annotations from pipeline.

Parameters

index (int) – Index of data.

Returns

Training data and annotation after pipeline with new keys introduced by the pipeline.

Return type

dict

class mmocr.datasets.kie_dataset.KIEDataset(ann_file=None, loader=None, dict_file=None, img_prefix='', pipeline=None, norm=10.0, directed=False, test_mode=True, **kwargs)[source]
Parameters
  • ann_file (str) – Annotation file path.

  • pipeline (list[dict]) – Processing pipeline.

  • loader (dict) – Dictionary to construct loader to load annotation infos.

  • img_prefix (str, optional) – Image prefix to generate full image path.

  • test_mode (bool, optional) – If True, try…except will be turned off in __getitem__.

  • dict_file (str) – Character dict file path.

  • norm (float) – Norm to map value from one range to another.

compute_relation(boxes)[source]

Compute relation between every two boxes.

evaluate(results, metric='macro_f1', metric_options={'macro_f1': {'ignores': []}}, **kwargs)[source]

Evaluate the dataset.

Parameters
  • results (list) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.

Returns

Evaluation results.

Return type

dict[str, float]

list_to_numpy(ann_infos)[source]

Convert bboxes, relations, texts and labels to ndarray.

pad_text_indices(text_inds)[source]

Pad text index to same length.

pre_pipeline(results)[source]

Prepare results dict for pipeline.

prepare_train_img(index)[source]

Get training data and annotations from pipeline.

Parameters

index (int) – Index of data.

Returns

Training data and annotation after pipeline with new keys introduced by the pipeline.

Return type

dict

pipelines

class mmocr.datasets.pipelines.ColorJitter(**kwargs)[source]

An interface for torch color jitter so that it can be invoked in mmdetection pipeline.

class mmocr.datasets.pipelines.CustomFormatBundle(keys=[], call_super=True, visualize={'boundary_key': None, 'flag': False})[source]

Custom formatting bundle.

It formats common fields such as ‘img’ and ‘proposals’ as done in DefaultFormatBundle, while other fields such as ‘gt_kernels’ and ‘gt_effective_region_mask’ will be formatted to DC as follows:

  • gt_kernels: to DataContainer (cpu_only=True)

  • gt_effective_mask: to DataContainer (cpu_only=True)

Parameters
  • keys (list[str]) – Fields to be formatted to DC only.

  • call_super (bool) – If True, format common fields by DefaultFormatBundle, else format fields in keys above only.

  • visualize (dict) – If flag=True, visualize gt mask for debugging.

class mmocr.datasets.pipelines.DBNetTargets(shrink_ratio=0.4, thr_min=0.3, thr_max=0.7, min_short_size=8)[source]

Generate gt shrunk text, gt threshold map, and their effective region masks to learn DBNet: Real-time Scene Text Detection with Differentiable Binarization [https://arxiv.org/abs/1911.08947]. This was partially adapted from https://github.com/MhLiao/DB.

Parameters
  • shrink_ratio (float) – The area shrunk ratio between text kernels and their text masks.

  • thr_min (float) – The minimum value of the threshold map.

  • thr_max (float) – The maximum value of the threshold map.

  • min_short_size (int) – The minimum size of polygon below which the polygon is invalid.

draw_border_map(polygon, canvas, mask)[source]

Generate threshold map for one polygon.

Parameters
  • polygon (ndarray) – The polygon boundary ndarray.

  • canvas (ndarray) – The generated threshold map.

  • mask (ndarray) – The generated threshold mask.

find_invalid(results)[source]

Find invalid polygons.

Parameters

results (dict) – The dict containing gt_mask.

Returns

The indicators for ignoring polygons.

Return type

ignore_tags (list[bool])

generate_targets(results)[source]

Generate the gt targets for DBNet.

Parameters

results (dict) – The input result dictionary.

Returns

The output result dictionary.

Return type

results (dict)

generate_thr_map(img_size, polygons)[source]

Generate threshold map.

Parameters
  • img_size (tuple(int)) – The image size (h,w)

  • polygons (list(ndarray)) – The polygon list.

Returns

  • thr_map (ndarray): The generated threshold map.

  • thr_mask (ndarray): The effective mask of the threshold map.

ignore_texts(results, ignore_tags)[source]

Ignore gt masks and gt_labels while padding gt_masks_ignore in results given ignore_tags.

Parameters
  • results (dict) – Result for one image.

  • ignore_tags (list[int]) – Indicate whether to ignore its corresponding ground truth text.

Returns

Results after filtering.

Return type

results (dict)

invalid_polygon(poly)[source]

Judge whether the input polygon is invalid: it is invalid if its area is smaller than 1, or if the shorter side of its minimum bounding box is smaller than min_short_size.

Parameters

poly (ndarray) – The polygon boundary point sequence.

Returns

Whether the polygon is invalid.

Return type

True/False (bool)

class mmocr.datasets.pipelines.FCENetTargets(fourier_degree=5, resample_step=4.0, center_region_shrink_ratio=0.3, level_size_divisors=(8, 16, 32), level_proportion_range=((0, 0.4), (0.3, 0.7), (0.6, 1.0)))[source]

Generate the ground truth targets of FCENet: Fourier Contour Embedding for Arbitrary-Shaped Text Detection.

[https://arxiv.org/abs/2104.10442]

Parameters
  • fourier_degree (int) – The maximum Fourier transform degree k.

  • resample_step (float) – The step size for resampling the text center line (TCL). It’s better not to exceed half of the minimum width.

  • center_region_shrink_ratio (float) – The shrink ratio of text center region.

  • level_size_divisors (tuple(int)) – The downsample ratio on each level.

  • level_proportion_range (tuple(tuple(int))) – The range of text sizes assigned to each level.

cal_fourier_signature(polygon, fourier_degree)[source]

Calculate Fourier signature from input polygon.

Parameters
  • polygon (ndarray) – The input polygon.

  • fourier_degree (int) – The maximum Fourier degree K.

Returns

An array of shape (2k+1, 2) containing the real and imaginary parts of 2k+1 Fourier coefficients.

Return type

fourier_signature (ndarray)

clockwise(c, fourier_degree)[source]

Make sure the polygon reconstructed from Fourier coefficients c is in the clockwise direction.

Parameters

c (ndarray(complex)) – The Fourier coefficients.

Returns

The polygon in clockwise point order.

Return type

new_polygon (list[float])

generate_center_region_mask(img_size, text_polys)[source]

Generate text center region mask.

Parameters
  • img_size (tuple) – The image size of (height, width).

  • text_polys (list[list[ndarray]]) – The list of text polygons.

Returns

The text center region mask.

Return type

center_region_mask (ndarray)

generate_fourier_maps(img_size, text_polys)[source]

Generate Fourier coefficient maps.

Parameters
  • img_size (tuple) – The image size of (height, width).

  • text_polys (list[list[ndarray]]) – The list of text polygons.

Returns

  • fourier_real_map (ndarray): The real part maps of the Fourier coefficients.

  • fourier_image_map (ndarray): The imaginary part maps of the Fourier coefficients.

generate_level_targets(img_size, text_polys, ignore_polys)[source]

Generate ground truth target on each level.

Parameters
  • img_size (list[int]) – Shape of input image.

  • text_polys (list[list[ndarray]]) – A list of ground truth polygons.

  • ignore_polys (list[list[ndarray]]) – A list of ignored polygons.

Returns

A list of ground truth targets on each level.

Return type

level_maps (list(ndarray))

generate_targets(results)[source]

Generate the ground truth targets for FCENet.

Parameters

results (dict) – The input result dictionary.

Returns

The output result dictionary.

Return type

results (dict)

normalize_polygon(polygon)[source]

Normalize one polygon so that its start point is rightmost.

Parameters

polygon (list[float]) – The origin polygon.

Returns

The polygon with its start point at the rightmost position.

Return type

new_polygon (list[float])

poly2fourier(polygon, fourier_degree)[source]

Perform Fourier transformation to generate Fourier coefficients c_k from the polygon.

Parameters
  • polygon (ndarray) – An input polygon.

  • fourier_degree (int) – The maximum Fourier degree K.

Returns

Fourier coefficients.

Return type

c (ndarray(complex))

resample_polygon(polygon, n=400)[source]

Resample one polygon with n points on its boundary.

Parameters
  • polygon (list[float]) – The input polygon.

  • n (int) – The number of resampled points.

Returns

The resampled polygon.

Return type

resampled_polygon (list[float])

class mmocr.datasets.pipelines.FancyPCA(eig_vec=None, eig_val=None)[source]

Implementation of PCA based image augmentation, proposed in the paper Imagenet Classification With Deep Convolutional Neural Networks.

It alters the intensities of RGB values along the principal components of the ImageNet dataset.

class mmocr.datasets.pipelines.ImgAug(args=None)[source]

A wrapper to use imgaug https://github.com/aleju/imgaug.

Parameters

args ([list[list|dict]]) – The augmentation list. For details, please refer to the imgaug documentation. Take args=[['Fliplr', 0.5], dict(cls='Affine', rotate=[-10, 10]), ['Resize', [0.5, 3.0]]] as an example: it horizontally flips images with probability 0.5, then applies a random rotation with angles in the range [-10, 10], and finally resizes with an independent scale in the range [0.5, 3.0] for each side of the image.
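
A hedged pipeline sketch of the example described above:

train_pipeline = [
    # ... image loading ...
    dict(
        type='ImgAug',
        args=[['Fliplr', 0.5],
              dict(cls='Affine', rotate=[-10, 10]),
              ['Resize', [0.5, 3.0]]]),
    # ... target generation and formatting ...
]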

class mmocr.datasets.pipelines.KIEFormatBundle(img_to_float=True)[source]

Key information extraction formatting bundle.

Based on the DefaultFormatBundle, it simplifies the pipeline of formatting common fields, including "img", "proposals", "gt_bboxes", "gt_labels", "gt_masks", "gt_semantic_seg", "relations" and "texts". These fields are formatted as follows.

  • img: (1) transpose, (2) to tensor, (3) to DataContainer (stack=True)

  • proposals: (1) to tensor, (2) to DataContainer

  • gt_bboxes: (1) to tensor, (2) to DataContainer

  • gt_bboxes_ignore: (1) to tensor, (2) to DataContainer

  • gt_labels: (1) to tensor, (2) to DataContainer

  • gt_masks: (1) to tensor, (2) to DataContainer (cpu_only=True)

  • gt_semantic_seg: (1) unsqueeze dim-0, (2) to tensor, (3) to DataContainer (stack=True)

  • relations: (1) scale, (2) to tensor, (3) to DataContainer

  • texts: (1) to tensor, (2) to DataContainer

class mmocr.datasets.pipelines.LoadImageFromNdarray(to_float32=False, color_type='color', file_client_args={'backend': 'disk'})[source]

Load an image from np.ndarray.

Similar to LoadImageFromFile, but the image is read from results['img'], which is an np.ndarray.

class mmocr.datasets.pipelines.LoadTextAnnotations(with_bbox=True, with_label=True, with_mask=False, with_seg=False, poly2mask=True, use_img_shape=False)[source]

Load annotations for text detection.

Parameters
  • with_bbox (bool) – Whether to parse and load the bbox annotation. Default: True.

  • with_label (bool) – Whether to parse and load the label annotation. Default: True.

  • with_mask (bool) – Whether to parse and load the mask annotation. Default: False.

  • with_seg (bool) – Whether to parse and load the semantic segmentation annotation. Default: False.

  • poly2mask (bool) – Whether to convert the instance masks from polygons to bitmaps. Default: True.

  • use_img_shape (bool) – Use the shape of loaded image from previous pipeline LoadImageFromFile to generate mask.

process_polygons(polygons)[source]

Convert polygons to list of ndarray and filter invalid polygons.

Parameters

polygons (list[list]) – Polygons of one instance.

Returns

Processed polygons.

Return type

list[numpy.ndarray]

class mmocr.datasets.pipelines.MultiRotateAugOCR(transforms, rotate_degrees=None, force_rotate=False)[source]

Test-time augmentation with multiple rotations in the case that img_height > img_width.

An example configuration is as follows:

rotate_degrees=[0, 90, 270],
transforms=[
    dict(
        type='ResizeOCR',
        height=32,
        min_width=32,
        max_width=160,
        keep_aspect_ratio=True),
    dict(type='ToTensorOCR'),
    dict(type='NormalizeOCR', **img_norm_cfg),
    dict(
        type='Collect',
        keys=['img'],
        meta_keys=[
            'filename', 'ori_shape', 'img_shape', 'valid_ratio'
        ]),
]

After MultiRotateAugOCR with above configuration, the results are wrapped into lists of the same length as follows:

dict(
    img=[...],
    img_shape=[...]
    ...
)
Parameters
  • transforms (list[dict]) – Transformation applied for each augmentation.

  • rotate_degrees (list[int] | None) – Degrees of anti-clockwise rotation.

  • force_rotate (bool) – If True, rotate the image by 'rotate_degrees' while ignoring the image aspect ratio.

class mmocr.datasets.pipelines.NerTransform(label_convertor, max_len)[source]

Convert text to ID and entity in ground truth to label ID. The masks and tokens are generated at the same time. The four parameters will be used as input to the model.

Parameters
  • label_convertor – Convert text to ID and entity in ground truth to label ID.

  • max_len (int) – Limited maximum input length.

class mmocr.datasets.pipelines.NormalizeOCR(mean, std)[source]

Normalize a tensor image with mean and standard deviation.

class mmocr.datasets.pipelines.OCRSegTargets(label_convertor=None, attn_shrink_ratio=0.5, seg_shrink_ratio=0.25, box_type='char_rects', pad_val=255)[source]

Generate gt shrunk kernels for segmentation based OCR framework.

Parameters
  • label_convertor (dict) – Dictionary to construct label_convertor to convert char to index.

  • attn_shrink_ratio (float) – The area shrunk ratio between attention kernels and gt text masks.

  • seg_shrink_ratio (float) – The area shrunk ratio between segmentation kernels and gt text masks.

  • box_type (str) – Character box type, should be either ‘char_rects’ or ‘char_quads’, with ‘char_rects’ for rectangle with xyxy style and ‘char_quads’ for quadrangle with x1y1x2y2x3y3x4y4 style.

generate_kernels(resize_shape, pad_shape, char_boxes, char_inds, shrink_ratio=0.5, binary=True)[source]

Generate char instance kernels for one shrink ratio.

Parameters
  • resize_shape (tuple(int, int)) – Image size (height, width) after resizing.

  • pad_shape (tuple(int, int)) – Image size (height, width) after padding.

  • char_boxes (list[list[float]]) – The list of char polygons.

  • char_inds (list[int]) – List of char indexes.

  • shrink_ratio (float) – The shrink ratio of kernel.

  • binary (bool) – If True, return binary ndarray containing 0 & 1 only.

Returns

The text kernel mask of (height, width).

Return type

char_kernel (ndarray)

shrink_char_quad(char_quad, shrink_ratio)[source]

Shrink char box in style of quadrangle.

Parameters
  • char_quad (list[float]) – Char box with format [x1, y1, x2, y2, x3, y3, x4, y4].

  • shrink_ratio (float) – The area shrunk ratio between gt kernels and gt text masks.

shrink_char_rect(char_rect, shrink_ratio)[source]

Shrink char box in style of rectangle.

Parameters
  • char_rect (list[float]) – Char box with format [x_min, y_min, x_max, y_max].

  • shrink_ratio (float) – The area shrunk ratio between gt kernels and gt text masks.

class mmocr.datasets.pipelines.OneOfWrapper(transforms)[source]

Randomly select and apply one of the transforms, each with equal chance.

Warning

Different from albumentations, this wrapper only runs the selected transform, but doesn’t guarantee the transform can always be applied to the input if the transform comes with a probability to run.

Parameters

transforms (list[dict|callable]) – Candidate transforms to be applied.

class mmocr.datasets.pipelines.OnlineCropOCR(box_keys=['x1', 'y1', 'x2', 'y2', 'x3', 'y3', 'x4', 'y4'], jitter_prob=0.5, max_jitter_ratio_x=0.05, max_jitter_ratio_y=0.02)[source]

Crop text areas from the whole image with bounding box jitter. If no bbox is given, return the input directly.

Parameters
  • box_keys (list[str]) – Keys in results which correspond to RoI bbox.

  • jitter_prob (float) – The probability of box jitter.

  • max_jitter_ratio_x (float) – Maximum horizontal jitter ratio relative to height.

  • max_jitter_ratio_y (float) – Maximum vertical jitter ratio relative to height.

class mmocr.datasets.pipelines.OpencvToPil(**kwargs)[source]

Convert numpy.ndarray (bgr) to PIL Image (rgb).

class mmocr.datasets.pipelines.PANetTargets(shrink_ratio=(1.0, 0.5), max_shrink=20)[source]

Generate the ground truths for PANet: Efficient and Accurate Arbitrary-Shaped Text Detection with Pixel Aggregation Network.

[https://arxiv.org/abs/1908.05900]. This code is partially adapted from https://github.com/WenmuZhou/PAN.pytorch.

Parameters
  • shrink_ratio (tuple[float]) – The ratios for shrinking text instances.

  • max_shrink (int) – The maximum shrink distance.

generate_targets(results)[source]

Generate the gt targets for PANet.

Parameters

results (dict) – The input result dictionary.

Returns

The output result dictionary.

Return type

results (dict)

class mmocr.datasets.pipelines.PilToOpencv(**kwargs)[source]

Convert PIL Image (rgb) to numpy.ndarray (bgr).

class mmocr.datasets.pipelines.PyramidRescale(factor=4, base_shape=(128, 512), randomize_factor=True)[source]

Resize the image to the base shape, downsample it with a Gaussian pyramid, and rescale it back to the original size.

Adapted from https://github.com/FangShancheng/ABINet.

Parameters
  • factor (int) – The decay factor from base size, or the number of downsampling operations from the base layer.

  • base_shape (tuple(int)) – The shape of the base layer of the pyramid.

  • randomize_factor (bool) – If True, the final factor would be a random integer in [0, factor].

Required Keys
  • img (ndarray): The input image.
Affected Keys
Modified
  • img (ndarray): The modified image.
class mmocr.datasets.pipelines.RandomCropInstances(target_size, instance_key, mask_type='inx0', positive_sample_ratio=0.625)[source]

Randomly crop images and make sure to contain text instances.

Parameters
  • target_size (tuple or int) – (height, width)

  • positive_sample_ratio (float) – The probability of sampling regions that go through positive regions.

class mmocr.datasets.pipelines.RandomCropPolyInstances(instance_key='gt_masks', crop_ratio=0.625, min_side_ratio=0.4)[source]

Randomly crop images and make sure to contain at least one intact instance.

sample_crop_box(img_size, results)[source]

Generate crop box and make sure not to crop the polygon instances.

Parameters
  • img_size (tuple(int)) – The image size (h, w).

  • results (dict) – The results dict.

class mmocr.datasets.pipelines.RandomPaddingOCR(max_ratio=None, box_type=None)[source]

Pad the given image on all sides, as well as modify the coordinates of character bounding box in image.

Parameters
  • max_ratio (list[int]) – [left, top, right, bottom].

  • box_type (None|str) – Character box type. If not none, should be either ‘char_rects’ or ‘char_quads’, with ‘char_rects’ for rectangle with xyxy style and ‘char_quads’ for quadrangle with x1y1x2y2x3y3x4y4 style.

class mmocr.datasets.pipelines.RandomRotateImageBox(min_angle=-10, max_angle=10, box_type='char_quads')[source]

Rotate augmentation for segmentation based text recognition.

Parameters
  • min_angle (int) – Minimum rotation angle for image and box.

  • max_angle (int) – Maximum rotation angle for image and box.

  • box_type (str) – Character box type, should be either ‘char_rects’ or ‘char_quads’, with ‘char_rects’ for rectangle with xyxy style and ‘char_quads’ for quadrangle with x1y1x2y2x3y3x4y4 style.

class mmocr.datasets.pipelines.RandomRotateTextDet(rotate_ratio=1.0, max_angle=10)[source]

Randomly rotate images.

class mmocr.datasets.pipelines.RandomWrapper(transforms, p)[source]

Run a transform or a sequence of transforms with probability p.

Parameters
  • transforms (list[dict|callable]) – Transform(s) to be applied.

  • p (int|float) – Probability of running transform(s).

class mmocr.datasets.pipelines.ResizeNoImg(img_scale, keep_ratio=True)[source]

Image resizing without img.

Used for KIE.

class mmocr.datasets.pipelines.ResizeOCR(height, min_width=None, max_width=None, keep_aspect_ratio=True, img_pad_value=0, width_downsample_ratio=0.0625, backend=None)[source]

Image resizing and padding for OCR.

Parameters
  • height (int | tuple(int)) – Image height after resizing.

  • min_width (none | int | tuple(int)) – Image minimum width after resizing.

  • max_width (none | int | tuple(int)) – Image maximum width after resizing.

  • keep_aspect_ratio (bool) – Keep the image aspect ratio during resizing if True; otherwise, resize to the size height * max_width.

  • img_pad_value (int) – Scalar to fill padding area.

  • width_downsample_ratio (float) – Downsample ratio in horizontal direction from input image to output feature.

  • backend (str | None) – The image resize backend type. Options are cv2, pillow, None. If backend is None, the global imread_backend specified by mmcv.use_backend() will be used. Default: None.
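
A hedged config sketch matching a common recognition setting (the same values appear in the MultiRotateAugOCR example above):

test_pipeline = [
    # ... image loading ...
    dict(
        type='ResizeOCR',
        height=32,
        min_width=32,
        max_width=160,
        keep_aspect_ratio=True),
    dict(type='ToTensorOCR'),
    # ... normalization and collection ...
]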

class mmocr.datasets.pipelines.ScaleAspectJitter(img_scale=None, multiscale_mode='range', ratio_range=None, keep_ratio=False, resize_type='around_min_img_scale', aspect_ratio_range=None, long_size_bound=None, short_size_bound=None, scale_range=None)[source]

Resize image and segmentation mask encoded by coordinates.

Allowed resize types are around_min_img_scale, long_short_bound, and indep_sample_in_range.

class mmocr.datasets.pipelines.TextSnakeTargets(orientation_thr=2.0, resample_step=4.0, center_region_shrink_ratio=0.3)[source]

Generate the ground truth targets of TextSnake: TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes.

[https://arxiv.org/abs/1807.01544]. This was partially adapted from https://github.com/princewang1994/TextSnake.pytorch.

Parameters

orientation_thr (float) – The threshold for distinguishing between head edge and tail edge among the horizontal and vertical edges of a quadrangle.

cal_curve_length(line)[source]

Calculate the length of each edge on the discrete curve and the sum.

Parameters

line (ndarray) – The points composing a discrete curve.

Returns

Returns (edges_length, total_length).

  • edges_length (ndarray): The length of each edge on the discrete curve.
  • total_length (float): The total length of the discrete curve.

Return type

tuple

draw_center_region_maps(top_line, bot_line, center_line, center_region_mask, radius_map, sin_map, cos_map, region_shrink_ratio)[source]

Draw attributes on text center region.

Parameters
  • top_line (ndarray) – The points composing top curved sideline of text polygon.

  • bot_line (ndarray) – The points composing bottom curved sideline of text polygon.

  • center_line (ndarray) – The points composing the center line of text instance.

  • center_region_mask (ndarray) – The text center region mask.

  • radius_map (ndarray) – The map where the distance from point to sidelines will be drawn on for each pixel in text center region.

  • sin_map (ndarray) – The map where vector_sin(theta) will be drawn on text center regions. Theta is the angle between tangent line and vector (1, 0).

  • cos_map (ndarray) – The map where vector_cos(theta) will be drawn on text center regions. Theta is the angle between tangent line and vector (1, 0).

  • region_shrink_ratio (float) – The shrink ratio of text center.

find_head_tail(points, orientation_thr)[source]

Find the head edge and tail edge of a text polygon.

Parameters
  • points (ndarray) – The points composing a text polygon.

  • orientation_thr (float) – The threshold for distinguishing between head edge and tail edge among the horizontal and vertical edges of a quadrangle.

Returns

  • head_inds (list): The indexes of the two points composing the head edge.

  • tail_inds (list): The indexes of the two points composing the tail edge.

generate_center_mask_attrib_maps(img_size, text_polys)[source]

Generate text center region mask and geometric attribute maps.

Parameters
  • img_size (tuple) – The image size of (height, width).

  • text_polys (list[list[ndarray]]) – The list of text polygons.

Returns

  • center_region_mask (ndarray): The text center region mask.

  • radius_map (ndarray): The distance map from each pixel in the text center region to the top sideline.

  • sin_map (ndarray): The sin(theta) map, where theta is the angle between the vector (top point - bottom point) and the vector (1, 0).

  • cos_map (ndarray): The cos(theta) map, where theta is the angle between the vector (top point - bottom point) and the vector (1, 0).

generate_targets(results)[source]

Generate the gt targets for TextSnake.

Parameters

results (dict) – The input result dictionary.

Returns

The output result dictionary.

Return type

results (dict)

generate_text_region_mask(img_size, text_polys)[source]

Generate the text region mask.

Parameters
  • img_size (tuple) – The image size (height, width).

  • text_polys (list[list[ndarray]]) – The list of text polygons.

Returns

The text region mask.

Return type

text_region_mask (ndarray)
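
A hedged sketch; the flat [x1, y1, ..., xn, yn] polygon encoding is an assumption based on the list[list[ndarray]] signature above:

import numpy as np

from mmocr.datasets.pipelines import TextSnakeTargets

targets = TextSnakeTargets()
# One rectangular text instance inside a 60 x 60 canvas.
polys = [[np.array([10, 10, 50, 10, 50, 20, 10, 20], dtype=np.float32)]]
mask = targets.generate_text_region_mask((60, 60), polys)
# expected: a (60, 60) binary mask with 1s inside the polygon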

reorder_poly_edge(points)[source]

Get the respective points composing head edge, tail edge, top sideline and bottom sideline.

Parameters

points (ndarray) – The points composing a text polygon.

Returns

  • head_edge (ndarray): The two points composing the head edge of the text polygon.
  • tail_edge (ndarray): The two points composing the tail edge of the text polygon.
  • top_sideline (ndarray): The points composing the top curved sideline of the text polygon.
  • bot_sideline (ndarray): The points composing the bottom curved sideline of the text polygon.

Return type

tuple

resample_line(line, n)[source]

Resample n points on a line.

Parameters
  • line (ndarray) – The points composing a line.

  • n (int) – The number of resampled points.

Returns

The points composing the resampled line.

Return type

resampled_line (ndarray)
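
A hedged sketch of resampling a straight segment; whether the endpoints are always kept exactly is an assumption:

import numpy as np

from mmocr.datasets.pipelines import TextSnakeTargets

targets = TextSnakeTargets()
line = np.array([[0, 0], [10, 0]], dtype=np.float32)
resampled = targets.resample_line(line, 5)
# expected: 5 roughly evenly spaced points from (0, 0) to (10, 0)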

resample_sidelines(sideline1, sideline2, resample_step)[source]

Resample two sidelines so that they have the same number of points, according to the step size.

Parameters
  • sideline1 (ndarray) – The points composing a sideline of a text polygon.

  • sideline2 (ndarray) – The points composing another sideline of a text polygon.

  • resample_step (float) – The resampled step size.

Returns

  • resampled_line1 (ndarray): The resampled sideline 1.
  • resampled_line2 (ndarray): The resampled sideline 2.

Return type

tuple
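
A hedged sketch; the point ordering of the input sidelines is an assumption:

import numpy as np

from mmocr.datasets.pipelines import TextSnakeTargets

targets = TextSnakeTargets()
top = np.array([[0, 0], [40, 0]], dtype=np.float32)
bot = np.array([[0, 10], [20, 12], [40, 10]], dtype=np.float32)
r_top, r_bot = targets.resample_sidelines(top, bot, resample_step=4.0)
# Both sidelines end up with the same number of points, which makes
# per-point pairing (e.g. for deriving the center line) possible.
assert len(r_top) == len(r_bot)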

class mmocr.datasets.pipelines.ToTensorNER[source]

Convert data with list type to tensor.

class mmocr.datasets.pipelines.ToTensorOCR[source]

Convert a PIL Image or numpy.ndarray to tensor.

class mmocr.datasets.pipelines.TorchVisionWrapper(op, **kwargs)[source]

A wrapper of torchvision transforms. It applies a specific transform to the image and updates img_shape accordingly.

Warning

This transform only affects the image but not its associated annotations, such as word bounding boxes and polygon masks. Therefore, it may only be applicable to text recognition tasks.

Parameters
  • op (str) – The name of any transform class in torchvision.transforms.

  • **kwargs – Arguments that will be passed to initializer of torchvision transform.

Required Keys

  • img (ndarray): The input image.

Affected Keys

  • Modified: img (ndarray): The modified image.
  • Added: img_shape (tuple(int)): Size of the modified image.
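
A hedged config sketch wrapping torchvision's ColorJitter; the jitter values are illustrative assumptions:

train_pipeline = [
    # ... image loading ...
    dict(
        type='TorchVisionWrapper',
        op='ColorJitter',  # class name looked up in torchvision.transforms
        brightness=0.5,    # forwarded to ColorJitter's initializer
        saturation=0.5),
    # ... formatting / collect transforms ...
]
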
mmocr.datasets.pipelines.sort_vertex(points_x, points_y)[source]

Sort box vertices in clockwise order from left-top first.

Parameters
  • points_x (list[float]) – x of four vertices.

  • points_y (list[float]) – y of four vertices.

Returns

  • sorted_points_x (list[float]): x of the sorted four vertices.
  • sorted_points_y (list[float]): y of the sorted four vertices.

Return type

tuple

mmocr.datasets.pipelines.sort_vertex8(points)[source]

Sort the vertices of a box given as 8 values [x1 y1 x2 y2 x3 y3 x4 y4].
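
A hedged sketch of both sorting helpers; the exact output ordering shown in the comments is an assumption:

from mmocr.datasets.pipelines import sort_vertex, sort_vertex8

# Four vertices of an axis-aligned box, given in scrambled order.
points_x = [10.0, 0.0, 10.0, 0.0]
points_y = [10.0, 10.0, 0.0, 0.0]
sorted_x, sorted_y = sort_vertex(points_x, points_y)
# expected: left-top first, clockwise, e.g. (0, 0), (10, 0), (10, 10), (0, 10)

# Flat 8-value variant: same ordering, but as [x1, y1, ..., x4, y4].
sorted_pts = sort_vertex8([10, 10, 0, 10, 10, 0, 0, 0])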

utils

class mmocr.datasets.utils.HardDiskLoader(ann_file, parser, repeat=1)[source]

Load annotation file from hard disk to RAM.

Parameters

  • ann_file (str) – Annotation file path.

  • parser – Parser instance (e.g. LineStrParser or LineJsonParser below) used to convert each annotation line to a dict.

  • repeat (int) – The number of times the annotation list is repeated.
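
A hedged usage sketch pairing the loader with a parser (both parsers are documented below); the annotation path is hypothetical, and len()/indexing support is assumed from the loader/parser design:

from mmocr.datasets.utils import HardDiskLoader, LineStrParser

# Hypothetical annotation file where each line is '<filename> <text>'.
loader = HardDiskLoader(
    ann_file='data/train_label.txt',  # assumed path
    parser=LineStrParser(keys=['filename', 'text']),
    repeat=1)
print(len(loader))  # number of annotation lines (times repeat)
print(loader[0])    # expected: {'filename': ..., 'text': ...}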

class mmocr.datasets.utils.LineJsonParser(keys=[])[source]

Parse the JSON string of one line in the annotation file into dict format.

Parameters

keys (list[str]) – Keys in both json-string and result dict.
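
A hedged sketch; the get_item(data, index) interface is an assumption based on how loaders delegate to parsers:

from mmocr.datasets.utils import LineJsonParser

parser = LineJsonParser(keys=['filename', 'text'])
data = ['{"filename": "img_1.jpg", "text": "hello"}']
item = parser.get_item(data, 0)
# expected: {'filename': 'img_1.jpg', 'text': 'hello'}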

class mmocr.datasets.utils.LineStrParser(keys=['filename', 'text'], keys_idx=[0, 1], separator=' ', **kwargs)[source]

Parse the string of one line in the annotation file into dict format.

Parameters
  • keys (list[str]) – Keys in result dict.

  • keys_idx (list[int]) – Value index in sub-string list for each key above.

  • separator (str) – Separator used to split the line into a list of sub-strings.
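
A hedged sketch mirroring the JSON parser example above, with the same assumed get_item(data, index) interface:

from mmocr.datasets.utils import LineStrParser

parser = LineStrParser(keys=['filename', 'text'], keys_idx=[0, 1], separator=' ')
data = ['img_1.jpg hello']
item = parser.get_item(data, 0)
# expected: {'filename': 'img_1.jpg', 'text': 'hello'}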

class mmocr.datasets.utils.LmdbLoader(ann_file, parser, repeat=1)[source]

Load annotation file with lmdb storage backend.
