Changelog of v1.x

v1.0.0 (04/06/2023)

We are excited to announce the first official release of MMOCR 1.0, with numerous enhancements, bug fixes, and the introduction of new dataset support!

🌟 Highlights

  • Support for SCUT-CTW1500, SynthText, and MJSynth datasets

  • Updated FAQ and documentation

  • Deprecation of file_client_args in favor of backend_args

  • Added a new MMOCR tutorial notebook

🆕 New Features & Enhancements

  • Add SCUT-CTW1500 by @Mountchicken in

  • Cherry Pick #1205 by @gaotongxiao in

  • Make lanms-neo optional by @gaotongxiao in

  • SynthText by @gaotongxiao in

  • Deprecate file_client_args and use backend_args instead by @gaotongxiao in

  • MJSynth by @gaotongxiao in

  • Add MMOCR tutorial notebook by @gaotongxiao in

  • decouple batch_size to det_batch_size, rec_batch_size and kie_batch_size in MMOCRInferencer by @hugotong6425 in

  • Accepts local-rank in and by @gaotongxiao in

  • update stitch_boxes_into_lines by @cherryjm in

  • Add tests for pytorch 2.0 by @gaotongxiao in
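
The file_client_args deprecation can be illustrated with a minimal before/after config sketch. This is a hedged illustration: the transform name is real, but the surrounding fields are illustrative, and it assumes the MMEngine backend naming where mmcv's `backend='disk'` becomes `backend='local'`.

```python
# Before (deprecated): file_client_args configured an mmcv FileClient.
old_loader = dict(
    type='LoadImageFromFile',
    file_client_args=dict(backend='disk'))

# After: backend_args is forwarded to MMEngine's file I/O backends.
new_loader = dict(
    type='LoadImageFromFile',
    backend_args=dict(backend='local'))
```

The keys are otherwise unchanged, so migrating a config is usually a rename plus the backend-name swap.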

📝 Docs

  • FAQ by @gaotongxiao in

  • Remove LoadImageFromLMDB from docs by @gaotongxiao in

  • Mark projects in docs by @gaotongxiao in

  • add opendatalab download link by @jorie-peng in

  • Fix some deadlinks in the docs by @gaotongxiao in

  • Fix quick run by @gaotongxiao in

  • Dataset by @gaotongxiao in

  • Update faq by @gaotongxiao in

  • more social network links by @fengshiwest in

  • Update docs after branch switching by @gaotongxiao in

🛠️ Bug Fixes

  • Place dicts to .mim by @gaotongxiao in

  • Test svtr_small instead of svtr_tiny by @gaotongxiao in

  • Add pse weight to metafile by @gaotongxiao in

  • Synthtext metafile by @gaotongxiao in

  • Clear up some unused scripts by @gaotongxiao in

  • Fix a file-not-exists error when moving a single file and dst does not exist by @KevinNuNu in

  • CTW1500 by @gaotongxiao in

  • MJSynth & SynthText Dataset Preparer config by @gaotongxiao in

  • Use poly_intersection instead of poly.intersection to avoid sup… by @gaotongxiao in

  • Abinet: fix ValueError: Blur limit must be odd when centered=True. Got: (3, 6) by @hugotong6425 in

  • Bug generated during kie inference visualization by @Yangget in

  • Revert sync bn in inferencer by @gaotongxiao in

  • Fix mmdet digit version by @gaotongxiao in

🎉 New Contributors

  • @jorie-peng made their first contribution in

  • @hugotong6425 made their first contribution in

  • @fengshiwest made their first contribution in

  • @cherryjm made their first contribution in

  • @Yangget made their first contribution in

Thank you to all the contributors for making this release possible! We’re excited about the new features and enhancements in this version, and we’re looking forward to your feedback and continued support. Happy coding! 🚀

Full Changelog:…v1.0.0


v1.0.0rc6 (03/07/2023)


Highlights

  1. Two new models, ABCNet v2 (inference only) and SPTS, have been added to the projects/ folder.

  2. Announcing Inferencer, a unified inference interface in OpenMMLab for everyone’s easy access and quick inference with all the pre-trained weights. Docs

  3. Users can use test-time augmentation for text recognition tasks. Docs

  4. Support batch augmentation through BatchAugSampler, which is a technique used in SPTS.

  5. Dataset Preparer has been refactored to allow more flexible configurations. Besides, users are now able to prepare text recognition datasets in LMDB formats. Docs

  6. Some textspotting datasets have been revised to improve their correctness and consistency with common practice.

  7. Potential spurious warnings from shapely have been eliminated.
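
The new Inferencer can be exercised in a few lines. A hedged sketch, assuming mmocr is installed; the model aliases and image path here are illustrative:

```python
from mmocr.apis import MMOCRInferencer

# Build an end-to-end OCR inferencer from model aliases; pre-trained
# weights are fetched automatically.
ocr = MMOCRInferencer(det='DBNet', rec='SAR')

# One call runs detection + recognition and optionally dumps visualizations.
result = ocr('demo/demo_text_ocr.jpg', save_vis=True, out_dir='outputs/')
```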


This version requires MMEngine >= 0.6.0, MMCV >= 2.0.0rc4 and MMDet >= 3.0.0rc5.

New Features & Enhancements

  • Discard deprecated lmdb dataset format and only support img+label now by @gaotongxiao in

  • abcnetv2 inference by @Harold-lkk in

  • Add RepeatAugSampler by @gaotongxiao in

  • SPTS by @gaotongxiao in

  • Refactor Inferencers by @gaotongxiao in

  • Dynamic return type for rescale_polygons by @gaotongxiao in

  • Revise upstream version limit by @gaotongxiao in

  • TextRecogCropConverter add crop with opencv warpPerspective function by @KevinNuNu in

  • change cudnn benchmark to false by @Harold-lkk in

  • Add ST-pretrained DB-series models and logs by @gaotongxiao in

  • Only keep meta and state_dict when publish model by @Harold-lkk in

  • Rec TTA by @Harold-lkk in

  • Speedup formatting by replacing np.transpose with torch… by @gaotongxiao in

  • Support auto import modules from registry. by @Harold-lkk in

  • Support batch visualization & dumping in Inferencer by @gaotongxiao in

  • add a new argument font_properties to set a specific font file in order to draw Chinese characters properly by @KevinNuNu in

  • Refactor data converter and gather by @Harold-lkk in

  • Support batch augmentation through BatchAugSampler by @gaotongxiao in

  • Put all registry into by @Harold-lkk in

  • train by @gaotongxiao in

  • configs for regression benchmark by @gaotongxiao in

  • Support lmdb format in Dataset Preparer by @gaotongxiao in


Docs

  • update the link of DBNet by @AllentDan in

  • Add notice for default branch switching by @gaotongxiao in

  • docs: Add twitter discord medium youtube link by @vansin in

  • Remove unsupported datasets in docs by @gaotongxiao in

Bug Fixes

  • Update dockerfile by @gaotongxiao in

  • Explicitly create np object array for compatibility by @gaotongxiao in

  • Fix a minor error in docstring by @Mountchicken in

  • Fix lint by @triple-Mu in

  • Fix LoadOCRAnnotation ut by @Harold-lkk in

  • Fix isort pre-commit error by @KevinNuNu in

  • Update owners by @xinke-wang in

  • Detect intersection before using shapely.intersection to eliminate spurious warnings by @gaotongxiao in

  • Fix some inferencer bugs by @gaotongxiao in

  • Fix textocr ignore flag by @xinke-wang in

  • Add missing softmax in ASTER forward_test by @Mountchicken in

  • Fix head in readme by @vansin in

  • Fix some browse dataset script bugs and draw textdet gt instance with ignore flags by @KevinNuNu in

  • icdar textrecog ann parser skip data with ignore flag by @KevinNuNu in

  • bezier_to_polygon -> bezier2polygon by @double22a in

  • Fix docs recog CharMetric P/R error definition by @KevinNuNu in

  • Remove outdated resources in demo/ by @gaotongxiao in

  • Fix wrong ic13 textspotting split data; add lexicons to ic13, ic15 and totaltext by @gaotongxiao in

  • SPTS readme by @gaotongxiao in

New Contributors

  • @triple-Mu made their first contribution in

  • @double22a made their first contribution in

Full Changelog:…v1.0.0rc6

v1.0.0rc5 (01/06/2023)


Highlights

  1. Two models, ASTER and SVTR, have been added to our model zoo. The full implementation of ABCNet is also available now.

  2. Dataset Preparer supports 5 more datasets: CocoTextV2, FUNSD, TextOCR, NAF, SROIE.

  3. We have four more text recognition transforms and two helper transforms. See for details.

  4. The transform FixInvalidPolygon is getting smarter at dealing with invalid polygons and can now handle more weird annotations. As a result, a complete training cycle on the TotalText dataset can be performed bug-free. The weights of DBNet and FCENet pretrained on TotalText are also released.
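
As a hedged sketch, FixInvalidPolygon typically sits in a text detection pipeline right after annotation loading; the surrounding transforms and their arguments here are illustrative:

```python
# Illustrative textdet training pipeline fragment (MMOCR 1.x transform names).
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadOCRAnnotations',
         with_bbox=True, with_polygon=True, with_label=True),
    # Repair or drop invalid polygons before geometric augmentations run.
    dict(type='FixInvalidPolygon'),
    dict(type='Resize', scale=(640, 640), keep_ratio=True),
    dict(type='PackTextDetInputs'),
]
```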

New Features & Enhancements

  • Update ic15 det config according to DataPrepare by @Harold-lkk in

  • Refactor icdardataset metainfo to lowercase. by @Harold-lkk in

  • Add ASTER Encoder by @Mountchicken in

  • Add ASTER decoder by @Mountchicken in

  • Add ASTER config by @Mountchicken in

  • Update ASTER config by @Mountchicken in

  • Support to visualize original dataset by @xinke-wang in

  • Add CocoTextv2 to dataset preparer by @xinke-wang in

  • Add Funsd to dataset preparer by @xinke-wang in

  • Add TextOCR to Dataset Preparer by @xinke-wang in

  • Refine example projects and readme by @gaotongxiao in

  • Enhance FixInvalidPolygon, add RemoveIgnored transform by @gaotongxiao in

  • ConditionApply by @Harold-lkk in

  • Add NAF to dataset preparer by @Mountchicken in

  • Add SROIE to dataset preparer by @FerryHuang in

  • Add svtr decoder by @willpat1213 in

  • Add missing unit tests by @Mountchicken in

  • Add svtr encoder by @willpat1213 in

  • ABCNet train by @Harold-lkk in

  • Totaltext cfgs for DB and FCE by @gaotongxiao in

  • Add Aliases to models by @gaotongxiao in

  • SVTR transforms by @gaotongxiao in

  • Add SVTR framework and configs by @gaotongxiao in

  • Issue Template by @Harold-lkk in


Docs

  • Add Chinese translation for by @xinke-wang in

  • Update abcnet doc by @Harold-lkk in

  • update the dbnetpp's readme file by @zhuyue66 in

  • Inferencer docs by @gaotongxiao in

Bug Fixes

  • nn.SmoothL1Loss beta can not be zero in PyTorch 1.13 version by @Harold-lkk in

  • ctc loss bug if target is empty by @Harold-lkk in

  • Add torch 1.13 by @gaotongxiao in

  • Remove outdated tutorial link by @gaotongxiao in

  • Dev 1.x some doc mistakes by @KevinNuNu in

  • Support custom font to visualize some languages (e.g. Korean) by @ProtossDragoon in

  • db_module_loss,negative number encountered in sqrt by @KevinNuNu in

  • Use int instead of by @gaotongxiao in

  • Remove support for py3.6 by @gaotongxiao in

New Contributors

  • @zhuyue66 made their first contribution in

  • @KevinNuNu made their first contribution in

  • @FerryHuang made their first contribution in

  • @willpat1213 made their first contribution in

Full Changelog:…v1.0.0rc5

v1.0.0rc4 (12/06/2022)


Highlights

  1. Dataset Preparer can automatically generate base dataset configs at the end of the preparation process, and supports 6 more datasets: IIIT5k, CUTE80, ICDAR2013, ICDAR2015, SVT, SVTP.

  2. Introducing our projects/ folder. Implementing new models and features in OpenMMLab's algorithm libraries has long been considered troublesome due to the rigorous requirements on code quality, which could hinder the fast iteration of SOTA models and discourage community members from sharing their latest work here. We now introduce the projects/ folder, where experimental features, frameworks, and models can be placed, needing only to satisfy minimal code quality requirements. Everyone is welcome to post their implementation of any great idea in this folder! We have also added the first example project to illustrate what we expect a good project to have (check out the raw content of for more info!).

  3. Inside the projects/ folder, we are releasing the preview version of ABCNet, which is the first implementation of text spotting models in MMOCR. It’s inference-only now, but the full implementation will be available very soon.
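
The base dataset configs that Dataset Preparer generates are plain Python files defining dataset dicts. An illustrative sketch of their shape; the field values below are hypothetical, not copied from a real generated file:

```python
# Roughly what a generated base dataset config contains. The pipeline is
# left as None so that downstream model configs can fill it in.
icdar2015_textdet_train = dict(
    type='OCRDataset',
    data_root='data/icdar2015',
    ann_file='textdet_train.json',
    filter_cfg=dict(filter_empty_gt=True, min_size=32),
    pipeline=None)
```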

New Features & Enhancements

  • Add SVT to dataset preparer by @xinke-wang in

  • Polish bbox2poly by @gaotongxiao in

  • Add SVTP to dataset preparer by @xinke-wang in

  • Iiit5k converter by @Harold-lkk in

  • Add cute80 to dataset preparer by @xinke-wang in

  • Add IC13 preparer by @xinke-wang in

  • Add ‘Projects/’ folder, and the first example project by @gaotongxiao in

  • Rename to {dataset-name}_task_train/test by @Harold-lkk in

  • Add to the tools by @IncludeMathH in

  • Add get_md5 by @gaotongxiao in

  • Add config generator by @gaotongxiao in

  • Support IC15_1811 by @gaotongxiao in

  • Update CT80 config by @gaotongxiao in

  • Add config generators to all textdet and textrecog configs by @gaotongxiao in

  • Refactor TPS by @Mountchicken in

  • Add TextSpottingConfigGenerator by @gaotongxiao in

  • Add common typing by @Harold-lkk in

  • Update textrecog config and readme by @gaotongxiao in

  • Support head loss or postprocessor is None for only infer by @Harold-lkk in

  • Textspotting datasample by @Harold-lkk in

  • Simplify mono_gather by @gaotongxiao in

  • ABCNet v1 infer by @Harold-lkk in


Docs

  • Add Chinese Guidance on How to Add New Datasets to Dataset Preparer by @xinke-wang in

  • Update the qq group link by @vansin in

  • Collapse some sections; update logo url by @gaotongxiao in

  • Update dataset preparer (CN) by @gaotongxiao in

Bug Fixes

  • Fix two bugs in dataset preparer by @xinke-wang in

  • Register bug of CLIPResNet by @jyshee in

  • Being more conservative on Dataset Preparer by @gaotongxiao in

  • python -m pip upgrade in windows by @Harold-lkk in

  • Fix wildreceipt metafile by @xinke-wang in

  • Fix Dataset Preparer Extract by @xinke-wang in

  • Fix ICDARTxtParser by @xinke-wang in

  • Fix Dataset Zoo Script by @xinke-wang in

  • Fix crop without padding and recog metainfo delete unuse info by @Harold-lkk in

  • Automatically create nonexistent directory for base configs by @gaotongxiao in

  • Change mmcv.dump to mmengine.dump by @ProtossDragoon in

  • mmocr.utils.typing -> mmocr.utils.typing_utils by @gaotongxiao in

  • Wildreceipt tests by @gaotongxiao in

  • Fix judge exist dir by @Harold-lkk in

  • Fix IC13 textdet config by @xinke-wang in

  • Fix IC13 textrecog annotations by @gaotongxiao in

  • Auto scale lr by @gaotongxiao in

  • Fix icdar data parse for text containing separator by @Harold-lkk in

  • Fix textspotting ut by @Harold-lkk in

  • Fix TextSpottingConfigGenerator and TextSpottingDataConverter by @gaotongxiao in

  • Keep E2E Inferencer output simple by @gaotongxiao in

New Contributors

  • @jyshee made their first contribution in

  • @ProtossDragoon made their first contribution in

  • @IncludeMathH made their first contribution in

Full Changelog:…v1.0.0rc4

v1.0.0rc3 (11/03/2022)


Highlights

  1. We release several pretrained models using oCLIP-ResNet as the backbone, a ResNet variant trained with oCLIP that can significantly boost the performance of text detection models.

  2. Preparing datasets is troublesome and tedious, especially in the OCR domain where multiple datasets are usually required. To free our users from this laborious work, we designed a Dataset Preparer that gets a bunch of datasets ready for use with a single command! Dataset Preparer is also crafted as a series of reusable modules, each handling one of the standardized phases of the preparation process, which shortens the development cycle for supporting new datasets.
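
The one-line preparation looks roughly like the following; the dataset name and flag follow the Dataset Preparer docs, but treat the exact script path and arguments as an assumption:

```shell
# Download, extract, and convert ICDAR 2015 for text detection in one go.
python tools/dataset_converters/prepare_dataset.py icdar2015 --task textdet
```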

New Features & Enhancements

  • Add Dataset Preparer by @xinke-wang in

  • support modified resnet structure used in oCLIP by @HannibalAPE in

  • Add oCLIP configs by @gaotongxiao in


Docs

  • Update by @rogachevai in

  • Refine some docs by @gaotongxiao in

  • Update some dataset preparer related docs by @xinke-wang in

  • oclip readme by @Harold-lkk in

Bug Fixes

  • Fix offline_eval error caused by new data flow by @gaotongxiao in

New Contributors

  • @rogachevai made their first contribution in

  • @HannibalAPE made their first contribution in

Full Changelog:…v1.0.0rc3

v1.0.0rc2 (10/14/2022)

This release relaxes the version requirement of MMEngine to >=0.1.0, <1.0.0.

v1.0.0rc1 (10/09/2022)


Highlights

This release fixes a severe bug that led to inaccurate metric reports in multi-GPU training. We release the weights for all the text recognition models in the MMOCR 1.0 architecture, and their inference shorthands have been added back. Besides, more documentation chapters are available now.

New Features & Enhancements

  • Simplify the Mask R-CNN config by @xinke-wang in

  • auto scale lr by @Harold-lkk in

  • Update paths to pretrain weights by @gaotongxiao in

  • Streamline duplicated split_result in pan_postprocessor by @gaotongxiao in

  • Update model links in and by @gaotongxiao in

  • Update rec configs by @gaotongxiao in

  • Visualizer refine by @Harold-lkk in

  • Support get flops and parameters in dev-1.x by @vansin in


Docs

  • intersphinx and api by @Harold-lkk in

  • Fix quickrun by @gaotongxiao in

  • Fix some docs issues by @gaotongxiao in

  • Add Documents for DataElements by @xinke-wang in

  • config english by @Harold-lkk in

  • Metrics by @xinke-wang in

  • Add version switcher to menu by @gaotongxiao in

  • Data Transforms by @xinke-wang in

  • Fix inference docs by @gaotongxiao in

  • Fix some docs by @xinke-wang in

  • Add maintenance plan to migration guide by @xinke-wang in

  • Update Recog Models by @xinke-wang in

Bug Fixes

  • clear metric.results only done in main process by @Harold-lkk in

  • Fix a bug in MMDetWrapper by @xinke-wang in

  • Fix by @Mountchicken in

  • ImgAugWrapper: Do not clip polygons if not applicable by @gaotongxiao in

  • Fix CI by @gaotongxiao in

  • Fix merge stage test by @gaotongxiao in

  • Del CI support for torch 1.5.1 by @gaotongxiao in

  • Test windows cu111 by @gaotongxiao in

  • Fix windows CI by @gaotongxiao in

  • Upgrade pre commit hooks by @Harold-lkk in

  • Skip invalid augmented polygons in ImgAugWrapper by @gaotongxiao in

New Contributors

  • @vansin made their first contribution in

Full Changelog:…v1.0.0rc1

v1.0.0rc0 (09/01/2022)

We are excited to announce the release of MMOCR 1.0.0rc0. MMOCR 1.0.0rc0 is the first version of MMOCR 1.x, a part of the OpenMMLab 2.0 projects. Built upon the new training engine, MMOCR 1.x unifies the interfaces of dataset, models, evaluation, and visualization with faster training and testing speed.


Highlights

  1. New engines. MMOCR 1.x is based on MMEngine, which provides a general and powerful runner that allows more flexible customizations and significantly simplifies the entrypoints of high-level interfaces.

  2. Unified interfaces. As a part of the OpenMMLab 2.0 projects, MMOCR 1.x unifies and refactors the interfaces and internal logics of train, testing, datasets, models, evaluation, and visualization. All the OpenMMLab 2.0 projects share the same design in those interfaces and logics to allow the emergence of multi-task/modality algorithms.

  3. Cross project calling. Benefiting from the unified design, you can use the models implemented in other OpenMMLab projects, such as MMDet. We provide an example of how to use MMDetection’s Mask R-CNN through MMDetWrapper. Check our documents for more details. More wrappers will be released in the future.

  4. Stronger visualization. We provide a series of useful tools which are mostly based on brand-new visualizers. As a result, it is more convenient for the users to explore the models and datasets now.

  5. More documentation and tutorials. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it here.
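
Cross-project calling boils down to a wrapper entry in the model config. A minimal sketch, assuming the MMDetection detector config is supplied under cfg; the nested config here is a stand-in, not a complete Mask R-CNN definition:

```python
# Wrap an MMDetection detector so MMOCR can train/test it as a text detector.
mmdet_mask_rcnn = dict(type='MaskRCNN')  # stands in for a full MMDet config

model = dict(
    type='MMDetWrapper',
    text_repr_type='poly',  # emit text regions as polygons rather than quads
    cfg=mmdet_mask_rcnn)
```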

Breaking Changes

We briefly list the major breaking changes here. We will update the migration guide to provide complete details and migration instructions.


  • MMOCR 1.x relies on MMEngine to run. MMEngine is a new foundational library for training deep learning models in OpenMMLab 2.0 models. The dependencies of file IO and training are migrated from MMCV 1.x to MMEngine.

  • MMOCR 1.x relies on MMCV>=2.0.0rc0. Although MMCV no longer maintains the training functionalities since 2.0.0rc0, MMOCR 1.x relies on the data transforms, CUDA operators, and image processing interfaces in MMCV. Note that since MMCV 2.0.0rc0, the package mmcv provides pre-built CUDA operators while mmcv-lite does not; mmcv-full has been deprecated.

Training and testing

  • MMOCR 1.x uses Runner in MMEngine rather than the one in MMCV. The new Runner implements and unifies the building logic of dataset, model, evaluation, and visualizer. Therefore, MMOCR 1.x no longer maintains the building logic of those modules in mmocr.train.apis and tools/. That code has been migrated into MMEngine. Please refer to the migration guide of Runner in MMEngine for more details.

  • The Runner in MMEngine also supports testing and validation. The testing scripts are simplified as well, and have logic similar to the training scripts for building the runner.

  • The execution points of hooks in the new Runner have been enriched to allow more flexible customization. Please refer to the migration guide of Hook in MMEngine for more details.

  • Learning rate and momentum scheduling has been migrated from Hook to Parameter Scheduler in MMEngine. Please refer to the migration guide of Parameter Scheduler in MMEngine for more details.



Dataset

The Dataset classes implemented in MMOCR 1.x all inherit from BaseDetDataset, which inherits from the BaseDataset in MMEngine. There are several changes to Dataset in MMOCR 1.x.

  • All datasets support serializing the data list to reduce memory usage when multiple workers are built to accelerate data loading.

  • The interfaces are changed accordingly.

Data Transforms

The data transforms in MMOCR 1.x all inherit from those in MMCV>=2.0.0rc0, which follow a new convention in OpenMMLab 2.0 projects. The changes are listed below:

  • The interfaces are also changed. Please refer to the API Reference.

  • The functionality of some data transforms (e.g., Resize) are decomposed into several transforms.

  • The same data transforms in different OpenMMLab 2.0 libraries share the same augmentation implementation and argument logic, i.e., Resize in MMDet 3.x and MMOCR 1.x will resize the image in exactly the same manner given the same arguments.


Model

The models in MMOCR 1.x all inherit from BaseModel in MMEngine, which defines a new convention for models in OpenMMLab 2.0 projects. Users can refer to the model tutorial in MMEngine for more details. Accordingly, there are several changes:

  • The model interfaces, including the input and output formats, are significantly simplified and unified following the new convention in MMOCR 1.x. Specifically, all the input data in training and testing are packed into inputs and data_samples, where inputs contains model inputs like a list of image tensors, and data_samples contains other information of the current data sample such as ground truths and model predictions. In this way, different tasks in MMOCR 1.x can share the same input arguments, which makes the models more general and suitable for multi-task learning.

  • The model has a data preprocessor module, which is used to pre-process the model's input data. In MMOCR 1.x, the data preprocessor usually performs the necessary steps to form input images into a batch, such as padding. It can also serve as a place for special data augmentations or more efficient data transformations, such as normalization.

  • The internal logic of models has been changed. In MMOCR 0.x, models used forward_train and simple_test to deal with different forward logics. In MMOCR 1.x and OpenMMLab 2.0, the forward function has three modes: loss, predict, and tensor for training, inference, and tracing or other purposes, respectively. The forward function calls self.loss(), self.predict(), and self._forward() given the modes loss, predict, and tensor, respectively.
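
The three-mode convention above can be sketched with a toy, framework-free model. This is purely illustrative and not MMOCR code:

```python
class ToyModel:
    """Dispatches forward() to loss/predict/tensor paths, mirroring the
    OpenMMLab 2.0 model convention described above."""

    def loss(self, inputs, data_samples):
        # Training path: return a dict of losses.
        return {'loss': 0.0}

    def predict(self, inputs, data_samples):
        # Inference path: return post-processed predictions.
        return [{'pred': x} for x in inputs]

    def _forward(self, inputs):
        # Tensor path: raw network outputs, e.g. for tracing.
        return inputs

    def forward(self, inputs, data_samples=None, mode='tensor'):
        if mode == 'loss':
            return self.loss(inputs, data_samples)
        if mode == 'predict':
            return self.predict(inputs, data_samples)
        return self._forward(inputs)
```

Because every task packs its data into the same inputs/data_samples pair, a single dispatcher like this serves detection, recognition, and KIE alike.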


Evaluation

MMOCR 1.x mainly implements corresponding metrics for each task, which are managed by Evaluator to complete the evaluation. In addition, users can build an evaluator in MMOCR 1.x to conduct offline evaluation, i.e., evaluate predictions that may not have been produced by MMOCR, as long as they follow our dataset conventions. More details can be found in the Evaluation Tutorial in MMEngine.


Visualization

The functions of visualization in MMOCR 1.x are removed. Instead, in OpenMMLab 2.0 projects, we use Visualizer to visualize data. MMOCR 1.x implements TextDetLocalVisualizer, TextRecogLocalVisualizer, and KIELocalVisualizer to allow visualization of ground truths, model predictions, feature maps, etc., at any place, for the three tasks supported in MMOCR. It also supports dumping the visualization data to external visualization backends such as TensorBoard and WandB. Check our Visualization Document for more details.
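
Dumping visualizations to external backends is configured on the visualizer. A hedged config sketch: the backend and visualizer type names follow the MMOCR/MMEngine registries, while the overall placement in a config file is illustrative:

```python
# Visualizer config fragment: write images locally and also log to TensorBoard.
vis_backends = [
    dict(type='LocalVisBackend'),        # save rendered images to disk
    dict(type='TensorboardVisBackend'),  # stream them to TensorBoard as well
]

visualizer = dict(
    type='TextDetLocalVisualizer',
    vis_backends=vis_backends,
    name='visualizer')
```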


Improvements

  • Most models enjoy a performance improvement from the new framework and the refactored data transforms. For example, in MMOCR 1.x, DBNet-R50 achieves a 0.854 hmean score on ICDAR 2015, while its counterpart in MMOCR 0.x can only get 0.840.

  • Support mixed precision training for most models. However, the remaining models are not supported yet because the operators they use might not be representable in fp16. We will update the documentation and list the results of mixed precision training.

Ongoing changes

  1. Test-time augmentation: although supported in MMOCR 0.x, it is not implemented in this version due to limited time. We will support it in a following release with a new and simplified design.

  2. Inference interfaces: a unified inference interface will be supported in the future to ease the use of released models.

  3. Interfaces of useful tools that can be used in notebooks: more useful tools implemented in the tools/ directory will get Python interfaces so that they can be used from notebooks and in downstream libraries.

  4. Documentation: we will add more design docs, tutorials, and migration guidance so that the community can dive deep into our new design, participate in future development, and smoothly migrate downstream libraries to MMOCR 1.x.
