NRTREncoder

class mmocr.models.textrecog.NRTREncoder(n_layers=6, n_head=8, d_k=64, d_v=64, d_model=512, d_inner=256, dropout=0.1, init_cfg=None)[source]

Transformer Encoder block with self attention mechanism.

Parameters
  • n_layers (int) – The number of sub-encoder-layers in the encoder. Defaults to 6.

  • n_head (int) – The number of heads in the multi-head attention layers. Defaults to 8.

  • d_k (int) – Total number of features in key. Defaults to 64.

  • d_v (int) – Total number of features in value. Defaults to 64.

  • d_model (int) – The number of expected features in the encoder inputs. Defaults to 512.

  • d_inner (int) – The dimension of the feedforward network model. Defaults to 256.

  • dropout (float) – Dropout rate for MHSA and FFN. Defaults to 0.1.

  • init_cfg (dict or list[dict], optional) – Initialization configs.

Return type

None
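A minimal sketch of building this encoder through the MMOCR config convention. The `type` key and the hyphenated parameter relationships below are assumptions based on the constructor signature documented above, not output from the library itself; note that with the defaults, `n_head * d_k` equals `d_model`, the usual multi-head attention consistency constraint.

```python
# Hypothetical config dict mirroring the documented constructor defaults.
# In MMOCR-style configs, 'type' selects the registered module by name.
encoder_cfg = dict(
    type='NRTREncoder',
    n_layers=6,    # number of sub-encoder-layers
    n_head=8,      # attention heads
    d_k=64,        # per-head key dimension
    d_v=64,        # per-head value dimension
    d_model=512,   # model (embedding) dimension
    d_inner=256,   # feed-forward hidden dimension
    dropout=0.1)   # dropout for MHSA and FFN

# Sanity check: heads times per-head key width matches the model width.
assert encoder_cfg['n_head'] * encoder_cfg['d_k'] == encoder_cfg['d_model']
```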

forward(feat, data_samples=None)[source]
Parameters
  • feat (Tensor) – Backbone output of shape \((N, C, H, W)\).

  • data_samples (list[TextRecogDataSample]) – Batch of TextRecogDataSample, containing valid_ratio information. Defaults to None.

Returns

The encoder output tensor. Shape \((N, T, C)\).

Return type

Tensor
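The documented shapes imply that the backbone feature map \((N, C, H, W)\) is treated as a sequence of \(T = H \times W\) spatial tokens, each with \(C\) features. A small sketch of that shape mapping (the function name and example shape are illustrative, not part of the MMOCR API):

```python
# Sketch of the shape transformation implied by forward():
# backbone output (N, C, H, W) -> encoder output (N, T, C), T = H * W.
def encoder_output_shape(feat_shape):
    """Map a backbone output shape (N, C, H, W) to the
    documented encoder output shape (N, T, C)."""
    n, c, h, w = feat_shape
    return (n, h * w, c)

# e.g. a batch of 2 feature maps with 512 channels on an 8x32 grid
# becomes a batch of 2 sequences of 256 tokens of width 512.
print(encoder_output_shape((2, 512, 8, 32)))  # -> (2, 256, 512)
```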
