MultiHeadAttention

class mmocr.models.common.MultiHeadAttention(n_head=8, d_model=512, d_k=64, d_v=64, dropout=0.1, qkv_bias=False)[source]

Multi-Head Attention module.
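For background, the standard multi-head attention computation (Vaswani et al., 2017) is stated below as general context, not as a guarantee about this implementation's internals. With h = n_head, and d_model = h * d_k under the defaults above:

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V

    \mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W^{O},
    \qquad \mathrm{head}_i = \mathrm{Attention}(Q W_i^{Q}, K W_i^{K}, V W_i^{V})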

Parameters
  • n_head (int) – The number of attention heads (default=8).

  • d_model (int) – The number of expected features in the decoder inputs (default=512).

  • d_k (int) – The number of features per head in the key (default=64).

  • d_v (int) – The number of features per head in the value (default=64).

  • dropout (float) – Dropout probability applied to the attention weights (default=0.1).

  • qkv_bias (bool) – Whether to add a bias term to the query, key and value projection layers. Default: False.
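A minimal usage sketch, assuming an mmocr dev-1.x install and the import path shown in the signature above. The tensor shapes and the expected output shape are illustrative assumptions, not taken from this docstring:

    import torch
    from mmocr.models.common import MultiHeadAttention

    # Defaults from the signature above; shapes below are illustrative.
    attn = MultiHeadAttention(n_head=8, d_model=512, d_k=64, d_v=64)

    # Self-attention over a batch of 2 sequences of length 10.
    x = torch.rand(2, 10, 512)  # (batch, seq_len, d_model)
    out = attn(x, x, x)         # call the instance, not forward() (see Note below)
    print(out.shape)            # expected: torch.Size([2, 10, 512])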

forward(q, k, v, mask=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the forward pass must be defined within this function, call the Module instance itself rather than forward() directly: the instance call takes care of running any registered hooks, while a direct call to forward() silently skips them.
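A hedged sketch of cross-attention with a mask, illustrating the note above. The mask shape and convention (nonzero = attend) are common assumptions here, not guarantees from this docstring:

    import torch
    from mmocr.models.common import MultiHeadAttention

    attn = MultiHeadAttention(n_head=8, d_model=512, d_k=64, d_v=64)

    q = torch.rand(2, 5, 512)    # decoder queries: (batch, len_q, d_model)
    kv = torch.rand(2, 10, 512)  # encoder memory: (batch, len_k, d_model)
    mask = torch.ones(2, 5, 10)  # assumed (batch, len_q, len_k), nonzero = attend

    out = attn(q, kv, kv, mask=mask)  # instance call runs registered hooks
    # attn.forward(q, kv, kv, mask=mask) would compute the same result,
    # but silently skip any registered hooks.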
