Table 6 Modules’ inference time on CoNLL-14

From: A multi-task learning framework for efficient grammatical error correction of textual messages in mobile communications

Inference time (s)

| Module        | 1     | 8    | 16   | 32   |
|---------------|-------|------|------|------|
| Encoder       | 14.26 | 2.03 | 1.00 | 0.53 |
| Discriminator | 0.56  | 0.08 | 0.05 | 0.03 |
| Detector      | 2.84  | 0.98 | 0.76 | 0.63 |
| Corrector     | 1.49  | 0.32 | 0.18 | 0.11 |
| Total         | 20.76 | 3.96 | 2.75 | 2.47 |

  1. 1/8/16/32 refers to the batch size
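To make the effect of batching concrete, the figures above can be turned into relative speedups. A minimal sketch (the dictionary layout and function name are illustrative, not from the paper; all numbers are taken verbatim from the table):

```python
# Per-module inference times (s) on CoNLL-14 from Table 6,
# keyed by batch size. The "Total" row is copied verbatim from
# the table rather than recomputed from the module rows.
times = {
    "Encoder":       {1: 14.26, 8: 2.03, 16: 1.00, 32: 0.53},
    "Discriminator": {1: 0.56,  8: 0.08, 16: 0.05, 32: 0.03},
    "Detector":      {1: 2.84,  8: 0.98, 16: 0.76, 32: 0.63},
    "Corrector":     {1: 1.49,  8: 0.32, 16: 0.18, 32: 0.11},
    "Total":         {1: 20.76, 8: 3.96, 16: 2.75, 32: 2.47},
}

def speedup(module: str, batch: int) -> float:
    """Speedup of the given batch size relative to batch size 1."""
    return times[module][1] / times[module][batch]

print(f"Total speedup at batch 32: {speedup('Total', 32):.1f}x")
```

For instance, increasing the batch size from 1 to 32 cuts the total inference time from 20.76 s to 2.47 s, a roughly 8.4x speedup, with the encoder accounting for most of the gain.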