Experiments with a PPM Compression-Based Method for English-Chinese Bilingual Sentence Alignment
Research output: Chapter in Book/Report/Conference Proceedings › Chapter
Statistical Language and Speech Processing. Lecture Notes in Computer Science, vol. 8791. Springer, 2014, pp. 70-81.
RIS
TY - CHAP
T1 - Experiments with a PPM Compression-Based Method for English-Chinese Bilingual Sentence Alignment
AU - Liu, W.
AU - Chang, Z.
AU - Teahan, W.J.
PY - 2014/9/3
Y1 - 2014/9/3
N2 - Alignment of parallel corpora is a crucial step prior to training statistical language models for machine translation. This paper investigates compression-based methods for aligning sentences in an English-Chinese parallel corpus. Four metrics for matching sentences required for measuring the alignment at the sentence level are compared: the standard sentence length ratio (SLR), and three new metrics, absolute sentence length difference (SLD), compression code length ratio (CR), and absolute compression code length difference (CD). Initial experiments with CR show that using the Prediction by Partial Matching (PPM) compression scheme, a method that also performs well at many language modeling tasks, significantly outperforms the other standard compression algorithms Gzip and Bzip2. The paper then shows that for sentence alignment of a parallel corpus with ground truth judgments, the compression code length ratio using PPM always performs better than sentence length ratio and the difference measurements also work better than the ratio measurements.
AB - Alignment of parallel corpora is a crucial step prior to training statistical language models for machine translation. This paper investigates compression-based methods for aligning sentences in an English-Chinese parallel corpus. Four metrics for matching sentences required for measuring the alignment at the sentence level are compared: the standard sentence length ratio (SLR), and three new metrics, absolute sentence length difference (SLD), compression code length ratio (CR), and absolute compression code length difference (CD). Initial experiments with CR show that using the Prediction by Partial Matching (PPM) compression scheme, a method that also performs well at many language modeling tasks, significantly outperforms the other standard compression algorithms Gzip and Bzip2. The paper then shows that for sentence alignment of a parallel corpus with ground truth judgments, the compression code length ratio using PPM always performs better than sentence length ratio and the difference measurements also work better than the ratio measurements.
U2 - 10.1007/978-3-319-11397-5_5
DO - 10.1007/978-3-319-11397-5_5
M3 - Chapter
SN - 9783319113968
SP - 70
EP - 81
BT - Statistical Language and Speech Processing. Lecture Notes in Computer Science, vol. 8791
PB - Springer
ER -
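
The abstract above compares four sentence-matching metrics: sentence length ratio (SLR), absolute sentence length difference (SLD), compression code length ratio (CR), and absolute compression code length difference (CD). The sketch below is only an illustration of how such metrics might be computed for a single English-Chinese sentence pair; it is not the authors' implementation. It substitutes the standard-library bz2 compressor for a PPM coder (the paper uses PPM and compares it against Gzip and Bzip2), and the exact ratio/difference formulas are assumptions inferred from the metric names.

```python
# Illustrative sketch only. Assumptions: bz2 stands in for a PPM model,
# and the metric formulas are inferred from their names in the abstract.
import bz2


def code_length(text: str) -> int:
    """Compressed size in bits of a sentence (proxy for a PPM code length).

    Note: a block compressor adds fixed header overhead that dominates on
    short strings; a true PPM coder would give per-character code lengths.
    """
    return 8 * len(bz2.compress(text.encode("utf-8"), 9))


def alignment_metrics(en_sentence: str, zh_sentence: str) -> dict:
    """Compute the four candidate matching metrics for one sentence pair."""
    len_en, len_zh = len(en_sentence), len(zh_sentence)
    cl_en, cl_zh = code_length(en_sentence), code_length(zh_sentence)
    return {
        "SLR": len_en / len_zh,       # sentence length ratio
        "SLD": abs(len_en - len_zh),  # absolute sentence length difference
        "CR": cl_en / cl_zh,          # compression code length ratio
        "CD": abs(cl_en - cl_zh),     # absolute compression code length difference
    }


if __name__ == "__main__":
    en = "Alignment of parallel corpora is a crucial step."
    zh = "平行语料库的对齐是关键的一步。"
    print(alignment_metrics(en, zh))
```

In practice, a PPM model trained on text in each language would presumably give much tighter per-sentence code-length estimates than a general-purpose block compressor applied to one short sentence, which is consistent with the abstract's finding that PPM-based CR significantly outperforms Gzip and Bzip2.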