Experiments with a PPM Compression-Based Method for English-Chinese Bilingual Sentence Alignment

Research output: Chapter in Book/Report/Conference proceeding › Chapter


Alignment of parallel corpora is a crucial step prior to training statistical language models for machine translation. This paper investigates compression-based methods for aligning sentences in an English-Chinese parallel corpus. Four metrics for matching sentences at the sentence level are compared: the standard sentence length ratio (SLR), and three new metrics, absolute sentence length difference (SLD), compression code length ratio (CR), and absolute compression code length difference (CD). Initial experiments with CR show that the Prediction by Partial Matching (PPM) compression scheme, a method that also performs well at many language modeling tasks, significantly outperforms the standard compression algorithms Gzip and Bzip2. The paper then shows that, for sentence alignment of a parallel corpus with ground-truth judgments, the compression code length ratio using PPM always performs better than the sentence length ratio, and the difference measurements also work better than the ratio measurements.
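The four metrics named in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: PPM is not available in the Python standard library, so `bz2` is used as a stand-in compressor, sentence length is assumed to be character count, and the exact numerator/denominator convention for the ratios is an assumption.

```python
import bz2

def code_length(text: str) -> int:
    # Compressed size in bytes. The paper uses PPM; bz2 is a
    # stand-in here since no PPM compressor ships with Python.
    return len(bz2.compress(text.encode("utf-8")))

def alignment_metrics(src: str, tgt: str) -> dict:
    # Sentence length taken as character count (an assumption);
    # ratio orientation (src over tgt) is also an assumption.
    sl_s, sl_t = len(src), len(tgt)
    cl_s, cl_t = code_length(src), code_length(tgt)
    return {
        "SLR": sl_s / sl_t,       # sentence length ratio
        "SLD": abs(sl_s - sl_t),  # absolute sentence length difference
        "CR": cl_s / cl_t,        # compression code length ratio
        "CD": abs(cl_s - cl_t),   # absolute compression code length difference
    }

metrics = alignment_metrics("The cat sat on the mat.", "猫坐在垫子上。")
```

In an alignment search, a candidate English-Chinese sentence pair would score well when its ratio metrics are close to the corpus-typical value and its difference metrics are small.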
Original language: English
Title of host publication: Statistical Language and Speech Processing (Lecture Notes in Computer Science, Volume 8791)
Publisher: Springer
Pages: 70-81
ISBN (print): 9783319113968
DOIs
Publication status: Published - 3 Sept 2014