Abstract
We describe an unsupervised method to create pseudo-parallel corpora for machine translation (MT) from unaligned text. We use multilingual BERT to create source and target sentence embeddings for nearest-neighbor search and adapt the model via self-training. We validate our technique by extracting parallel sentence pairs on the BUCC 2017 bitext mining task and observe up to a 24.5 point increase (absolute) in F1 scores over previous unsupervised methods. We then improve an XLM-based unsupervised neural MT system pre-trained on Wikipedia by supplementing it with pseudo-parallel text mined from the same corpus, boosting unsupervised translation performance by up to 3.5 BLEU on the WMT’14 French-English and WMT’16 German-English tasks and outperforming the previous state-of-the-art. Finally, we enrich the IWSLT’15 English-Vietnamese corpus with pseudo-parallel Wikipedia sentence pairs, yielding a 1.2 BLEU improvement on the low-resource MT task. We demonstrate that unsupervised bitext mining is an effective way of augmenting MT datasets and complements existing techniques like initializing with pre-trained contextual embeddings.
1 Introduction
Large corpora of parallel sentences are prerequisites for training models across a diverse set of applications, such as neural machine translation (NMT; Bahdanau et al., 2015), paraphrase generation (Bannard and Callison-Burch, 2005), and aligned multilingual sentence embeddings (Artetxe and Schwenk, 2019b). Systems that extract parallel corpora typically rely on various cross-lingual resources (e.g., bilingual lexicons, parallel corpora), but recent work has shown that unsupervised parallel sentence mining (Hangya et al., 2018) and unsupervised NMT (Artetxe et al., 2018; Lample et al., 2018a) produce surprisingly good results.1
Existing approaches to unsupervised parallel sentence (or bitext) mining start from bilingual word embeddings (BWEs) learned via an unsupervised, adversarial approach (Lample et al., 2018b). Hangya et al. (2018) created sentence representations by mean-pooling BWEs over content words. To disambiguate semantically similar but non-parallel sentences, Hangya and Fraser (2019) additionally proposed parallel segment detection by searching for paired substrings with high similarity scores per word. However, using word embeddings to generate sentence embeddings ignores sentential context, which may degrade bitext retrieval performance.
We describe a new unsupervised bitext mining approach based on contextual embeddings. We create sentence embeddings by mean-pooling the outputs of multilingual BERT (mBERT; Devlin et al., 2019), which is pre-trained on unaligned Wikipedia sentences across 104 languages. For a pair of source and target languages, we find candidate translations by using nearest-neighbor search with margin-based similarity scores between pairs of mBERT-embedded source and target sentences. We bootstrap a dataset of positive and negative sentence pairs from these initial neighborhoods of candidates, then self-train mBERT on its own outputs. A final retrieval step gives a corpus of pseudo-parallel sentence pairs, which we expect to be a mix of actual translations and semantically related non-translations.
We apply our technique on the BUCC 2017 parallel sentence mining task (Zweigenbaum et al., 2017). We achieve state-of-the-art F1 scores on unsupervised bitext mining, with an improvement of up to 24.5 points (absolute) on published results (Hangya and Fraser, 2019). Other work (e.g., Libovický et al., 2019) has shown that retrieval performance varies substantially with the layer of mBERT used to generate sentence representations; using the optimal mBERT layer yields an improvement as large as 44.9 points.
Furthermore, our pseudo-parallel text improves unsupervised NMT (UNMT) performance. We build upon the UNMT framework of Lample et al. (2018c) and XLM (Lample and Conneau, 2019) by incorporating our pseudo-parallel text (also derived from Wikipedia) at training time. This boosts performance on WMT’14 En-Fr and WMT’16 En-De by up to 3.5 BLEU over the XLM baseline, outperforming the state-of-the-art on unsupervised NMT (Song et al., 2019).
Finally, we demonstrate the practical value of unsupervised bitext mining in the low-resource setting. We augment the English-Vietnamese corpus (133k pairs) from the IWSLT’15 translation task (Cettolo et al., 2015) with our pseudo-bitext from Wikipedia (400k pairs), and observe a 1.2 BLEU increase over the best published model (Nguyen and Salazar, 2019). When we reduce the amount of parallel and monolingual Vietnamese data by a factor of ten (13.3k pairs), the model trained with pseudo-bitext performs 7 BLEU points better than a model trained on the reduced parallel text alone.
2 Our Approach
Our aim is to create a bilingual sentence embedding space where, for each source sentence embedding, a sufficiently close nearest neighbor among the target sentence embeddings is its translation. By aligning source and target sentence embeddings in this way, we can extract sentence pairs to create new parallel corpora. Artetxe and Schwenk (2019a) construct this space by training a joint encoder-decoder MT model over multiple language pairs and using the resulting encoder to generate sentence embeddings. A margin-based similarity score is then computed between embeddings for retrieval (Section 2.2). However, this approach requires large parallel corpora to train the encoder-decoder model in the first place.
We investigate whether contextualized sentence embeddings created with unaligned text are useful for unsupervised bitext retrieval. Previous work explored the use of multilingual sentence encoders taken from machine translation models (e.g., Artetxe and Schwenk, 2019b; Lu et al., 2018) for zero-shot cross-lingual transfer. Our work is motivated by recent success in tasks like zero-shot text classification and named entity recognition (e.g., Keung et al., 2019; Mulcaire et al., 2019) with multilingual contextual embeddings, which exhibit cross-lingual properties despite being trained without parallel sentences.
We illustrate our method in Figure 1. We first retrieve the candidate translation pairs:
- Each source and target language sentence is converted into an embedding vector with mBERT via mean-pooling.
- Margin-based scores are computed for each sentence pair using the k nearest neighbors of the source and target sentences (Sec. 2.2).
- Each source sentence is paired with its nearest neighbor in the target language based on this score.
- We select a threshold score that keeps some top percentage of pairs (Sec. 2.2).
- Rule-based filters are applied to further remove mismatched sentence pairs (Sec. 2.3).
The remaining candidate pairs are used to bootstrap a dataset for self-training mBERT as follows:
- Each candidate pair (a source sentence and its closest nearest neighbor above the threshold) is taken as a positive example.
- This source sentence is also paired with its next k − 1 neighbors to give hard negative examples (we compare this with random negative samples in Sec. 3.3).
- We finetune mBERT to produce sentence embeddings that discriminate between positive and negative pairs (Sec. 2.4).
After self-training, the finetuned mBERT model is used to generate new sentence embeddings. Parallel sentences should be closer to each other in this new embedding space, which improves retrieval performance.
2.1 Sentence Embeddings and Nearest-neighbor Search
We use mBERT (Devlin et al., 2019) to create sentence embeddings for both languages by mean-pooling the representations from the final layer. We use FAISS (Johnson et al., 2017) to perform exact nearest-neighbor search on the embeddings. We compare every sentence in the source language to every sentence in the target language; we do not use links between Wikipedia articles or other metadata to reduce the size of the search space. In our experiments, we retrieve the k = 4 closest target sentences for each source sentence; the source language is always non-English, while the target language is always English.
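The sketch below illustrates this retrieval step. It is not the paper's actual implementation (Sec. 3.1 uses GluonNLP); it uses the HuggingFace transformers and faiss packages instead, and the toy sentences and k value are ours, but the mean-pooling and exact inner-product search mirror the description above.

```python
import faiss
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertModel.from_pretrained("bert-base-multilingual-cased").eval()

def embed(sentences, layer=-1):
    """Mean-pool mBERT hidden states into one vector per sentence.
    hidden_states[-1] is the final layer; hidden_states[8] would give the
    layer-8 variant reported in Table 1 (index 0 is the embedding layer)."""
    with torch.no_grad():
        batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
        hidden = model(**batch, output_hidden_states=True).hidden_states[layer]
        mask = batch["attention_mask"].unsqueeze(-1).float()
        pooled = (hidden * mask).sum(1) / mask.sum(1)   # mean over real (non-pad) tokens
    return pooled.numpy().astype("float32")

src_vecs = embed(["Das ist ein Beispiel.", "Guten Morgen."])  # non-English side
tgt_vecs = embed(["This is an example.", "Good morning."])    # English side

# Exact (brute-force) nearest-neighbor search; L2-normalizing makes the
# inner product equal to cosine similarity, as used by the margin score.
faiss.normalize_L2(src_vecs)
faiss.normalize_L2(tgt_vecs)
index = faiss.IndexFlatIP(tgt_vecs.shape[1])
index.add(tgt_vecs)
sims, nn_ids = index.search(src_vecs, 2)  # k = 4 in our experiments; 2 here for the toy corpus
```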
2.2 Margin-based Score
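The margin-based score referred to as Eq. 1 follows Artetxe and Schwenk (2019a). In its ratio form, as defined in that work, it is

$$
\mathrm{score}(x, y) \;=\; \frac{\cos(x, y)}{\displaystyle\sum_{z \in \mathrm{NN}_k(x)} \frac{\cos(x, z)}{2k} \;+\; \sum_{z \in \mathrm{NN}_k(y)} \frac{\cos(y, z)}{2k}} \qquad (1)
$$

where x and y are the mean-pooled source and target sentence embeddings and NN_k(x) denotes the k nearest neighbors of x in the other language (we use k = 4; Sec. 2.1). The score rewards pairs whose similarity is high relative to the average similarity of their respective neighborhoods, which helps disambiguate sentences that are merely close in the embedding space from actual translations.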
2.3 Rule-based Filtering
We also apply two simple filtering steps before finalizing the candidate pairs list:
- Digit filtering: Sentence pairs that are translations of each other must have digit sequences that match exactly.2
- Edit distance: Sentences from English Wikipedia sometimes appear in non-English pages and vice versa. We remove sentence pairs whose source and target sides overlap substantially (i.e., the character-level edit distance between them is ≤50%). Both filters are sketched below.
2.4 Self-training
We devise an unsupervised self-training technique that improves mBERT for bitext retrieval using mBERT’s own outputs. For each source sentence, if the nearest target sentence is within the threshold and not filtered out, the pair is treated as a positive example. We then keep the next k − 1 nearest neighbors as negative examples. Altogether, these give us a training set of sentence pairs labeled as positive or negative.
Note that we only finetune fsrc (parameters Θsrc) and we hold ftgt fixed. If both fsrc and ftgt are updated, then the training process collapses to a trivial solution, since the model will map all pseudo-parallel pairs to one representation and all non-parallel pairs to another. We hold ftgt fixed, which forces fsrc to align its outputs to the target (in our experiments, always English) mBERT embeddings.
After finetuning, we use the updated fsrc to generate new non-English sentence embeddings. We then repeat the retrieval process with FAISS, yielding a final set of pseudo-parallel pairs after thresholding and filtering.
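The exact training objective is not reproduced in this section, so the loss in the sketch below is only an illustrative assumption: a sigmoid-over-cosine objective that pushes pseudo-parallel pairs together and hard negatives apart. What the sketch does take from the text is the asymmetry emphasized above: fsrc is updated while ftgt (the English-side encoder) stays frozen.

```python
import torch
import torch.nn.functional as F
from transformers import BertModel

# f_src is finetuned; f_tgt stays frozen so f_src must align to the English embedding space.
f_src = BertModel.from_pretrained("bert-base-multilingual-cased")
f_tgt = BertModel.from_pretrained("bert-base-multilingual-cased").eval()
for p in f_tgt.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(f_src.parameters(), lr=1e-5)  # constant LR, as in Sec. 3.1

def mean_pool(model, batch):
    hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1)

def self_training_step(src_batch, tgt_batch, labels):
    """One update on a minibatch of (source, target, label) pairs, where label is 1 for a
    pseudo-parallel pair and 0 for a hard negative. The scaled-cosine BCE loss below is an
    illustrative stand-in, not the paper's stated objective."""
    src_emb = mean_pool(f_src, src_batch)
    with torch.no_grad():
        tgt_emb = mean_pool(f_tgt, tgt_batch)  # frozen English-side embeddings
    cos = F.cosine_similarity(src_emb, tgt_emb)
    loss = F.binary_cross_entropy_with_logits(5.0 * cos, labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```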
3 Unsupervised Bitext Mining
We apply our method to the BUCC 2017 shared task, “Spotting Parallel Sentences in Comparable Corpora” (Zweigenbaum et al., 2017). The task involves retrieving parallel sentences from monolingual corpora derived from Wikipedia. Parallel sentences were inserted into the corpora in a contextually appropriate manner by the task organizers. The shared task assessed retrieval systems for precision, recall, and F1-score on four language pairs: De-En, Fr-En, Ru-En, and Zh-En. Prior work on unsupervised bitext mining has generally studied the European language pairs to avoid dealing with Chinese word segmentation (Hangya et al., 2018; Hangya and Fraser, 2019).
3.1 Setup
For each BUCC language pair, we take the corresponding source and target monolingual corpora, which have been pre-split into training, sample, and test sets at a ratio of 49%–2%–49%. The gold parallel sentence pairs were not publicly released for the test set and are only available for the training set. Following the convention established by Hangya and Fraser (2019) and Artetxe and Schwenk (2019a), we use the test portion for unsupervised system development and evaluate on the training portion.
We use the reference FAISS implementation3 for nearest-neighbor search. We used the GluonNLP toolkit (Guo et al., 2020) with pre-trained mBERT weights4 for inference and self-training. We compute the margin similarity score in Eq. 1 with k = 4 nearest neighbors. We set a threshold on the score such that we retrieve the prior proportion (e.g., ∼2%) of parallel pairs in each language.
We then finetune mBERT via self-training. We take minibatches of 100 sentence pairs. We use the Adam optimizer with a constant learning rate of 0.00001 for 2 epochs. To avoid noisy translations, we finetune on the top 50% of the highest-scoring pairs from the retrieved bitext (e.g., if the prior proportion is 2%, then we would use the top 1% of sentence pairs for self-training).
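Concretely, both cutoffs described above reduce to quantiles over the margin scores. A small sketch (function and variable names are ours):

```python
import numpy as np

def select_pairs(margin_scores, prior=0.02, self_train_fraction=0.5):
    """margin_scores: one score per source sentence, for its best target neighbor.
    Returns boolean masks for (retrieved pairs, pairs used for self-training)."""
    scores = np.asarray(margin_scores)
    # Keep roughly the prior proportion (e.g., ~2% for BUCC) of highest-scoring pairs.
    retrieved = scores >= np.quantile(scores, 1.0 - prior)
    # For self-training, keep only the top half of those (e.g., the top 1% overall).
    self_train = scores >= np.quantile(scores, 1.0 - prior * self_train_fraction)
    return retrieved, self_train
```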
We considered performing more than one round of self-training but found it was not helpful for the BUCC task. BUCC has very few parallel pairs (e.g., 9,000 pairs for Fr-En) per language and thus few positive pairs for our unsupervised method to find. The size of the self-training corpus is limited by the proportion of parallel sentences, and mBERT rapidly overfits to small datasets.
3.2 Results
We provide a few examples of the bitext we retrieved in Table 2. The examples were chosen from the high-scoring pairs and verified to be correct translations.
| Method | De-En | Fr-En | Ru-En | Zh-En |
|---|---|---|---|---|
| Hangya and Fraser (2019) | | | | |
| avg. | 30.96 | 44.81 | 19.80 | − |
| align-static | 42.81 | 42.21 | 24.53 | − |
| align-dyn. | 43.35 | 43.44 | 24.97 | − |
| Our method | | | | |
| mBERT (final layer) | 42.1 | 45.8 | 36.9 | 35.8 |
| + digit filtering (DF) | 47.0 | 49.3 | 41.2 | 38.0 |
| + edit distance (ED) | 47.0 | 49.3 | 41.2 | 38.0 |
| + self-training (ST) | 60.6 | 60.2 | 49.5 | 45.7 |
| mBERT (layer 8) | 67.0 | 65.3 | 59.3 | 53.3 |
| + DF, ED, ST | 74.9 | 73.0 | 69.9 | 60.1 |
| Language pair | Parallel sentence pair |
|---|---|
| De-En | Beide Elemente des amerikanischen Traums haben heute einen Teil ihrer Anziehungskraft verloren. Both elements of the American dream have now lost something of their appeal. |
| Fr-En | L’Allemagne à elle seule s’attend à recevoir pas moins d’un million de demandeurs d’asile cette année. Germany alone expects as many as a million asylum-seekers this year. |
| Ru-En | Однако по решению Берлинского конгресса в 1881 году к территории Греции присоединилась Фессалия и часть Эпира. Nevertheless, in 1881, Thessaly and small parts of Epirus were ceded to Greece as part of the Treaty of Berlin. |
| Zh-En | In the strange new world of today, the modern and the pre-modern depend on each other. |
Our retrieval results are in Table 1. We compare our results with strictly unsupervised techniques, which do not use bilingual lexicons, parallel text, or other cross-lingual resources. Using mBERT as-is with the margin-based score works reasonably well, giving F1 scores in the range of 35.8 to 45.8; these are competitive with the previous state of the art for some pairs and outperform it by 12 points in the case of Ru-En. Furthermore, applying simple rule-based filters (Sec. 2.3) to the candidate translation pairs adds a few more points, although the edit distance filter has a negligible effect compared with the digit filter.
We see that finetuning mBERT on its own chosen sentence pairs (i.e., unsupervised self-training) yields significant improvements, adding another 8 to 14 points to the F1 score on top of filtering. In all, these F1 scores represent a 34% to 98% relative improvement over existing techniques in unsupervised parallel sentence extraction for these language pairs.
Libovický et al. (2019) explored bitext mining with mBERT in the supervised context and found that retrieval performance significantly varies with the mBERT layer used to create sentence embeddings. In particular, they found layer 8 embeddings gave the highest precision-at-1. We also observe an improvement (Table 1) in unsupervised retrieval of another 13 to 20 points by using the 8th layer instead of the default final layer (12th). We include these results but do not consider them unsupervised, as we would not know a priori which layer was best to use.
3.3 Choosing Negative Sentence Pairs
Other authors (e.g., Guo et al., 2018) have noted that the choice of negative examples has a considerable impact on metric learning. Specifically, using negative examples which are difficult to distinguish from the positive nearest neighbor is often beneficial for performance. We examine the impact of taking random sentences instead of the remaining k − 1 nearest neighbors as the negatives during self-training.
Our results are in Table 3. While self-training with random negatives still greatly improves the untuned baseline, the use of hard negative examples mined from the k-nearest neighborhood can make a significant difference to the final F1 score.
4 Bitext for Neural Machine Translation
A major application of bitext mining is to create new corpora for machine translation. We conduct an extrinsic evaluation of our unsupervised bitext mining approach on unsupervised (WMT’14 French-English, WMT’16 German-English) and low-resource (IWSLT’15 English-Vietnamese) translation tasks.
We perform large-scale unsupervised bitext extraction on the October 2019 Wikipedia dumps in various languages. We use wikifil.pl5 to extract paragraphs from Wikipedia and remove markup. We then use the syntok6 package for sentence segmentation. Finally, we reduce the size of the corpus by removing sentences that are not part of the body of Wikipedia pages: sentences that contain *, =, //, ::, #, www, (talk), or the pattern [0-9]{2}:[0-9]{2} are filtered out.
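A minimal sketch of this cleanup filter; the substring and regex checks are our direct reading of the list above, and the file name is purely illustrative.

```python
import re

# Markers indicating a line is not body text of a Wikipedia page (Sec. 4).
BAD_SUBSTRINGS = ["*", "=", "//", "::", "#", "www", "(talk)"]
TIME_PATTERN = re.compile(r"[0-9]{2}:[0-9]{2}")

def keep_sentence(sentence: str) -> bool:
    if any(marker in sentence for marker in BAD_SUBSTRINGS):
        return False
    if TIME_PATTERN.search(sentence):
        return False
    return True

# e.g. (hypothetical file name):
# sentences = [s.strip() for s in open("wiki.en.txt", encoding="utf-8") if keep_sentence(s)]
```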
We index, retrieve, and filter candidate sentence pairs with the procedure in Sec. 3. Unlike BUCC, the Wikipedia dataset does not fit in GPU memory. The processed corpus is quite large, with 133 million, 67 million, 36 million, and 6 million sentences in English, German, French, and Vietnamese respectively. We therefore shard the dataset into chunks of 32,768 sentences and perform nearest-neighbor comparisons in chunks for each language pair. We use a simple map-reduce algorithm to merge the intermediate results back together.
We follow the approach outlined in Sec. 2 for Wikipedia bitext mining. For each source sentence, we retrieve the four nearest target neighbors across the millions of sentences that we extracted from Wikipedia and compute the margin-based scores for each pair.
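The sketch below shows one straightforward way to realize this sharded search: the 32,768-sentence chunk size and k = 4 come from the text, while the merge ("reduce") step shown here is our own illustration. Embeddings are assumed L2-normalized, so the inner product equals cosine similarity.

```python
import faiss
import numpy as np

CHUNK = 32768  # target sentences per shard, as in Sec. 4
K = 4          # nearest neighbors per source sentence

def chunked_search(src_vecs, tgt_vecs):
    """Exact k-NN over a target corpus too large for a single in-memory index:
    search each target shard separately ("map"), then merge the partial top-K
    lists per source sentence ("reduce")."""
    n_src = src_vecs.shape[0]
    best_sims = np.full((n_src, K), -np.inf, dtype="float32")
    best_ids = np.full((n_src, K), -1, dtype="int64")
    for start in range(0, tgt_vecs.shape[0], CHUNK):
        shard = tgt_vecs[start:start + CHUNK]
        index = faiss.IndexFlatIP(shard.shape[1])
        index.add(shard)
        sims, ids = index.search(src_vecs, K)  # partial results for this shard
        ids += start                           # map shard-local ids to global ids
        # Reduce: keep the overall top-K among previous and new candidates.
        all_sims = np.concatenate([best_sims, sims], axis=1)
        all_ids = np.concatenate([best_ids, ids], axis=1)
        order = np.argsort(-all_sims, axis=1)[:, :K]
        best_sims = np.take_along_axis(all_sims, order, axis=1)
        best_ids = np.take_along_axis(all_ids, order, axis=1)
    return best_sims, best_ids
```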
4.1 Unsupervised NMT
We show that our pseudo-parallel text can complement existing techniques for unsupervised translation (Artetxe et al., 2018; Lample et al., 2018c). In line with existing work on UNMT, we evaluate our approach on the WMT’14 Fr-En and WMT’16 De-En test sets.
Our UNMT experiments build upon the reference implementation7 of XLM (Lample and Conneau, 2019). The UNMT model is trained by alternating between two steps: a denoising autoencoder step and a backtranslation step (refer to Lample et al., 2018c for more details). The backtranslation step generates pseudo-parallel training data, and we incorporate our bitext during UNMT training in the same way, as another set of pseudo-parallel sentences. We also use the same initialization as Lample and Conneau (2019), where the UNMT models have encoders and decoders that are initialized with contextual embeddings trained on the source and target language Wikipedia corpora with the masked language model (MLM) objective; no parallel data is used.
We performed the exhaustive (Fr Wiki)-(En Wiki) and (De Wiki)-(En Wiki) nearest-neighbor comparison on eight V100 GPUs, which requires 3 to 4 days to complete per language pair. We retained the top 2.5 million pseudo-parallel Fr-En and De-En sentence pairs after mining.
4.2 Results
Our results are in Table 4. The addition of mined bitext consistently increases the BLEU score in both directions for WMT’14 Fr-En and WMT’16 De-En. Much of the existing work on improving UNMT focuses on improved initialization with contextual embeddings like XLM or MASS (Song et al., 2019). These embeddings were already pre-trained on Wikipedia data, so it is surprising that adding our pseudo-parallel Wikipedia sentences leads to a 2 to 3 BLEU improvement. In other words, our approach is complementary to pre-trained initialization techniques.
| Reference | Architecture | Pre-training | En-De | De-En | En-Fr | Fr-En |
|---|---|---|---|---|---|---|
| Artetxe et al. (2018) | 2-layer RNN | | 6.89 | 10.16 | 15.13 | 15.56 |
| Lample et al. (2018a) | 3-layer RNN | | 9.75 | 13.33 | 15.05 | 14.31 |
| Yang et al. (2018) | 4-layer Transformer | | 10.86 | 14.62 | 16.97 | 15.58 |
| Lample et al. (2018c) | 4-layer Transformer | | 17.16 | 21.00 | 25.14 | 24.18 |
| Song et al. (2019) | 6-layer Transformer | MASS | 28.3 | 35.2 | 37.5 | 34.9 |
| XLM Baselines | | | | | | |
| Lample and Conneau (2019) | 6-layer Transformer | XLM | – | – | 33.4 | 33.3 |
| Song et al. (2019) | 6-layer Transformer | XLM | 27.0 | 34.3 | 33.4 | 33.3 |
| XLM reference implementation | 6-layer Transformer | XLM | – | – | 36.6 | 34.0 |
| Maximum performance across baselines | 6-layer Transformer | XLM | 27.0 | 34.3 | 36.6 | 34.0 |
| Ours | | | | | | |
| Our XLM baseline | 6-layer Transformer | XLM | 27.7 | 34.5 | 36.7 | 34.5 |
| w/ pseudo-parallel text before ST | 6-layer Transformer | XLM | 30.4 | 36.3 | 39.7 | 35.9 |
| w/ pseudo-parallel text after ST | 6-layer Transformer | XLM | 30.7 | 37.3 | 40.2 | 36.9 |
Previously (in Table 1), we saw that self-training improved the F1 score for BUCC bitext retrieval. The improvement in bitext quality carries over to UNMT, and providing better pseudo-parallel text yields a consistent improvement for all translation directions.
Our results are state-of-the-art in UNMT, but they should be interpreted relative to the strength of our XLM baseline. We are building on top of the XLM initialization, and the effectiveness of the initialization (and the various hyperparameters used during training and decoding) affects the strength of our final results. For example, we adjusted the beam width on our XLM baselines to attain BLEU scores which are similar to what others have published. One can apply our method to MASS, which performs better than XLM on UNMT, but we chose to report results on XLM because it has been validated on a wider range of tasks and languages.
We also trained a standard 6-layer transformer encoder-decoder model directly on the pseudo-parallel text. We used the standard implementation in Sockeye (Hieber et al., 2018) as-is, and trained models for French and German on 2.5 million Wikipedia sentence pairs. We withheld 10k pseudo-parallel pairs per language pair to serve as a development set. We achieved BLEU scores of 20.8, 21.1, 28.2, and 28.0 on En-De, De-En, En-Fr, and Fr-En respectively. BLEU scores were computed with SacreBLEU (Post, 2018). This compares favorably with the best UNMT results in Lample et al. (2018c), while avoiding the use of parallel development data altogether.
4.3 Low-resource NMT
French and German are high-resource languages and are linguistically close to English. We therefore evaluate our mined bitext on a low-resource, linguistically distant language pair. The IWSLT’15 English-Vietnamese MT task (Cettolo et al., 2015) provides 133k sentence pairs derived from translated TED talks transcripts and is a common benchmark for low-resource MT. We take supervised training data from the IWSLT task and augment it with different amounts of pseudo-parallel text mined from English and Vietnamese Wikipedia. Furthermore, we construct a very low-resource setting by downsampling the parallel text and monolingual Vietnamese Wikipedia text by a factor of ten (13.3k sentence pairs).
We use the reference implementation8 for the state-of-the-art model (Nguyen and Salazar, 2019), which is a highly regularized 6+6-layer transformer with pre-norm residual connections, scale normalization, and normalized word embeddings. We use the same hyperparameters (except for the dropout rate) but train on our augmented datasets. To mitigate domain shift, we finetune the best checkpoint for 75k more steps using only the IWSLT training data, in the spirit of “trivial” transfer learning for low-resource NMT (Kocmi and Bojar, 2018).
In Table 5, we show BLEU scores as more pseudo-parallel text is included during training. As in previous work on En-Vi (cf. Luong and Manning, 2015), we use tst2012 (1,553 pairs) and tst2013 (1,268 pairs) as our development and test sets, respectively; we tokenize all data with Moses and report tokenized BLEU via multi-bleu.perl. The BLEU score increases monotonically with the size of the pseudo-parallel corpus and exceeds the state-of-the-art system’s BLEU by 1.2 points. This result is consistent with improvements observed with other types of monolingual data augmentation, such as pre-trained UNMT initialization, various forms of back-translation (Hoang et al., 2018; Zhou and Keung, 2020), and cross-view training (CVT; Clark et al., 2018):
| | En-Vi (test BLEU, dev BLEU in parentheses) |
|---|---|
| Luong and Manning (2015) | 26.4 |
| Clark et al. (2018) | 28.9 |
| Clark et al. (2018), with CVT | 29.6 |
| Xu et al. (2019) | 31.4 |
| Nguyen and Salazar (2019) | 32.8 (28.8) |
| + top 100k mined pairs | 33.2 (29.5) |
| + top 200k mined pairs | 33.9 (29.8) |
| + top 300k mined pairs | 34.0 (30.0) |
| + top 400k mined pairs | 34.1 (29.9) |
We describe our hyperparameter tuning and infrastructure following Dodge et al. (2019). The translation sections of this work mostly used default parameters, but we did tune the dropout rate (at 0.2 and 0.3) for each amount of mined bitext for the supervised En-Vi task (at 100k, 200k, 300k, and 400k sentence pairs). We include development scores for our best models; dropout of 0.3 did best for 0k and 100k, while 0.2 did best otherwise. Training takes less than a day on one V100 GPU.
To simulate a very low-resource task, we use one-tenth of the training data by downsampling the IWSLT En-Vi train set to 13.3k sentence pairs. Furthermore, we mine bitext from one-tenth of the monolingual Wiki Vi text and extract proportionately fewer sentence pairs (i.e., 10k, 20k, 30k, and 40k pairs). We use the implementation and hyperparameters for the regularized 4+4-layer transformer used by Nguyen and Salazar (2019) in a similar setting. We tune the dropout rate (0.2, 0.3, 0.4) to maximize development performance; 0.4 was best for 0k, 0.3 for 10k and 20k, and 0.2 for 30k and 40k. In Table 6, we see larger improvements in BLEU (4+ points) for the same relative increases in mined data (as compared to Table 5). In both cases, the rate of improvement tapers off as the quality and relative quantity of mined pairs degrades at each increase.
| | En-Vi, one-tenth (test BLEU, dev BLEU in parentheses) |
|---|---|
| 13.3k pairs (from 133k original) | 20.7 (19.5) |
| + top 10k mined pairs | 25.0 (22.9) |
| + top 20k mined pairs | 26.7 (24.1) |
| + top 30k mined pairs | 27.3 (24.5) |
| + top 40k mined pairs | 27.7 (24.7) |
4.4 UNMT Ablation Study: Pre-training and Bitext Mining Corpora
In Sec. 4.2, we mined bitext from the October 2019 Wikipedia snapshot whereas the pre-trained XLM embeddings were created prior to January 2019. Hence, it is possible that the UNMT BLEU increase would be smaller if the bitext were mined from the same corpus used for pre-training. We ran an ablation study to show the effect (or lack thereof) of the overlap between the pre-training and pseudo-parallel corpora.
For the En-Vi language pair, we used 5 million English and 5 million Vietnamese Wiki sentences to pre-train the XLM model. We only use text from the October 2019 Wiki snapshot. We mined 300k pseudo-parallel sentence pairs using our approach (Sec. 2) from the same Wiki snapshot. We created two datasets for XLM pre-training: a 10 million-sentence corpus that is disjoint from the 600k sentences of the mined bitext, and a 10 million-sentence corpus that contains all 600k sentences of the bitext. In Table 7, we show the BLEU increase on the IWSLT En-Vi task with and without using the mined bitext as parallel data, using each of the two XLM models as the initialization.
| | w/o PP as bitext | w/ PP as bitext |
|---|---|---|
| XLM excl. PP text | 23.2 | 28.9 |
| XLM incl. PP text | 23.1 | 28.3 |
The benefit of using pseudo-parallel text is very clear; even if the pre-trained XLM model saw the pseudo-parallel sentences during pre-training, using mined bitext still significantly improves UNMT performance (23.1 vs. 28.3 BLEU). In addition, the baseline UNMT performance without the mined bitext is similar between the two XLM initializations (23.1 vs. 23.2 BLEU), which suggests that removing some of the parallel text present during pre-training does not have a major effect on UNMT.
Finally, we trained a standard encoder-decoder model on the 300k pseudo-parallel pairs only, using the same Sockeye recipe as in Sec. 4.2. This yielded a BLEU score of 27.5 on En-Vi, which is lower than the best XLM-based result (28.9), suggesting that the XLM initialization improves unsupervised NMT. A similar outcome was also reported in Lample and Conneau (2019).
5 Related Work
5.1 Parallel Sentence Mining
Approaches to parallel sentence (or bitext) mining have been historically driven by the data requirements of statistical machine translation. Some of the earliest work in mining the Web for large-scale parallel corpora can be found in Resnik (1998) and Resnik and Smith (2003). Recent interest in the field is reflected by new shared tasks on parallel extraction and filtering (Zweigenbaum et al., 2017; Koehn et al., 2018) and the creation of massively multilingual parallel corpora mined from the Web, like WikiMatrix (Schwenk et al., 2019a) and CCMatrix (Schwenk et al., 2019b).
Existing parallel corpora have been exploited in many ways to create sentence representations for supervised bitext mining. One approach involves a joint encoder with a shared wordpiece vocabulary, trained as part of multiple encoder-decoder translation models on parallel corpora (Schwenk, 2018). Artetxe and Schwenk (2019b) apply this approach at scale, sharing a single encoder and joint vocabulary across 93 languages. Another approach uses negative sampling to align the encoders’ sentence representations for nearest-neighbor retrieval (Grégoire and Langlais, 2018; Guo et al., 2018).
However, these approaches require training with initial parallel corpora. In contrast, Hangya et al. (2018) and Hangya and Fraser (2019) proposed unsupervised methods for parallel sentence extraction that use bilingual word embeddings induced in an unsupervised manner. Our work is the first to explore using contextual representations (mBERT; Devlin et al., 2019) in an unsupervised manner to mine for bitext, and to show improvements over the latest UNMT systems (Lample and Conneau, 2019; Song et al., 2019), for which transformers and encoder/decoder pre-training have doubled or tripled BLEU scores on unsupervised WMT’16 En-De since Artetxe et al. (2018) and Lample et al. (2018c).
5.2 Self-training Techniques
Self-training refers to techniques that use the outputs of a model to provide labels for its own training. Yarowsky (1995) proposed a semi-supervised strategy where a model is first trained on a small set of labeled data and then used to assign pseudo-labels to unlabeled data. Semi-supervised self-training has been used to improve sentence encoders that project sentences into a common semantic space. For example, Clark et al. (2018) proposed cross-view training (CVT) with labeled and unlabeled data to achieve state-of-the-art results on a set of sequence tagging, MT, and dependency parsing tasks.
Semi-supervised methods require some annotated data, even if it is not directly related to the target task. Our work is the first to apply unsupervised self-training to generating cross-lingual sentence embeddings. The most similar approach to ours is the prevailing scheme for unsupervised NMT (Lample et al., 2018c), which relies on multiple iterations of backtranslation (Sennrich et al., 2016) to create a sequence of pseudo-parallel sentence pairs with which to bootstrap an MT model.
6 Conclusion
In this work, we describe a novel approach for state-of-the-art unsupervised bitext mining using multilingual contextual representations. We extract pseudo-parallel sentences from unaligned corpora to create models that achieve state-of-the-art performance on unsupervised and low-resource translation tasks. Our approach is complementary to the improvements derived from initializing MT models with pre-trained encoders and decoders, and helps narrow the gap between unsupervised and supervised MT. We focused on mBERT-based embeddings in our experiments, but we expect unsupervised self-training to improve the unsupervised bitext mining and downstream UNMT performance of other forms of multilingual contextual embeddings as well.
Our findings are in line with recent work showing that multilingual embeddings are very useful for cross-lingual zero-shot and zero-resource tasks. Even without using aligned corpora, mBERT can embed sentences across different languages in a consistent fashion according to their semantic content. More work will be needed to understand how contextual embeddings discover these cross-lingual correspondences.
Acknowledgments
We would like to thank the anonymous reviewers for their thoughtful comments.
Notes
By unsupervised, we mean that no cross-lingual resources like parallel text or bilingual lexicons are used. Unsupervised techniques have been used to bootstrap MT systems for low-resource languages like Khmer and Burmese (Marie et al., 2019).
In Python, set(re.findall("[0-9]+",sent1)) == set(re.findall("[0-9]+",sent2)).