TransLLaMa: LLM-based Simultaneous Translation System

Categories: programming, architectures
Decoder-only LLMs can perform SiMT tasks after fine-tuning with a special wait token; GPT-4 also shows promise.
Author: Roman Koshkin, Katsuhito Sudoh, Satoshi Nakamura

Published: February 7, 2024

Summary:

The article discusses a simultaneous machine translation (SiMT) system built on large language models (LLMs). The study demonstrates that a pre-trained LLM, fine-tuned on a dataset of causally aligned source and target sentences, can perform English-German and English-Russian simultaneous translation with BLEU scores comparable to state-of-the-art baselines. Because the fine-tuned model itself decides when to emit translation and when to read more of the source, the system controls input segmentation without a separate policy; it also handles speech-to-speech translation with high quality.
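
To make the decoding mechanism concrete, here is a minimal sketch of such a policy-free loop. The `<WAIT>` token name, the `model` callable, and the prompt layout are illustrative assumptions, not the paper's exact implementation.

```python
from typing import Callable, Iterator, List


def simultaneous_translate(
    model: Callable[[str], str],    # hypothetical: returns the next token for a prompt
    source_stream: Iterator[str],
    wait_token: str = "<WAIT>",
    eos_token: str = "</s>",
) -> List[str]:
    """Interleave READ and WRITE actions driven by the model itself.

    Emitting `wait_token` is the model's request for more source
    context, so no separate read/write policy is required.
    """
    source: List[str] = []
    target: List[str] = []
    while True:
        prompt = " ".join(source) + " => " + " ".join(target)
        token = model(prompt)       # greedy next-token prediction
        if token == wait_token:
            nxt = next(source_stream, None)
            if nxt is None:
                break               # source exhausted; in practice <WAIT> would be masked out here
            source.append(nxt)      # READ: consume one more source word
        elif token == eos_token:
            break
        else:
            target.append(token)    # WRITE: commit a target word immediately
    return target
```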

Major Findings:

  1. Unlike conventional offline translation, simultaneous machine translation (SiMT) aims to produce target text with minimal delay, which requires deciding at each step whether to translate or to wait for more input.
  2. A pre-trained LLM can be fine-tuned to perform both simultaneous translation and input segmentation without a separate policy, with performance approaching or exceeding the state of the art (a sketch of how such training data can be constructed follows this list).
  3. The proposed system, TransLLaMa, handles speech-to-speech translation with quality approaching that of some recently published baselines at comparable latencies.
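
As referenced in finding 2, below is a minimal sketch of how causally aligned training sequences might be built. The alignment format (`align[j]` = number of source words needed before target word `j`) and the `<WAIT>` token are assumptions for illustration, not the paper's exact data format.

```python
from typing import List


def build_training_sequence(
    src_words: List[str],
    tgt_words: List[str],
    align: List[int],
    wait_token: str = "<WAIT>",
) -> str:
    """Interleave <WAIT> tokens so the model learns when to read vs. write."""
    out: List[str] = []
    read = 0
    for j, word in enumerate(tgt_words):
        while read < align[j]:      # not enough source yet: teach a READ
            out.append(wait_token)
            read += 1
        out.append(word)            # enough context now: teach a WRITE
    return " ".join(out)


# Example: a monotone one-to-one alignment.
print(build_training_sequence(
    ["I", "have", "eaten"], ["Ich", "habe", "gegessen"], [1, 2, 3]))
# -> "<WAIT> Ich <WAIT> habe <WAIT> gegessen"
```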

Analysis and Critique:

  • The study provides valuable insight into policy-free SiMT, showcasing the potential of pre-trained LLMs for simultaneous translation tasks.
  • However, the article lacks a detailed discussion of the limitations and challenges of the proposed system; exploring the impact of different parameters and the generalizability to other language pairs would make the study more comprehensive.
  • The article would also benefit from a deeper analysis of the trade-off between quality and latency (a sketch of a standard latency metric follows this list), as well as a comparison with more existing SiMT systems, to give a fuller evaluation of the proposed approach.
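
For reference, Average Lagging (AL; Ma et al., 2019) is a standard SiMT latency metric used when weighing quality against delay. This is a generic sketch, not necessarily the exact latency variant reported in the paper.

```python
from typing import List


def average_lagging(g: List[int], src_len: int) -> float:
    """AL = (1/tau) * sum_{t<=tau} (g[t] - (t-1)/gamma), gamma = |y|/|x|.

    g[t] is the number of source words read before emitting target
    word t (0-indexed here; the original formulation is 1-indexed).
    """
    gamma = len(g) / src_len
    # tau: index of the first target word emitted after the full source is read.
    tau = next((t + 1 for t, gt in enumerate(g) if gt >= src_len), len(g))
    return sum(g[t] - t / gamma for t in range(tau)) / tau


# Example: a wait-2 schedule on a 5-word source and 5-word target.
print(round(average_lagging([2, 3, 4, 5, 5], src_len=5), 2))  # 2.0
```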

Appendix

Model: gpt-3.5-turbo-1106
Date Generated: 2024-02-26
Abstract: https://arxiv.org/abs/2402.04636v1
HTML: https://browse.arxiv.org/html/2402.04636v1
Truncated: False
Word Count: 15449