Comparing L2 learners’ writing against parallel machine-translated texts: Raters’ assessment, linguistic complexity and errors

Yuah V. Chon, Dongkwang Shin, Go Eun Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Recent developments in machine translation, such as Google Translate, may help second language (L2) writers produce texts in the target language according to their intended meaning. The aim of the present study was to examine the role of machine translation (MT) in L2 writing. For this purpose, 66 Korean English as a foreign language (EFL) university learners produced compositions in which writing tasks were counterbalanced across three writing modes (i.e., Direct Writing, Self-Translated Writing, and Machine-Translated Writing). The learners’ writing products were first graded by independent markers and later submitted for computerized text analyses using BNC-COCA 25000, Coh-Metrix, and SynLex to assess linguistic complexity. The texts were also analyzed for types of errors. The results indicate that MT narrowed the gap in writing ability between skilled and less skilled learners, facilitated learner use of lower-frequency words, and produced syntactically more complex sentences. Error analysis showed a reduction in the number of grammatical errors when MT was used to aid L2 writing. However, machine-translated compositions contained more mistranslations and a greater number of poor word choices. The results offer pedagogical implications for using MT in L2 writing.

Original language: English
Article number: 102408
Journal: System
Volume: 96
DOIs
State: Published - Feb 2021

Keywords

  • Coh-Metrix
  • Error analysis
  • Google Translate
  • Lexical complexity
  • Machine translation
  • Second language writing
  • SynLex
  • Syntactic complexity

