With the growing popularity of task-based and task-supported approaches to language teaching, the field of instructed second language (L2) acquisition has seen preliminary interest in how tasks might be manipulated at the pre-task stage to facilitate subsequent L2 performance (see, e.g., Foster & Skehan, 2013). However, no studies to date have examined how students' expectations of how they will be evaluated might influence the quality of their written output. In addition, previous studies have primarily examined language products (e.g., linguistic complexity and accuracy), while little attention has been paid to online processes (Révész, 2015). This study aims to help address these gaps.

Participants (N = 76, upper-intermediate, Arabic L1) completed two writing tasks over a period of three weeks. For each task, participants were required to write a five-paragraph essay. Before the first task, all participants were informed that they would receive feedback only on the content of their writing. Before the second and final task, half of the participants were again informed that they would receive feedback on the content of their writing, whereas the other half were informed that they would receive feedback only on the accuracy of their writing. While participants wrote, their pausing and revision behaviors were recorded using the keystroke-logging software Inputlog. Accuracy was assessed with global and local accuracy measures, and metrics of linguistic complexity were obtained via the Coh-Metrix and Synlex tools.

The results of a series of mixed-effects analyses will be discussed in light of models of L2 writing and of task-based performance and development. Finally, the pedagogical implications of the study will be considered, in particular how student expectations might best be managed to achieve the desired outcomes from their writing.

Copyright © 2018 All Academic, Inc.
Publication status: Published - Mar 2018