Abstract
The involvement load hypothesis (ILH) proposed by Laufer and Hulstijn (2001) is one of the most widely acknowledged hypotheses about vocabulary learning (Hu & Nassaji, 2016). It suggests that the effectiveness of a task in promoting vocabulary learning is contingent upon the involvement load of the task, which is composed of the amount of need, search, and evaluation it imposes (Hulstijn & Laufer, 2001). Tasks with greater involvement loads tend to be more effective than tasks with lower loads, and tasks with similar involvement loads tend to promote vocabulary learning to a similar degree (Hulstijn & Laufer, 2001). There is rich empirical evidence in support of the ILH in conventional learning environments; however, few studies have investigated the ILH in digital game environments. To fill this research gap, I conducted the present study to investigate whether the ILH can, in its present form, predict differences between digital and non-digital environments. This main research question consists of two sub-questions. (1) For the same vocabulary learning task with the same involvement load, do learners perform better in a digital game environment than in a conventional one? (2) For two different vocabulary learning tasks in the digital game environment, do learners perform differently even though the two tasks are considered to have the same involvement load according to the ILH in its present form? A total of 135 students participated in the study, and they were randomly divided into three groups. Students in Group A were asked to complete Task 1: reading a text and answering reading comprehension questions on paper. Students in Group B were asked to complete Task 2: reading a text and answering reading comprehension questions in a digital game. Students in Group C were asked to complete Task 3: reading a text and inferring the meanings of underlined words in a digital game. Essentially, there were two pairs of comparisons.
The first comparison, between Group A and Group B, examined whether learners performed differently on the same vocabulary learning task in a digital game environment and in a conventional one. The second, between Group B and Group C, examined whether learners performed differently on two tasks with the same involvement load in a digital game environment.
The participants were students from a Hong Kong university with similar educational backgrounds (first-year Master of Education students), language proficiency levels (intermediate English learners), and prior knowledge of the target vocabulary (they knew no more than two target words). Their first languages were either Mandarin or Cantonese. Before the project, their prior knowledge of the 10 target words was measured using Folse’s (2006) modified vocabulary knowledge scale (MVKS). After completing the learning session, all participants were tested immediately using the MVKS. Subsequently, 10 students from each group were interviewed to investigate their learning experiences and perceptions of the learning approaches. One week later, the remaining 35 students in each group were post-tested using the MVKS.
In total, I interviewed 30 students to investigate their learning experiences. In the interviews, I asked them to reflect on their learning process; questions included what they found useful or useless for their vocabulary learning and what they thought about the learning task and the game. The semi-structured interviews were conducted to triangulate the quantitative data collected from the post-tests. They aimed to identify which features of the learning task and game motivated the students, engaged them in the learning process, directed their attention to the target vocabulary knowledge, and were considered useful by them. Analysis of these features could contribute to a better understanding of the experimental results concerning the effectiveness of the three tasks.
In the context of this study, I examined task effectiveness in promoting vocabulary learning from two perspectives: the initial learning of the target vocabulary knowledge, evaluated by an immediate posttest, and the retention of the target vocabulary knowledge, evaluated by a delayed posttest. The results indicated that, for the same task of reading a text and answering reading comprehension questions, learners performed better in both immediate learning and retention of vocabulary knowledge in a digital game environment than in a conventional one. The results also indicated that learners performed differently on Task 2 (reading a text and answering reading comprehension questions in a digital game) and Task 3 (reading a text and inferring the meanings of underlined words in a digital game). Although the two tasks induce the same involvement load (moderate need, search, and evaluation) and should therefore promote vocabulary learning with similar effectiveness according to the ILH, the results showed that Task 3 was significantly more effective than Task 2.
The interview results indicated that the superiority of digital game environments over non-game environments was likely attributable to the digital game environment being conducive to increased learning motivation. Moreover, because the reading comprehension task did not specifically direct or guide students to focus on the target vocabulary, they rarely attempted to remember the forms of the target words or to work out their exact meanings. Thus, they did not expend additional effort on building clear form-meaning links for the target words, which might be the main reason why the reading comprehension task was less effective than the reading-plus-inferencing task.
Thus, I suggested that applications of the ILH to digital game-based vocabulary learning may need to add one more degree of prominence to "need" when evaluating the involvement load of a task in digital game environments. I also suggested that when learners are required to infer the exact meanings of target words from context, the involvement load of search should be rated strong, rather than moderate.
| Date of Award | 22 Mar 2022 |
| --- | --- |
| Original language | English |
| Awarding Institution | |
| Supervisor | Guoxing Yu (Supervisor) |