Part 2: Foretelling grades and encouraging reflection to increase motivation

by Jason Pratt
(Yokohama, Japan)

Results


The results listed below compare two groups of students: those who were not given charts and reflection assignments (the spring 2015 students, who were incidentally the source of the data for the charts provided to the others) and those who were given charts and reflection assignments (the fall 2015 students).

While all of the students were taking classes categorized as low intermediate, the university has a policy of subdividing such classes where possible and grouping together students with the most similar language abilities. This means that simply measuring the classes’ overall final grades against one another could reflect the differing abilities students entered with more than the progress they made or any motivation-driven improvement in work ethic. Grades have therefore been compared only for those students who had exactly the same mid-term speaking test scores and exactly the same number of absences up to that point. In all other cases, both for the individual comparisons I list and for class averages overall, the basis for comparison is the rate of change in speaking test scores and absenteeism, not the overall numbers.
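
For anyone wishing to replicate this matching procedure, here is a minimal sketch in Python. The student records, letter grades, and grade ranking are invented placeholders, not the study’s actual data; only the matching logic follows the method described above.

from collections import defaultdict

# Hypothetical records: (mid-term speaking score, absences at mid-term, final grade).
spring = [(38, 2, "B"), (41, 0, "A"), (38, 2, "C")]  # placeholder data
fall = [(38, 2, "A"), (41, 0, "A")]                  # placeholder data

GRADE_RANK = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

# Group spring students by their mid-term indicators.
spring_by_indicators = defaultdict(list)
for score, absences, grade in spring:
    spring_by_indicators[(score, absences)].append(grade)

higher = same = lower = 0
for score, absences, grade in fall:
    # Each fall grade is compared individually to every matching spring grade,
    # so the same grade can be counted more than once (see Note 2 below).
    for spring_grade in spring_by_indicators.get((score, absences), []):
        if GRADE_RANK[grade] > GRADE_RANK[spring_grade]:
            higher += 1
        elif GRADE_RANK[grade] == GRADE_RANK[spring_grade]:
            same += 1
        else:
            lower += 1

print(f"higher: {higher}, same: {same}, lower: {lower}")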

Before revealing the results, I would like to mention two considerations. The first relates to the calculation of scores: in cases where students dropped the class or did not attend enough classes to receive a grade, their numbers were not factored into the figures below. The second is a note about what will be shown. Again, out of respect for my former institution, I will not provide actual scores or numbers of absences. While I have already provided a reason why overall grades may not be the best indicator of success, overall absence numbers could be of some relevance. Nonetheless, as the university may not want me to make those figures public here, I will only discuss rates of change in absences.

Grade-based comparison for those with the same indicators at the mid-term

 Cases where fall semester students had higher final grades than comparable spring semester students: 12
 Cases where fall semester students achieved the same grade as comparable spring semester students: 13
 Cases where fall semester students had lower grades than comparable spring semester students: 3

Note 1: In all cases where grades differed, the difference was exactly one grade level, no more, no less.
Note 2: In some instances, there were multiple students in each semester who had the same absences and mid-term scores but received different final grades. In such cases, each fall semester grade is individually compared to each spring semester grade, meaning that the same grade can be counted more than once.


Pre-mid-term versus post-mid-term performance comparisons for those with the same indicators at the mid-term

 Fall semester students who received higher final speaking test scores than their best-performing spring counterparts: 18
 Fall semester students who received the same final speaking test score as their best-performing spring counterpart: 1
 Fall semester students who received lower final speaking test scores than their best-performing spring counterparts: 5

 Fall semester students who were absent fewer times after the mid-term than their best-attending spring counterparts: 10
 Fall semester students who were absent the same number of times after the mid-term as their best-attending spring counterparts: 5
 Fall semester students who were absent more times after the mid-term than their best-attending spring counterparts: 9

Note 1: The terms best-performing and best-attending are used because in some cases there was more than one spring semester counterpart with the same mid-term speaking test score and absence tally as a fall semester student.
Note 2: A comparison against the worst-performing or worst-attending counterparts yielded only slightly different results.


Pre-mid-term versus post-mid-term performance comparisons based on class averages

 Average change from mid-term speaking test scores to final speaking test scores for spring semester students: an increase of 0.61 points
 Average change from mid-term speaking test scores to final speaking test scores for fall semester students: an increase of 1.89 points
 Amount by which the fall semester students’ average increase surpassed that of the spring semester students: 1.28 points

 Increase in average absences after the mid-term compared to before the mid-term for spring semester students: 1.09 days
 Increase in average absences after the mid-term compared to before the mid-term for fall semester students: 0.92 days
 Amount by which the spring semester students’ increase in average absence days exceeded that of the fall semester students: 0.17 days

Note: As stated earlier in this study, speaking test grades were awarded on a scale of 0 to 50.
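
As a brief illustration of how these rate-of-change figures are calculated, the following sketch may help; the per-student scores in it are invented placeholders, while the 0.61, 1.89, 1.09, and 0.92 averages are taken from the results listed above.

def average_change(before, after):
    # Mean per-student change between two paired lists of scores.
    return sum(a - b for b, a in zip(before, after)) / len(before)

# Placeholder scores on the 0-to-50 speaking test scale.
midterm_scores = [38, 41, 35]
final_scores = [39, 43, 36]
print(average_change(midterm_scores, final_scores))  # about 1.33 points

# Using the class averages reported above:
print(round(1.89 - 0.61, 2))  # 1.28 points: the fall classes' greater score gain
print(round(1.09 - 0.92, 2))  # 0.17 days: the fall classes' smaller rise in absences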


Data Analysis

Judging by the figures above, some conclusions can be drawn with confidence, while other possible findings are suggested but not as well substantiated. Below I list my thoughts based on the data.

Firstly, the data shows that the average improvement in speaking test scores in the fall outpaced that in the spring, as did the improvement of most individual fall students who had the same mid-term indicators as spring students. It therefore seems likely that access to the data and self-reflection motivated a large number of students to increase their efforts. If there were any who decided to decrease their efforts, they were few by comparison.

Secondly, as spring semester students also received higher scores on their final speaking tests than on their mid-term speaking tests, my method was not the cause of improved scores in general, but it does seem to have contributed to greater rates of improvement.

Thirdly, the rates of improvement were not, on average, especially large. However, comparing fall students with spring semester counterparts possessing the same mid-term indicators on my chart, nearly half of the fall students received grades a full letter higher than those counterparts. As the amount of improvement on the speaking tests alone would not account for this, these same fall students must also have outperformed their counterparts on homework and/or listening tests. Due to the form my study took, I cannot say this with any certainty, but I assume the improved dedication to study also had positive effects on homework and listening tests.

Finally, students in both semesters were absent more in the second half of the semester than in the first half. Students given foresight of their potential grades had a smaller average increase in absenteeism, but the difference was so minimal that it is difficult to ascertain whether it was truly because of that foresight. What is certain is that the data did not push students to attend more classes after the mid-term than before.


Final thoughts

The results of my study suggest that my strategy of awareness building had an overall positive effect. Accordingly, I plan to employ this method again, with some modifications, when possible, and I would also recommend the strategy to other teachers. Nonetheless, I am still left with some doubts and questions.

Firstly, as I neglected to survey the students at any point during or after the class, I cannot be totally certain whether it was this strategy that resulted in the greater improvement, something else I may have done, or some other unknown circumstance. I hope to be a better teacher not only each semester but each class, so the fall semester students may simply have enjoyed better teaching than the spring semester students. I am also not certain whether factors such as the time of year, and associated issues such as the weather, affect levels of student dedication. If I had used fall 2014 student data to assist the spring 2015 students, would the results have been greater or smaller? I cannot say until I try again.

Secondly, I am not certain about the limits of this technique. If I taught a new group of students at the same level and under the same conditions, would providing them with the results of my fall 2015 students lead them, on average, to surpass those students as well? Or are these the best rates achievable given how much the method motivates students? Or would the higher grades the fall 2015 students received seem good enough already, or less attainable, and thus motivate fewer people? If the answer to either of these last two questions is yes, perhaps always using the spring 2015 grades with any group of students would be the most prudent approach. Again, without experimenting, I am not yet in a position to know these answers.

Finally, while the university’s grading system is standardized and designed to ensure validity, and its training and support materials work well against subjectivity, there is always the chance that I inadvertently graded the final speaking tests in the fall semester more leniently than the mid-term tests of that semester or either of the speaking tests in the spring semester. The best way to test this is for me to replicate the study multiple times.

I hope that other instructors attempt strategies similar to the one I have described here, addressing the areas for improvement I noted. I would greatly appreciate the chance to compare results.


Profile:

Jason Pratt teaches at Toyo Gakuen University in Tokyo. He holds an MA in International Politics from Hosei University. He can be contacted by email.






