Comparing Gamified and Traditional Assessment Environments: A Quasi-Experimental Study in a University Python Course

Authors

  • József Cserkó John von Neumann University

DOI:

https://doi.org/10.24368/jates418

Keywords:

Gamification, evaluation, exam, measurement, university

Abstract

This study examines student-performance outcomes by comparing two distinct assessment environments—a traditional paper-based exam and a complex gamified digital format—in a university-level introductory Python programming course. A quasi-experimental comparison with student self-selection was conducted at John von Neumann University (Hungary) with 63 first-year Information Technology students. Of these, 27 took a conventional paper-based exam, while 36 completed the assessment in CodingUs, a custom-built “Among Us”-inspired web application. This gamified condition operated as a package intervention, incorporating not only game design elements but also individualized AI-generated tasks, disabled clipboard operations, and a distinct user interface. Isomorphic Python tasks were produced by an AI-assisted generation pipeline using GPT-4o-mini and GPT-4o. Performance was compared using the Mann–Whitney U test as the primary procedure, with an independent-samples t-test as a supplementary parametric analysis. The two groups did not differ significantly in mean performance (gamified: M = 63.06%, SD = 31.61; traditional: M = 68.89%, SD = 34.68; Mann–Whitney U = 423.50, p = .383; t(61) = −0.70, p = .490; Cohen’s d = −0.18; 95% CI for the mean difference [−22.61, +10.94]). While no statistically significant difference in performance was detected in this sample, the wide confidence interval and the self-selected nature of the design preclude claims of equivalence. Informal classroom observations and unsolicited student feedback offered preliminary indications of elevated engagement and favourable perceptions of the anti-cheating provisions in the gamified cohort; because no validated self-report instrument was administered, these impressions are reported as exploratory rather than confirmatory.
The study contributes a replicable AI-supported pipeline for generating isomorphic programming items and motivates further research employing randomised allocation and validated measurement instruments.
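The effect-size and rank-based statistics reported in the abstract can be sketched in plain Python. The helpers below are illustrative only (not the study's analysis code), and the score arrays are synthetic stand-ins, not the published data: Mann–Whitney U is computed by pairwise comparison (ties counted as 0.5), and Cohen's d uses the pooled standard deviation.

```python
from statistics import mean, stdev

def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U via pairwise comparison; ties contribute 0.5.
    Returns the smaller of the two U statistics, as conventionally reported."""
    u_a = sum(1.0 if x > y else 0.5 if x == y else 0.0
              for x in group_a for y in group_b)
    u_b = len(group_a) * len(group_b) - u_a
    return min(u_a, u_b)

def cohens_d(group_a, group_b):
    """Cohen's d with pooled standard deviation (positive if group_a scores higher)."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = (((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Synthetic exam-score percentages for illustration only.
gamified = [45, 60, 72, 88, 30, 95, 55]
traditional = [50, 65, 80, 90, 40, 70]
print("U =", mann_whitney_u(gamified, traditional))
print("d = %.2f" % cohens_d(gamified, traditional))
```

A p-value for U would normally come from a normal approximation or an exact permutation of the ranks; in practice one would reach for `scipy.stats.mannwhitneyu` rather than hand-rolling the test.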

Published

2026-04-22

How to Cite

Cserkó, J. (2026). Comparing Gamified and Traditional Assessment Environments: A Quasi-Experimental Study in a University Python Course. Journal of Applied Technical and Educational Sciences, 16(1), ArtNo: 418. https://doi.org/10.24368/jates418

Section

Articles and Studies