SINGAPORE MANAGEMENT UNIVERSITY
SMU Office of Research & Tech Transfer

“Correct. Good job.” “Wrong. Try again.”

This is typical of the outcome-based feedback provided by most computer programming grading tools, including commercial ones like Gradescope.
“Existing grading tools, including many research prototypes, are not enough to meet the needs of instructors teaching computer programming,” Assistant Professor of Computer Science (Education) Don Ta told the Office of Research & Tech Transfer.
“While some tools are good for summative assessment, they are incapable of providing a holistic assessment of the cognitive process and the approach students take when designing an algorithm or writing code to solve a problem,” he continued.
“Thus, to provide constructive feedback, Computing and Information Systems (CIS) instructors like myself have to review hundreds and, sometimes, thousands of lines of code. This is a drawn-out process, as there can be 400-500 students enrolled in the introductory programming course at SMU,” he added.
“Based on my years of experience teaching computing, I know that students learn best when they are given timely, frequent, formative and personalised feedback. The more feedback students receive, including relevant code samples and additional programming tasks that address their previous mistakes, the faster they will improve their skills in code reading, algorithm design and code writing, which are among the core skills of any CIS student,” he went on.
To develop a tool that provides instantaneous and constructive feedback to students, Professor Ta and his three collaborators, SMU Associate Professor of Computer Science Shar Lwin Khin, SMU Professor of Information Systems (Education) Venky Shankararaman, and Associate Professor Hui Siu Cheng from the School of Computer Science and Engineering at Nanyang Technological University, were recently awarded a Tertiary Education Research Fund (TRF) grant by the Ministry of Education. The project will deliver a web-based tool named AP-Coach, short for Automated Programming Coach.
This research builds on Professor Ta’s prior work on the accuracy and effectiveness of auto-scoring for code and for short answers written in natural language.
AP-Coach will be piloted, starting January 2023, with a class of first-year SMU undergraduates enrolled in the introductory Python programming course. If it proves useful for learning, it will be rolled out to the rest of the students in subsequent semesters.
The primary objectives of AP-Coach are to automate the code review process at scale and, at the same time, to enhance learning by giving students instantaneous, constructive and personalised feedback: hints on the next steps to take, relevant code samples, and additional programming tasks suited to honing their code reading, algorithm design and code writing.
AP-Coach will analyse the code or pseudocode submitted by students and generate relevant, personalised feedback using similarity-matching algorithms that build on recent advances in AI (code embeddings and natural language processing models), alongside software engineering techniques that assess the abstract syntax structure of the code.
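As one illustration of the software-engineering side of this approach, the sketch below (an assumption for exposition, not AP-Coach’s actual implementation) compares two Python snippets by their abstract syntax structure: it collects n-grams of AST node types using Python’s standard `ast` module and scores their overlap with a Jaccard measure, so two solutions with the same shape but different variable names still register as similar.

```python
import ast

def node_type_ngrams(source: str, n: int = 2) -> set:
    """Collect n-grams of AST node-type names from a walk of the tree,
    so structurally similar code matches even if identifiers differ."""
    types = [type(node).__name__ for node in ast.walk(ast.parse(source))]
    return {tuple(types[i:i + n]) for i in range(len(types) - n + 1)}

def structural_similarity(code_a: str, code_b: str) -> float:
    """Jaccard similarity over AST node-type bigrams, in [0.0, 1.0]."""
    a, b = node_type_ngrams(code_a), node_type_ngrams(code_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two loops that sum a list: same structure, different names.
student = "total = 0\nfor x in nums:\n    total += x"
reference = "s = 0\nfor v in values:\n    s += v"
print(round(structural_similarity(student, reference), 2))  # → 1.0
```

A production system would combine a structural signal like this with learned code embeddings to retrieve the most relevant feedback or code sample for each submission.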
To provide more practice tasks, AP-Coach will be designed to auto-generate diverse programming exercises and pseudocode using AI techniques such as OpenAI’s GPT-3 (Generative Pre-trained Transformer 3), an auto-regressive language model capable of producing human-like text and code.
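As a hedged sketch of how such generation might be prompted (the function name and template below are illustrative assumptions; the project’s actual prompt design is not described in this article), a task-generation request could be assembled like this and then sent to a text-completion model such as GPT-3:

```python
# Illustrative only: AP-Coach's real prompt design is not public.
def build_exercise_prompt(topic: str, past_mistake: str) -> str:
    """Compose a completion prompt asking a language model for a new
    practice task that targets a mistake the student made earlier."""
    return (
        f"Write a short beginner Python exercise on {topic}.\n"
        f"It should help a student who previously {past_mistake}.\n"
        "Include a problem statement and one sample input/output pair."
    )

prompt = build_exercise_prompt("list iteration", "used the wrong loop variable")
print(prompt.splitlines()[0])
# The prompt text would then be submitted to a GPT-3-style completion
# endpoint, with sampling settings tuned for diverse but well-formed tasks.
```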
The tool is also designed to monitor student progress. Each student will be given a summary of the mistakes made throughout the 13-week course. The students can also use the AP-Coach to review past programming exercises.
To ascertain the effectiveness of AP-Coach, student proficiency in code reading, algorithm designing, and code writing will be monitored over several consecutive semesters.
Implications of the research
There are three important implications from this research.
First, immediate and relevant feedback has been found to be highly motivating for students; it also enables independent learning.
Second, effective automated coaching not only scales the code review process but also significantly reduces instructors’ workload, leaving them more time to help and guide weaker students.
Third, the AP-Coach can be an important step towards making AI-enabled education in Computer Science a reality.
AI-enabled education is an exciting discipline in learning and teaching, and Professor Ta looks forward to finding out how the tool can be useful to students.
Source: EurekAlert!, the online global news service operated by AAAS, the science society.