Generative AI in Introductory Programming

Tentative Author List:

  • Brett A. Becker, University College Dublin, Ireland
  • Michelle Craig, University of Toronto, Canada
  • Paul Denny, The University of Auckland, New Zealand
  • Hieke Keuning, Utrecht University, The Netherlands
  • Natalie Kiesler, DIPF Leibniz Institute for Research and Information in Education, Germany
  • Juho Leinonen, The University of Auckland, New Zealand
  • Andrew Luxton-Reilly, The University of Auckland, New Zealand
  • Lauri Malmi, Aalto University, Finland
  • James Prather, Abilene Christian University, Abilene, TX, USA
  • Keith Quille, School of Enterprise Computing and Digital Transformation, TU Dublin, Ireland

Scope:

Large Language Models (LLMs) such as OpenAI's ChatGPT, and tools powered by them such as GitHub Copilot, have demonstrated impressive performance on a myriad of programming tasks. From the student perspective, they can often produce syntactically and logically correct code from natural-language prompts that rivals the work of high-performing introductory programming students, an ability that has been shown to extend beyond introductory programming. However, their impact in the classroom goes beyond producing code. For example, they could help level the playing field between students with and without prior experience: they have been shown to be proficient not only in explaining programming error messages but also in repairing broken code, and pair programming might evolve from two students working together into "me and my AI". On the other hand, students could become over-reliant on them, and they may open up new divides.

From the educator perspective, LLMs have been successful in generating novel exercises and examples, including correct solutions and functioning test cases. They can be used to assess student work, and they have the potential to act as always-available teaching assistants that won't judge students, easing the burden not only on educators but also on their assistants and the educational systems behind courses. They could even affect student intakes, given their prominence in the media and the effect that such forces can have on who chooses, and who chooses not, to study computing.

Given that LLMs have the potential to reshape introductory programming, they may well impact the entire computing curriculum, affecting what is taught, when it is taught, how it is taught, and to whom it is taught. However, the dust has not yet settled, with some educators embracing LLMs and others fearing that the challenges could outweigh the opportunities.
Indeed, during this transformation from pre- to post-LLM introductory programming, several issues need to be mitigated, including ethics, bias, and academic integrity. In this paper we explore the present realities and future possibilities of how large language models could impact introductory programming, the foundation of the computing curriculum, including learning goals, assessment, academic integrity, emerging pedagogies, and educational resources.

Contact: Brett Becker

Version: