Today we are starting a campaign to support every school and library in the UK to set up a free Code Club, so that all young people can develop the skills and knowledge they need to thrive in the age of AI.
Over the past decade, Code Club has provided more than 2 million young people with the opportunity to learn how to build their own apps, games, animations, websites, robots, and so much more.
We know that getting hands-on, practical experience of building real projects with technology works. Independent evaluations have shown that attending a Code Club not only helps young people develop their programming skills, but also builds wider life skills such as confidence, resilience, problem-solving, and communication, all of which are essential if they are going to thrive in a world where AI is ubiquitous.
Right now, there are over 2,000 Code Clubs meeting in schools and libraries all over the UK, organised by an amazing community of teachers, educators, and volunteers from all walks of life. We want to see that number grow.
You don’t need technical skills to mentor at a Code Club. The Raspberry Pi Foundation provides free, self-guided projects that help young people learn how to create with different technologies. We have over 200 Code Club Projects on our website, all of which are developed by expert educators, based on evidence of how young people learn, and rigorously tested, so we know that they are effective.
That includes a set of projects that support the safe exploration of AI technologies, helping young people understand how AI works, its possibilities and limitations.
We also provide training and support to help you set up and run your Code Club, all of which is available at no charge.
I can promise you that the hour you spend in a Code Club will be the highlight of your week. I always come away from Code Club inspired and optimistic about what young people can achieve if we give them a sense of agency over technology.
You don’t have to take my word for it: here’s Janine, a Computer Science teacher and long-time Code Club mentor from Stoke-on-Trent sharing her experience.
Janine Kirk is a Computer Science teacher at The King’s Church of England Academy in Stoke-on-Trent, UK, who has been running a Code Club for over ten years. Inspired by the campaign for a Code Club in every school and library in the UK, she has set up clubs in six other schools in her multi-academy trust.
Setting up a Code Club is really easy as a teacher, as you can just tag it onto the end of your school day or run it during lunch. The website is clear and easy to use — and once you have signed up, you have access to additional resources to promote your club. Code Club gives time and space to explore coding in a completely different way than in a classroom. For me, it’s about seeing which projects really inspire students: it gives an insight into how students like to code, their preferred coding languages, and the tasks they keep coming back to. Running a Code Club has also allowed me to build relationships with students outside of the classroom environment, and all of this spills into my lessons and improves my teaching practice.
For students, Code Club is a great space where they can collaborate and work on their chosen tasks. Students often comment on how they look forward to Code Club and how they have continued their projects at home. It also gives students much more variety in enrichment activities, and Code Club is often popular with students who are neurodivergent. It’s amazing to see the children grow in confidence and friendship as they find like-minded students to support each other.
My students really love the certificates they can earn. We have been inspired by the excellent activities that revamp the old ways of teaching programming and give them a really nice spin. In fact, I have used the resources in computer science lessons too, as they are often much more visual and fun for the students to create.
Since joining Code Club I have felt part of a community. I receive regular updates, and attending events such as the Clubs Conference really helps inspire creative ways to teach coding. As a computing teacher in a secondary school, you are often part of a very small team — but Code Club has allowed me to feel part of something bigger, and I know that should I need support, they are always there with friendly advice. It really is the best thing that I have done in my career.
Today we’re publishing a position paper setting out five arguments for why we think that kids still need to learn to code in the age of artificial intelligence.
Generated using ChatGPT.
Just like every wave of technological innovation that has come before, the advances in artificial intelligence (AI) are raising profound questions about the future of human work. History teaches us that technology has the potential to both automate and augment human effort, destroying some jobs and creating new ones. The only thing we know for sure is that it is impossible to predict the precise nature and pace of the changes that are coming.
One of the fastest-moving applications of generative AI technology is code generation. What started as the coding equivalent of autocomplete has quickly progressed to tools that can generate increasingly complex code from natural language prompts.
This has given birth to the notion of “vibe-coding” and led some commentators to predict the end of the software development industry as we know it. It shouldn’t be a surprise then that there is a vigorous debate about whether kids still need to learn to code.
In the position paper we put forward five arguments for why we think the answer is an unequivocal yes.
We need humans who are skilled programmers
First, we argue that even in a world where AI can generate code, we need skilled human programmers who can think critically, solve problems, and make ethical decisions. The large language models that underpin these tools are probabilistic systems designed to provide statistically acceptable outputs and, as any skilled software engineer will tell you, simply writing more code faster isn’t necessarily a good thing.
Learning to code is an essential part of learning to program
Learning to code is the most effective way we know for a young person to develop the mental models and fluency to become a skilled human programmer. The hard cognitive work of reading, modifying, writing, explaining, and testing code is precisely how young people develop a deep understanding of programming and computational thinking.
Learning to code will open up even more opportunities in the age of AI
While there’s no doubt that AI is going to reshape the labour market, the evidence from history suggests that it will increase the reach of programming and computational approaches across the economy and into new domains, creating demand for humans who are skilled programmers. We also argue that coding is no longer just for software engineers; it’s becoming a core skill that enables people to work effectively and think critically in a world shaped by intelligent machines. From healthcare to agriculture, we are already seeing demand for people who can combine programming with domain-specific skills and craft knowledge.
Coding is a literacy that helps young people have agency in a digital world
Alongside the arguments for coding as a route to opening up economic opportunities, we argue that coding and programming gives young people a way to express themselves, to learn, and to make sense of the world.
And perhaps most importantly, that learning to code is about power. Providing young people with a solid grounding in computational literacy, developed through coding, helps ensure that they have agency. Without it, they risk being manipulated by systems they don’t understand. As Rushkoff said: “Program, or be programmed”.
The kids who learn to code will shape the future
Finally, we argue that the power to create with technology is already concentrated in too small and homogenous a group of people. We need to open up the opportunity to learn to code to all young people because it will help us mobilise the full potential of human talent, will lead to more inclusive and effective digital solutions to the big global challenges we face, and will help ensure that everyone can share in the societal and economic benefits of technological progress.
The work we need to do
We end the paper with a call to action for all of us working in education. We need to challenge the false narrative that AI is removing the need for kids to learn to code, and redouble our efforts to ensure that all young people are equipped to take advantage of the opportunities in a world where AI is ubiquitous.
You can read the full paper here:
The cartoon image for this blog was created using ChatGPT-4o, which was prompted to produce a “whimsical cartoon that expresses some of the key ideas in the position paper”. It took several iterations.
At times, it can seem like everything is being automated with AI. However, there are some parts of learning to program that cannot (and probably should not) be automated, such as understanding errors in code and how to fix them. Manually typing code might not be necessary in the future, but it will still be crucial to understand the code that is being generated and how to improve and develop it.
As important as debugging might be for the future of programming, it’s still often the task most disliked by novice programmers. Even if program error messages can be explained in the future or tools like LitterBox can flag bugs in an engaging way, actually fixing the issues involves time, effort, and resilience — which can be hard to come by at the end of a computing lesson in the late afternoon with 30 students crammed into an IT room.
Debugging can be challenging in many different ways, and to support students better, it is important to understand why they struggle.
But what is it about debugging that young people find so hard, even when they’re given enough time to do it? And how can we make debugging a more motivating experience for young people? These are two of the questions that Laurie Gale, a PhD student at the Raspberry Pi Computing Education Research Centre, focused on in our July seminar.
Laurie has spent the past two years talking to teachers and students and developing tools (a visualiser of students’ programming behaviour and PRIMMDebug, a teaching process and tool for debugging) to understand why many secondary school students struggle with debugging. It has quickly become clear through his research that most issues are due to problematic debugging strategies and students’ negative experiences and attitudes.
When Laurie Gale started looking into debugging research for his PhD, he noticed that the majority of studies had been with college students, so he decided to change that and find out what would make debugging easier for novice programmers at secondary school.
When students first start learning how to program, they have to remember a vast amount of new information, such as different variables, concepts, and program designs. Putting this knowledge to use is often challenging because they’re already juggling everything they’ve previously learnt alongside the demands of the programming task at hand. When confusing or misunderstood error messages inevitably appear, debugging effectively can become extremely difficult.
Program error messages are usually not tailored to the age of the programmers and can be hard to understand and overwhelming for novices.
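To make this concrete, here is a hypothetical example of the kind of short program a young learner might write, together with the traceback Python produces; the file name and code are invented for illustration.

```python
# A short program of the kind a novice might write: ask for an age and add 1.
age = input("How old are you? ")   # input() always returns a string
next_year = age + 1                # fails here

# Running this produces a traceback like:
#   Traceback (most recent call last):
#     File "ages.py", line 3, in <module>
#       next_year = age + 1
#   TypeError: can only concatenate str (not "int") to str
#
# The message assumes the reader knows what 'str', 'int', and 'concatenate'
# mean, and never mentions that input() returned a string, which is exactly
# the kind of gap that can overwhelm a novice.
```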
Given this information overload, students often don’t develop efficient strategies for debugging. When Laurie analysed the debugging efforts of 12- to 14-year-old secondary school students, he noticed some interesting differences between students who were more and less successful at debugging. While successful students generally seemed to make less frequent and more intentional changes, less successful students tinkered frequently with their broken programs, making one- or two-character edits before running the program again. In addition, the less successful students often ran the program soon after beginning the debugging exercise without allowing enough time to actually read the code and understand what it was meant to do.
The issue with these behaviours was that they often resulted in students adding errors when changing the program, which then compounded and made debugging increasingly difficult with each run. 74% of students also resorted to spamming, pressing ‘run’ again and again without changing anything. This strategy resonated with many of our seminar attendees, who reported doing the same thing after becoming frustrated.
Educators need to be aware of the negative consequences of students’ exasperating and often overwhelming experiences with debugging, especially if students are less confident in their programming skills to begin with. Even though spending 15 minutes on an exercise shows a remarkable level of tenacity and resilience, students’ attitudes to programming — and computing as a whole — can quickly go downhill if their strategies for identifying errors prove ineffective. Debugging becomes a vicious circle: if a student has negative experiences, they are less confident when they have to debug again in the future, which can lead to another set of unsuccessful attempts, which can further damage their confidence, and so on. Avoiding this downward spiral is essential.
Laurie stresses the importance of understanding the cognitive challenges of debugging and using the right tools and techniques to empower students and support them in developing effective strategies.
To make debugging a less cognitively demanding activity, Laurie recommends using a range of tools and strategies in the classroom.
Some ideas of how to improve debugging skills that were mentioned by Laurie and our attendees included:
Using frame-based editing tools for novice programmers because such tools encourage students to focus on logical errors rather than accidental syntax errors, which can distract them from understanding the issues with the program. Teaching debugging should also go hand in hand with understanding programming syntax and using simple language. As one of our attendees put it, “You wouldn’t give novice readers a huge essay and ask them to find errors.”
Teaching systematic debugging processes. There are several different approaches to doing this. One of our participants suggested using the scientific method (forming a hypothesis about what is going wrong, devising an experiment that will provide information to show whether the hypothesis is right, and iterating this process) to methodically understand the program and its bugs; a rough sketch of what this can look like follows below.
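As an illustration of this hypothesis-driven approach in practice, here is a short sketch; the buggy function and the test values are invented for the example.

```python
# Buggy function (invented example): it is meant to return the average of a list.
def average(numbers):
    total = 0
    for n in numbers:
        total = n          # Bug: should accumulate with total += n
    return total / len(numbers)

# Step 1: observe the unexpected behaviour.
print(average([2, 4, 6]))  # prints 2.0, but we expected 4.0

# Step 2: form a hypothesis, e.g. "the loop is not adding the numbers up".
# Step 3: design an experiment that tests the hypothesis, e.g. trace the total.
def average_with_trace(numbers):
    total = 0
    for n in numbers:
        total = n
        print("after", n, "the total is", total)   # 2, 4, 6: total never grows
    return total / len(numbers)

average_with_trace([2, 4, 6])

# Step 4: the trace confirms the hypothesis, so fix the line (total += n),
# re-run the test, and repeat the cycle if the output is still wrong.
```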
Most importantly, debugging should not be a daunting or stressful experience. Everyone in the seminar agreed that creating a positive error culture is essential.
Teachers in Laurie’s study have stressed the importance of positive debugging experiences.
Some ideas you could explore in your classroom include:
Normalising errors: Stress how normal and important program errors are. Everyone encounters them — a professional software developer in our audience said that they spend about half of their time debugging.
Rewarding perseverance: Celebrate the effort, not just the outcome.
Modelling how to fix errors: Let your students write buggy programs and attempt to debug them in front of the class.
In a welcoming classroom where students are given support and encouragement, debugging can be a rewarding experience. What may at first appear to be a failure — even a spectacular one — can be embraced as a valuable opportunity for learning. As a teacher in Laurie’s study said, “If something should have gone right and went badly wrong but somebody found something interesting on the way… you celebrate it. Take the fear out of it.”
In our current seminar series, we are exploring how to teach programming with and without AI.
Join us at our next seminar on Tuesday, 12 November at 17:00–18:30 GMT to hear Nicholas Gardella (University of Virginia) discuss the effects of using tools like GitHub Copilot on the motivation, workload, emotion, and self-efficacy of novice programmers. To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.
As discussions of how artificial intelligence (AI) will impact teaching, learning, and assessment proliferate, I was thrilled to be able to add one of my own research projects to the mix. As a research scientist at the Raspberry Pi Foundation, I’ve been working on a pilot research study in collaboration with Jane Waite to explore the topic of program error messages (PEMs).
PEMs can be a significant barrier to learning for novice coders, as they are often confusing and difficult to understand. This can hinder troubleshooting and progress in coding, and lead to frustration.
Recently, various teams have been exploring how generative AI, specifically large language models (LLMs), can be used to help learners understand PEMs. My research in this area specifically explores secondary teachers’ views of LLM-generated explanations of PEMs as an aid for learning and teaching programming, and I presented some of my results in our ongoing seminar series.
Understanding program error messages is hard at the start
I started the seminar by setting the scene and describing the current background of research on novices’ difficulty in using PEMs to fix their code, and the efforts made to date to improve these. The three main points I made were that:
PEMs are often difficult to decipher, especially by novices, and there’s a whole research area dedicated to identifying ways to improve them.
Recent studies have employed LLMs as a way of enhancing PEMs. However, the evidence on what makes an ‘effective’ PEM for learning is limited, variable, and contradictory.
There is limited research in the context of K–12 programming education, and little research conducted in collaboration with teachers to better understand the practical and pedagogical implications of integrating LLMs into the classroom more generally.
My pilot study aims to fill this gap directly, by reporting K–12 teachers’ views of the potential use of LLM-generated explanations of PEMs in the classroom, and how their views fit into the wider theoretical paradigm of feedback literacy.
What did the teachers say?
To conduct the study, I interviewed eight expert secondary computing educators. The interviews were semi-structured activity-based interviews, where the educators got to experiment with a prototype version of the Foundation’s publicly available Code Editor. This version of the Code Editor was adapted to generate LLM explanations when the question mark next to the standard error message is clicked (see Figure 1 for an example of an LLM-generated explanation). The Code Editor prototype called the OpenAI GPT-3.5 API to generate explanations based on the following prompt: “You are a teacher talking to a 12-year-old child. Explain the error {error} in the following Python code: {code}”.
Figure 1: The Foundation’s Code Editor with LLM feedback prototype.
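For context, below is a minimal sketch of how a prompt like this might be sent to an OpenAI model from Python. This is not the Code Editor’s actual implementation; the model name, the function, and the surrounding code are assumptions for illustration only.

```python
# Minimal sketch (not the Code Editor's actual code): ask an OpenAI chat model
# to explain a Python error message for a 12-year-old learner.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment


def explain_error(error: str, code: str) -> str:
    prompt = (
        "You are a teacher talking to a 12-year-old child. "
        f"Explain the error {error} in the following Python code: {code}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; the study used GPT-3.5
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example use with a deliberately broken snippet:
print(explain_error(
    "NameError: name 'total' is not defined",
    "print(total + 1)",
))
```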
Fifteen themes were derived from the educators’ responses and these were split into five groups (Figure 2). Overall, the educators’ views of the LLM feedback were that, for the most part, a sensible explanation of the error messages was produced. However, all educators experienced at least one example of invalid content (LLM “hallucination”). Also, despite not being explicitly requested in the LLM prompt, a possible code solution was always included in the explanation.
Figure 2: Themes and groups derived from teachers’ responses.
Matching the themes to PEM guidelines
Next, I investigated how the teachers’ views correlated to the research conducted to date on enhanced PEMs. I used the guidelines proposed by Brett Becker and colleagues, which consolidate a lot of the research done in this area into ten design guidelines. The guidelines offer best practices on how to enhance PEMs based on cognitive science and educational theory empirical research. For example, they outline that enhanced PEMs should provide scaffolding for the user, increase readability, reduce cognitive load, use a positive tone, and provide context to the error.
Out of the 15 themes identified in my study, 10 correlated closely to the guidelines. These were, for the most part, the themes related to the content of the explanations, their presentation, and their validity (Figure 3). On the other hand, the themes concerning the teaching and learning process did not fit the guidelines as well.
Figure 3: Correlation between teachers’ responses and enhanced PEM design guidelines.
Does feedback literacy theory fit better?
However, when I looked at feedback literacy theory, I was able to correlate all fifteen themes — the theory fits.
Feedback literacy theory positions the feedback process (which includes explanations) as a social interaction, and accounts for the actors involved in the interaction — the student and the teacher — as well as the relationships between the student, the teacher, and the feedback. We can explain feedback literacy theory using three constructs: feedback types, student feedback literacy, and teacher feedback literacy (Figure 4).
Figure 4: Feedback literacy at the intersection between feedback types, student feedback literacy, and teacher feedback literacy.
From the feedback literacy perspective, feedback can be grouped into four types: telling, guiding, developing understanding, and opening up new perspectives. The feedback type depends on the role of the student and teacher when engaging with the feedback (Figure 5).
From the student perspective, the competencies and dispositions students need in order to use feedback effectively can be stated as: appreciating the feedback processes, making judgements, taking action, and managing affect. Finally, from a teacher perspective, teachers apply their feedback literacy skills across three dimensions: design, relational, and pragmatic.
In short, according to feedback literacy theory, effective feedback processes entail well-designed feedback with a clear pedagogical purpose, as well as the competencies students and teachers need in order to make sense of the feedback and use it effectively.
This theory therefore provided a promising lens for analysing the educators’ perspectives in my study. When the educators’ views were correlated to feedback literacy theory, I found that:
Educators prefer the LLM explanations to fulfil a guiding and developing understanding role, rather than telling. For example, educators prefer to either remove or delay the code solution from the explanation, and they like the explanations to include keywords based on concepts they are teaching in the classroom, so that the feedback guides students and develops their understanding rather than simply telling them the answer.
Related to students’ feedback literacy, educators talked about the ways in which the LLM explanations help or hinder students to make judgements and act on the feedback in the explanations. For example, they talked about how detailed, jargon-free explanations can help students make judgements about the feedback, but invalid explanations can hinder this process. Therefore, teachers talked about the need for ways to manage such invalid instances. However, for the most part, the educators didn’t talk about eradicating them altogether. They talked about ways of flagging them, using them as counter-examples, and having visibility of them to be able to address them with students.
Finally, from a teacher feedback literacy perspective, educators discussed the need for professional development to manage feedback processes inclusive of LLM feedback (design) and address issues resulting from reduced opportunities to interact with students (relational and pragmatic). For example, if using LLM explanations results in a reduction in the time teachers spend helping students debug syntax errors from a pragmatic time-saving perspective, then what does that mean for the relationship they have with their students?
Conclusion from the study
By correlating educators’ views to feedback literacy theory as well as enhanced PEM guidelines, we can take a broader perspective on how LLMs might shape not only the content of the explanations, but also the whole social interaction around giving and receiving feedback. Investigating ways of supporting students and teachers to practise their feedback literacy skills matters just as much as, if not more than, focusing on the content of PEM explanations.
This study was a first-step exploration of eight educators’ views on the potential impact of using LLM explanations of PEMs in the classroom. Exactly what the findings of this study mean for classroom practice remains to be investigated, and we also need to examine students’ views on the feedback and its impact on their journey of learning to program.
If you want to hear more, you can watch my seminar:
If any of these ideas resonated with you as an educator, student, or researcher, do reach out — we’d love to hear from you. You can contact me directly at veronica.cucuiat@raspberrypi.org or drop us a line in the comments below.
Join our next seminar
The focus of our ongoing seminar series is on teaching programming with or without AI. Check out the schedule of our upcoming seminars.
To take part in the next seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.
Generative artificial intelligence (AI) tools are becoming more easily accessible to learners and educators, and increasingly better at generating code solutions to programming tasks, code explanations, computing lesson plans, and other learning resources. This raises many questions for educators in terms of what and how we teach students about computing and AI, and AI’s impact on assessment, plagiarism, and learning objectives.
We were honoured to have Professor Brett Becker (University College Dublin) join us as part of our ‘Teaching programming (with or without AI)’ seminar series. He is uniquely placed to comment on teaching computing using AI tools, having been involved in many initiatives relevant to computing education at different levels, in Ireland and beyond.
Brett’s talk focused on what educators and education systems need to do to prepare all students — not just those studying Computing — so that they are equipped with sufficient knowledge about AI to make their way from primary school to secondary and beyond, whether it be university, technical qualifications, or work.
How do AI tools currently perform?
Brett began his talk by illustrating the increase in performance of large language models (LLMs) in solving first-year undergraduate programming exercises: he compared the findings from two recent studies he was involved in as part of an ITiCSE Working Group. In the first study — from 2021 — the results generated by GPT-3 were similar to those of students in the top quartile. By the second study in 2023, GPT-4’s performance matched that of a top student (Figure 1).
Figure 1: Student scores on Exam 1 and Exam 2, represented by circles. GPT-3’s 2021 score is represented by the blue ‘x’, and GPT-4’s 2023 score on the same questions is represented by the red ‘x’.
Brett also explained that the study found some models were capable of solving current undergraduate programming assessments almost error-free, and could solve the Irish Leaving Certificate and UK A level Computer Science exams.
What are the challenges and opportunities for education?
This level of performance raises many questions for computing educators about what is taught and how to assess students’ learning. To address this, Brett referred to his 2023 paper, which included findings from a literature review and a survey on students’ and instructors’ attitudes towards using LLMs in computing education. This analysis has helped him identify several opportunities as well as the ethical challenges education systems face regarding generative AI.
The opportunities include:
The generation of unique content, lesson plans, programming tasks, or feedback to help educators with workload and productivity
More accessible content and tools generated by AI apps to make Computing more broadly accessible to more students
More engaging and meaningful student learning experiences, including using generative AI to enable creativity and using conversational agents to augment students’ learning
The impact on assessment practices, both in terms of automating the marking of current assessments as well as reconsidering what is assessed and how
Some of the challenges include:
The lack of reliability and accuracy of outputs from generative AI tools
The need to educate everyone about AI to create a baseline level of understanding
The legal and ethical implications of using AI in computing education and beyond
How to deal with questionable or even intentionally harmful uses of AI and mitigating the consequences of such uses
Programming as a basic skill for all subjects
Next, Brett talked about concrete actions that he thinks we need to take in response to these opportunities and challenges.
He also discussed how programming is becoming relevant to all subjects, not only Computing, much as reading and mathematics transcend their own subject boundaries, and the need he sees to adapt subjects and curricula accordingly.
As an example of how rapidly curricula may need to change with increasing AI use by students, Brett looked at the Irish Computer Science specification for “senior cycle” (the final two years of second-level education, ages 16–18). This curriculum was developed in 2018 and remains a strong computing curriculum in Brett’s opinion. However, he pointed out that it only contains a single learning outcome on AI.
To help educators bridge this gap, Brett and Keith Quille included two chapters dedicated to AI, machine learning, and ethics and computing in the book they wrote to accompany the curriculum. Brett believes these types of additional resources may be instrumental for teaching and learning about AI, as resources are more adaptable and easier to update than curricula.
Generative AI in computing education
Taking the opportunity to use generative AI to reimagine new types of programming problems, Brett and colleagues have developed Promptly, a tool that allows students to practise prompting AI code generators. This tool provides a combined approach to learning about generative AI while learning programming with an AI tool.
Promptly is intended to help students learn how to write effective prompts. It encourages students to specify and decompose the programming problem they want to solve, read the code generated, compare it with test cases to discern why it is failing (if it is), and then update their prompt accordingly (Figure 2).
Figure 2: Example of a student’s use of Promptly.
Early undergraduate student feedback points to Promptly being a useful way to teach programming concepts and encourage metacognitive programming skills. The tool is further described in a paper, and whilst the initial evaluation was aimed at undergraduate students, Brett positioned it as a secondary school–level tool as well.
Brett hopes that by using generative AI tools like this, it will be possible to better equip a larger and more diverse pool of students to engage with computing.
Re-examining the concept of programming
Brett concluded his seminar by broadening the relevance of programming to all learners, while challenging us to expand our perspectives of what programming is. If we define programming as a way of prompting a machine to get an output, LLMs allow all of us to do so without the need for learning the syntax of traditional programming languages. Taking that view, Brett left us with a question to consider: “How do we prepare for this from an educational perspective?”
The focus of our ongoing seminar series is on teaching programming with or without AI.
For our next seminar on Tuesday 11 June at 17:00 to 18:30 GMT, we’re joined by Veronica Cucuiat (Raspberry Pi Foundation), who will talk about whether LLMs could be employed to help understand programming error messages, which can present a significant obstacle to anyone new to coding, especially young people.
To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.
I’m excited to announce that we’re developing a new set of Code Editor features to help school teachers run text-based coding lessons with their students.
New Code Editor features for teaching
Last year we released our free Code Editor and made it available as an open source project. Right now we’re developing a new set of features to help schools use the Editor to run text-based coding lessons online and in-person.
The new features will enable educators to create coding activities in the Code Editor, share them with their students, and leave feedback directly on each student’s work. In a simple and easy-to-use interface, educators will be able to give students access, group them into classes within a school account, and quickly help with resetting forgotten passwords.
Example Code Editor feedback screen from an early prototype
We’re adding these teaching features to the Code Editor because one of the key problems we’ve seen educators face over the last few months has been the lack of an ideal tool to teach text-based coding in the classroom. There are some options available, but they can be cost-prohibitive for schools and educators. Our mission is to support young people to realise their full potential through the power of computing, and we believe that to tackle educational disadvantage, we need to offer high-quality tools and make them as accessible as possible. This is why we’ll offer the Code Editor and all its features to educators and students for free, forever.
Alongside the new classroom management features, we’re also working on improved Python library support for the Code Editor, so that you and your students can get more creative and use the Editor for more advanced topics. We continue to support HTML, CSS, and JavaScript in the Editor too, so you can set website development tasks in the classroom.
Educators have already been incredibly generous in their time and feedback to help us design these new Code Editor features, and they’ve told us they’re excited to see the upcoming developments. Pete Dring, Head of Computing at Fulford School, participated in our user research and said on LinkedIn: “The class management and feedback features they’re working on at the moment look really promising.” Lee Willis, Head of ICT and Computing at Newcastle High School for Girls, also commented on the Code Editor: “We have used it and love it, the fact that it is both for HTML/CSS and then Python is great as the students have a one-stop shop for IDEs.”
Our commitment to you
Free forever: We will always provide the Code Editor and all of its features to educators and students for free.
A safe environment: Accounts for education are designed to be safe for students aged 9 and up, with safeguarding front and centre.
Privacy first: Student data collection is minimised and all collected data is handled with the utmost care, in compliance with GDPR and the ICO Children’s Code.
Best-practice pedagogy: We’ll always build with education and learning in mind, backed by our leading computing education research.
Community-led: We value and seek out feedback from the computing education community so that we can continue working to make the Code Editor even better for teachers and students.
Get started
We’re working to have the Code Editor’s new teaching features ready later this year. We’ll launch the setup journey sooner, so that you can pre-register for your school account as we continue to work on these features.
Before then, you can complete this short form to keep up to date with progress on these new features or to get involved in user testing.
The Code Editor is already being used by thousands of people each month. If you’d like to try it, you can get started writing code right in your browser today, with zero setup.
Sri Yash Tadimalla from the University of North Carolina and Dr Mary Lou Maher, Director of Research Community Initiatives at the Computing Research Association, are exploring how student identities affect their interaction with AI tools and their perceptions of the use of AI tools. They presented findings from two of their research projects in our March seminar.
How students interact with AI tools
A common approach in research is to begin with a preliminary study involving a small group of participants in order to test a hypothesis, ways of collecting data from participants, and an intervention. Yash explained that this was the approach they took with a group of 25 undergraduate students on an introductory Java programming course. The research observed the students as they performed a set of programming tasks using an AI chatbot tool (ChatGPT) or an AI code generator tool (GitHub Copilot). From these observations, the researchers grouped the students’ attitudes towards the AI tools into five profiles:
Highly confident students rely heavily on AI tools and are confident about the quality of the code generated by the tool without verifying it
Cautious students are careful in their use of AI tools and verify the accuracy of the code produced
Curious students are interested in exploring the capabilities of the AI tool and are likely to experiment with different prompts
Frustrated students struggle with using the AI tool to complete the task and are likely to give up
Innovative students use the AI tool in creative ways, for example to generate code for other programming tasks
Whether these attitudes are common across other, larger groups of students requires more research. However, these preliminary groupings may be useful for educators who want to understand their students and how to support them with targeted instructional techniques. For example, highly confident students may need encouragement to check the accuracy of AI-generated code, while frustrated students may need assistance to use the AI tools to complete programming tasks.
An intersectional approach to investigating student attitudes
Yash and Mary Lou explained that their next research study took an intersectional approach to student identity. Intersectionality is a way of exploring identity using more than one defining characteristic, such as ethnicity and gender, or education and class. Intersectional approaches acknowledge that a person’s experiences are shaped by the combination of their identity characteristics, which can sometimes confer multiple privileges or lead to multiple disadvantages.
In the second research study, 50 undergraduate students participated in programming tasks and their approaches and attitudes were observed. The gathered data was analysed using intersectional groupings, such as:
Students who were female and the first generation in their family to attend university
Students who were female and from an underrepresented ethnic group
Although the researchers observed differences amongst the groups of students, there was not enough data to determine whether these differences were statistically significant.
Who thinks using AI tools should be considered cheating?
Participating students were also asked about their views on using AI tools, through questions such as “Did having AI help you in the process of programming?” and “Does your experience with using this AI tool motivate you to continue learning more about programming?”
The same intersectional approach was taken towards analysing students’ answers. One surprising finding stood out: when asked whether using AI tools to help with programming tasks should be considered cheating, students from more privileged backgrounds agreed that this was true, whilst students with less privilege disagreed and said it was not cheating.
This finding is only with a very small group of students at a single university, but Yash and Mary Lou called for other researchers to replicate this study with other groups of students to investigate further.
Acknowledging differences to prevent deepening divides
As researchers and educators, we often hear that we should educate students about the importance of making AI ethical, fair, and accessible to everyone. However, simply hearing this message isn’t the same as truly believing it. If students’ identities influence how they view the use of AI tools, it could affect how they engage with these tools for learning. Without recognising these differences, we risk continuing to create wider and deeper digital divides.
For our next seminar on Tuesday 16 April at 17:00 to 18:30 GMT, we’re joined by Brett A. Becker (University College Dublin), who will talk about how generative AI can be used effectively in secondary school programming education and how it can be leveraged so that students can be best prepared for continuing their education or beginning their careers. To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.
AI models for general-purpose programming, such as OpenAI Codex, which powers the AI pair programming tool GitHub Copilot, have the potential to significantly impact how we teach and learn programming.
The basis of these tools is a ‘natural language to code’ approach, also called natural language programming. This allows users to generate code using a simple text-based prompt, such as “Write a simple Python script for a number guessing game”. Programming-specific AI models are trained on vast quantities of text data, including GitHub repositories, to enable users to quickly solve coding problems using natural language.
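To make this concrete, a prompt like the one above might produce something along the lines of the following; this is an illustrative sketch rather than the output of any particular model.

```python
# Illustrative example of the kind of program a "number guessing game" prompt
# might produce (not actual model output).
import random

secret = random.randint(1, 100)
guess = None

while guess != secret:
    guess = int(input("Guess a number between 1 and 100: "))
    if guess < secret:
        print("Too low!")
    elif guess > secret:
        print("Too high!")

print("Correct! The number was", secret)
```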
As a computing educator, you might ask what the potential is for using these tools in your classroom. In our latest research seminar, Majeed Kazemitabaar (University of Toronto) shared his work in developing AI-assisted coding tools to support students during Python programming tasks.
Evaluating the benefits of natural language programming
Majeed argued that natural language programming can enable students to focus on the problem-solving aspects of computing, and support them in fixing and debugging their code. However, he cautioned that students might become overdependent on the use of ‘AI assistants’ and that they might not understand what code is being outputted. Nonetheless, Majeed and colleagues were interested in exploring the impact of these code generators on students who are starting to learn programming.
Using AI code generators to support novice programmers
In one study, Majeed’s team investigated whether an AI code generator affected students’ task and learning performance. They split 69 students (aged 10–17) into two groups: one group used a code generator within Coding Steps, an environment that captured log data, and the other group did not use the code generator.
Learners who used the code generator completed significantly more authoring tasks — where students manually write all of the code — and spent less time completing them, as well as generating significantly more correct solutions. In multiple choice questions and modifying tasks — where students were asked to modify a working program — students performed similarly whether they had access to the code generator or not.
A test was administered a week later to check the groups’ performance, and both groups did similarly well. However, the ‘code generator’ group made significantly more errors in authoring tasks where no starter code was given.
Majeed’s team concluded that using the code generator significantly increased the completion rate of tasks and student performance (i.e. correctness) when authoring code, and that using code generators did not lead to decreased performance when manually modifying code.
Finally, students in the code generator group reported feeling less stressed and more eager to continue programming at the end of the study.
Student perceptions when (not) using AI code generators
Understanding how novices use AI code generators
In a related study, Majeed and his colleagues investigated how novice programmers used the code generator and whether this usage impacted their learning. Working with data from 33 learners (aged 11–17), they analysed 45 tasks completed by students to understand:
The context in which the code generator was used
What learners asked for
How prompts were written
The nature of the outputted code
How learners used the outputted code
Their analysis found that students used the code generator for the majority of task attempts (74% of cases) with far fewer tasks attempted without the code generator (26%). Of the task attempts made using the code generator, 61% involved a single prompt while only 8% involved decomposition of the task into multiple prompts for the code generator to solve subgoals; 25% used a hybrid approach — that is, some subgoal solutions being AI-generated and others manually written.
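As a rough illustration of the difference between these approaches, the sketch below contrasts a single-prompt attempt with a hybrid attempt that decomposes the task into subgoals; the task, the prompts, and the code are invented for the example.

```python
# Single-prompt approach (everything delegated to the code generator):
#   Prompt: "Write a Python program that reads numbers from the user until they
#            type 'done', then prints the largest even number entered."
#   The learner pastes the generated program and runs it without writing code.

# Hybrid approach: decompose the task into subgoals, generate some, write others.

# Subgoal 1 (imagined as generated from the prompt
# "read numbers from the user until they type done"):
def read_numbers():
    numbers = []
    while True:
        text = input("Enter a number (or 'done'): ")
        if text == "done":
            return numbers
        numbers.append(int(text))


# Subgoal 2 (written manually by the learner):
def largest_even(numbers):
    evens = [n for n in numbers if n % 2 == 0]
    return max(evens) if evens else None


# Subgoal 3 (written manually, then debugged by the learner):
if __name__ == "__main__":
    print("Largest even number:", largest_even(read_numbers()))
```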
In a comparison of students against their post-test evaluation scores, there were positive though not statistically significant trends for students who used a hybrid approach (see the image below). Conversely, negative though not statistically significant trends were found for students who used a single prompt approach.
A positive correlation between hybrid programming and post-test scores
Though not statistically significant, these results suggest that the students who actively engaged with tasks — i.e. generating some subgoal solutions, manually writing others, and debugging their own written code — performed better in coding tasks.
Majeed concluded that while the data showed evidence of self-regulation, such as students writing code manually or adding to AI-generated code, students frequently used the output from single prompts in their solutions, indicating an over-reliance on the output of AI code generators.
He suggested that teachers should support novice programmers to write better quality prompts to produce better code.
If you want to learn more, you can watch Majeed’s seminar:
The focus of our ongoing seminar series is on teaching programming with or without AI.
For our next seminar on Tuesday 16 April at 17:00–18:30 GMT, we’re joined by Brett Becker (University College Dublin), who will discuss how generative AI may be effectively utilised in secondary school programming education and how it can be leveraged so that students can be best prepared for whatever lies ahead. To take part in the seminar, click the button below to sign up, and we will send you information about joining. We hope to see you there.
The use of generative AI tools (e.g. ChatGPT) in education is now common among young people (see data from the UK’s Ofcom regulator). As a computing educator or researcher, you might wonder what impact generative AI tools will have on how young people learn programming. In our latest research seminar, Barbara Ericson and Xinying Hou (University of Michigan) shared insights into this topic. They presented recent studies with university student participants on using generative AI tools based on large language models (LLMs) during programming tasks.
Using Parson’s Problems to scaffold student code-writing tasks
Barbara and Xinying started their seminar with an overview of their earlier research into using Parson’s Problems to scaffold university students as they learn to program. Parson’s Problems (PPs) are a type of code completion problem where learners are given all the correct code to solve the coding task, but the individual lines are broken up into blocks and shown in the wrong order (Parsons and Haden, 2006). Distractor blocks, which are incorrect versions of some or all of the lines of code (i.e. versions with syntax or semantic errors), can also be included. This means to solve a PP, learners need to select the correct blocks as well as place them in the correct order.
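As a concrete illustration, a small Parson’s Problem might present the shuffled blocks below, including one distractor, and ask the learner to assemble a program that prints the numbers 1 to 5; the example is invented here for illustration.

```python
# Blocks as presented to the learner (shuffled, with one distractor):
#
#   Block A:      print(i)
#   Block B:  for i in range(1, 6):
#   Block C:  for i in range(5):        <- distractor: wrong range
#
# Correct solution: choose block B, then block A (indented), and discard C.

for i in range(1, 6):
    print(i)          # prints 1, 2, 3, 4, 5
```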
In one study, the research team asked whether PPs could support university students who are struggling to complete write-code tasks. In the tasks, the 11 study participants had the option to generate a PP when they encountered a challenge trying to write code from scratch, in order to help them arrive at the complete code solution. The PPs acted as scaffolding for participants who got stuck trying to write code. Solutions used in the generated PPs were derived from past student solutions collected during previous university courses. The study had promising results: participants said the PPs were helpful in completing the write-code problems, and 6 participants stated that the PPs lowered the difficulty of the problem and speeded up the problem-solving process, reducing their debugging time. Additionally, participants said that the PPs prompted them to think more deeply.
This study provided further evidence that PPs can be useful in supporting students and keeping them engaged when writing code. However, some participants still had difficulty arriving at the correct code solution, even when prompted with a PP as support. The research team thinks that a possible reason for this could be that only one solution was given to the PP, the same one for all participants. Therefore, participants with a different approach in mind would likely have experienced a higher cognitive demand and would not have found that particular PP useful.
Supporting students with varying self-efficacy using PPs
To understand the impact of using PPs with different learners, the team then undertook a follow-up study asking whether PPs could specifically support students with lower computer science self-efficacy. The results show that study participants with low self-efficacy who were scaffolded with PPs showed significantly higher practice performance and higher problem-solving efficiency compared to participants who had no scaffolding. These findings provide evidence that PPs can create a more supportive environment, particularly for students who have lower self-efficacy or difficulty solving code-writing problems. Another finding was that participants with low self-efficacy were more likely to completely solve the PPs, whereas participants with higher self-efficacy only scanned or partly solved the PPs, indicating that scaffolding in the form of PPs may be redundant for some students.
These two studies highlighted instances where PPs are more or less relevant depending on a student’s level of expertise or self-efficacy. In addition, the best PP to solve may differ from one student to another, and so having the same PP for all students to solve may be a limitation. This prompted the team to conduct their most recent study to ask how large language models (LLMs) can be leveraged to support students in code-writing practice without hindering their learning.
Generating personalised PPs using AI tools
This recent third study focused on the development of CodeTailor, a tool that uses LLMs to generate and evaluate code solutions before generating personalised PPs to scaffold students writing code. Unlike other AI-assisted coding tools that merely output a correct code solution, CodeTailor encourages students to engage actively with the problem: they must construct the solution themselves using the personalised PPs. The researchers were interested in whether CodeTailor could better support students to actively engage in code-writing.
In a study with 18 undergraduate students, they found that CodeTailor could generate correct solutions based on students’ incorrect code. The CodeTailor-generated solutions were more closely aligned with students’ incorrect code than common previous student solutions were. The researchers also found that most participants (88%) preferred CodeTailor to other AI-assisted coding tools when engaging with code-writing tasks. As the correct solution in CodeTailor is generated based on individual students’ existing strategy, this boosted students’ confidence in their current ideas and progress during their practice. However, some students still reported challenges around solution comprehension, potentially due to CodeTailor not providing sufficient explanation for the details in the individual code blocks of the solution to the PP. The researchers argue that text explanations could help students fully understand a program’s components, objectives, and structure.
In future studies, the team is keen to evaluate a design of CodeTailor that generates multiple levels of natural language explanations, i.e. provides personalised explanations accompanying the PPs. They also aim to investigate the use of LLM-based AI tools to generate a self-reflection question structure that students can fill in to extend their reasoning about the solution to the PP.
Barbara and Xinying’s seminar is available to watch here:
Find examples of PPs embedded in free interactive ebooks that Barbara and her team have developed over the years, including CSAwesome and Python for Everybody. You can also read more about the CodeTailor platform in Barbara and Xinying’s paper.
Join our next seminar
The focus of our ongoing seminar series is on teaching programming with or without AI.
For our next seminar on Tuesday 12 March at 17:00–18:30 GMT, we’re joined by Yash Tadimalla and Prof. Mary Lou Maher (University of North Carolina at Charlotte). The two of them will share further insights into the impact of AI tools on the student experience in programming courses. To take part in the seminar, click the button below to sign up, and we will send you information about joining. We hope to see you there.
How do you best teach programming in school? It’s one of the core questions for primary and secondary computing teachers. That’s why we’re making it the focus of our free online seminars in 2024. You’re invited to attend and hear about the newest research about the teaching and learning of programming, with or without AI tools.
Building on the success and the friendly, accessible session format of our previous seminars, this coming year we will delve into the latest trends and innovative approaches to programming education in school.
Our online seminars are for everyone interested in computing education
Our monthly online seminars are not only for computing educators but also for everyone else who is passionate about teaching young people to program computers. The seminar participants are a diverse community of teachers, technology enthusiasts, industry professionals, coding club volunteers, and researchers.
With the seminars we aim to bridge the gap between the newest research and practical teaching. Whether you are an educator in a traditional classroom setting or a mentor guiding learners in a CoderDojo or Code Club, you will gain insights from leading researchers about how school-age learners engage with programming.
What to expect from the seminars
Each online seminar begins with an expert presenter delivering their latest research findings in an accessible way. We then move into small groups to encourage discussion and idea exchange. Finally, we come back together for a Q&A session with the presenter.
Here’s what attendees had to say about our previous seminars:
“As a first-time attendee of your seminars, I was impressed by the welcoming atmosphere.”
“[…] several seminars (including this one) provided valuable insights into different approaches to teaching computing and technology.”
“I plan to use what I have learned in the creation of curriculum […] and will pass on what I learned to my team.”
“I enjoyed the fact that there were people from different countries and we had a chance to see what happens elsewhere and how that may be similar and different to what we do here.”
January seminar: AI-generated Parson’s Problems
Computing teachers know that, for some students, learning about the syntax of programming languages is very challenging. Working through Parson’s Problem activities can be a way for students to learn to make sense of the order of lines of code and how syntax is organised. But for teachers it can be hard to precisely diagnose their students’ misunderstandings, which in turn makes it hard to create activities that address these misunderstandings.
At our first 2024 seminar on 9 January, Dr Barbara Ericson and Xinying Hou (University of Michigan) will present a promising new approach to helping teachers solve this difficulty. In one of their studies, they combined Parson’s Problems and generative AI to create targeted activities for students based on the errors students had made in previous tasks. Thus they were able to provide personalised activities that directly addressed gaps in the students’ learning.
Sign up now to join our seminars
All our seminars start at 17:00 UK time (18:00 CET / 12:00 noon ET / 9:00 PT) and are held online on Zoom. To ensure you don’t miss out, sign up now to receive calendar invitations and access links for each seminar on the day.