Tag: research seminar

  • Bringing data science to life for K–12 students with the ‘API Can Code’ curriculum

    Reading Time: 7 minutes

    As data and data-driven technologies become a bigger part of everyday life, it’s more important than ever to make sure that young people are given the chance to learn data science concepts and skills.

    In our April research seminar, David Weintrop, Rotem Israel-Fishelson, and Peter Moon from the University of Maryland introduced API Can Code, a data science curriculum designed with high school students for high school students. Their talk explored how their innovative work uses real-world data and students’ own experiences and interests to create meaningful, authentic learning experiences in data science.

    Quick note for educators: Are you interested in joining our free, exploratory data science education workshop for teachers on 10 July 2025 in Cambridge, UK? Then find out the details here.

    David started by explaining the motivation behind the API Can Code project. The team’s goal was not to turn students into future data scientists, but to offer students the data literacy they need to explore and critically engage with a data-driven world. 

    The work was also guided by a shared view among leading teachers’ organisations that data science should be taught across all subjects in the K–12 curriculum. It also draws on strong research showing that when educational experiences connect with students’ own lives and interests, they lead to deeper engagement and better learning outcomes.

    Reviewing the landscape

    To prepare for the design of the curriculum, David, Rotem, and Peter wanted to understand what data science education options already exist for K–12 students. Rotem described how they compared four major K–12 data science curricula and examined different aspects, such as the topics they covered and the datasets they used. Their findings showed that many of the datasets were quite small, and that they did not always cover topics that students were interested in.

    The team also looked at 30 data science tools used across different K–12 platforms and analysed what each could do. They found that tools varied in how effective they were and that many lacked accessibility features to support students with diverse learning needs. 

    This analysis helped to refine the team’s objective: to create a data science curriculum that students find interesting and that is informed by their values and voices.

    Participatory design

    To work towards this goal, the team used a methodology called participatory design. This is an approach that actively involves the end users — in this case, high school students — in the design process. During several in-person sessions with 28 students aged 15 to 18 years old, the researchers facilitated low-tech, hands-on activities exploring the students’ identities and interests and how they think about data.

    One activity, Empathy Map, involved students working together to create a persona representing a student in their school. They were asked to describe the persona’s daily life, interests, and concerns about technology and data.

    The students’ involvement in the design process gave the team a better understanding of young people’s views and interests, which helped shape the design of the API Can Code curriculum.

    API Can Code: three units, three key tools

    Peter provided an overview of the API Can Code curriculum. It follows a three-unit flow covering different concepts and tools in each unit:

    1. Unit 1 introduces students to different types of data and data science terminology. The unit explores the role of data in the students’ daily lives, how use and misuse of data can affect them, different ways of collecting and presenting data, and how to evaluate databases for aspects such as size, recency, and trustworthiness. It also introduces them to RapidAPI, a hub that connects to a wide range of APIs from different providers, allowing students to access real-world data such as Zillow housing prices or Spotify music data.
    2. Unit 2 covers the computing skills used in data science, including the use of programming tools to apply efficient data science techniques. Students learn to use EduBlocks, a block-based programming environment in which they can pull JSON data from RapidAPI into their programs, then process and filter it without needing extensive text-based programming skills (a sketch of this workflow in Python follows this list). The students also compare this approach with manual data processing, which they discover is very slow.
    3. Unit 3 focuses on data analysis, visualisation, and interpretation. Students use CODAP, a web-based interactive data science tool, to calculate summary statistics, create graphs, and perform analyses. CODAP is a user-friendly but powerful platform, making it perfect for students to analyse and visualise their data sets. Students also practise interpreting pre-made graphs and the graphs and statistics that they are creating.
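
    To make the Unit 2 workflow concrete, here is a minimal sketch in Python (rather than EduBlocks’ block form) of fetching a JSON dataset from an API and filtering it programmatically. The URL and field names are hypothetical placeholders, not the curriculum’s actual RapidAPI endpoints.

        # Fetch a JSON dataset from an API and filter it in a few lines,
        # instead of processing the rows by hand.
        # The URL and field names below are invented placeholders.
        import requests

        response = requests.get("https://example-api.test/housing/listings")
        listings = response.json()  # a list of dicts, one per house

        # Keep only listings in one neighbourhood priced under $400,000
        affordable = [
            home for home in listings
            if home["neighbourhood"] == "College Park" and home["price"] < 400_000
        ]

        print(f"Found {len(affordable)} affordable listings")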

    Peter described an example activity carried out by the students, showing how these three units flow together and build both technical skills and an understanding of the real-world uses of data science. Students were tasked with analysing a dataset from Zillow, a property website, to explore the question “How much does a house in my neighbourhood cost?” The images below show the process the students followed, which uses the data science skills and tools from all three units of the curriculum.

    Interest-driven learning in action

    A central tenet of API Can Code is that students should explore data that matters to them. A diverse range of student interests was identified during the design work, and the curriculum uses these areas of interest, such as music, movies, sports, and animals, throughout the lessons.

    The curriculum also features an open-ended final project, where students can choose a research question that is important to them and their lives, and answer it using data science skills.

    The team shared two examples of memorable final projects. In one, a student set out to answer the question “Is Jhené Aiko a star?” The student found a publicly available dataset through an API provided by Deezer, a music streaming platform. She wrote a program that retrieved data on the artist’s longevity and collaborations, analysed the data, and concluded that Aiko is indeed a star. What stood out about this project wasn’t just the fact that the student independently defined stardom and answered her research question using real data, but that this was a truly personal, interest-driven project. David noted that the researchers could never have come up with this activity, since they had never previously heard of Jhené Aiko!

    Jhené Aiko, an R&B singer-songwriter (photo by Charito Yap, licensed under CC BY-ND 2.0)

    Another student’s project analysed data about housing in Washington DC to answer the question “Which ward in DC has the most affordable houses?” Rotem explained that this student was motivated by her family thinking about moving away from the city. She wanted to use her project to persuade her parents to stay by identifying the most affordable ward in DC that they could move to. She was excited by the outcome of her project, and she presented her findings to other students and her parents.

    These projects underscore the power of personally important data science projects driven by students’ interests. When students care about the questions they are exploring, they’re more invested in the process and more likely to keep using the skills and concepts they learn.

    Resources

    API Can Code is available online and completely free to use. Teachers can access lesson plans, tutorial videos, assessment rubrics, and more from the curriculum’s website https://apicancode.umd.edu/. The site also provides resources to support students, including example programs and glossaries.

    Join our next seminar

    In our current seminar series, we’re exploring teaching about AI and data science. Join us at our next seminar on Tuesday, 17 June from 17:00 to 18:30 BST to hear Netta Iivari (University of Oulu) introduce transformative agency and its importance for children’s computing education in the age of AI.

    To sign up and take part in our research seminars, click below:

    You can also view the schedule of our upcoming seminars, and catch up on past seminars on our previous seminars and recordings page.

  • Research insights to help learners develop data awareness

    Reading Time: 7 minutes

    An increasing number of frameworks describe the possible contents of a K–12 artificial intelligence (AI) curriculum and suggest possible learning activities (for example, see the UNESCO competency framework for students, 2024). In our March seminar, Lukas Höper and Carsten Schulte from the Department of Computing Education at Paderborn University in Germany shared with us a unit of work they’ve developed that could inform such a curriculum. At its core, the unit enhances young people’s awareness of how their personal data is used in the data-driven technologies that form part of their everyday lives.

    Lukas Höper and Carsten Schulte are part of a larger team who are investigating how to teach school students about data science and Big Data.

    Carsten explained that Germany’s informatics (computing) curriculum includes a competency area known as Informatics, People and Society (IPS), which explores the interrelationships between technology, individuals, and society, and how computation influences and is influenced by social, ethical, and cultural factors. However, research from 2007 suggested that teachers face several problems in delivering this topic, including:

    • Lack of subject knowledge 
    • Lack of teaching material
    • Lack of integration with other topics in informatics lessons
    • A perception that IPS is the responsibility of other subjects

    Some of the findings of that 2007 research were mirrored in a more recent local study in 2025, which found that although subject knowledge has improved in the intervening period, the problems of a lack of teaching material and integration with other computer science (CS) topics persist, with IPS increasingly perceived as the responsibility of the informatics subject area alone. Despite this, within the informatics curriculum, IPS is often the first topic to be dropped when educators face time constraints, and concerns about what and how to assess the topic remain.

    In this context, and as part of a larger, longitudinal project to promote data science teaching in schools called ProDaBi, Carsten and Lukas have been developing, implementing, and evaluating concepts and materials on the topics of data science and AI. Lukas explained the importance of students developing data awareness in the context of the digital systems they use in their everyday lives, such as search engines, streaming services, social media apps, digital assistants, and chatbots, and emphasised the difference between being a user of these systems and a data-aware user. Using the example of image recognition and ‘I am not a robot’ Captcha services, Lukas explained how young people need to develop a data-aware perspective of the secondary purposes of the data collected by these (and other) systems, as well as the more obvious, primary purposes. 

    Lukas went on to illustrate the human interaction system model, which presents a continuum of possible different roles, from the student as the user of digital artefacts to the student as the designer of digital artefacts. 

     Figure 1. Different roles in interactions with data-driven technologies

    To become data-aware users of digital artefacts, students need to be able to understand and reflect on those digital artefacts. Only then can they proceed to become responsible designers of digital artefacts. However, when surveyed, some students were only moderately interested in engaging with the inner workings of the digital technologies they use in their everyday lives. Many students prefer to use the systems and are less interested in how they process data. 

    The explanatory model approach in computing education

    Lukas explained how students often become more interested in data-driven technologies when learning about them with explanatory models. Such models can foster data awareness, giving students a different perspective on data-driven technologies and helping them become more empowered users.

    To illustrate, Lukas gave the example of an explanatory model about the role of data in digital systems. Such a model can be used to introduce the idea that data is explicitly and implicitly collected in the interaction between the user and the technology, and used for primary and secondary purposes. 

    Figure 2. The four parts of the explanatory model

    Lukas then introduced two teaching units that were developed for use with middle school children to evaluate the success of the explanatory model approach in computing education. The first unit explores location data collected by mobile phone networks and the second features recommendation systems used by movie streaming services such as Netflix and Amazon Prime.

    Taking the second unit as their focus, Lukas and Carsten outlined the four parts of the explanatory model approach: 

    Part 1

    The teaching unit begins by introducing recommendation systems and asking students to think about what a streaming service is, how a personalised start page is constructed, and how personal recommendations might be generated. Students then complete an unplugged activity to simulate the process of making movie recommendations for a peer:

    Task 1: Students write down movie recommendations for another student. 

    Task 2: They then ask each other questions (they collect data). 

    Task 3: They write down revised movie recommendations.

    Task 4: They share and evaluate their recommendations.  

    Task 5: Together they reflect on which collected data was helpful in this exercise and what kind of data a recommendation system might collect. This reflection introduces the concepts of explicit and implicit data collection. 

    Part 2

    In part 2, students are given a prepared Jupyter Notebook, which allows them to explore a simulation of a recommendation system. Students rate movies and receive personal recommendations. They reconstruct a data model about users, using the idea of collaborative filtering with the k-nearest neighbours algorithm (see Figure 3). 

    Figure 3. Data model of movie ratings
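
    As a rough illustration of the idea behind the notebook, the sketch below implements k-nearest-neighbours collaborative filtering on an invented toy ratings table; it is not the unit’s actual code or dataset.

        import numpy as np

        # Rows = users, columns = movies; 0 means "not rated yet".
        ratings = np.array([
            [5, 4, 0, 1],   # user 0
            [4, 5, 1, 0],   # user 1
            [1, 0, 5, 4],   # user 2
            [0, 1, 4, 5],   # user 3
        ], dtype=float)

        def predict(user, movie, k=2):
            """Predict a rating from the k most similar users who rated the movie."""
            others = [u for u in range(len(ratings))
                      if u != user and ratings[u, movie] > 0]
            # Crude similarity: distance between whole rating rows
            # (smaller distance = more similar taste).
            others.sort(key=lambda u: np.linalg.norm(ratings[u] - ratings[user]))
            return np.mean([ratings[u, movie] for u in others[:k]])

        print(predict(user=0, movie=2))  # estimate user 0's rating for movie 2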

    Part 3

    In part 3, the concepts of primary and secondary purposes for data collection are introduced. Students discuss examples of secondary purposes such as personalised paywalls for movies that can be purchased, and subscriptions based on the predictions of future behaviour. The discussion includes various topics about individual and societal issues (e.g. filter bubbles, behaviour engineering, information asymmetry, and responsible development of data-driven technologies). 

    Part 4

    Finally, students use the explanatory model as an ‘analytical lens’. They choose other examples from their everyday lives of technologies that implement recommendation systems and analyse these examples, assessing the data practices involved. Students present their results in class and discuss their role in these situations and possible actions they can take to become more empowered, data-aware users.

    Uses of explanatory models

    Using the explanatory model is one approach to make the Informatics, People and Society strand of the German informatics curriculum more engaging for students, and addresses some of the problems teachers identify with delivering this competency area. 

    In presenting the idea of the explanatory model, Carsten and Lukas emphasised that the model both delivers content and functions as a tool for designing teaching content. In the example above, we see how the explanatory model introduces the concepts of:

    1. Explicit and implicit data collection
    2. Primary and secondary purposes of that data 
    3. Data models 

    The explanatory model framework can also be used as a focus for academic research in computing education. For example, further research is needed to evaluate if explanatory models are appropriate or ‘correct’ models and to determine the extent to which they are useful in computing education. 

    In summary, an explanatory model provides a specific perspective on and explanation of particular computing concepts and digital artefacts. In the example given here, the model focuses on the role of data in a recommender system. Explanatory models are representations of concepts, artefacts, and socio-technical systems, but can also serve as tools to support teaching and learning processes and research in computing education. 

    Figure 4. Overview of the perspectives of explanatory models

    The teaching units referred to above are published on www.prodabi.de (in German and English). 

    See the background paper to the seminar, called ‘Learning an explanatory model of data-driven technologies can lead to empowered behaviour: A mixed-methods study in K-12 Computing education’.

    You can also view the paper describing the development of the explanatory model approach, called ‘New perspectives on the future of Computing education: Teaching and learning explanatory models’.

    Join our next seminar

    In our current seminar series, we’re exploring teaching about AI and data science. Join us at our next seminar on Tuesday 13 May at 17:00–18:30 BST to hear Henriikka Vartiainen and Matti Tedre (University of Eastern Finland) discuss how to empower students by teaching them how to develop AI and machine learning (ML) apps without code in the classroom.

    To sign up and take part in our research seminars, click below:

    You can also view the schedule of our upcoming seminars, and catch up on past seminars on our previous seminars and recordings page.

  • Supporting teachers to integrate AI in K–12 CS education

    Reading Time: 5 minutes

    Teaching about artificial intelligence (AI) is a growing challenge for educators around the world. In our current seminar series, we are gaining insights from international computing education researchers on how to teach about AI and data science in the classroom. In our second seminar, Franz Jetzinger from the Technical University of Munich, Germany, presented his work on supporting teachers to integrate AI into their classrooms. Franz brings a wealth of relevant experience to his research as an accomplished textbook author and K–12 computer science teacher.

    Franz started by demonstrating how widespread AI systems and technologies are becoming. He argued that embedding lessons about AI in the classroom presents three challenges: 

    1. What to teach (defining AI and learning content)
    2. How to teach (i.e. appropriate pedagogies)
    3. How to prepare teachers (i.e. effective professional development) 

    As various models and frameworks for teaching about AI already exist, Franz’s research aims to address the second and third challenges: there is a notable lack of empirical evidence on integrating AI in K–12 settings and on teacher professional development (PD) to support teachers.

    Using professional development to help prepare teachers

    In Bavaria, computer science (CS) has been a compulsory high school subject for over 20 years. However, a recent update has brought compulsory CS lessons (including AI) to Year 11 students (15–16 years old). Competencies targeted in the new curriculum include defining AI, explaining the functionality of different machine learning algorithms, and understanding how artificial neurons work.
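
    As a flavour of the last of these competencies, the sketch below shows a single artificial neuron: a weighted sum of inputs passed through an activation function. It is a generic illustration, not taken from the Bavarian teaching materials.

        import math

        def neuron(inputs, weights, bias):
            """A single artificial neuron: weighted sum plus sigmoid activation."""
            z = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1 / (1 + math.exp(-z))  # squashes z into the range (0, 1)

        print(neuron([0.5, 0.8], weights=[0.4, -0.6], bias=0.1))  # ≈ 0.455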

    To help prepare teachers to effectively teach this new curriculum and about AI, Franz and colleagues derived a set of core competencies to be used along with existing frameworks (e.g. the Five Big Ideas of AI) and the Bavarian curriculum. The PD programme Franz and colleagues developed was shaped by a set of key design principles:

    1. Blended learning: A blended format was chosen to address the need for scalability with limited resources and to enable self-directed and active learning 
    2. Dual-level pedagogy (or ‘pedagogical double-decker’): Teachers were taught with the same materials to be used in the classroom to aid familiarity
    3. Advanced organiser: A broad overview document was created to support teachers learning new topics 
    4. Moodle: An online learning platform was used to enable collaboration and communication via a MOOC (massive open online course)

    Analysing the effectiveness of the PD programme

    Over 300 teachers attended the MOOC, which had an introductory session beforehand and a follow-up workshop. The programme’s effectiveness was evaluated with a pre/post assessment where teachers completed a survey of 15 closed, multiple-choice questions on their AI competencies and knowledge. Pre/post comparisons showed teachers’ scores improved significantly having taken part in the PD. This is surprising as a large proportion of participants achieved high pre-scores, indicating a highly motivated cohort with notable prior experience teaching about AI.

    Additionally, a group of teachers (n=9) were invited to give feedback on which aspects of the PD programme they felt contributed to the success of implementing the curriculum in the classroom. They reported that the PD programme supported content knowledge and pedagogical content knowledge well, but they required additional support to design suitable learning assessments.

    The design of the professional development programme

    Using action research to aid AI teaching 

    A separate strand of Franz’s research focuses on the other key challenge of how to effectively teach about AI. Franz engaged teachers (n=14) in action research, a method whereby teachers engage in classroom-based research projects. The project explored what topic-specific difficulties students faced during the lessons and how teachers adapted their teaching to overcome these challenges.

    The AI curriculum in Bavaria

    Findings revealed that students struggled with determining whether AI would benefit certain tasks (e.g. object recognition, text-to-speech) or not (e.g. GPS positioning, sorting data). Franz and colleagues reasoned that students were largely unaware of how AI systems deal with uncertainty and overestimated the systems’ capabilities. Therefore, an important step in teaching students about AI is defining ‘what an AI problem is’. 

    Similarly, students struggled with distinguishing between rule-based and data-driven approaches, believing in some cases that a trained model becomes ‘rule-based’ or that all data models are data-driven. Students also struggled with certain data science concepts, such as hyperparameter, overfitting and underfitting, and information gain. Franz’s team argue that the chosen tool, Orange Data Mining, did not provide an appropriate scaffold for encountering these concepts. 
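
    The distinction the students struggled with can be shown in a few lines of code. The toy example below (not from Franz’s materials) contrasts a rule-based classifier, whose rules a programmer fixes in advance, with a data-driven one that derives its decision threshold from labelled examples.

        def rule_based_is_spam(message: str) -> bool:
            # Rule-based: conditions written by hand in advance
            return "free money" in message.lower() or message.count("!") > 3

        # Data-driven: learn a threshold from labelled examples instead.
        examples = [(0, False), (1, False), (4, True), (6, True)]  # (! count, is_spam)
        spam_mean = sum(c for c, s in examples if s) / sum(1 for _, s in examples if s)
        ham_mean = sum(c for c, s in examples if not s) / sum(1 for _, s in examples if not s)
        threshold = (spam_mean + ham_mean) / 2  # midpoint between the two class means

        def data_driven_is_spam(message: str) -> bool:
            return message.count("!") > threshold

        print(data_driven_is_spam("Win now!!!!"))  # True: 4 exclamation marks > 2.75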

    Finally, teachers found challenges in bringing real-world examples into the classroom, including the use of reinforcement learning and neural networks. Franz and colleagues reasoned that focusing on the function of neural networks, as opposed to their structure, would aid student understanding. The use of high-quality (i.e. well-prepared) real-world data sets was also suggested as a strategy for bridging theoretical ideas with practical examples. 

    Addressing the challenges of teaching AI

    Franz’s research provides important insights into the discipline-specific challenges educators face when introducing AI into the classroom. It also underscores the importance of appropriate professional development and age-appropriate and research-informed materials and tools to support students engaging with ideas about AI, data science, and machine learning.

    Further reading and resources

    If you are interested in reading more about Franz’s work on teacher professional development, you can read his paper on a scalable professional development offer for computer science teachers or you can learn more about his research group here.

    Join our next seminar

    In our current seminar series, we are exploring teaching about AI and data science. Join us at our next seminar on Tuesday 8 April at 17:00–18:30 BST to hear David Weintrop, Rotem Israel-Fishelson, and Peter F. Moon from the University of Maryland introduce ‘API Can Code’, an interest-driven data science curriculum for high-school students.

    To sign up and take part in the seminar, click the button below; we will then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

  • Integrating generative AI into introductory programming classes

    Reading Time: 6 minutes

    Generative AI (GenAI) tools like GitHub Copilot and ChatGPT are rapidly changing how programming is taught and learnt. These tools can solve assignments with remarkable accuracy. GPT-4, for example, scored an impressive 99.5% on an undergraduate computer science exam, compared to Codex’s 78% just two years earlier. With such capabilities, researchers are shifting from asking, “Should we teach with AI?” to “How do we teach with AI?”

    Leo Porter from UC San Diego
    Daniel Zingaro from the University of Toronto

    Leo Porter and Daniel Zingaro have spearheaded this transformation through their groundbreaking undergraduate programming course. Their innovative curriculum integrates GenAI tools to help students tackle complex programming tasks while developing critical thinking and problem-solving skills.

    Leo and Daniel presented their work at the Raspberry Pi Foundation research seminar in December 2024. During the seminar, it became clear that much could be learnt from their work, with their insights having particular relevance for teachers in secondary education thinking about using GenAI in their programming classes.

    Practical applications in the classroom

    In 2023, Leo and Daniel introduced GitHub Copilot in CS1-LLM, their introductory programming course at UC San Diego with 550 students. The course included creative, open-ended projects that allowed students to explore their interests while applying the skills they’d learnt. The projects covered the following areas:

    • Data science: Students used Kaggle datasets to explore questions related to their fields of study — for example, neuroscience majors analysed stroke data. The projects encouraged interdisciplinary thinking and practical applications of programming.
    • Image manipulation: Students worked with the Python Imaging Library (PIL) to create collages and apply filters to images, showcasing their creativity and technical skills (a small sketch of this kind of task follows this list).
    • Game development: A project focused on designing text-based games encouraged students to break down problems into manageable components while using AI tools to generate and debug code.
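
    As a taste of the second project area, the short sketch below uses Pillow (the maintained fork of the Python Imaging Library) to build a two-photo collage and apply a filter. The file names are placeholders, and this is an illustration rather than actual course code.

        from PIL import Image, ImageFilter

        # Load two photos and paste them side by side into a simple collage
        left = Image.open("photo1.jpg")
        right = Image.open("photo2.jpg").resize(left.size)

        collage = Image.new("RGB", (left.width * 2, left.height))
        collage.paste(left, (0, 0))
        collage.paste(right, (left.width, 0))

        # Apply a filter, as students did when experimenting with effects
        collage.filter(ImageFilter.CONTOUR).save("collage.png")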

    Students consistently reported that these projects were not only enjoyable but also responsible for deepening their understanding of programming concepts. A majority (74%) found the projects helpful or extremely helpful for their learning. One student noted:

    Programming projects were fun and the amount of freedom that was given added to that. The projects also helped me understand how to put everything that we have learned so far into a project that I could be proud of.

    Core skills for programming with Generative AI

    Leo and Daniel emphasised that teaching programming with GenAI involves fostering a mix of traditional and AI-specific skills.

    Writing software with GenAI applications, such as Copilot, needs to be approached differently to traditional programming tasks

    Their approach centres on six core competencies:

    • Prompting and function design: Students learn to articulate precise prompts for AI tools, honing their ability to describe a function’s purpose, inputs, and outputs, for instance. This clarity improves the output from the AI tool and reinforces students’ understanding of task requirements (see the sketch after this list).
    • Code reading and selection: AI tools can produce any number of solutions, and each will be different, requiring students to evaluate the options critically. Students are taught to identify which solution is most likely to solve their problem effectively.
    • Code testing and debugging: Students practise open- and closed-box testing, learning to identify edge cases and debug code using tools like doctest and the VS Code debugger.
    • Problem decomposition: Breaking down large projects into smaller functions is essential. For instance, when designing a text-based game, students might separate tasks into input handling, game state updates, and rendering functions.
    • Leveraging modules: Students explore new programming domains and identify useful libraries through interactions with Copilot. This prepares them to solve problems efficiently and creatively.

    • Ethical and metacognitive skills: Students engage in discussions about responsible AI use and reflect on the decisions they make when collaborating with AI tools.
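
    To illustrate how the first and third competencies interact, here is a hedged sketch (not from the course materials): a precise function specification written as a docstring, which could serve as a prompt for an AI assistant, checked with doctest, one of the testing tools the course uses.

        def count_vowels(text: str) -> int:
            """Return the number of vowels (a, e, i, o, u, case-insensitive) in text.

            >>> count_vowels("Hello, World!")
            3
            >>> count_vowels("")
            0
            """
            return sum(1 for ch in text.lower() if ch in "aeiou")

        if __name__ == "__main__":
            import doctest
            doctest.testmod()  # prints nothing when all examples pass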

    Graphic depicting students' confidence levels regarding their programming skills and their use of Generative AI tools.

    Adapting assessments for the AI era

    The rise of GenAI has prompted educators to rethink how they assess programming skills. In the CS1-LLM course, traditional take-home assignments were de-emphasised in favour of assessments that focused on process and understanding.

    Leo and Daniel chose several types of assessments — some involved having to complete programming tasks with the help of GenAI tools, while others had to be completed without.
    • Quizzes and exams: Students were evaluated on their ability to read, test, and debug code — skills critical for working effectively with AI tools. Final exams included both tasks that required independent coding and tasks that required use of Copilot.
    • Creative projects: Students submitted projects alongside a video explanation of their process, emphasising problem decomposition and testing. This approach highlighted the importance of critical thinking over rote memorisation.

    Challenges and lessons learnt

    While Leo and Daniel reported that the integration of AI tools into their course has been largely successful, it has also introduced challenges. Surveys revealed that some students felt overly dependent on AI tools, expressing concerns about their ability to code independently. Addressing this will require striking a balance between leveraging AI tools and reinforcing foundational skills.

    Additionally, ethical concerns around AI use, such as plagiarism and intellectual property, must be addressed. Leo and Daniel incorporated discussions about these issues into their curriculum to ensure students understand the broader implications of working with AI technologies.

    A future-oriented approach

    Leo and Daniel’s work demonstrates that GenAI can transform programming education, making it more inclusive, engaging, and relevant. Their course attracted a diverse cohort, including many students traditionally underrepresented in computer science — 52% of the students were female and 66% were not majoring in computer science — highlighting the potential of AI-powered learning to broaden participation in the field.

    By embracing this shift, educators can prepare students not just to write code but to also think critically, solve real-world problems, and effectively harness the AI innovations shaping the future of technology.

    If you’re an educator interested in using GenAI in your teaching, we recommend checking out Leo and Daniel’s book, Learn AI-Assisted Python Programming, as well as their course resources on GitHub. You may also be interested in our own Experience AI resources, which are designed to help educators navigate the fast-moving world of AI and machine learning technologies.

    Join us at our next online seminar on 11 March

    Our 2025 seminar series is exploring how we can teach young people about AI technologies and data science. At our next seminar on Tuesday, 11 March at 17:00–18:00 GMT, we’ll hear from Lukas Höper and Carsten Schulte from Paderborn University. They’ll be discussing how to teach school students about data-driven technologies and how to increase students’ awareness of how data is used in their daily lives.

    To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

  • Teaching about AI in K–12 education: Thoughts from the USA

    Reading Time: 5 minutes

    As artificial intelligence continues to shape our world, understanding how to teach about AI has never been more important. Our new research seminar series brings together educators and researchers to explore approaches to AI and data science education. In the first seminar, we welcomed Shuchi Grover, Director of AI and Education Research at Looking Glass Ventures. Shuchi began by exploring the theme of teaching using AI, then moved on to discussing teaching about AI in K–12 (primary and secondary) education. She emphasised that it is crucial to teach about AI before using it in the classroom, and this blog post will focus on her insights in this area.

    Shuchi Grover gave an insightful talk discussing how to teach about AI in K–12 education.

    An AI literacy framework

    From her research, Shuchi has developed a framework for teaching about AI that is structured as four interlocking components, each representing a key area of understanding:

    • Basic understanding of AI, which refers to foundational knowledge such as what AI is, types of AI systems, and the capabilities of AI technologies
    • Ethics and human–AI relationship, which includes the role of humans in regard to AI, ethical considerations, and public perceptions of AI
    • Computational thinking/literacy, which relates to how AI works, including building AI applications and training machine learning models
    • Data literacy, which addresses the importance of data, including examining data features, data visualisation, and biases

    This framework shows the multifaceted nature of AI literacy, which involves an understanding of both technical aspects and ethical and societal considerations. 

    Shuchi’s framework for teaching about AI includes four broad areas.

    Shuchi emphasised the importance of learning about AI ethics, highlighting the topic of bias. There are many ways that bias can be embedded in applications of AI and machine learning, including through the data sets that are used and the design of machine learning models. Shuchi discussed supporting learners to engage with the topic through exploring bias in facial recognition software, sharing activities and resources to use in the classroom that can prompt meaningful discussion, such as this talk by Joy Buolamwini. She also highlighted the Kapor Foundation’s Responsible AI and Tech Justice: A Guide for K–12 Education, which contains questions that educators can use with learners to help them to carefully consider the ethical implications of AI for themselves and for society. 

    Computational thinking and AI

    In computer science education, computational thinking is generally associated with traditional rule-based programming — it has often been used to describe the problem-solving approaches and processes associated with writing computer programs following rule-based principles in a structured and logical way. However, with the emergence of machine learning, Shuchi described a need for computational thinking frameworks to be expanded to also encompass data-driven, probabilistic approaches, which are foundational for machine learning. This would support learners’ understanding and ability to work with the models that increasingly influence modern technology.

    Example activities from research studies

    Shuchi shared that a variety of pedagogies have been used in recent research projects on AI education, ranging from hands-on experiences, such as using APIs for classification, to discussions focusing on ethical aspects. You can find out more about these pedagogies in her award-winning paper Teaching AI to K-12 Learners: Lessons, Issues and Guidance. This plurality of approaches ensures that learners can engage with AI and machine learning in ways that are both accessible and meaningful to them.

    Research projects exploring teaching about AI and machine learning have involved a range of different approaches.

    Shuchi shared examples of activities from two research projects that she has led:

    • CS Frontiers engaged high school students in a number of activities using NetsBlox and real-world data sets. For example, in one activity, students created data visualisations to answer questions about climate change. 
    • AI & Cybersecurity for Teens explored approaches to teaching AI and machine learning to 13- to 15-year-olds through the use of cybersecurity scenarios. The project aimed to provide learners with insights into how machine learning models are designed, how they work, and how human decisions influence their development. An example activity guided students through building a classification model to analyse social media accounts to determine whether they may be bot accounts or accounts run by a human (a toy version of such a classifier is sketched below).
    A screenshot from an activity to classify social media accounts
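
    A toy version of the kind of classifier this activity involves is sketched below, using scikit-learn with invented features and labels rather than the project’s dataset.

        from sklearn.tree import DecisionTreeClassifier

        # Features per account: [posts per day, follower/following ratio]
        X = [[150, 0.01], [2, 1.2], [300, 0.05], [5, 0.8], [220, 0.02], [1, 2.5]]
        y = ["bot", "human", "bot", "human", "bot", "human"]

        model = DecisionTreeClassifier().fit(X, y)
        print(model.predict([[180, 0.03]]))  # expected: ['bot']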

    Closing thoughts

    At the end of her talk, Shuchi shared some final thoughts addressing teaching about AI to K–12 learners: 

    • AI learning requires contextualisation: Think about the data sets, ethical issues, and examples of AI tools and systems you use to ensure that they are relatable to learners in your context.
    • AI should not be a solution in search of a problem: Both teachers and learners need to be educated about AI before they start to use it in the classroom, so that they are informed consumers.

    Join our next seminar

    In our current seminar series, we are exploring teaching about AI and data science. Join us at our next seminar on Tuesday 11 March at 17:00–18:30 GMT to hear Lukas Höper and Carsten Schulte from Paderborn University discuss supporting middle school students to develop their data awareness. 

    To sign up and take part in the seminar, click the button below — we will then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

  • How can we teach students about AI and data science? Join our 2025 seminar series to learn more about the topic

    Reading Time: 4 minutes

    AI, machine learning (ML), and data science infuse our daily lives, from the recommendation functionality on music apps to technologies that influence our healthcare, transport, education, defence, and more.

    Which jobs will be affected by AI, ML, and data science remains to be seen, but it is increasingly clear that students will need to learn something about these topics. There will be new concepts to be taught, new instructional approaches and assessment techniques to be used, and new learning activities to be delivered, and we must not neglect the professional development required to help educators master all of this. 

    As AI and data science are incorporated into school curricula and teaching and learning materials worldwide, we ask: What’s the research basis for these curricula, pedagogy, and resource choices?

    In 2024, we showcased researchers who are investigating how AI can be leveraged to support the teaching and learning of programming. But in 2025, we look at what should be taught about AI, ML, and data science in schools and how we should teach this. 

    Our 2025 seminar speakers — so far!

    We are very excited that we have already secured several key researchers in the field. 

    On 21 January, Shuchi Grover will kick off the seminar series by giving an important overview of AI in the K–12 landscape, including developing both AI literacy and AI ethics. Shuchi will provide concrete examples and recently developed frameworks to give educators practical insights on the topic.

    Our second session will focus on a teacher professional development (PD) programme to support the introduction of AI in Upper Bavarian schools. Franz Jetzinger from the Technical University of Munich will summarise the PD programme and share how teachers implemented the topic in their classroom, including the difficulties they encountered.

    Also from Germany, Lukas Höper from Paderborn University, together with Carsten Schulte, will describe important research on data awareness and introduce a framework that is likely to be key for learning about data-driven technology. The pair will talk about the Data Awareness Framework and how it has been used to help learners explore, evaluate, and be empowered in looking at the role of data in everyday applications.  

    Our April seminar will see David Weintrop from the University of Maryland introduce, with his colleagues, a data science curriculum called API Can Code, aimed at high-school students. The group will highlight the strategies needed for integrating data science learning within students’ lived experiences and fostering authentic engagement.

    Later in the year, Jesús Moreno-León from the University of Seville will help us consider the thorny but essential question of how we measure AI literacy. Jesús will present an assessment instrument that has been successfully implemented in several research studies involving thousands of primary and secondary education students across Spain, discussing both its strengths and limitations.

    What to expect from the seminars

    Our seminars are designed to be accessible to anyone interested in the latest research about AI education — whether you’re a teacher, educator, researcher, or simply curious. Each session begins with a presentation from our guest speaker about their latest research findings. We then move into small groups for a short discussion and exchange of ideas before coming back together for a Q&A session with the presenter. 

    Attendees of our 2024 series told us that they valued that the talks “explore a relevant topic in an informative way”, the “enthusiasm and inspiration”, and particularly the small-group discussions because they “are always filled with interesting and varied ideas and help to spark my own thoughts”. 

    The seminars usually take place on Zoom on the first Tuesday of each month at 17:00–18:30 GMT / 12:00–13:30 ET / 9:00–10:30 PT / 18:00–19:30 CET. 

    You can find out more about each seminar and the speakers on our upcoming seminar page. And if you are unable to attend one of our talks, you can watch them from our previous seminar page, where you will also find an archive of all of our previous seminars dating back to 2020.

    How to sign up

    To attend the seminars, please register here. You will receive an email with the link to join our next Zoom call. Once signed up, you will automatically be notified of upcoming seminars. You can unsubscribe from our seminar notifications at any time.

    We hope to see you at a seminar soon!

  • Does AI-assisted coding boost novice programmers’ skills or is it just a shortcut?

    Reading Time: 6 minutes

    Artificial intelligence (AI) is transforming industries, and education is no exception. AI-driven development environments (AIDEs), like GitHub Copilot, are opening up new possibilities, and educators and researchers are keen to understand how these tools impact students learning to code. 

    In our 50th research seminar, Nicholas Gardella, a PhD candidate at the University of Virginia, shared insights from his research on the effects of AIDEs on beginner programmers’ skills.

    Nicholas Gardella focuses his research on understanding human interactions with artificial intelligence-based code generators to inform responsible adoption in computer science education.

    Measuring AI’s impact on students

    AI tools are becoming a big part of software development, but what does that mean for students learning to code? As tools like GitHub Copilot become more common, it’s crucial to ask: Do these tools help students to learn better and work more effectively, especially when time is tight?

    This is precisely what Nicholas’s research aims to identify by examining the impact of AIDEs on four key areas:

    • Performance (how well students completed the tasks)
    • Workload (the effort required)
    • Emotion (their emotional state during the task)
    • Self-efficacy (their belief in their own abilities to succeed)

    Nicholas conducted his study with 17 undergraduate students from an introductory computer science course; most were first-time programmers, and they came from a range of genders and backgrounds.

    The students completed programming tasks both with and without the assistance of GitHub Copilot. Nicholas selected the tasks from OpenAI’s HumanEval dataset, ensuring they represented a range of difficulty levels. He also used a repeated measures design for the study, meaning that each student had the opportunity to program both independently and with AI assistance multiple times. This design helped him to compare individual progress and attitudes towards using AI in programming.

    Less workload, more performance and self-efficacy in learning

    The results were promising for those advocating AI’s role in education. Nicholas’s research found that participants who used GitHub Copilot performed better overall, completing tasks with less mental workload and effort compared to solo programming.

    Nicholas used several measures to find out whether AIDEs affected students’ emotional states.

    However, the immediate impact on students’ emotional state and self-confidence was less pronounced. Initially, participants did not report feeling more confident while coding with AI. Over time, though, as they became more familiar with the tool, their confidence in their abilities improved slightly. This indicates that students need time and practice to fully integrate AI into their learning process. Students increasingly attributed their progress not to the AI doing the work for them, but to their own growing proficiency in using the tool effectively. This suggests that with sustained practice, students can gain confidence in their abilities to work with AI, rather than becoming overly reliant on it.

    Students who used AI tools seemed to improve more quickly than students who worked on the exercises themselves.

    A particularly important takeaway from the talk was the reduction in workload when using AI tools. Novice programmers, who often find programming challenging, reported that AI assistance lightened the workload. This reduced effort could create a more relaxed learning environment, where students feel less overwhelmed and more capable of tackling challenging tasks.

    However, while workload decreased, use of the AI tool did not significantly boost emotional satisfaction or happiness during the coding process. Nicholas explained that although students worked more efficiently, using the AI tool did not necessarily make coding a more enjoyable experience. This highlights a key challenge for educators: finding ways to make learning both effective and engaging, even when using advanced tools like AI.

    AI as a tool for collaboration, not replacement

    Nicholas’s findings raise interesting questions about how AI should be introduced in computer science education. While tools like GitHub Copilot can enhance performance, they should not be seen as shortcuts for learning. Students still need guidance in how to use these tools responsibly. Importantly, the study showed that students did not take credit for the AI tool’s work — instead, they felt responsible for their own progress, especially as they improved their interactions with the tool over time.

    Rick Payne and team / Better Images of AI / Ai is… Banner / CC-BY 4.0

    Students might become better programmers when they learn how to work alongside AI systems, using them to enhance their problem-solving skills rather than relying on them for answers. This suggests that educators should focus on teaching students how to collaborate with AI, rather than fearing that these tools will undermine the learning process.

    Bridging research and classroom realities

    Moreover, the study touched on an important point about the limits of its findings. Since the experiment was conducted in a controlled environment with only 17 participants, further studies are needed to explore how AI tools perform in real-world classroom settings, where, for example, internet access plays a fundamental role. It will also be relevant to understand how factors such as class size, varying prior experience, and the age of students affect their ability to integrate AI into their learning.

    In the follow-up discussion, Nicholas also demonstrated how AI tools are becoming more accessible within browsers and how teachers can integrate AI-driven development environments more easily into their courses. By making AI technology more readily available, these tools are democratising access to advanced programming aids, enabling students to build applications directly in their web browsers with minimal setup.

    The path ahead

    Nicholas’s talk provided an insightful look into the evolving relationship between AI tools and novice programmers. While AI can improve performance and reduce workload, it is not a magic solution to all the challenges of learning to code.

    Based on the discussion after the talk, educators should support students in developing the skills to use these tools effectively, shaping an environment where they can feel confident working with AI systems. The researchers and educators agreed that more research is needed to expand on these findings, particularly in more diverse and larger-scale educational settings. 

    As AI continues to shape the future of programming education, the role of educators will remain crucial in guiding students towards responsible and effective use of these technologies, as we are only at the beginning.

    Join our next seminar

    In our current seminar series, we are exploring how to teach programming with and without AI technology. Join us at our next seminar on Tuesday, 10 December at 17:00–18:30 GMT to hear Leo Porter (UC San Diego) and Daniel Zingaro (University of Toronto) discuss how they are working to create an introductory programming course for majors and non-majors that fully incorporates generative AI into the learning goals of the course. 

    To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

  • Using generative AI to teach computing: Insights from research

    Reading Time: 7 minutes

    As computing technologies continue to rapidly evolve in today’s digital world, computing education is becoming increasingly essential. Arto Hellas and Juho Leinonen, researchers at Aalto University in Finland, are exploring how innovative teaching methods can equip students with the computing skills they need to stay ahead. In particular, they are looking at how generative AI tools can enhance university-level computing education. 

    In our monthly seminar in September, Arto and Juho presented their research on using AI tools to provide personalised learning experiences and automated feedback on help requests, as well as their findings on teaching students how to write effective prompts for generative AI systems. While their research focuses primarily on undergraduate students — given that they teach such students — many of their findings have potential relevance for primary and secondary (K–12) computing education. 

    Generative AI consists of algorithms that can generate new content, such as text, code, and images, based on the input received. Ever since large language models (LLMs) such as ChatGPT and Copilot became widely available, there has been a great deal of attention on how to use this technology in computing education. 

    Arto and Juho described generative AI as one of the fastest-moving topics they had ever worked on, and explained that they were trying to see past the hype and find meaningful uses of LLMs in their computing courses. They presented three studies in which they used generative AI tools with students in ways that aimed to improve the learning experience. 

    Using generative AI tools to create personalised programming exercises

    An important strand of computing education research investigates how to engage students by personalising programming problems based on their interests. The first study in Arto and Juho’s research took place within an online programming course for adult students. It involved developing a tool that used GPT-4 (the latest model available at the time) to generate exercises with personalised aspects. Students could select a theme (e.g. sports, music, video games), a topic (e.g. a specific word or name), and a difficulty level for each exercise.
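
    A minimal sketch of how such a tool might call a model is shown below. It assumes the OpenAI Python client; the prompt wording and parameters are invented for illustration and are not the study’s actual implementation.

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def personalised_exercise(theme: str, topic: str, difficulty: str) -> str:
            prompt = (
                f"Write a {difficulty} Python programming exercise "
                f"themed around {theme} and mentioning {topic}. "
                "Include a problem statement and example input/output."
            )
            response = client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content

        print(personalised_exercise("music", "a favourite artist", "beginner"))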

    A student in a computing classroom.

    Arto, Juho, and their students evaluated the personalised exercises that were generated. Arto and Juho used a rubric to evaluate the quality of the exercises and found that they were clear and had the themes and topics that had been requested. Students’ feedback indicated that they found the personalised exercises engaging and useful, and preferred these over randomly generated exercises. 

    However, when Arto and Juho evaluated the personalisation itself, they found that exercises were often only shallowly personalised. In a shallow personalisation, the personalised content was confined to a single sentence, whereas in a deep personalisation it ran throughout the whole problem statement. It should be noted that in the examples from the seminar below, the terms ‘shallow’ and ‘deep’ are not judgements about the worthiness of the topic itself; they describe whether the personalisation was somewhat tokenistic or more meaningful within the exercise. 

    In these examples from the study, the shallow personalisation contains only one sentence to contextualise the problem, while in the deep example the whole problem statement is personalised. 

    The findings suggest that this personalised approach may be particularly effective on large university courses, where instructors might struggle to give one-on-one attention to every student. The findings further suggest that generative AI tools can be used to personalise educational content and help ensure that students remain engaged. 

    How might all this translate to K-12 settings? Learners in primary and secondary schools often have a wide range of prior knowledge, lived experiences, and abilities. Personalised programming tasks could help diverse groups of learners engage with computing, and give educators a deeper understanding of the themes and topics that are interesting for learners. 

    Responding to help requests using large language models

    Another key aspect of Arto and Juho’s work is exploring how LLMs can be used to generate responses to students’ requests for help. They conducted a study using an online platform containing programming exercises for students. Every time a student struggled with a particular exercise, they could submit a help request, which went into a queue for a teacher to review, comment on, and return to the student. 

    The study aimed to investigate whether an LLM could effectively respond to these help requests and reduce the teachers’ workloads. An important principle was that the LLM should guide the student towards the correct answer rather than provide it. 
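
    The seminar didn’t show how this principle was implemented, but one way to encode it is through a system prompt. In this sketch, the wording, model choice, and function are illustrative assumptions, not the study’s code:

    ```python
    # Sketch of the guide-don't-tell principle as a system prompt. The wording,
    # model choice, and function are assumptions, not the study's code.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are a teaching assistant on a programming course. A student has "
        "asked for help with an exercise that is not working. Point out where "
        "the problem lies and suggest what to investigate next, but never write "
        "the corrected code or state the answer directly."
    )


    def draft_help_response(code: str, request: str) -> str:
        """Draft a guiding response that a teacher could review before sending."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # the study itself used GPT-3.5
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": f"Student code:\n{code}\n\nHelp request: {request}"},
            ],
        )
        return response.choices[0].message.content
    ```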

    The study used GPT-3.5, which was the newest model at the time. The results showed that the LLM was able to analyse and detect logical and syntactical errors in code, but concerningly, its responses also addressed some non-existent problems! This is an example of hallucination, where an LLM outputs something false that is not grounded in the input it received. 

    An example of how an LLM was able to detect a logical error in code, but also hallucinated and provided an unhelpful, false response about a non-existent syntactical error. 

    The finding that LLMs often generated both helpful and unhelpful problem-solving strategies suggests that this is not a technology to rely on in the classroom just yet. Arto and Juho intend to track the effectiveness of LLMs as newer versions are released, and explained that GPT-4 seems to detect errors more accurately, but there is no systematic analysis of this yet. 

    In primary and secondary computing classes, young learners often face similar challenges to those encountered by university students — for example, the struggle to write error-free code and debug programs. LLMs seemingly have a lot of potential to support young learners in overcoming such challenges, while also being valuable educational tools for teachers without strong computing backgrounds. Instant feedback is critical for young learners who are still developing their computational thinking skills — LLMs can provide such feedback, and could be especially useful for teachers who lack the resources to give individualised attention to every learner. Again, though, further research into LLM-based feedback systems is needed before they can be implemented at scale in classroom settings. 

    Teaching students how to prompt large language models

    Finally, Arto and Juho presented a study where they introduced the idea of ‘Prompt Problems’: programming exercises where students learn how to write effective prompts for AI code generators using a tool called Promptly. In a Prompt Problem exercise, students are presented with a visual representation of a problem that illustrates how input values will be transformed to an output. Their task is to devise a prompt (input) that will guide an LLM to generate the code (output) required to solve the problem. Prompt-generated code is evaluated automatically by the Promptly tool, helping students to refine the prompt until it produces code that solves the problem.
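
    The internals of Promptly weren’t covered in detail, but its automatic evaluation step can be sketched as running the generated code against predefined test cases. The structure below, including the assumption that the prompt asks the LLM for a function called `solve`, is ours:

    ```python
    # Sketch of Promptly-style automatic checking: run the LLM-generated code
    # against test cases and report failures. Promptly's internals were not shown,
    # so this whole structure is an assumption.

    def evaluate_generated_code(source: str, test_cases: list[tuple]) -> list[str]:
        """Execute generated code and check each (arguments, expected) pair."""
        namespace: dict = {}
        exec(source, namespace)     # define whatever the prompt asked the LLM for
        solve = namespace["solve"]  # assume the prompt requested a function 'solve'
        failures = []
        for args, expected in test_cases:
            actual = solve(*args)
            if actual != expected:
                failures.append(f"solve{args} returned {actual!r}, expected {expected!r}")
        return failures

    # Example: suppose the visual problem implies a function that doubles its input.
    generated = "def solve(x):\n    return x + x\n"
    print(evaluate_generated_code(generated, [((2,), 4), ((5,), 10)]))  # [] means all passed
    ```

    Failing cases can then be shown to the student, who refines the prompt rather than the code, which is exactly the loop the tool is designed to teach.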

    Feedback from students suggested that using Prompt Problems was a good way for them to gain experience of using new programming concepts and develop their computational thinking skills. However, students were frustrated that bugs in the code had to be fixed by amending the prompt — it was not possible to edit the code directly. 

    How these findings relate to K-12 computing education is still to be explored, but they indicate that Prompt Problems with text-based programming languages could be valuable exercises for older pupils with a solid grasp of foundational programming concepts. 

    Balancing the use of AI tools with fostering a sense of community

    At the end of the presentation, Arto and Juho summarised their work and hypothesised that as society develops more and more AI tools, computing classrooms may lose some of their community aspects. They posed a very important question for all attendees to consider: “How can we foster an active community of learners in the generative AI era?” 

    In our breakout groups and the subsequent whole-group discussion, we began to think about the role of community. Some points raised highlighted the importance of working together to accurately identify and define problems, and sharing ideas about which prompts would work best to accurately solve the problems. 

    As AI technology continues to evolve, its role in education will likely expand. There was general agreement in the question and answer session that keeping a sense of community at the heart of computing classrooms will be important. 

    Arto and Juho asked seminar attendees to think about encouraging a sense of community. 

    Further resources

    The Raspberry Pi Computing Education Research Centre and Faculty of Education at the University of Cambridge have recently published a teacher guide on the use of generative AI tools in education. The guide provides practical guidance for educators who are considering using generative AI tools in their teaching. 

    Join our next seminar

    In our current seminar series, we are exploring how to teach programming with and without AI technology. Join us at our next seminar on Tuesday, 12 November at 17:00–18:30 GMT to hear Nicholas Gardella (University of Virginia) discuss the effects of using tools like GitHub Copilot on the motivation, workload, emotion, and self-efficacy of novice programmers. To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • How to make debugging a positive experience for secondary school students

    How to make debugging a positive experience for secondary school students

    Reading Time: 6 minutes

    Artificial intelligence (AI) continues to change many areas of our lives, with new AI technologies and software having the potential to significantly impact the way programming is taught at schools. In our seminar series this year, we’ve already heard about new AI code generators that can support and motivate young people when learning to code, AI tools that can create personalised Parsons Problems, and research into how generative AI could improve young people’s understanding of program error messages.

    Two teenage girls do coding activities at their laptops in a classroom.

    At times, it can seem like everything is being automated with AI. However, there are some parts of learning to program that cannot (and probably should not) be automated, such as understanding errors in code and how to fix them. Manually typing code might not be necessary in the future, but it will still be crucial to understand the code that is being generated and how to improve and develop it. 

    As important as debugging might be for the future of programming, it’s still often the task most disliked by novice programmers. Even if program error messages can be explained in the future or tools like LitterBox can flag bugs in an engaging way, actually fixing the issues involves time, effort, and resilience — which can be hard to come by at the end of a computing lesson in the late afternoon with 30 students crammed into an IT room. 

    Debugging can be challenging in many different ways, and it is important to understand why students struggle so that we can support them better.

    But what is it about debugging that young people find so hard, even when they’re given enough time to do it? And how can we make debugging a more motivating experience for young people? These are two of the questions that Laurie Gale, a PhD student at the Raspberry Pi Computing Education Research Centre, focused on in our July seminar.

    Laurie has spent the past two years talking to teachers and students and developing tools (a visualiser of students’ programming behaviour and PRIMMDebug, a teaching process and tool for debugging) to understand why many secondary school students struggle with debugging. It has quickly become clear through his research that most issues are due to problematic debugging strategies and students’ negative experiences and attitudes.

    A photograph of Laurie Gale.
    When Laurie Gale started looking into debugging research for his PhD, he noticed that the majority of studies had been with college students, so he decided to change that and find out what would make debugging easier for novice programmers at secondary school.

    When students first start learning how to program, they have to remember a vast amount of new information, such as different variables, concepts, and program designs. Utilising this knowledge is often challenging because they’re already busy juggling all the content they’ve previously learnt and the challenges of the programming task at hand. When error messages inevitably appear that are confusing or misunderstood, it can become extremely difficult to debug effectively. 

    Program error messages are usually not tailored to the age of the programmers and can be hard to understand and overwhelming for novices.

    Given this information overload, students often don’t develop efficient strategies for debugging. When Laurie analysed the debugging efforts of 12- to 14-year-old secondary school students, he noticed some interesting differences between students who were more and less successful at debugging. While successful students generally seemed to make less frequent and more intentional changes, less successful students tinkered frequently with their broken programs, making one- or two-character edits before running the program again. In addition, the less successful students often ran the program soon after beginning the debugging exercise without allowing enough time to actually read the code and understand what it was meant to do. 

    The issue with these behaviours was that they often resulted in students adding errors when changing the program, which then compounded and made debugging increasingly difficult with each run. 74% of students also resorted to spamming, pressing ‘run’ again and again without changing anything. This strategy resonated with many of our seminar attendees, who reported doing the same thing after becoming frustrated. 

    Educators need to be aware of the negative consequences of students’ exasperating and often overwhelming experiences with debugging, especially if students are less confident in their programming skills to begin with. Even though spending 15 minutes on an exercise shows a remarkable level of tenacity and resilience, students’ attitudes to programming — and computing as a whole — can quickly go downhill if their strategies for identifying errors prove ineffective. Debugging becomes a vicious circle: if a student has negative experiences, they are less confident when having to bug-fix again in the future, which can lead to another set of unsuccessful attempts, which can further damage their confidence, and so on. Avoiding this downward spiral is essential. 

    Laurie stresses the importance of understanding the cognitive challenges of debugging and using the right tools and techniques to empower students and support them in developing effective strategies.

    To make debugging a less cognitively demanding activity, Laurie recommends using a range of tools and strategies in the classroom.

    Some ideas of how to improve debugging skills that were mentioned by Laurie and our attendees included:

    • Using frame-based editing tools for novice programmers because such tools encourage students to focus on logical errors rather than accidental syntax errors, which can distract them from understanding the issues with the program. Teaching debugging should also go hand in hand with understanding programming syntax and using simple language. As one of our attendees put it, “You wouldn’t give novice readers a huge essay and ask them to find errors.”
    • Making error messages more understandable, for example, by explaining them to students using large language models.
    • Teaching systematic debugging processes. There are several different approaches to doing this. One of our participants suggested using the scientific method (forming a hypothesis about what is going wrong, devising an experiment that will provide information to see whether the hypothesis is right, and iterating this process) to methodically understand the program and its bugs (see the sketch after this list). 
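
    As a toy illustration of that hypothesis-driven approach (our own example, not one from the seminar), consider a function that should average a list of marks:

    ```python
    # Our own toy example of hypothesis-driven debugging, not one from the seminar.

    # Buggy function: it should return the average of a list of marks.
    def average(marks):
        total = 0
        for m in marks:
            total = m  # bug: overwrites the total instead of accumulating it
        return total / len(marks)

    # Step 1 - observe:     average([10, 20, 30]) returns 10.0, not the expected 20.0.
    # Step 2 - hypothesise: the total is not being accumulated across the loop.
    # Step 3 - experiment:  print(total) inside the loop; it jumps to each mark in
    #                       turn (10, 20, 30), which confirms the hypothesis.
    # Step 4 - fix and re-test:
    def average_fixed(marks):
        total = 0
        for m in marks:
            total += m  # accumulate each mark
        return total / len(marks)

    assert average_fixed([10, 20, 30]) == 20.0
    ```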

    Most importantly, debugging should not be a daunting or stressful experience. Everyone in the seminar agreed that creating a positive error culture is essential. 

    Teachers in Laurie’s study have stressed the importance of positive debugging experiences.

    Some ideas you could explore in your classroom include:

    • Normalising errors: Stress how normal and important program errors are. Everyone encounters them — a professional software developer in our audience said that they spend about half of their time debugging. 
    • Rewarding perseverance: Celebrate the effort, not just the outcome.
    • Modelling how to fix errors: Let your students write buggy programs and attempt to debug them in front of the class.

    In a welcoming classroom where students are given support and encouragement, debugging can be a rewarding experience. What may at first appear to be a failure — even a spectacular one — can be embraced as a valuable opportunity for learning. As a teacher in Laurie’s study said, “If something should have gone right and went badly wrong but somebody found something interesting on the way… you celebrate it. Take the fear out of it.” 

    Watch the recording of Laurie’s presentation:

    [youtube https://www.youtube.com/watch?v=MKD5GuteMC0?feature=oembed&w=500&h=281]

    In our current seminar series, we are exploring how to teach programming with and without AI.

    Join us at our next seminar on Tuesday, 12 November at 17:00–18:30 GMT to hear Nicholas Gardella (University of Virginia) discuss the effects of using tools like GitHub Copilot on the motivation, workload, emotion, and self-efficacy of novice programmers. To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • How useful do teachers find error message explanations generated by AI? Pilot research results

    How useful do teachers find error message explanations generated by AI? Pilot research results

    Reading Time: 7 minutes

    As discussions of how artificial intelligence (AI) will impact teaching, learning, and assessment proliferate, I was thrilled to be able to add one of my own research projects to the mix. As a research scientist at the Raspberry Pi Foundation, I’ve been working on a pilot research study in collaboration with Jane Waite to explore the topic of program error messages (PEMs). 

    Computer science students at a desktop computer in a classroom.

    PEMs can be a significant barrier to learning for novice coders, as they are often confusing and difficult to understand. This can hinder troubleshooting and progress in coding, and lead to frustration. 

    Recently, various teams have been exploring how generative AI, specifically large language models (LLMs), can be used to help learners understand PEMs. My research in this area specifically explores secondary teachers’ views of the explanations of PEMs generated by an LLM, as an aid for learning and teaching programming, and I presented some of my results in our ongoing seminar series.

    Understanding program error messages is hard at the start

    I started the seminar by setting the scene and describing the current background of research on novices’ difficulty in using PEMs to fix their code, and the efforts made to date to improve these. The three main points I made were that:

    1. PEMs are often difficult to decipher, especially by novices, and there’s a whole research area dedicated to identifying ways to improve them.
    2. Recent studies have employed LLMs as a way of enhancing PEMs. However, the evidence on what makes an ‘effective’ PEM for learning is limited, variable, and contradictory.
    3. There is limited research in the context of K–12 programming education, as well as research conducted in collaboration with teachers to better understand the practical and pedagogical implications of integrating LLMs into the classroom more generally.

    My pilot study aims to fill this gap directly, by reporting K–12 teachers’ views of the potential use of LLM-generated explanations of PEMs in the classroom, and how their views fit into the wider theoretical paradigm of feedback literacy. 

    What did the teachers say?

    To conduct the study, I interviewed eight expert secondary computing educators. The interviews were semi-structured activity-based interviews, where the educators got to experiment with a prototype version of the Foundation’s publicly available Code Editor. This version of the Code Editor was adapted to generate LLM explanations when the question mark next to the standard error message is clicked (see Figure 1 for an example of an LLM-generated explanation). The Code Editor version called the OpenAI GPT-3.5 API to generate explanations based on the following prompt: “You are a teacher talking to a 12-year-old child. Explain the error {error} in the following Python code: {code}”. 

    The Foundation’s Python Code Editor with LLM feedback prototype.
    Figure 1: The Foundation’s Code Editor with LLM feedback prototype.
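
    The prototype’s integration code isn’t published alongside this blog, but the call described above can be sketched as follows; the prompt template is quoted from the study, while the client usage and function name are illustrative assumptions:

    ```python
    # Sketch of the prototype's LLM call. The prompt template is quoted from the
    # study; the client usage and function name are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()


    def explain_error(error: str, code: str) -> str:
        """Generate a child-friendly explanation of a program error message."""
        prompt = (
            "You are a teacher talking to a 12-year-old child. "
            f"Explain the error {error} in the following Python code: {code}"
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # the prototype called a GPT-3.5 interface
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    ```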

    Fifteen themes were derived from the educators’ responses and these were split into five groups (Figure 2). Overall, the educators’ views of the LLM feedback were that, for the most part, a sensible explanation of the error messages was produced. However, all educators experienced at least one example of invalid content (LLM “hallucination”). Also, despite not being explicitly requested in the LLM prompt, a possible code solution was always included in the explanation.

    Themes and groups derived from teachers’ responses.
    Figure 2: Themes and groups derived from teachers’ responses.

    Matching the themes to PEM guidelines

    Next, I investigated how the teachers’ views correlated with the research conducted to date on enhanced PEMs. I used the guidelines proposed by Brett Becker and colleagues, which consolidate much of the research done in this area into ten design guidelines. The guidelines offer best practices on how to enhance PEMs, based on empirical research in cognitive science and educational theory. For example, they outline that enhanced PEMs should provide scaffolding for the user, increase readability, reduce cognitive load, use a positive tone, and provide context to the error.
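
    As a concrete illustration of these guidelines (an invented example, not one taken from the guidelines paper), compare a raw Python error message with an enhanced version:

    ```python
    # An invented illustration of the guidelines, not an example from the paper.
    marks = [72, 85, 91]
    print(marks[3])

    # Raw Python message:
    #     IndexError: list index out of range
    #
    # An enhanced version applying the guidelines (context, scaffolding, reduced
    # jargon, positive tone) might instead read:
    #     "Line 2 tries to read position 3 of 'marks', but 'marks' only has three
    #      items, stored at positions 0, 1, and 2 (Python counts from 0). Try an
    #      index between 0 and 2, or check how many items you expect 'marks' to hold."
    ```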

    Out of the 15 themes identified in my study, 10 correlated closely with the guidelines. These were, for the most part, the themes relating to the content of the explanations, their presentation, and their validity (Figure 3). The themes concerning the teaching and learning process fitted the guidelines less well.

    Correlation between teachers’ responses and enhanced PEM design guidelines.
    Figure 3: Correlation between teachers’ responses and enhanced PEM design guidelines.

    Does feedback literacy theory fit better?

    However, when I looked at feedback literacy theory, I was able to correlate all fifteen themes — the theory fits.

    Feedback literacy theory positions the feedback process (which includes explanations) as a social interaction, and accounts for the actors involved in the interaction — the student and the teacher — as well as the relationships between the student, the teacher, and the feedback. We can explain feedback literacy theory using three constructs: feedback types, student feedback literacy, and teacher feedback literacy (Figure 4). 

    Feedback literacy at the intersection between feedback types, student feedback literacy, and teacher feedback literacy.
    Figure 4: Feedback literacy at the intersection between feedback types, student feedback literacy, and teacher feedback literacy.

    From the feedback literacy perspective, feedback can be grouped into four types: telling, guiding, developing understanding, and opening up new perspectives. The feedback type depends on the role of the student and teacher when engaging with the feedback (Figure 5). 

    From the student perspective, the competencies and dispositions students need in order to use feedback effectively can be stated as: appreciating the feedback processes, making judgements, taking action, and managing affect. Finally, from a teacher perspective, teachers apply their feedback literacy skills across three dimensions: design, relational, and pragmatic. 

    In short, according to feedback literacy theory, effective feedback processes entail well-designed feedback with a clear pedagogical purpose, as well as the competencies students and teachers need in order to make sense of the feedback and use it effectively.

    A computer science teacher sits with students at computers in a classroom.

    This theory therefore provided a promising lens for analysing the educators’ perspectives in my study. When the educators’ views were correlated to feedback literacy theory, I found that:

    1. Educators prefer the LLM explanations to fulfil a guiding and developing understanding role, rather than telling. For example, educators prefer to either remove or delay the code solution from the explanation, and they like the explanations to include keywords based on concepts they are teaching in the classroom to guide and develop students’ understanding rather than tell.
    2. Related to students’ feedback literacy, educators talked about the ways in which the LLM explanations help or hinder students to make judgements and action the feedback in the explanations. For example, they talked about how detailed, jargon-free explanations can help students make judgements about the feedback, but invalid explanations can hinder this process. Therefore, teachers talked about the need for ways to manage such invalid instances. However, for the most part, the educators didn’t talk about eradicating them altogether. They talked about ways of flagging them, using them as counter-examples, and having visibility of them to be able to address them with students.
    3. Finally, from a teacher feedback literacy perspective, educators discussed the need for professional development to manage feedback processes inclusive of LLM feedback (design) and address issues resulting from reduced opportunities to interact with students (relational and pragmatic). For example, if using LLM explanations results in a reduction in the time teachers spend helping students debug syntax errors from a pragmatic time-saving perspective, then what does that mean for the relationship they have with their students? 

    Conclusion from the study

    By correlating educators’ views to feedback literacy theory as well as enhanced PEM guidelines, we can take a broader perspective on how LLMs might shape not only the content of the explanations, but the whole social interaction around giving and receiving feedback. Investigating ways of supporting students and teachers to practise their feedback literacy skills matters just as much as, if not more than, focusing on the content of PEM explanations. 

    This study was a first-step exploration of eight educators’ views on the potential impact of using LLM explanations of PEMs in the classroom. Exactly what the findings of this study mean for classroom practice remains to be investigated, and we also need to examine students’ views on the feedback and its impact on their journey of learning to program. 

    If you want to hear more, you can watch my seminar:

    [youtube https://www.youtube.com/watch?v=fVD2zpGpcY0?feature=oembed&w=500&h=281]

    You can also read the associated paper, or find out more about the research instruments on this project website.

    If any of these ideas resonated with you as an educator, student, or researcher, do reach out — we’d love to hear from you. You can contact me directly at veronica.cucuiat@raspberrypi.org or drop us a line in the comments below. 

    Join our next seminar

    The focus of our ongoing seminar series is on teaching programming with or without AI. Check out the schedule of our upcoming seminars.

    To take part in the next seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    You can also catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK

  • Adapting primary Computing resources for cultural responsiveness: Bringing in learners’ identity

    Adapting primary Computing resources for cultural responsiveness: Bringing in learners’ identity

    Reading Time: 6 minutes

    In recent years, the emphasis on creating culturally responsive educational practices has gained significant traction in schools worldwide. This approach aims to tailor teaching and learning experiences to better reflect and respect the diverse cultural backgrounds of students, thereby enhancing their engagement and success in school. In one of our recent research studies, we collaborated with a small group of primary school Computing teachers to adapt existing resources to be more culturally responsive to their learners.

    Teachers work together to identify adaptations to Computing lessons.
    At a workshop for the study, teachers collaborated to identify adaptations to Computing lessons

    We used a set of ten areas of opportunity to scaffold and prompt teachers to look for ways that Computing resources could be adapted, including making changes to the content or the context of lessons, and using pedagogical techniques such as collaboration and open-ended tasks. 

    Today’s blog lays out our findings about how teachers can bring students’ identities into the classroom as an entry point for culturally responsive Computing teaching.

    Collaborating with teachers

    A group of twelve primary teachers, from schools spread across England, volunteered to participate in the study. The primary objective was for our research team to collaborate with these teachers to adapt two units of work about creating digital images and vector graphics so that they better aligned with the cultural contexts of their students. The research team facilitated an in-person, one-day workshop where the teachers could discuss their experiences and work in small groups to adapt materials that they then taught in their classrooms during the following term.

    A shared focus on identity

    As the workshop progressed, an interesting pattern emerged. Despite the diversity of schools and student populations represented by the teachers, each group independently decided to focus on the theme of identity in their adaptations. This was not a directive from the researchers, but rather a spontaneous alignment of priorities among the teachers.

    An example slide from a culturally adapted activity to create a vector graphic emoji.
    An example of an adapted Computing activity to create a vector graphic emoji.

    The focus on identity manifested in various ways. For some teachers, it involved adding diverse role models so that students could see themselves represented in computing, while for others, it meant incorporating discussions about students’ own experiences into the lessons. However, the most compelling commonality across all groups was the decision to have students create a digital picture that represented something important about themselves. This digital picture could take many forms — an emoji, a digital collage, an avatar to add to a game, or even a fantastical animal. The goal of these activities was to provide students with a platform to express aspects of their identity that were significant to them whilst also practising the skills to manipulate vector graphics or digital images.

    Funds of identity theory

    After the teachers had returned to their classrooms and taught the adapted lessons to their students, we analysed the digital pictures created by the students using funds of identity theory. This theory explains how our personal experiences and backgrounds shape who we are and what makes us unique and individual, and argues that our identities are not static but are continuously shaped and reshaped through interactions with the world around us. 

    Keywords for the funds of identity framework, drawing on work by Esteban-Guitart and Moll (2014) and Poole (2017).
    Funds of identity framework, drawing on work by Esteban-Guitart and Moll (2014) and Poole (2017).

    In the context of our study, this theory argues that students bring their funds of identity into their Computing classrooms, including their cultural heritage, family traditions, languages, values, and personal interests. Through the image editing and vector graphics activities, students were able to create what the funds of identity theory refers to as identity artefacts. This allowed them to explore and highlight the various elements that hold importance in their lives, illuminating different facets of their identities. 

    Students’ funds of identity

    The use of the funds of identity theory provided a robust framework for understanding the digital artefacts created by the students. We analysed the teachers’ descriptions of the artefacts, paying close attention to how students represented their identities in their creations.

    1. Personal interests and values 

    One significant aspect of the analysis centred on the personal interests and values reflected in the artefacts. Some students chose to draw on their practical funds of identity and create images about hobbies that were important to them, such as drawing or playing football. Others focused on existential funds of identity and represented values that were central to their personalities, such as being cool, chatty, or quiet.

    2. Family and community connections

    Many students also chose to include references to their family and community in their artefacts. Social funds of identity were displayed when students featured family members in their images. Some students also drew on their institutional funds, adding references to their school, or geographical funds, by showing places such as the local area or a particular country that held special significance for them. These references highlighted the importance of familial and communal ties in shaping the students’ identities.

    3. Cultural representation

    Another common theme was the way students represented their cultural backgrounds. Some students chose to highlight their cultural funds of identity, creating images that included their heritage, including their national flag or traditional clothing. Other students incorporated ideological aspects of their identity that were important to them because of their faith, including Catholicism and Islam. This aspect of the artefacts demonstrated how students viewed their cultural heritage as an integral part of their identity.

    Implications for culturally responsive Computing teaching

    The findings from this study have several important implications. Firstly, the spontaneous focus on identity by the teachers suggests that identity is a powerful entry point for culturally responsive Computing teaching. Secondly, the application of the funds of identity theory to the analysis of student work demonstrates the diverse cultural resources that students bring to the classroom and highlights ways to adapt Computing lessons in ways that resonate with students’ lived experiences.

    An example of an identity artefact made by one of the students in a culturally adapted lesson on vector graphics.
    An example of an identity artefact made by one of the students in the culturally adapted lesson on vector graphics. 

    However, we also found that teachers often had to carefully support students to illuminate their funds of identity. Sometimes students found it difficult to create images about their hobbies, particularly if they were from backgrounds with fewer social and economic opportunities. We also observed that when teachers modelled an identity artefact themselves, perhaps to show an example for students to aim for, students then sometimes copied the funds of identity revealed by the teacher rather than drawing on their own funds. These points need to be taken into consideration when using identity artefact activities. 

    Finally, these findings relate to lessons about image editing and vector graphics that were taught to students aged 8 to 10 years old in England, and it remains to be explored how students in other countries or of different ages might reveal their funds of identity in the Computing classroom.

    Moving forward with cultural responsiveness

    The study demonstrated that when Computing teachers are given the opportunity to collaborate and reflect on their practice, they can develop innovative ways to make their teaching more culturally responsive. The focus on identity, as seen in the creation of identity artefacts, provided students with a platform to express themselves and connect their learning to their own lives. By understanding and valuing the funds of identity that students bring to the classroom, teachers can create a more equitable and empowering educational experience for all learners.

    Two learners do physical computing in the primary school classroom.

    We’ve written about this study in more detail in a full paper and a poster paper, which will be published at the WiPSCE conference next week. 

    We would like to thank all the researchers who worked on this project, including our collaborators Lynda Chinaka from the University of Roehampton and Alex Hadwen-Bennett from King’s College London. Finally, we are grateful to Cognizant for funding this academic research, and to the cohort of primary Computing teachers for their enthusiasm, energy, and creativity, and their commitment to this project.

    Website: LINK

  • Empowering undergraduate computer science students to shape generative AI research

    Empowering undergraduate computer science students to shape generative AI research

    Reading Time: 6 minutes

    As use of generative artificial intelligence (or generative AI) tools such as ChatGPT, GitHub Copilot, or Gemini becomes more widespread, educators are thinking carefully about the place of these tools in their classrooms. For undergraduate education, there are concerns about the role of generative AI tools in supporting teaching and assessment practices. For undergraduate computer science (CS) students, generative AI also has implications for their future career trajectories, as it is likely to be relevant across many fields.

    Dr Stephen MacNeil, Andrew Tran, and Irene Hou (Temple University)

    In a recent seminar in our current series on teaching programming (with or without AI), we were delighted to be joined by Dr Stephen MacNeil, Andrew Tran, and Irene Hou from Temple University. Their talk showcased several research projects involving generative AI in undergraduate education, and explored how undergraduate research projects can create agency for students in navigating the implications of generative AI in their professional lives.

    Differing perceptions of generative AI

    Stephen began by discussing the media coverage around generative AI. He highlighted the binary distinction between media representations of generative AI as signalling the end of higher education — including programming in CS courses — and representations that emphasise the problems generative AI will solve for educators, such as improving access to high-quality help (specifically, virtual assistance) or personalised learning experiences.

    Students sitting in a lecture at a university.

    As part of a recent ITiCSE working group, Stephen and colleagues conducted a survey of undergraduate CS students and educators and found conflicting views about the perceived benefits and drawbacks of generative AI in computing education. Despite this divide, most CS educators reported that they were planning to incorporate generative AI tools into their courses. Conflicting views were also noted between students and educators on what is allowed in terms of generative AI tools and whether their universities had clear policies around their use.

    The role of generative AI tools in students’ help-seeking

    There is growing interest in how undergraduate CS students are using generative AI tools. Irene presented a study in which her team explored the effect of generative AI on undergraduate CS students’ help-seeking preferences. Help-seeking can be understood as any actions or strategies undertaken by students to receive assistance when encountering problems. Help-seeking is an important part of the learning process, as it requires metacognitive awareness to understand that a problem exists that requires external help. Previous research has indicated that instructors, teaching assistants, student peers, and online resources (such as YouTube and Stack Overflow) can assist CS students. However, as generative AI tools are now widely available to assist in some tasks (such as debugging code), Irene and her team wanted to understand which resources students valued most, and which factors influenced their preferences. Their study consisted of a survey of 47 students, and follow-up interviews with 8 additional students. 

    Undergraduate CS student use of help-seeking resources

    Responding to the survey, students stated that they used online searches or support from friends/peers more frequently than two generative AI tools, ChatGPT and GitHub Copilot; however, Irene indicated that as data collection took place at the beginning of summer 2023, it is possible that students were not familiar with these tools or had not used them yet. In terms of students’ experiences in seeking help, students found online searches and ChatGPT were faster and more convenient, though they felt these resources led to less trustworthy or lower-quality support than seeking help from instructors or teaching assistants.

    Two undergraduate students are seated at a desk, collaborating on a computing task.

    Some students felt more comfortable seeking help from ChatGPT than peers as there were fewer social pressures. Comparing generative AI tools and online searches, one student highlighted that unlike Stack Overflow, solutions generated using ChatGPT and GitHub Copilot could not be verified by experts or other users. Students who received the most value from using ChatGPT in seeking help either (i) prompted the model effectively when requesting help or (ii) viewed ChatGPT as a search engine or comprehensive resource that could point them in the right direction. Irene cautioned that some students struggled to use generative AI tools effectively as they had limited understanding of how to write effective prompts.

    Using generative AI tools to produce code explanations

    Andrew presented a study where the usefulness of different types of code explanations generated by a large language model was evaluated by students in a web software development course. Based on Likert scale data, they found that line-by-line explanations were less useful for students than high-level summary or concept explanations, but that line-by-line explanations were nonetheless the most popular. They also found that explanations were less useful when students already knew what the code did. Andrew and his team then qualitatively analysed code explanations that had been given a low rating and found that these were overly detailed (i.e. focusing on superfluous elements of the code), of the wrong type for the situation, or mixed code with explanatory text. Despite the flaws of some explanations, they concluded that students found explanations relevant and useful to their learning.

    Perceived usefulness of code explanation types
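
    The study’s prompts weren’t shown in the seminar, but the three explanation types it compared could be requested along these lines; the prompt wording here is our assumption:

    ```python
    # Sketch of how the explanation types compared in the study might be
    # requested; the prompt wording is an assumption, not the study's.
    EXPLANATION_PROMPTS = {
        "line-by-line": "Explain what each line of this code does, line by line:",
        "summary": "Give a short, high-level summary of what this code does:",
        "concepts": "List and explain the programming concepts this code uses:",
    }


    def build_explanation_request(kind: str, code: str) -> str:
        """Combine the chosen explanation style with the student's code."""
        return f"{EXPLANATION_PROMPTS[kind]}\n\n{code}"


    print(build_explanation_request("summary", "for i in range(3):\n    print(i)"))
    ```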

    Using generative AI tools to create multiple choice questions

    In a separate study, Andrew and his team investigated the use of ChatGPT to generate novel multiple choice questions for computing courses. The researchers prompted two models, GPT-3 and GPT-4, with example question stems to generate correct answers and distractors (incorrect but plausible choices). Across two data sets of example questions, GPT-4 significantly outperformed GPT-3 in generating the correct answer (75.3% and 90% vs 30.8% and 36.7% of all cases). GPT-3 performed less well at providing the correct answer when faced with negatively worded questions. Both models generated correct answers as distractors across both sets of example questions (GPT-3: 11.1% and 10% of cases; GPT-4: 9.9% and 17.8%). They concluded that educators would still need to verify whether answers were correct and distractors were appropriate.
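
    The exact prompts used in the study weren’t reproduced in the seminar, but the setup can be sketched as follows; the prompt format is an illustrative assumption:

    ```python
    # Sketch of the MCQ-generation setup described in the study; the prompt
    # format is an illustrative assumption.
    MCQ_PROMPT = """\
    Write a multiple choice question for an introductory Python course.
    Question stem: {stem}
    Provide one correct answer and three plausible but incorrect distractors,
    and label which option is correct.
    """


    def build_mcq_prompt(stem: str) -> str:
        """Fill the question stem into the MCQ-generation prompt."""
        return MCQ_PROMPT.format(stem=stem)


    print(build_mcq_prompt("What does len('hello') return?"))
    ```

    As the study concluded, the generated answer key still needs human checking: a model may mark a wrong option as correct, or produce a distractor that is actually correct.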

    An undergraduate student is raising his hand up during a lecture at a university.

    Undergraduate students shaping the direction of generative AI research

    With student concerns about generative AI and its implications for the world of work, the seminar ended with a hopeful message highlighting undergraduate students being proactive in conducting their own research and shaping the direction of generative AI research in computer science education. Stephen concluded the seminar by celebrating the undergraduate students who are undertaking these research projects.

    You can watch the seminar here:

    [youtube https://www.youtube.com/watch?v=Pq-d6wipGRQ?feature=oembed&w=500&h=281]

    If you are interested in learning more about Stephen’s work on generative AI, you can read about how undergraduate students used generative AI tools to create analogies for recursion. If you would like to experiment with using generative AI tools to assist with debugging, you could try using Gemini, ChatGPT, or Copilot.

    Join our next seminar

    Our current seminar series is on teaching programming with or without AI. 

    In our next seminar, on 16 July at 17:00 to 18:30 BST, we welcome Laurie Gale (Raspberry Pi Computing Education Research Centre, University of Cambridge), who will discuss how to teach debugging to secondary school students. To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    The schedule of our upcoming seminars is available online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK

  • Imagining students’ progression in the era of generative AI

    Imagining students’ progression in the era of generative AI

    Reading Time: 6 minutes

    Generative artificial intelligence (AI) tools are becoming more easily accessible to learners and educators, and increasingly better at generating code solutions to programming tasks, code explanations, computing lesson plans, and other learning resources. This raises many questions for educators in terms of what and how we teach students about computing and AI, and AI’s impact on assessment, plagiarism, and learning objectives.

    Brett Becker.

    We were honoured to have Professor Brett Becker (University College Dublin) join us as part of our ‘Teaching programming (with or without AI)’ seminar series. He is uniquely placed to comment on teaching computing using AI tools, having been involved in many initiatives relevant to computing education at different levels, in Ireland and beyond.

    In a computing classroom, two girls concentrate on their programming task.

    Brett’s talk focused on what educators and education systems need to do to prepare all students — not just those studying Computing — so that they are equipped with sufficient knowledge about AI to make their way from primary school to secondary and beyond, whether it be university, technical qualifications, or work.

    How do AI tools currently perform?

    Brett began his talk by illustrating the increase in performance of large language models (LLMs) in solving first-year undergraduate programming exercises: he compared the findings from two recent studies he was involved in as part of an ITiCSE Working Group. In the first study — from 2021 — the results generated by GPT-3 were similar to those of students in the top quartile. By the second study in 2023, GPT-4’s performance matched that of a top student (Figure 1).

    A graph comparing exam scores.

    Figure 1: Student scores on Exam 1 and Exam 2, represented by circles. GPT-3’s 2021 score is represented by the blue ‘x’, and GPT-4’s 2023 score on the same questions is represented by the red ‘x’.

    Brett also explained that the study found some models were capable of solving current undergraduate programming assessments almost error-free, and could solve the Irish Leaving Certificate and UK A level Computer Science exams.

    What are the challenges and opportunities for education?

    This level of performance raises many questions for computing educators about what is taught and how to assess students’ learning. To address this, Brett referred to his 2023 paper, which included findings from a literature review and a survey on students’ and instructors’ attitudes towards using LLMs in computing education. This analysis has helped him identify several opportunities as well as the ethical challenges education systems face regarding generative AI. 

    The opportunities include: 

    • The generation of unique content, lesson plans, programming tasks, or feedback to help educators with workload and productivity
    • More accessible content and tools generated by AI applications, making Computing accessible to a broader range of students
    • More engaging and meaningful student learning experiences, including using generative AI to enable creativity and using conversational agents to augment students’ learning
    • The impact on assessment practices, both in terms of automating the marking of current assessments as well as reconsidering what is assessed and how

    Some of the challenges include:

    • The lack of reliability and accuracy of outputs from generative AI tools
    • The need to educate everyone about AI to create a baseline level of understanding
    • The legal and ethical implications of using AI in computing education and beyond
    • How to deal with questionable or even intentionally harmful uses of AI and mitigating the consequences of such uses

    Programming as a basic skill for all subjects

    Next, Brett talked about concrete actions that he thinks we need to take in response to these opportunities and challenges. 

    He emphasised our responsibility to keep students safe. One way to do this is to empower all students with a baseline level of knowledge about AI, at an age-appropriate level, to enable them to keep themselves safe. 

    Secondary school age learners in a computing classroom.

    He also discussed the increased relevance of programming to all subjects, not only Computing, in a similar way to how reading and mathematics transcend the boundaries of their subjects, and the need he sees to adapt subjects and curricula to that effect. 

    As an example of how rapidly curricula may need to change with increasing AI use by students, Brett looked at the Irish Computer Science specification for the ‘senior cycle’ (the final two years of second-level education, ages 16–18). This curriculum was developed in 2018 and remains a strong computing curriculum in Brett’s opinion. However, he pointed out that it contains only a single learning outcome on AI. 

    To help educators bridge this gap, in the book Brett wrote alongside Keith Quille to accompany the curriculum, they included two chapters dedicated to AI, machine learning, and ethics and computing. Brett believes these types of additional resources may be instrumental for teaching and learning about AI as resources are more adaptable and easier to update than curricula. 

    Generative AI in computing education

    Taking the opportunity to use generative AI to reimagine new types of programming problems, Brett and colleagues have developed Promptly, a tool that allows students to practise prompting AI code generators. This tool provides a combined approach to learning about generative AI while learning programming with an AI tool. 

    Promptly is intended to help students learn how to write effective prompts. It encourages students to specify and decompose the programming problem they want to solve, read the code generated, compare it with test cases to discern why it is failing (if it is), and then update their prompt accordingly (Figure 2). 

    An example of the Promptly interface.

    Figure 2: Example of a student’s use of Promptly.

    Early undergraduate student feedback points to Promptly being a useful way to teach programming concepts and encourage metacognitive programming skills. The tool is further described in a paper, and whilst the initial evaluation was aimed at undergraduate students, Brett positioned it as a secondary school–level tool as well. 

    Brett hopes that by using generative AI tools like this, it will be possible to better equip a larger and more diverse pool of students to engage with computing.

    Re-examining the concept of programming

    Brett concluded his seminar by broadening the relevance of programming to all learners, while challenging us to expand our perspectives of what programming is. If we define programming as a way of prompting a machine to get an output, LLMs allow all of us to do so without the need for learning the syntax of traditional programming languages. Taking that view, Brett left us with a question to consider: “How do we prepare for this from an educational perspective?”

    You can watch Brett’s presentation here:

    [youtube https://www.youtube.com/watch?v=n0BZq8uRutQ?feature=oembed&w=500&h=281]

    Join our next seminar

    The focus of our ongoing seminar series is on teaching programming with or without AI. 

    For our next seminar on Tuesday 11 June at 17:00 to 18:30 GMT, we’re joined by Veronica Cucuiat (Raspberry Pi Foundation), who will talk about whether LLMs could be employed to help understand programming error messages, which can present a significant obstacle to anyone new to coding, especially young people.  

    To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK

  • Insights into students’ attitudes to using AI tools in programming education

    Insights into students’ attitudes to using AI tools in programming education

    Reading Time: 4 minutes

    Educators around the world are grappling with the problem of whether to use artificial intelligence (AI) tools in the classroom. As more and more teachers start exploring the ways to use these tools for teaching and learning computing, there is an urgent need to understand the impact of their use to make sure they do not exacerbate the digital divide and leave some students behind.

    A teenager learning computer science.

    Sri Yash Tadimalla from the University of North Carolina and Dr Mary Lou Maher, Director of Research Community Initiatives at the Computing Research Association, are exploring how student identities affect their interaction with AI tools and their perceptions of the use of AI tools. They presented findings from two of their research projects in our March seminar.

    How students interact with AI tools 

    A common approach in research is to begin with a preliminary study involving a small group of participants in order to test a hypothesis, ways of collecting data from participants, and an intervention. Yash explained that this was the approach they took with a group of 25 undergraduate students on an introductory Java programming course. The research observed the students as they performed a set of programming tasks using an AI chatbot tool (ChatGPT) or an AI code generator tool (GitHub Copilot). 

    The data analysis uncovered five emergent attitudes of students using AI tools to complete programming tasks: 

    • Highly confident students rely heavily on AI tools and are confident about the quality of the code generated by the tool without verifying it
    • Cautious students are careful in their use of AI tools and verify the accuracy of the code produced
    • Curious students are interested in exploring the capabilities of the AI tool and are likely to experiment with different prompts 
    • Frustrated students struggle with using the AI tool to complete the task and are likely to give up 
    • Innovative students use the AI tool in creative ways, for example to generate code for other programming tasks

    Whether these attitudes are common among other, larger groups of students requires more research. However, these preliminary groupings may be useful for educators who want to understand their students and how to support them with targeted instructional techniques. For example, highly confident students may need encouragement to check the accuracy of AI-generated code, while frustrated students may need assistance to use the AI tools to complete programming tasks.

    An intersectional approach to investigating student attitudes

    Yash and Mary Lou explained that their next research study took an intersectional approach to student identity. Intersectionality is a way of exploring identity using more than one defining characteristic, such as ethnicity and gender, or education and class. Intersectional approaches acknowledge that a person’s experiences are shaped by the combination of their identity characteristics, which can sometimes confer multiple privileges or lead to multiple disadvantages.

    A student in a computing classroom.

    In the second research study, 50 undergraduate students participated in programming tasks and their approaches and attitudes were observed. The gathered data was analysed using intersectional groupings, such as:

    • Students who were female and the first generation in their family to attend university
    • Students who were female and from an underrepresented ethnic group

    Although the researchers observed differences amongst the groups of students, there was not enough data to determine whether these differences were statistically significant.

    Who thinks using AI tools should be considered cheating? 

    Participating students were also asked for their views on using AI tools, through questions such as “Did having AI help you in the process of programming?” and “Does your experience with using this AI tool motivate you to continue learning more about programming?”

    The same intersectional approach was taken towards analysing students’ answers. One surprising finding stood out: when asked whether using AI tools to help with programming tasks should be considered cheating, students from more privileged backgrounds tended to say that it should, whilst students from less privileged backgrounds said that it should not.

    This finding comes from a very small group of students at a single university, but Yash and Mary Lou called for other researchers to replicate the study with other groups of students to investigate further.

    You can watch the full seminar here:

    [youtube https://www.youtube.com/watch?v=0oIGA7NJREI?feature=oembed&w=500&h=281]

    Acknowledging differences to prevent deepening divides

    As researchers and educators, we often hear that we should educate students about the importance of making AI ethical, fair, and accessible to everyone. However, simply hearing this message isn’t the same as truly believing it. If students’ identities influence how they view the use of AI tools, it could affect how they engage with these tools for learning. Without recognising these differences, we risk continuing to create wider and deeper digital divides. 

    Join our next seminar

    The focus of our ongoing seminar series is on teaching programming with or without AI.

    For our next seminar on Tuesday 16 April at 17:00 to 18:30 GMT, we’re joined by Brett A. Becker (University College Dublin), who will talk about how generative AI can be used effectively in secondary school programming education and how it can be leveraged so that students can be best prepared for continuing their education or beginning their careers. To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK

  • Using an AI code generator with school-age beginner programmers

    Using an AI code generator with school-age beginner programmers

    Reading Time: 5 minutes

    AI models for general-purpose programming, such as OpenAI Codex, which powers the AI pair programming tool GitHub Copilot, have the potential to significantly impact how we teach and learn programming. 

    Learner in a computing classroom.

    The basis of these tools is a ‘natural language to code’ approach, also called natural language programming. This allows users to generate code using a simple text-based prompt, such as “Write a simple Python script for a number guessing game”. Programming-specific AI models are trained on vast quantities of text data, including GitHub repositories, to enable users to quickly solve coding problems using natural language. 
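
    To make this concrete, below is the kind of Python program that such a prompt might plausibly produce. This is an illustrative sketch only, not the output of any particular model:

        # A simple number guessing game, as an AI code generator might produce
        # for the prompt above (illustrative sketch only)
        import random

        secret = random.randint(1, 100)    # the number to be guessed
        attempts = 0
        while True:
            guess = int(input("Guess a number between 1 and 100: "))
            attempts += 1
            if guess < secret:
                print("Too low!")
            elif guess > secret:
                print("Too high!")
            else:
                print(f"Correct! You took {attempts} attempts.")
                break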

    As a computing educator, you might ask what the potential is for using these tools in your classroom. In our latest research seminar, Majeed Kazemitabaar (University of Toronto) shared his work in developing AI-assisted coding tools to support students during Python programming tasks.

    Evaluating the benefits of natural language programming

    Majeed argued that natural language programming can enable students to focus on the problem-solving aspects of computing, and support them in fixing and debugging their code. However, he cautioned that students might become overdependent on the use of ‘AI assistants’ and that they might not understand what code is being outputted. Nonetheless, Majeed and colleagues were interested in exploring the impact of these code generators on students who are starting to learn programming.

    Using AI code generators to support novice programmers

    In one study, Majeed’s team investigated whether students’ task and learning performance was affected by an AI code generator. They split 69 students (aged 10–17) into two groups: one group used a code generator in Coding Steps, an environment that enabled log data to be captured, and the other group did not use the code generator.

    A group of male students at the Coding Academy in Telangana.

    Learners who used the code generator completed significantly more authoring tasks — where students manually write all of the code — spent less time completing them, and generated significantly more correct solutions. In multiple choice questions and modifying tasks — where students were asked to modify a working program — students performed similarly whether or not they had access to the code generator.

    A test was administered a week later to check the groups’ performance, and both groups did similarly well. However, the ‘code generator’ group made significantly more errors in authoring tasks where no starter code was given. 

    Majeed’s team concluded that using the code generator significantly increased the completion rate of tasks and student performance (i.e. correctness) when authoring code, and that using code generators did not lead to decreased performance when manually modifying code. 

    Finally, students in the code generator group reported feeling less stressed and more eager to continue programming at the end of the study.

    Student perceptions when (not) using AI code generators

    Understanding how novices use AI code generators

    In a related study, Majeed and his colleagues investigated how novice programmers used the code generator and whether this usage impacted their learning. Working with data from 33 learners (aged 11–17), they analysed 45 tasks completed by students to understand:

    1. The context in which the code generator was used
    2. What learners asked for
    3. How prompts were written
    4. The nature of the outputted code
    5. How learners used the outputted code 

    Their analysis found that students used the code generator for the majority of task attempts (74% of cases), with far fewer tasks attempted without it (26%). Of the task attempts made using the code generator, 61% involved a single prompt, while only 8% involved decomposing the task into multiple prompts for the code generator to solve subgoals; 25% used a hybrid approach — that is, some subgoal solutions were AI-generated and others were written manually.
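
    To illustrate the difference, here is a hypothetical sketch of the hybrid approach for an invented task (“compute the average of the even numbers in a list”); the subgoal split and the prompt are our own illustration, not an example taken from the study:

        # Subgoal 1 -- AI-generated from the prompt:
        # "write a Python function that keeps only the even numbers in a list"
        def keep_evens(numbers):
            return [n for n in numbers if n % 2 == 0]

        # Subgoal 2 -- written manually by the student, reusing the helper
        def average_of_evens(numbers):
            evens = keep_evens(numbers)
            if not evens:
                return 0.0    # the student's own choice for the empty case
            return sum(evens) / len(evens)

        print(average_of_evens([1, 2, 3, 4, 5, 6]))    # prints 4.0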

    When students’ usage patterns were compared against their post-test evaluation scores, there were positive though not statistically significant trends for students who used a hybrid approach (see the image below). Conversely, negative though not statistically significant trends were found for students who used a single-prompt approach.

    A positive correlation between hybrid programming and post-test scores

    Though not statistically significant, these results suggest that the students who actively engaged with tasks — i.e. generating some subgoal solutions, manually writing others, and debugging their own written code — performed better in coding tasks.

    Majeed concluded that while the data showed evidence of self-regulation, such as students writing code manually or adding to AI-generated code, students frequently used the output from single prompts in their solutions, indicating an over-reliance on the output of AI code generators.

    He suggested that teachers should support novice programmers to write better quality prompts to produce better code.  

    If you want to learn more, you can watch Majeed’s seminar:

    [youtube https://www.youtube.com/watch?v=ZmbQYE7hKhE?feature=oembed&w=500&h=281]

    You can read more about Majeed’s work on his personal website. You can also download and use the code generator Coding Steps yourself.

    Join our next seminar

    The focus of our ongoing seminar series is on teaching programming with or without AI. 

    For our next seminar on Tuesday 16 April at 17:00–18:30 GMT, we’re joined by Brett Becker (University College Dublin), who will discuss how generative AI may be effectively utilised in secondary school programming education and how it can be leveraged so that students can be best prepared for whatever lies ahead. To take part in the seminar, click the button below to sign up, and we will send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • Supporting learners with programming tasks through AI-generated Parson’s Problems

    Supporting learners with programming tasks through AI-generated Parson’s Problems

    Reading Time: 6 minutes

    The use of generative AI tools (e.g. ChatGPT) in education is now common among young people (see data from the UK’s Ofcom regulator). As a computing educator or researcher, you might wonder what impact generative AI tools will have on how young people learn programming. In our latest research seminar, Barbara Ericson and Xinying Hou (University of Michigan) shared insights into this topic. They presented recent studies with university student participants on using generative AI tools based on large language models (LLMs) during programming tasks. 

    A girl in a university computing classroom.

    Using Parson’s Problems to scaffold student code-writing tasks

    Barbara and Xinying started their seminar with an overview of their earlier research into using Parson’s Problems to scaffold university students as they learn to program. Parson’s Problems (PPs) are a type of code completion problem where learners are given all the correct code to solve the coding task, but the individual lines are broken up into blocks and shown in the wrong order (Parsons and Haden, 2006). Distractor blocks, which are incorrect versions of some or all of the lines of code (i.e. versions with syntax or semantic errors), can also be included. This means that, to solve a PP, learners need to select the correct blocks as well as place them in the correct order.
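
    To make this concrete, here is a small, hypothetical PP for the Python task of printing the numbers 1 to 5, with one distractor block to discard:

        # Blocks shown to the learner, scrambled, with one distractor:
        #
        #     print(i)
        #     for i in range(5):      <- distractor: off-by-one version
        #     for i in range(1, 6):
        #
        # The assembled correct solution:
        for i in range(1, 6):
            print(i)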

    A presentation slide defining Parson's Problems.

    In one study, the research team asked whether PPs could support university students who are struggling to complete write-code tasks. In the tasks, the 11 study participants had the option to generate a PP when they encountered a challenge trying to write code from scratch, in order to help them arrive at the complete code solution. The PPs acted as scaffolding for participants who got stuck trying to write code. Solutions used in the generated PPs were derived from past student solutions collected during previous university courses. The study had promising results: participants said the PPs were helpful in completing the write-code problems, and 6 participants stated that the PPs lowered the difficulty of the problem and speeded up the problem-solving process, reducing their debugging time. Additionally, participants said that the PPs prompted them to think more deeply.

    A young person codes at a Raspberry Pi computer.

    This study provided further evidence that PPs can be useful in supporting students and keeping them engaged when writing code. However, some participants still had difficulty arriving at the correct code solution, even when prompted with a PP as support. The research team thinks that a possible reason for this could be that only one solution was given to the PP, the same one for all participants. Therefore, participants with a different approach in mind would likely have experienced a higher cognitive demand and would not have found that particular PP useful.

    An example of a coding interface presenting adaptive Parson's Problems.

    Supporting students with varying self-efficacy using PPs

    To understand the impact of using PPs with different learners, the team then undertook a follow-up study asking whether PPs could specifically support students with lower computer science self-efficacy. The results show that study participants with low self-efficacy who were scaffolded with PP support showed significantly higher practice performance and higher problem-solving efficiency compared to participants who had no scaffolding. These findings provide evidence that PPs can create a more supportive environment, particularly for students who have lower self-efficacy or difficulty solving code-writing problems. Another finding was that participants with low self-efficacy were more likely to completely solve the PPs, whereas participants with higher self-efficacy only scanned or partly solved the PPs, indicating that scaffolding in the form of PPs may be redundant for some students.

    Secondary school age learners in a computing classroom.

    These two studies highlighted instances where PPs are more or less relevant depending on a student’s level of expertise or self-efficacy. In addition, the best PP to solve may differ from one student to another, and so having the same PP for all students to solve may be a limitation. This prompted the team to conduct their most recent study to ask how large language models (LLMs) can be leveraged to support students in code-writing practice without hindering their learning.

    Generating personalised PPs using AI tools

    This recent third study focused on the development of CodeTailor, a tool that uses LLMs to generate and evaluate code solutions before generating personalised PPs to scaffold students writing code. Unlike other AI-assisted coding tools that merely output a correct code solution, CodeTailor requires students to actively construct solutions using personalised PPs, encouraging them to engage with solving the problem. The researchers were interested in whether CodeTailor could better support students to actively engage in code-writing.

    An example of the CodeTailor interface presenting adaptive Parson's Problems.

    In a study with 18 undergraduate students, they found that CodeTailor could generate correct solutions based on students’ incorrect code. The CodeTailor-generated solutions were more closely aligned with students’ incorrect code than common previous student solutions were. The researchers also found that most participants (88%) preferred CodeTailor to other AI-assisted coding tools when engaging with code-writing tasks. As the correct solution in CodeTailor is generated based on individual students’ existing strategy, this boosted students’ confidence in their current ideas and progress during their practice. However, some students still reported challenges around solution comprehension, potentially due to CodeTailor not providing sufficient explanation for the details in the individual code blocks of the solution to the PP. The researchers argue that text explanations could help students fully understand a program’s components, objectives, and structure. 

    In future studies, the team is keen to evaluate a design of CodeTailor that generates multiple levels of natural language explanations, i.e. provides personalised explanations accompanying the PPs. They also aim to investigate the use of LLM-based AI tools to generate a self-reflection question structure that students can fill in to extend their reasoning about the solution to the PP.

    Barbara and Xinying’s seminar is available to watch here: 

    [youtube https://www.youtube.com/watch?v=ieFB_C2bq2Y?feature=oembed&w=500&h=281]

    Find examples of PPs embedded in free interactive ebooks that Barbara and her team have developed over the years, including CSAwesome and Python for Everybody. You can also read more about the CodeTailor platform in Barbara and Xinying’s paper.

    Join our next seminar

    The focus of our ongoing seminar series is on teaching programming with or without AI. 

    For our next seminar on Tuesday 12 March at 17:00–18:30 GMT, we’re joined by Yash Tadimalla and Prof. Mary Lou Maher (University of North Carolina at Charlotte). The two of them will share further insights into the impact of AI tools on the student experience in programming courses. To take part in the seminar, click the button below to sign up, and we will send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • Grounded cognition: physical activities and learning computing

    Grounded cognition: physical activities and learning computing

    Reading Time: 4 minutes

    Everyone who has taught children before will know the excited gleam in their eyes when the lessons include something to interact with physically. Whether it’s printed and painstakingly laminated flashcards, laser-cut models, or robots, learners’ motivation to engage with the topic will increase along with the noise levels in the classroom.

    Two learners do physical computing in the primary school classroom.

    However, these hands-on activities are often seen as merely a technique to raise interest, or a nice extra project for children to do before the ‘actual learning’ can begin. But what if this is the wrong way to think about this type of activity? 

    In our 2023 online research seminar series, focused on computing education for primary-aged (K–5) learners, we delved into the most recent research aimed at enhancing learning experiences for students in the earliest stages of education. From a deep dive into teaching variables to exploring the integration of computational thinking, our series has looked at the most effective ways to engage young minds in the subject of computing.

    An adult on a plain background.

    It’s only fitting that in our final seminar in the series, Anaclara Gerosa from the University of Glasgow tackled one of the most fundamental questions in education: how do children actually learn? Beyond the conventional methods, emerging research has been shedding light on a fascinating approach — the concept of grounded cognition. This theory suggests that children don’t merely passively absorb knowledge; they physically interact with it, quite literally ‘grasping’ concepts in the process.

    Grounded cognition, also known in variations as embodied and situated cognition, offers a new perspective on how we absorb and process information. At its core, this theory suggests that all cognitive processes, including language and thought, are rooted in the body’s dynamic interactions with the environment. This notion challenges the conventional view of learning as a purely cognitive activity and highlights the impact of action and simulation.

    A group of learners do physical computing in the primary school classroom.

    There is evidence from many studies in psychology and pedagogy that using hands-on activities can enhance comprehension and abstraction. For instance, finger counting has been found to be essential in understanding numerical systems and mathematical concepts. A recent study in this field has shown that children who are taught basic computing concepts with unplugged methods can grasp abstract ideas from as young as 3. There is therefore an urgent need to understand exactly how we could use grounded cognition methods to teach children computing — which is arguably one of the most abstract subjects in formal education.

    Anaclara is part of a group of researchers at the University of Glasgow who are currently developing a new approach to structuring computing education. Their EIFFEL (Enacted Instrumented Formal Framework for Early Learning in Computing) model suggests a progression from enacted to formal activities.

    Following this model, in the early years of computing education, learners would primarily engage with activities that allow them to work with tangible 3D objects or manipulate intangible objects, for instance in Scratch. Increasingly, students will be able to perform actions in an instrumented or virtual environment which will require the knowledge of abstract symbols but will not yet require the knowledge of programming languages. Eventually, students will have developed the knowledge and skills to engage in fully formal environments, such as writing advanced code.

    A graph illustrating the EIFFEL model for early computing.

    In a recent literature review, Anaclara and her colleagues looked at existing research into using grounded cognition theory in computing education. Although several studies report the use of grounded approaches, for instance using block-based programming, robots, toys, or construction kits, the focus is generally on how concrete objects can be used in unplugged activities in specific contexts, such as where the availability of computing devices is limited.

    The next steps in this area are looking at how activities that specifically follow the EIFFEL framework can enhance children’s learning. 

    You can watch Anaclara’s seminar here: 

    [youtube https://www.youtube.com/watch?v=BgjSKhqRHDU?feature=oembed&w=500&h=281]

    You can also access the presentation slides here.

    Research into grounded cognition activities in computer science is ongoing, but we encourage you to try incorporating more hands-on activities when teaching younger learners and to observe the effects yourself.

    In 2024, we are exploring different ways to teach and learn programming, with and without AI tools. In our next seminar, on 13 February at 17:00 GMT, Majeed Kazemi from the University of Toronto will be joining us to discuss whether AI-powered code generators can help K–12 students learn to program in Python. All of our online seminars are free and open to everyone. Sign up and we’ll send you the link to join on the day.

    Website: LINK

  • Integrating computational thinking into primary teaching

    Integrating computational thinking into primary teaching

    Reading Time: 6 minutes

    “Computational thinking is really about thinking, and sometimes about computing.” – Aman Yadav, Michigan State University

    Young people in a coding lesson.

    Computational thinking is a vital skill if you want to use a computer to solve problems that matter to you. That’s why we consider computational thinking (CT) carefully when creating learning resources here at the Raspberry Pi Foundation. However, educators are increasingly realising that CT skills don’t just apply to writing computer programs, and that CT is a fundamental approach to problem-solving that can be extended into other subject areas. To discuss how CT can be integrated beyond the computing classroom and help introduce the fundamentals of computing to primary school learners, we invited Dr Aman Yadav from Michigan State University to deliver the penultimate presentation in our seminar series on computing education for primary-aged children. 

    In his presentation, Aman gave a concise tour of CT practices for teachers, and shared his findings from recent projects around how teachers perceive and integrate CT into their lessons.

    Research in context

    Aman began his talk by placing his team’s work within the wider context of computing education in the US. The computing education landscape Aman described is dominated by the National Science Foundation’s ambitious goal, set in 2008, to train 10,000 computer science teachers. This objective has led to various initiatives designed to support computer science education at the K–12 level. However, despite some progress, only 57% of US high schools offer foundational computer science courses, only 5.8% of students enrol in these courses, and just 31% of the enrolled students are female. As a result, Aman and his team have worked in close partnership with teachers to address questions that explore ways to more meaningfully integrate CT ideas and practices into formal education, such as:

    • What kinds of experiences do students need to learn computing concepts and to be confident enough to pursue computing?
    • What kinds of knowledge do teachers need to have to facilitate these learning experiences?
    • What kinds of experiences do teachers need to develop these kinds of knowledge? 

    The CT4EDU project

    At the primary education level, the CT4EDU project posed the question “What does computational thinking actually look like in elementary classrooms, especially in the context of maths and science classes?” This project involved collaboration with teachers, curriculum designers, and coaches to help them conceptualise and implement CT in their core instruction.

    A child at a laptop

    During professional development workshops using both plugged and unplugged tasks, the researchers supported educators to connect their day-to-day teaching practice to four foundational CT constructs (illustrated in the sketch after this list):

    1. Debugging
    2. Abstraction
    3. Decomposition
    4. Patterns
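
    As a rough illustration of how these four constructs can surface even in a very small program, here is a hypothetical Python sketch (invented for illustration, not taken from the project):

        # Decomposition: 'find the average rainfall' is split into two
        # smaller problems, each solved by its own function
        def total(values):
            result = 0
            for v in values:           # Patterns: the same accumulate-in-a-loop
                result = result + v    # pattern recurs across counting tasks
            return result

        # Abstraction: callers use average() without needing to know
        # how total() computes its result
        def average(values):
            return total(values) / len(values)

        # Debugging: a quick test exposes the empty-list failure case
        print(average([3, 5, 10]))     # prints 6.0
        # print(average([]))           # ZeroDivisionError -- a bug to handle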

    An emerging aspect of the research team’s work has been the important relationship between vocabulary, belonging, and identity-building, with implications for equity. Actively incorporating CT vocabulary in lesson planning and classroom implementation helps students familiarise themselves with CT ideas: “If young people are using the language, they see themselves belonging in computing spaces”. 

    A main finding from the study is that teachers used CT ideas to explicitly engage students in metacognitive thinking processes, and to help them be aware of their thinking as they solve problems. Rather than teachers using CT solely to introduce their students to computing, they used CT as a way to support their students in whatever they were learning. This constituted a fundamental shift in the research team’s thinking and future work, which is detailed further in a conceptual article.

    The Smithsonian Science for Computational Thinking project

    The work conducted for the CT4EDU project guided the approach taken in the Smithsonian Science for Computational Thinking project. This project entailed the development of a curriculum for grades 3 and 5 that integrates CT into science lessons.

    Teacher and young student at a laptop.

    Part of the project included surveying teachers about the value they place on CT, both before and after participating in professional development workshops focused on CT. The researchers found that even before the workshops, teachers make connections between CT and the rest of the curriculum. After the workshops, an overwhelming majority agreed that CT has value (see image below). From this survey, it seems that CT ties things together for teachers in ways not possible or not achieved with other methods they’ve tried previously.  

    A graph from Aman's seminar.

    Despite teachers valuing the CT approach, asking them to integrate coding into their practices from the start remains a big ask (see image below). Many teachers lack knowledge or experience of coding, and they may not be curriculum designers, which means that we need to develop resources that allow teachers to integrate CT and coding in natural ways. Aman proposes that this requires a longitudinal approach, working with teachers over several years, using plugged and unplugged activities, and working closely with schools’ STEAM or specialist technology teachers where applicable to facilitate more computationally rich learning experiences in classrooms.

    A graph from Aman's seminar.

    Integrated computational thinking

    Aman’s team is also engaged in a research project to integrate CT at middle school level for students aged 11 to 14. This project focuses on the question “What does CT look like in the context of social studies, English language, and art classrooms?”

    For this project, the team conducted three Delphi studies, and consequently created learning pathways for each subject, which teachers can use to bring CT into their classrooms. The pathways specify practices and sub-practices to engage students with CT, and are available on the project website. The image below exemplifies the CT integration pathways developed for the arts subject, where the relationship between art and data is explored from both directions: by using CT and data to understand and create art, and using art and artistic principles to represent and communicate data. 

    Computational thinking in the primary classroom

    Aman’s work highlights the broad value of CT in education. However, to meaningfully integrate CT into the classroom, Aman suggests that we have to take a longitudinal view of the time and methods required to build teachers’ understanding and confidence with the fundamentals of CT, in a way that is aligned with their values and objectives. Aman argues that CT is really about thinking, and sometimes about computing, to support disciplinary learning in primary classrooms. Therefore, rather than focusing on integrating coding into the classroom, he proposes that we should instead talk about using CT practices as the building blocks that provide the foundation for incorporating computationally rich experiences in the classroom. 

    Watch the recording of Aman’s presentation:

    [youtube https://www.youtube.com/watch?v=za77zvoth5E?feature=oembed&w=500&h=281]

    You can access Aman’s seminar slides as well.

    You can find out more about connecting research to practice for primary computing education by watching the recordings of the other seminars in our series on primary (K–5) teaching and learning. In particular, Bobby Whyte discusses similar concepts to Aman in his talk on integrating primary computing and literacy through multimodal storytelling.

    Sign up for our seminars

    Our 2024 seminar series is on the theme of teaching programming, with or without AI. In this series, we explore the latest research on how teachers can best support school-age learners to develop their programming skills.

    On 13 February, we’ll hear from Majeed Kazemi (University of Toronto) about his work investigating whether AI code generator tools can support K-12 students to learn Python programming.

    Sign up now to join the seminar:

    Website: LINK

  • Engaging primary Computing teachers in culturally relevant pedagogy through professional development

    Engaging primary Computing teachers in culturally relevant pedagogy through professional development

    Reading Time: 8 minutes

    Underrepresentation in computing is a widely known issue, in industry and in education. To cite some statistics from the UK: a Black British Voices report from August 2023 noted that 95% of respondents believe the UK curriculum neglects black lives and experiences; fewer students from working class backgrounds study GCSE Computer Science; when they leave formal education, fewer female, BAME, and white working class people are employed in the field of computer science (Kemp 2021); only 21% of GCSE Computer Science students, 15% at A level, and 22% at undergraduate level are female (JCQ 2020, Ofqual 2020, UCAS 2020); students with additional needs are also underrepresented.

    In a computing classroom, two girls concentrate on their programming task.

    Such statistics have been the status quo for too long. Many Computing teachers already endeavour to bring about positive change where they can and engage learners by including their interests in the lessons they deliver, so how can we support them to do this more effectively? Extending the reach of computing so that it is accessible to all also means that we need to consider what formal and informal values predominate in the field of computing. What is the ‘hidden’ curriculum in computing that might be excluding some learners? Who is and who isn’t represented?

    Katharine Childs.
    Katharine Childs (Raspberry Pi Foundation)

    In a recent research seminar, Katharine Childs from our team outlined a research project we conducted, which included a professional development workshop to increase primary teachers’ awareness of and confidence in culturally relevant pedagogy. In the workshop, teachers considered how to effectively adapt curriculum materials to make them culturally relevant and engaging for the learners in their classrooms. Katharine described the practical steps teachers took to adapt two graphics-related units, and invited seminar participants to apply their learning to a graphics activity themselves.

    What is culturally relevant pedagogy?

    Culturally relevant pedagogy is a teaching framework which values students’ identities, backgrounds, knowledge, and ways of learning. By drawing on students’ own interests, experiences, and cultural knowledge, educators can increase the likelihood that the curriculum they deliver is more relevant, engaging, and accessible to all.

    The idea of culturally relevant pedagogy was first introduced in the US in the 1990s by African-American academic Gloria Ladson-Billings (Ladson-Billings 1995). Its aim was threefold: to raise students’ academic achievement, to develop students’ cultural competence and to promote students’ critical consciousness. The idea of culturally responsive teaching was later advanced by Geneva Gay (2000) and more recently  brought into focus in US computer science education by Kimberly Scott and colleagues (2015). The approach has been localised for England by Hayley Leonard and Sue Sentance (2021) in work they undertook here at the Foundation.

    Ten areas of opportunity

    Katharine began her presentation by explaining that the professional development workshop in the Primary culturally adapted resources for computing project built on two of our previous research projects to develop guidelines for culturally relevant and responsive computing and understand how teachers used them in practice. This third project ran as a pilot study funded by Cognizant, starting in Autumn 2022 with a one-day, in-person workshop for 13 primary computing teachers.

    The research structure was a workshop, followed by adaptation of the resources, then delivery of the resources, and evaluation through a parent survey, teacher interviews, and student focus groups.

    Katharine then introduced us to the 10 areas of opportunity (AO) our research at the Raspberry Pi Computing Education Research Centre had identified for culturally relevant pedagogy. These 10 areas were used as practical prompts to frame the workshop discussions:

    1. Find out about learners
    2. Find out about ourselves as teachers
    3. Review the content
    4. Review the context
    5. Make the learning accessible to all
    6. Provide opportunities for open-ended and problem solving activities
    7. Promote collaboration and structured group discussion
    8. Promote student agency through choice
    9. Review the learning environment
    10. Review related policies, processes, and training in your school and department

    At first glance it is easy to think that you do most of those things already, or to disregard some items as irrelevant to the computing curriculum. What would your own cultural identity (see AO2) have to do with computing, you might wonder. But taking a less complacent perspective might lead you to consider all the different facets that make up your identity and then to think about the same for the students you teach. You may discover that there are many areas which you have left untapped in your lesson planning.

    Two young people learning together at a laptop.

    Katharine explained how this is where the professional development workshop showed itself as beneficial for the participants. It gave teachers the opportunity to reflect on how their cultural identity impacted on their teaching practices — as a starting point to learning more about other aspects of the culturally relevant pedagogy approach.

    Our researchers were interested in how they could work alongside teachers to adapt two computing units to make them more culturally relevant for teachers’ specific contexts. They used the Computing Curriculum units on Photo Editing (Year 4) and Vector Graphics (Year 5).

    A slide about adapting an emoji teaching activity to make it culturally relevant.

    Katharine illustrated some of the adaptations teachers and researchers working together had made to the emoji activity above, and which areas of opportunity (AO) had been addressed; this aspect of the research will be reported in later publications.

    Results after the workshop

    Although the number of participants in this pilot study was small, the findings show that the professional development workshop significantly increased teachers’ awareness of culturally relevant pedagogy and their confidence in adapting resources to take account of local contexts:

    • After the workshop, 10/13 teachers felt more confident to adapt resources to be culturally relevant for their own contexts, and 8/13 felt more confident in adapting resources for others.
    • Before the workshop, 5/13 teachers strongly agreed that it was an important part of being a computing teacher to examine one’s own attitudes and beliefs about race, gender, disabilities, and sexual orientation. After the workshop, the number in agreement rose to 12/13.
    • After the workshop, 13/13 strongly agreed that part of a computing teacher’s responsibility is to challenge teaching practices which maintain social inequities (compared to 7/13 previously).
    • Before the workshop, 4/13 teachers strongly agreed that it is important to allow student choice when designing computing activities; this increased to 9/13 after the workshop.

    These quantitative shifts in perspective indicate a positive effect of the professional development pilot. 

    Katharine described that in our qualitative interviews, the participating teachers expressed that their understanding of culturally relevant pedagogy had increased and that they recognised the many benefits of the approach for learners. They valued the opportunity to discuss their contexts and to adapt materials they currently used alongside other teachers, because this made the workshop a more ‘authentic’ and practical professional development experience.

    The seminar ended with breakout sessions inviting viewers to consider possible adaptations that could be made to the graphics activities which had been the focus of the workshop.

    In the breakout sessions, attendees also discussed specific examples of culturally relevant teaching practices that had been successful in their own classrooms, and they considered how schools and computing educational initiatives could support teachers in their efforts to integrate culturally relevant pedagogy into their practice. Some attendees observed that it was not always possible to change schemes of work without a ‘whole-school’ approach, senior leadership team support, and commitment to a research-based professional development programme.

    Where do you see opportunities for your teaching?

    The seminar reminds us that the education system is not culture neutral and that teachers generally transmit the dominant culture (which may be very different from their students’) in their settings (Vrieler et al, 2022). Culturally relevant pedagogy is an attempt to address the inequities and biases that exist, which result in many students feeling marginalised, disenfranchised, or underachieving. It urges us to incorporate learners’ cultures and experiences in our endeavours  to create a more inclusive computing curriculum; to adopt an intersectional lens so that all can thrive.

    Secondary school age learners in a computing classroom.

    As a pilot study, the workshop was offered to a small cohort of 13, yet the findings show that the intervention significantly increased participants’ awareness of culturally relevant pedagogy and their confidence in adapting resources to take account of local contexts.

    Of course there are many ways in which teachers already adapt resources to make them interesting and accessible to their pupils. Further examples of the sort of adaptations you might make using these areas of opportunity include:

    • AO1: You could find out to what extent learners feel like they ‘belong’ or are included in a particular computing-related career. This is sure to yield valuable insights into learners’ knowledge and/or preconceptions of computing-related careers. 
    • AO3: You could introduce topics such as the ethics of AI, data bias, investigations of accessibility and user interface design. 
    • AO4: You might change the context of a unit of work on the use of conditional statements in programming, from creating a quiz about ‘Vikings’ to focusing on, for example, aspects of youth culture which are more engaging to some learners, such as football or computer games, or on religious celebrations, which may be more meaningful to others (see the sketch after this list).
    • AO5: You could experiment with a particular pedagogical approach to maximise the accessibility of a unit of work. For example, you could structure a programming unit by using the PRIMM model, or follow the Universal Design for Learning framework to differentiate for diversity.
    • AO6/7: You could offer more open-ended and collaborative activities once in a while, to promote engagement and to allow learners to express themselves autonomously.
    • AO8: By allowing learners to choose topics which are relevant or familiar to their individual contexts and identities, you can increase their feeling of agency. 
    • AO9: You could review both your learning materials and your classroom to ensure that all your students are fully represented.
    • AO10: You can bring colleagues on board too; the whole enterprise of embedding culturally relevant pedagogy will be more successful when school- as well as department-level policies are reviewed and prioritised.
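
    As a sketch of the kind of context change AO4 describes, the programming content (a conditional statement) stays exactly the same while the question is adapted to learners’ interests. This is a minimal, hypothetical example:

        # One quiz question using a conditional statement; only the context
        # changes between a 'Vikings' version and a football version
        answer = input("Which country won the 2018 FIFA World Cup? ")
        if answer.strip().lower() == "france":
            print("Correct!")
        else:
            print("Not quite -- have another go.")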

    Can you see an opportunity for integrating culturally relevant pedagogy in your classroom? We would love to hear about examples of culturally relevant teaching practices that you have found successful. Let us know your thoughts or questions in the comments below.

    You can watch Katharine’s seminar here:

    [youtube https://www.youtube.com/watch?v=qYAzmLDiwa8?feature=oembed&w=500&h=281]

    You can download her presentation slides on our ‘previous seminars’ page, and you can read her research paper.

    To get a practical overview of culturally relevant pedagogy, read our 2-page Quick Read on the topic and download the guidelines we created with a group of teachers and academic specialists.

    Tomorrow we’ll be sharing a blog about how the learners who engaged with the culturally adapted units found the experience, and how it affected their views of computing. Follow us on social media so you don’t miss it!

    Join our upcoming seminars live

    On 12 December we’ll host the last seminar session in our series on primary (K-5) computing. Anaclara Gerosa will share her work on how to design and structure early computing activities that promote and scaffold students’ conceptual understanding. As always, the seminar is free and takes place online at 17:00–18:30 GMT / 12:00–13:30 ET / 9:00–10:30 PT / 18:00–19:30 CET. Sign up and we’ll send you the link to join on the day.

    In 2024, our new seminar series will be about teaching and learning programming, with and without AI tools. If you’re signed up to our seminars, you’ll receive the link to join every monthly seminar.

    Website: LINK

  • Spotlight on teaching programming with and without AI in our 2024 seminar series

    Spotlight on teaching programming with and without AI in our 2024 seminar series

    Reading Time: 3 minutes

    How do you best teach programming in school? It’s one of the core questions for primary and secondary computing teachers. That’s why we’re making it the focus of our free online seminars in 2024. You’re invited to attend and hear about the newest research on the teaching and learning of programming, with or without AI tools.

    Two smiling adults learn about computing at desktop computers.

    Building on the success and the friendly, accessible session format of our previous seminars, this coming year we will delve into the latest trends and innovative approaches to programming education in school.

    Secondary school age learners in a computing classroom.

    Our online seminars are for everyone interested in computing education

    Our monthly online seminars are not only for computing educators but also for everyone else who is passionate about teaching young people to program computers. The seminar participants are a diverse community of teachers, technology enthusiasts, industry professionals, coding club volunteers, and researchers.

    Two adults learn about computing at desktop computers.

    With the seminars we aim to bridge the gap between the newest research and practical teaching. Whether you are an educator in a traditional classroom setting or a mentor guiding learners in a CoderDojo or Code Club, you will gain insights from leading researchers about how school-age learners engage with programming. 

    What to expect from the seminars

    Each online seminar begins with an expert presenter delivering their latest research findings in an accessible way. We then move into small groups to encourage discussion and idea exchange. Finally, we come back together for a Q&A session with the presenter.

    Here’s what attendees had to say about our previous seminars:

    “As a first-time attendee of your seminars, I was impressed by the welcoming atmosphere.”

    “[…] several seminars (including this one) provided valuable insights into different approaches to teaching computing and technology.”

    “I plan to use what I have learned in the creation of curriculum […] and will pass on what I learned to my team.”

    “I enjoyed the fact that there were people from different countries and we had a chance to see what happens elsewhere and how that may be similar and different to what we do here.”

    January seminar: AI-generated Parson’s Problems

    Computing teachers know that, for some students, learning about the syntax of programming languages is very challenging. Working through Parson’s Problem activities can be a way for students to learn to make sense of the order of lines of code and how syntax is organised. But for teachers it can be hard to precisely diagnose their students’ misunderstandings, which in turn makes it hard to create activities that address them.

    A group of students and a teacher at the Coding Academy in Telangana.

    At our first 2024 seminar on 9 January, Dr Barbara Ericson and Xinying Hou (University of Michigan) will present a promising new approach to helping teachers solve this difficulty. In one of their studies, they combined Parson’s Problems and generative AI to create targeted activities for students based on the errors students had made in previous tasks. In this way, they were able to provide personalised activities that directly addressed gaps in the students’ learning.

    Sign up now to join our seminars

    All our seminars start at 17:00 UK time (18:00 CET / 12:00 noon ET / 9:00 PT) and are held online on Zoom. To ensure you don’t miss out, sign up now to receive calendar invitations and access links for each seminar on the day.

    If you sign up today, we’ll also invite you to our 12 December seminar with Anaclara Gerosa (University of Glasgow) about how to design and structure computing activities for young learners, the final session in our 2023 series about primary (K-5) computing education.

    Website: LINK

  • Support for new computing teachers: A tool to find Scratch programming errors

    Support for new computing teachers: A tool to find Scratch programming errors

    Reading Time: 5 minutes

    We all know that learning to program, and specifically learning how to debug or fix code, can be frustrating and leave beginners overwhelmed and disheartened. In a recent blog article, our PhD student Lauria at the Raspberry Pi Computing Education Research Centre highlighted the pivotal role that teachers play in shaping students’ attitudes towards debugging. But what about teachers who are coding novices themselves?

    Two adults learn about computing at desktop computers.

    In many countries, primary school teachers are holistic educators and often find themselves teaching computing despite having little or no experience in the field. In a recent seminar of our series on computing education for primary-aged children, Luisa Greifenstein told attendees that struggling with debugging and negative attitudes towards programming were among the top ten challenges mentioned by teachers.

    Luisa Greifenstein.

    Luisa is a researcher at the University of Passau, Germany, and has been working closely with both teacher trainees and experienced primary school teachers in Germany. She’s found that giving feedback to students can be difficult for primary school teachers, and especially for teacher trainees, as programming is still new to them. Luisa’s seminar introduced a tool to help.

    A unique approach: Visualising debugging with LitterBox

    To address this issue, the University of Passau has initiated the primary::programming project. One of its flagship tools, LitterBox, offers a unique solution to debugging and is specifically designed for Scratch, a beginners’ programming language widely used in primary schools.

    A screenshot from the LitterBox tool.
    You can upload Scratch program files to LitterBox to analyse them. Click to enlarge.

    LitterBox serves as a static code debugging tool that transforms code examination into an engaging experience. With a nod to the Scratch cat, the tool visualises the debugging of Scratch code as checking the ‘litterbox’, categorising issues into ‘bugs’ and ‘smells’:

    • Bugs represent code patterns that have gone wrong, such as missing loops or specific blocks
    • Smells indicate code that works but is poorly structured, for example because of duplications or unnecessary elements

    A screenshot from the LitterBox tool.
    The code patterns LitterBox recognises. Click to enlarge.

    What sets LitterBox apart is that it also rewards correct code by displaying ‘perfumes’. For instance, it will praise correct broadcasting or the use of custom blocks. For every identified problem or achievement, the tool provides short and direct feedback.

    A screenshot from the LitterBox tool.
    LitterBox also identifies good programming practice. Click to enlarge.
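
    To give a feel for the idea behind such a tool (not for LitterBox’s actual implementation, which analyses real Scratch project files), a static checker scans code for known patterns and sorts its findings into categories. Here is a minimal, hypothetical Python sketch using an invented text representation of a script:

        # Toy pattern-based static analysis in the spirit of bugs/smells/perfumes;
        # the script format and the patterns are invented for illustration
        script = ["when green flag clicked", "move 10 steps", "move 10 steps",
                  "broadcast start", "when I receive start", "say hello"]

        findings = {"bugs": [], "smells": [], "perfumes": []}

        # Smell: the same block repeated back to back (duplication)
        for first, second in zip(script, script[1:]):
            if first == second:
                findings["smells"].append(f"duplicated block: {first!r}")

        # Bug: a broadcast whose message has no matching receiver
        sent = {b for b in script if b.startswith("broadcast")}
        received = {b.replace("when I receive", "broadcast") for b in script
                    if b.startswith("when I receive")}
        findings["bugs"].extend(f"unmatched {b!r}" for b in sent - received)

        # Perfume: broadcasting used correctly (sender and receiver both present)
        findings["perfumes"].extend(f"matched {b!r}" for b in sent & received)

        print(findings)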

    Luisa and her team conducted a study to gauge the effectiveness of LitterBox. In the study, teachers were given fictitious student code with bugs and were asked to first debug the code themselves and then explain in a manner appropriate to a student how to do the debugging.

    The results were promising: teachers using LitterBox outperformed a control group with no access to the tool. However, the team also found that not all hints proved equally helpful. When hints lacked direct relevance to the code at hand, teachers found them confusing, which highlighted the importance of refining the tool’s feedback mechanisms.

    A bar chart showing that LitterBox helps computing teachers.

    Despite its limitations, LitterBox proved helpful in another important aspect of the teachers’ work: creating coding tasks. Novice students require structured tasks and help sheets when learning to code, and teachers often invest substantial time in developing these resources. While LitterBox does not guide educators in generating new tasks or adapting them to their students’ needs, in a second study conducted by Luisa’s team, teachers who had access to LitterBox not only received support in debugging their own code, but also provided more scaffolding in the task instructions they created for their students than teachers without access to the tool.

    How to maximise the impact of new tools: use existing frameworks and materials

    One important realisation that we had in the Q&A phase of Luisa’s seminar was that many different research teams are working on solutions for similar challenges, and that the impact of this research can be maximised by integrating new findings and resources. For instance, what the LitterBox tool cannot offer could be filled by:

    • Pedagogical frameworks to enhance teachers’ lessons and feedback structures. Frameworks such as PRIMM (Predict, Run, Investigate, Modify, and Make) or TIPP&SEE for Scratch projects (Title, Instructions, Purpose, Play & Sprites, Events, Explore) can serve as valuable resources. These frameworks provide a structured approach to lesson design and teaching methodologies, making it easier for teachers to create engaging and effective programming tasks. Additionally, by adopting semantic waves in the feedback for teachers and students, a deeper understanding of programming concepts can be fostered. 
    • Existing courses and materials to aid task creation and adaptation. Our expert educators at the Raspberry Pi Foundation have not only created free lesson plans and courses for teachers and educators, but also dedicated non-formal learning paths for Scratch, Python, Unity, web design, and physical computing that can serve as a starting point for classroom tasks.

    Exploring innovative ideas in computing education

    As we navigate the evolving landscape of programming education, it’s clear that innovative tools like LitterBox can make a significant difference in the journey of both educators and students. By equipping educators with effective debugging and task creation solutions, we can create a more positive and engaging learning experience for students.

    If you’re an educator, consider exploring how such tools can enhance your teaching and empower your students in their coding endeavours.

    You can watch the recording of Luisa’s seminar here:

    [youtube https://www.youtube.com/watch?v=ITbIVUc2Rxc?feature=oembed&w=500&h=281]

    Sign up now to join our next seminar

    If you’re interested in the latest developments in computing education, join us at one of our free, monthly seminars. In these sessions, researchers from all over the world share their innovative ideas and are eager to discuss them with educators and students. In our December seminar, Anaclara Gerosa (University of Glasgow) will share her findings about how to design and structure early-years computing activities.

    This will be the final seminar in our series about primary computing education. Look out for news about the theme of our 2024 seminar series, which is coming soon.

    Website: LINK

  • Young children’s ScratchJr coding projects: Assessment and support

    Young children’s ScratchJr coding projects: Assessment and support

    Reading Time: 5 minutes

    Block-based programming applications like Scratch and ScratchJr provide millions of children with an introduction to programming; they are a fun and accessible way for beginners to explore programming concepts and start making with code. ScratchJr, in particular, is designed specifically for children between the ages of 5 and 7, enabling them to create their own interactive stories and games. So it’s no surprise that they are popular tools for primary-level (K–5) computing teachers and learners. But how can teachers assess coding projects built in ScratchJr, where the possibilities are many and children are invited to follow their imagination?

    Aim Unahalekhala
    Aim Unahalekhala

    In the latest seminar of our series on computing education for primary-aged children, attendees heard about two research studies that explore the use of ScratchJr in K–2 education. The speaker, Apittha (Aim) Unahalekhala, is a graduate researcher at the DevTech Research Group at Tufts University. The two studies looked at assessing young children’s ScratchJr coding projects and understanding how they create projects. Both of the studies were part of the Coding as Another Language project, which sees computer science as a new literacy for the 21st century, and is developing a literacy-based coding curriculum for K–2.

    How to evaluate children’s ScratchJr projects

    ScratchJr offers children 28 blocks to choose from when creating a coding project. Some of these are simple, such as blocks that determine the look of a character or setting, while others are more complex, such as messaging blocks and loops. Children can combine the blocks in many different ways to create projects of different levels of complexity.

    A child selects blocks for a ScratchJr project on a tablet

    At the start of her presentation, Aim described a rubric that she and her colleagues at DevTech have developed to assess three key aspects of a ScratchJr coding project. These aspects are coding concepts, project design, and purposefulness.

    • Coding concepts in ScratchJr are sequencing, repeats, events, parallelism, coordination, and the number parameter
    • Project design includes elaboration (number of settings and characters, use of speech bubbles) and originality (character and background customisation, animated looks, sounds)
    • Purposefulness captures how well the project fulfils the purpose of the task it was created for

    The rubric lets educators or researchers:

    • Assess learners’ ability to use their coding knowledge to create purposeful and creative ScratchJr projects
    • Identify the level of mastery of each of the three key aspects demonstrated within the project
    • Identify where learners might need more guidance and support
    The elements covered by the ScratchJr project evaluation rubric
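
    To picture how such a rubric might be applied, here is a minimal sketch in Python. The three aspect names come from the seminar, but the 0–3 levels, the function, and the example scores are hypothetical, not the actual DevTech rubric:

    ```python
    # Illustrative sketch of rubric-based scoring (hypothetical levels, not the
    # actual DevTech rubric). A teacher assigns each aspect a level from 0 to 3.

    RUBRIC_ASPECTS = {
        "coding_concepts": ["sequencing", "repeats", "events", "parallelism",
                            "coordination", "number parameter"],
        "project_design": ["elaboration", "originality"],
        "purposefulness": ["fulfils the purpose of the task"],
    }

    def project_score(levels: dict) -> int:
        """Sum the teacher-assigned 0-3 level for each rubric aspect."""
        return sum(levels[aspect] for aspect in RUBRIC_ASPECTS)

    # Example: a project that is strong on design but weaker on coding concepts
    print(project_score({"coding_concepts": 1, "project_design": 3, "purposefulness": 2}))  # 6
    ```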

    As part of the study, Aim and her colleagues collected coding projects from two schools at the start, middle, and end of a curriculum unit. They used the rubric to evaluate the coding projects and found that project scores increased over the course of the unit.

    They also found that, overall, the scores for the project design elements were higher than those for coding concepts: many learners enjoyed spending lots of time designing their characters and settings, but made less use of other features. However, the two scores were correlated: learners who scored highly on project design also tended to score highly on coding concepts.
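
    For readers curious about what ‘correlated’ means in practice: given per-project scores for the two aspects, the relationship can be checked with a standard correlation coefficient. The scores below are made-up placeholders to show the mechanics, not data from the study:

    ```python
    # Checking whether project design scores and coding concept scores rise
    # together, using Pearson's r (placeholder data, not the study's results).
    from scipy.stats import pearsonr

    design_scores  = [3, 1, 2, 3, 0, 2, 1, 3]   # hypothetical per-project scores
    concept_scores = [2, 1, 2, 3, 1, 2, 0, 3]

    r, p_value = pearsonr(design_scores, concept_scores)
    print(f"r = {r:.2f}, p = {p_value:.3f}")    # r close to +1: scores rise together
    ```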

    The rubric is a useful tool for any teachers using ScratchJr with their students. If you want to try it in your classroom, the validated rubric is free to download from the DevTech research group’s website.

    How do young children create a project?

    The rubric assesses the output created by a learner using ScratchJr. But learning is a process, not just an end outcome, and the final project might not always be an accurate reflection of a child’s understanding.

    By understanding more about how young children create coding projects, we can improve teaching and curriculum design for early childhood computing education.

    In the second study Aim presented, she set out to explore this question. She conducted a qualitative observation of children as they created coding projects at different stages of a curriculum unit, and used Google Analytics data to conduct a quantitative analysis of the steps the children took.
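
    As a rough idea of what this kind of quantitative analysis can involve, the sketch below counts how often each editing action appears in one child’s session. The event names are hypothetical, not the study’s actual Google Analytics schema:

    ```python
    # Counting editing actions in a hypothetical app event log to see how a
    # child's project-creation process unfolds (not the study's data schema).
    from collections import Counter

    session_events = [
        "add_character", "customise_character", "add_block", "add_block",
        "run_project", "add_block", "customise_background", "run_project",
    ]

    print(Counter(session_events).most_common())  # which actions dominate the session
    ```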

    A ScratchJr project creation process involving iteration

    Her findings highlighted the importance of encouraging young learners to explore the full variety of blocks available, both by guiding them in how to find and use different blocks, and by giving them the time and tools they need to explore on their own.

    She also found that different teaching strategies are needed at different stages of the curriculum unit to support learners. This helps them to develop their understanding of both basic and advanced blocks, and to explore, customise, and iterate their projects.

    Early-unit strategy:

    • Encourage free play so children discover different functions for themselves, especially the basic blocks

    Mid-unit strategy:

    • Plan how much time children will spend on customising versus coding
    • Give more guidance on the advanced blocks, then let children explore

    End-of-unit strategy:

    • Provide multiple sessions for children to work on their projects
    • Promote iteration by encouraging children to keep improving code and adding details
    Teaching strategies for different stages of a ScratchJr curriculum

    You can watch Aim’s full presentation here:

    [youtube https://www.youtube.com/watch?v=--8mzzrZ0ME]

    You can also access the seminar slides here.

    Join our next seminar on primary computing education

    At our next seminar, we welcome Aman Yadav (Michigan State University), who will present research on computational thinking in primary school. The session will take place online on Tuesday 7 November at 17:00 UK time. Don’t miss out and sign up now.

    To find out more about connecting research to practice for primary computing education, you can find the rest of our upcoming monthly seminars on primary (K–5) teaching and learning and watch the recordings of previous seminars in this series.
