Tag: Research

  • Bringing data science to life for K–12 students with the ‘API Can Code’ curriculum

    Reading Time: 7 minutes

    As data and data-driven technologies become a bigger part of everyday life, it’s more important than ever to make sure that young people are given the chance to learn data science concepts and skills.

    In our April research seminar, David Weintrop, Rotem Israel-Fishelson, and Peter Moon from the University of Maryland introduced API Can Code, a data science curriculum designed with high school students for high school students. Their talk explored how their innovative work uses real-world data and students’ own experiences and interests to create meaningful, authentic learning experiences in data science.

    Quick note for educators: Are you interested in joining our free, exploratory data science education workshop for teachers on 10 July 2025 in Cambridge, UK? Then find out the details here.

    David started by explaining the motivation behind the API Can Code project. The team’s goal was not to turn students into future data scientists, but to offer students the data literacy they need to explore and critically engage with a data-driven world. 

    The work was also guided by a shared view among leading teachers’ organisations that data science should be taught across all subjects in the K–12 curriculum. It also drew on strong research showing that connecting educational experiences with students’ own lives and interests leads to deeper engagement and better learning outcomes.

    Reviewing the landscape

    To prepare for the design of the curriculum, David, Rotem, and Peter wanted to understand what data science education options already exist for K–12 students. Rotem described how they compared four major K–12 data science curricula and examined different aspects, such as the topics they covered and the datasets they used. Their findings showed that many datasets were quite small, and that the datasets used were not always about topics that students were interested in.

    A classroom of young learners and a teacher at laptops

    The team also looked at 30 data science tools used across different K–12 platforms and analysed what each could do. They found that tools varied in how effective they were and that many lacked accessibility features to support students with diverse learning needs. 

    This analysis helped to refine the team’s objective: to create a data science curriculum that students find interesting and that is informed by their values and voices.

    Participatory design

    To work towards this goal, the team used a methodology called participatory design. This is an approach that actively involves the end users — in this case, high school students — in the design process. During several in-person sessions with 28 students aged 15 to 18, the researchers facilitated low-tech, hands-on activities exploring the students’ identities and interests and how they think about data.

    One activity, Empathy Map, involved students working together to create a persona representing a student in their school. They were asked to describe the persona’s daily life, interests, and concerns about technology and data.

    The students’ involvement in the design process gave the team a better understanding of young people’s views and interests, which shaped the design of the API Can Code curriculum.

    API Can Code: three units, three key tools

    Peter provided an overview of the API Can Code curriculum. It follows a three-unit flow covering different concepts and tools in each unit:

    1. Unit 1 introduces students to different types of data and data science terminology. The unit explores the role of data in the students’ daily lives, how use and misuse of data can affect them, different ways of collecting and presenting data, and how to evaluate databases for aspects such as size, recency, and trustworthiness. It also introduces them to RapidAPI, a hub that connects to a wide range of APIs from different providers, allowing students to access real-world data such as Zillow housing prices or Spotify music data.
    2. Unit 2 covers the computing skills used in data science, including using programming tools to apply data science techniques efficiently. Students learn to use EduBlocks, a block-based programming environment where they can pull in JSON data from RapidAPI and process and filter it without needing extensive text-based programming skills (a sketch of this workflow follows the list). The students also compare this approach with manual data processing, which they discover is far slower.
    3. Unit 3 focuses on data analysis, visualisation, and interpretation. Students use CODAP, a web-based interactive data science tool, to calculate summary statistics, create graphs, and perform analyses. CODAP is a user-friendly but powerful platform, making it well suited for students to analyse and visualise their data sets. Students also practise interpreting pre-made graphs alongside the graphs and statistics they create themselves.
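
    To make the Unit 2 workflow concrete, here is a minimal sketch of the same fetch-and-filter pipeline in plain Python. EduBlocks generates Python-like programs, but the endpoint, header values, and field names below are invented placeholders, not the actual APIs or datasets used in the curriculum.

    ```python
    import requests

    # Hypothetical endpoint, key, and field names -- placeholders for
    # illustration, not the actual services used in the curriculum.
    URL = "https://example-housing-api.p.rapidapi.com/listings"
    HEADERS = {"X-RapidAPI-Key": "YOUR_API_KEY"}

    response = requests.get(URL, headers=HEADERS, params={"city": "College Park"})
    listings = response.json()  # parse the JSON payload into Python objects

    # Filter the records, as students do with blocks in EduBlocks:
    # keep only the listings priced under $400,000.
    affordable = [home for home in listings if home.get("price", 0) < 400_000]
    print(f"{len(affordable)} of {len(listings)} listings are under $400,000")
    ```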

    Peter described an example activity carried out by the students, showing how these three units flow together and build both technical skills and an understanding of the real-world uses of data science. Students were tasked with analysing a dataset from Zillow, a property website, to explore the question “How much does a house in my neighbourhood cost?” The images below show the process the students followed, which uses the data science skills and tools from all three units of the curriculum.

    Interest-driven learning in action

    A central tenet of API Can Code is that students should explore data that matters to them. A diverse range of student interests was identified during the design work, and the curriculum uses these areas of interest, such as music, movies, sports, and animals, throughout the lessons.

    The curriculum also features an open-ended final project, where students can choose a research question that is important to them and their lives, and answer it using data science skills.

    The team shared two examples of memorable final projects. In one, a student set out to answer the question “Is Jhené Aiko a star?” The student found a publicly available dataset through an API provided by Deezer, a music streaming platform. She wrote a program that retrieved data on the artist’s longevity and collaborations, analysed the data, and concluded that Aiko is indeed a star. What stood out about this project wasn’t just the fact that the student independently defined stardom and answered her research question using real data, but that this was a truly personal, interest-driven project. David noted that the researchers could never have come up with this activity, since they had never previously heard of Jhené Aiko!

    Jhené Aiko, an R&B singer-songwriter
    (Photo by Charito Yap, licensed under CC BY-ND 2.0)
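
    As an illustration of the kind of program the student might have written, here is a minimal sketch using Deezer’s public API. The talk did not share the student’s actual code or metrics, so the endpoints shown and the crude ‘longevity’ and collaboration measures below are assumptions for illustration only.

    ```python
    import requests

    BASE = "https://api.deezer.com"

    # Find the artist's Deezer ID via the public search endpoint.
    artist = requests.get(f"{BASE}/search/artist", params={"q": "Jhené Aiko"}).json()["data"][0]

    # Fetch the artist's releases, which include release dates.
    albums = requests.get(f"{BASE}/artist/{artist['id']}/albums").json()["data"]

    # One rough measure of longevity: the span of release years.
    years = sorted(int(a["release_date"][:4]) for a in albums if a.get("release_date"))
    print(f"Releases span {years[0]}-{years[-1]}")

    # One rough proxy for collaborations: releases crediting other artists.
    collabs = [a["title"] for a in albums if "feat" in a["title"].lower()]
    print(f"{len(collabs)} collaborative releases found")
    ```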

    Another student’s project analysed data about housing in Washington DC to answer the question “Which ward in DC has the most affordable houses?” Rotem explained that this student was motivated by her family thinking about moving away from the city. She wanted to use her project to persuade her parents to stay by identifying the most affordable ward in DC that they could move to. She was excited by the outcome of her project, and she presented her findings to other students and her parents.

    These projects underscore the power of personally important data science projects driven by students’ interests. When students care about the questions they are exploring, they’re more invested in the process and more likely to keep using the skills and concepts they learn.

    Resources

    API Can Code is available online and completely free to use. Teachers can access lesson plans, tutorial videos, assessment rubrics, and more from the curriculum’s website https://apicancode.umd.edu/. The site also provides resources to support students, including example programs and glossaries.

    Join our next seminar

    In our current seminar series, we’re exploring teaching about AI and data science. Join us at our next seminar on Tuesday, 17 June from 17:00 to 18:30 BST to hear Netta Iivari (University of Oulu) introduce transformative agency and its importance for children’s computing education in the age of AI.

    To sign up and take part in our research seminars, click below:

    You can also view the schedule of our upcoming seminars, and catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • How well do you know our I/O 2025 announcements?

    Reading Time: < 1 minute

    No, but you can still be an I/O pro. Originally, the name I/O was based on the first two digits in a googol (a one, followed by 100 zeroes), the number that lends our company its name. According to lore, I/O has evolved to also nod to “input / output,” referencing the computational concept of interfacing between a computer system and the outside world, and “innovation in the open.” Pretty fitting, don’t you think?

    Website: LINK

  • Research insights to help learners develop data awareness

    Reading Time: 7 minutes

    An increasing number of frameworks describe the possible contents of a K–12 artificial intelligence (AI) curriculum and suggest possible learning activities (for example, see the UNESCO competency framework for students, 2024). In our March seminar, Lukas Höper and Carsten Schulte from the Department of Computing Education at Paderborn University in Germany shared with us a unit of work they’ve developed that could inform such a curriculum. At its core, the unit enhances young people’s awareness of how their personal data is used in the data-driven technologies that form part of their everyday lives.

    Lukas Höper and Carsten Schulte are part of a larger team who are investigating how to teach school students about data science and Big Data.

    Carsten explained that Germany’s informatics (computing) curriculum includes a competency area known as Informatics, People and Society (IPS), which explores the interrelationships between technology, individuals, and society, and how computation influences and is influenced by social, ethical, and cultural factors. However, research from 2007 suggested that teachers face several problems in delivering this topic, including:

    • Lack of subject knowledge 
    • Lack of teaching material
    • Lack of integration with other topics in informatics lessons
    • A perception that IPS is the responsibility of other subjects

    Some of the findings of that 2007 research were mirrored in a more recent local study in 2025, which found that although there have been some gains in subject knowledge in the intervening period, the problems of a lack of teaching material and integration with other computer science (CS) topics persist, with IPS increasingly perceived as the responsibility of the informatics subject area alone. Despite this, within the informatics curriculum, IPS is often the first topic to be dropped when educators face time constraints — and concerns about what and how to assess the topic remain.

    Photo focused on a young person working on a computer in a classroom.

    In this context, and as part of a larger, longitudinal project to promote data science teaching in schools called ProDaBi, Carsten and Lukas have been developing, implementing, and evaluating concepts and materials on the topics of data science and AI. Lukas explained the importance of students developing data awareness in the context of the digital systems they use in their everyday lives, such as search engines, streaming services, social media apps, digital assistants, and chatbots, and emphasised the difference between being a user of these systems and a data-aware user. Using the example of image recognition and ‘I am not a robot’ Captcha services, Lukas explained how young people need to develop a data-aware perspective of the secondary purposes of the data collected by these (and other) systems, as well as the more obvious, primary purposes. 

    Lukas went on to illustrate the human interaction system model, which presents a continuum of possible different roles, from the student as the user of digital artefacts to the student as the designer of digital artefacts. 

    Figure 1. Different roles in interactions with data-driven technologies

    To become data-aware users of digital artefacts, students need to be able to understand and reflect on those digital artefacts. Only then can they proceed to become responsible designers of digital artefacts. However, when surveyed, some students were only moderately interested in engaging with the inner workings of the digital technologies they use in their everyday lives. Many students prefer to use the systems and are less interested in how they process data. 

    The explanatory model approach in computing education

    Lukas explained how students often become more interested in data-driven technologies when learning about them with explanatory models. Such models can foster data awareness, giving students a different perspective on data-driven technologies and helping them become more empowered users.

    To illustrate, Lukas gave the example of an explanatory model about the role of data in digital systems. Such a model can be used to introduce the idea that data is explicitly and implicitly collected in the interaction between the user and the technology, and used for primary and secondary purposes. 

    Figure 2. The four parts of the explanatory model

    Lukas then introduced two teaching units that were developed for use with middle school children to evaluate the success of the explanatory model approach in computing education. The first unit explores location data collected by mobile phone networks and the second features recommendation systems used by movie streaming services such as Netflix and Amazon Prime.

    Taking the second unit as their focus, Lukas and Carsten outlined the four parts of the explanatory model approach: 

    Part 1

    The teaching unit begins by introducing recommendation systems and asking students to think about what a streaming service is, how a personalised start page is constructed, and how personal recommendations might be generated. Students then complete an unplugged activity to simulate the process of making movie recommendations for a peer:

    Task 1: Students write down movie recommendations for another student. 

    Task 2: They then ask each other questions (they collect data). 

    Task 3: They write down revised movie recommendations.

    Task 4: They share and evaluate their recommendations.  

    Task 5: Together they reflect on which collected data was helpful in this exercise and what kind of data a recommendation system might collect. This reflection introduces the concepts of explicit and implicit data collection. 

    Part 2

    In part 2, students are given a prepared Jupyter Notebook, which allows them to explore a simulation of a recommendation system. Students rate movies and receive personal recommendations. They reconstruct a data model about users, using the idea of collaborative filtering with the k-nearest neighbours algorithm (see Figure 3). 

    Figure 3. Data model of movie ratings
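
    To make the idea concrete, here is a minimal sketch of user-based collaborative filtering with k-nearest neighbours, in the spirit of the unit’s Jupyter Notebook. The ratings table, the distance measure, and the choice of k = 2 are invented for illustration; the actual notebook may differ.

    ```python
    import math

    # Toy user-movie ratings (1-5), invented for illustration.
    ratings = {
        "Ana":  {"Dune": 5, "Up": 2, "Heat": 4, "Her": 5},
        "Ben":  {"Dune": 4, "Up": 1, "Heat": 5},
        "Cara": {"Dune": 1, "Up": 5, "Heat": 2},
        "you":  {"Dune": 5, "Heat": 4},
    }

    def distance(a, b):
        """Euclidean distance over the movies both users have rated."""
        shared = set(a) & set(b)
        return math.sqrt(sum((a[m] - b[m]) ** 2 for m in shared)) if shared else float("inf")

    me = ratings["you"]
    # Find the k = 2 users whose ratings are closest to 'you'.
    neighbours = sorted((u for u in ratings if u != "you"),
                        key=lambda u: distance(me, ratings[u]))[:2]

    # Recommend movies the neighbours rated highly that 'you' haven't seen.
    for user in neighbours:
        for movie, score in ratings[user].items():
            if movie not in me and score >= 4:
                print(f"Recommended: {movie} (rated {score} by {user})")
    ```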

    Part 3

    In part 3, the concepts of primary and secondary purposes for data collection are introduced. Students discuss examples of secondary purposes such as personalised paywalls for movies that can be purchased, and subscriptions based on the predictions of future behaviour. The discussion includes various topics about individual and societal issues (e.g. filter bubbles, behaviour engineering, information asymmetry, and responsible development of data-driven technologies). 

    Part 4

    Finally, students use the explanatory model as an ‘analytical lens’. They choose other examples from their everyday lives of technologies that implement recommendation systems and analyse these examples, assessing the data practices involved. Students present their results in class and discuss their role in these situations and possible actions they can take to become more empowered, data-aware users.

    Uses of explanatory models

    Using the explanatory model is one approach to make the Informatics, People and Society strand of the German informatics curriculum more engaging for students, and addresses some of the problems teachers identify with delivering this competency area. 

    In presenting the idea of the explanatory model, Carsten and Lukas emphasised that the model both delivers content and functions as a tool for designing teaching content. In the example above, we see how the explanatory model introduces the concepts of:

    1. Explicit and implicit data collection
    2. Primary and secondary purposes of that data 
    3. Data models 

    The explanatory model framework can also be used as a focus for academic research in computing education. For example, further research is needed to evaluate if explanatory models are appropriate or ‘correct’ models and to determine the extent to which they are useful in computing education. 

    In summary, an explanatory model provides a specific perspective on and explanation of particular computing concepts and digital artefacts. In the example given here, the model focuses on the role of data in a recommender system. Explanatory models are representations of concepts, artefacts, and socio-technical systems, but can also serve as tools to support teaching and learning processes and research in computing education. 

    Figure 4. Overview of the perspectives of explanatory models

    The teaching units referred to above are published on www.prodabi.de (in German and English). 

    See the background paper to the seminar, called ‘Learning an explanatory model of data-driven technologies can lead to empowered behaviour: A mixed-methods study in K-12 Computing education’.

    You can also view the paper describing the development of the explanatory model approach, called ‘New perspectives on the future of Computing education: Teaching and learning explanatory models’.

    Join our next seminar

    In our current seminar series, we’re exploring teaching about AI and data science. Join us at our next seminar on Tuesday 13 May at 17:00–18:30 BST to hear Henriikka Vartiainen and Matti Tedre (University of Eastern Finland) discuss how to empower students by teaching them how to develop AI and machine learning (ML) apps without code in the classroom.

    To sign up and take part in our research seminars, click below:

    You can also view the schedule of our upcoming seminars, and catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • Supporting teachers to integrate AI in K–12 CS education

    Reading Time: 5 minutes

    Teaching about artificial intelligence (AI) is a growing challenge for educators around the world. In our current seminar series, we are gaining insights from international computing education researchers on how to teach about AI and data science in the classroom. In our second seminar, Franz Jetzinger from the Technical University of Munich, Germany, presented his work on supporting teachers to integrate AI into their classrooms. Franz brings a wealth of relevant experience to his research as an accomplished textbook author and K–12 computer science teacher.

    A photo of Franz Jetzinger in a library.

    Franz started by demonstrating how widespread AI systems and technologies are becoming. He argued that embedding lessons about AI in the classroom presents three challenges: 

    1. What to teach (defining AI and learning content)
    2. How to teach (i.e. appropriate pedagogies)
    3. How to prepare teachers (i.e. effective professional development) 

    As various models and frameworks for teaching about AI already exist, Franz’s research aims to address the second and third challenges — there is a notable lack of empirical evidence on integrating AI in K–12 settings and on teacher professional development (PD) to support teachers.

    Using professional development to help prepare teachers

    In Bavaria, computer science (CS) has been a compulsory high school subject for over 20 years. However, a recent update has brought compulsory CS lessons (including AI) to Year 11 students (15–16 years old). Competencies targeted in the new curriculum include defining AI, explaining the functionality of different machine learning algorithms, and understanding how artificial neurons work.

    Two students are seated at a desk, collaborating on a computing task.

    To help prepare teachers to teach this new curriculum and about AI effectively, Franz and colleagues derived a set of core competencies to be used alongside existing frameworks (e.g. the Five Big Ideas of AI) and the Bavarian curriculum. The PD programme Franz and colleagues developed was shaped by a set of key design principles:

    1. Blended learning: A blended format was chosen to address the need for scalability, to work within limited resources, and to enable self-directed and active learning 
    2. Dual-level pedagogy (or ‘pedagogical double-decker’): Teachers were taught with the same materials to be used in the classroom to aid familiarity
    3. Advanced organiser: A broad overview document was created to support teachers learning new topics 
    4. Moodle: An online learning platform was used to enable collaboration and communication via a MOOC (massive open online course)

    Analysing the effectiveness of the PD programme

    Over 300 teachers attended the MOOC, which had an introductory session beforehand and a follow-up workshop. The programme’s effectiveness was evaluated with a pre/post assessment in which teachers completed a survey of 15 closed, multiple-choice questions on their AI competencies and knowledge. Pre/post comparisons showed that teachers’ scores improved significantly after taking part in the PD. This is notable because a large proportion of participants achieved high pre-scores, indicating a highly motivated cohort with notable prior experience of teaching about AI.

    Additionally, a group of teachers (n=9) were invited to give feedback on which aspects of the PD programme they felt contributed to the success of implementing the curriculum in the classroom. They reported that the PD programme supported content knowledge and pedagogical content knowledge well, but they required additional support to design suitable learning assessments.

    The design of the professional development programme

    Using action research to aid AI teaching 

    A separate strand of Franz’s research focuses on the other key challenge of how to effectively teach about AI. Franz engaged teachers (n=14) in action research, a method whereby teachers engage in classroom-based research projects. The project explored what topic-specific difficulties students faced during the lessons and how teachers adapted their teaching to overcome these challenges.

    The AI curriculum in Bavaria

    Findings revealed that students struggled with determining whether AI would benefit certain tasks (e.g. object recognition, text-to-speech) or not (e.g. GPS positioning, sorting data). Franz and colleagues reasoned that students were largely not aware of how AI systems deal with uncertainty and overestimated those systems’ capabilities. Therefore, an important step in teaching students about AI is defining ‘what an AI problem is’. 

    A teenager learning computer science.

    Similarly, students struggled to distinguish between rule-based and data-driven approaches, believing in some cases that a trained model becomes ‘rule-based’ or that all data models are data-driven. Students also struggled with certain data science concepts, such as hyperparameters, overfitting and underfitting, and information gain. Franz’s team argue that the chosen tool, Orange Data Mining, did not provide an appropriate scaffold for encountering these concepts. 
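
    These concepts lend themselves to short, hands-on demonstrations. The sketch below is not from Franz’s materials; it is a minimal scikit-learn example of the kind a teacher might use, varying one hyperparameter (tree depth) to show underfitting and overfitting directly.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # max_depth is a hyperparameter: we choose it; the model does not learn it.
    for depth in (1, 5, None):  # None lets the tree grow until it memorises
        tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
        tree.fit(X_train, y_train)
        print(f"depth={depth}: train accuracy={tree.score(X_train, y_train):.2f}, "
              f"test accuracy={tree.score(X_test, y_test):.2f}")

    # A very shallow tree underfits (low accuracy on both sets); an unlimited
    # tree overfits (near-perfect training accuracy, noticeably lower test accuracy).
    ```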

    Finally, teachers faced challenges in bringing real-world examples into the classroom, including the use of reinforcement learning and neural networks. Franz and colleagues reasoned that focusing on the function of neural networks, as opposed to their structure, would aid student understanding. The use of high-quality (i.e. well-prepared) real-world data sets was also suggested as a strategy for bridging theoretical ideas with practical examples. 

    Addressing the challenges of teaching AI

    Franz’s research provides important insights into the discipline-specific challenges educators face when introducing AI into the classroom. It also underscores the importance of appropriate professional development and age-appropriate and research-informed materials and tools to support students engaging with ideas about AI, data science, and machine learning.

    Students sitting in a lecture at a university.

    Further reading and resources

    If you are interested in reading more about Franz’s work on teacher professional development, you can read his paper on a scalable professional development offer for computer science teachers or you can learn more about his research group here.

    Join our next seminar

    In our current seminar series, we are exploring teaching about AI and data science. Join us at our next seminar on Tuesday 8 April at 17:00–18:30 BST to hear David Weintrop, Rotem Israel-Fishelson, and Peter F. Moon from the University of Maryland introduce ‘API Can Code’, an interest-driven data science curriculum for high-school students.

    To sign up and take part in the seminar, click the button below; we will then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • Integrating generative AI into introductory programming classes

    Reading Time: 6 minutes

    Generative AI (GenAI) tools like GitHub Copilot and ChatGPT are rapidly changing how programming is taught and learnt. These tools can solve assignments with remarkable accuracy. GPT-4, for example, scored an impressive 99.5% on an undergraduate computer science exam, compared to Codex’s 78% just two years earlier. With such capabilities, researchers are shifting from asking, “Should we teach with AI?” to “How do we teach with AI?”

    Leo Porter from UC San Diego
    Daniel Zingaro from the University of Toronto

    Leo Porter and Daniel Zingaro have spearheaded this transformation through their groundbreaking undergraduate programming course. Their innovative curriculum integrates GenAI tools to help students tackle complex programming tasks while developing critical thinking and problem-solving skills.

    Leo and Daniel presented their work at the Raspberry Pi Foundation research seminar in December 2024. During the seminar, it became clear that much could be learnt from their work, with their insights having particular relevance for teachers in secondary education thinking about using GenAI in their programming classes.

    Practical applications in the classroom

    In 2023, Leo and Daniel introduced GitHub Copilot in their introductory programming course, CS1-LLM, at UC San Diego with 550 students. The course included creative, open-ended projects that allowed students to explore their interests while applying the skills they’d learnt. The projects covered the following areas:

    • Data science: Students used Kaggle datasets to explore questions related to their fields of study — for example, neuroscience majors analysed stroke data. The projects encouraged interdisciplinary thinking and practical applications of programming.
    • Image manipulation: Students worked with the Python Imaging Library (PIL) to create collages and apply filters to images, showcasing their creativity and technical skills (see the sketch after this list).
    • Game development: A project focused on designing text-based games encouraged students to break down problems into manageable components while using AI tools to generate and debug code.
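
    For readers unfamiliar with PIL, here is a minimal sketch of the kind of image-manipulation task described above. The file names are placeholders; the students’ projects were far more elaborate.

    ```python
    from PIL import Image, ImageFilter

    # File names are placeholders -- any two photos will do.
    left = Image.open("photo1.jpg").resize((300, 300))
    right = Image.open("photo2.jpg").resize((300, 300)).filter(ImageFilter.BLUR)

    # Paste both images onto a blank canvas to form a two-panel collage.
    collage = Image.new("RGB", (600, 300), "white")
    collage.paste(left, (0, 0))
    collage.paste(right, (300, 0))
    collage.save("collage.jpg")
    ```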

    Students consistently reported that these projects were not only enjoyable but also responsible for deepening their understanding of programming concepts. A majority (74%) found the projects helpful or extremely helpful for their learning. One student noted:

    “Programming projects were fun and the amount of freedom that was given added to that. The projects also helped me understand how to put everything that we have learned so far into a project that I could be proud of.”

    Core skills for programming with Generative AI

    Leo and Daniel emphasised that teaching programming with GenAI involves fostering a mix of traditional and AI-specific skills.

    Writing software with GenAI applications, such as Copilot, needs to be approached differently to traditional programming tasks

    Their approach centres on six core competencies:

    • Prompting and function design: Students learn to articulate precise prompts for AI tools, honing their ability to describe a function’s purpose, inputs, and outputs, for instance. This clarity improves the output from the AI tool and reinforces students’ understanding of task requirements.
    • Code reading and selection: AI tools can produce any number of different solutions, requiring students to evaluate the options critically. Students are taught to identify which solution is most likely to solve their problem effectively.
    • Code testing and debugging: Students practise open- and closed-box testing, learning to identify edge cases and debug code using tools like doctest and the VS Code debugger (see the doctest sketch after this list).
    • Problem decomposition: Breaking down large projects into smaller functions is essential. For instance, when designing a text-based game, students might separate tasks into input handling, game state updates, and rendering functions.
    • Leveraging modules: Students explore new programming domains and identify useful libraries through interactions with Copilot. This prepares them to solve problems efficiently and creatively.

    • Ethical and metacognitive skills: Students engage in discussions about responsible AI use and reflect on the decisions they make when collaborating with AI tools.
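
    To illustrate the testing competency, here is a small example of the doctest style of checking named above. The function and its test cases are invented for illustration, not taken from the course materials.

    ```python
    def median(values):
        """Return the median of a non-empty list of numbers.

        Docstring tests like these let students check AI-generated code
        against edge cases they thought of themselves:

        >>> median([3, 1, 2])
        2
        >>> median([4, 1, 2, 3])  # even length: average the middle pair
        2.5
        """
        ordered = sorted(values)
        mid = len(ordered) // 2
        if len(ordered) % 2 == 1:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # silently passes if every docstring example holds
    ```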

    Graphic depicting students' confidence levels regarding their programming skills and their use of Generative AI tools.

    Adapting assessments for the AI era

    The rise of GenAI has prompted educators to rethink how they assess programming skills. In the CS1-LLM course, traditional take-home assignments were de-emphasised in favour of assessments that focused on process and understanding.

    Leo and Daniel chose several types of assessments — some involved having to complete programming tasks with the help of GenAI tools, while others had to be completed without.
    • Quizzes and exams: Students were evaluated on their ability to read, test, and debug code — skills critical for working effectively with AI tools. Final exams included both tasks that required independent coding and tasks that required use of Copilot.
    • Creative projects: Students submitted projects alongside a video explanation of their process, emphasising problem decomposition and testing. This approach highlighted the importance of critical thinking over rote memorisation.

    Challenges and lessons learnt

    While Leo and Daniel reported that the integration of AI tools into their course has been largely successful, it has also introduced challenges. Surveys revealed that some students felt overly dependent on AI tools, expressing concerns about their ability to code independently. Addressing this will require striking a balance between leveraging AI tools and reinforcing foundational skills.

    Additionally, ethical concerns around AI use, such as plagiarism and intellectual property, must be addressed. Leo and Daniel incorporated discussions about these issues into their curriculum to ensure students understand the broader implications of working with AI technologies.

    A future-oriented approach

    Leo and Daniel’s work demonstrates that GenAI can transform programming education, making it more inclusive, engaging, and relevant. Their course attracted a diverse cohort, including students traditionally underrepresented in computer science — 52% of the students were female and 66% were not majoring in computer science — highlighting the potential of AI-powered learning to broaden participation in the field.

    A girl in a university computing classroom.

    By embracing this shift, educators can prepare students not just to write code but also to think critically, solve real-world problems, and effectively harness the AI innovations shaping the future of technology.

    If you’re an educator interested in using GenAI in your teaching, we recommend checking out Leo and Daniel’s book, Learn AI-Assisted Python Programming, as well as their course resources on GitHub. You may also be interested in our own Experience AI resources, which are designed to help educators navigate the fast-moving world of AI and machine learning technologies.

    Join us at our next online seminar on 11 March

    Our 2025 seminar series is exploring how we can teach young people about AI technologies and data science. At our next seminar on Tuesday, 11 March at 17:00–18:00 GMT, we’ll hear from Lukas Höper and Carsten Schulte from Paderborn University. They’ll be discussing how to teach school students about data-driven technologies and how to increase students’ awareness of how data is used in their daily lives.

    To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • Teaching about AI – Teacher symposium

    Reading Time: 5 minutes

    AI has become a pervasive term that is heard with trepidation, excitement, and often a furrowed brow in school staffrooms. For educators, there is pressure to use AI applications for productivity — to save time, to help create lesson plans, to write reports, to answer emails, and so on. There is also a lot of interest in using AI tools in the classroom, for example, to personalise or augment teaching and learning. However, without understanding AI technology, neither productivity nor personalisation is likely to be successful, as teachers and students alike must be critical consumers of these new ways of working to be able to use them productively. 

    Fifty teachers and researchers posing for a photo at the AI Symposium, held at the Raspberry Pi Foundation office.
    Fifty teachers and researchers share knowledge about teaching about AI.

    In England and globally, few new AI-based curricula are being introduced, and the drive for teachers and students to learn about AI in schools is lagging, with limited initiatives supporting teachers in what to teach and how to teach it. At the Raspberry Pi Foundation and Raspberry Pi Computing Education Research Centre, we decided it was time to investigate this missing link of teaching about AI, and specifically to discover what the teachers who are leading the way in this topic are doing in their classrooms.

    A day of sharing and activities in Cambridge

    We organised a day-long, face-to-face symposium with educators who have already started to think deeply about teaching about AI, have started to create teaching resources, and are starting to teach about AI in their classrooms. The event was held in Cambridge, England, on 1 February 2025, at the head office of the Raspberry Pi Foundation. 

    Photo of educators and researchers collaborating at the AI symposium.
    Teachers collaborated and shared their knowledge about teaching about AI.

    Over 150 educators and researchers applied to take part in the symposium. With only 50 places available, we followed a detailed protocol, whereby those who had the most experience teaching about AI in schools were selected. We also made sure that educators and researchers from different teaching contexts were selected so that there was a good mix of primary to further education phases represented. Educators and researchers from England, Scotland, and the Republic of Ireland were invited and gathered to share their experiences. One of our main aims was to build a community of early adopters who have started along the road of classroom-based AI curriculum design and delivery.

    Inspiration, examples, and expertise

    To inspire the attendees with an international perspective on the topics being discussed, Professor Matti Tedre, a visiting academic from Finland, gave a brief overview of the approach to teaching about AI and the resources that his research team have developed. In Finland, there is no compulsory distinct computing subject, so AI is taught within other subjects, such as history. Matti showcased tools and approaches developed in the Generation AI research programme in Finland. You can read about the Finnish research programme and Matti’s two-month visit to the Raspberry Pi Computing Education Research Centre on our blog.

    Photo of a researcher presenting at the AI Symposium.
    A Finnish perspective on teaching about AI.

    Attendees were asked to talk about, share, and analyse their teaching materials. To model how to analyse resources, Ben Garside from the Raspberry Pi Foundation demonstrated how to complete the activities using the Experience AI resources as an example. The Experience AI materials, co-created with Google DeepMind, are a suite of free classroom resources, teacher professional development, and hands-on activities designed to help teachers confidently deliver AI lessons. Aimed at learners aged 11 to 14, the materials are informed by the AI education framework developed at the Raspberry Pi Computing Education Research Centre and are grounded in real-world contexts. We’ve recently released new lessons on AI safety, and we’ve localised the resources for use in many countries across Africa, Asia, Europe, and North America.

    In the morning session, Ben exemplified how to talk about and share learning objectives, concepts, and research underpinning materials using the Experience AI resources and in the afternoon he discussed how he had mapped the Experience AI materials to the UNESCO AI competency framework for students.

    Photo of an adult presenting at the AI Symposium.
    UNESCO provide important expertise.

    Kelly Shiohira, from UNESCO, kindly attended our session and gave an invaluable insight into the UNESCO AI competency framework for students. Kelly is one of the framework’s authors, and her presentation helped teachers understand how the materials had been developed. The attendees then used the framework to analyse their resources, to identify gaps, and to explore what progression might look like in the teaching of AI.

    Photo of a whiteboard featuring different coloured post-it notes displayed featuring teachers' and researchers' ideas.
    Teachers shared their knowledge about teaching about AI.

    Throughout the day, the teachers worked together to share their experience of teaching about AI. They considered the concepts and learning objectives taught, what progression might look like, what the challenges and opportunities of teaching about AI were, what research informed the resources, and what research needs to be done to help improve the teaching and learning of AI.

    What next?

    We are now analysing the vast amount of data that we gathered from the day and we will share this with the symposium participants before we share it with a wider audience. What is clear from our symposium is that teachers have crucial insights into what should be taught to students about AI, and how, and we are greatly looking forward to continuing this journey with them.

    As well as running the symposium, we are conducting academic research in this area; you can read more about this in our Annual Report and on our research webpages. We will also be consulting with teachers and AI experts. If you’d like to be sent links to these blog posts, sign up to our newsletter. If you’d like to take part in our research and potentially be interviewed about your perspectives on curriculum in AI, contact us at rpcerc-enquiries@cst.cam.ac.uk.

    We are also sharing the research being done by ourselves and other researchers in the field at our research seminars. This year, our seminar series is on teaching about AI and data science in schools. Please do sign up and come along, or watch some of the presentations that have already been delivered by the amazing research teams who are endeavouring to discover what we should be teaching about AI in schools, and how.

    Website: LINK

  • Teaching about AI in K–12 education: Thoughts from the USA

    Reading Time: 5 minutes

    As artificial intelligence continues to shape our world, understanding how to teach about AI has never been more important. Our new research seminar series brings together educators and researchers to explore approaches to AI and data science education. In the first seminar, we welcomed Shuchi Grover, Director of AI and Education Research at Looking Glass Ventures. Shuchi began by exploring the theme of teaching using AI, then moved on to discussing teaching about AI in K–12 (primary and secondary) education. She emphasised that it is crucial to teach about AI before using it in the classroom, and this blog post will focus on her insights in this area.

    Shuchi Grover gave an insightful talk discussing how to teach about AI in K–12 education.

    An AI literacy framework

    From her research, Shuchi has developed a framework for teaching about AI that is structured as four interlocking components, each representing a key area of understanding:

    • Basic understanding of AI, which refers to foundational knowledge such as what AI is, types of AI systems, and the capabilities of AI technologies
    • Ethics and human–AI relationship, which includes the role of humans in regard to AI, ethical considerations, and public perceptions of AI
    • Computational thinking/literacy, which relates to how AI works, including building AI applications and training machine learning models
    • Data literacy, which addresses the importance of data, including examining data features, data visualisation, and biases

    This framework shows the multifaceted nature of AI literacy, which involves an understanding of both technical aspects and ethical and societal considerations. 

    Shuchi’s framework for teaching about AI includes four broad areas.

    Shuchi emphasised the importance of learning about AI ethics, highlighting the topic of bias. There are many ways that bias can be embedded in applications of AI and machine learning, including through the data sets that are used and the design of machine learning models. Shuchi discussed supporting learners to engage with the topic through exploring bias in facial recognition software, sharing activities and resources to use in the classroom that can prompt meaningful discussion, such as this talk by Joy Buolamwini. She also highlighted the Kapor Foundation’s Responsible AI and Tech Justice: A Guide for K–12 Education, which contains questions that educators can use with learners to help them to carefully consider the ethical implications of AI for themselves and for society. 

    Computational thinking and AI

    In computer science education, computational thinking is generally associated with traditional rule-based programming — it has often been used to describe the problem-solving approaches and processes associated with writing computer programs following rule-based principles in a structured and logical way. However, with the emergence of machine learning, Shuchi described a need for computational thinking frameworks to be expanded to also encompass data-driven, probabilistic approaches, which are foundational for machine learning. This would support learners’ understanding and ability to work with the models that increasingly influence modern technology.

    A group of young people and educators smiling while engaging with a computer.

    Example activities from research studies

    Shuchi shared that a variety of pedagogies have been used in recent research projects on AI education, ranging from hands-on experiences, such as using APIs for classification, to discussions focusing on ethical aspects. You can find out more about these pedagogies in her award-winning paper Teaching AI to K-12 Learners: Lessons, Issues and Guidance. This plurality of approaches ensures that learners can engage with AI and machine learning in ways that are both accessible and meaningful to them.

    Research projects exploring teaching about AI and machine learning have involved a range of different approaches.

    Shuchi shared examples of activities from two research projects that she has led:

    • CS Frontiers engaged high school students in a number of activities using NetsBlox and real-world data sets. For example, in one activity, students created data visualisations to answer questions about climate change. 
    • AI & Cybersecurity for Teens explored approaches to teaching AI and machine learning to 13- to 15-year-olds through the use of cybersecurity scenarios. The project aimed to provide learners with insights into how machine learning models are designed, how they work, and how human decisions influence their development. An example activity guided students through building a classification model that analyses social media accounts to determine whether they are bot accounts or accounts run by a human (a minimal sketch of this idea follows below).
    A screenshot from an activity to classify social media accounts
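
    The details of the students’ model weren’t shared in the seminar, so the following is a hypothetical sketch of the classification idea: invented account features (posts per day and a follower/following ratio) and a logistic regression classifier from scikit-learn.

    ```python
    from sklearn.linear_model import LogisticRegression

    # Invented training data: [posts per day, follower/following ratio],
    # labelled 1 = bot, 0 = human. The real activity's features may differ.
    X = [[120, 0.01], [95, 0.05], [200, 0.02],   # bot-like accounts
         [3, 1.2],    [7, 0.8],   [1, 2.5]]      # human-like accounts
    y = [1, 1, 1, 0, 0, 0]

    model = LogisticRegression().fit(X, y)

    # Classify a new account that posts 150 times a day with few followers.
    label = model.predict([[150, 0.03]])[0]
    print("bot" if label == 1 else "human")
    ```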

    Closing thoughts

    At the end of her talk, Shuchi shared some final thoughts addressing teaching about AI to K–12 learners: 

    • AI learning requires contextualisation: Think about the data sets, ethical issues, and examples of AI tools and systems you use to ensure that they are relatable to learners in your context.
    • AI should not be a solution in search of a problem: Both teachers and learners need to be educated about AI before they start to use it in the classroom, so that they are informed consumers.

    Join our next seminar

    In our current seminar series, we are exploring teaching about AI and data science. Join us at our next seminar on Tuesday 11 March at 17:00–18:30 GMT to hear Lukas Höper and Carsten Schulte from Paderborn University discuss supporting middle school students to develop their data awareness. 

    To sign up and take part in the seminar, click the button below — we will then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • The latest AI news we announced in January

    Reading Time: < 1 minute

    We announced that Google Cloud’s Automotive AI Agent is arriving for Mercedes-Benz. Google Cloud unveiled Automotive AI Agent, a new way for automakers to create helpful agentic experiences for drivers. Mercedes-Benz is among the first automakers planning to implement the product, which goes beyond current vehicle voice control to allow people to have natural conversations and ask queries while driving, like “Is there an Italian restaurant nearby?”

    We shared five ways NotebookLM Plus can help your business. NotebookLM is a tool for understanding anything — including synthesizing complex ideas buried in deep research. This month we made our premium NotebookLM Plus available in more Google Workspace plans to help businesses and their employees with everything from sharing team notebooks and centralizing projects, to streamlining onboarding and making learning more engaging with Audio Overviews.

    We announced new AI tools to help retailers build gen AI search and agents. The National Retail Federation kicked off the year with their annual NRF conference, where Google Cloud showed how AI agents and AI-powered search are already helping retailers operate more efficiently, create personalized shopping experiences and use AI to get the latest products and experiences to their customers.

    Website: LINK

  • 60 of our biggest AI announcements in 2024

    Reading Time: 6 minutes

    It’s been a big year for Google AI. It may seem as though features like Circle to Search and NotebookLM’s Audio Overviews have been around for as long as you can remember, but they only launched in 2024. Joining them were a slew of other product releases and updates meant to make your day-to-day life even a little bit easier. So, as we say goodbye to 2024 (and prepare for the exciting AI news that’s sure to come in 2025), take a look at some of the top Google AI news stories that resonated with readers this year.

    January

    2024 began, quite fittingly, with fresh updates across a host of products and tools, including Gemini, Chrome, Pixel and Search. The announcement of our Circle to Search feature made a particular splash with readers. Here were some of the top Google AI news stories of the month:

    1. The power of Google AI comes to the new Samsung Galaxy S24 series
    2. New ways to search in 2024
    3. Circle (or highlight or scribble) to Search
    4. Chrome is getting 3 new generative AI features
    5. New Pixel features for a minty fresh start to the year

    February

    February brought a new chapter of our Gemini era, including the debut of Gemini 1.5; the news that Bard was becoming Gemini; the launch of Gemini Advanced; and more. We also announced new generative AI tools in Labs and tech to help developers and researchers build AI responsibly. Here were some of the top Google AI news stories of the month:

    1. Our next-generation model: Gemini 1.5
    2. Bard becomes Gemini: Try Ultra 1.0 and a new mobile app today
    3. The next chapter of our Gemini era
    4. Gemma: Introducing new state-of-the-art open models
    5. Try ImageFX and MusicFX, our newest generative AI tools in Labs

    March

    Health took center stage in March, with our annual Google Health Check Up event to show how AI is helping us connect people to health information and insights that matter to them. Stories about how we’re using AI for good also made the top-news cut, along with AI-based travel tools coverage as readers looked toward summer. Here were some of the top Google AI news stories of the month:

    1. Our progress on generative AI in health
    2. How we’re using AI to connect people to health information
    3. 6 ways to travel smarter this summer using Google tools
    4. How we are using AI for reliable flood forecasting at a global scale
    5. 21 nonprofits join our first generative AI accelerator

    April

    Spring showers bring…generative AI? Many of April’s top stories focused on how helpful generative AI can be to different groups of people, including developers, business owners, advertisers and Google Photos users. It was also a big month for AI skills-building, thanks to our AI Opportunity Fund and AI Essentials course. Here were some of the top Google AI news stories of the month:

    1. AI editing tools are coming to all Google Photos users
    2. Cloud Next 2024: More momentum with generative AI
    3. Grow with Google launches new AI Essentials course to help everyone learn to use AI
    4. Enhance visual storytelling in Demand Gen with generative AI
    5. Our newest investments in infrastructure and AI skills

    May

    May is synonymous with Google I/O around these parts, so it’s no wonder that much of the month’s top news was from our annual developer conference. At this year’s event, we shared how we’re building more helpful products and features with AI. But even amid all the I/O chatter, Googlers were working on other launches, like that of our AlphaFold 3 model, which holds big promise for science and medicine. Here were some of the top Google AI news stories of the month:

    1. Google I/O 2024: An I/O for a new generation
    2. Generative AI in Search: Let Google do the searching for you
    3. 100 things we announced at I/O 2024
    4. Ask Photos: A new way to search your photos with Gemini
    5. AlphaFold 3 predicts the structure and interactions of all of life’s molecules

    June

    In June, much of our AI news emphasized how this technology can help people in ways big and small. Stories covered both land (how Google Translate is helping people connect with one another all around the world, even if they don’t speak the same language) and sea (how a first-of-its-kind global map of ocean infrastructure is creating a better understanding of things like biodiversity). Here were some of the top Google AI news stories of the month:

    1. 110 new languages are coming to Google Translate
    2. Gemma 2 is now available to researchers and developers
    3. NotebookLM goes global with Slides support and better ways to fact-check
    4. New AI tools for Google Workspace for Education
    5. Mapping human activity at sea with AI

    July

    July was one of those months that makes clear how many things Googlers are working on at once, with major announcements for Gemini, Google AI features on Samsung devices, our focus on secure AI and our Olympics partnership with Team USA and NBCUniversal. Here were some of the top Google AI news stories of the month:

    1. 4 Google updates coming to Samsung devices
    2. Gemini’s big upgrade: Faster responses with 1.5 Flash, expanded access and more
    3. 4 ways Google will show up in NBCUniversal’s Olympic Games Paris 2024 coverage
    4. Introducing the Coalition for Secure AI (CoSAI) and founding member organizations
    5. 3 things parents and students told us about how generative AI can support learning

    August

    August was a key moment for Google hardware, thanks to our Made by Google event, along with our Nest Learning Thermostat and Google TV Streamer releases. But software was in the mix, too — we’re looking at you, Chrome, Android and Gemini. Here were some of the top Google AI news stories of the month:

    1. The new Pixel 9 phones bring you the best of Google AI
    2. Gemini makes your mobile device a powerful AI assistant
    3. Your smart home is getting smarter, with help from Gemini
    4. 3 new Chrome AI features for even more helpful browsing
    5. Android is reimagining your phone with Gemini

    September

    Then came another month that underscored our mission to make AI helpful for everyone. Highlights included the launch of Audio Overviews in NotebookLM; the news of a new satellite constellation designed to detect wildfires more quickly; and tips on using Gemini features in Gmail. But that wasn’t all! Here were some of the top Google AI news stories of the month:

    1. NotebookLM now lets you listen to a conversation about your sources
    2. A breakthrough in wildfire detection: How a new constellation of satellites can detect smaller wildfires earlier
    3. Customers are putting Gemini to work
    4. How to use Gemini in Gmail to manage your inbox like a pro
    5. 5 new Android features to help you explore, search for music and more

    October

    October saw a slate of additional AI updates across products including Pixel, NotebookLM, Search and Shopping. Plus, we announced updates to the Search ads experiences at Google Marketing Live — helping advertisers use AI to reach their customers. Here were some of the top Google AI news stories of the month:

    1. October Pixel Drop: Helpful enhancements for your devices
    2. New in NotebookLM: Customizing your Audio Overviews and introducing NotebookLM Business
    3. Ask questions in new ways with AI in Search
    4. Google Shopping’s getting a big transformation
    5. New ways for marketers to reach customers with AI Overviews and Lens

    November

    This month was a time for both work and play, with news including how developers are using Gemini API and how chess-lovers can use AI to reimagine their sets. Plus, holiday prep was afoot with new updates to Google Lens and Shopping. Here were some of the top Google AI news stories of the month:

    1. 5 ways to explore chess during the 2024 World Chess Championship
    2. The Gemini app is now available on iPhone
    3. New ways to holiday shop with Google Lens, Maps and more
    4. How developers are using Gemini API
    5. Our Machine Learning Crash Course goes in depth on generative AI

    December

    We celebrated the one-year anniversary of our Gemini era by introducing our next, agentic era in AI — brought to life by our newest, most capable model, Gemini 2.0. We also shared landmark quantum chip news, and a whole raft of new generative AI offerings in Android, Pixel, Gemini, and our developer platforms AI Studio and Vertex AI. It’s certainly been a December to remember. Here were some of the top Google AI news stories of the month:

    1. Introducing Gemini 2.0: our new AI model for the agentic era
    2. Meet Willow, our state-of-the-art quantum chip
    3. Android XR: The Gemini era comes to headsets and glasses
    4. Try Deep Research and our new experimental model in Gemini, your AI assistant
    5. December Pixel Drop: New features for your Pixel phone, Tablet and more

    There you have it! Twelve months of top Google AI news in a flash. And the best part: Teams at Google are hard at work to keep the momentum going in 2025.

    Website: LINK

  • The latest AI news we announced in December

    The latest AI news we announced in December

    Reading Time: < 1 minute

    For more than 20 years, we’ve invested in machine learning and AI research, tools and infrastructure to build products that make everyday life better for more people. 2024 has been another big year for AI, and we ended on a high note with updates across AI models, consumer products, and research and science.

    Here’s a look back at just some of our AI announcements from December.

    Website: LINK

  • How can we teach students about AI and data science? Join our 2025 seminar series to learn more about the topic

    How can we teach students about AI and data science? Join our 2025 seminar series to learn more about the topic

    Reading Time: 4 minutes

    AI, machine learning (ML), and data science infuse our daily lives, from the recommendation functionality on music apps to technologies that influence our healthcare, transport, education, defence, and more.

    Which jobs will be affected by AI, ML, and data science remains to be seen, but it is increasingly clear that students will need to learn something about these topics. There will be new concepts to teach, new instructional approaches and assessment techniques to use, and new learning activities to deliver, and we must not neglect the professional development required to help educators master all of this. 

    An educator is helping a young learner with a coding task.

    As AI and data science are incorporated into school curricula and teaching and learning materials worldwide, we ask: What’s the research basis for these curricula, pedagogy, and resource choices?

    In 2024, we showcased researchers who are investigating how AI can be leveraged to support the teaching and learning of programming. But in 2025, we look at what should be taught about AI, ML, and data science in schools and how we should teach this. 

    Our 2025 seminar speakers — so far!

    We are very excited that we have already secured several key researchers in the field. 

    On 21 January, Shuchi Grover will kick off the seminar series by giving an important overview of AI in the K–12 landscape, including developing both AI literacy and AI ethics. Shuchi will provide concrete examples and recently developed frameworks to give educators practical insights on the topic.

    Our second session will focus on a teacher professional development (PD) programme to support the introduction of AI in Upper Bavarian schools. Franz Jetzinger from the Technical University of Munich will summarise the PD programme and share how teachers implemented the topic in their classroom, including the difficulties they encountered.

    Also from Germany, Lukas Höper from Paderborn University, together with Carsten Schulte, will describe important research on data awareness and introduce a framework that is likely to be key for learning about data-driven technology. The pair will talk about the Data Awareness Framework and how it has been used to help learners explore, evaluate, and be empowered in looking at the role of data in everyday applications.

    In our April seminar, David Weintrop from the University of Maryland and his colleagues will introduce API Can Code, a data science curriculum aimed at high school students. The group will highlight the strategies needed for integrating data science learning within students’ lived experiences and fostering authentic engagement.

    Later in the year, Jesús Moreno-Leon from the University of Seville will help us consider the thorny but essential question of how we measure AI literacy. Jesús will present an assessment instrument that has been successfully implemented in several research studies involving thousands of primary and secondary education students across Spain, discussing both its strengths and limitations.

    What to expect from the seminars

    Our seminars are designed to be accessible to anyone interested in the latest research about AI education — whether you’re a teacher, educator, researcher, or simply curious. Each session begins with a presentation from our guest speaker about their latest research findings. We then move into small groups for a short discussion and exchange of ideas before coming back together for a Q&A session with the presenter. 

    An educator is helping two young learners with a coding task.

    Attendees of our 2024 series told us that they valued that the talks “explore a relevant topic in an informative way”, the “enthusiasm and inspiration”, and particularly the small-group discussions because they “are always filled with interesting and varied ideas and help to spark my own thoughts”. 

    The seminars usually take place on Zoom on the first Tuesday of each month at 17:00–18:30 GMT / 12:00–13:30 ET / 9:00–10:30 PT / 18:00–19:30 CET. 

    You can find out more about each seminar and the speakers on our upcoming seminar page. And if you are unable to attend one of our talks, you can watch them from our previous seminar page, where you will also find an archive of all of our previous seminars dating back to 2020.

    How to sign up

    To attend the seminars, please register here. You will receive an email with the link to join our next Zoom call. Once signed up, you will automatically be notified of upcoming seminars. You can unsubscribe from our seminar notifications at any time.

    We hope to see you at a seminar soon!

    Website: LINK

  • Does AI-assisted coding boost novice programmers’ skills or is it just a shortcut?

    Does AI-assisted coding boost novice programmers’ skills or is it just a shortcut?

    Reading Time: 6 minutes

    Artificial intelligence (AI) is transforming industries, and education is no exception. AI-driven development environments (AIDEs), like GitHub Copilot, are opening up new possibilities, and educators and researchers are keen to understand how these tools impact students learning to code. 

    In our 50th research seminar, Nicholas Gardella, a PhD candidate at the University of Virginia, shared insights from his research on the effects of AIDEs on beginner programmers’ skills.

    Headshot of Nicholas Gardella.
    Nicholas Gardella focuses his research on understanding human interactions with artificial intelligence-based code generators to inform responsible adoption in computer science education.

    Measuring AI’s impact on students

    AI tools are becoming a big part of software development, but what does that mean for students learning to code? As tools like GitHub Copilot become more common, it’s crucial to ask: Do these tools help students to learn better and work more effectively, especially when time is tight?

    This is precisely what Nicholas’s research aims to identify by examining the impact of AIDEs on four key areas:

    • Performance (how well students completed the tasks)
    • Workload (the effort required)
    • Emotion (their emotional state during the task)
    • Self-efficacy (their belief in their own abilities to succeed)

    Nicholas conducted his study with 17 undergraduate students from an introductory computer science course, most of whom were first-time programmers of different genders and backgrounds.

    Girl in class at IT workshop at university.
    By luckybusiness

    The students completed programming tasks both with and without the assistance of GitHub Copilot. Nicholas selected the tasks from OpenAI’s human evaluation data set, ensuring they represented a range of difficulty levels. He also used a repeated measures design for the study, meaning that each student had the opportunity to program both independently and with AI assistance multiple times. This design helped him to compare individual progress and attitudes towards using AI in programming.

    Less workload, more performance and self-efficacy in learning

    The results were promising for those advocating AI’s role in education. Nicholas’s research found that participants who used GitHub Copilot performed better overall, completing tasks with less mental workload and effort compared to solo programming.

    Graphic depicting Nicholas' results.
    Nicholas used several measures to find out whether AIDEs affected students’ emotional states.

    However, the immediate impact on students’ emotional state and self-confidence was less pronounced. Initially, participants did not report feeling more confident while coding with AI. Over time, though, as they became more familiar with the tool, their confidence in their abilities improved slightly. This indicates that students need time and practice to fully integrate AI into their learning process. Students increasingly attributed their progress not to the AI doing the work for them, but to their own growing proficiency in using the tool effectively. This suggests that with sustained practice, students can gain confidence in their abilities to work with AI, rather than becoming overly reliant on it.

    Graphic depicting Nicholas' RQ1 results.
    Students who used AI tools seemed to improve more quickly than students who worked on the exercises themselves.

    A particularly important takeaway from the talk was the reduction in workload when using AI tools. Novice programmers, who often find programming challenging, reported that AI assistance lightened the workload. This reduced effort could create a more relaxed learning environment, where students feel less overwhelmed and more capable of tackling challenging tasks.

    However, while workload decreased, use of the AI tool did not significantly boost emotional satisfaction or happiness during the coding process. Nicholas explained that although students worked more efficiently, using the AI tool did not necessarily make coding a more enjoyable experience. This highlights a key challenge for educators: finding ways to make learning both effective and engaging, even when using advanced tools like AI.

    AI as a tool for collaboration, not replacement

    Nicholas’s findings raise interesting questions about how AI should be introduced in computer science education. While tools like GitHub Copilot can enhance performance, they should not be seen as shortcuts for learning. Students still need guidance in how to use these tools responsibly. Importantly, the study showed that students did not take credit for the AI tool’s work — instead, they felt responsible for their own progress, especially as they improved their interactions with the tool over time.

    Seventeen multicoloured post-it notes are roughly positioned in a strip shape on a white board. Each one of them has a hand drawn sketch in pen on them, answering the prompt on one of the post-it notes "AI is...." The sketches are all very different, some are patterns representing data, some are cartoons, some show drawings of things like data centres, or stick figure drawings of the people involved.
    Rick Payne and team / Better Images of AI / Ai is… Banner / CC-BY 4.0

    Students might become better programmers when they learn how to work alongside AI systems, using them to enhance their problem-solving skills rather than relying on them for answers. This suggests that educators should focus on teaching students how to collaborate with AI, rather than fearing that these tools will undermine the learning process.

    Bridging research and classroom realities

    The study also touched on an important point about the limits of its findings. Since the experiment was conducted in a controlled environment with only 17 participants, researchers need to conduct further studies to explore how AI tools perform in real-world classroom settings. Internet access, for example, plays a fundamental role. It will also be relevant to understand how factors such as class size, varying prior experience, and the age of students affect their ability to integrate AI into their learning.

    In the follow-up discussion, Nicholas also demonstrated how AI tools are becoming more accessible within browsers and how teachers can integrate AI-driven development environments more easily into their courses. By making AI technology more readily available, these tools are democratising access to advanced programming aids, enabling students to build applications directly in their web browsers with minimal setup.

    The path ahead

    Nicholas’s talk provided an insightful look into the evolving relationship between AI tools and novice programmers. While AI can improve performance and reduce workload, it is not a magic solution to all the challenges of learning to code.

    Based on the discussion after the talk, educators should support students in developing the skills to use these tools effectively, shaping an environment where they can feel confident working with AI systems. The researchers and educators agreed that more research is needed to expand on these findings, particularly in more diverse and larger-scale educational settings. 

    As AI continues to shape the future of programming education, the role of educators will remain crucial in guiding students towards responsible and effective use of these technologies, as we are only at the beginning.

    Join our next seminar

    In our current seminar series, we are exploring how to teach programming with and without AI technology. Join us at our next seminar on Tuesday, 10 December at 17:00–18:30 GMT to hear Leo Porter (UC San Diego) and Daniel Zingaro (University of Toronto) discuss how they are working to create an introductory programming course for majors and non-majors that fully incorporates generative AI into the learning goals of the course. 

    To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • Using generative AI to teach computing: Insights from research

    Using generative AI to teach computing: Insights from research

    Reading Time: 7 minutes

    As computing technologies continue to rapidly evolve in today’s digital world, computing education is becoming increasingly essential. Arto Hellas and Juho Leinonen, researchers at Aalto University in Finland, are exploring how innovative teaching methods can equip students with the computing skills they need to stay ahead. In particular, they are looking at how generative AI tools can enhance university-level computing education. 

    In our monthly seminar in September, Arto and Juho presented their research on using AI tools to provide personalised learning experiences and automated responses to help requests, as well as their findings on teaching students how to write effective prompts for generative AI systems. While their research focuses primarily on undergraduate students — given that they teach such students — many of their findings have potential relevance for primary and secondary (K-12) computing education. 

    Students attend a lecture at a university.

    Generative AI consists of algorithms that can generate new content, such as text, code, and images, based on the input received. Ever since large language models (LLMs) such as ChatGPT and Copilot became widely available, there has been a great deal of attention on how to use this technology in computing education. 

    Arto and Juho described generative AI as one of the fastest-moving topics they had ever worked on, and explained that they were trying to see past the hype and find meaningful uses of LLMs in their computing courses. They presented three studies in which they used generative AI tools with students in ways that aimed to improve the learning experience. 

    Using generative AI tools to create personalised programming exercises

    An important strand of computing education research investigates how to engage students by personalising programming problems based on their interests. The first study in Arto and Juho’s research took place within an online programming course for adult students. It involved developing a tool that used GPT-4 (the latest version of ChatGPT available at that time) to generate exercises with personalised aspects. Students could select a theme (e.g. sports, music, video games), a topic (e.g. a specific word or name), and a difficulty level for each exercise.
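
    As a rough illustration, a generator of this kind might be wired up as sketched below, assuming the OpenAI Python client. The prompt wording and the generate_exercise helper are our own illustrative assumptions, not the study’s actual implementation.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

    def generate_exercise(theme: str, topic: str, difficulty: str) -> str:
        """Ask an LLM for a beginner exercise personalised to a student's choices."""
        prompt = (
            f"Write a {difficulty} Python programming exercise for a beginner. "
            f"Theme: {theme}. The exercise must mention: {topic}. "
            "Include a problem statement and example input and output."
        )
        response = client.chat.completions.create(
            model="gpt-4",  # the study used GPT-4, the latest model at the time
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(generate_exercise("video games", "Minecraft", "easy"))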

    A student in a computing classroom.

    Arto, Juho, and their students evaluated the personalised exercises that were generated. Arto and Juho used a rubric to evaluate the quality of the exercises and found that they were clear and had the themes and topics that had been requested. Students’ feedback indicated that they found the personalised exercises engaging and useful, and preferred these over randomly generated exercises. 

    However, when Arto and Juho evaluated the personalisation itself, they found that exercises were often only shallowly personalised. In shallow personalisations, the personalised content was added in only one sentence, whereas in deep personalisations, the personalised content was present throughout the whole problem statement. It should be noted that in the examples taken from the seminar below, the terms ‘shallow’ and ‘deep’ were not being used to make a judgement on the worthiness of the topic itself, but rather to describe whether the personalisation was somewhat tokenistic or more meaningful within the exercise. 

    In these examples from the study, the shallow personalisation contains only one sentence to contextualise the problem, while in the deep example the whole problem statement is personalised. 

    The findings suggest that this personalised approach may be particularly effective on large university courses, where instructors might struggle to give one-on-one attention to every student. The findings further suggest that generative AI tools can be used to personalise educational content and help ensure that students remain engaged. 

    How might all this translate to K-12 settings? Learners in primary and secondary schools often have a wide range of prior knowledge, lived experiences, and abilities. Personalised programming tasks could help diverse groups of learners engage with computing, and give educators a deeper understanding of the themes and topics that are interesting for learners. 

    Responding to help requests using large language models

    Another key aspect of Arto and Juho’s work is exploring how LLMs can be used to generate responses to students’ requests for help. They conducted a study using an online platform containing programming exercises for students. Every time a student struggled with a particular exercise, they could submit a help request, which went into a queue for a teacher to review, comment on, and return to the student. 

    The study aimed to investigate whether an LLM could effectively respond to these help requests and reduce the teachers’ workloads. An important principle was that the LLM should guide the student towards the correct answer rather than provide it. 
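
    To make that principle concrete, here is a minimal sketch of how a help-request responder could be instructed to guide rather than tell, again assuming the OpenAI Python client. The system prompt wording and the respond_to_help_request helper are illustrative assumptions, not the study’s implementation.

    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are a teaching assistant. A student is stuck on a programming "
        "exercise. Point out where the problem lies and ask a guiding "
        "question, but never reveal corrected code or the final answer."
    )

    def respond_to_help_request(exercise: str, student_code: str) -> str:
        """Draft a guiding (not telling) response to a student's help request."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # the study used GPT-3.5
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {
                    "role": "user",
                    "content": f"Exercise:\n{exercise}\n\nMy code:\n{student_code}",
                },
            ],
        )
        return response.choices[0].message.content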

    The study used GPT-3.5, which was the newest version at the time. The results showed that the LLM was able to analyse and detect logical and syntactical errors in code, but concerningly, its responses also addressed some non-existent problems! This is an example of hallucination, where the LLM outputs something false that does not reflect the data it was given. 

    An example of how an LLM was able to detect a logical error in code, but also hallucinated and provided an unhelpful, false response about a non-existent syntactical error. 

    The finding that LLMs often generated both helpful and unhelpful problem-solving strategies suggests that this is not a technology to rely on in the classroom just yet. Arto and Juho intend to track the effectiveness of LLMs as newer versions are released, and explained that GPT-4 seems to detect errors more accurately, but there is no systematic analysis of this yet. 

    In primary and secondary computing classes, young learners often face similar challenges to those encountered by university students — for example, the struggle to write error-free code and debug programs. LLMs seemingly have a lot of potential to support young learners in overcoming such challenges, while also being valuable educational tools for teachers without strong computing backgrounds. Instant feedback is critical for young learners who are still developing their computational thinking skills — LLMs can provide such feedback, and could be especially useful for teachers who may lack the resources to give individualised attention to every learner. Again though, further research into LLM-based feedback systems is needed before they can be implemented en masse in classroom settings. 

    Teaching students how to prompt large language models

    Finally, Arto and Juho presented a study where they introduced the idea of ‘Prompt Problems’: programming exercises where students learn how to write effective prompts for AI code generators using a tool called Promptly. In a Prompt Problem exercise, students are presented with a visual representation of a problem that illustrates how input values will be transformed to an output. Their task is to devise a prompt (input) that will guide an LLM to generate the code (output) required to solve the problem. Prompt-generated code is evaluated automatically by the Promptly tool, helping students to refine the prompt until it produces code that solves the problem.
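
    As a rough sketch of the evaluation step, the snippet below runs LLM-generated code against a set of test cases, in the way we understand the workflow to operate. The function and test cases are illustrative assumptions, not Promptly’s actual code.

    def evaluate_generated_code(code, tests, func_name):
        """Run LLM-generated code and check the named function against test cases."""
        namespace = {}
        try:
            exec(code, namespace)  # define the generated function
            func = namespace[func_name]
            return all(func(*args) == expected for args, expected in tests)
        except Exception:
            return False  # any error counts as a failed attempt

    # Example: suppose the visual problem implies a function that doubles a number.
    tests = [((2,), 4), ((5,), 10), ((0,), 0)]
    candidate = "def double(n):\n    return n * 2"  # stand-in for LLM output
    print(evaluate_generated_code(candidate, tests, "double"))  # True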

    Feedback from students suggested that using Prompt Problems was a good way for them to gain experience of using new programming concepts and develop their computational thinking skills. However, students were frustrated that bugs in the code had to be fixed by amending the prompt — it was not possible to edit the code directly. 

    How these findings relate to K-12 computing education is still to be explored, but they indicate that Prompt Problems with text-based programming languages could be valuable exercises for older pupils with a solid grasp of foundational programming concepts. 

    Balancing the use of AI tools with fostering a sense of community

    At the end of the presentation, Arto and Juho summarised their work and hypothesised that as society develops more and more AI tools, computing classrooms may lose some of their community aspects. They posed a very important question for all attendees to consider: “How can we foster an active community of learners in the generative AI era?” 

    In our breakout groups and the subsequent whole-group discussion, we began to think about the role of community. Some points raised highlighted the importance of working together to accurately identify and define problems, and sharing ideas about which prompts would work best to accurately solve the problems. 

    As AI technology continues to evolve, its role in education will likely expand. There was general agreement in the question and answer session that keeping a sense of community at the heart of computing classrooms will be important. 

    Arto and Juho asked seminar attendees to think about encouraging a sense of community. 

    Further resources

    The Raspberry Pi Computing Education Research Centre and Faculty of Education at the University of Cambridge have recently published a teacher guide on the use of generative AI tools in education. The guide provides practical guidance for educators who are considering using generative AI tools in their teaching. 

    Join our next seminar

    In our current seminar series, we are exploring how to teach programming with and without AI technology. Join us at our next seminar on Tuesday, 12 November at 17:00–18:30 GMT to hear Nicholas Gardella (University of Virginia) discuss the effects of using tools like GitHub Copilot on the motivation, workload, emotion, and self-efficacy of novice programmers. To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • Teaching about AI in schools: Take part in our Research and Educator Community Symposium

    Teaching about AI in schools: Take part in our Research and Educator Community Symposium

    Reading Time: 4 minutes

    Worldwide, the use of generative AI systems and related technologies is transforming our lives. From marketing and social media to education and industry, these technologies are being used everywhere, even if it isn’t obvious. Yet, despite the growing availability and use of generative AI tools, governments are still working out how and when to regulate such technologies to ensure they don’t cause unforeseen negative consequences.

    How, then, do we equip our young people to deal with the opportunities and challenges that they are faced with from generative AI applications and associated systems? Teaching them about AI technologies seems an important first step. But what should we teach, when, and how?

    A teacher aids children in the classroom

    Researching AI curriculum design

    The researchers at the Raspberry Pi Foundation have been looking at research that will help inform curriculum design and resource development to teach about AI in school. As part of this work, a number of research themes have been established, which we would like to explore with educators at a face-to-face symposium. 

    These research themes include the SEAME model, a simple way to analyse learning experiences about AI technology, as well as anthropomorphisation and how this might influence the formation of mental models about AI products. These research themes have become the cornerstone of the Experience AI resources we’ve co-developed with Google DeepMind. We will be using these materials to exemplify how the research themes can be used in practice as we review the recently published UNESCO AI competencies.

    A group of educators at a workshop.

    Most importantly, we will also review how we can help teachers and learners move from a rule-based view of problem solving to a data-driven view, from computational thinking 1.0 to computational thinking 2.0.

    A call for teacher input on the AI curriculum

    Over ten years ago, teachers in England experienced a large-scale change in what they needed to teach in computing lessons when programming was more formally added to the curriculum. As we enter a similar period of change — this time to introduce teaching about AI technologies — we want to hear from teachers as we collectively start to rethink our subject and curricula. 

    We think it is imperative that educators’ voices are heard as we reimagine computer science and add data-driven technologies into an already densely packed learning context. 

    Educators at a workshop.

    Join our Research and Educator Community Symposium

    On Saturday, 1 February 2025, we are running a Research and Educator Community Symposium in collaboration with the Raspberry Pi Computing Education Research Centre.

    In this symposium, we will bring together UK educators and researchers to review research themes, competency frameworks, and early international AI curricula and to reflect on how to advance approaches to teaching about AI. This will be a practical day of collaboration to produce suggested key concepts and pedagogical approaches and highlight research needs. 

    Educators and researchers at an event.

    This symposium focuses on teaching about AI technologies, so we will not be looking at which AI tools might be used in general teaching and learning or how they may change teacher productivity. 

    It is vitally important for young people to learn how to use AI technologies in their daily lives so they can become discerning consumers of AI applications. But how should we teach them? Please help us start to consider the best approach by signing up for our Research and Educator Community Symposium by 9 December 2024.

    Information at a glance

    When:  Saturday, 1 February 2025 (10am to 5pm) 

    Where: Raspberry Pi Foundation Offices, Cambridge

    Who: If you have started teaching about AI, are creating related resources, are providing professional development about AI technologies, or if you are planning to do so, please apply to attend our symposium. Travel funding is available for teachers in England.

    Please note we expect to be oversubscribed, so book early and tell us about why you are interested in taking part. We will notify all applicants of the outcome of their application by 11 December.

    Website: LINK

  • How to make debugging a positive experience for secondary school students

    How to make debugging a positive experience for secondary school students

    Reading Time: 6 minutes

    Artificial intelligence (AI) continues to change many areas of our lives, with new AI technologies and software having the potential to significantly impact the way programming is taught at schools. In our seminar series this year, we’ve already heard about new AI code generators that can support and motivate young people when learning to code, AI tools that can create personalised Parsons Problems, and research into how generative AI could improve young people’s understanding of program error messages.

    Two teenage girls do coding activities at their laptops in a classroom.

    At times, it can seem like everything is being automated with AI. However, there are some parts of learning to program that cannot (and probably should not) be automated, such as understanding errors in code and how to fix them. Manually typing code might not be necessary in the future, but it will still be crucial to understand the code that is being generated and how to improve and develop it. 

    As important as debugging might be for the future of programming, it’s still often the task most disliked by novice programmers. Even if program error messages can be explained in the future or tools like LitterBox can flag bugs in an engaging way, actually fixing the issues involves time, effort, and resilience — which can be hard to come by at the end of a computing lesson in the late afternoon with 30 students crammed into an IT room. 

    Debugging can be challenging in many different ways, and it is important to understand why students struggle with it so that we can support them better.

    But what is it about debugging that young people find so hard, even when they’re given enough time to do it? And how can we make debugging a more motivating experience for young people? These are two of the questions that Laurie Gale, a PhD student at the Raspberry Pi Computing Education Research Centre, focused on in our July seminar.

    Laurie has spent the past two years talking to teachers and students and developing tools (a visualiser of students’ programming behaviour and PRIMMDebug, a teaching process and tool for debugging) to understand why many secondary school students struggle with debugging. It has quickly become clear through his research that most issues are due to problematic debugging strategies and students’ negative experiences and attitudes.

    A photograph of Laurie Gale.
    When Laurie Gale started looking into debugging research for his PhD, he noticed that the majority of studies had been with college students, so he decided to change that and find out what would make debugging easier for novice programmers at secondary school.

    When students first start learning how to program, they have to remember a vast amount of new information, such as different variables, concepts, and program designs. Utilising this knowledge is often challenging because they’re already busy juggling all the content they’ve previously learnt and the challenges of the programming task at hand. When error messages inevitably appear that are confusing or misunderstood, it can become extremely difficult to debug effectively. 

    Program error messages are usually not tailored to the age of the programmers and can be hard to understand and overwhelming for novices.

    Given this information overload, students often don’t develop efficient strategies for debugging. When Laurie analysed the debugging efforts of 12- to 14-year-old secondary school students, he noticed some interesting differences between students who were more and less successful at debugging. While successful students generally seemed to make less frequent and more intentional changes, less successful students tinkered frequently with their broken programs, making one- or two-character edits before running the program again. In addition, the less successful students often ran the program soon after beginning the debugging exercise without allowing enough time to actually read the code and understand what it was meant to do. 

    The issue with these behaviours was that they often resulted in students adding errors when changing the program, which then compounded and made debugging increasingly difficult with each run. 74% of students also resorted to spamming, pressing ‘run’ again and again without changing anything. This strategy resonated with many of our seminar attendees, who reported doing the same thing after becoming frustrated. 

    Educators need to be aware of the negative consequences of students’ exasperating and often overwhelming experiences with debugging, especially if students are less confident in their programming skills to begin with. Even though spending 15 minutes on an exercise shows a remarkable level of tenacity and resilience, students’ attitudes to programming — and computing as a whole — can quickly go downhill if their strategies for identifying errors prove ineffective. Debugging becomes a vicious circle: if a student has negative experiences, they are less confident when having to bug-fix again in the future, which can lead to another set of unsuccessful attempts, which can further damage their confidence, and so on. Avoiding this downward spiral is essential. 

    Laurie stresses the importance of understanding the cognitive challenges of debugging and using the right tools and techniques to empower students and support them in developing effective strategies.

    To make debugging a less cognitively demanding activity, Laurie recommends using a range of tools and strategies in the classroom.

    Some ideas of how to improve debugging skills that were mentioned by Laurie and our attendees included:

    • Using frame-based editing tools for novice programmers because such tools encourage students to focus on logical errors rather than accidental syntax errors, which can distract them from understanding the issues with the program. Teaching debugging should also go hand in hand with understanding programming syntax and using simple language. As one of our attendees put it, “You wouldn’t give novice readers a huge essay and ask them to find errors.”
    • Making error messages more understandable, for example, by explaining them to students using large language models.
    • Teaching systematic debugging processes. There are several different approaches to doing this. One of our participants suggested using the scientific method (forming a hypothesis about what is going wrong, devising an experiment that will provide information to see whether the hypothesis is right, and iterating this process) to methodically understand the program and its bugs; a toy sketch of this approach follows this list.
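
    As a toy illustration of that scientific-method approach (the buggy average function below is our own example, not one from the seminar):

    def average(marks):
        total = 0
        for m in marks:
            total += m
        print("experiment: total =", total)  # probe added to test the hypothesis
        return total / (len(marks) - 1)      # bug: off-by-one denominator

    # Observation: average([10, 20, 30]) returns 30.0 instead of 20.0.
    # Hypothesis: the sum is computed incorrectly.
    # Experiment: print the total (the probe above). Result: total is 60,
    # so the sum is fine and the bug must be in the division line.
    # Iterating like this isolates the off-by-one denominator.
    print(average([10, 20, 30]))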

    Most importantly, debugging should not be a daunting or stressful experience. Everyone in the seminar agreed that creating a positive error culture is essential. 

    Teachers in Laurie’s study have stressed the importance of positive debugging experiences.

    Some ideas you could explore in your classroom include:

    • Normalising errors: Stress how normal and important program errors are. Everyone encounters them — a professional software developer in our audience said that they spend about half of their time debugging. 
    • Rewarding perseverance: Celebrate the effort, not just the outcome.
    • Modelling how to fix errors: Let your students write buggy programs and attempt to debug them in front of the class.

    In a welcoming classroom where students are given support and encouragement, debugging can be a rewarding experience. What may at first appear to be a failure — even a spectacular one — can be embraced as a valuable opportunity for learning. As a teacher in Laurie’s study said, “If something should have gone right and went badly wrong but somebody found something interesting on the way… you celebrate it. Take the fear out of it.” 

    Watch the recording of Laurie’s presentation:

    [youtube https://www.youtube.com/watch?v=MKD5GuteMC0?feature=oembed&w=500&h=281]

    In our current seminar series, we are exploring how to teach programming with and without AI.

    Join us at our next seminar on Tuesday, 12 November at 17:00–18:30 GMT to hear Nicholas Gardella (University of Virginia) discuss the effects of using tools like GitHub Copilot on the motivation, workload, emotion, and self-efficacy of novice programmers. To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • How useful do teachers find error message explanations generated by AI? Pilot research results

    How useful do teachers find error message explanations generated by AI? Pilot research results

    Reading Time: 7 minutes

    As discussions of how artificial intelligence (AI) will impact teaching, learning, and assessment proliferate, I was thrilled to be able to add one of my own research projects to the mix. As a research scientist at the Raspberry Pi Foundation, I’ve been working on a pilot research study in collaboration with Jane Waite to explore the topic of program error messages (PEMs). 

    Computer science students at a desktop computer in a classroom.

    PEMs can be a significant barrier to learning for novice coders, as they are often confusing and difficult to understand. This can hinder troubleshooting and progress in coding, and lead to frustration. 

    Recently, various teams have been exploring how generative AI, specifically large language models (LLMs), can be used to help learners understand PEMs. My research in this area specifically explores secondary teachers’ views of the explanations of PEMs generated by an LLM, as an aid for learning and teaching programming, and I presented some of my results in our ongoing seminar series.

    Understanding program error messages is hard at the start

    I started the seminar by setting the scene and describing the current background of research on novices’ difficulty in using PEMs to fix their code, and the efforts made to date to improve these. The three main points I made were that:

    1. PEMs are often difficult to decipher, especially by novices, and there’s a whole research area dedicated to identifying ways to improve them.
    2. Recent studies have employed LLMs as a way of enhancing PEMs. However, the evidence on what makes an ‘effective’ PEM for learning is limited, variable, and contradictory.
    3. There is limited research in the context of K–12 programming education, as well as research conducted in collaboration with teachers to better understand the practical and pedagogical implications of integrating LLMs into the classroom more generally.

    My pilot study aims to fill this gap directly, by reporting K–12 teachers’ views of the potential use of LLM-generated explanations of PEMs in the classroom, and how their views fit into the wider theoretical paradigm of feedback literacy. 

    What did the teachers say?

    To conduct the study, I interviewed eight expert secondary computing educators. The interviews were semi-structured activity-based interviews, where the educators got to experiment with a prototype version of the Foundation’s publicly available Code Editor. This version of the Code Editor was adapted to generate LLM explanations when the question mark next to the standard error message is clicked (see Figure 1 for an example of an LLM-generated explanation). The Code Editor version called the OpenAI GPT-3.5 API to generate explanations based on the following prompt: “You are a teacher talking to a 12-year-old child. Explain the error {error} in the following Python code: {code}”. 

    The Foundation’s Python Code Editor with LLM feedback prototype.
    Figure 1: The Foundation’s Code Editor with LLM feedback prototype.
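
    For readers curious how such a call looks in code, here is a standalone sketch that reproduces the quoted prompt with the OpenAI Python client. It is an approximation for illustration, not the Code Editor’s actual integration.

    from openai import OpenAI

    client = OpenAI()

    def explain_error(error: str, code: str) -> str:
        """Generate a child-friendly explanation using the prompt quoted above."""
        prompt = (
            "You are a teacher talking to a 12-year-old child. "
            f"Explain the error {error} in the following Python code: {code}"
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # the study used GPT-3.5
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(explain_error("NameError: name 'counter' is not defined",
                        "print(counter + 1)"))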

    Fifteen themes were derived from the educators’ responses and these were split into five groups (Figure 2). Overall, the educators’ views of the LLM feedback were that, for the most part, a sensible explanation of the error messages was produced. However, all educators experienced at least one example of invalid content (LLM “hallucination”). Also, despite not being explicitly requested in the LLM prompt, a possible code solution was always included in the explanation.

    Themes and groups derived from teachers’ responses.
    Figure 2: Themes and groups derived from teachers’ responses.

    Matching the themes to PEM guidelines

    Next, I investigated how the teachers’ views correlated with the research conducted to date on enhanced PEMs. I used the guidelines proposed by Brett Becker and colleagues, which consolidate a lot of the research done in this area into ten design guidelines. The guidelines offer best practices on how to enhance PEMs based on empirical research in cognitive science and educational theory. For example, they outline that enhanced PEMs should provide scaffolding for the user, increase readability, reduce cognitive load, use a positive tone, and provide context to the error.
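
    As a minimal illustration of a few of these guidelines (context, positive tone, reduced jargon) applied to a raw Python error, consider the snippet below. The wording is our own example, not taken from the guidelines.

    code = "print(counter + 1)"

    try:
        exec(code)
    except NameError as err:
        name = str(err).split("'")[1]  # pull the undefined name out of the message
        print(
            f"Nearly there! Your code uses the name '{name}', but it hasn't "
            f"been given a value yet. Try assigning something to '{name}' "
            "before you use it."
        )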

    Out of the 15 themes identified in my study, 10 correlated closely with the guidelines. However, those 10 were, for the most part, the themes related to the content, presentation, and validity of the explanations (Figure 3). The themes concerning the teaching and learning process, on the other hand, did not fit the guidelines as well.

    Correlation between teachers’ responses and enhanced PEM design guidelines.
    Figure 3: Correlation between teachers’ responses and enhanced PEM design guidelines.

    Does feedback literacy theory fit better?

    However, when I looked at feedback literacy theory, I was able to correlate all fifteen themes — the theory fits.

    Feedback literacy theory positions the feedback process (which includes explanations) as a social interaction, and accounts for the actors involved in the interaction — the student and the teacher — as well as the relationships between the student, the teacher, and the feedback. We can explain feedback literacy theory using three constructs: feedback types, student feedback literacy, and teacher feedback literacy (Figure 4). 

    Feedback literacy at the intersection between feedback types, student feedback literacy, and teacher feedback literacy.
    Figure 4: Feedback literacy at the intersection between feedback types, student feedback literacy, and teacher feedback literacy.

    From the feedback literacy perspective, feedback can be grouped into four types: telling, guiding, developing understanding, and opening up new perspectives. The feedback type depends on the role of the student and teacher when engaging with the feedback (Figure 5). 

    From the student perspective, the competencies and dispositions students need in order to use feedback effectively can be stated as: appreciating the feedback processes, making judgements, taking action, and managing affect. Finally, from a teacher perspective, teachers apply their feedback literacy skills across three dimensions: design, relational, and pragmatic. 

    In short, according to feedback literacy theory, effective feedback processes entail well-designed feedback with a clear pedagogical purpose, as well as the competencies students and teachers need in order to make sense of the feedback and use it effectively.

    A computer science teacher sits with students at computers in a classroom.

    This theory therefore provided a promising lens for analysing the educators’ perspectives in my study. When the educators’ views were correlated to feedback literacy theory, I found that:

    1. Educators prefer the LLM explanations to fulfil a guiding and developing understanding role, rather than telling. For example, educators prefer to either remove or delay the code solution from the explanation, and they like the explanations to include keywords based on concepts they are teaching in the classroom.
    2. Related to students’ feedback literacy, educators talked about the ways in which the LLM explanations help or hinder students to make judgements and act on the feedback. For example, they talked about how detailed, jargon-free explanations can help students make judgements about the feedback, but invalid explanations can hinder this process. Educators therefore talked about the need for ways to manage such invalid instances. For the most part, however, they didn’t talk about eradicating invalid explanations altogether; instead, they suggested flagging them, using them as counter-examples, and having visibility of them so that they could address them with students.
    3. Finally, from a teacher feedback literacy perspective, educators discussed the need for professional development to manage feedback processes inclusive of LLM feedback (design) and to address issues resulting from reduced opportunities to interact with students (relational and pragmatic). For example, if using LLM explanations saves time by reducing how long teachers spend helping students debug syntax errors (pragmatic), what does that mean for the relationship they have with their students? 

    Conclusion from the study

    By correlating educators’ views to feedback literacy theory as well as enhanced PEM guidelines, we can take a broader perspective on how LLMs might not only shape the content of the explanations, but the whole social interaction around giving and receiving feedback. Investigating ways of supporting students and teachers to practise their feedback literacy skills matters just as much, if not more, than focusing on the content of PEM explanations. 

    This study was a first-step exploration of eight educators’ views on the potential impact of using LLM explanations of PEMs in the classroom. Exactly what the findings of this study mean for classroom practice remains to be investigated, and we also need to examine students’ views on the feedback and its impact on their journey of learning to program. 

    If you want to hear more, you can watch my seminar:

    [youtube https://www.youtube.com/watch?v=fVD2zpGpcY0?feature=oembed&w=500&h=281]

    You can also read the associated paper, or find out more about the research instruments on this project website.

    If any of these ideas resonated with you as an educator, student, or researcher, do reach out — we’d love to hear from you. You can contact me directly at veronica.cucuiat@raspberrypi.org or drop us a line in the comments below. 

    Join our next seminar

    The focus of our ongoing seminar series is on teaching programming with or without AI. Check out the schedule of our upcoming seminars.

    To take part in the next seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    You can also catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK

  • Adapting primary Computing resources for cultural responsiveness: Bringing in learners’ identity

    Adapting primary Computing resources for cultural responsiveness: Bringing in learners’ identity

    Reading Time: 6 minutes

    In recent years, the emphasis on creating culturally responsive educational practices has gained significant traction in schools worldwide. This approach aims to tailor teaching and learning experiences to better reflect and respect the diverse cultural backgrounds of students, thereby enhancing their engagement and success in school. In one of our recent research studies, we collaborated with a small group of primary school Computing teachers to adapt existing resources to be more culturally responsive to their learners.

    Teachers work together to identify adaptations to Computing lessons.
    At a workshop for the study, teachers collaborated to identify adaptations to Computing lessons

    We used a set of ten areas of opportunity to scaffold and prompt teachers to look for ways that Computing resources could be adapted, including making changes to the content or the context of lessons, and using pedagogical techniques such as collaboration and open-ended tasks. 

    Today’s blog lays out our findings about how teachers can bring students’ identities into the classroom as an entry point for culturally responsive Computing teaching.

    Collaborating with teachers

    A group of twelve primary teachers, from schools spread across England, volunteered to participate in the study. The primary objective was for our research team to collaborate with these teachers to adapt two units of work about creating digital images and vector graphics so that they better aligned with the cultural contexts of their students. The research team facilitated an in-person, one-day workshop where the teachers could discuss their experiences and work in small groups to adapt materials that they then taught in their classrooms during the following term.

    A shared focus on identity

    As the workshop progressed, an interesting pattern emerged. Despite the diversity of schools and student populations represented by the teachers, each group independently decided to focus on the theme of identity in their adaptations. This was not a directive from the researchers, but rather a spontaneous alignment of priorities among the teachers.

    An example slide from a culturally adapted activity to create a vector graphic emoji.
    An example of an adapted Computing activity to create a vector graphic emoji.

    The focus on identity manifested in various ways. For some teachers, it involved adding diverse role models so that students could see themselves represented in computing, while for others, it meant incorporating discussions about students’ own experiences into the lessons. However, the most compelling commonality across all groups was the decision to have students create a digital picture that represented something important about themselves. This digital picture could take many forms — an emoji, a digital collage, an avatar to add to a game, or even a fantastical animal. The goal of these activities was to provide students with a platform to express aspects of their identity that were significant to them whilst also practising the skills to manipulate vector graphics or digital images.

    Funds of identity theory

    After the teachers had returned to their classrooms and taught the adapted lessons to their students, we analysed the digital pictures created by the students using funds of identity theory. This theory explains how our personal experiences and backgrounds shape who we are and what makes us unique and individual, and argues that our identities are not static but are continuously shaped and reshaped through interactions with the world around us. 

    Keywords for the funds of identity framework, drawing on work by Esteban-Guitart and Moll (2014) and Poole (2017).
    Funds of identity framework, drawing on work by Esteban-Guitart and Moll (2014) and Poole (2017).

    In the context of our study, this theory argues that students bring their funds of identity into their Computing classrooms, including their cultural heritage, family traditions, languages, values, and personal interests. Through the image editing and vector graphics activities, students were able to create what the funds of identity theory refers to as identity artefacts. This allowed them to explore and highlight the various elements that hold importance in their lives, illuminating different facets of their identities. 

    Students’ funds of identity

    The use of the funds of identity theory provided a robust framework for understanding the digital artefacts created by the students. We analysed the teachers’ descriptions of the artefacts, paying close attention to how students represented their identities in their creations.

    1. Personal interests and values 

    One significant aspect of the analysis centred on the personal interests and values reflected in the artefacts. Some students chose to draw on their practical funds of identity and create images about hobbies that were important to them, such as drawing or playing football. Others focused on existential funds of identity and represented values that were central to their personalities, such as being cool, chatty, or quiet.

    2. Family and community connections

    Many students also chose to include references to their family and community in their artefacts. Social funds of identity were displayed when students featured family members in their images. Some students also drew on their institutional funds, adding references to their school, or geographical funds, by showing places such as the local area or a particular country that held special significance for them. These references highlighted the importance of familial and communal ties in shaping the students’ identities.

    3. Cultural representation

    Another common theme was the way students represented their cultural backgrounds. Some students chose to highlight their cultural funds of identity, creating images that included their heritage, including their national flag or traditional clothing. Other students incorporated ideological aspects of their identity that were important to them because of their faith, including Catholicism and Islam. This aspect of the artefacts demonstrated how students viewed their cultural heritage as an integral part of their identity.

    Implications for culturally responsive Computing teaching

    The findings from this study have several important implications. Firstly, the spontaneous focus on identity by the teachers suggests that identity is a powerful entry point for culturally responsive Computing teaching. Secondly, the application of the funds of identity theory to the analysis of student work demonstrates the diverse cultural resources that students bring to the classroom and highlights ways to adapt Computing lessons in ways that resonate with students’ lived experiences.

    An example of an identity artefact made by one of the students in a culturally adapted lesson on vector graphics.

    However, we also found that teachers often had to carefully support students to illuminate their funds of identity. Sometimes students found it difficult to create images about their hobbies, particularly if they were from backgrounds with fewer social and economic opportunities. We also observed that when teachers modelled an identity artefact themselves, perhaps to show an example for students to aim for, students then sometimes copied the funds of identity revealed by the teacher rather than drawing on their own funds. These points need to be taken into consideration when using identity artefact activities. 

    Finally, these findings relate to lessons about image editing and vector graphics that were taught to students aged 8 to 10 years old in England, and it remains to be explored how students in other countries or of different ages might reveal their funds of identity in the Computing classroom.

    Moving forward with cultural responsiveness

    The study demonstrated that when Computing teachers are given the opportunity to collaborate and reflect on their practice, they can develop innovative ways to make their teaching more culturally responsive. The focus on identity, as seen in the creation of identity artefacts, provided students with a platform to express themselves and connect their learning to their own lives. By understanding and valuing the funds of identity that students bring to the classroom, teachers can create a more equitable and empowering educational experience for all learners.

    Two learners do physical computing in the primary school classroom.

    We’ve written about this study in more detail in a full paper and a poster paper, which will be published at the WiPSCE conference next week. 

    We would like to thank all the researchers who worked on this project, including our collaborators Lynda Chinaka from the University of Roehampton and Alex Hadwen-Bennett from King’s College London. Finally, we are grateful to Cognizant for funding this academic research, and to the cohort of primary Computing teachers for their enthusiasm, energy, and creativity, and their commitment to this project.

    Website: LINK

  • Empowering undergraduate computer science students to shape generative AI research

    Empowering undergraduate computer science students to shape generative AI research

    Reading Time: 6 minutes

    As use of generative artificial intelligence (or generative AI) tools such as ChatGPT, GitHub Copilot, or Gemini becomes more widespread, educators are thinking carefully about the place of these tools in their classrooms. For undergraduate education, there are concerns about the role of generative AI tools in supporting teaching and assessment practices. For undergraduate computer science (CS) students, generative AI also has implications for their future career trajectories, as it is likely to be relevant across many fields.

    Dr Stephen MacNeil, Andrew Tran, and Irene Hou (Temple University)

    In a recent seminar in our current series on teaching programming (with or without AI), we were delighted to be joined by Dr Stephen MacNeil, Andrew Tran, and Irene Hou from Temple University. Their talk showcased several research projects involving generative AI in undergraduate education, and explored how undergraduate research projects can create agency for students in navigating the implications of generative AI in their professional lives.

    Differing perceptions of generative AI

    Stephen began by discussing the media coverage around generative AI. He highlighted a binary split in how the media represent it: some representations portray generative AI as signalling the end of higher education, including programming in CS courses, while others emphasise the problems it could solve for educators, such as improving access to high-quality help (specifically, virtual assistance) or personalised learning experiences.

    Students sitting in a lecture at a university.

    As part of a recent ITiCSE working group, Stephen and colleagues conducted a survey of undergraduate CS students and educators and found conflicting views about the perceived benefits and drawbacks of generative AI in computing education. Despite this divide, most CS educators reported that they were planning to incorporate generative AI tools into their courses. Conflicting views were also noted between students and educators on what is allowed in terms of generative AI tools and whether their universities had clear policies around their use.

    The role of generative AI tools in students’ help-seeking

    There is growing interest in how undergraduate CS students are using generative AI tools. Irene presented a study in which her team explored the effect of generative AI on undergraduate CS students’ help-seeking preferences. Help-seeking can be understood as any actions or strategies undertaken by students to receive assistance when encountering problems. Help-seeking is an important part of the learning process, as it requires metacognitive awareness to understand that a problem exists that requires external help. Previous research has indicated that instructors, teaching assistants, student peers, and online resources (such as YouTube and Stack Overflow) can assist CS students. However, as generative AI tools are now widely available to assist in some tasks (such as debugging code), Irene and her team wanted to understand which resources students valued most, and which factors influenced their preferences. Their study consisted of a survey of 47 students, and follow-up interviews with 8 additional students. 

    Undergraduate CS student use of help-seeking resources

    Responding to the survey, students stated that they used online searches or support from friends/peers more frequently than two generative AI tools, ChatGPT and GitHub Copilot; however, Irene indicated that as data collection took place at the beginning of summer 2023, it is possible that students were not familiar with these tools or had not used them yet. In terms of students’ experiences in seeking help, students found online searches and ChatGPT were faster and more convenient, though they felt these resources led to less trustworthy or lower-quality support than seeking help from instructors or teaching assistants.

    Two undergraduate students are seated at a desk, collaborating on a computing task.

    Some students felt more comfortable seeking help from ChatGPT than peers as there were fewer social pressures. Comparing generative AI tools and online searches, one student highlighted that unlike Stack Overflow, solutions generated using ChatGPT and GitHub Copilot could not be verified by experts or other users. Students who received the most value from using ChatGPT in seeking help either (i) prompted the model effectively when requesting help or (ii) viewed ChatGPT as a search engine or comprehensive resource that could point them in the right direction. Irene cautioned that some students struggled to use generative AI tools effectively as they had limited understanding of how to write effective prompts.

    Using generative AI tools to produce code explanations

    Andrew presented a study in which students in a web software development course evaluated the usefulness of different types of code explanations generated by a large language model. Based on Likert scale data, the team found that line-by-line explanations were less useful for students than high-level summary or concept explanations, even though line-by-line explanations were the most popular. They also found that explanations were less useful when students already knew what the code did. Andrew and his team then qualitatively analysed code explanations that had been given a low rating and found that these were overly detailed (i.e. focusing on superfluous elements of the code), were the wrong type of explanation, or mixed code with explanatory text. Despite the flaws of some explanations, they concluded that students found explanations relevant and useful to their learning.

    Perceived usefulness of code explanation types

    Using generative AI tools to create multiple choice questions

    In a separate study, Andrew and his team investigated the use of ChatGPT to generate novel multiple choice questions for computing courses. The researchers prompted two models, GPT-3 and GPT-4, with example question stems to generate correct answers and distractors (incorrect but plausible choices). Across two data sets of example questions, GPT-4 significantly outperformed GPT-3 in generating the correct answer (75.3% and 90% vs 30.8% and 36.7% of all cases). GPT-3 performed less well at providing the correct answer when faced with negatively worded questions. Both models generated correct answers as distractors across both sets of example questions (GPT-3: 11.1% and 10% of cases; GPT-4: 9.9% and 17.8%). They concluded that educators would still need to verify whether answers were correct and distractors were appropriate.
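
    To make this concrete, the short sketch below shows one way a question-generation step along these lines might look. It is an illustration rather than the study’s actual pipeline: the prompt wording, the JSON response format, and the model name are assumptions, and it uses the OpenAI Python client with an API key set in the environment.

    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # An example question stem of the kind used to seed the model.
    example_stem = "Which data structure removes items in first-in, first-out (FIFO) order?"

    prompt = (
        "You write multiple choice questions for an introductory computing course.\n"
        f"Here is an example question stem: {example_stem}\n"
        "Write one new question on a related topic. Respond with JSON containing "
        "'stem', 'correct_answer', and 'distractors' (three incorrect but plausible options)."
    )

    # The model name is a placeholder; the study itself compared GPT-3 and GPT-4.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )

    # Assumes the model follows the requested JSON format; in practice the output
    # needs checking, which is exactly the verification step the study recommends.
    question = json.loads(response.choices[0].message.content)
    print(question["stem"])
    for option in [question["correct_answer"], *question["distractors"]]:
        print("-", option)

    Asking for a structured response keeps the correct answer and the distractors separable in code, so an educator can review both before a question reaches students.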

    An undergraduate student is raising his hand up during a lecture at a university.

    Undergraduate students shaping the direction of generative AI research

    Despite student concerns about generative AI and its implications for the world of work, the seminar ended with a hopeful message: undergraduate students are being proactive in conducting their own research and shaping the direction of generative AI research in computer science education. Stephen concluded the seminar by celebrating the undergraduate students who are undertaking these research projects.

    You can watch the seminar here:

    [youtube https://www.youtube.com/watch?v=Pq-d6wipGRQ]

    If you are interested in learning more about Stephen’s work on generative AI, you can read about how undergraduate students used generative AI tools to create analogies for recursion. If you would like to experiment with using generative AI tools to assist with debugging, you could try using Gemini, ChatGPT, or Copilot.

    Join our next seminar

    Our current seminar series is on teaching programming with or without AI. 

    In our next seminar, on 16 July from 17:00 to 18:30 BST, we welcome Laurie Gale (Raspberry Pi Computing Education Research Centre, University of Cambridge), who will discuss how to teach debugging to secondary school students. To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    The schedule of our upcoming seminars is available online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK

  • Imagining students’ progression in the era of generative AI

    Imagining students’ progression in the era of generative AI

    Reading Time: 6 minutes

    Generative artificial intelligence (AI) tools are becoming more easily accessible to learners and educators, and increasingly better at generating code solutions to programming tasks, code explanations, computing lesson plans, and other learning resources. This raises many questions for educators in terms of what and how we teach students about computing and AI, and AI’s impact on assessment, plagiarism, and learning objectives.

    Brett Becker.

    We were honoured to have Professor Brett Becker (University College Dublin) join us as part of our ‘Teaching programming (with or without AI)’ seminar series. He is uniquely placed to comment on teaching computing using AI tools, having been involved in many initiatives relevant to computing education at different levels, in Ireland and beyond.

    In a computing classroom, two girls concentrate on their programming task.

    Brett’s talk focused on what educators and education systems need to do to prepare all students — not just those studying Computing — so that they are equipped with sufficient knowledge about AI to make their way from primary school to secondary and beyond, whether it be university, technical qualifications, or work.

    How do AI tools currently perform?

    Brett began his talk by illustrating the increase in performance of large language models (LLMs) in solving first-year undergraduate programming exercises: he compared the findings from two recent studies he was involved in as part of an ITiCSE Working Group. In the first study — from 2021 — the results generated by GPT-3 were similar to those of students in the top quartile. By the second study in 2023, GPT-4’s performance matched that of a top student (Figure 1).

    A graph comparing exam scores.

    Figure 1: Student scores on Exam 1 and Exam 2, represented by circles. GPT-3’s 2021 score is represented by the blue ‘x’, and GPT-4’s 2023 score on the same questions is represented by the red ‘x’.

    Brett also explained that the study found some models were capable of solving current undergraduate programming assessments almost error-free, and could solve the Irish Leaving Certificate and UK A level Computer Science exams.

    What are challenges and opportunities for education?

    This level of performance raises many questions for computing educators about what is taught and how to assess students’ learning. To address this, Brett referred to his 2023 paper, which included findings from a literature review and a survey on students’ and instructors’ attitudes towards using LLMs in computing education. This analysis has helped him identify several opportunities as well as the ethical challenges education systems face regarding generative AI. 

    The opportunities include: 

    • The generation of unique content, lesson plans, programming tasks, or feedback to help educators with workload and productivity
    • More accessible content and tools generated by AI apps to make Computing more broadly accessible to more students
    • More engaging and meaningful student learning experiences, including using generative AI to enable creativity and using conversational agents to augment students’ learning
    • The impact on assessment practices, both in terms of automating the marking of current assessments as well as reconsidering what is assessed and how

    Some of the challenges include:

    • The lack of reliability and accuracy of outputs from generative AI tools
    • The need to educate everyone about AI to create a baseline level of understanding
    • The legal and ethical implications of using AI in computing education and beyond
    • How to deal with questionable or even intentionally harmful uses of AI and mitigating the consequences of such uses

    Programming as a basic skill for all subjects

    Next, Brett talked about concrete actions that he thinks we need to take in response to these opportunities and challenges. 

    He emphasised our responsibility to keep students safe. One way to do this is to give all students a baseline of knowledge about AI, at an age-appropriate level, to enable them to keep themselves safe.

    Secondary school age learners in a computing classroom.

    He also discussed the increased relevance of programming to all subjects, not only Computing, in a similar way to how reading and mathematics transcend the boundaries of their subjects, and the need he sees to adapt subjects and curricula to that effect. 

    As an example of how rapidly curricula may need to change with increasing AI use by students, Brett looked at the Irish Computer Science specification for “senior cycle” (the final two years of second-level education, ages 16–18). This curriculum was developed in 2018 and, in Brett’s opinion, remains a strong computing curriculum. However, he pointed out that it only contains a single learning outcome on AI.

    To help educators bridge this gap, Brett and Keith Quille included two chapters dedicated to AI, machine learning, and ethics and computing in the book they wrote to accompany the curriculum. Brett believes these types of additional resources may be instrumental for teaching and learning about AI, as resources are more adaptable and easier to update than curricula.

    Generative AI in computing education

    Taking the opportunity to use generative AI to reimagine new types of programming problems, Brett and colleagues have developed Promptly, a tool that allows students to practise prompting AI code generators. This tool provides a combined approach to learning about generative AI while learning programming with an AI tool. 

    Promptly is intended to help students learn how to write effective prompts. It encourages students to specify and decompose the programming problem they want to solve, read the code generated, compare it with test cases to discern why it is failing (if it is), and then update their prompt accordingly (Figure 2). 

    An example of the Promptly interface.

    Figure 2: Example of a student’s use of Promptly.
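
    The loop that Promptly scaffolds can be sketched in a few lines of Python. The sketch below illustrates the workflow rather than the tool’s implementation: the code generator call, the model name, and the square() practice task are all assumptions.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_code(prompt: str) -> str:
        """Stand-in for an AI code generator: returns Python source for a prompt."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": "Reply with plain Python code only."},
                {"role": "user", "content": prompt},
            ],
        )
        return response.choices[0].message.content

    # Test cases the generated function must pass: (input, expected output).
    TEST_CASES = [(2, 4), (3, 9), (-5, 25)]

    def failing_cases(source: str) -> list:
        """Run the generated code and collect the test cases it fails."""
        namespace = {}
        exec(source, namespace)  # assumes the generated code defines square()
        func = namespace["square"]
        return [(x, want, func(x)) for x, want in TEST_CASES if func(x) != want]

    prompt = input("Describe the function you want: ")
    while True:
        failures = failing_cases(generate_code(prompt))
        if not failures:
            print("All test cases pass.")
            break
        # Reading the failures alongside the generated code is what helps
        # the student decide how to refine their prompt.
        for x, want, got in failures:
            print(f"square({x}) returned {got}, expected {want}")
        prompt = input("Revise your prompt and try again: ")

    The design point is that the test cases, not the student’s eyes alone, judge the generated code: the student’s effort shifts to specifying and decomposing the problem precisely, which is the skill the tool aims to exercise.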

    Early undergraduate student feedback points to Promptly being a useful way to teach programming concepts and encourage metacognitive programming skills. The tool is further described in a paper, and whilst the initial evaluation was aimed at undergraduate students, Brett positioned it as a secondary school–level tool as well. 

    Brett hopes that by using generative AI tools like this, it will be possible to better equip a larger and more diverse pool of students to engage with computing.

    Re-examining the concept of programming

    Brett concluded his seminar by broadening the relevance of programming to all learners, while challenging us to expand our perspectives of what programming is. If we define programming as a way of prompting a machine to get an output, LLMs allow all of us to do so without the need for learning the syntax of traditional programming languages. Taking that view, Brett left us with a question to consider: “How do we prepare for this from an educational perspective?”

    You can watch Brett’s presentation here:

    [youtube https://www.youtube.com/watch?v=n0BZq8uRutQ]

    Join our next seminar

    The focus of our ongoing seminar series is on teaching programming with or without AI. 

    For our next seminar, on Tuesday 11 June from 17:00 to 18:30 GMT, we’re joined by Veronica Cucuiat (Raspberry Pi Foundation), who will talk about whether LLMs could be employed to help understand programming error messages, which can present a significant obstacle to anyone new to coding, especially young people.

    To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK

  • An update from the Raspberry Pi Computing Education Research Centre

    An update from the Raspberry Pi Computing Education Research Centre

    Reading Time: 7 minutes

    It’s been nearly two years since the launch of the Raspberry Pi Computing Education Research Centre. Today, the Centre’s Director Dr Sue Sentance shares an update about the Centre’s work.

    The Raspberry Pi Computing Education Research Centre (RPCERC) is unique for two reasons: we are a joint initiative between the University of Cambridge and the Raspberry Pi Foundation, with a team that spans both; and we focus exclusively on the teaching and learning of computing to young people, from their early years to the end of formal education.

    Educators and researchers mingle at a conference.
    At the RPCERC launch in July 2022

    We’ve been very busy at the RPCERC since we held our formal launch event in July 2022. We would love everyone who follows the Raspberry Pi Foundation’s work to keep an eye on what we are up to too: you can do that by checking out our website and signing up to our termly newsletter.

    What does the RPCERC do?

    As the name implies, our work is focused on research into computing education and all our research projects align to one of the following themes:

    • AI education
    • Broadening participation in computing
    • Computing around the world
    • Pedagogy and the teaching of computing
    • Physical computing
    • Programming education

    These themes encompass substantial research questions, so it’s clear we have a lot to do! We have only been established for a few years, but we’ve made a good start and are grateful to those who have funded additional projects that we are working on.

    A student in a computing classroom.

    In our work, we endeavour to maintain two key principles that are hugely important to us: sharing our work widely and working collaboratively. We strive to engage in the highest-quality rigorous research and to publish in academic venues; however, we make sure these publications are openly available to those outside academia. We also favour research that is participatory and collaborative, so we work closely with teachers and other stakeholders.

    Within our six themes we are running a number of projects, and I’ll outline a few of these here.

    Exploring physical computing in primary schools

    Physical computing is more engaging than simply learning programming and computing skills on screen because children can build interactive and tangible artefacts that exist in the real world. But does this kind of engagement have any lasting impact? Do positive experiences with technology lead to more confidence and creativity later on? These are just some of the questions we aim to answer.

    Three young people working on a computing project.

    We are delighted to be starting a new longitudinal project investigating the experience of young people who have engaged with the BBC micro:bit and other physical computing devices. We aim to develop insights into changes in attitudes, agency, and creativity at key points as students progress from primary through to secondary education in the UK. 

    To do this, we will be following a cohort of children over the course of five years — as they transition from primary school to secondary school — to give us deeper insights into the longer-term impact of working with physical computing than has been possible previously with shorter projects. This longer-term project has been made possible through a generous donation from the Micro:bit Educational Foundation, the BBC, and Nominet. 

    Do follow our research to see what we find out!

    Generative AI for computing teachers

    We are conducting a range of projects in the general area of artificial intelligence (AI), looking both at how to teach and learn AI, and how to learn programming with the help of AI. In our work, we often use the SEAME framework to simplify and categorise aspects of the teaching and learning of AI. However, for many teachers, it’s the use of AI that has generated the most interest for them, both for general productivity and for innovative ways of teaching and learning. 

    A group of students and a teacher at the Coding Academy in Telangana.

    In one of our AI-related projects, we have been working with a group of computing teachers and the Faculty of Education to develop guidance for schools on how generative AI can be useful in the context of computing teaching. Computing teachers are at the forefront of this potential revolution for school education, so we’ve enjoyed the opportunity to set up this researcher–teacher working group to investigate these issues. We hope to be publishing our guidance in June — again watch this space!

    Culturally responsive computing teaching

    We’ve carried out a few different projects in the last few years around culturally responsive computing teaching in schools, which to our knowledge are unique for the UK setting. Much of the work on culturally responsive teaching and culturally relevant pedagogy (which stem from different theoretical bases) has been conducted in the USA, and we believe we are the only research team in the UK working on the implications of culturally relevant pedagogy research for computing teaching here. 

    Two young people learning together at a laptop.

    In one of our studies, we worked with a group of teachers in secondary and primary schools to explore ways in which they could develop and reflect on the meaning of culturally responsive computing teaching in their context. We’ve published on this work, and also produced a technical report describing the whole project. 

    In another project, we worked with primary teachers to explore how existing resources could be adapted to be appropriate for their specific context and children. These projects have been funded by Cognizant and Google. 

    ‘Core’ projects

    As well as research that is externally funded, it’s important that we work on more long-term projects that build on our research expertise and where we feel we can make a contribution to the wider community. 

    We have four projects that I would put into this category:

    1. Teacher research projects
      This year, we’ve been running a project called Teaching Inquiry in Computing Education, which supports teachers to carry out their own research in the classroom.
    2. Computing around the world
      Following on from our survey of UK and Ireland computing teachers and earlier work on surveying teachers in Africa and globally, we are developing a broader picture of how computing education in school is growing around the world. Watch this space for more details.
    3. PRIMM
      We devised the Predict–Run–Investigate–Modify–Make lesson structure for programming a few years ago and continue to research this area.
    4. LCT semantic wave theory
      Together with universities in London and Australia, we are exploring ways in which computing education can draw on legitimation code theory (LCT).

    We are currently looking for a research associate to lead on one or more of these core projects, so if you’re interested, get in touch. 

    Developing new computing education researchers

    One of our most important goals is to support new researchers in computing education, and this involves recruiting and training PhD students. During 2022–2023, we welcomed our very first PhD students, Laurie Gale and Salomey Afua Addo, and we will be saying hello to two more in October 2024. PhD students are an integral part of the RPCERC and make a great contribution across the team, as well as focusing on their own particular areas of interest in depth. Laurie and Salomey have also been out and about visiting local schools.

    Laurie’s PhD study focuses on debugging, a key element of programming education. He is looking at lower secondary school students’ attitudes to debugging, their debugging behaviour, and how to teach debugging. If you’d like to take part in Laurie’s research, you can contact us at rpcerc-enquiries@cst.cam.ac.uk.

    Salomey’s work is in the area of AI education in K–12 and spans the UK and Ghana. Her first study considered the motivation of teachers in the UK to teach AI, and she has spent some weeks in Ghana conducting a case study on the way in which Ghana introduced AI into its curriculum in 2020.

    Thanks!

    We are very grateful to the Raspberry Pi Foundation for providing a donation which established the RPCERC and has given us financial security for the next few years. We’d also like to express our thanks for other donations and project funding we’ve received from Google, Google DeepMind, the Micro:bit Educational Foundation, BBC, and Nominet. If you would like to work with us, please drop us a line at rpcerc-enquiries@cst.cam.ac.uk.

    Website: LINK

  • Insights into students’ attitudes to using AI tools in programming education

    Insights into students’ attitudes to using AI tools in programming education

    Reading Time: 4 minutes

    Educators around the world are grappling with the question of whether to use artificial intelligence (AI) tools in the classroom. As more and more teachers start exploring ways to use these tools for teaching and learning computing, there is an urgent need to understand the impact of their use, to make sure they do not exacerbate the digital divide and leave some students behind.

    A teenager learning computer science.

    Sri Yash Tadimalla from the University of North Carolina and Dr Mary Lou Maher, Director of Research Community Initiatives at the Computing Research Association, are exploring how student identities affect their interaction with AI tools and their perceptions of the use of AI tools. They presented findings from two of their research projects in our March seminar.

    How students interact with AI tools 

    A common approach in research is to begin with a preliminary study involving a small group of participants in order to test a hypothesis, ways of collecting data from participants, and an intervention. Yash explained that this was the approach they took with a group of 25 undergraduate students on an introductory Java programming course. The researchers observed the students as they performed a set of programming tasks using an AI chatbot tool (ChatGPT) or an AI code generator tool (GitHub Copilot).

    The data analysis uncovered five emergent attitudes of students using AI tools to complete programming tasks: 

    • Highly confident students rely heavily on AI tools and are confident about the quality of the code generated by the tool without verifying it
    • Cautious students are careful in their use of AI tools and verify the accuracy of the code produced
    • Curious students are interested in exploring the capabilities of the AI tool and are likely to experiment with different prompts 
    • Frustrated students struggle with using the AI tool to complete the task and are likely to give up 
    • Innovative students use the AI tool in creative ways, for example to generate code for other programming tasks

    Whether these attitudes are common among other, larger groups of students requires more research. However, these preliminary groupings may be useful for educators who want to understand their students and how to support them with targeted instructional techniques. For example, highly confident students may need encouragement to check the accuracy of AI-generated code, while frustrated students may need assistance to use the AI tools to complete programming tasks.

    An intersectional approach to investigating student attitudes

    Yash and Mary Lou explained that their next research study took an intersectional approach to student identity. Intersectionality is a way of exploring identity using more than one defining characteristic, such as ethnicity and gender, or education and class. Intersectional approaches acknowledge that a person’s experiences are shaped by the combination of their identity characteristics, which can sometimes confer multiple privileges or lead to multiple disadvantages.

    A student in a computing classroom.

    In the second research study, 50 undergraduate students participated in programming tasks and their approaches and attitudes were observed. The gathered data was analysed using intersectional groupings, such as:

    • Students who were female and the first generation in their family to attend university
    • Students who were female and from an underrepresented ethnic group

    Although the researchers observed differences amongst the groups of students, there was not enough data to determine whether these differences were statistically significant.
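
    As an illustration of what an intersectional grouping can look like in analysis code, here is a minimal sketch using pandas. The rows and column names are invented for the example; they are not the study’s data or code.

    import pandas as pd

    # Toy illustrative rows, not study data: each row represents one student.
    df = pd.DataFrame(
        {
            "gender": ["female", "female", "male", "female", "male"],
            "first_generation": [True, False, False, True, True],
            "underrepresented_ethnicity": [False, True, False, True, False],
            "found_ai_helpful": [4, 5, 3, 2, 4],  # e.g. a 1-5 Likert response
        }
    )

    # An intersectional grouping combines two identity characteristics at once,
    # rather than analysing each characteristic in isolation.
    summary = (
        df.groupby(["gender", "first_generation"])["found_ai_helpful"]
          .agg(["mean", "count"])
    )
    print(summary)

    A small count in any group is a warning sign: as the researchers noted, observed differences between groups may not be statistically significant when each group contains only a handful of students.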

    Who thinks using AI tools should be considered cheating? 

    Participating students were also asked about their views on using AI tools, such as “Did having AI help you in the process of programming?” and “Does your experience with using this AI tool motivate you to continue learning more about programming?”

    The same intersectional approach was taken towards analysing students’ answers. One surprising finding stood out: when asked whether using AI tools to help with programming tasks should be considered cheating, students from more privileged backgrounds agreed that this was true, whilst students with less privilege disagreed and said it was not cheating.

    This finding comes from a very small group of students at a single university, but Yash and Mary Lou called for other researchers to replicate the study with other groups of students to investigate further.

    You can watch the full seminar here:

    [youtube https://www.youtube.com/watch?v=0oIGA7NJREI]

    Acknowledging differences to prevent deepening divides

    As researchers and educators, we often hear that we should educate students about the importance of making AI ethical, fair, and accessible to everyone. However, simply hearing this message isn’t the same as truly believing it. If students’ identities influence how they view the use of AI tools, it could affect how they engage with these tools for learning. Without recognising these differences, we risk continuing to create wider and deeper digital divides. 

    Join our next seminar

    The focus of our ongoing seminar series is on teaching programming with or without AI.

    For our next seminar, on Tuesday 16 April from 17:00 to 18:30 GMT, we’re joined by Brett A. Becker (University College Dublin), who will talk about how generative AI can be used effectively in secondary school programming education and how it can be leveraged so that students can be best prepared for continuing their education or beginning their careers. To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK

  • New resource to help teachers make Computing culturally relevant

    New resource to help teachers make Computing culturally relevant

    Reading Time: 6 minutes

    Here at the Raspberry Pi Foundation, we believe that it’s important that our academic research has a practical application. An important area of research we are engaged in is broadening participation in computing education by investigating how the subject can be made more culturally relevant — we have published several studies in this area. 


    However, we know that busy teachers do not have time to keep abreast of all the latest research. This is where our Pedagogy Quick Reads come in. They show teachers how an area of current research either has been or could be applied in practice. 

    Our new Pedagogy Quick Read summarises the central tenets of culturally relevant pedagogy (the theory) and then lays out 10 areas of opportunity as concrete ways for you to put the theory into practice.

    Why is culturally relevant pedagogy necessary?

    Computing remains an area where many groups of people are underrepresented, including those marginalised because of their gender, ethnicity, socio-economic background, additional educational needs, or age. For example, the BCS’ Annual Diversity Report 2023 records that in the UK, the proportion of women working in tech was 20% in 2021, and that Black women made up only 0.7% of tech specialists. Beyond gender and ethnicity, pupils who have fewer social and economic opportunities ‘don’t see Computing as a subject for somebody like them’, a recent report from Teach First found.

    In a computing classroom, a girl laughs at what she sees on the screen.

    The fact that in the UK, 94% of girls and 79% of boys drop Computing at age 14 should be of particular concern for Computing educators. This last statistic makes it painfully clear that there is much work to be done to broaden the appeal of Computing in schools. One approach to make the subject more inclusive and attractive to young people is to make it more culturally relevant. 

    As part of our research to help teachers effectively adapt their curriculum materials to make them culturally relevant and engaging for their learners, we’ve identified 10 areas of opportunity — areas where teachers can choose to take actions to bring the latest research on culturally relevant pedagogy into their classrooms, right here, right now. 

    Applying the areas of opportunity in your classroom

    The Pedagogy Quick Read gives teachers ideas for how they can use the areas of opportunity (AOs) to begin to review their own curriculum, teaching materials, and practices. We recommend picking one area initially and focusing on it, perhaps for a term. This helps you avoid being overwhelmed, and is particularly useful if you are trying to reach a particular group, for example Year 9 girls, low-attaining boys, or learners who lack confidence or motivation.

    Two learners do physical computing in the primary school classroom.

    For example, one simple intervention is AO1 ‘Finding out more about our learners’. It’s all too easy for teachers to assume that they know what their students’ interests are. And getting to know your students can be especially tricky at secondary level, when teachers might only see a class once a fortnight or in a carousel. 

    However, finding out about your learners can be easily achieved in an online survey homework task, set at the beginning of a new academic year or term or unit of work. Using their interests, along with considerations of their backgrounds, families, and identities as inputs in curriculum planning can have tangible benefits: students may begin to feel an increased sense of belonging when they see their interests or identities reflected in the material later used. 

    How we’re using the AOs

    The Quick Read presents two practical case studies of how we’ve used the 10 AOs to adapt and assess different lesson materials to increase their relevance for learners.

    Case study 1: Teachers in UK primary schools adapt resources

    As we’ve shared before, we implemented culturally relevant pedagogy as part of UK primary school teachers’ professional development in a recent research project. The Quick Read provides details of how we supported teachers to use the AOs to adapt teaching material to make it more culturally relevant to learners in their own contexts. It also includes links to the resources used to review two units of work, lesson by lesson, and to adapt tasks, learning materials, and outcomes.

    A table laying out the process of adapting a computing lesson so it's culturally relevant.
    Extract from the booklet used in a teacher professional development workshop to frame possible adaptations to lesson activities.

    Case study 2: Reflecting on the adaptation of resources for a vocational course for young adults in a Kenyan refugee camp

    In a different project, we used the AOs to reflect on our adaptation of classroom materials from The Computing Curriculum, which we had originally designed for schools in England. Partnering with Amala Education, we adapted Computing Curriculum materials to create a 100-hour course for young adults at Kakuma refugee camp in Kenya who wanted to develop vocational digital literacy skills.

    The diagram below shows our ratings of the importance of applying each AO while adapting materials for this particular context. In this case, the most important areas for making adaptations were to make the context more culturally relevant, and to improve the materials’ accessibility in terms of readability and output formats (text, animation, video, etc.). 

    Importance of the areas of opportunity to a course adaptation.

    You can use this method of reflection as a way to evaluate your progress in addressing different AOs in a unit of work, across the materials for a whole year group, or even for your school’s whole approach. This may be useful for highlighting those areas which have, perhaps, been overlooked. 

    Applying research to practice with the AOs

    The ‘Areas of opportunity’ Pedagogy Quick Read aims to help teachers apply research to their practice by summarising current research and giving practical examples of evidence-based teaching interventions and resources they can use.

    Two children code on laptops while an adult supports them.

    The set of AOs was developed as part of a wider research project, and each one is itself research-informed. The Quick Read includes references to that research for everyone who wants to know more about culturally relevant pedagogy. This supporting evidence will be useful to teachers who want to address the topic of culturally relevant pedagogy with senior or subject leaders in their school, who often need to know that new initiatives are evidence-based.

    Our goal for the Quick Read is to raise awareness of tried and tested pedagogies that increase accessibility and broaden the appeal of Computing education, so that all of our students can develop a sense of belonging and enjoyment of Computing.

    Let us know if you have a story to tell about how you have applied one of the areas of opportunity in your classroom.

    To date, our research in the field of culturally relevant pedagogy has been generously supported by funders including Cognizant and Google. We are very grateful to our partners for enabling us to learn more about how to make computing education inclusive for all.

    Website: LINK