
  • Celebrating Coolest Projects 2024, plus dates for 2025

    Celebrating Coolest Projects 2024, plus dates for 2025

    Reading Time: 6 minutes

    Coolest Projects celebrates young digital creators and the amazing things they make with technology. Today, we’re sharing the impact that taking part in Coolest Projects showcases has on young people’s motivation and confidence, as well as announcing dates for Coolest Projects 2025.

    Coolest Projects will be back in 2025.

    Coolest Projects in 2024

    This year, 7197 young people across 4678 projects took part in our global Coolest Projects online showcase, with participants from 43 countries. All of these projects were shared in our online showcase gallery, and we hosted a live-streamed event celebrating the judges’ favourites watched by over 1000 people.

    At the 2024 in-person Coolest Projects events in Ireland and the UK, 192 young people shared 120 projects. At these events, the young creators presented their projects to other attendees and our team of judges. They also took part in other activities, including a digital escape room.

    We wanted to understand how Coolest Projects impacts young people, and so we collected the following data:

    • For the online showcase, mentors and tech creators filled in a survey when they completed their project registration, and we ran focus groups with mentors, who are adults that support the tech creators
    • At the UK and Ireland in-person events, creators completed a survey, other event attendees completed feedback cards, and we also interviewed creators. 

    Online showcase: Impact on skills and confidence

    In the survey, mentors and young people told us that taking part in the Coolest Projects online showcase had a positive impact:

    • 91% of young people and 87% of mentors agreed or strongly agreed that taking part in Coolest Projects online inspired them or their team to continue to participate in computing and technology
    • 89% of young people and 86% of mentors agreed or strongly agreed that taking part in Coolest Projects online increased their or their team’s confidence in coding and digital making

    Mentors told us that the community built by the online showcase gallery played an important role in motivating young people to participate and in improving their confidence. One mentor said that “[being part of the online showcase] motivates them actually to do something innovative and… [the] global community motivates them to think out of the box.”

    The favourites in the web category for Coolest Projects 2024.

    They also reported that the online community inspired young people to solve real-world problems. One mentor said, “the wonderful thing is the kids create so many things which are solutions to world problems.” Others told us that young people feel a great deal of pride that their solutions are available for others to see online and that they get ideas from other projects on how to solve problems. One mentor from India told us about a young person who created a boat to clean rivers and then was inspired to learn to program micro:bit devices, having seen similar projects in the gallery.

    Focus on ‘growth, not competition’

    The phrase ‘growth, not competition’ came from the mentors in the focus group, and we love this description.

    “[With] Coolest Projects… the only agenda is to grow. Grow with the coding, nurture your skills, creativity.” – Focus group attendee.

    In the focus group, mentors told us they really valued the way the Coolest Projects online showcase allows all young people to participate, including those who were less experienced tech creators. One mentor told us that because everyone’s project is displayed in the gallery, “the beauty is that everybody is encouraged individually… regarding the experience. … They can start with very small things. And they have [the] opportunity to upload it on the platform.”

    One mentor who supports young people in remote locations in India spoke about the way Coolest Projects offers a level playing field where his students can be included and participate to the same extent as young people in less remote places: “students never feel left behind.”

    Three young people working together on a tech project.

    The in-person events also reflected the diversity of young people’s experience of digital making. Of those creators who answered surveys, 35% said it was their first time making this kind of project, while 37% said they had made similar projects before. This mix of experience was also reflected in the interviews, with some young people creating something for the first time and having only started learning coding recently, while others had been developing their projects for a long time. Many creators told us they felt inspired by the opportunity to show their projects to others, regardless of their experience level.

    In-person events: Building a community of digital creators

    Attendees at both Coolest Projects UK and Ireland commented on the sense of community and the excitement that was generated by being around other young people. One young person spoke about how much they enjoyed being “surrounded by people who like coding too”. They really valued seeing other creators’ projects and speaking with their peers.

    The exhibition hall at Coolest Projects Ireland 2023.

    The young people we spoke to reflected on the positive impact of this sense of community and belonging. They told us that seeing others’ projects inspired them to develop their ideas and learn new skills. One young person described how the in-person event allowed them to “get inspired and socialise”. A second said, “that’s why I’m here — to get inspired.”

    Another clear theme was how much young people valued the opportunity to show their projects to others. This gave them confidence in their own ability to build things. One creator at Coolest Projects UK spoke about the sense of achievement they felt after building something themselves and then being able to present it to others.

    Two young people share a project on their laptop with a judge at Coolest Projects.

    Young people at both events spoke about their plans for what they wanted to do next, talking about trying new technology such as building games, learning Python, and creating mobile apps. At both events, creators described how they wanted to continue to develop and build on their existing ideas with the possibility of showcasing them again.

    What we want to learn next

    We are continuing to look for ways to improve the Coolest Projects experience for young people and their mentors. As part of this, we are conducting user experience research to understand how we can improve the registration process for the online Coolest Projects showcase.

    We were delighted to hear from mentors in this year’s focus group how much they valued the online showcase gallery, and we want to understand the impact of this resource better as part of Coolest Projects 2025. If you want to find out more about Coolest Projects, we highly recommend taking a look at the amazing projects made by young people around the world.

    Dates for Coolest Projects 2025

    We’re delighted that so many young people around the world loved taking part in Coolest Projects this year. And we’re very excited that Coolest Projects will be back and bigger than ever in 2025!

    The Coolest Projects online showcase is open to any young person up to age 18, based anywhere in the world. Registration opens 14 January, and we’ll host a celebratory livestream on 25 June.

    A Coolest Projects sign with two people doing handstands in front of it.

    Coolest Projects in-person events will also be popping up around the world. In-person events are open to everyone up to age 18 in the host country. Family and friends are very welcome to come along too. 

    Save the date for:

    • Coolest Projects Ireland, 1 March 2025
    • Coolest Projects USA, 5 April 2025
    • Coolest Projects Belgium, 20 April 2025
    • Coolest Projects UK, 17 May 2025
    • Coolest Projects Belgium, date to be confirmed (TBC)
    • Coolest Projects Ghana, date TBC
    • Coolest Projects India, date TBC
    • Coolest Projects Malaysia, date TBC
    • Coolest Projects South Africa, date TBC

    Keep an eye on the Coolest Projects website for more event dates and details coming soon.

    It’s never too early to start making and saving digital projects to showcase next year. We can’t wait to see what the world’s young tech creators will make!

    Website: LINK

  • The Computing Curriculum: Three global perspectives

    The Computing Curriculum: Three global perspectives

    Reading Time: 5 minutes

    Across continents and cultural contexts, our free Computing Curriculum serves as a common thread that connects educators. Read the stories of 3 educators who share their thoughts on the curriculum’s application, adaptability, and the impact it’s had on their educational settings. 

    I’m Freda, and I co-founded a non-profit organisation called Waloyo in South Africa.

    Photo of Freda, co-founder of the non-profit organisation called Waloyo.

    Coming from a background of technology consulting, I know the value of computing education. I have a real drive to teach young kids coding so they can get ahead and find jobs in our digital economy.

    Our role at Waloyo is to work with non-profit organisations that work with young people and want to expand their services to include computing skills training. Waloyo trains non-profit facilitators, who in turn teach computing skills to youth between the ages of 6 and 18. A unique challenge is that the majority of facilitators we train don’t have any previous computing experience. The resources we use need to be clear and easy to follow.

    What I really love about The Computing Curriculum resources is the facilitator guides.

    Our initial plan was to run the training programmes after school and outside the school curriculum, but we were getting requests from schools to support them too. South Africa doesn’t have a national computing curriculum, so there aren’t many subject specialist teachers. So we looked for curriculum resources from other countries to support our work and that’s how we found The Computing Curriculum. 

    In rural Africa where we work, students have low levels of exposure to computers and computing. So whether they are 6 or 18 years old, we usually start with Scratch. The younger kids then continue with Scratch and the older kids move quickly on to Python as they build confidence.

    Screenshot of Scratch 3 interface

    What I really love about The Computing Curriculum resources is the facilitator guides. They fit in well with our process of training NGO facilitators to work directly with the kids. I love the comprehensiveness and flexibility of what your curriculum provides to enable this method of delivery.

    So far we’ve launched 3 programmes in communities in South Africa, impacting around 150 young people, and it’s worked beautifully. It’s phenomenal to see how excited the kids get when the computer does what they want it to do!

    I’m Al, and I’ve been a secondary science teacher since 1991.

    Photo of Al out hiking in rocky terrain.

    For the past 13 years, I’ve taught in international schools. Two years ago, I decided to retrain in teaching computing. My wife and I are currently teaching in Kazakhstan. I now teach at primary level but still handle some secondary classes. For primary, there’s significant time pressure, especially with extra lessons for the local language, making it challenging to fit computing into the schedule.

    The private schools where I work are starting to implement the UK computer science curriculum. At one of the schools, they have a robotics course which has given rise to a misconception that everything in computing is about robotics! My role, therefore, involves expanding the concept of robotics to include a broader range of computing activities and finding efficient ways to integrate these new materials into the curriculum with minimal effort from the staff. I focus on selecting appropriate units to fit into what the schools are already doing rather than implementing a comprehensive new program.

    The Raspberry Pi Foundation’s curriculum resources are valuable because they provide comprehensive lists of programs and ideas that I can adapt for my colleagues. I adapt resources to make them more accessible for primary teachers, simplifying and customising them for ease of use.

    The Raspberry Pi Foundation’s curriculum resources are valuable because they provide comprehensive lists of programs and ideas that I can adapt for my colleagues.

    Once students understand that computing is a tool for developing skills rather than just passive consumption, they take ownership of their learning which boosts their confidence. Culturally relevant materials are particularly effective, especially in diverse international classrooms. Adapting resources to be culturally relevant and incorporating students’ examples enhances their usefulness and impact. The resources are excellent, but by tailoring them, they can be even more effective, particularly in an international context with diverse nationalities and learning concepts.

    Head of ICT at an international school in Egypt

    In a computing classroom, a boy looks down at a keyboard.

    As Head of Department, I am responsible for what all the different age groups learn, from year 1 to year 12. We use the Cambridge International (CIE) curriculum, so I was looking for supplementary resources that build from the basics, have a clear progression map, and complement the resources we already had.

    With The Computing Curriculum, it is easy to pick out individual lesson resources to use. I love that it doesn’t need a licence and that the students don’t face any problems when they download it to practise at home. I’m covering curriculums for both computing and digital literacy, so I use resources that are relevant to my curriculum maps.

    With The Computing Curriculum, it is easy to pick out individual lesson resources to use.

    In some schools, their idea of an ICT lesson is getting students to play games, use Word documents, make PowerPoint presentations, and that’s it. But this generation of students love coding and making their own games. So instead of playing the game, we teach them how to develop a game and how to add the characters themselves.

    From year 1 to year 12, students take part in a wide range of computing activities and develop a lot of new skills. They find these skills amazing. It makes them feel engaged, excited, and that they are doing something valuable.

    Using The Computing Curriculum 

    These educators’ stories show how easy it is to adapt our Computing Curriculum to your unique context, enhancing students’ technical skills and inspiring creativity, critical thinking, and a passion for problem-solving. We look forward to continuing this journey with these and other educators as they transform computing education for their learners.

    If you’re looking for new computing resources to teach with, why not give The Computing Curriculum a try? You can also read our culturally relevant pedagogy research that Al mentions in his interview.

    Website: LINK

  • Introducing the new Code Club

    Introducing the new Code Club

    Reading Time: 6 minutes

    Today we’re unveiling a fresh look and feel for Code Club, along with a new ambition to inspire 10 million more young people to get creative with technology over the next decade.

    Three young tech creators at laptops at a Code Club session.

    Code Club is a network of free coding clubs where young people learn how to create with technology. Founded in the UK in 2012, it has grown to be a global movement that has already inspired more than 2 million young people to learn how to build their own apps, games, animations, websites, and so much more. 

    We know that Code Club works. Independent evaluations have demonstrated that attending a Code Club helps young people develop their programming skills as well as wider life skills like confidence, resilience, and skills in problem-solving and communication. This impact is a result of the positive learning environment created by the teachers and volunteers that run Code Clubs, with young people enjoying the activities and developing skills independently and collaboratively — including young people who sometimes struggle in a formal classroom setting.

    Just as important, we know that Code Clubs inspire young people from all backgrounds, including girls and young people from communities that are underrepresented in the technology sector. 

    What’s changing and why 

    While we are incredibly proud of the impact that Code Club has already achieved, we want to see many more young people benefiting, and that led us to set the ambitious goal to reach 10 million more young people over the next decade.

    Two mentors and a young tech creator at a laptop at a Code Club session.

    To help us figure out how to reach that ambition, we spent a lot of time this year listening to the community as well as engaging with parents, teachers, and young people who aren’t yet involved in Code Club. All of the changes we’ve made have been informed by those conversations and are designed to make it easier for educators and volunteers all over the world to set up and run Code Clubs.

    The biggest change is that we are making Code Club a more flexible model that can be adapted to reflect your local context and culture to ensure that it is as meaningful as possible for the young people in your community. 

    That means you can host a Code Club in a school or a community venue, like a library or makerspace; you can choose the age range and rhythm of meetings that make sense for your setting; and you can tailor the activities that you offer to the interests and skills of the young people you are serving. In order for the movement to be as inclusive as possible, you don’t even need to be called ‘Code Club’ to be an ‘Official Raspberry Pi Foundation Code Club’ and benefit from all the support we offer. 

    Two mentors and a young tech creator at a computer at a Code Club session.

    To support this change, we have developed a Code Club Charter that we ask all club leaders and mentors to sign up to. This sets out the principles that are shared by all Code Clubs, along with the commitments that the Raspberry Pi Foundation is making about our support to you.

    We have launched a new website that makes it easier for you to find the information you need to set up and run your Code Club, along with an updated and simplified club leader guide. In a few weeks’ time, we are launching a new online course with guidance on how to run a successful club, and we will be adding to our programme of online community calls, webinars, and training to support a growing community of club leaders and mentors.

    The Code Club website's homepage.

    One of the most important parts of our support for Code Clubs is the projects that help young people learn how to bring their ideas to life using a wide range of hardware and software. As they are created by experienced educators, based on research, rigorously tested, and translated into dozens of languages, you can have confidence that these projects lead to meaningful and lasting learning outcomes for the young people attending your club. Code Club projects enable young people to learn independently, meaning that mentors don’t need technical skills. 

    What this means for CoderDojos 

    Alongside Code Club, the Foundation supports CoderDojo, a network of coding clubs that started life in Cork, Ireland in 2011 and merged with the Raspberry Pi Foundation in 2017. 

    In order to reduce duplication and make it easier for anyone to set up and run a coding club, we have decided to bring together the resources and support for all club leaders and mentors under one website, which is the new Code Club website.

    There is no need for existing CoderDojos to change their name or anything about the way they operate. All registered CoderDojos will be able to manage their club in exactly the same way through the new website, and to access all of the support and resources that we offer to all coding clubs. New clubs will be able to register as CoderDojos.

    Two young tech creators at a tablet at a Code Club session.

    The ethos, experiences, and lessons from the CoderDojo community have been a vital part of the development of the new Code Club. We have worked hard to make sure that all existing CoderDojos feel that their values are reflected in the Charter, and that the guidance and resources we offer address their circumstances. 

    CoderDojos will very much remain part of this community, and the Raspberry Pi Foundation will continue to celebrate and learn from the amazing work of CoderDojos all over the world. 

    Code Club in the age of artificial intelligence 

    With AI already transforming so many parts of our lives, it’s not surprising that some people are starting to ask whether young people even need to learn to code anymore. 

    Three young tech creators at laptops at a Code Club session.

    We’ve got a lot to say on this subject — so watch this space — but the short version is that learning how to create with technology has never been more important. The way that humans give instructions to computers is changing, and Code Club provides a way for young people to experiment with new technologies like AI in a safe environment. Over the next couple of weeks, we’ll be launching new Code Club projects that support young people to learn about AI technologies, including generative AI, and we’ll be providing support for club leaders and mentors on the topic too. 

    Thank you and get involved

    I want to end by saying a huge thank you to everyone who has been part of the Code Club journey so far, and particularly to everyone who has worked so hard on this project over the past year — far too many people to name here, but you know who you are. I also want to thank all of the parents, teachers, mentors, and partners who have provided the feedback and ideas that have shaped these changes.

    A young tech creator at a tablet at a Code Club session.

    Code Club and CoderDojo were both founded in the early 2010s by individuals who wanted to give more young people the opportunity to be digital creators, not just consumers. From that first Dojo in Cork, Ireland, and the first Code Clubs in London, UK, we’ve built a global movement that has empowered millions of young people to engage confidently with a world that is being transformed by digital technologies.

    There’s never been a better time to get involved with Code Club, so please take a look and get in touch if you need any help or support to get started.

    Website: LINK

  • Hello World #25 out now: Generative AI

    Hello World #25 out now: Generative AI

    Reading Time: 3 minutes

    Since they became publicly available at the end of 2022, generative AI tools have been hotly discussed by educators: what role should these tools for generating human-seeming text, images, and other media play in teaching and learning?

    Two years later, the one thing most people agree on is that, like it or not, generative AI is here to stay. And as a computing educator, you probably have your learners and colleagues looking to you for guidance about this technology. We’re sharing how educators like you are approaching generative AI in issue 25 of Hello World, out today for free.

    Digital image of a copy of Hello World magazine, issue 25.

    Generative AI and teaching

    Since our ‘Teaching and AI’ issue a year ago, educators have been making strides in grappling with generative AI’s place in the classroom and with its potential risks to young people. In this issue, you’ll hear from a wide range of educators who are approaching this technology in different ways.

    For example:

    • Laura Ventura from Gwinnett County Public Schools (GCPS) in Georgia, USA shares how the GCPS team has integrated AI throughout their K–12 curriculum
    • Mark Calleja from our team guides you through using the OCEAN prompt process to reliably get the results you want from an LLM 
    • Kip Glazer, principal at Mountain View High School in California, USA shares a framework for AI implementation aimed at school leaders
    • Stefan Seegerer, a researcher and educator in Germany, discusses why unplugged activities help us focus on what’s really important in teaching about AI

    This issue also includes practical solutions to problems that are unique to computer science educators:

    • Graham Hastings in the UK shares his solution to tricky crocodile clips when working with micro:bits
    • Riyad Dhuny shares his case study of home-hosting a learning management system with his students in Mauritius

    And there is lots more for you to discover in issue 25.

    Whether or not you use generative AI as part of your teaching practice, it’s important for you to be aware of AI technologies and how your young people may be interacting with them. In his article “A problem-first approach to the development of AI systems”, Ben Garside from our team affirms that:

    “A big part of our job as educators is to help young people navigate the changing world and prepare them for their futures, and education has an essential role to play in helping people understand AI technologies so that they can avoid the dangers.

    Our approach at the Raspberry Pi Foundation is not to focus purely on the threats and dangers, but to teach young people to be critical users of technologies and not passive consumers. […]

    Our call to action to educators, carers, and parents is to have conversations with your young people about generative AI. Get to know their opinions on it and how they view its role in their lives, and help them to become critical thinkers when interacting with technology.”

    Share your thoughts & subscribe to Hello World

    Once again, computing teachers are being asked to teach something that they didn’t study themselves. With generative AI, as with all things computing, we want to support your teaching and share your successes. We hope you enjoy this issue of Hello World, and please get in touch with your article ideas or tell us what you would like to see in the magazine.


    We’d like to thank Oracle for supporting this issue.

    Website: LINK

  • Free online course on understanding AI for educators

    Free online course on understanding AI for educators

    Reading Time: 5 minutes

    To empower every educator to confidently bring AI into their classroom, we’ve created a new online training course called ‘Understanding AI for educators’ in collaboration with Google DeepMind. By taking this course, you will gain a practical understanding of the crossover between AI tools and education. The course includes a conceptual look at what AI is, how AI systems are built, different approaches to problem-solving with AI, and how to use current AI tools effectively and ethically.

    Image by Mudassar Iqbal from Pixabay

    In this post, I will share our approach to designing the course and some of the key considerations behind it — all of which you can apply today to teach your learners about AI systems.

    Design decisions: Nurturing knowledge and confidence

    We know educators have different levels of confidence with AI tools — we designed this course to help create a level playing field. Our goal is to uplift every educator, regardless of their prior experience, to a point where they feel comfortable discussing AI in the classroom.

    Three computer science educators discuss something at a screen.

    AI literacy is key to understanding the implications and opportunities of AI in education. The course provides educators with a solid conceptual foundation, enabling them to ask the right questions and form their own perspectives.

    As with all our AI learning materials that are part of Experience AI, we’ve used specific design principles for the course:

    • Choosing language carefully: We never anthropomorphise AI systems, replacing phrases like “The model understands” with “The model analyses”. We do this to make it clear that AI is just a computer system, not a sentient being with thoughts or feelings.
    • Accurate terminology: We avoid using AI as a singular noun, opting instead for the more accurate ‘AI tool’ when talking about applications or ‘AI system’ when talking about underlying component parts. 
    • Ethics: The social and ethical impacts of AI are not an afterthought but highlighted throughout the learning materials.

    Three main takeaways

    The course offers three main takeaways any educator can apply to their teaching about AI systems. 

    1. Communicating effectively about AI systems

    Deciding the level of detail to use when talking about AI systems can be difficult — especially if you’re not very confident about the topic. The SEAME framework offers a solution by breaking down AI into 4 levels: social and ethical, application, model, and engine. Educators can focus on the level most relevant to their lessons and also use the framework as a useful structure for classroom discussions.

    The SEAME framework gives you a simple way to group learning objectives and resources related to teaching AI and ML, based on whether they focus on social and ethical aspects (SE), applications (A), models (M), or engines (E, i.e. how AI works).

    You might discuss the impact a particular AI system is having on society, without the need to explain to your learners how the model itself has been trained or tested. Equally, you might focus on a specific machine learning model to look at where the data used to create it came from and consider the effect the data source has on the output. 

    2. Problem-solving approaches: Predictive vs. generative AI

    AI applications can be broadly separated into two categories: predictive and generative. These two types of AI model represent two vastly different approaches to problem-solving.

    People create predictive AI models to make predictions about the future. For example, you might create a model to make weather forecasts based on previously recorded weather data, or to recommend new movies to you based on your previous viewing history. In developing predictive AI models, the problem is defined first — then a specific dataset is assembled to help solve it. Therefore, each predictive AI model is usually only useful for a small number of applications.

    Post-it note sketches responding to the prompt “AI is…”
    Rick Payne and team / Better Images of AI / Ai is… Banner / CC-BY 4.0

    Generative AI models are used to generate media (such as text, code, images, or audio). The possible applications of these models are much more varied because people can use media in many different kinds of ways. You might say that the outputs of generative AI models could be used to solve — or at least to partially solve — any number of problems, without these problems needing to be defined before the model is created.

    3. Using generative AI tools: The OCEAN process

    Generative AI systems rely on user prompts to generate outputs. The OCEAN process, outlined in the course, offers a simple yet powerful framework for prompting AI tools like Gemini, Stable Diffusion or ChatGPT. 

    Illustration of data flowing between groups of people and a document
    Yasmine Boudiaf & LOTI / Better Images of AI / Data Processing / CC-BY 4.0

    The first three steps of the process help you write better prompts that will result in an output that is as close as possible to what you are looking for, while the last two steps outline how to improve the output:

    1. Objective: Clearly state what you want the model to generate
    2. Context: Provide necessary background information
    3. Examples: Offer specific examples to fine-tune the model’s output
    4. Assess: Evaluate the output 
    5. Negotiate: Refine the prompt to correct any errors in the output

    The final step in using any generative AI tool should be to closely review or edit the output yourself. These tools will very quickly get you started but you’ll always have to rely on your own human effort to ensure the quality of your work. 
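
    To make the first three steps concrete, here is a minimal sketch (in Python, purely for illustration) of how a prompt could be assembled following the OCEAN structure; the topic, wording, and variable names are our own examples, not taken from the course materials.

    # A hypothetical OCEAN-style prompt, assembled step by step (Objective, Context,
    # Examples). Paste the printed result into your preferred AI tool, then Assess
    # the output and Negotiate improvements with follow-up prompts.
    objective = "Write a 100-word summary of photosynthesis for 11-year-old students."
    context = "The summary will open a Key Stage 3 biology lesson on how plants make food."
    examples = "Match this tone: 'Plants are like tiny factories that use sunlight to make their own food.'"

    prompt = f"{objective}\n\nContext: {context}\n\nExample of the style to use:\n{examples}"
    print(prompt)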

    Helping educators to be critical users

    We believe the knowledge and skills our ‘Understanding AI for educators’ course teaches will help any educator determine the right AI tools and concepts to bring into their classroom, regardless of their specialisation. Here’s what one course participant had to say:

    “From my inexperienced viewpoint, I kind of viewed AI as a cheat code. I believed that AI in the classroom could possibly be a real detriment to students and eliminate critical thinking skills.

    After learning more about AI [on the course] and getting some hands-on experience with it, my viewpoint has certainly taken a 180-degree turn. AI definitely belongs in schools and in the workplace. It will take time to properly integrate it and know how to ethically use it. Our role as educators is to stay ahead of this trend as opposed to denying AI’s benefits and falling behind.” – ‘Understanding AI for educators’ course participant

    All our Experience AI resources — including this online course and the teaching materials — are designed to foster a generation of AI-literate educators who can confidently and ethically guide their students in navigating the world of AI.

    You can sign up to the course for free here: 

    A version of this article also appears in Hello World issue 25, which will be published on Monday 23 September and will focus on all things generative AI and education.

    Website: LINK

  • How useful do teachers find error message explanations generated by AI? Pilot research results

    How useful do teachers find error message explanations generated by AI? Pilot research results

    Reading Time: 7 minutes

    As discussions of how artificial intelligence (AI) will impact teaching, learning, and assessment proliferate, I was thrilled to be able to add one of my own research projects to the mix. As a research scientist at the Raspberry Pi Foundation, I’ve been working on a pilot research study in collaboration with Jane Waite to explore the topic of program error messages (PEMs). 

    Computer science students at a desktop computer in a classroom.

    PEMs can be a significant barrier to learning for novice coders, as they are often confusing and difficult to understand. This can hinder troubleshooting and progress in coding, and lead to frustration. 

    Recently, various teams have been exploring how generative AI, specifically large language models (LLMs), can be used to help learners understand PEMs. My research in this area explores secondary teachers’ views of the explanations of PEMs generated by an LLM, as an aid for learning and teaching programming, and I presented some of my results in our ongoing seminar series.

    Understanding program error messages is hard at the start

    I started the seminar by setting the scene and describing the current background of research on novices’ difficulty in using PEMs to fix their code, and the efforts made to date to improve these. The three main points I made were that:

    1. PEMs are often difficult to decipher, especially by novices, and there’s a whole research area dedicated to identifying ways to improve them.
    2. Recent studies have employed LLMs as a way of enhancing PEMs. However, the evidence on what makes an ‘effective’ PEM for learning is limited, variable, and contradictory.
    3. There is limited research in the context of K–12 programming education, as well as research conducted in collaboration with teachers to better understand the practical and pedagogical implications of integrating LLMs into the classroom more generally.

    My pilot study aims to fill this gap directly, by reporting K–12 teachers’ views of the potential use of LLM-generated explanations of PEMs in the classroom, and how their views fit into the wider theoretical paradigm of feedback literacy. 

    What did the teachers say?

    To conduct the study, I interviewed eight expert secondary computing educators. The interviews were semi-structured activity-based interviews, where the educators got to experiment with a prototype version of the Foundation’s publicly available Code Editor. This version of the Code Editor was adapted to generate LLM explanations when the question mark next to the standard error message is clicked (see Figure 1 for an example of an LLM-generated explanation). The Code Editor version called the OpenAI GPT-3.5 interface to generate explanations based on the following prompt: “You are a teacher talking to a 12-year-old child. Explain the error {error} in the following Python code: {code}”.
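
    For illustration only, a call like the one described might look something like the sketch below. This is not the Foundation’s actual Code Editor integration; it assumes the openai Python package is installed and an OPENAI_API_KEY environment variable is set, and the helper function name is hypothetical.

    # Hypothetical sketch of generating a PEM explanation with the prompt quoted above.
    # Not the actual Code Editor implementation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def explain_error(error: str, code: str) -> str:
        prompt = (
            "You are a teacher talking to a 12-year-old child. "
            f"Explain the error {error} in the following Python code: {code}"
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(explain_error("NameError: name 'total' is not defined", "print(total)"))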

    The Foundation’s Python Code Editor with LLM feedback prototype.
    Figure 1: The Foundation’s Code Editor with LLM feedback prototype.

    Fifteen themes were derived from the educators’ responses and these were split into five groups (Figure 2). Overall, the educators’ views of the LLM feedback were that, for the most part, a sensible explanation of the error messages was produced. However, all educators experienced at least one example of invalid content (LLM “hallucination”). Also, despite not being explicitly requested in the LLM prompt, a possible code solution was always included in the explanation.

    Themes and groups derived from teachers’ responses.
    Figure 2: Themes and groups derived from teachers’ responses.

    Matching the themes to PEM guidelines

    Next, I investigated how the teachers’ views correlated to the research conducted to date on enhanced PEMs. I used the guidelines proposed by Brett Becker and colleagues, which consolidate a lot of the research done in this area into ten design guidelines. The guidelines offer best practices on how to enhance PEMs based on cognitive science and educational theory empirical research. For example, they outline that enhanced PEMs should provide scaffolding for the user, increase readability, reduce cognitive load, use a positive tone, and provide context to the error.

    Out of the 15 themes identified in my study, 10 correlated closely to the guidelines. However, these 10 themes were, for the most part, the themes related to the content of the explanations, presentation, and validity (Figure 3). On the other hand, the themes concerning the teaching and learning process did not fit the guidelines as well.

    Correlation between teachers’ responses and enhanced PEM design guidelines.
    Figure 3: Correlation between teachers’ responses and enhanced PEM design guidelines.

    Does feedback literacy theory fit better?

    However, when I looked at feedback literacy theory, I was able to correlate all fifteen themes — the theory fits.

    Feedback literacy theory positions the feedback process (which includes explanations) as a social interaction, and accounts for the actors involved in the interaction — the student and the teacher — as well as the relationships between the student, the teacher, and the feedback. We can explain feedback literacy theory using three constructs: feedback types, student feedback literacy, and teacher feedback literacy (Figure 4). 

    Feedback literacy at the intersection between feedback types, student feedback literacy, and teacher feedback literacy.
    Figure 4: Feedback literacy at the intersection between feedback types, student feedback literacy, and teacher feedback literacy.

    From the feedback literacy perspective, feedback can be grouped into four types: telling, guiding, developing understanding, and opening up new perspectives. The feedback type depends on the role of the student and teacher when engaging with the feedback (Figure 5). 

    From the student perspective, the competencies and dispositions students need in order to use feedback effectively can be stated as: appreciating the feedback processes, making judgements, taking action, and managing affect. Finally, from a teacher perspective, teachers apply their feedback literacy skills across three dimensions: design, relational, and pragmatic. 

    In short, according to feedback literacy theory, effective feedback processes entail well-designed feedback with a clear pedagogical purpose, as well as the competencies students and teachers need in order to make sense of the feedback and use it effectively.

    A computer science teacher sits with students at computers in a classroom.

    This theory therefore provided a promising lens for analysing the educators’ perspectives in my study. When the educators’ views were correlated to feedback literacy theory, I found that:

    1. Educators prefer the LLM explanations to fulfil a guiding and developing understanding role, rather than telling. For example, educators prefer to either remove or delay the code solution from the explanation, and they like the explanations to include keywords based on concepts they are teaching in the classroom to guide and develop students’ understanding rather than tell.
    2. Related to students’ feedback literacy, educators talked about the ways in which the LLM explanations help or hinder students to make judgements and action the feedback in the explanations. For example, they talked about how detailed, jargon-free explanations can help students make judgements about the feedback, but invalid explanations can hinder this process. Therefore, teachers talked about the need for ways to manage such invalid instances. However, for the most part, the educators didn’t talk about eradicating them altogether. They talked about ways of flagging them, using them as counter-examples, and having visibility of them to be able to address them with students.
    3. Finally, from a teacher feedback literacy perspective, educators discussed the need for professional development to manage feedback processes inclusive of LLM feedback (design) and address issues resulting from reduced opportunities to interact with students (relational and pragmatic). For example, if using LLM explanations results in a reduction in the time teachers spend helping students debug syntax errors from a pragmatic time-saving perspective, then what does that mean for the relationship they have with their students?

    Conclusion from the study

    By correlating educators’ views to feedback literacy theory as well as enhanced PEM guidelines, we can take a broader perspective on how LLMs might not only shape the content of the explanations, but the whole social interaction around giving and receiving feedback. Investigating ways of supporting students and teachers to practise their feedback literacy skills matters just as much, if not more, than focusing on the content of PEM explanations. 

    This study was a first-step exploration of eight educators’ views on the potential impact of using LLM explanations of PEMs in the classroom. Exactly what the findings of this study mean for classroom practice remains to be investigated, and we also need to examine students’ views on the feedback and its impact on their journey of learning to program. 

    If you want to hear more, you can watch my seminar:

    https://www.youtube.com/watch?v=fVD2zpGpcY0

    You can also read the associated paper, or find out more about the research instruments on this project website.

    If any of these ideas resonated with you as an educator, student, or researcher, do reach out — we’d love to hear from you. You can contact me directly at veronica.cucuiat@raspberrypi.org or drop us a line in the comments below. 

    Join our next seminar

    The focus of our ongoing seminar series is on teaching programming with or without AI. Check out the schedule of our upcoming seminars.

    To take part in the next seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    You can also catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK

  • Impact of Experience AI: Reflections from students and teachers

    Impact of Experience AI: Reflections from students and teachers

    Reading Time: 5 minutes

    “I’ve enjoyed actually learning about what AI is and how it works, because before I thought it was just a scary computer that thinks like a human,” a student learning with Experience AI at King Edward’s School, Bath, UK, told us. 

    This is the essence of what we aim to do with our Experience AI lessons, which demystify artificial intelligence (AI) and machine learning (ML). Through Experience AI, teachers worldwide are empowered to confidently deliver engaging lessons with a suite of resources that inspire and educate 11- to 14-year-olds about AI and the role it could play in their lives.

    “I learned new things and it changed my mindset that AI is going to take over the world.” – Student, Malaysia

    Experience AI students in Malaysia
    Experience AI students in Malaysia

    Developed by us with Google DeepMind, our first set of Experience AI lesson resources was aimed at a UK audience and launched in April 2023. Next we released tailored versions of the resources for 5 other countries, working in close partnership with organisations in Malaysia, Kenya, Canada, Romania, and India. Thanks to new funding from Google.org, we’re now expanding Experience AI for 16 more countries and creating new resources on AI safety, with the aim of providing leading-edge AI education for more than 2 million young people across Europe, the Middle East, and Africa. 

    In this blog post, you’ll hear directly from students and teachers about the impact the Experience AI lessons have had so far. 

    Case study: Experience AI in Malaysia

    Penang Science Cluster in Malaysia is among the first organisations we’ve partnered with for Experience AI. Speaking to Malaysian students learning with Experience AI, we found that the lessons were often very different from what they had expected. 

    Launch of Experience AI in Malaysia
    Launch of Experience AI in Malaysia

    “I actually thought it was going to be about boring lectures and not much about AI but more on coding, but we actually got to do a lot of hands-on activities, which are pretty fun. I thought AI was just about robots, but after joining this, I found it could be made into chatbots or could be made into personal helpers.” – Student, Malaysia

    “Actually, I thought AI was mostly related to robots, so I was expecting to learn more about robots when I came to this programme. It widened my perception on AI.” – Student, Malaysia. 

    The Malaysian government actively promotes AI literacy among its citizens, and working with local education authorities, Penang Science Cluster is using Experience AI to train teachers and equip thousands of young people in the state of Penang with the understanding and skills to use AI effectively. 

    “We envision a future where AI education is as fundamental as mathematics education, providing students with the tools they need to thrive in an AI-driven world”, says Aimy Lee, Chief Operating Officer at Penang Science Cluster. “The journey of AI exploration in Malaysia has only just begun, and we’re thrilled to play a part in shaping its trajectory.”

    Giving non-specialist teachers the confidence to introduce AI to students

    Experience AI provides lesson plans, classroom resources, worksheets, hands-on activities, and videos to help teachers introduce a wide range of AI applications and help students understand how they work. The resources are based on research, and because we adapt them to each partner’s country, they are culturally relevant and relatable for students. Any teacher can use the resources in their classroom, whether or not they have a background in computing education. 

    “Our Key Stage 3 Computing students now feel immensely more knowledgeable about the importance and place that AI has in their wider lives. These lessons and activities are engaging and accessible to students and educators alike, whatever their specialism may be.” – Dave Cross,  North Liverpool Academy, UK

    “The feedback we’ve received from both teachers and learners has been overwhelmingly positive. They consistently rave about how accessible, fun, and hands-on these resources are. What’s more, the materials are so comprehensive that even non-specialists can deliver them with confidence.” – Storm Rae, The National Museum of Computing, UK

    Experience AI teacher training in Kenya
    Experience AI teacher training in Kenya

    “[The lessons] go above and beyond to ensure that students not only grasp the material but also develop a genuine interest and enthusiasm for the subject.” – Teacher, Changamwe Junior School, Mombasa, Kenya

    Sparking debates on bias and the limitations of AI

    When learners gain an understanding of how AI works, it gives them the confidence to discuss areas where the technology doesn’t work well or its output is incorrect. These classroom debates deepen and consolidate their knowledge, and help them to use AI more critically.

    “Students enjoyed the practical aspects of the lessons, like categorising apples and tomatoes. They found it intriguing how AI could sometimes misidentify objects, sparking discussions on its limitations. They also expressed concerns about AI bias, which these lessons helped raise awareness about. I didn’t always have all the answers, but it was clear they were curious about AI’s implications for their future.” – Tracey Mayhead, Arthur Mellows Village College, Peterborough, UK

    Experience AI students in the UK
    Experience AI students in the UK

    “The lessons that we trialled took some of the ‘magic’ out of AI and started to give the students an understanding that AI is only as good as the data that is used to build it.” – Jacky Green, Waldegrave School, UK 

    “I have enjoyed learning about how AI is actually programmed, rather than just hearing about how impactful and great it could be.” – Student, King Edward’s School, Bath, UK 

    “It has changed my outlook on AI because now I’ve realised how much AI actually needs human intelligence to be able to do anything.” – Student, Arthur Mellows Village College, Peterborough, UK 

    “I didn’t really know what I wanted to do before this but now knowing more about AI, I probably would consider a future career in AI as I find it really interesting and I really liked learning about it.” – Student, Arthur Mellows Village College, Peterborough, UK 

    If you’d like to get involved with Experience AI as an educator and use our free lesson resources with your class, you can start by visiting experience-ai.org.

    Website: LINK

  • The European Astro Pi Challenge 2024/25 launches today

    The European Astro Pi Challenge 2024/25 launches today

    Reading Time: 4 minutes

    Registration is now open for the European Astro Pi Challenge 2024/25! The Astro Pi Challenge, an ESA Education project run in collaboration with us here at the Raspberry Pi Foundation, offers young people the incredible opportunity to write computer programs that will run in space.

    Logo of the European Astro Pi Challenge.

    Young people can take part in two exciting missions for beginners and more experienced coders, and send their code to run on special Raspberry Pi computers, called Astro Pis, on board the International Space Station (ISS).

    Meet the new Astro Pi ambassador, Sławosz Uznański

    We are delighted that new ESA project astronaut Sławosz Uznański will be the ambassador for this year’s Astro Pi Challenge. Sławosz, born in Poland in 1984, has a background in space systems engineering and has conducted research in radiation effects. He recently served as the Engineer in Charge of CERN’s largest accelerator, the Large Hadron Collider.

    Mission Zero: Send your pixel art into space

    In Mission Zero, young people create beautiful pixel art to display on the Astro Pis’ LED screens. This mission requires no prior experience of Python coding, and it can be completed in around an hour.

    A selection of pixel art images by Mission Zero 2023/24 participants. The images show a variety of plants and animals, such as a cactus, a cat, and an elephant.
    Pixel art examples by Mission Zero 2023/24 participants

    To take part, young people design and code pixel art inspired by nature on Earth and beyond, to display on the Astro Pi computers for the astronauts on the ISS to see as they go about their daily tasks.

    Using our step-by-step Mission Zero project guide, young people will learn to create simple Python programs in which they will code with variables and use the colour sensors on the Astro Pis to change the background colour in their images. To help your teams create their designs, check out the examples from teams that took part in Mission Zero in 2023/24 in the project guide.
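
    To give a flavour of the kind of program a Mission Zero team writes, here is a minimal sketch that draws a simple 8×8 tree on the Astro Pi’s LED matrix using the sense_hat library; the design and colours are our own example, and reading the colour sensor to vary the background is left out.

    # Illustrative Mission Zero-style sketch: display an 8x8 pixel-art tree.
    # Assumes the sense_hat library (or the Sense HAT emulator) is available.
    from sense_hat import SenseHat

    sense = SenseHat()

    g = (0, 200, 0)     # leaf green
    t = (139, 69, 19)   # trunk brown
    b = (0, 0, 50)      # background colour

    image = [
        b, b, g, g, g, g, b, b,
        b, g, g, g, g, g, g, b,
        g, g, g, g, g, g, g, g,
        b, g, g, g, g, g, g, b,
        b, b, g, g, g, g, b, b,
        b, b, b, t, t, b, b, b,
        b, b, b, t, t, b, b, b,
        b, b, t, t, t, t, b, b,
    ]

    sense.set_pixels(image)  # expects a list of exactly 64 (R, G, B) tuples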

    A young person smiles while using a laptop.

    Young people can create their Mission Zero programs individually or in teams of up to 4 people, and this year, we have added a save function for young people as they code. This will make it easier for mentors to run Mission Zero over more than one session, and also means that young people can finish their projects at home. They will need to use your classroom code and their team name to load their saved projects. 

    Mission Space Lab: Calculate the speed of the ISS

    Mission Space Lab asks teams to solve a real-world scientific task in space. It is ideally suited to young people who would like to learn more about space science and stretch their programming skills.

    A photo of Mexico taken by an Astro Pi computer on board the ISS.
     A photo of Mexico taken using an Astro Pi computer during a team’s experiment in Mission Space Lab 2023/24

    In Mission Space Lab this year, the task for teams of 2 to 6 young people is to calculate the speed at which the International Space Station is travelling — as accurately as possible. Teams need to write a Python program that:

    1. Collects data from the Astro Pi computers’ sensors or cameras about the orientation and motion of the ISS as it orbits the Earth, and
    2. Uses this data to calculate the travel speed.

    The Astro Pi computers inside the International Space Station.
    The Astro Pi computers at a window in the International Space Station.

    This year we have created a new way for teams to test their programs, with an online version of the Astro Pi Replay tool. All teams need to do is select their program and run it in Astro Pi Replay, which will create a real-time simulation of the program running on the ISS, using historical data and images. Astro Pi Replay will also show program outputs and report errors. This means teams can code their program in their preferred code editor, then test it in a web browser. However, if they wish, teams can still run the Astro Pi Replay tool offline with Thonny.
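
    To give a flavour of what such a program might involve, here is a rough sketch of one possible approach: estimating the ISS speed from the ground distance between features matched across two consecutive photos. The file names, ground sample distance, and time gap are placeholder assumptions, and a real entry would need the official project guidance, calibration, and error handling.

    # Hypothetical sketch of estimating ISS speed from two consecutive Earth photos.
    # Assumes opencv-python and numpy are installed and photo1.jpg / photo2.jpg exist;
    # the GSD and time gap below are placeholder values, not official figures.
    import cv2
    import numpy as np

    GSD_KM_PER_PIXEL = 0.12648  # assumed ground distance covered by one pixel
    TIME_GAP_S = 10.0           # assumed seconds between the two photos

    img1 = cv2.imread("photo1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("photo2.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect and match features that appear in both photos
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

    # Median pixel shift of matched features, converted to ground distance and speed
    shifts = [
        np.hypot(kp1[m.queryIdx].pt[0] - kp2[m.trainIdx].pt[0],
                 kp1[m.queryIdx].pt[1] - kp2[m.trainIdx].pt[1])
        for m in matches
    ]
    speed_km_s = (np.median(shifts) * GSD_KM_PER_PIXEL) / TIME_GAP_S
    print(f"Estimated ISS speed: {speed_km_s:.2f} km/s")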

    Important dates for your diary

    • 16 September 2024: Registration is now open for Mission Zero and Mission Space Lab!
    • 24 February 2025: Mission Space Lab submissions close
    • 24 March 2025: Mission Zero submissions close
    • April–May 2025: Astro Pi programs run on the International Space Station
    • June 2025: Astro Pi teams receive their certificates

    Register today

    Both missions are open to young people up to age 19 from eligible countries — all ESA Member States and beyond. To find out more and register, visit astro-pi.org.

    Look out for updates and resources being shared on the Astro Pi website, including a Mission Zero video codealong and Mission Space Lab live streams. You can also keep up-to-date with all the Astro Pi news on the Astro Pi X account, our Facebook, LinkedIn, and Instagram, or by signing up to the newsletter at astro-pi.org.

    We can’t wait to see your programs!

    Website: LINK

  • Experience AI: How research continues to shape the resources

    Experience AI: How research continues to shape the resources

    Reading Time: 5 minutes

    Since we launched the Experience AI learning programme in the UK in April 2023, educators in 130 countries have downloaded Experience AI lesson resources. They estimate reaching over 630,000 young people with the lessons, helping them to understand how AI works and to build the knowledge and confidence to use AI tools responsibly. Just last week, we announced another exciting expansion of Experience AI: thanks to $10 million in funding from Google.org, we will be able to work with local partner organisations to provide research-based AI education to an estimated 2 million-plus young people across Europe, the Middle East, and Africa.

    Trainer discussing Experience AI at a teacher training event in Kenya.
    Experience AI teacher training in Kenya

    This blog post explains how we use research to continue to shape our Experience AI resources, including the new AI safety resources we are developing. 

    The beginning of Experience AI

    Artificial intelligence (AI) and machine learning (ML) applications are part of our everyday lives — we use them every time we scroll through social media feeds organised by recommender systems or unlock an app with facial recognition. For young people, there is more need than ever to gain the skills and understanding to critically engage with AI technologies. 

    Someone holding a mobile phone that's open on their social media apps folder.

    We wanted to design free lesson resources to help teachers in a wide range of subjects confidently introduce AI and ML to students aged 11 to 14 (Key Stage 3). This led us to develop Experience AI, in collaboration with Google DeepMind, offering materials including lesson plans, slide decks, videos (both teacher- and student-facing), student activities, and assessment questions. 

    SEAME: The research-based framework behind Experience AI

    The Experience AI resources were built on rigorous research from the Raspberry Pi Computing Education Research Centre as well as from other researchers, including those we hosted at our series of seminars on AI and data science education. The Research Centre mapped and categorised over 500 resources used to teach AI and ML, finding that the majority were one-off activities and that very few were tailored to a specific age group.

    An example activity slide in the Experience AI lessons where students learn about bias.
    An example activity in the Experience AI lessons where students learn about bias.

    To analyse the content that existing AI education resources covered, the Centre developed a simple framework called SEAME. The framework gives you an easy way to group concepts, knowledge, and skills related to AI and ML based on whether they focus on social and ethical aspects (SE), applications (A), models (M), or engines (E, i.e. how AI works).

    Through Experience AI, learners also gain an understanding of the models underlying AI applications, and the processes used to train and test ML models.

    An example activity slide in the Experience AI lessons where students learn about classification.
    An example activity in the Experience AI lessons where students learn about classification.

    Our Experience AI lessons cover all four levels of SEAME and focus on applications of AI that are relatable for young people. They also introduce learners to AI-related issues such as privacy or bias concerns, and the impact of AI on employment. 

    The six foundation lessons of Experience AI

    1. What is AI?: Learners explore the current context of AI and how it is used in the world around them. Looking at the differences between rule-based and data-driven approaches to programming, they consider the benefits and challenges that AI could bring to society. 
    2. How computers learn: Focusing on the role of data-driven models in AI systems, learners are introduced to ML and find out about three common approaches to creating ML models. Finally, they explore classification, a specific application of ML.
    3. Bias in, bias out: Students create their own ML model to classify images of apples and tomatoes. They discover that a limited dataset is likely to lead to a flawed ML model. Then they explore how bias can appear in a dataset, resulting in biased predictions produced by an ML model.
    4. Decision trees: Learners take their first in-depth look at a specific type of ML model: decision trees. They see how different training datasets result in the creation of different ML models, experiencing first-hand what the term ‘data-driven’ means (for readers who code, a minimal illustrative sketch follows this list).
    5. Solving problems with ML models: Students are introduced to the AI project lifecycle and use it to create an ML model. They apply a human-focused approach to working on their project, train an ML model, and finally test their model to find out its accuracy.
    6. Model cards and careers: Learners finish the AI project lifecycle by creating a model card to explain their ML model. To complete the unit, they explore a range of AI-related careers, hear from people working in AI research at Google DeepMind, and explore how they might apply AI and ML to their interests. 
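
    The Experience AI lessons themselves don't require any programming, but for readers who code, here is a minimal, purely illustrative sketch of what a data-driven model looks like: a small decision tree trained on invented fruit measurements using scikit-learn. Change the training data and you get a different tree, which is exactly the 'data-driven' point made in lessons 3 and 4.

    # A minimal sketch, not part of the Experience AI materials. Requires scikit-learn.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Invented training data: [weight in grams, redness score 0-10]
    features = [
        [90, 8], [100, 7], [110, 9],    # tomatoes
        [150, 3], [170, 4], [160, 2],   # apples
    ]
    labels = ["tomato", "tomato", "tomato", "apple", "apple", "apple"]

    # The tree is learned from the data rather than written as hand-coded rules
    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(features, labels)

    print(export_text(model, feature_names=["weight_g", "redness"]))
    print(model.predict([[120, 6]]))  # classify a new, unseen fruit
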
    Experience AI banner.

    We also offer two additional stand-alone lessons: one on large language models, how they work, and why they’re not always reliable, and the other on the application of AI in ecosystems research, which lets learners explore how AI tools can be used to support animal conservation. 

    New AI safety resources: Empowering learners to be critical users of technology

    We have also been developing a set of resources for educator-led sessions on three topics related to AI safety, funded by Google.org.

    • AI and your data: With the support of this resource, young people reflect on the data they have already provided to AI applications in their daily lives, and think about how the prevalence of AI tools might change the way they protect their data.  
    • Media literacy in the age of AI: This resource highlights the ways AI tools can be used to perpetuate misinformation and how AI applications can help people combat misleading claims.
    • Using generative AI responsibly: With this resource, young people consider their responsibilities when using generative AI, and their expectations of developers who release AI tools.

    Other research principles behind our free teaching resources 

    As well as using the SEAME framework, we have incorporated a whole host of other research-based concepts in the design principles for the Experience AI resources. For example, we avoid anthropomorphism — that is, words or imagery that can lead learners to wrongly believe that AI applications have sentience or intentions like humans do — and we instead promote the understanding that it’s people who design AI applications and decide how they are used. We also teach about data-driven application design, which is a core concept in computational thinking 2.0.  

    Share your feedback

    We’d love to hear your thoughts and feedback about using the Experience AI resources. Your comments help us to improve the current materials, and to develop future resources. You can tell us what you think using this form.

    And if you’d like to start using the Experience AI resources as an educator, you can download them for free at experience-ai.org.

    Website: LINK

  • Adapting primary Computing resources for cultural responsiveness: Bringing in learners’ identity

    Adapting primary Computing resources for cultural responsiveness: Bringing in learners’ identity

    Reading Time: 6 minutes

    In recent years, the emphasis on creating culturally responsive educational practices has gained significant traction in schools worldwide. This approach aims to tailor teaching and learning experiences to better reflect and respect the diverse cultural backgrounds of students, thereby enhancing their engagement and success in school. In one of our recent research studies, we collaborated with a small group of primary school Computing teachers to adapt existing resources to be more culturally responsive to their learners.

    Teachers work together to identify adaptations to Computing lessons.
    At a workshop for the study, teachers collaborated to identify adaptations to Computing lessons

    We used a set of ten areas of opportunity to scaffold and prompt teachers to look for ways that Computing resources could be adapted, including making changes to the content or the context of lessons, and using pedagogical techniques such as collaboration and open-ended tasks. 

    Today’s blog lays out our findings about how teachers can bring students’ identities into the classroom as an entry point for culturally responsive Computing teaching.

    Collaborating with teachers

    A group of twelve primary teachers, from schools spread across England, volunteered to participate in the study. The primary objective was for our research team to collaborate with these teachers to adapt two units of work about creating digital images and vector graphics so that they better aligned with the cultural contexts of their students. The research team facilitated an in-person, one-day workshop where the teachers could discuss their experiences and work in small groups to adapt materials that they then taught in their classrooms during the following term.

    A shared focus on identity

    As the workshop progressed, an interesting pattern emerged. Despite the diversity of schools and student populations represented by the teachers, each group independently decided to focus on the theme of identity in their adaptations. This was not a directive from the researchers, but rather a spontaneous alignment of priorities among the teachers.

    An example slide from a culturally adapted activity to create a vector graphic emoji.
    An example of an adapted Computing activity to create a vector graphic emoji.

    The focus on identity manifested in various ways. For some teachers, it involved adding diverse role models so that students could see themselves represented in computing, while for others, it meant incorporating discussions about students’ own experiences into the lessons. However, the most compelling commonality across all groups was the decision to have students create a digital picture that represented something important about themselves. This digital picture could take many forms — an emoji, a digital collage, an avatar to add to a game, or even a fantastical animal. The goal of these activities was to provide students with a platform to express aspects of their identity that were significant to them whilst also practising the skills to manipulate vector graphics or digital images.

    Funds of identity theory

    After the teachers had returned to their classrooms and taught the adapted lessons to their students, we analysed the digital pictures created by the students using funds of identity theory. This theory explains how our personal experiences and backgrounds shape who we are and what makes us unique and individual, and argues that our identities are not static but are continuously shaped and reshaped through interactions with the world around us. 

    Keywords for the funds of identity framework, drawing on work by Esteban-Guitart and Moll (2014) and Poole (2017).
    Funds of identity framework, drawing on work by Esteban-Guitart and Moll (2014) and Poole (2017).

    In the context of our study, this theory argues that students bring their funds of identity into their Computing classrooms, including their cultural heritage, family traditions, languages, values, and personal interests. Through the image editing and vector graphics activities, students were able to create what the funds of identity theory refers to as identity artefacts. This allowed them to explore and highlight the various elements that hold importance in their lives, illuminating different facets of their identities. 

    Students’ funds of identity

    The use of the funds of identity theory provided a robust framework for understanding the digital artefacts created by the students. We analysed the teachers’ descriptions of the artefacts, paying close attention to how students represented their identities in their creations.

    1. Personal interests and values 

    One significant aspect of the analysis centred around the personal interests and values reflected in the artefacts. Some students chose to draw on their practical funds of identity and create images about hobbies that were important to them, such as drawing or playing football. Others focused on existential funds of identity and represented values that were central to their personalities, such as being cool, chatty, or quiet.

    2. Family and community connections

    Many students also chose to include references to their family and community in their artefacts. Social funds of identity were displayed when students featured family members in their images. Some students also drew on their institutional funds, adding references to their school, or geographical funds, by showing places such as the local area or a particular country that held special significance for them. These references highlighted the importance of familial and communal ties in shaping the students’ identities.

    3. Cultural representation

    Another common theme was the way students represented their cultural backgrounds. Some students chose to highlight their cultural funds of identity, creating images that included their heritage, including their national flag or traditional clothing. Other students incorporated ideological aspects of their identity that were important to them because of their faith, including Catholicism and Islam. This aspect of the artefacts demonstrated how students viewed their cultural heritage as an integral part of their identity.

    Implications for culturally responsive Computing teaching

    The findings from this study have several important implications. Firstly, the spontaneous focus on identity by the teachers suggests that identity is a powerful entry point for culturally responsive Computing teaching. Secondly, the application of the funds of identity theory to the analysis of student work demonstrates the diverse cultural resources that students bring to the classroom and highlights ways to adapt Computing lessons in ways that resonate with students’ lived experiences.

    An example of an identity artefact made by one of the students in a culturally adapted lesson on vector graphics.
    An example of an identity artefact made by one of the students in the culturally adapted lesson on vector graphics. 

    However, we also found that teachers often had to carefully support students to illuminate their funds of identity. Sometimes students found it difficult to create images about their hobbies, particularly if they were from backgrounds with fewer social and economic opportunities. We also observed that when teachers modelled an identity artefact themselves, perhaps to show an example for students to aim for, students then sometimes copied the funds of identity revealed by the teacher rather than drawing on their own funds. These points need to be taken into consideration when using identity artefact activities. 

    Finally, these findings relate to lessons about image editing and vector graphics that were taught to students aged 8 to 10 in England, and it remains to be explored how students in other countries or of different ages might reveal their funds of identity in the Computing classroom.

    Moving forward with cultural responsiveness

    The study demonstrated that when Computing teachers are given the opportunity to collaborate and reflect on their practice, they can develop innovative ways to make their teaching more culturally responsive. The focus on identity, as seen in the creation of identity artefacts, provided students with a platform to express themselves and connect their learning to their own lives. By understanding and valuing the funds of identity that students bring to the classroom, teachers can create a more equitable and empowering educational experience for all learners.

    Two learners do physical computing in the primary school classroom.

    We’ve written about this study in more detail in a full paper and a poster paper, which will be published at the WiPSCE conference next week. 

    We would like to thank all the researchers who worked on this project, including our collaborators Lynda Chinaka from the University of Roehampton and Alex Hadwen-Bennett from King’s College London. Finally, we are grateful to Cognizant for funding this academic research, and to the cohort of primary Computing teachers for their enthusiasm, energy, and creativity, and their commitment to this project.

    Website: LINK

  • Experience AI at UNESCO’s Digital Learning Week

    Experience AI at UNESCO’s Digital Learning Week

    Reading Time: 5 minutes

    Last week, we were honoured to attend UNESCO’s Digital Learning Week conference to present our free Experience AI resources and how they can help teachers demystify AI for their learners.  

    A group of educators at a UNESCO conference.

    The conference drew a worldwide audience in person and online to hear about the work educators and policy makers are doing to support teachers’ use of AI tools in their teaching and learning. Speaker after speaker reiterated that the shared goal of our work is to support learners to become critical consumers and responsible creators of AI systems.

    In this blog, we share how our conference talk demonstrated the use of Experience AI for pursuing this globally shared goal, and how the Experience AI resources align with UNESCO’s newly launched AI competency framework for students.

    Presenting the design principles behind Experience AI

    Our talk about Experience AI, our learning programme developed with Google DeepMind, focused on the research-informed approach we are taking in our resource development. Specifically, we spoke about three key design principles that we embed in the Experience AI resources:

    Firstly, using AI and machine learning to solve problems requires learners and educators to think differently from the way they do in traditional computational thinking, taking a data-driven approach instead, as laid out in the research around computational thinking 2.0.

    Secondly, every word we use in our teaching about AI is important in helping young people form accurate mental models of how AI systems work. In particular, we focused our examples on the need to avoid anthropomorphising language when we describe AI systems. Especially given that some developers produce AI systems with the aim of making them appear human-like in their design and outputs, it’s important that young people understand that AI systems are in fact built and designed by humans.

    Thirdly, we described how we used the SEAME framework, which we adapted from work by Jane Waite (Raspberry Pi Foundation) and Paul Curzon (Queen Mary University of London), to categorise hundreds of AI education resources and inform the design of our Experience AI resources. The framework offers a common language for educators when assessing the content of resources, and when supporting learners to understand the different aspects of AI systems.

    By presenting our design principles, we aimed to give educators, policy makers, and attendees from non-governmental organisations practical recommendations and actionable considerations for designing learning materials on AI literacy.   

    How Experience AI aligns with UNESCO’s new AI competency framework for students

    At Digital Learning Week, UNESCO launched two AI competency frameworks:

    • A framework for students, intended to help teachers around the world with integrating AI tools in activities to engage their learners
    • A framework for teachers, “defining the knowledge, skills, and values teachers must master in the age of AI”

    AI competency framework for students

    We have had the chance to map the Experience AI resources to UNESCO’s AI framework for students at a high level, finding that the resources cover 10 of the 12 areas of the framework (see image below).

    An adaptation of a summary table from UNESCO’s new student competency framework (CC-BY-SA 3.0 IGO), highlighting the 10 areas covered by our Experience AI resources

    For instance, throughout the Experience AI resources runs a thread of promoting “citizenship in the AI era”: the social and ethical aspects of AI technologies are highlighted in all the lessons and activities. In this way, they provide students with the foundational knowledge of how AI systems work, and where they may work badly. Using the resources, educators can teach their learners core AI and machine learning concepts and make these concepts concrete through practical activities where learners create their own models and critically evaluate their outputs. Importantly, by learning with Experience AI, students not only learn to be responsible users of AI tools, but also to consider fairness, accountability, transparency, and privacy when they create AI models.  

    Teacher competency framework for AI 

    UNESCO’s AI competency framework for teachers outlines 15 competencies across 5 dimensions (see image below).  We enjoyed listening to the launch panel members talk about the strong ambitions of the framework as well as the realities of teachers’ global and local challenges. The three key messages of the panel were:

    • AI will not replace the expertise of classroom teachers
    • Supporting educators to build AI competencies is a shared responsibility
    • Individual countries’ education systems have different needs in terms of educator support

    All three messages resonate strongly with the work we’re doing at the Raspberry Pi Foundation. Supporting all educators is a fundamental part of our resource development. For example, Experience AI offers everything a teacher with no technical background needs to deliver the lessons, including lesson plans, videos, worksheets and slide decks. We also provide a free online training course on understanding AI for educators. And in our work with partner organisations around the world, we adapt and translate Experience AI resources so they are culturally relevant, and we organise locally delivered teacher professional development. 

    A summary table from UNESCO’s new teacher competency framework (CC-BY-SA 3.0 IGO)

    The teachers’ competency framework is meant as guidance for educators, policy makers, training providers, and application developers to support teachers in using AI effectively, and in helping their learners gain AI literacy skills. We will certainly consult the document as we further develop our training and professional development resources for teachers.

    Towards AI literacy for all young people

    Across this year’s UNESCO Digital Learning Week, we saw that the role of AI in education took centre stage across the presentations and the informal conversations among attendees. It was a privilege to present our work and see how well Experience AI was received, with attendees recognising that our design principles align with the values and principles in UNESCO’s new AI competency frameworks.

    A conference table setup with a pair of headphones resting on top of a UNESCO brochure.

    We look forward to continuing this international conversation about AI literacy and working in aligned ways to support all young people to develop a foundational understanding of AI technologies.

    Website: LINK

  • Experience AI expands to reach over 2 million students

    Experience AI expands to reach over 2 million students

    Reading Time: 4 minutes

    Two years ago, we announced Experience AI, a collaboration between the Raspberry Pi Foundation and Google DeepMind to inspire the next generation of AI leaders.

    Today I am excited to announce that we are expanding the programme with the aim of reaching more than 2 million students over the next 3 years, thanks to a generous grant of $10m from Google.org. 

    Why do kids need to learn about AI?

    AI technologies are already changing the world, and we are told that their potential impact is unprecedented in human history. But just like every other wave of technological innovation, along with all of the opportunities, the AI revolution has the potential to leave people behind, to exacerbate divisions, and to create more problems than it solves.

    Part of the answer to this dilemma lies in ensuring that all young people develop a foundational understanding of AI technologies and the role that they can play in their lives. 

    An educator points to an image on a student's computer screen.

    That’s why the conversation about AI in education is so important. A lot of the focus of that conversation is on how we harness the power of AI technologies to improve teaching and learning. Enabling young people to use AI to learn is important, but it’s not enough. 

    We need to equip young people with the knowledge, skills, and mindsets to use AI technologies to create the world they want. And that means supporting their teachers, who once again are being asked to teach a subject that they didn’t study. 

    Experience AI 

    That’s the work that we’re doing through Experience AI, an ambitious programme to provide teachers with free classroom resources and professional development, enabling them to teach their students about AI technologies and how they are changing the world. All of our resources are grounded in research that defines the concepts that make up AI literacy, they are rooted in real world examples drawing on the work of Google DeepMind, and they involve hands-on, interactive activities. 

    The Experience AI resources have already been downloaded 100,000 times across 130 countries and we estimate that 750,000 young people have taken part in an Experience AI lesson already. 

    In November 2023, we announced that we were building a global network of partners that we would work with to localise and translate the Experience AI resources, to ensure that they are culturally relevant, and organise locally delivered teacher professional development. We’ve made a fantastic start working with partners in Canada, India, Kenya, Malaysia, and Romania; and it’s been brilliant to see the enthusiasm and demand for AI literacy from teachers and students across the globe. 

    Thanks to an incredibly generous donation of $10m from Google.org – announced at Google.org’s first Impact Summit – we will shortly be welcoming new partners in 17 countries across Europe, the Middle East, and Africa, with the aim of reaching more than 2 million students in the next three years.

    AI Safety

    Alongside the expansion of the global network of Experience AI partners, we are also launching new resources that focus on critical issues of AI safety. 

    A laptop surrounded by various screens displaying images, videos, and a world map.

    AI and Your Data: Helping young people reflect on the data they are already providing to AI applications in their lives and how the prevalence of AI tools might change the way they protect their data.

    Media Literacy in the Age of AI: Highlighting the ways AI tools can be used to perpetuate misinformation and how AI applications can help combat misleading claims.

    Using Generative AI Responsibly: Empowering young people to reflect on their responsibilities when using Generative AI and their expectations of developers who release AI tools.

    Get involved

    In many ways, this moment in the development of AI technologies reminds me of the internet in the 1990s (yes, I am that old). We all knew that it had potential, but no-one could really imagine the full scale of what would follow. 

    We failed to rise to the educational challenge of that moment and we are still living with the consequences: a dire shortage of talent; a tech sector that doesn’t represent all communities and voices; and young people and communities who are still missing out on economic opportunities and unable to utilise technology to solve the problems that matter to them. 

    We have an opportunity to do a better job this time. If you’re interested in getting involved, we’d love to hear from you.

    Website: LINK

  • Join the UK Bebras Challenge 2024

    Join the UK Bebras Challenge 2024

    Reading Time: 4 minutes

    The UK Bebras Challenge, the nation’s largest computing competition, is back and open for entries from schools. This year’s challenge will be open for entries from 4–15 November. Last year, over 400,000 students from across the UK took part. Read on to learn how your school can get involved.

    What is UK Bebras?

    UK Bebras is a free-to-enter annual competition that is designed to spark interest in computational thinking among students aged 6 to 19 by providing engaging and thought-provoking activities. The 45-minute challenge is accessible to everyone, offering age-appropriate interactive questions for students at different levels, including a tailored version for students with severe sight impairments. 

    The questions are designed to give every student the opportunity to showcase their potential, whether or not they excel in maths or computing. With self-marking questions and no programming required, it’s easy for schools to participate.

    “Thank you for another fantastic Bebras event! My students have really enjoyed it. This is the first year that one of my leadership team actually did the Bebras to understand what we are preparing the children for — she was very impressed!” Reference 5487

    A class of primary school students do coding at laptops.

    “I really enjoyed doing the Bebras challenge yesterday. It was the most accessible it’s ever been for me as a braillist/screen reader user.” Reference 5372

    What does a UK Bebras question look like?

    The questions are inspired by classic computing problems but are presented in a fun, age-appropriate way. For instance, a puzzle for 6- to 8-year-olds might involve guiding a hungry tortoise along the most efficient path across a lawn, while 16- to 19-year-olds could be asked to sort members for quiz teams based on who knows who — a challenging problem relating to graph theory.

    Here’s a question we ran in 2023 for the Castors group (ages 8 to 10). Can you solve it? 

    Planting carrots

    A robotic rabbit is planting carrot seeds in a row of four earth mounds.

    It can respond to these commands:

    • jump left to the next mound
    • jump right to the next mound
    • plant a carrot seed in the mound you are on

    Here is the sequence of commands for the rabbit (shown in the original puzzle as a row of command icons).

    We don’t know which mound the rabbit started on, but we do know that, when it followed this sequence, it placed each of the three carrot seeds on a different mound.

    Question: 

    Which picture shows how the carrot seeds could have been planted by the robot following the sequence of commands?

    Example puzzle answer

    Following the sequence of commands, the rabbit places the first seed on the mound to the far right, executes the next commands and lays the second seed, then jumps to the left twice and lays the last seed. (The robot’s route and the final arrangement of the seeds are shown as images in the original puzzle.)

    Did you get it right?

    How do I get my school involved?

    Visit the UK Bebras website for more information and to register your school. Once you’ve registered, you’ll get access to the entire UK Bebras back catalogue of questions, allowing you to create custom quizzes for your students to tackle at any time throughout the year. These quizzes are self-marking, and you can download your students’ results to keep track of their progress. Schools have found these questions perfect for enrichment activities, end-of-term quizzes, lesson starters, and even full lessons to develop computational thinking skills.

    Join for free at bebras.uk/admin.

    Website: LINK

  • Bridging the gap from Scratch to Python: Introducing ‘Paint with Python’

    Bridging the gap from Scratch to Python: Introducing ‘Paint with Python’

    Reading Time: 3 minutes

    We have developed an innovative activity to support young people as they transition from visual programming languages like Scratch to text-based programming languages like Python.

    An illustration of a web browser window with colourful tags and labels around it.

    This activity introduces a unique interface that empowers learners to easily interact with Python while they create a customised painting app.

    “The kids liked the self-paced learning, it allowed them to work at their own rate. They liked using RGB tables to find their specific colours.” – Code Club mentor

    Why learn to code Python?

    We’ve long been championing Python as an ideal tool for young people who want to start text-based programming. Python has simple syntax and needs very few lines of code to get started, and there is a vibrant community of supportive programmers surrounding it.

    However, we know that starting with Python can be challenging for young people who have never done any text-based coding. They can face obstacles such as software installation issues, getting used to a new syntax, and the need for appropriate typing skills.

    How ‘Paint with Python’ helps learners get started

    ‘Paint with Python’ is an online educational activity that addresses many of these challenges and helps young people learn to code Python for the first time. It’s entirely web-based, requiring no software installation beyond a web browser. Instructions are displayed in a side panel, allowing learners to read and code without needing to switch tabs.

    To help young people create their painting app, much of the initial code is pre-written behind the scenes, which enables learners to focus on experimenting with Python and observing the outcomes. They engage with the code by clicking on suggested options or, in some cases, by typing small snippets of Python. For example, they can select colours from a range of options or, as they grow more confident, type RGB values to create custom colours.
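
    As a rough illustration of the kind of program learners end up with, here is a minimal, stand-alone painting sketch written in plain Python with Tkinter. This is not the code behind ‘Paint with Python’ (which runs in its own web-based interface); it simply shows a brush colour defined from RGB values and a canvas you can paint on by dragging the mouse.

    # A minimal illustrative sketch, unrelated to the actual 'Paint with Python' code
    import tkinter as tk

    brush_colour = "#ff5733"   # an RGB colour written as a hex string
    brush_size = 8

    def paint(event):
        # Draw a small filled circle wherever the mouse is dragged
        x, y = event.x, event.y
        canvas.create_oval(x - brush_size, y - brush_size,
                           x + brush_size, y + brush_size,
                           fill=brush_colour, outline=brush_colour)

    window = tk.Tk()
    window.title("Tiny paint sketch")
    canvas = tk.Canvas(window, width=400, height=300, bg="white")
    canvas.pack()
    canvas.bind("<B1-Motion>", paint)  # paint while the left mouse button is held
    window.mainloop()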

    The activity is fully responsive for mobile and tablet devices and provides a final view of the full program on the last page, together with suggested routes to continue learning text-based programming.

    An accessible introduction to text-based programming

    We believe this activity offers an accessible way for young learners to begin their journey with text-based programming and learning to code Python. The code they write is straightforward and the activity is designed to minimise errors. When mistakes do occur, the interface provides clear, constructive feedback, guiding learners to make corrections.

    Try out ‘Paint with Python’ at rpf.io/paint-with-python. We’d love to hear your feedback! Please send any thoughts you have to uxresearch@raspberrypi.org. 

    This activity was developed with support from the Cisco Foundation. Through our funding partnership with them, we’ve been able to provide thousands of young people with the inspiration and opportunity to progress their coding skills anywhere, and on any device.

    Website: LINK

  • Get ready for Moonhack 2024: Projects on climate change

    Get ready for Moonhack 2024: Projects on climate change

    Reading Time: 3 minutes

    Moonhack is a free, international coding challenge for young people run online every year by Code Club Australia, powered by our partner the Telstra Foundation. The yearly challenge is open to young people worldwide, and in 2023, over 44,500 young people registered to take part.

    A Moonhack 2024 logo.

    Moonhack 2024 runs from 14 to 31 October. This year’s theme is taken from World Space Week 2024: climate change. As always, the projects cater for everyone from brand-new beginners to more experienced coders. And young people have a chance to win a prize for their submitted project!

    We caught up with Kaye North, Community and Engagement Manager at Code Club Australia, to find out more.

    What to expect from Moonhack in 2024

    For this year’s projects, Kaye told us that she collaborated with farmers, scientists, and young people from across Australia to cover diverse topics related to climate change and space. The projects will help participants learn about topics ranging from how people who work in agriculture use climate data to increase crop yields and practise sustainable farming, to the impact of rising global temperatures on sea life populations.

    An illustration depicting various elements related to the environment and sustainability.

    Kaye also hopes to help young people understand the role of satellite data related to climate change, such as the data NASA collects and shares via satellite. Satellite data on rising sea levels, called out in United Nations Sustainable Development Goal 13, forms the basis of one of the Moonhack projects this year.

    Moonhack participants will be able to code with Scratch, micro:bit, or Python. They can also take on a project brief where they may choose their favourite programming language and even include physical computing if they wish.

    A computing classroom filled with learners.

    All six projects will be available from 1 September when registration opens, and projects can be submitted until 30 November.

    Inspiring young people to create a better future

    Climate change is an issue that affects everyone, and for many young people it’s a source of concern. Kaye’s aim this year is to show small changes young people can make to contribute to a big, global impact.

    “Moonhack’s question this year is ‘Can we create calls to action through our coding to influence others to make better choices, or even inform them of things that they didn’t know that they can share with others?’” – Kaye North, Code Club Australia

    Moonhack support for volunteers, teachers and parents

    This year’s Moonhack includes new resources to help educators and mentors who are supporting young people to take part.

    Get your young coders involved: Key info

    • Registration for Moonhack 2024 opens on 1 September
    • The challenge runs from 14 to 31 October, and projects can be submitted until 30 November
    • Participation is free and open to any young coder worldwide, whether they are part of a Code Club or not
    • Everyone from beginners to advanced coders can participate
    • The six projects for Moonhack 2024 will be available in around 30 languages

    To find out more, visit the Moonhack website and sign up to the Moonhack newsletter.

    Code Club Australia is powered by the Telstra Foundation as part of a strategic partnership with us at the Raspberry Pi Foundation.

    Website: LINK

  • Celebrating the community: Isabel

    Celebrating the community: Isabel

    Reading Time: 5 minutes

    One of our favourite things is sharing the stories of amazing young people, volunteers, and educators who are using their passion for technology to create positive change in the world around them.

    Recently, we had the pleasure of speaking with Isabel, a computer science teacher at Barton Peveril Sixth Form College in Eastleigh, England. She told us about her fascinating journey from industry to education, along with how she is helping to make the tech space inviting to all.

    From industry to the classroom: Isabel’s journey to encourage diversity in tech

    Isabel’s path to working in the tech sector started with her early exposure to engineering thanks to her father’s career in telecoms.

    “I find this is true for a lot of female engineers my age: you will find that their dad or their uncle was an engineer. I remember that when I made the decision to study engineering, my teachers asked me if I was sure that it was something I wanted to do.”

    Isabel pursued a degree in engineering because she loved the technical aspects, and during her studies she found a passion for programming. She went to work as a software engineer in Hampshire, contributing to the development of 3G mobile phone technology.

    Despite enjoying her career in tech, Isabel felt a strong pull towards teaching due to her long-standing involvement with youth groups and a desire to give back to the community.

    “While I was at university in London, I took part in a scheme where we could go into local primary schools and help with their science teaching. At the time, I just thought this was my way of giving back, I hadn’t really thought of it as a career. But actually, after a while, I thought ‘I’m enjoying this programming, but I really liked helping the young kids as well’.”

    The transition wasn’t easy, as Computer Science was not widely taught in schools at the time, but Isabel persevered, teaching IT and Media to her classes as well.

    Once Isabel settled into her teaching role, she began thinking about how she could tackle a problem she noticed in the STEM field.

    Championing diversity in tech

    Having experienced first-hand what it was like to be the only woman in STEM spaces, Isabel’s commitment to diversity in technology is at the core of her teaching philosophy. She works hard to create an inclusive environment and a diversity of opportunities in her classroom, making sure girls feel encouraged to pursue careers in tech through exploring various enrichment activities.

    Two educators at a desk using their computers.

    Isabel focuses on enrichment activities that bridge the gap between academic learning and real-world application. She runs various projects and competitions, ensuring a balanced representation of girls in these initiatives, and gives her students the opportunity to participate in programmes like the Industrial Cadets, Student Robotics, and Coolest Projects.

    Isabel told us that she feels these opportunities provide essential soft skills that are crucial for success in any career.

    “The A level environment is so academic; it is heavily focused on working on your own on very abstract topics. Having worked in industry and knowing the need to collaborate, I found that really hard. So I’ve always made sure to do lots of projects with my students where we actually work with real engineers, do real-world projects. I believe strongly in teaching soft skills like team working, project management, and time management.”

    Harnessing trusted resources

    A key resource in Isabel’s teaching toolkit is the Ada Computer Science platform. She values its reliability and the timely updates to the topics, which are crucial in a rapidly evolving subject like Computer Science.

    She said she encourages both her students and fellow teachers, especially those who have retrained in Computer Science, to use the platform as a resource. 

    “Ada Computer Science is amazing. We know we can rely on saying to the students ‘look on Ada, the information will be correct’ because I trust the people creating the resources. And we even found ourselves as teachers double-checking things on there. We struggle to get Computer science teachers, so actually only two of us are Computer Science teachers, and the other three are Maths teachers we have trained up. To be able to say ‘if you are not sure about something, look on Ada’ is a really nice thing to have.”

    A large group of educators at a workshop.

    The ongoing challenge and hope for the future

    Despite her efforts, Isabel acknowledges that progress in getting more girls to pursue tech careers is slow. Many girls still view tech as an uninviting space and feel like they don’t belong when they find themselves as one of a few girls — if not the only one — in a class. But Isabel remains hopeful that continuous exposure and positive experiences can change these perceptions.

    “I talk to students who are often the only girl in the class and they find that really hard. So, if at GCSE they are the only girl in the class, they won’t do [the subject] at A level. So, if we leave it until A level, it is almost too late. Because of this, I try as much as I can to get as many girls as possible onto my engineering enrichment projects to show them as many opportunities in engineering as possible early on.”

    Her work with organisations like the UK Electronics Skills Foundation reflects her commitment to raising awareness about careers in electronics and engineering. Through her outreach and enrichment projects, Isabel educates younger students about the opportunities in these fields, hoping to inspire more girls to consider them as viable career paths.

    Looking ahead

    As new technology continues to be built, Isabel recognises the challenges in keeping up with rapid changes, especially with fields like artificial intelligence (AI). She stays updated through continuous learning and collaborating with her peers, and encourages her students to be adaptable and open to new developments. “The world of AI is both exciting and daunting,” she admits. “We need to prepare our students for a future that we can hardly predict.”

    Isabel’s dedication to teaching, her advocacy for diversity, and her efforts to provide real-world learning opportunities make her an inspiring educator. Her commitment was recognised by the Era Foundation in 2023: Isabel was named as one of their David Clark Prize recipients. The award recognises those who “have gone above and beyond the curriculum to inspire students and showcase real-world engineering in the classroom”.

    A woman receives a certificate of recognition.

    Isabel not only imparts technical knowledge — she inspires her students to believe in their potential, encouraging a new generation of diverse tech professionals. 

    If Isabel’s story has inspired you to encourage the next generation of young tech creators, check out the free teaching and training resources we provide to support your journey.

    If you are working in Computer Science teaching for learners aged 14 and up, take a look at how Ada Computer Science will support you.

    Website: LINK

  •  CSTA 2024: What happened in Las Vegas

     CSTA 2024: What happened in Las Vegas

    Reading Time: 4 minutes

    About three weeks ago, a small team from the Raspberry Pi Foundation braved high temperatures and expensive coffees (and a scarcity of tea) to spend time with educators at the CSTA Annual Conference in Las Vegas.

    A team of 6 educators inside a conference hall.

    With thousands of attendees from across the US and beyond participating in engaging workshops and thought-provoking talks, and visiting the fantastic expo hall, the CSTA conference was an excellent opportunity for us to connect with and learn from educators.

    Meeting educators & sharing resources

    Our hope for the conference week was to meet and learn from as many different educators as possible, and we weren’t disappointed. We spoke with a wide variety of teachers, school administrators, and thought leaders about the progress, successes, and challenges of delivering successful computer science (CS) programs in the US (more on this soon). We connected and reconnected with so many educators at our stand, gave away loads of stickers… and we even gave away a Raspberry Pi Pico to one lucky winner each day.

    A group of educators taking a selfie at a conference.
    The team with one of the winners of a Raspberry Pi Pico

    As well as learning from hundreds of educators throughout the week, we shared some of the ways in which the Foundation supports teachers to deliver effective CS education. Our team was on hand to answer questions about our wide range of free learning materials and programs to support educators and young people alike. We focused on sharing our projects site and all of the ways educators can use the site’s unique projects pathways in their classrooms. And of course we talked to educators about Code Club. It was awesome to hear from club leaders about the work their students accomplished, and many educators were eager to start a new club at their schools! 

    An educator is holding Hello World magazine.
    We gave a copy of the second Big Book to all conference attendees.

    Back in 2022 at the last in-person CSTA conference, we had donated a copy of our first special edition of Hello World magazine, The Big Book of Computing Pedagogy, for every attendee. This time around, we donated copies of our follow-up special edition, The Big Book of Computing Content. Where the first Big Book focuses on how to teach computing, the second Big Book delves deep into what we teach as the subject of computing, laying it out in 11 content strands.

    Our talks about teaching (with) AI

    One of the things that makes CSTA conferences so special is the fantastic range of talks, workshops, and other sessions running at and around the conference. We took the opportunity to share some of our work in flash talks and two full-length sessions.

    One of the sessions was led by one of our Senior Learning Managers, Ben Garside, who gave a talk to a packed room on what we’ve learned from developing AI education resources for Experience AI. Ben shared insights we’ve gathered over the last two years and talked about the design principles behind the Experience AI resources.

    An educator is giving a talk at a conference.
    Ben discussed AI education with attendees.

    Being in the room for Ben’s talk, I was struck by two key takeaways:

    1. The issue of anthropomorphism, that is, projecting human-like characteristics onto artificial intelligence systems and other machines. This presents several risks and obstacles for young people trying to understand AI technology. In our teaching, we need to take care to avoid anthropomorphizing AI systems, and to help young people shift false conceptions they might bring into the classroom.
    2. Teaching about AI requires fostering a shift in thinking. When we teach traditional programming, we show learners that this is a rules-based, deterministic approach; meanwhile, AI systems based on machine learning are driven by data and statistical patterns. These two approaches and their outcomes are distinct (but often combined), and we need to help learners develop their understanding of the significant differences.

    Our second session was led by Diane Dowling, another Senior Learning Manager at the Foundation. She shared some of the development work behind Ada Computer Science, our free platform providing educators and learners with a vast set of questions and content to help understand CS.

    An educator is presenting at a conference.
    Diane presented our trial of LLM-based automated feedback.

    Recently, we’ve been experimenting with the use of a large language model (LLM) on Ada to provide assessment feedback on long-form questions. This led to a great conversation between Diane and the audience about the practicalities, risks, and implications of such a feature.

    More on what we learned from CSTA coming soon

    We had a fantastic time with the educators in Vegas and are grateful to CSTA and their sponsors for the opportunity to meet and learn from so many different people. We’ll be sharing some of what we learned from the educators we spoke to in a future blog post, so watch this space.

    A group of educators standing outside a conference venue.

    Website: LINK

  • Why we’re taking a problem-first approach to the development of AI systems

    Why we’re taking a problem-first approach to the development of AI systems

    Reading Time: 7 minutes

    If you are into tech, keeping up with the latest updates can be tough, particularly when it comes to artificial intelligence (AI) and generative AI (GenAI). I admit to sometimes feeling this way myself; however, one recent update really caught my attention. OpenAI launched their latest iteration of ChatGPT, this time adding a female-sounding voice. Their launch video demonstrated the model supporting the presenters with a maths problem and giving advice around presentation techniques, sounding friendly and jovial along the way.

    A finger clicking on an AI app on a phone.

    Adding a voice to these AI models was perhaps inevitable as big tech companies try to compete for market share in this space, but it got me thinking, why would they add a voice? Why does the model have to flirt with the presenter? 

    Working in the field of AI, I’ve always seen AI as a really powerful problem-solving tool. But with GenAI, I often wonder what problems the creators are trying to solve and how we can help young people understand the tech. 

    What problem are we trying to solve with GenAI?

    The fact is that I’m really not sure. That’s not to suggest that I think GenAI hasn’t got its benefits — it does. I’ve seen so many great examples in education alone: teachers using large language models (LLMs) to generate ideas for lessons, to help differentiate work for students with additional needs, and to create example answers to exam questions for their students to assess against the mark scheme. Educators are creative people, and whilst it is cool to see so many good uses of these tools, I wonder whether the developers had solving specific problems in mind while creating them, or whether they simply hoped that society would find a good use somewhere down the line.

    An educator points to an image on a student's computer screen.

    Whilst there are good uses of GenAI, you don’t need to dig very deeply before you start unearthing some major problems. 

    Anthropomorphism

    Anthropomorphism relates to assigning human characteristics to things that aren’t human. This is something that we all do, all of the time, without it having consequences. The problem with doing this with GenAI is that, unlike an inanimate object you’ve named (I call my vacuum cleaner Henry, for example), chatbots are designed to be human-like in their responses, so it’s easy for people to forget they’re not speaking to a human. 

    A photographic rendering of a smiling face emoji seen through a refractive glass grid, overlaid with a diagram of a neural network.
    Image by Alan Warburton / © BBC / Better Images of AI / Social Media / CC-BY 4.0

    As feared, since my last blog post on the topic, evidence has started to emerge that some young people are showing a desire to befriend these chatbots, going to them for advice and emotional support. It’s easy to see why. Here is an extract from an exchange between the presenters at the ChatGPT-4o launch and the model:

    ChatGPT (presented with a live image of the presenter): “It looks like you’re feeling pretty happy and cheerful with a big smile and even maybe a touch of excitement. Whatever is going on? It seems like you’re in a great mood. Care to share the source of those good vibes?”
    Presenter: “The reason I’m in a good mood is we are doing a presentation showcasing how useful and amazing you are.”
    ChatGPT: “Oh stop it, you’re making me blush.” 

    The Family Online Safety Institute (FOSI) conducted a study looking at the emerging hopes and fears that parents and teenagers have around GenAI.

    One teenager in the study said:

    “Some people just want to talk to somebody. Just because it’s not a real person, doesn’t mean it can’t make a person feel — because words are powerful. At the end of the day, it can always help in an emotional and mental way.”  

    The prospect of teenagers seeking solace and emotional support from a generative AI tool is a concerning development. While these AI tools can mimic human-like conversations, their outputs are based on patterns and data, not genuine empathy or understanding. The ultimate concern is that this exposes vulnerable young people to being manipulated in ways we can’t predict. Relying on AI for emotional support could lead to a sense of isolation and detachment, hindering the development of healthy coping mechanisms and interpersonal relationships.

    A photographic rendering of a simulated middle-aged white woman against a black background, seen through a refractive glass grid and overlaid with a distorted diagram of a neural network.
    Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0

    Arguably worse is the recent news of the world’s first AI beauty pageant. The very thought of this probably elicits some kind of emotional response, depending on your view of beauty pageants. There are valid concerns around misogyny and reinforcing misguided views on body norms, but it’s also important to note that the winner of “Miss AI” is being described as a lifestyle influencer. The questions we should be asking are: who are the creators trying to influence, and what influence are they trying to gain that they couldn’t get before they created a virtual woman?

    DeepFake tools

    Another use of GenAI is the ability to create DeepFakes. If you’ve watched the most recent Indiana Jones movie, you’ll have seen the technology in play, making Harrison Ford appear as a younger version of himself. This is not in itself a bad use of GenAI technology, but the application of DeepFake technology can easily become problematic. For example, recently a teacher was arrested for creating a DeepFake audio clip of the school principal making racist remarks. The recording went viral before anyone realised that AI had been used to generate the audio clip. 

    Easy-to-use DeepFake tools are freely available and, as with many tools, they can be used inappropriately to cause damage or even break the law. One such instance is the rise in using the technology for pornography. This is particularly dangerous for young women, who are the more likely victims, and can cause severe and long-lasting emotional distress and harm to the individuals depicted, as well as reinforce harmful stereotypes and the objectification of women. 

    Why we should focus on using AI as a problem-solving tool

    Technological developments causing unforeseen negative consequences is nothing new. A lot of our job as educators is about helping young people navigate the changing world and preparing them for their futures, and education has an essential role in helping people understand AI technologies so that they can avoid the dangers.

    Our approach at the Raspberry Pi Foundation is not to focus purely on the threats and dangers, but to teach young people to be critical users of technologies and not passive consumers. Having an understanding of how these technologies work goes a long way towards building the AI literacy needed to make informed choices, and this is where our Experience AI program comes in.

    An Experience AI banner.

    Experience AI is a set of lessons developed in collaboration with Google DeepMind and, before we wrote any lessons, our team thought long and hard about what we believe are the important principles that should underpin teaching and learning about artificial intelligence. One such principle is taking a problem-first approach and emphasising that computers are tools that help us solve problems. In the Experience AI fundamentals unit, we teach students to think about the problem they want to solve before thinking about whether or not AI is the appropriate tool to use to solve it. 

    Taking a problem-first approach doesn’t by default avoid an AI system causing harm — there’s still the chance it will increase bias and societal inequities — but it does focus the development on the end user and the data needed to train the models. I worry that focusing on market share and opportunity rather than the problem to be solved is more likely to lead to harm.

    Another set of principles that underpins our resources is teaching about fairness, accountability, transparency, privacy, and security (Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and higher education, Understanding Artificial Intelligence Ethics and Safety) in relation to the development of AI systems. These principles are aimed at making sure that creators of AI models develop models ethically and responsibly. The principles also apply to consumers, as we need to get to a place in society where we expect these principles to be adhered to and consumer power means that any models that don’t, simply won’t succeed. 

    Furthermore, once students have created their models in the Experience AI fundamentals unit, we teach them about model cards, an approach that promotes transparency about their models. Much like how nutritional information on food labels allows the consumer to make an informed choice about whether or not to buy the food, model cards give information about an AI model such as the purpose of the model, its accuracy, and known limitations such as what bias might be in the data. Students write their own model cards based on the AI solutions they have created. 
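
    As a purely illustrative sketch, rather than the exact template used in the lessons, a student’s model card for a simple image classifier might capture fields like the following (all names and figures here are made up):

    # Hypothetical model card for a student-made fruit classifier; the fields
    # and values are illustrative, not the Experience AI template.
    model_card = {
        "purpose": "Classify photos of fruit as 'apple' or 'tomato'",
        "training_data": "120 photos taken by the class, roughly half of each fruit",
        "accuracy": "88% on a held-out set of 25 test photos (made-up figure)",
        "known_limitations": [
            "Photos were taken indoors, so outdoor lighting may reduce accuracy",
            "Only red apples were included, so green apples may be misclassified",
        ],
    }

    # Print the card so it reads a little like a nutrition label for the model.
    for field, value in model_card.items():
        print(f"{field}: {value}")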

    What else can we do?

    At the Raspberry Pi Foundation, we have set up an AI literacy team with the aim of embedding principles around AI safety, security, and responsibility into our resources and aligning them with the Foundation’s mission to help young people to:

    • Be critical consumers of AI technology
    • Understand the limitations of AI
    • Expect fairness, accountability, transparency, privacy, and security, and work toward reducing inequities caused by technology
    • See AI as a problem-solving tool that can augment human capabilities, but not replace or narrow their futures 

    Our call to action to educators, carers, and parents is to have conversations with your young people about GenAI. Get to know their opinions on GenAI and how they view its role in their lives, and help them to become critical thinkers when interacting with technology. 

    Website: LINK

  • Celebrating Astro Pi 2024

    Celebrating Astro Pi 2024

    Reading Time: 5 minutes

    About the projects

    Over the past few months, young people across Europe have run their computer programs on the International Space Station (ISS) as part of Astro Pi Mission Zero and Mission Space Lab.

    Mission Zero code deployment
    Mission Zero code deployment | Credits: ESA/NASA

    Mission Zero offers young people the chance to write a simple program that takes a reading from the colour and luminosity sensor on an Astro Pi computer on board the ISS, and uses it to set the background colour in a personalised image for the astronauts to see as they go about their daily tasks. In total, 16,039 teams and 24,663 young people participated in Mission Zero this year. This was a 3% increase in teams entering compared to last year.
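
    To give a flavour of what such an entry involves, here is a minimal sketch of the kind of program a Mission Zero team might write. It assumes the sense_hat Python library and its colour sensor interface (names such as sense.colour and show_message are our assumptions about that library); the exact requirements for official entries are set out in the Mission Zero project guide and may differ.

    # Minimal, illustrative Mission-Zero-style program; assumes an Astro Pi or
    # Sense HAT with a colour sensor and the sense_hat Python library.
    from sense_hat import SenseHat

    sense = SenseHat()

    # Take a reading from the colour and luminosity sensor and clamp each
    # channel to the 0 to 255 range used by the LED matrix.
    reading = sense.colour
    r = min(reading.red, 255)
    g = min(reading.green, 255)
    b = min(reading.blue, 255)

    # Use the reading as the background colour of a short message shown on
    # the LED matrix for the astronauts to see.
    sense.clear(r, g, b)
    sense.show_message("Hello from Earth!", text_colour=(255, 255, 255), back_colour=(r, g, b))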

    Mission Space Lab offers teams of young people the chance to run scientific experiments on board the ISS. This year, 564 teams and 2,008 young people participated in Mission Space Lab. Compared with last year, there was a 4% increase in the number of teams who managed to achieve flight status and run their code in space.

    Two young people at a computer.

    To evaluate the projects, we encouraged mentors to complete surveys once their teams had submitted their computer programs. Overall, 135 Mission Zero mentors (11% of mentors) and 56 Mission Space Lab mentors (15% of mentors) completed surveys. We also ran focus groups with mentors from both projects to understand their experiences and the impact of these projects on young people.

    Impact on young people

    Understanding how technology is changing the world

    The mentors we spoke to told us how valuable Mission Zero and Mission Space Lab are because these experiences connect young people to real technology. Mentors felt that Mission Zero and Mission Space Lab bridge the gap between theoretical coding and tangible outcomes, giving young people the confidence to engage with technology.

    “Participating in Mission Space Lab offers students a great opportunity to work with the International Space Station, to see the Earth from above, to challenge them to overcome the terrestrial limits. It’s very important.” — Mission Space Lab mentor

    A young person working on a coding project on a computer.

    “We want students to use their digital skills as superpowers to make the world a better place and this competition really aligns with that because regardless of your race, your ethnicity, your gender, you can write some code that actually runs in space. And if you can do that, then you can make medical tech, or you can solve the big problem that the adults of the world are still grappling with, so it’s the opening up [of] opportunities.” — Mission Zero mentor

    Mentors observed that the project inspired children to consider careers they previously thought were out of reach. Space exploration was no longer a faraway, theoretical idea for the children, but something connected to their everyday lives and their own learning.

    “Some of the people that I was teaching this to felt like becoming an astronaut was really difficult to learn… now it’s not necessarily a distant thing to study.” — Mission Zero mentor

    Mentors also described how the young people gained confidence in their ability to engage with technologies. One mentor described the “self-esteem” and “pride” younger pupils gained from participation. Others talked about the confidence that came with achieving something like having their code run in space and receiving certificates proving they were “space scientists”.

    Our mentors

    None of this would be possible without the hard work and dedication of our mentors. So, as part of our evaluation, we wanted to understand how we can best support them. For Mission Space Lab, that took the form of assessing the new guidance that we published this year and that sits alongside the project. When we spoke to mentors, they told us this guide provided clear, step-by-step guidance that enabled the young people to work through the project, and the majority of survey respondents agreed: 89% rated the Mission Space Lab project guide as somewhat or very understandable. 

    We also heard from mentors about the ways they are using Mission Zero in a wider context. Some told us that their schools ran the project as part of space-themed weeks where they used Mission Zero in conversations about space exploration, the Hubble telescope, and learning the names of the stars. Others used Mission Zero across multiple subjects by designing images and holding art competitions based on the design, as well as learning about pixels and animations. 

    A young person at a desk using a computer.

    Additionally, it was a pleasure to hear about young people who had participated in Mission Zero in previous years gaining leadership skills by supporting other young people to complete Mission Zero this year.

    Next steps

    Thank you to all the mentors who provided constructive feedback through surveys and focus groups. We have read and considered every comment and will continue to consider how to improve the experience for mentors and young people. 

    We will publish an in-depth report with the findings of our evaluation later in the year; however, we’ve already made some changes to the programme that will be launching for the 2024/25 Astro Pi challenge and wanted to share these updates with you now.

    Improvements for next year:

    Mission Zero

    • We’re adding a save button to Mission Zero to allow young people to work on this across multiple sessions.
    • We’re adding new code examples to the Mission Zero project guide. These have been selected from team submissions from the 2023/24 challenge.

    Mission Space Lab

    • We’re creating an online testing tool for Mission Space Lab so that it will be easier for teams to test whether or not their code works. It will feature new data and images captured from the ISS in spring 2024.

    We hope that all the young people and mentors who participated in last year’s Astro Pi challenge enjoyed the experience and learnt a lot. With the exciting updates we’re working on for the 2024/25 Astro Pi challenge, we hope to see even more young people participate and share their creative projects next year.

    Project launch dates

    • 16 September 2024: Mission Zero and Mission Space Lab launch
    • 24 February 2025: Mission Space Lab submissions close
    • 24 March 2025: Mission Zero submissions close
    • April – May 2025: Programs run on the International Space Station
    • June 2025: Teams receive certificates 

    Website: LINK

  • New guide on using generative AI for teachers and schools

    New guide on using generative AI for teachers and schools

    Reading Time: 5 minutes

    The world of education is loud with discussions about the uses and risks of generative AI — tools for outputting human-seeming media content such as text, images, audio, and video. In answer, there’s a new practical guide on using generative AI aimed at Computing teachers (and others), written by a group of classroom teachers and researchers at the Raspberry Pi Computing Education Research Centre and Faculty of Education at the University of Cambridge.

    Two educators discuss something at a desktop computer.

    Their new guide is a really useful overview for everyone who wants to:

    • Understand the issues generative AI tools present in the context of education
    • Find out how to help their schools and students navigate them
    • Discover ideas on how to make use of generative AI tools in their teaching

    Since generative AI tools have become publicly available, issues around data privacy and plagiarism are at the front of educators’ minds. At the same time, many educators are coming up with creative ways to use generative AI tools to enhance teaching and learning. The Research Centre’s guide describes the areas where generative AI touches on education, and lays out what schools and teachers can do to use the technology beneficially and help their learners do the same.

    Teaching students about generative AI tools

    It’s widely accepted that AI tools can bring benefits but can also be used in unhelpful or harmful ways. Basic knowledge of how AI and machine learning work is key to being able to get the best from them. The Research Centre’s guide shares recommended educational resources for teaching learners about AI.

    A desktop computer showing the Experience AI homepage.

    One of the recommendations is Experience AI, a set of free classroom resources we’re creating. It includes a set of 6 lessons for providing 11- to 14-year-olds with a foundational understanding of AI systems, as well as a standalone lesson specifically for teaching about large language model-based AI tools, such as ChatGPT and Google Gemini. These materials are for teachers of any specialism, not just for Computing teachers.

    You’ll find that even a brief introduction to how large language models work is likely to make the idea of using these tools to do all their homework much less appealing to students. The guide outlines creative ways you can help students see some of generative AI’s pitfalls, such as asking students to generate outputs and compare them, paying particular attention to inaccuracies in the outputs.

    Generative AI tools and teaching computing

    We’re still learning about the best ways to teach programming to novice learners. Generative AI has the potential to change how young people learn text-based programming, as AI functionality is now integrated into many of the major programming environments, generating example solutions or helping to spot errors.

    A web project in the Code Editor.

    The Research Centre’s guide acknowledges that there’s more work to be done to understand how and when to support learners with programming tasks through generative AI tools. (You can follow our ongoing seminar series on the topic.) In the meantime, you may choose to support established programming pedagogies with generative AI tools, such as prompting an AI chatbot to generate a PRIMM activity on a particular programming concept.
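
    As an illustration, the snippet below is the kind of short program such a generated PRIMM activity might ask students to Predict, Run, Investigate, Modify, and Make with; both the snippet and the concept it targets (a while loop accumulating a total) are our examples, not something the guide prescribes.

    # Illustrative PRIMM 'Predict' snippet: students predict the output
    # before running the code.
    total = 0
    count = 1
    while count <= 5:
        total = total + count
        count = count + 1
    print(total)  # prediction check: this prints 15 (1 + 2 + 3 + 4 + 5)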

    As ethics and the impact of technology play an important part in any good Computing curriculum, the guide also shares ways to use generative AI tools as a focus for your classroom discussions about topics such as bias and inequality.

    Using generative AI tools to support teaching and learning

    Teachers have been using generative AI applications as productivity tools to support their teaching, and the Research Centre’s guide gives several examples you can try out yourself. Examples include creating summaries of textual materials for students, and creating sets of questions on particular topics. As the guide points out, when you use generative AI tools like this, it’s important to always check the accuracy of the generated materials before you give any of them to your students.

    Putting a school-wide policy in place

    Importantly, the Research Centre’s guide highlights the need for a school-wide acceptable use policy (AUP) that informs teachers, other school staff, and students on how they may use generative AI tools. This section of the guide suggests websites that offer sample AUPs that can be used as a starting point for your school. Your AUP should aim to keep users safe, covering e-safety, privacy, and security issues as well as offering guidance on being transparent about the use of generative tools.

    Teachers in discussion at a table.

    It’s not uncommon for schools to look to specialist Computing teachers to act as the experts on questions around the use of digital tools. However, to develop trust in how generative AI tools are used in the school, it’s important to consult as wide a range of stakeholders as possible when creating an AUP.

    A source of support for teachers and schools

    As the Research Centre’s guide recognises, the landscape of AI and our thinking about it might change. In this uncertain context, the document offers a sensible and detailed overview of where we are now in understanding the current impact of generative AI on Computing as a subject, and on education more broadly. The example use cases and thought-provoking next steps on how this technology can be used and what its known risks and concerns are should be helpful for all interested educators and schools.

    I recommend that all Computing teachers read this new guide, and I hope you feel inspired about the key role that you can play in shaping the future of education affected by AI.

    Website: LINK

  • Four key learnings from teaching Experience AI lessons

    Four key learnings from teaching Experience AI lessons

    Reading Time: 4 minutes

    Developed by us and Google DeepMind, Experience AI provides teachers with free resources to help them confidently deliver lessons that inspire and educate young people about artificial intelligence (AI) and the role it could play in their lives.

    Tracy Mayhead is a computer science teacher at Arthur Mellows Village College in Cambridgeshire. She recently taught Experience AI to her KS3 pupils. In this blog post, she shares 4 key learnings from this experience.

    A photo of Tracy Mayhead in a classroom.

    1. Preparation saves time

    The Experience AI lesson plans provided a clear guide on how to structure our lessons.

    Each lesson includes teacher-facing intro videos, a lesson plan, a slide deck, activity worksheets, and student-facing videos that help to introduce each new AI concept. 

    It was handy to know in advance which websites needed unblocking so students could access them. 

    You can find a unit overview on the Experience AI website to get an idea of what is included in each lesson.

    “My favourite bit was making my own model, and choosing the training data. I enjoyed seeing how the amount of data affected the accuracy of the AI and testing the model.” – Student, Arthur Mellows Village College, UK 

    2. The lessons can be adapted to meet students’ needs

    It was clear from the start that I could adapt the lessons to make them work for me and my students.

    Having estimated times and corresponding slides for activities was beneficial for adjusting the lesson duration. The balance between learning and hands-on tasks was just right.

    A group of students at a desk in a classroom.

    I felt fairly comfortable with my understanding of AI basics. However, teaching it was a learning experience, especially in tailoring the lessons to cater to students with varying knowledge. Their misconceptions sometimes caught me off guard, like their belief that AI is never wrong. Adapting to their needs and expectations was a learning curve. 

    “It has definitely changed my outlook on AI. I went from knowing nothing about it to understanding how it works, why it acts in certain ways, and how to actually create my own AI models and what data I would need for that.” – Student, Arthur Mellows Village College, UK 

    3. Young people are curious about AI and how it works

    My students enjoyed the practical aspects of the lessons, like categorising apples and tomatoes. They found it intriguing how AI could sometimes misidentify objects, sparking discussions on its limitations. They also expressed concerns about AI bias, which these lessons helped raise awareness about. I didn’t always have all the answers, but it was clear they were curious about AI’s implications for their future.

    It’s important to acknowledge that as a teacher you won’t always have all the answers, especially when teaching AI literacy, which is such a new area. This is something that can be explored in class alongside students.

    If you are at all nervous, there is an online course that can help you get started with teaching about AI.

    Video: https://www.youtube.com/watch?v=gScgJf289Cs

    “I learned a lot about AI and the possibilities it holds to better our futures as well as how to train it and problems that may arise when training it.” – Student, Arthur Mellows Village College, UK

    4. Engaging young people with AI is important

    Students are fascinated by AI and they recognise its significance in their future. It is important to equip them with the knowledge and skills to fully engage with AI.

    Experience AI provides a valuable opportunity to explore these concepts and empower students to shape and question the technology that will undoubtedly impact their lives.

    “It has changed my outlook on AI because I now understand it better and feel better equipped to work with AI in my working life.” – Student, Arthur Mellows Village College, UK 

    A group of Year 10 students in a classroom.

    What is your experience of teaching Experience AI lessons?

    We completely agree with Tracy. AI literacy empowers people to critically evaluate AI applications and how they are being used. Our Experience AI resources help to foster critical thinking skills, allowing learners to use AI tools to address challenges they are passionate about. 

    We’re also really interested to learn what misconceptions students have about AI and how teachers are addressing them. If you come across misconceptions that surprise you while you’re teaching with the Experience AI lesson materials, please let us know via the feedback form linked in the final lesson of the six-lesson unit.

    If you would like to teach Experience AI lessons to your students, download the free resources from experience-ai.org

    Website: LINK

  • Empowering undergraduate computer science students to shape generative AI research

    Empowering undergraduate computer science students to shape generative AI research

    Reading Time: 6 minutes

    As use of generative artificial intelligence (or generative AI) tools such as ChatGPT, GitHub Copilot, or Gemini becomes more widespread, educators are thinking carefully about the place of these tools in their classrooms. For undergraduate education, there are concerns about the role of generative AI tools in supporting teaching and assessment practices. For undergraduate computer science (CS) students, generative AI also has implications for their future career trajectories, as it is likely to be relevant across many fields.

    Dr Stephen MacNeil, Andrew Tran, and Irene Hou (Temple University)

    In a recent seminar in our current series on teaching programming (with or without AI), we were delighted to be joined by Dr Stephen MacNeil, Andrew Tran, and Irene Hou from Temple University. Their talk showcased several research projects involving generative AI in undergraduate education, and explored how undergraduate research projects can create agency for students in navigating the implications of generative AI in their professional lives.

    Differing perceptions of generative AI

    Stephen began by discussing the media coverage around generative AI. He highlighted the binary distinction between media representations of generative AI as signalling the end of higher education — including programming in CS courses — and other representations that highlight the issues that using generative AI will solve for educators, such as improving access to high-quality help (specifically, virtual assistance) or personalised learning experiences.

    Students sitting in a lecture at a university.

    As part of a recent ITiCSE working group, Stephen and colleagues conducted a survey of undergraduate CS students and educators and found conflicting views about the perceived benefits and drawbacks of generative AI in computing education. Despite this divide, most CS educators reported that they were planning to incorporate generative AI tools into their courses. Conflicting views were also noted between students and educators on what is allowed in terms of generative AI tools and whether their universities had clear policies around their use.

    The role of generative AI tools in students’ help-seeking

    There is growing interest in how undergraduate CS students are using generative AI tools. Irene presented a study in which her team explored the effect of generative AI on undergraduate CS students’ help-seeking preferences. Help-seeking can be understood as any actions or strategies undertaken by students to receive assistance when encountering problems. Help-seeking is an important part of the learning process, as it requires metacognitive awareness to understand that a problem exists that requires external help. Previous research has indicated that instructors, teaching assistants, student peers, and online resources (such as YouTube and Stack Overflow) can assist CS students. However, as generative AI tools are now widely available to assist in some tasks (such as debugging code), Irene and her team wanted to understand which resources students valued most, and which factors influenced their preferences. Their study consisted of a survey of 47 students, and follow-up interviews with 8 additional students. 

    Undergraduate CS student use of help-seeking resources

    Responding to the survey, students stated that they used online searches or support from friends/peers more frequently than two generative AI tools, ChatGPT and GitHub Copilot; however, Irene indicated that as data collection took place at the beginning of summer 2023, it is possible that students were not familiar with these tools or had not used them yet. In terms of students’ experiences in seeking help, students found online searches and ChatGPT were faster and more convenient, though they felt these resources led to less trustworthy or lower-quality support than seeking help from instructors or teaching assistants.

    Two undergraduate students are seated at a desk, collaborating on a computing task.

    Some students felt more comfortable seeking help from ChatGPT than peers as there were fewer social pressures. Comparing generative AI tools and online searches, one student highlighted that unlike Stack Overflow, solutions generated using ChatGPT and GitHub Copilot could not be verified by experts or other users. Students who received the most value from using ChatGPT in seeking help either (i) prompted the model effectively when requesting help or (ii) viewed ChatGPT as a search engine or comprehensive resource that could point them in the right direction. Irene cautioned that some students struggled to use generative AI tools effectively as they had limited understanding of how to write effective prompts.

    Using generative AI tools to produce code explanations

    Andrew presented a study where the usefulness of different types of code explanations generated by a large language model was evaluated by students in a web software development course. Based on Likert scale data, they found that line-by-line explanations were less useful for students than high-level summary or concept explanations, but that line-by-line explanations were most popular. They also found that explanations were less useful when students already knew what the code did. Andrew and his team then qualitatively analysed code explanations that had been given a low rating and found they were overly detailed (i.e. focusing on superfluous elements of the code), the explanation given was the wrong type, or the explanation mixed code with explanatory text. Despite the flaws of some explanations, they concluded that students found explanations relevant and useful to their learning.

    Perceived usefulness of code explanation types
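
    To make the distinction concrete, here is an illustrative contrast between the two explanation types for a small snippet; the example is ours, not one of the explanations evaluated in the study.

    # Illustrative example of the two explanation types (not from the study).

    # Line-by-line explanation (one comment per statement):
    def count_evens(numbers):   # define a function that takes a list of numbers
        count = 0               # start a counter at zero
        for n in numbers:       # look at each number in turn
            if n % 2 == 0:      # check whether the number is even
                count += 1      # if it is, add one to the counter
        return count            # report how many even numbers were found

    # High-level summary explanation:
    # "count_evens counts how many of the numbers in the given list are even
    # and returns that count."

    print(count_evens([1, 2, 3, 4]))  # prints 2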

    Using generative AI tools to create multiple choice questions

    In a separate study, Andrew and his team investigated the use of ChatGPT to generate novel multiple choice questions for computing courses. The researchers prompted two models, GPT-3 and GPT-4, with example question stems to generate correct answers and distractors (incorrect but plausible choices). Across two data sets of example questions, GPT-4 significantly outperformed GPT-3 in generating the correct answer (75.3% and 90% vs 30.8% and 36.7% of all cases). GPT-3 performed less well at providing the correct answer when faced with negatively worded questions. Both models generated correct answers as distractors across both sets of example questions (GPT-3: 11.1% and 10% of cases; GPT-4: 9.9% and 17.8%). They concluded that educators would still need to verify whether answers were correct and distractors were appropriate.
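
    For illustration only, and not an actual model output from the study, a generated item of the kind an educator would need to check might look like this:

    import random

    # Hypothetical generated multiple choice question, showing the kind of flaw
    # educators must catch: here one distractor duplicates the correct answer.
    generated_question = {
        "stem": "What value does len([10, 20, 30]) return in Python?",
        "correct_answer": "3",
        "distractors": ["2", "4", "3"],
    }

    # Shuffle the options together and present the question.
    options = [generated_question["correct_answer"]] + generated_question["distractors"]
    random.shuffle(options)
    print(generated_question["stem"])
    for option in options:
        print("-", option)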

    An undergraduate student is raising his hand up during a lecture at a university.

    Undergraduate students shaping the direction of generative AI research

    Given students’ concerns about generative AI and its implications for the world of work, the seminar ended with a hopeful message, highlighting how undergraduate students are proactively conducting their own research and shaping the direction of generative AI research in computer science education. Stephen concluded the seminar by celebrating the undergraduate students who are undertaking these research projects.

    You can watch the seminar here:

    https://www.youtube.com/watch?v=Pq-d6wipGRQ

    If you are interested in learning more about Stephen’s work on generative AI, you can read about how undergraduate students used generative AI tools to create analogies for recursion. If you would like to experiment with using generative AI tools to assist with debugging, you could try using Gemini, ChatGPT, or Copilot.

    Join our next seminar

    Our current seminar series is on teaching programming with or without AI. 

    In our next seminar, on 16 July from 17:00 to 18:30 BST, we welcome Laurie Gale (Raspberry Pi Computing Education Research Centre, University of Cambridge), who will discuss how to teach debugging to secondary school students. To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    The schedule of our upcoming seminars is available online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK