Schlagwort: AI literacy

  • Why kids still need to learn to code in the age of AI 

    Reading Time: 3 minutes

    Today we’re publishing a position paper setting out five arguments for why we think that kids still need to learn to code in the age of artificial intelligence.

    A whimsical cartoon of someone struggling with vibe coding at a desktop computer and a second person with a superhero cape and a t-shirt saying 'programmer' coming to their rescue.
    Generated using ChatGPT.

    Just like every wave of technological innovation that has come before, the advances in artificial intelligence (AI) are raising profound questions about the future of human work. History teaches us that technology has the potential to both automate and augment human effort, destroying some jobs and creating new ones. The only thing we know for sure is that it is impossible to predict the precise nature and pace of the changes that are coming. 

    One of the fastest-moving applications of generative AI technologies is code generation. What started as the coding equivalent of autocomplete has quickly progressed to tools that can generate increasingly complex code from natural language prompts. 

    This has given birth to the notion of “vibe-coding” and led some commentators to predict the end of the software development industry as we know it. It shouldn’t be a surprise then that there is a vigorous debate about whether kids still need to learn to code. 

    In the position paper we put forward five arguments for why we think the answer is an unequivocal yes.

    We need humans who are skilled programmers 

    First, we argue that even in a world where AI can generate code, we need skilled human programmers who can think critically, solve problems, and make ethical decisions. The large language models that underpin these tools are probabilistic systems designed to provide statistically acceptable outputs and, as any skilled software engineer will tell you, simply writing more code faster isn’t necessarily a good thing. 

    Learning to code is an essential part of learning to program

    Learning to code is the most effective way we know for a young person to develop the mental models and fluency to become a skilled human programmer. The hard cognitive work of reading, modifying, writing, explaining, and testing code is precisely how young people develop a deep understanding of programming and computational thinking. 

    Learning to code will open up even more opportunities in the age of AI 

    While there’s no doubt that AI is going to reshape the labour market, the evidence from history suggests that it will increase the reach of programming and computational approaches across the economy and into new domains, creating demand for humans who are skilled programmers. We also argue that coding is no longer just for software engineers: it’s becoming a core skill that enables people to work effectively and think critically in a world shaped by intelligent machines. From healthcare to agriculture, we are already seeing demand for people who can combine programming with domain-specific skills and craft knowledge. 

    Coding is a literacy that helps young people have agency in a digital world

    Alongside the arguments for coding as a route to opening up economic opportunities, we argue that coding and programming give young people a way to express themselves, to learn, and to make sense of the world. 

    And perhaps most importantly, that learning to code is about power. Providing young people with a solid grounding in computational literacy, developed through coding, helps ensure that they have agency. Without it, they risk being manipulated by systems they don’t understand. As Rushkoff said: “Program, or be programmed”.  

    The kids who learn to code will shape the future

    Finally, we argue that the power to create with technology is already concentrated in too small and homogenous a group of people. We need to open up the opportunity to learn to code to all young people because it will help us mobilise the full potential of human talent, will lead to more inclusive and effective digital solutions to the big global challenges we face, and will help ensure that everyone can share in the societal and economic benefits of technological progress. 

    The work we need to do 

    We end the paper with a call to action for all of us working in education. We need to challenge the false narrative that AI is removing the need for kids to learn to code, and redouble our efforts to ensure that all young people are equipped to take advantage of the opportunities in a world where AI is ubiquitous.

    You can read the full paper here:


    The cartoon image for this blog was created using ChatGPT-4o, which was prompted to produce a “whimsical cartoon that expresses some of the key ideas in the position paper”. It took several iterations.

    Website: LINK

  • UNESCO’s International Day of Education 2025: AI and the future of education

    Reading Time: 6 minutes

    Recently, our Chief Learning Officer Rachel Arthur and I had the opportunity to attend UNESCO’s International Day of Education 2025, which focused on the role of education in helping people “understand and steer AI to better ensure that they retain control over this new class of technology and are able to direct it towards desired objectives that respect human rights and advance progress toward the Sustainable Development Goals”.

    How teachers continue to play a vital role in the future of education

    Throughout the event, a clear message from UNESCO was that teachers have a very important role to play in the future of education systems, regardless of the advances in technology — a message I find very reassuring. However, as with any good-quality debate, the sessions also reflected a range of other opinions and approaches, which should be listened to and discussed too. 

    With this in mind, I was interested to hear a talk by a school leader from England who is piloting the first “teacherless” classroom. They are trialling a programme with twenty Year 10 students (ages 14–15), using an AI tool developed in-house. This tool is trained on eight existing learning platforms, pulling content and tailoring the learning experience based on regular assessments. The students work independently using an AI tool in the morning, supported by a learning mentor in the classroom, while afternoons focus on developing “softer skills”. The school believes this approach will allow students to complete their GCSE exams in just one year instead of two, seeing it as a solution to the years of lost learning caused by lockdowns during the coronavirus pandemic.

    Whilst they reported early success with this approach, what occurred to me during the talk was the question of how we can decide whether this approach is the right one. The results might sound attractive to school leaders, but do we need a more rounded view of what education should look like? Whatever your views on the purpose of schools, I suspect most people would agree that they serve a much greater purpose than just achieving the top results. 

    Whilst AI tools may be able to provide personalised learning experiences, it is crucial to consider the role of teachers in young people’s education. If we listed the skills required for a teacher to do their job effectively, I believe we would all reach the same conclusion: teachers play a pivotal role in a young person’s life — one that definitely goes beyond getting the best exam results. According to the Education Endowment Foundation, high-quality teaching is the most important lever schools have to improve pupil outcomes.

    “Quality education demands quality educators” – Farida Shaheed, United Nations Special Rapporteur on the Right to Education

    Also, at this stage in AI adoption, can we be sure that this use of AI tools isn’t disadvantageous to any students? We know that machine learning models can generate biased results, but I’m not aware of research showing that these systems are fair to all students and do not disadvantage any demographic. An argument levelled against this point is that teachers can also be biased. Aside from the fact that systems have a potentially much larger impact on more students than any individual teacher, I worry that this argument leads to us accepting machine bias, rather than expecting the highest of standards. It is essential that providers of any educational software that processes student data adhere to the principles of fairness, accountability, transparency, privacy, and security (FATPS).

    How can the agency of teachers be cultivated in AI adoption?

    We are undeniably at a very early stage of a changing education landscape because of AI, and an important question is how teachers can be supported. 

    “Education has a foundational role to play in helping individuals and groups determine what tasks should be outsourced to AI and what tasks need to remain firmly in human hands.” – UNESCO 

    I was delighted to have been invited to be part of a panel at the event discussing how the agency of teachers can be cultivated in AI adoption. The panel consisted of people with different views and expertise, but importantly, included a classroom teacher, emphasising the importance of listening to educators and not making decisions on their behalf without them. As someone who works primarily on AI literacy education, my talk was centred around my belief that AI literacy education for teachers is of paramount importance. 

    Having a basic understanding of how data-driven systems work will empower teachers to think critically and become discerning users, making conscious choices about which tools to use and for what purpose. 

    For example, while attending the Bett education technology exhibition recently, I was struck by the prevalence of education products that included the use of AI. With ever more options available, we need teachers to be able to make informed choices about which products will benefit and not harm their students. 

    “Teachers urgently need to be empowered to better understand the technical, ethical and pedagogical dimensions of AI.” – Stefania Giannini, Assistant Director-General for Education, UNESCO, AI competency framework for teachers

    A very interesting paper released recently showed that individuals with lower AI literacy levels are more receptive towards AI-powered products and services. In short, people with higher literacy levels are more aware of the capabilities and limitations of AI systems. This doesn’t necessarily mean that people with higher AI literacy levels see all AI tools as ‘bad’; rather, they may be better able to think critically about the tools and make informed choices about their use. 

    UN Special Rapporteur highlights urgent education challenges

    For me, the most powerful talk of the day came from Farida Shaheed, the United Nations Special Rapporteur on the Right to Education. I would urge anyone to listen to it (a recording is available on YouTube — the talk begins around 2:16:00). 

    The talk included many facts that helped to frame some of the challenges we are facing. Ms Shaheed stated that “29% of all schools lack access to basic drinking water, without which education is not possible”. This is a sobering thought, particularly when there is a growing narrative that AI systems have the potential to democratise education. 

    When speaking about the AI tools being developed for education, Ms Shaheed questioned who the tools are for: “It’s telling that [so very few edtech tools] are developed for teachers. […] Is this just because teachers are a far smaller client base or is it a desire to automate teachers out of the equation?”

    I’m not sure if I know the answer to this question, but it speaks to my worry that the motivation for tech development does not prioritise taking a human-centred approach. We have to remember that as consumers, we do have more power than we think. If we do not want a future where AI tools are replacing teachers, then we need to make sure that there is not a demand for those tools. 

    The conference was a fantastic event to be part of, as it was an opportunity to listen to such a diverse range of perspectives. Certainly, we are facing challenges, but equally, it is both reassuring and exciting to know that so many people across the globe are working together to achieve the best possible outcomes for future generations. Ms Shaheed’s concluding message resonated strongly with me:

    “[Share good practices], so we can all move together in a co-creative process that is inclusive of everybody and does not leave anyone behind.” 

    As always, we’d love to hear your views — you can contact us here.

    Website: LINK

  • The need to invest in AI skills in schools

    Reading Time: 6 minutes

    Earlier this week, the UK Government published its AI Opportunities Action Plan, which sets out an ambitious vision to maintain the UK’s position as a global leader in artificial intelligence. 

    Whether you’re from the UK or not, it’s a good read, setting out the opportunities and challenges facing any country that aspires to lead the world in the development and application of AI technologies. 

    In terms of skills, the Action Plan highlights the need for the UK to train tens of thousands more AI professionals by 2030 and sets out important goals to expand education pathways into AI, invest in new undergraduate and master’s scholarships, tackle the lack of diversity in the sector, and ensure that the lifelong skills agenda focuses on AI skills. 

    Photo of a group of young people working through some Experience AI content.

    This is all very important, but the Action Plan fails to mention what I think is one of the most important investments we need to make, which is in schools. 

    “Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.”

    While reading the section of the Action Plan that dealt with AI skills, I was reminded of this quote attributed to Bill Gates, which was adapted from Roy Amara’s law of technology. We tend to overestimate what we can achieve in the short term and underestimate what we can achieve in the long term. 

    In focusing on the immediate AI gold rush, there is a risk that the government overlooks the investments we need to make right now in schools, which will yield huge returns — for individuals, communities, and economies — over the long term. Realising the full potential of a future where AI technologies are ubiquitous requires genuinely long-term thinking, which isn’t always easy for political systems that are designed around short-term results. 

    Photo focused on a young person working on a computer in a classroom.

    But what are those investments? The Action Plan rightly points out that the first step for the government is to accurately assess the size of the skills gap. As part of that work, we need to figure out what needs to change in the school system to build a genuinely diverse and broad pipeline of young people with AI skills. The good news is that we’ve already made a lot of progress. 

    AI literacy

    Over the past three years, the Raspberry Pi Foundation and our colleagues in the Raspberry Pi Computing Education Research Centre at the University of Cambridge have been working to understand and define what AI literacy means. That led us to create a research-informed model for AI literacy that unpacks the concepts and knowledge that constitute a foundational understanding of AI. 

    In partnership with one of the leading UK-based AI companies, Google DeepMind, we used that model to create Experience AI. This suite of classroom resources, teacher professional development, and hands-on practical activities enables non-specialist teachers to deliver engaging lessons that help young people build that foundational understanding of AI technologies. 

    We’ve seen huge demand from UK schools already, with thousands of lessons taught, and we’re delighted to be working with Parent Zone to support a wider rollout in the UK, along with free teacher professional development.

    CEO Philip Colligan and Prime Minister Keir Starmer at the UK launch of Experience AI.

    With the generous support of Google.org, we are working with a global network of education partners — from Nigeria to Nepal — to localise and translate these resources, and deliver locally organised teacher professional development. With over 1 million young people reached already, Experience AI can plausibly claim to be the most widely used AI literacy curriculum in the world, and we’re improving it all the time. 

    All of the materials are available for anyone to use and can be found on the Experience AI website.

    There is no AI without CS

    With the CEO of GitHub claiming that it won’t be long before 80% of code is written by AI, it’s perhaps not surprising that some people are questioning whether we still need to teach kids how to code.

    I’ll have much more to say on this in a future blog post, but the short answer is that computer science and programming are set to become more — not less — important in the age of AI. This is particularly important if we want to tackle the lack of diversity in the tech sector and ensure that young people from all backgrounds have the opportunity to shape the AI-enabled future that they will be living in. 

    Close up of two young people working at a computer.

    The simple truth is that there is no artificial intelligence without computer science. The rapid advances in AI are likely to increase the range of problems that can be solved by technology, creating demand for more complex software, which in turn will create demand for more programmers with increasingly sophisticated and complex skills. 

    That’s why we’ve set ourselves the ambition that we will inspire 10 million more young people to learn how to get creative with technology over the next 10 years through Code Club. 

    Curriculum reform 

    But we also need to think about what needs to change in the curriculum to ensure that schools are equipping young people with the skills and knowledge they need to thrive in an AI-powered world. 

    That will mean changes to the computer science curriculum, providing different pathways that reflect young people’s interests and passions, but ensuring that every child leaves school with a qualification in computer science or applied digital skills. 

    It’s not just computer science courses. We need to modernise mathematics and figure out what a data science curriculum looks like (and where it fits). We also need to recognise that AI skills are just as relevant to biology, geography, and languages as they are to computer science. 

    A teacher assisting a young person with a coding project.

    To be clear, I am not talking about how AI technologies will save teachers time, transform assessments, or be used by students to write essays. I am talking about the fundamentals of the subjects themselves and how AI technologies are revolutionising the sciences and humanities in practice in the real world. 

    These are all areas where the Raspberry Pi Foundation is engaged in original research and experimentation. Stay tuned. 

    Supporting teachers

    All of this needs to be underpinned by a commitment to supporting teachers, including through funding and time to engage in meaningful professional development. This is probably the biggest challenge for policy makers at a time when budgets are under so much pressure. 

    For any nation to plausibly claim that it has an Action Plan to be an AI superpower, it needs to recognise the importance of making the long-term investment in supporting our teachers to develop the skills and confidence to teach students about AI and the role that it will play in their lives. 

    I’d love to hear what you think and if you want to get involved, please get in touch.

    Website: LINK

  • Ocean Prompting Process: How to get the results you want from an LLM

    Reading Time: 5 minutes

    Have you heard of ChatGPT, Gemini, or Claude, but haven’t tried any of them yourself? Navigating the world of large language models (LLMs) might feel a bit daunting. However, with the right approach, these tools can really enhance your teaching and make classroom admin and planning easier and quicker. 

    That’s where the OCEAN prompting process comes in: it’s a straightforward framework designed to work with any LLM, helping you reliably get the results you want. 

    The great thing about the OCEAN process is that it takes the guesswork out of using LLMs. It helps you move past that ‘blank page syndrome’ — that moment when you can ask the model anything but aren’t sure where to start. By focusing on clear objectives and guiding the model with the right context, you can generate content that is spot on for your needs, every single time.

    5 ways to make LLMs work for you using the OCEAN prompting process

    OCEAN’s name is an acronym: objective, context, examples, assess, negotiate — so let’s begin at the top.

    1. Define your objective

    Think of this as setting a clear goal for your interaction with the LLM. A well-defined objective ensures that the responses you get are focused and relevant.

    Maybe you need to:

    • Draft an email to parents about an upcoming school event
    • Create a beginner’s guide for a new Scratch project
    • Come up with engaging quiz questions for your next science lesson

    By knowing exactly what you want, you can give the LLM clear directions to follow, turning a broad idea into a focused task.

    2. Provide some context 

    This is where you give the LLM the background information it needs to deliver the right kind of response. Think of it as setting the scene and providing some of the important information about why, and for whom, you are making the document.

    You might include:

    • The length of the document you need
    • Who your audience is — their age, profession, or interests
    • The tone and style you’re after, whether that’s formal, informal, or somewhere in between

    All of this helps the LLM include the bigger picture in its analysis and tailor its responses to suit your needs.

    3. Include examples

    By showing the LLM what you’re aiming for, you make it easier for the model to deliver the kind of output you want. This is called one-shot, few-shot, or many-shot prompting, depending on how many examples you provide.

    You can:

    • Include URL links 
    • Upload documents and images (some LLMs don’t have this feature)
    • Copy and paste other text examples into your prompt

    Without any examples at all (zero-shot prompting), you’ll still get a response, but it might not be exactly what you had in mind. Providing examples is like giving a recipe to follow that includes pictures of the desired result, rather than just vague instructions — it helps to ensure the final product comes out the way you want it.
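    The objective, context, and examples steps above amount to assembling a structured prompt. Here’s a minimal sketch in Python of what that assembly might look like; the function and field names are purely illustrative and not part of any particular LLM platform’s API:

    ```python
    # Illustrative sketch: assemble an OCEAN-style prompt from its parts.
    # Leaving `examples` empty gives a zero-shot prompt; adding one or more
    # examples gives one-shot or few-shot prompting.

    def build_prompt(objective, context, examples=None):
        """Combine objective, context, and optional examples into one prompt string."""
        parts = [
            f"Objective: {objective}",
            f"Context: {context}",
        ]
        for i, example in enumerate(examples or [], start=1):
            parts.append(f"Example {i}:\n{example}")
        return "\n\n".join(parts)

    prompt = build_prompt(
        objective="Draft a short email to parents about the school science fair.",
        context=(
            "Audience: parents of Year 7 students; tone: warm and informal; "
            "length: under 150 words."
        ),
        examples=["Dear parents, we're delighted to invite you to..."],
    )
    print(prompt)
    ```

    Whatever tool you use, the same idea applies: state the goal first, then the background, then show the model what a good answer looks like.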

    4. Assess the LLM’s response

    This is where you check whether what you’ve got aligns with your original goal and meets your standards.

    Keep an eye out for:

    • Hallucinations: incorrect information that’s presented as fact
    • Misunderstandings: did the LLM interpret your request correctly?
    • Bias: make sure the output is fair and aligned with diversity and inclusion principles

    A good assessment ensures that the LLM’s response is accurate and useful. Remember, LLMs don’t make decisions — they just follow instructions, so it’s up to you to guide them. This brings us neatly to the next step: negotiate the results.

    5. Negotiate the results

    If the first response isn’t quite right, don’t worry — that’s where negotiation comes in. You should give the LLM frank and clear feedback and tweak the output until it’s just right. (Don’t worry, it doesn’t have any feelings to be hurt!) 

    When you negotiate, tell the LLM if it made any mistakes, and what you did and didn’t like in the output. Tell it to ‘Add a bit at the end about …’ or ‘Stop using the word “delve” all the time!’ 

    How to get the tone of the document just right

    Another excellent tip is to use descriptors for the desired tone of the document in your negotiations with the LLM, such as, ‘Make that output slightly more casual.’

    In this way, you can guide the LLM to be:

    • Approachable: the language will be warm and friendly, making the content welcoming and easy to understand
    • Casual: expect laid-back, informal language that feels more like a chat than a formal document
    • Concise: the response will be brief and straight to the point, cutting out any fluff and focusing on the essentials
    • Conversational: the tone will be natural and relaxed, as if you’re having a friendly conversation
    • Educational: the language will be clear and instructive, with step-by-step explanations and helpful details
    • Formal: the response will be polished and professional, using structured language and avoiding slang
    • Professional: the tone will be business-like and precise, with industry-specific terms and a focus on clarity

    Remember: LLMs have no idea what their output says or means; they are literally just very powerful autocomplete tools, just like those in text messaging apps. It’s up to you, the human, to make sure they are on the right track. 

    Don’t forget the human edit 

    Even after you’ve refined the LLM’s response, it’s important to do a final human edit. This is your chance to make sure everything’s perfect, checking for accuracy, clarity, and anything the LLM might have missed. LLMs are great tools, but they don’t catch everything, so your final touch ensures the content is just right.

    At a certain point it’s also simpler and less time-consuming for you to alter individual words in the output, or to use your unique expertise to massage the language for just the right tone and clarity, than to go back to the LLM for a further iteration. 

    Ready to dive in? 

    Now it’s time to put the OCEAN process into action! Log in to your preferred LLM platform, take a simple prompt you’ve used before, and see how the process improves the output. Then share your findings with your colleagues. This hands-on approach will help you see the difference the OCEAN method can make!

    Sign up for a free account at one of these platforms:

    • ChatGPT (chat.openai.com)
    • Gemini (gemini.google.com)

    By embracing the OCEAN prompting process, you can quickly and easily make LLMs a valuable part of your teaching toolkit. The process helps you get the most out of these powerful tools, while keeping things ethical, fair, and effective.

    If you’re excited about using AI in your classroom preparation, and want to build more confidence in integrating it responsibly, we’ve got great news for you. You can sign up for our totally free online course on edX called ‘Teach Teens Computing: Understanding AI for Educators’ (helloworld.cc/ai-for-educators). In this course, you’ll learn all about the OCEAN process and how to better integrate generative AI into your teaching practice. It’s a fantastic way to ensure you’re using these technologies responsibly and ethically while making the most of what they have to offer. Join us and take your AI skills to the next level!

    A version of this article also appears in Hello World issue 25.

    Website: LINK

  • Exploring how well Experience AI maps to UNESCO’s AI competency framework for students

    Reading Time: 9 minutes

    During this year’s annual Digital Learning Week conference in September, UNESCO launched their AI competency frameworks for students and teachers. 

    What is the AI competency framework for students? 

    The UNESCO competency framework for students serves as a guide for education systems across the world to help students develop the necessary skills in AI literacy and to build inclusive, just, and sustainable futures in this new technological era.

    It is an exciting document because, as well as being comprehensive, it’s the first global framework of its kind in the area of AI education.

    The framework serves three specific purposes:

    • It offers a guide on essential AI concepts and skills for students, which can help shape AI education policies or programs at schools
    • It aims to shape students’ values, knowledge, and skills so they can understand AI critically and ethically
    • It suggests a flexible plan for when and how students should learn about AI as they progress through different school grades

    The framework is a starting point for policy-makers, curriculum developers, school leaders, teachers, and educational experts to look at how it could apply in their local contexts. 

    It is not possible to create a single curriculum suitable for all national and local contexts, but the framework flags the competencies students across the world need in order to acquire the values, knowledge, and skills to examine and understand AI critically from a holistic perspective.

    How does Experience AI compare with the framework?

    A group of researchers and curriculum developers from the Raspberry Pi Foundation with a focus on AI literacy attended the conference, and afterwards we tasked ourselves with taking a deep dive into the student framework and mapping our Experience AI resources to it. Our aims were to:

    • Identify how the framework aligns with Experience AI
    • See how the framework aligns with our research-informed design principles
    • Identify gaps or next steps

    Experience AI is a free educational programme that offers cutting-edge resources on artificial intelligence and machine learning for teachers and their students aged 11 to 14. Developed by the Raspberry Pi Foundation in collaboration with Google DeepMind, the programme provides everything that teachers need to confidently deliver lessons that will teach, inspire, and engage young people about AI and the role that it could play in their lives. The current curriculum offering includes a ‘Foundations of AI’ 6-lesson unit, 2 standalone lessons (‘AI and ecosystems’ and ‘Large language models’), and the 3 newly released AI safety resources. 

    Working through each lesson objective in the Experience AI offering, we compared them with each curricular goal to see where they overlapped. We have made this mapping publicly available so that you can see it for yourself: Experience AI – UNESCO AI Competency framework students – learning objective mapping (rpf.io/unesco-mapping)

    The first thing we discovered was that the mapping of the objectives was not 1:1. For example, when we looked at a learning objective, we often felt that it covered more than one curricular goal from the framework. That’s not to say that the learning objective fully met each curricular goal, rather that it covered elements of the goal and, in turn, the student competency. 

    Once we had completed the mapping process, we analysed the results by totalling the number of objectives that had been mapped against each competency aspect and level within the framework.
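    The tallying step described above is essentially a count of how many learning objectives touch each framework category. The sketch below illustrates the idea with made-up mapping data; the real mapping lives in the Foundation’s published spreadsheet:

    ```python
    # Illustrative tally of learning objectives against framework categories.
    # The objectives and category assignments here are invented for
    # demonstration only. Note the mapping is not 1:1 - one objective can
    # touch several curricular goals.
    from collections import Counter

    # Each entry: (learning objective, [framework categories it touches]).
    mapping = [
        ("Describe how an ML model is trained",
         ["ML techniques and applications"]),
        ("Explain who is responsible for an AI system",
         ["Human-centred mindset", "Ethics of AI"]),
        ("Identify bias in a training dataset",
         ["Ethics of AI", "ML techniques and applications"]),
    ]

    tally = Counter(cat for _, cats in mapping for cat in cats)
    for category, count in tally.most_common():
        print(f"{category}: {count}")
    ```

    Summing per category and per level in this way gives the overall picture of where a resource sits against the framework.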

    This provided us with an overall picture of where our resources are positioned against the framework. Whilst the majority of the objectives for all of the resources are in the ‘Human-centred mindset’ category, the analysis showed that there is still a relatively even spread of objectives in the other three categories (Ethics of AI, ML techniques and applications, and AI system design). 

    As the current resource offering is targeted at the entry level to AI literacy, it is unsurprising to see that the majority of the objectives were at the level of ‘Understand’. It was, however, interesting to see how many objectives were also at the ‘Apply’ level. 

    It is encouraging to see that the different resources from Experience AI map to different competencies in the framework. For example, the 6-lesson foundations unit aims to give students a basic understanding of how AI systems work and the data-driven approach to problem solving. In contrast, the AI safety resources focus more on the principles of Fairness, Accountability, Transparency, Privacy, and Security (FATPS), most of which fall more heavily under the ethics of AI and human-centred mindset categories of the competency framework. 

    What did we learn from the process? 

    Our principles align 

We built the Experience AI resources on design principles based on the knowledge curated by Jane Waite and the Foundation’s researchers. One of our aims for the mapping process was to see whether the principles that underpin the UNESCO competency framework align with our own.

    Avoiding anthropomorphism 

    Anthropomorphism refers to the concept of attributing human characteristics to objects or living beings that aren’t human. For reasons outlined in the blog I previously wrote on the issue, a key design principle for Experience AI is to avoid anthropomorphism at all costs. In our resources, we are particularly careful with the language and images that we use. Putting the human in the process is a key way in which we can remind students that it is humans who design and are responsible for AI systems. 

    Young people use computers in a classroom.

    It was reassuring to see that the UNESCO framework has many curricular goals that align closely to this, for example:

    • Foster an understanding that AI is human-led
    • Facilitate an understanding on the necessity of exercising sufficient human control over AI
    • Nurture critical thinking on the dynamic relationship between human agency and machine agency

    SEAME

    The SEAME framework created by Paul Curzon and Jane Waite offers a way for teachers, resource developers, and researchers to talk about the focus of AI learning activities by separating them into four layers: Social and Ethical (SE), Application (A), Models (M), and Engines (E). 

    The SEAME model and the UNESCO AI competency framework take two different approaches to categorising AI education — SEAME describes levels of abstraction for conceptual learning about AI systems, whereas the competency framework separates concepts into strands with progression. We found that although the alignment between the frameworks is not direct, the same core AI and machine learning concepts are broadly covered across both. 

    Computational thinking 2.0 (CT2.0)

    The concept of computational thinking 2.0 (a data-driven approach) stems from research by Professor Matti Tedre and Dr Henriikka Vartiainen from the University of Eastern Finland. The essence of this approach establishes AI as a different way to solve problems using computers compared to a more traditional computational thinking approach (a rule-based approach). This does not replace the traditional computational approach, but instead requires students to approach the problem differently when using AI as a tool. 

    An educator points to an image on a student's computer screen.

The UNESCO framework includes many curricular goals that place the data-driven approach at the forefront of problem solving using AI, including:

    • Develop conceptual knowledge on how AI is trained based on data 
    • Develop skills on assessing AI systems’ need for data, algorithms, and computing resources

Where we slightly differ is in our approach to the term ‘algorithm’, which the framework uses regularly, particularly in the Understand and Apply levels. We have chosen to differentiate AI systems from traditional computational thinking approaches by avoiding the term ‘algorithm’ at the foundational stage of AI education. We believe learners need a firm mental model of data-driven systems before they can understand that the Models and Engines layers of the SEAME model refer to algorithms (which would possibly correspond to the Create level of the UNESCO framework). 

    We can identify areas for exploration

    As part of the international expansion of Experience AI, we have been working with partners from across the globe to bring AI literacy education to students in their settings. Part of this process has involved working with our partners to localise the resources, but also to provide training on the concepts covered in Experience AI. During localisation and training, our partners often have lots of queries about the lesson on bias. 

As a result, we decided to see if the mapping taught us anything about this lesson in particular, and if there was any learning we could take from it. On close inspection, we found that the lesson covers two of the three curricular goals for the Understand element of the ‘Ethics of AI’ category (Embodied ethics). 

    Specifically, we felt the lesson:

    • Illustrates dilemmas around AI and identifies the main reasons behind ethical conflicts
    • Facilitates scenario-based understandings of ethical principles on AI and their personal implications

    What we felt isn’t covered in the lesson is:

    • Guide the embodied reflection and internalisation of ethical principles on AI

    Exploring this further, the framework describes this curricular goal as:

    Guide students to understand the implications of ethical principles on AI for their human rights, data privacy, safety, human agency, as well as for equity, inclusion, social justice and environmental sustainability. Guide students to develop embodied comprehension of ethical principles; and offer opportunities to reflect on personal attitudes that can help address ethical challenges (e.g. advocating for inclusive interfaces for AI tools, promoting inclusion in AI and reporting discriminatory biases found in AI tools).

    We realised that this doesn’t mean that the lesson on bias is ineffective or incomplete, but it does help us to think more deeply about the learning objective for the lesson. This may be something we will look to address in future iterations of the foundations unit or even in the development of new resources. What we have identified is a process that we can follow, which will help us with our decision making in the next phases of resource development. 

    How does this inform our next steps?

As part of the analysis of the resources, we created a simple heatmap of how the Experience AI objectives relate to the UNESCO progression levels. As with the bar charts, the heatmap indicated that the majority of the objectives sit within the Understand level of progression, with fewer in Apply, and fewest in Create. As previously mentioned, this is to be expected with the resources being “foundational”. 
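A heatmap of this kind is simply the per-aspect, per-level totals laid out as a grid. As a minimal sketch (the counts below are invented for illustration, not our real figures):

```python
# Invented objective counts per (competency aspect, progression level) —
# illustrative only, not the actual Experience AI analysis results
levels = ["Understand", "Apply", "Create"]
counts = {
    "Human-centred mindset":          {"Understand": 9, "Apply": 11, "Create": 2},
    "Ethics of AI":                   {"Understand": 7, "Apply": 3,  "Create": 1},
    "ML techniques and applications": {"Understand": 8, "Apply": 4,  "Create": 1},
    "AI system design":               {"Understand": 6, "Apply": 2,  "Create": 0},
}

# Render a simple text 'heatmap': rows are aspects, columns are levels
print(f"{'Aspect':34}" + "".join(f"{level:>12}" for level in levels))
for aspect, row in counts.items():
    print(f"{aspect:34}" + "".join(f"{row[level]:>12}" for level in levels))
```

Reading down each column shows the overall skew towards Understand, while reading along a row can surface per-aspect quirks of the kind discussed below.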

The heatmap has, however, helped us to identify some interesting points about our resources that warrant further thought. For example, under the ‘Human-centred mindset’ competency aspect, there are more objectives under Apply than under Understand. For ‘AI system design’, architecture design is the least covered aspect of Apply. 

Identifying these areas for investigation shows, once again, that we can use what we have learnt from the UNESCO framework to guide our decision making.

    What next? 

This mapping process has been a very useful exercise in many ways for those of us working on AI literacy at the Raspberry Pi Foundation. The process of mapping the resources gave us an opportunity to have deep conversations about the learning objectives and question our own understanding of our resources. It was also very satisfying to see that the framework aligns well with our own research-informed design principles, such as the SEAME model and avoiding anthropomorphisation. 

    The mapping process has been a good starting point for us to understand UNESCO’s framework and we’re sure that it will act as a useful tool to help us make decisions around future enhancements to our foundational units and new free educational materials. We’re looking forward to applying what we’ve learnt to our future work! 


  • Impact of Experience AI: Reflections from students and teachers

    Impact of Experience AI: Reflections from students and teachers

    Reading Time: 5 minutes

    “I’ve enjoyed actually learning about what AI is and how it works, because before I thought it was just a scary computer that thinks like a human,” a student learning with Experience AI at King Edward’s School, Bath, UK, told us. 

    This is the essence of what we aim to do with our Experience AI lessons, which demystify artificial intelligence (AI) and machine learning (ML). Through Experience AI, teachers worldwide are empowered to confidently deliver engaging lessons with a suite of resources that inspire and educate 11- to 14-year-olds about AI and the role it could play in their lives.

    “I learned new things and it changed my mindset that AI is going to take over the world.” – Student, Malaysia

    Experience AI students in Malaysia

    Developed by us with Google DeepMind, our first set of Experience AI lesson resources was aimed at a UK audience and launched in April 2023. Next we released tailored versions of the resources for 5 other countries, working in close partnership with organisations in Malaysia, Kenya, Canada, Romania, and India. Thanks to new funding from Google.org, we’re now expanding Experience AI for 16 more countries and creating new resources on AI safety, with the aim of providing leading-edge AI education for more than 2 million young people across Europe, the Middle East, and Africa. 

    In this blog post, you’ll hear directly from students and teachers about the impact the Experience AI lessons have had so far. 

    Case study: Experience AI in Malaysia

    Penang Science Cluster in Malaysia is among the first organisations we’ve partnered with for Experience AI. Speaking to Malaysian students learning with Experience AI, we found that the lessons were often very different from what they had expected. 

    Launch of Experience AI in Malaysia

    “I actually thought it was going to be about boring lectures and not much about AI but more on coding, but we actually got to do a lot of hands-on activities, which are pretty fun. I thought AI was just about robots, but after joining this, I found it could be made into chatbots or could be made into personal helpers.” – Student, Malaysia

    “Actually, I thought AI was mostly related to robots, so I was expecting to learn more about robots when I came to this programme. It widened my perception on AI.” – Student, Malaysia. 

    The Malaysian government actively promotes AI literacy among its citizens, and working with local education authorities, Penang Science Cluster is using Experience AI to train teachers and equip thousands of young people in the state of Penang with the understanding and skills to use AI effectively. 

    “We envision a future where AI education is as fundamental as mathematics education, providing students with the tools they need to thrive in an AI-driven world”, says Aimy Lee, Chief Operating Officer at Penang Science Cluster. “The journey of AI exploration in Malaysia has only just begun, and we’re thrilled to play a part in shaping its trajectory.”

    Giving non-specialist teachers the confidence to introduce AI to students

    Experience AI provides lesson plans, classroom resources, worksheets, hands-on activities, and videos to help teachers introduce a wide range of AI applications and help students understand how they work. The resources are based on research, and because we adapt them to each partner’s country, they are culturally relevant and relatable for students. Any teacher can use the resources in their classroom, whether or not they have a background in computing education. 

    “Our Key Stage 3 Computing students now feel immensely more knowledgeable about the importance and place that AI has in their wider lives. These lessons and activities are engaging and accessible to students and educators alike, whatever their specialism may be.” – Dave Cross, North Liverpool Academy, UK

    “The feedback we’ve received from both teachers and learners has been overwhelmingly positive. They consistently rave about how accessible, fun, and hands-on these resources are. What’s more, the materials are so comprehensive that even non-specialists can deliver them with confidence.” – Storm Rae, The National Museum of Computing, UK

    Experience AI teacher training in Kenya

    “[The lessons] go above and beyond to ensure that students not only grasp the material but also develop a genuine interest and enthusiasm for the subject.” – Teacher, Changamwe Junior School, Mombasa, Kenya

    Sparking debates on bias and the limitations of AI

    When learners gain an understanding of how AI works, it gives them the confidence to discuss areas where the technology doesn’t work well or its output is incorrect. These classroom debates deepen and consolidate their knowledge, and help them to use AI more critically.

    “Students enjoyed the practical aspects of the lessons, like categorising apples and tomatoes. They found it intriguing how AI could sometimes misidentify objects, sparking discussions on its limitations. They also expressed concerns about AI bias, which these lessons helped raise awareness about. I didn’t always have all the answers, but it was clear they were curious about AI’s implications for their future.” – Tracey Mayhead, Arthur Mellows Village College, Peterborough, UK

    Experience AI students in UK

    “The lessons that we trialled took some of the ‘magic’ out of AI and started to give the students an understanding that AI is only as good as the data that is used to build it.” – Jacky Green, Waldegrave School, UK 

    “I have enjoyed learning about how AI is actually programmed, rather than just hearing about how impactful and great it could be.” – Student, King Edward’s School, Bath, UK 

    “It has changed my outlook on AI because now I’ve realised how much AI actually needs human intelligence to be able to do anything.” – Student, Arthur Mellows Village College, Peterborough, UK 

    “I didn’t really know what I wanted to do before this but now knowing more about AI, I probably would consider a future career in AI as I find it really interesting and I really liked learning about it.” – Student, Arthur Mellows Village College, Peterborough, UK 

    If you’d like to get involved with Experience AI as an educator and use our free lesson resources with your class, you can start by visiting experience-ai.org.


  • Experience AI: How research continues to shape the resources

    Experience AI: How research continues to shape the resources

    Reading Time: 5 minutes

    Since we launched the Experience AI learning programme in the UK in April 2023, educators in 130 countries have downloaded Experience AI lesson resources. They estimate reaching over 630,000 young people with the lessons, helping them to understand how AI works and to build the knowledge and confidence to use AI tools responsibly. Just last week, we announced another exciting expansion of Experience AI: thanks to $10 million in funding from Google.org, we will be able to work with local partner organisations to provide research-based AI education to more than 2 million young people across Europe, the Middle East, and Africa.

    Experience AI teacher training in Kenya

    This blog post explains how we use research to continue to shape our Experience AI resources, including the new AI safety resources we are developing. 

    The beginning of Experience AI

    Artificial intelligence (AI) and machine learning (ML) applications are part of our everyday lives — we use them every time we scroll through social media feeds organised by recommender systems or unlock an app with facial recognition. For young people, there is more need than ever to gain the skills and understanding to critically engage with AI technologies. 

    Someone holding a mobile phone that's open on their social media apps folder.

    We wanted to design free lesson resources to help teachers in a wide range of subjects confidently introduce AI and ML to students aged 11 to 14 (Key Stage 3). This led us to develop Experience AI, in collaboration with Google DeepMind, offering materials including lesson plans, slide decks, videos (both teacher- and student-facing), student activities, and assessment questions. 

    SEAME: The research-based framework behind Experience AI

    The Experience AI resources were built on rigorous research from the Raspberry Pi Computing Education Research Centre as well as from other researchers, including those we hosted at our series of seminars on AI and data science education. The Research Centre’s work involved mapping and categorising over 500 resources used to teach AI and ML, and found that the majority were one-off activities, and that very few resources were tailored to a specific age group.

    An example activity in the Experience AI lessons where students learn about bias.

    To analyse the content that existing AI education resources covered, the Centre developed a simple framework called SEAME. The framework gives you an easy way to group concepts, knowledge, and skills related to AI and ML based on whether they focus on social and ethical aspects (SE), applications (A), models (M), or engines (E, i.e. how AI works).

    Through Experience AI, learners also gain an understanding of the models underlying AI applications, and the processes used to train and test ML models.

    An example activity in the Experience AI lessons where students learn about classification.

    Our Experience AI lessons cover all four levels of SEAME and focus on applications of AI that are relatable for young people. They also introduce learners to AI-related issues such as privacy or bias concerns, and the impact of AI on employment. 

    The six foundation lessons of Experience AI

    1. What is AI?: Learners explore the current context of AI and how it is used in the world around them. Looking at the differences between rule-based and data-driven approaches to programming, they consider the benefits and challenges that AI could bring to society. 
    2. How computers learn: Focusing on the role of data-driven models in AI systems, learners are introduced to ML and find out about three common approaches to creating ML models. Finally they explore classification, a specific application of ML.
    3. Bias in, bias out: Students create their own ML model to classify images of apples and tomatoes. They discover that a limited dataset is likely to lead to a flawed ML model. Then they explore how bias can appear in a dataset, resulting in biased predictions produced by an ML model. 
    4. Decision trees: Learners take their first in-depth look at a specific type of ML model: decision trees. They see how different training datasets result in the creation of different ML models, experiencing first-hand what the term ‘data-driven’ means.
    5. Solving problems with ML models: Students are introduced to the AI project lifecycle and use it to create an ML model. They apply a human-focused approach to working on their project, train an ML model, and finally test their model to find out its accuracy.
    6. Model cards and careers: Learners finish the AI project lifecycle by creating a model card to explain their ML model. To complete the unit, they explore a range of AI-related careers, hear from people working in AI research at Google DeepMind, and explore how they might apply AI and ML to their interests. 

    We also offer two additional stand-alone lessons: one on large language models, how they work, and why they’re not always reliable, and the other on the application of AI in ecosystems research, which lets learners explore how AI tools can be used to support animal conservation. 

    New AI safety resources: Empowering learners to be critical users of technology

    We have also been developing a set of resources for educator-led sessions on three topics related to AI safety, funded by Google.org.

    • AI and your data: With the support of this resource, young people reflect on the data they have already provided to AI applications in their daily lives, and think about how the prevalence of AI tools might change the way they protect their data.  
    • Media literacy in the age of AI: This resource highlights the ways AI tools can be used to perpetuate misinformation and how AI applications can help people combat misleading claims.
    • Using generative AI responsibly: With this resource, young people consider their responsibilities when using generative AI, and their expectations of developers who release AI tools. 

    Other research principles behind our free teaching resources 

    As well as using the SEAME framework, we have incorporated a whole host of other research-based concepts in the design principles for the Experience AI resources. For example, we avoid anthropomorphism — that is, words or imagery that can lead learners to wrongly believe that AI applications have sentience or intentions like humans do — and we instead promote the understanding that it’s people who design AI applications and decide how they are used. We also teach about data-driven application design, which is a core concept in computational thinking 2.0.  

    Share your feedback

    We’d love to hear your thoughts and feedback about using the Experience AI resources. Your comments help us to improve the current materials, and to develop future resources. You can tell us what you think using this form.

    And if you’d like to start using the Experience AI resources as an educator, you can download them for free at experience-ai.org.


  • Experience AI at UNESCO’s Digital Learning Week

    Experience AI at UNESCO’s Digital Learning Week

    Reading Time: 5 minutes

    Last week, we were honoured to attend UNESCO’s Digital Learning Week conference to present our free Experience AI resources and how they can help teachers demystify AI for their learners.  

    A group of educators at a UNESCO conference.

    The conference drew a worldwide audience, in person and online, to hear about the work educators and policy makers are doing to support teachers’ use of AI tools in their teaching and learning. Speaker after speaker reiterated that the shared goal of our work is to support learners to become critical consumers and responsible creators of AI systems.

    In this blog, we share how our conference talk demonstrated the use of Experience AI for pursuing this globally shared goal, and how the Experience AI resources align with UNESCO’s newly launched AI competency framework for students.

    Presenting the design principles behind Experience AI

    Our talk about Experience AI, our learning programme developed with Google DeepMind, focused on the research-informed approach we are taking in our resource development. Specifically, we spoke about three key design principles that we embed in the Experience AI resources:

    Firstly, using AI and machine learning to solve problems requires learners and educators to think differently from the way they do in traditional computational thinking, and to use a data-driven approach instead, as laid out in the research around computational thinking 2.0.

    Secondly, every word we use in our teaching about AI is important to help young people form accurate mental models about how AI systems work. In particular, we focused our examples around the need to avoid anthropomorphising language when we describe AI systems. Especially given that some developers produce AI systems with the aim to make them appear human-like in their design and outputs, it’s important that young people understand that AI systems are in fact built and designed by humans.

    Thirdly, we described how we used the SEAME framework, adapted from work by Jane Waite (Raspberry Pi Foundation) and Paul Curzon (Queen Mary University of London), to categorise hundreds of AI education resources and inform the design of our Experience AI resources. The framework offers a common language for educators when assessing the content of resources, and when supporting learners to understand the different aspects of AI systems. 

    By presenting our design principles, we aimed to give educators, policy makers, and attendees from non-governmental organisations practical recommendations and actionable considerations for designing learning materials on AI literacy.   

    How Experience AI aligns with UNESCO’s new AI competency framework for students

    At Digital Learning Week, UNESCO launched two AI competency frameworks:

    • A framework for students, intended to help teachers around the world with integrating AI tools in activities to engage their learners
    • A framework for teachers, “defining the knowledge, skills, and values teachers must master in the age of AI”

    AI competency framework for students

    We have had the chance to map the Experience AI resources to UNESCO’s AI framework for students at a high level, finding that the resources cover 10 of the 12 areas of the framework (see image below).

    An adaptation of a summary table from UNESCO’s new student competency framework (CC-BY-SA 3.0 IGO), highlighting the 10 areas covered by our Experience AI resources

    For instance, throughout the Experience AI resources runs a thread of promoting “citizenship in the AI era”: the social and ethical aspects of AI technologies are highlighted in all the lessons and activities. In this way, they provide students with the foundational knowledge of how AI systems work, and where they may work badly. Using the resources, educators can teach their learners core AI and machine learning concepts and make these concepts concrete through practical activities where learners create their own models and critically evaluate their outputs. Importantly, by learning with Experience AI, students not only learn to be responsible users of AI tools, but also to consider fairness, accountability, transparency, and privacy when they create AI models.  

    Teacher competency framework for AI 

    UNESCO’s AI competency framework for teachers outlines 15 competencies across 5 dimensions (see image below). We enjoyed listening to the launch panel members talk about the strong ambitions of the framework as well as the realities of teachers’ global and local challenges. The three key messages of the panel were:

    • AI will not replace the expertise of classroom teachers
    • Supporting educators to build AI competencies is a shared responsibility
    • Individual countries’ education systems have different needs in terms of educator support

    All three messages resonate strongly with the work we’re doing at the Raspberry Pi Foundation. Supporting all educators is a fundamental part of our resource development. For example, Experience AI offers everything a teacher with no technical background needs to deliver the lessons, including lesson plans, videos, worksheets and slide decks. We also provide a free online training course on understanding AI for educators. And in our work with partner organisations around the world, we adapt and translate Experience AI resources so they are culturally relevant, and we organise locally delivered teacher professional development. 

    A summary table from UNESCO’s new teacher competency framework (CC-BY-SA 3.0 IGO)

    The teacher competency framework is meant as guidance for educators, policy makers, training providers, and application developers to support teachers in using AI effectively, and in helping their learners gain AI literacy skills. We will certainly consult the document as we develop our training and professional development resources for teachers further.

    Towards AI literacy for all young people

    Across this year’s UNESCO Digital Learning Week, we saw the role of AI in education take centre stage in the presentations and the informal conversations among attendees. It was a privilege to present our work and see how well Experience AI was received, with attendees recognising that our design principles align with the values and principles in UNESCO’s new AI competency frameworks.

    A conference table setup with a pair of headphones resting on top of a UNESCO brochure.

    We look forward to continuing this international conversation about AI literacy and working in aligned ways to support all young people to develop a foundational understanding of AI technologies.


  • Experience AI expands to reach over 2 million students

    Experience AI expands to reach over 2 million students

    Reading Time: 4 minutes

    Two years ago, we announced Experience AI, a collaboration between the Raspberry Pi Foundation and Google DeepMind to inspire the next generation of AI leaders.

    Today I am excited to announce that we are expanding the programme with the aim of reaching more than 2 million students over the next 3 years, thanks to a generous grant of $10m from Google.org. 

    Why do kids need to learn about AI?

    AI technologies are already changing the world, and we are told that their potential impact is unprecedented in human history. But just like every other wave of technological innovation, along with all of the opportunities, the AI revolution has the potential to leave people behind, to exacerbate divisions, and to create more problems than it solves.

    Part of the answer to this dilemma lies in ensuring that all young people develop a foundational understanding of AI technologies and the role that they can play in their lives. 

    An educator points to an image on a student's computer screen.

    That’s why the conversation about AI in education is so important. A lot of the focus of that conversation is on how we harness the power of AI technologies to improve teaching and learning. Enabling young people to use AI to learn is important, but it’s not enough. 

    We need to equip young people with the knowledge, skills, and mindsets to use AI technologies to create the world they want. And that means supporting their teachers, who once again are being asked to teach a subject that they didn’t study. 

    Experience AI 

    That’s the work that we’re doing through Experience AI, an ambitious programme to provide teachers with free classroom resources and professional development, enabling them to teach their students about AI technologies and how they are changing the world. All of our resources are grounded in research that defines the concepts that make up AI literacy; they are rooted in real-world examples drawing on the work of Google DeepMind; and they involve hands-on, interactive activities. 

    The Experience AI resources have already been downloaded 100,000 times across 130 countries and we estimate that 750,000 young people have taken part in an Experience AI lesson already. 

    In November 2023, we announced that we were building a global network of partners that we would work with to localise and translate the Experience AI resources, to ensure that they are culturally relevant, and organise locally delivered teacher professional development. We’ve made a fantastic start working with partners in Canada, India, Kenya, Malaysia, and Romania; and it’s been brilliant to see the enthusiasm and demand for AI literacy from teachers and students across the globe. 

Thanks to an incredibly generous donation of $10m from Google.org – announced at Google.org’s first Impact Summit – we will shortly be welcoming new partners in 17 countries across Europe, the Middle East, and Africa, with the aim of reaching more than 2 million students in the next three years.

    AI Safety

    Alongside the expansion of the global network of Experience AI partners, we are also launching new resources that focus on critical issues of AI safety. 

    A laptop surrounded by various screens displaying images, videos, and a world map.

• AI and Your Data: Helping young people reflect on the data they are already providing to AI applications in their lives and how the prevalence of AI tools might change the way they protect their data.
• Media Literacy in the Age of AI: Highlighting the ways AI tools can be used to perpetuate misinformation and how AI applications can help combat misleading claims.
• Using Generative AI Responsibly: Empowering young people to reflect on their responsibilities when using generative AI and their expectations of developers who release AI tools.

    Get involved

    In many ways, this moment in the development of AI technologies reminds me of the internet in the 1990s (yes, I am that old). We all knew that it had potential, but no-one could really imagine the full scale of what would follow. 

    We failed to rise to the educational challenge of that moment and we are still living with the consequences: a dire shortage of talent; a tech sector that doesn’t represent all communities and voices; and young people and communities who are still missing out on economic opportunities and unable to utilise technology to solve the problems that matter to them. 

    We have an opportunity to do a better job this time. If you’re interested in getting involved, we’d love to hear from you.

    Website: LINK

  • Why we’re taking a problem-first approach to the development of AI systems

    Why we’re taking a problem-first approach to the development of AI systems

    Reading Time: 7 minutes

    If you are into tech, keeping up with the latest updates can be tough, particularly when it comes to artificial intelligence (AI) and generative AI (GenAI). Sometimes I admit to feeling this way myself, however, there was one update recently that really caught my attention. OpenAI launched their latest iteration of ChatGPT, this time adding a female-sounding voice. Their launch video demonstrated the model supporting the presenters with a maths problem and giving advice around presentation techniques, sounding friendly and jovial along the way. 

    A finger clicking on an AI app on a phone.

Adding a voice to these AI models was perhaps inevitable as big tech companies compete for market share in this space, but it got me thinking: why would they add a voice? Why does the model have to flirt with the presenter? 

    Working in the field of AI, I’ve always seen AI as a really powerful problem-solving tool. But with GenAI, I often wonder what problems the creators are trying to solve and how we can help young people understand the tech. 

    What problem are we trying to solve with GenAI?

The fact is that I’m really not sure. That’s not to suggest that I think GenAI hasn’t got its benefits — it does. I’ve seen so many great examples in education alone: teachers using large language models (LLMs) to generate ideas for lessons, to help differentiate work for students with additional needs, to create example answers to exam questions for their students to assess against the mark scheme. Educators are creative people, and whilst it is cool to see so many good uses of these tools, I wonder whether the developers had specific problems in mind while creating them, or whether they simply hoped that society would find a good use somewhere down the line.

    An educator points to an image on a student's computer screen.

    Whilst there are good uses of GenAI, you don’t need to dig very deeply before you start unearthing some major problems. 

    Anthropomorphism

Anthropomorphism means assigning human characteristics to things that aren’t human. We all do this all the time, usually without consequence. The problem with doing it with GenAI is that, unlike an inanimate object you’ve named (I call my vacuum cleaner Henry, for example), chatbots are designed to be human-like in their responses, so it’s easy for people to forget they’re not speaking to a human. 

    A photographic rendering of a smiling face emoji seen through a refractive glass grid, overlaid with a diagram of a neural network.
    Image by Alan Warburton / © BBC / Better Images of AI / Social Media / CC-BY 4.0

    As feared, since my last blog post on the topic, evidence has started to emerge that some young people are showing a desire to befriend these chatbots, going to them for advice and emotional support. It’s easy to see why. Here is an extract from an exchange between the presenters at the ChatGPT-4o launch and the model:

    ChatGPT (presented with a live image of the presenter): “It looks like you’re feeling pretty happy and cheerful with a big smile and even maybe a touch of excitement. Whatever is going on? It seems like you’re in a great mood. Care to share the source of those good vibes?”
    Presenter: “The reason I’m in a good mood is we are doing a presentation showcasing how useful and amazing you are.”
    ChatGPT: “Oh stop it, you’re making me blush.” 

The Family Online Safety Institute (FOSI) conducted a study looking at the emerging hopes and fears that parents and teenagers have around GenAI.

One teenager said:

    “Some people just want to talk to somebody. Just because it’s not a real person, doesn’t mean it can’t make a person feel — because words are powerful. At the end of the day, it can always help in an emotional and mental way.”  

The prospect of teenagers seeking solace and emotional support from a generative AI tool is a concerning development. While these AI tools can mimic human-like conversations, their outputs are based on patterns and data, not genuine empathy or understanding. The ultimate concern is that this leaves vulnerable young people open to manipulation in ways we can’t predict. Relying on AI for emotional support could also lead to a sense of isolation and detachment, hindering the development of healthy coping mechanisms and interpersonal relationships. 

    A photographic rendering of a simulated middle-aged white woman against a black background, seen through a refractive glass grid and overlaid with a distorted diagram of a neural network.
    Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0

    Arguably worse is the recent news of the world’s first AI beauty pageant. The very thought of this probably elicits some kind of emotional response depending on your view of beauty pageants. There are valid concerns around misogyny and reinforcing misguided views on body norms, but it’s also important to note that the winner of “Miss AI” is being described as a lifestyle influencer. The questions we should be asking are, who are the creators trying to have influence over? What influence are they trying to gain that they couldn’t get before they created a virtual woman? 

    DeepFake tools

    Another use of GenAI is the ability to create DeepFakes. If you’ve watched the most recent Indiana Jones movie, you’ll have seen the technology in play, making Harrison Ford appear as a younger version of himself. This is not in itself a bad use of GenAI technology, but the application of DeepFake technology can easily become problematic. For example, recently a teacher was arrested for creating a DeepFake audio clip of the school principal making racist remarks. The recording went viral before anyone realised that AI had been used to generate the audio clip. 

    Easy-to-use DeepFake tools are freely available and, as with many tools, they can be used inappropriately to cause damage or even break the law. One such instance is the rise in using the technology for pornography. This is particularly dangerous for young women, who are the more likely victims, and can cause severe and long-lasting emotional distress and harm to the individuals depicted, as well as reinforce harmful stereotypes and the objectification of women. 

    Why we should focus on using AI as a problem-solving tool

Technological developments causing unforeseen negative consequences is nothing new. A lot of our job as educators is about helping young people navigate a changing world and preparing them for their futures, and education has an essential role in helping people understand AI technologies well enough to avoid their dangers. 

Our approach at the Raspberry Pi Foundation is not to focus purely on the threats and dangers, but to teach young people to be critical users of technologies and not passive consumers. Having an understanding of how these technologies work goes a long way towards achieving sufficient AI literacy skills to make informed choices, and this is where our Experience AI programme comes in. 

    An Experience AI banner.

    Experience AI is a set of lessons developed in collaboration with Google DeepMind and, before we wrote any lessons, our team thought long and hard about what we believe are the important principles that should underpin teaching and learning about artificial intelligence. One such principle is taking a problem-first approach and emphasising that computers are tools that help us solve problems. In the Experience AI fundamentals unit, we teach students to think about the problem they want to solve before thinking about whether or not AI is the appropriate tool to use to solve it. 

    Taking a problem-first approach doesn’t by default avoid an AI system causing harm — there’s still the chance it will increase bias and societal inequities — but it does focus the development on the end user and the data needed to train the models. I worry that focusing on market share and opportunity rather than the problem to be solved is more likely to lead to harm.

Another set of principles that underpins our resources is teaching about fairness, accountability, transparency, privacy, and security (Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and higher education, Understanding Artificial Intelligence Ethics and Safety) in relation to the development of AI systems. These principles are aimed at making sure that creators of AI models develop them ethically and responsibly. The principles also apply to consumers: we need to get to a place in society where we expect these principles to be adhered to, and where consumer power means that models that don’t adhere to them simply won’t succeed. 

Furthermore, once students have created their models in the Experience AI fundamentals unit, we teach them about model cards, an approach that promotes transparency. Much as nutritional information on food labels allows consumers to make an informed choice about whether or not to buy the food, model cards give information about an AI model, such as its purpose, its accuracy, and known limitations like what bias might be in the data. Students write their own model cards based on the AI solutions they have created. 
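To make the idea concrete, a model card can be sketched as a simple data structure. Everything below is illustrative — the field names and values are hypothetical examples, not the format used in the Experience AI materials:

```python
# A sketch of a model card as a plain data structure.
# All field names and values are illustrative examples.
model_card = {
    "model_name": "fruit-classifier",
    "purpose": "Classify photos of fruit for a classroom demonstration",
    "training_data": "120 labelled photos collected by students",
    "accuracy": 0.87,  # measured on a held-out test set
    "known_limitations": [
        "Trained only on red apples, so green apples are often misclassified",
        "Training photos all had plain backgrounds",
    ],
}

def summarise(card):
    """Return a short label-style summary, like nutritional information."""
    lines = [
        f"Model: {card['model_name']}",
        f"Purpose: {card['purpose']}",
        f"Accuracy: {card['accuracy']:.0%}",
    ]
    lines += [f"Limitation: {limit}" for limit in card["known_limitations"]]
    return "\n".join(lines)

print(summarise(model_card))
```

Like a food label, the summary puts the purpose, accuracy, and limitations side by side, so a reader can judge at a glance whether the model is fit for their use.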

    What else can we do?

At the Raspberry Pi Foundation, we have set up an AI literacy team with the aim of embedding principles around AI safety, security, and responsibility into our resources and aligning them with the Foundation’s mission to help young people to:

    • Be critical consumers of AI technology
    • Understand the limitations of AI
• Expect fairness, accountability, transparency, privacy, and security, and work toward reducing inequities caused by technology
    • See AI as a problem-solving tool that can augment human capabilities, but not replace or narrow their futures 

    Our call to action to educators, carers, and parents is to have conversations with your young people about GenAI. Get to know their opinions on GenAI and how they view its role in their lives, and help them to become critical thinkers when interacting with technology. 

    Website: LINK

  • New guide on using generative AI for teachers and schools

    New guide on using generative AI for teachers and schools

    Reading Time: 5 minutes

    The world of education is loud with discussions about the uses and risks of generative AI — tools for outputting human-seeming media content such as text, images, audio, and video. In answer, there’s a new practical guide on using generative AI aimed at Computing teachers (and others), written by a group of classroom teachers and researchers at the Raspberry Pi Computing Education Research Centre and Faculty of Education at the University of Cambridge.

    Two educators discuss something at a desktop computer.

    Their new guide is a really useful overview for everyone who wants to:

    • Understand the issues generative AI tools present in the context of education
    • Find out how to help their schools and students navigate them
    • Discover ideas on how to make use of generative AI tools in their teaching

    Since generative AI tools have become publicly available, issues around data privacy and plagiarism are at the front of educators’ minds. At the same time, many educators are coming up with creative ways to use generative AI tools to enhance teaching and learning. The Research Centre’s guide describes the areas where generative AI touches on education, and lays out what schools and teachers can do to use the technology beneficially and help their learners do the same.

    Teaching students about generative AI tools

It’s widely accepted that AI tools can bring benefits but can also be used in unhelpful or harmful ways. Basic knowledge of how AI and machine learning work is key to getting the best from these tools. The Research Centre’s guide shares recommended educational resources for teaching learners about AI.

    A desktop computer showing the Experience AI homepage.

    One of the recommendations is Experience AI, a set of free classroom resources we’re creating. It includes a set of 6 lessons for providing 11- to 14-year-olds with a foundational understanding of AI systems, as well as a standalone lesson specifically for teaching about large language model-based AI tools, such as ChatGPT and Google Gemini. These materials are for teachers of any specialism, not just for Computing teachers.

You’ll find that even a brief introduction to how large language models work is likely to make the idea of using these tools to do all their homework much less appealing to students. The guide outlines creative ways you can help students see some of generative AI’s pitfalls, such as asking students to generate outputs and compare them, paying particular attention to inaccuracies in the outputs.

    Generative AI tools and teaching computing

We’re still learning about the best ways to teach programming to novice learners. Generative AI has the potential to change how young people learn text-based programming, as AI functionality is now integrated into many of the major programming environments, generating example solutions or helping to spot errors.

    A web project in the Code Editor.

    The Research Centre’s guide acknowledges that there’s more work to be done to understand how and when to support learners with programming tasks through generative AI tools. (You can follow our ongoing seminar series on the topic.) In the meantime, you may choose to support established programming pedagogies with generative AI tools, such as prompting an AI chatbot to generate a PRIMM activity on a particular programming concept.

    As ethics and the impact of technology play an important part in any good Computing curriculum, the guide also shares ways to use generative AI tools as a focus for your classroom discussions about topics such as bias and inequality.

    Using generative AI tools to support teaching and learning

    Teachers have been using generative AI applications as productivity tools to support their teaching, and the Research Centre’s guide gives several examples you can try out yourself. Examples include creating summaries of textual materials for students, and creating sets of questions on particular topics. As the guide points out, when you use generative AI tools like this, it’s important to always check the accuracy of the generated materials before you give any of them to your students.

    Putting a school-wide policy in place

    Importantly, the Research Centre’s guide highlights the need for a school-wide acceptable use policy (AUP) that informs teachers, other school staff, and students on how they may use generative AI tools. This section of the guide suggests websites that offer sample AUPs that can be used as a starting point for your school. Your AUP should aim to keep users safe, covering e-safety, privacy, and security issues as well as offering guidance on being transparent about the use of generative tools.

    Teachers in discussion at a table.

It’s not uncommon for schools to look to specialist Computing teachers as the experts on questions around the use of digital tools. However, to develop trust in how generative AI tools are used in the school, it’s important to consult as wide a range of stakeholders as possible in the process of creating an AUP.

    A source of support for teachers and schools

    As the Research Centre’s guide recognises, the landscape of AI and our thinking about it might change. In this uncertain context, the document offers a sensible and detailed overview of where we are now in understanding the current impact of generative AI on Computing as a subject, and on education more broadly. The example use cases and thought-provoking next steps on how this technology can be used and what its known risks and concerns are should be helpful for all interested educators and schools.

I recommend that all Computing teachers read this new guide, and I hope you feel inspired about the key role you can play in shaping the future of education in the age of AI.

    Website: LINK

  • Four key learnings from teaching Experience AI lessons

    Four key learnings from teaching Experience AI lessons

    Reading Time: 4 minutes

    Developed by us and Google DeepMind, Experience AI provides teachers with free resources to help them confidently deliver lessons that inspire and educate young people about artificial intelligence (AI) and the role it could play in their lives.

Tracy Mayhead is a computer science teacher at Arthur Mellows Village College in Cambridgeshire. She recently taught Experience AI to her KS3 pupils. In this blog post, she shares four key learnings from this experience.

    A photo of Tracy Mayhead in a classroom.

    1. Preparation saves time

    The Experience AI lesson plans provided a clear guide on how to structure our lessons.

    Each lesson includes teacher-facing intro videos, a lesson plan, a slide deck, activity worksheets, and student-facing videos that help to introduce each new AI concept. 

    It was handy to know in advance which websites needed unblocking so students could access them. 

    You can find a unit overview on the Experience AI website to get an idea of what is included in each lesson.

    “My favourite bit was making my own model, and choosing the training data. I enjoyed seeing how the amount of data affected the accuracy of the AI and testing the model.” – Student, Arthur Mellows Village College, UK 

2. The lessons can be adapted to meet students’ needs 

    It was clear from the start that I could adapt the lessons to make them work for myself and my students.

    Having estimated times and corresponding slides for activities was beneficial for adjusting the lesson duration. The balance between learning and hands-on tasks was just right.

    A group of students at a desk in a classroom.

    I felt fairly comfortable with my understanding of AI basics. However, teaching it was a learning experience, especially in tailoring the lessons to cater to students with varying knowledge. Their misconceptions sometimes caught me off guard, like their belief that AI is never wrong. Adapting to their needs and expectations was a learning curve. 

    “It has definitely changed my outlook on AI. I went from knowing nothing about it to understanding how it works, why it acts in certain ways, and how to actually create my own AI models and what data I would need for that.” – Student, Arthur Mellows Village College, UK 

    3. Young people are curious about AI and how it works

    My students enjoyed the practical aspects of the lessons, like categorising apples and tomatoes. They found it intriguing how AI could sometimes misidentify objects, sparking discussions on its limitations. They also expressed concerns about AI bias, which these lessons helped raise awareness about. I didn’t always have all the answers, but it was clear they were curious about AI’s implications for their future.

It’s important to acknowledge that, as a teacher, you won’t always have all the answers, especially when teaching AI literacy, which is such a new area. This is something that can be explored in class alongside students.

If you are at all nervous, there is an online course that can help you get started with teaching about AI.

    [youtube https://www.youtube.com/watch?v=gScgJf289Cs?feature=oembed&w=500&h=281]

    “I learned a lot about AI and the possibilities it holds to better our futures as well as how to train it and problems that may arise when training it.” – Student, Arthur Mellows Village College, UK

    4. Engaging young people with AI is important

    Students are fascinated by AI and they recognise its significance in their future. It is important to equip them with the knowledge and skills to fully engage with AI.

    Experience AI provides a valuable opportunity to explore these concepts and empower students to shape and question the technology that will undoubtedly impact their lives.

    “It has changed my outlook on AI because I now understand it better and feel better equipped to work with AI in my working life.” – Student, Arthur Mellows Village College, UK 

    A group of Year 10 students in a classroom.

    What is your experience of teaching Experience AI lessons?

    We completely agree with Tracy. AI literacy empowers people to critically evaluate AI applications and how they are being used. Our Experience AI resources help to foster critical thinking skills, allowing learners to use AI tools to address challenges they are passionate about. 

    We’re also really interested to learn what misconceptions students have about AI and how teachers are addressing them. If you come across misconceptions that surprise you while you’re teaching with the Experience AI lesson materials, please let us know via the feedback form linked in the final lesson of the six-lesson unit.

    If you would like to teach Experience AI lessons to your students, download the free resources from experience-ai.org

    Website: LINK

  • Imagining students’ progression in the era of generative AI

    Imagining students’ progression in the era of generative AI

    Reading Time: 6 minutes

    Generative artificial intelligence (AI) tools are becoming more easily accessible to learners and educators, and increasingly better at generating code solutions to programming tasks, code explanations, computing lesson plans, and other learning resources. This raises many questions for educators in terms of what and how we teach students about computing and AI, and AI’s impact on assessment, plagiarism, and learning objectives.

    Brett Becker.

    We were honoured to have Professor Brett Becker (University College Dublin) join us as part of our ‘Teaching programming (with or without AI)’ seminar series. He is uniquely placed to comment on teaching computing using AI tools, having been involved in many initiatives relevant to computing education at different levels, in Ireland and beyond.

    In a computing classroom, two girls concentrate on their programming task.

    Brett’s talk focused on what educators and education systems need to do to prepare all students — not just those studying Computing — so that they are equipped with sufficient knowledge about AI to make their way from primary school to secondary and beyond, whether it be university, technical qualifications, or work.

    How do AI tools currently perform?

    Brett began his talk by illustrating the increase in performance of large language models (LLMs) in solving first-year undergraduate programming exercises: he compared the findings from two recent studies he was involved in as part of an ITiCSE Working Group. In the first study — from 2021 — the results generated by GPT-3 were similar to those of students in the top quartile. By the second study in 2023, GPT-4’s performance matched that of a top student (Figure 1).

    A graph comparing exam scores.

    Figure 1: Student scores on Exam 1 and Exam 2, represented by circles. GPT-3’s 2021 score is represented by the blue ‘x’, and GPT-4’s 2023 score on the same questions is represented by the red ‘x’.

    Brett also explained that the study found some models were capable of solving current undergraduate programming assessments almost error-free, and could solve the Irish Leaving Certificate and UK A level Computer Science exams.

    What are challenges and opportunities for education?

    This level of performance raises many questions for computing educators about what is taught and how to assess students’ learning. To address this, Brett referred to his 2023 paper, which included findings from a literature review and a survey on students’ and instructors’ attitudes towards using LLMs in computing education. This analysis has helped him identify several opportunities as well as the ethical challenges education systems face regarding generative AI. 

    The opportunities include: 

    • The generation of unique content, lesson plans, programming tasks, or feedback to help educators with workload and productivity
    • More accessible content and tools generated by AI apps to make Computing more broadly accessible to more students
    • More engaging and meaningful student learning experiences, including using generative AI to enable creativity and using conversational agents to augment students’ learning
    • The impact on assessment practices, both in terms of automating the marking of current assessments as well as reconsidering what is assessed and how

    Some of the challenges include:

    • The lack of reliability and accuracy of outputs from generative AI tools
    • The need to educate everyone about AI to create a baseline level of understanding
    • The legal and ethical implications of using AI in computing education and beyond
    • How to deal with questionable or even intentionally harmful uses of AI and mitigating the consequences of such uses

    Programming as a basic skill for all subjects

    Next, Brett talked about concrete actions that he thinks we need to take in response to these opportunities and challenges. 

    He emphasised our responsibility to keep students safe. One way to do this is to empower all students with a baseline level of knowledge about AI, at an age-appropriate level, to enable them to keep themselves safe. 

    Secondary school age learners in a computing classroom.

    He also discussed the increased relevance of programming to all subjects, not only Computing, in a similar way to how reading and mathematics transcend the boundaries of their subjects, and the need he sees to adapt subjects and curricula to that effect. 

    As an example of how rapidly curricula may need to change with increasing AI use by students, Brett looked at the Irish Computer science specification for “senior cycle” (final two years of second-level, ages 16–18). This curriculum was developed in 2018 and remains a strong computing curriculum in Brett’s opinion. However, he pointed out that it only contains a single learning outcome on AI. 

To help educators bridge this gap, Brett and Keith Quille included two chapters dedicated to AI, machine learning, and ethics and computing in the book they wrote to accompany the curriculum. Brett believes these types of additional resources may be instrumental for teaching and learning about AI, as resources are more adaptable and easier to update than curricula. 

    Generative AI in computing education

    Taking the opportunity to use generative AI to reimagine new types of programming problems, Brett and colleagues have developed Promptly, a tool that allows students to practise prompting AI code generators. This tool provides a combined approach to learning about generative AI while learning programming with an AI tool. 

    Promptly is intended to help students learn how to write effective prompts. It encourages students to specify and decompose the programming problem they want to solve, read the code generated, compare it with test cases to discern why it is failing (if it is), and then update their prompt accordingly (Figure 2). 

    An example of the Promptly interface.

    Figure 2: Example of a student’s use of Promptly.
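The cycle Promptly encourages — specify the problem, generate code, test it, refine the prompt — can be sketched in a few lines of Python. This is only an illustration of the workflow, not Promptly’s actual implementation; `generate_code` is a hypothetical stand-in for a call to an AI code generator:

```python
# Illustrative sketch of the specify-generate-test-refine loop.
# generate_code is a hypothetical stand-in for an AI code generator:
# here it only handles negative inputs once the prompt mentions them.
def generate_code(prompt):
    if "negative" in prompt:
        return lambda x: abs(x) * 2
    return lambda x: x * 2

def run_tests(func, cases):
    """Return the failing (input, expected, actual) triples."""
    return [(x, want, func(x)) for x, want in cases if func(x) != want]

test_cases = [(2, 4), (5, 10), (-3, 6)]  # double the magnitude of a number

prompt = "Write a function that doubles a number"
failures = run_tests(generate_code(prompt), test_cases)
# The failing case (-3) shows the student what the prompt left unspecified,
# so they refine the prompt and try again.
if failures:
    prompt += ", treating negative numbers as their absolute value"
    failures = run_tests(generate_code(prompt), test_cases)

print(failures)  # an empty list: all test cases now pass
```

The point of the exercise is that the test cases, not the generated code, define success: the student learns to read the failures and decompose the problem into a more precise prompt.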

    Early undergraduate student feedback points to Promptly being a useful way to teach programming concepts and encourage metacognitive programming skills. The tool is further described in a paper, and whilst the initial evaluation was aimed at undergraduate students, Brett positioned it as a secondary school–level tool as well. 

    Brett hopes that by using generative AI tools like this, it will be possible to better equip a larger and more diverse pool of students to engage with computing.

    Re-examining the concept of programming

    Brett concluded his seminar by broadening the relevance of programming to all learners, while challenging us to expand our perspectives of what programming is. If we define programming as a way of prompting a machine to get an output, LLMs allow all of us to do so without the need for learning the syntax of traditional programming languages. Taking that view, Brett left us with a question to consider: “How do we prepare for this from an educational perspective?”

    You can watch Brett’s presentation here:

    [youtube https://www.youtube.com/watch?v=n0BZq8uRutQ?feature=oembed&w=500&h=281]

    Join our next seminar

    The focus of our ongoing seminar series is on teaching programming with or without AI. 

    For our next seminar on Tuesday 11 June at 17:00 to 18:30 GMT, we’re joined by Veronica Cucuiat (Raspberry Pi Foundation), who will talk about whether LLMs could be employed to help understand programming error messages, which can present a significant obstacle to anyone new to coding, especially young people.  

    To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK

  • Teaching a generation of AI innovators in Malaysia with Experience AI

    Teaching a generation of AI innovators in Malaysia with Experience AI

    Reading Time: 4 minutes

    Today’s blog is from Aimy Lee, Chief Operating Officer at Penang Science Cluster, part of our global partner network for Experience AI.

    Artificial intelligence (AI) is transforming the world at an incredible pace, and at Penang Science Cluster, we are determined to be at the forefront of this fast-changing landscape.

    A teacher delivers a lesson in a classroom while students sit at their desks and listen.

    The Malaysian government is actively promoting AI literacy among citizens, demonstrating a commitment to the nation’s technological advancement. This dedication is further demonstrated by the Ministry of Education’s recent announcement to introduce AI basics into the primary school curriculum, starting in 2027. 

    Why we chose Experience AI

    At Penang Science Cluster, we firmly believe that AI is already an essential part of everybody’s future, especially for young people, for whom technologies such as search engines, AI chatbots, image generation, and facial recognition are already deeply ingrained in their daily experiences. It is vital that we equip young people with the knowledge to understand, harness, and even create AI solutions, rather than view AI with trepidation.

    A student uses a laptop in a classroom.

    With this in mind, we’re excited to be one of the first of many organisations to join the Experience AI global partner network. Experience AI is a free educational programme  offering cutting-edge resources on artificial intelligence and machine learning for teachers and students. Developed in collaboration between the Raspberry Pi Foundation and Google DeepMind, as a global partner we hope the programme will bring AI literacy to thousands of students across Malaysia.

    Our goal is to demystify AI and highlight its potential for positive change. The Experience AI programme resonated with our mission to provide accessible and engaging resources tailored for our beneficiaries, making it a natural fit for our efforts.

    Experience AI pilot: Results and student voices

    At the start of this year, we ran an Experience AI pilot with 56 students to discover how the programme resonated with young people. The positive feedback we received was incredibly encouraging! Students expressed excitement and a genuine shift in their understanding of AI. 

    Their comments, such as discovering the fun of learning about AI and seeing how AI can lead to diverse career paths, validated the effectiveness of the programme’s approach.  

    [youtube https://www.youtube.com/watch?v=zWO_xWOEw0k?feature=oembed&w=500&h=281]

    One student’s changed perspective — from fearing AI to recognising its potential — underscores the importance of addressing misconceptions. Providing accessible AI education empowers students to develop a balanced and informed outlook.

    “I learnt new things and it changed my mindset that AI is not going to take over the world.” – Student who took part in the Experience AI pilot

    Launching Experience AI in Malaysia

    The successful pilot paved the way for our official Experience AI launch in early April. Students who participated in the pilot were proud to be a part of the launch event, sharing their AI knowledge and experience with esteemed guests, including the Chief Minister of Penang, the Deputy Finance Minister of Malaysia, and the Director of the Penang State Education Department. The presence of these leaders highlights the growing recognition of the significance of AI education.

    Experience AI launch event in Malaysia

    Building a vibrant AI education community

    Following the launch, our immediate focus has shifted to empowering teachers. With the help of the Raspberry Pi Foundation, we’ll conduct teacher workshops to equip them with the knowledge and tools to bring Experience AI into their classrooms. Collaborating with education departments in Penang, Kedah, Perlis, Perak, and Selangor will be vital in teacher recruitment and building a vibrant AI education community.

    Inspiring the next generation of AI creators

    Experience AI marks an exciting start to integrating AI education within Malaysia, for both students and teachers. Our hope is to inspire a generation of young people empowered to shape the future of AI — not merely as consumers of the technology, but as active creators and innovators.

    We envision a future where AI education is as fundamental as mathematics education, providing students with the tools they need to thrive in an AI-driven world. The journey of AI exploration in Malaysia has only just begun, and we’re thrilled to play a part in shaping its trajectory.

    If you’re interested in partnering with us to bring Experience AI to students and teachers in your country, you can register your interest here.

    Website: LINK

  • Localising AI education: Adapting Experience AI for global impact

    Localising AI education: Adapting Experience AI for global impact

    Reading Time: 6 minutes

    It’s been almost a year since we launched our first set of Experience AI resources in the UK, and we’re now working with partner organisations to bring AI literacy to teachers and students all over the world.

    Developed by the Raspberry Pi Foundation and Google DeepMind, Experience AI provides everything that teachers need to confidently deliver engaging lessons that will inspire and educate young people about AI and the role that it could play in their lives.

    Over the past six months we have been working with partners in Canada, Kenya, Malaysia, and Romania to create bespoke localised versions of the Experience AI resources. Here is what we’ve learned in the process.

    Creating culturally relevant resources

    The Experience AI Lessons address a variety of real-world contexts to support the concepts being taught. Including real-world contexts in teaching is a pedagogical strategy we at the Raspberry Pi Foundation call "making concrete". This strategy significantly enhances learning because it bridges the gap between theoretical knowledge and practical application. 

    Three learners and an educator do a physical computing activity.

    The initial aim of Experience AI was for the resources to be used in UK schools. While we put particular emphasis on using culturally relevant pedagogy to make the resources relatable to learners from backgrounds that are underrepresented in the tech industry, the contexts we included in them were for UK learners. As many of the resource writers and contributors were also based in the UK, we also unavoidably brought our own lived experiences and unintentional biases to our design thinking.

    Therefore, when we began thinking about how to adapt the resources for schools in other countries, we knew we needed to make sure that we didn’t just convert what we had created into different languages. Instead we focused on localisation.

    Educators doing an activity about networks using a piece of string.

    Localisation goes beyond translating resources into a different language. For example in educational resources, the real-world contexts used to make concrete the concepts being taught need to be culturally relevant, accessible, and engaging for students in a specific place. In properly localised resources, these contexts have been adapted to provide educators with a more relatable and effective learning experience that resonates with the students’ everyday lives and cultural background.

    Working with partners on localisation

    Recognising our UK-focused design process, we were careful not to carry assumptions into the localisation work. We worked with partner organisations in the four countries — Digital Moment, Tech Kidz Africa, Penang Science Cluster, and Asociația Techsoup — drawing on their expertise regarding their educational context and the real-world examples that would resonate with young people in their countries.

    Participants on a video call.
    A video call with educators in Kenya.

    We asked our partners to look through each of the Experience AI resources and point out the things that they thought needed to change. We then worked with them to find alternative contexts that would resonate with their students, whilst ensuring the resources’ intended learning objectives would still be met.

    Spotlight on localisation for Kenya

    Tech Kidz Africa, our partner in Kenya, challenged some of the assumptions we had made when writing the original resources.

    An Experience AI lesson plan in English and Swahili.
    An Experience AI resource in English and Swahili.

    Relevant applications of AI technology

    Tech Kidz Africa wanted the contexts in the lessons to not just be relatable to their students, but also to demonstrate real-world uses of AI applications that could make a difference in learners’ communities. They highlighted that as agriculture is the largest contributor to the Kenyan economy, there was an opportunity to use this as a key theme for making the Experience AI lessons more culturally relevant. 

    This conversation with Tech Kidz Africa led us to identify a real-world use case where farmers in Kenya were using an AI application that identifies disease in crops and provides advice on which pesticides to use. This helped the farmers to increase their crop yields.

    Training an AI model to classify healthy and unhealthy cassava plant photos.
    Training an AI model to classify healthy and unhealthy cassava plant photos.

    We included this example when we adapted an activity where students explore the use of AI for “computer vision”. A Google DeepMind research engineer, who is one of the General Chairs of the Deep Learning Indaba, recommended a data set of images of healthy and diseased cassava crops (1). We were therefore able to include an activity where students build their own machine learning models to solve this real-world problem for themselves.

    Access to technology

    While designing the original set of Experience AI resources, we made the assumption that the vast majority of students in UK classrooms have access to computers connected to the internet. This is not the case in Kenya; neither is it the case in many other countries across the world. Therefore, while we localised the Experience AI resources with our Kenyan partner, we made sure that the resources allow students to achieve the same learning outcomes whether or not they have access to internet-connected computers.

    An AI classroom discussion activity.
    An Experience AI activity related to farming.

    Assuming teachers in Kenya are able to download files in advance of lessons, we added “unplugged” options to activities where needed, as well as videos that can be played offline instead of being streamed on an internet-connected device.

    What we’ve learned

    The work with our first four Experience AI partners has taught us a great deal about localisation, lessons we will apply as we continue to expand the programme with more partners across the globe:

    • Cultural specificity: We gained insight into which contexts are not appropriate for non-UK schools, and which contexts all our partners found relevant. 
    • Importance of local experts: We know we need to make sure we involve not just people who live in a country, but people who have a wealth of experience of working with learners and understand what is relevant to them. 
    • Adaptation vs standardisation: We have learned about the balance between adapting resources and maintaining the same progression of learning across the Experience AI resources. 

    Throughout this process we have also reflected on the design principles for our resources and the choices we can make while we create more Experience AI materials in order to make them more amenable to localisation. 

    Join us as an Experience AI partner

    We are very grateful to our partners for collaborating with us to localise the Experience AI resources. Thank you to Digital Moment, Tech Kidz Africa, Penang Science Cluster, and Asociația Techsoup.

    We now have the tools to create resources that support a truly global community to access Experience AI in a way that resonates with them. If you’re interested in joining us as a partner, you can register your interest here.


    (1) The cassava data set was published open source by Ernest Mwebaze, Timnit Gebru, Andrea Frome, Solomon Nsumba, and Jeremy Tusubira. Read their research paper about it here.

    Website: LINK

  • Insights into students’ attitudes to using AI tools in programming education

    Insights into students’ attitudes to using AI tools in programming education

    Reading Time: 4 minutes

    Educators around the world are grappling with the problem of whether to use artificial intelligence (AI) tools in the classroom. As more and more teachers start exploring ways to use these tools for teaching and learning computing, there is an urgent need to understand the impact of their use, to make sure they do not exacerbate the digital divide and leave some students behind.

    A teenager learning computer science.

    Sri Yash Tadimalla from the University of North Carolina and Dr Mary Lou Maher, Director of Research Community Initiatives at the Computing Research Association, are exploring how student identities affect their interaction with AI tools and their perceptions of the use of AI tools. They presented findings from two of their research projects in our March seminar.

    How students interact with AI tools 

    A common approach in research is to begin with a preliminary study involving a small group of participants in order to test a hypothesis, ways of collecting data from participants, and an intervention. Yash explained that this was the approach they took with a group of 25 undergraduate students on an introductory Java programming course. The researchers observed the students as they performed a set of programming tasks using an AI chatbot tool (ChatGPT) or an AI code generator tool (GitHub Copilot). 

    The data analysis uncovered five emergent attitudes of students using AI tools to complete programming tasks: 

    • Highly confident students rely heavily on AI tools and are confident about the quality of the code generated by the tool without verifying it
    • Cautious students are careful in their use of AI tools and verify the accuracy of the code produced
    • Curious students are interested in exploring the capabilities of the AI tool and are likely to experiment with different prompts 
    • Frustrated students struggle with using the AI tool to complete the task and are likely to give up 
    • Innovative students use the AI tool in creative ways, for example to generate code for other programming tasks

    Whether these attitudes are common among other, larger groups of students requires more research. However, these preliminary groupings may be useful for educators who want to understand their students and how to support them with targeted instructional techniques. For example, highly confident students may need encouragement to check the accuracy of AI-generated code, while frustrated students may need assistance to use the AI tools to complete programming tasks.

    An intersectional approach to investigating student attitudes

    Yash and Mary Lou explained that their next research study took an intersectional approach to student identity. Intersectionality is a way of exploring identity using more than one defining characteristic, such as ethnicity and gender, or education and class. Intersectional approaches acknowledge that a person’s experiences are shaped by the combination of their identity characteristics, which can sometimes confer multiple privileges or lead to multiple disadvantages.

    A student in a computing classroom.

    In the second research study, 50 undergraduate students participated in programming tasks and their approaches and attitudes were observed. The gathered data was analysed using intersectional groupings, such as:

    • Students who were from the first generation in their family to attend university and female
    • Students who were from an underrepresented ethnic group and female 

    Although the researchers observed differences amongst the groups of students, there was not enough data to determine whether these differences were statistically significant.

    Who thinks using AI tools should be considered cheating? 

    Participating students were also asked about their views on using AI tools, such as “Did having AI help you in the process of programming?” and “Does your experience with using this AI tool motivate you to continue learning more about programming?”

    The same intersectional approach was taken towards analysing students' answers. One surprising finding stood out: when asked whether using AI tools to help with programming tasks should be considered cheating, students from more privileged backgrounds tended to say that it should, whilst students from less privileged backgrounds said it was not cheating.

    This finding comes from a very small group of students at a single university, so Yash and Mary Lou called for other researchers to replicate the study with other groups of students to investigate further. 

    You can watch the full seminar here:

    [youtube https://www.youtube.com/watch?v=0oIGA7NJREI?feature=oembed&w=500&h=281]

    Acknowledging differences to prevent deepening divides

    As researchers and educators, we often hear that we should educate students about the importance of making AI ethical, fair, and accessible to everyone. However, simply hearing this message isn’t the same as truly believing it. If students’ identities influence how they view the use of AI tools, it could affect how they engage with these tools for learning. Without recognising these differences, we risk continuing to create wider and deeper digital divides. 

    Join our next seminar

    The focus of our ongoing seminar series is on teaching programming with or without AI. 

    For our next seminar on Tuesday 16 April at 17:00 to 18:30 GMT, we’re joined by Brett A. Becker (University College Dublin), who will talk about how generative AI can be used effectively in secondary school programming education and how it can be leveraged so that students can be best prepared for continuing their education or beginning their careers. To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK

  • Using an AI code generator with school-age beginner programmers

    Using an AI code generator with school-age beginner programmers

    Reading Time: 5 minutes

    AI models for general-purpose programming, such as OpenAI Codex, which powers the AI pair programming tool GitHub Copilot, have the potential to significantly impact how we teach and learn programming. 

    Learner in a computing classroom.

    The basis of these tools is a ‘natural language to code’ approach, also called natural language programming. This allows users to generate code using a simple text-based prompt, such as “Write a simple Python script for a number guessing game”. Programming-specific AI models are trained on vast quantities of text data, including GitHub repositories, to enable users to quickly solve coding problems using natural language. 
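    To make this concrete, here is roughly the kind of program that prompt might yield. This is a hand-written sketch for illustration, not actual output from any model; the guess-checking logic is split into a small helper function so the game's behaviour is easy to inspect:

    ```python
    import random

    def check_guess(guess, secret):
        """Return the feedback a guessing game would print for one guess."""
        if guess < secret:
            return "Too low!"
        if guess > secret:
            return "Too high!"
        return "Correct!"

    def play(low=1, high=100):
        """Interactive game loop: keep asking until the player finds the number."""
        secret = random.randint(low, high)
        while True:
            guess = int(input(f"Guess a number between {low} and {high}: "))
            feedback = check_guess(guess, secret)
            print(feedback)
            if feedback == "Correct!":
                break
    ```

    Even for a program this small, a learner still needs enough programming knowledge to read the generated code, spot its rough edges (for example, the `int()` call will crash on non-numeric input), and adapt it to their own needs.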

    As a computing educator, you might ask what the potential is for using these tools in your classroom. In our latest research seminar, Majeed Kazemitabaar (University of Toronto) shared his work in developing AI-assisted coding tools to support students during Python programming tasks.

    Evaluating the benefits of natural language programming

    Majeed argued that natural language programming can enable students to focus on the problem-solving aspects of computing, and support them in fixing and debugging their code. However, he cautioned that students might become overdependent on the use of ‘AI assistants’ and that they might not understand what code is being outputted. Nonetheless, Majeed and colleagues were interested in exploring the impact of these code generators on students who are starting to learn programming.

    Using AI code generators to support novice programmers

    In one study, Majeed's team investigated whether an AI code generator affected students' task and learning performance. They split 69 students (aged 10–17) into two groups: one group used a code generator within Coding Steps, an environment that enabled log data to be captured, and the other group did not use the code generator.

    A group of male students at the Coding Academy in Telangana.

    Learners who used the code generator completed significantly more authoring tasks — where students manually write all of the code — spent less time completing them, and generated significantly more correct solutions. In multiple choice questions and modifying tasks — where students were asked to modify a working program — students performed similarly whether they had access to the code generator or not. 

    A test was administered a week later to check the groups’ performance, and both groups did similarly well. However, the ‘code generator’ group made significantly more errors in authoring tasks where no starter code was given. 

    Majeed’s team concluded that using the code generator significantly increased the completion rate of tasks and student performance (i.e. correctness) when authoring code, and that using code generators did not lead to decreased performance when manually modifying code. 

    Finally, students in the code generator group reported feeling less stressed and more eager to continue programming at the end of the study.

    Student perceptions when (not) using AI code generators

    Understanding how novices use AI code generators

    In a related study, Majeed and his colleagues investigated how novice programmers used the code generator and whether this usage impacted their learning. Working with data from 33 learners (aged 11–17), they analysed 45 tasks completed by students to understand:

    1. The context in which the code generator was used
    2. What learners asked for
    3. How prompts were written
    4. The nature of the outputted code
    5. How learners used the outputted code 

    Their analysis found that students used the code generator for the majority of task attempts (74% of cases) with far fewer tasks attempted without the code generator (26%). Of the task attempts made using the code generator, 61% involved a single prompt while only 8% involved decomposition of the task into multiple prompts for the code generator to solve subgoals; 25% used a hybrid approach — that is, some subgoal solutions being AI-generated and others manually written.

    When students' approaches were compared against their post-test evaluation scores, there were positive though not statistically significant trends for students who used a hybrid approach (see the image below). Conversely, negative though not statistically significant trends were found for students who used a single-prompt approach.

    A positive correlation between hybrid programming and post-test scores

    Though not statistically significant, these results suggest that the students who actively engaged with tasks — i.e. generating some subgoal solutions, manually writing others, and debugging their own written code — performed better in coding tasks.

    Majeed concluded that while the data showed evidence of self-regulation, such as students writing code manually or adding to AI-generated code, students frequently used the output from single prompts in their solutions, indicating an over-reliance on the output of AI code generators.

    He suggested that teachers should support novice programmers to write better quality prompts to produce better code.  

    If you want to learn more, you can watch Majeed’s seminar:

    [youtube https://www.youtube.com/watch?v=ZmbQYE7hKhE?feature=oembed&w=500&h=281]

    You can read more about Majeed’s work on his personal website. You can also download and use the code generator Coding Steps yourself.

    Join our next seminar

    The focus of our ongoing seminar series is on teaching programming with or without AI. 

    For our next seminar on Tuesday 16 April at 17:00–18:30 GMT, we’re joined by Brett Becker (University College Dublin), who will discuss how generative AI may be effectively utilised in secondary school programming education and how it can be leveraged so that students can be best prepared for whatever lies ahead. To take part in the seminar, click the button below to sign up, and we will send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • The Experience AI Challenge: Find out all you need to know

    The Experience AI Challenge: Find out all you need to know

    Reading Time: 3 minutes

    We’re really excited to see that Experience AI Challenge mentors are starting to submit AI projects created by young people. There’s still time for you to get involved in the Challenge: the submission deadline is 24 May 2024. 

    The Experience AI Challenge banner.

    If you want to find out more about the Challenge, join our live webinar on Wednesday 3 April at 15:30 BST on our YouTube channel.

    [youtube https://www.youtube.com/watch?v=kH3BI70M0e0?feature=oembed&w=500&h=281]

    During the webinar, you’ll have the chance to:

    • Ask your questions live. Get any Challenge-related queries answered by us in real time. Whether you need clarification on any part of the Challenge or just want advice on your young people’s project(s), this is your chance to ask.
    • Get introduced to the submission process. Understand the steps of submitting projects to the Challenge. We’ll walk you through the requirements and offer tips for making your young people’s submission stand out.
    • Learn more about our project feedback. Find out how we will deliver our personalised feedback on submitted projects (UK only).
    • Find out how we will recognise your creators’ achievements. Learn more about our showcase event taking place in July, and the certificates and posters we’re creating for you and your young people to celebrate submitting your projects.

    Subscribe to our YouTube channel and press the ‘Notify me’ button to receive a notification when we go live. 

    Why take part? 

    The Experience AI Challenge, created by the Raspberry Pi Foundation in collaboration with Google DeepMind, guides young people under the age of 18, and their mentors, through the exciting process of creating their own unique artificial intelligence (AI) project. Participation is completely free.

    Central to the Challenge is the concept of project-based learning, a hands-on approach that gets learners working together, thinking critically, and engaging deeply with the materials. 

    A teacher and three students in a classroom. The teacher is pointing at a computer screen.

    In the Challenge, young people are encouraged to seek out real-world problems and create possible AI-based solutions. By taking part, they become problem solvers, thinkers, and innovators. 

    And to every young person based in the UK who creates a project for the Challenge, we will provide personalised feedback and a certificate of achievement, in recognition of their hard work and creativity. Any projects our experts consider outstanding will be selected as favourites, and their creators will be invited to a showcase event in the summer. 

    Resources ready for your classroom or club

    You don’t need to be an AI expert to bring this Challenge to life in your classroom or coding club. Whether you’re introducing AI for the first time or looking to deepen your young people’s knowledge, the Challenge’s step-by-step resource pack covers all you and your young people need, from the basics of AI, to training a machine learning model, to creating a project in Scratch.  

    In the resource pack, you will find:

    • The mentor guide, which contains all you need to set up and run the Challenge with your young people 
    • The creator guide, which supports young people throughout the Challenge and contains talking points to help with planning and designing projects 
    • The blueprint workbook, which helps creators keep track of their inspiration, ideas, and plans during the Challenge 

    The pack offers a safety net of scaffolding, support, and troubleshooting advice. 

    Find out more about the Experience AI Challenge

    By bringing the Experience AI Challenge to young people, you’re inspiring the next generation of innovators, thinkers, and creators. The Challenge encourages young people to look beyond the code, to the impact of their creations, and to the possibilities of the future.

    You can find out more about the Experience AI Challenge, and download the resource pack, from the Experience AI website.

    Website: LINK

  • Teaching about AI explainability

    Teaching about AI explainability

    Reading Time: 6 minutes

    In the rapidly evolving digital landscape, students are increasingly interacting with AI-powered applications when listening to music, writing assignments, and shopping online. As educators, it’s our responsibility to equip them with the skills to critically evaluate these technologies.

    A woman teacher helps a young person with a coding project.

    A key aspect of this is understanding ‘explainability’ in AI and machine learning (ML) systems. The explainability of a model is how easy it is to ‘explain’ how a particular output was generated. Imagine having a job application rejected by an AI model, or facial recognition technology failing to recognise you — you would want to know why.

    Two teenage girls do coding activities at their laptops in a classroom.

    Establishing standards for explainability is crucial. Otherwise we risk creating a world where decisions impacting our lives are made by opaque systems we don’t understand. Learning about explainability is key for students to develop digital literacy, enabling them to navigate the digital world with informed awareness and critical thinking.

    Why AI explainability is important

    AI models can have a significant impact on people’s lives in various ways. For instance, if a model determines a child’s exam results, parents and teachers would want to understand the reasoning behind it.

    Two learners sharing a laptop in a coding session.

    Artists might want to know if their creative works have been used to train a model and could be at risk of plagiarism. Likewise, coders will want to know if their code is being generated and used by others without their knowledge or consent. If you came across an AI-generated artwork that features a face resembling yours, it’s natural to want to understand how a photo of you was incorporated into the training data. 

    There will also be instances where a model seems to work for some people but is inaccurate for a certain demographic of users. This happened with Twitter's (now X's) face detection model for photos: it did not detect the faces of people with darker skin tones as effectively as those of their lighter-skinned friends and family. Explainability allows us not only to understand but also to challenge the outputs of a model if they are found to be unfair.

    In essence, explainability is about accountability, transparency, and fairness, which are vital lessons for children as they grow up in an increasingly digital world.

    Routes to AI explainability

    Some models, like decision trees, regression curves, and clustering, have an in-built level of explainability. There is a visual way to represent these models, so we can pretty accurately follow the logic implemented by the model to arrive at a particular output.

    By teaching students about AI explainability, we are not only educating them about the workings of these technologies, but also teaching them to expect transparency as they grow to be future consumers or even developers of AI technology.

    A decision tree works like a flowchart, and you can follow the conditions used to arrive at a prediction. Regression curves can be shown on a graph to understand why a particular piece of data was treated the way it was, although this wouldn’t give us insight into exactly why the curve was placed at that point. Clustering is a way of collecting similar pieces of data together to create groups (or clusters); we can then interrogate the model to determine which characteristics were used to create the groupings.

    A decision tree that classifies animals based on their characteristics; you can follow these models like a flowchart
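    To make the flowchart idea concrete, here is a minimal Python sketch of a hypothetical animal-classifying decision tree: the model is just a chain of conditions, so every prediction can be traced step by step. The features and labels here are invented for illustration.

    ```python
    # A tiny, hypothetical decision tree written out as its flowchart of
    # conditions. Real trees are learned from data, but once trained they
    # can be read in exactly this form.

    def classify_animal(has_feathers: bool, lays_eggs: bool) -> tuple[str, list[str]]:
        """Return a prediction plus the exact path of conditions followed."""
        path = []
        if has_feathers:
            path.append("has feathers? yes")
            return "bird", path
        path.append("has feathers? no")
        if lays_eggs:
            path.append("lays eggs? yes")
            return "reptile", path
        path.append("lays eggs? no")
        return "mammal", path

    label, path = classify_animal(has_feathers=False, lays_eggs=True)
    print(label, "via", " -> ".join(path))
    # reptile via has feathers? no -> lays eggs? yes
    ```

    Because the path of conditions is returned alongside the prediction, the reasoning behind any output can be inspected and challenged, which is exactly the property that larger, opaque models lack.
    
    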

    However, the more powerful the model, the less explainable it tends to be. Neural networks, for instance, are notoriously hard to understand — even for their developers. The networks used to generate images or text can contain millions of nodes spread across many layers. Trying to work out what any individual node or layer is doing to the data is extremely difficult.

    Learners in a computing classroom.

    Regardless of the complexity, it is still vital that developers find a way of providing essential information to anyone looking to use their models in an application or to a consumer who might be negatively impacted by the use of their model.

    Model cards for AI models

    One suggested strategy to add transparency to these models is using model cards. When you buy an item of food in a supermarket, you can look at the packaging and find all sorts of nutritional information, such as the ingredients, macronutrients, allergens they may contain, and recommended serving sizes. This information is there to help inform consumers about the choices they are making.

    Model cards attempt to do the same thing for ML models, providing essential information to developers and users of a model so they can make informed choices about whether or not they want to use it.

    Model cards include details such as the developer of the model, the training data used, the accuracy across diverse groups of people, and any limitations the developers uncovered in testing.
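    As a sketch of what this might look like in practice, the hypothetical model card below is expressed as structured data. All names and numbers are invented for illustration; real model cards are usually published as documents rather than code.

    ```python
    # A hypothetical model card as structured data. The fields mirror the
    # details listed above: developer, training data, accuracy across
    # groups of people, and known limitations. Every value is invented.
    from dataclasses import dataclass, field

    @dataclass
    class ModelCard:
        model_name: str
        developer: str
        intended_use: str
        training_data: str
        accuracy_by_group: dict = field(default_factory=dict)
        known_limitations: list = field(default_factory=list)

    card = ModelCard(
        model_name="example-face-detector",
        developer="Example Lab",
        intended_use="Detecting faces in well-lit photographs",
        training_data="Publicly licensed photo dataset (illustrative)",
        accuracy_by_group={"lighter skin tones": 0.97, "darker skin tones": 0.89},
        known_limitations=["Lower accuracy in low light", "Not tested on video"],
    )

    # A gap between groups is exactly the kind of information a consumer
    # should be able to see before deciding whether to use the model.
    gap = max(card.accuracy_by_group.values()) - min(card.accuracy_by_group.values())
    print(f"Accuracy gap across groups: {gap:.2f}")
    ```

    Laying the information out as named fields is one way to show students that a model card is a checklist of questions a consumer can ask, not just marketing text.
    
    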

    Model cards should be accessible to as many people as possible.

    A real-world example of a model card is Google’s Face Detection model card. It details the model’s purpose, architecture, performance across various demographics, and any known limitations. This information helps developers who might want to use the model to assess whether it is fit for their purpose.

    Transparency and accountability in AI

    As the world settles into the new reality of having the amazing power of AI models at our disposal for almost any task, we must teach young people about the importance of transparency and responsibility. 

    An educator points to an image on a student's computer screen.

    As a society, we need to have hard discussions about where and when we are comfortable implementing models and the consequences they might have for different groups of people. By teaching students about explainability, we are not only educating them about the workings of these technologies, but also teaching them to expect transparency as they grow to be future consumers or even developers of AI technology.

    Most importantly, model cards should be accessible to as many people as possible, presenting this information in a clear and understandable way. Model cards are a great way for you to show your students what information is important for people to know about an AI model and why they might want to know it. They can also help students understand the importance of transparency and accountability in AI.


    This article also appears in issue 22 of Hello World, which is all about teaching and AI. Download your free PDF copy now.

    If you’re an educator, you can use our free Experience AI Lessons to teach your learners the basics of how AI works, whatever your subject area.

    Website: LINK

  • AI isn’t just robots: How to talk to young children about AI

    AI isn’t just robots: How to talk to young children about AI

    Reading Time: 5 minutes

    Young children have a unique perspective on the world they live in. They often seem oblivious to what’s going on around them, but then they will ask a question that makes you realise they did get some insight from a news story or a conversation they overheard. This happened to me with a class of ten-year-olds when one boy asked, with complete sincerity and curiosity, “And is that when the zombie apocalypse happened?” He had unknowingly conflated the Great Plague with television depictions of zombies taking over the world.

    How to talk to young people about AI

    Absorbing media and assimilating it into your existing knowledge is a challenge, and this is a concern when the media is full of big, scary headlines about artificial intelligence (AI) taking over the world, stealing jobs, and being sentient. As teachers and parents, you don’t need to know all the details about AI to answer young people’s questions, but you can avoid accidentally introducing alternate conceptions. This article offers some top tips to help you point those inquisitive minds in the right direction.

    AI is not a person

    Technology companies like to anthropomorphise their products and give them friendly names. Why? Because it makes their products seem more endearing and less scary, and makes you more likely to include them in your lives. However, when you think of AI as a human with a name who needs you to say ‘please’ or is ‘there to help you’, you start to make presumptions about how it works, what it ‘knows’, and its morality. This changes what we ask, how much we trust an AI device’s responses, and how we behave when using the device. The device, though, does not ‘see’ or ‘know’ anything; instead, it uses lots of data to make predictions. Think of word association: if I say “bread”, I predict that a lot of people in the UK will think “butter”. Here, I’ve used the data I’ve collected from years of living in this country to predict a reasonable answer. This is all AI devices are doing. 

    [AI] does not ‘see’ or ‘know’ anything; instead, it uses lots of data to make predictions.
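    The bread-and-butter example above can be sketched in a few lines of Python: the “model” simply counts which follow-on word appeared most often in its data and predicts that. The word pairs are invented for illustration; no understanding is involved, only counting.

    ```python
    # A toy sketch of "prediction from data, not understanding": given
    # word pairs collected over time, predict the most common follow-on
    # word. The observed pairs are invented for illustration.
    from collections import Counter

    observed_pairs = [
        ("bread", "butter"), ("bread", "butter"), ("bread", "knife"),
        ("salt", "pepper"), ("salt", "pepper"), ("salt", "water"),
    ]

    def predict_next(word: str) -> str:
        """Pick the follow-on word seen most often after `word`."""
        follow_ons = Counter(nxt for prev, nxt in observed_pairs if prev == word)
        return follow_ons.most_common(1)[0][0]

    print(predict_next("bread"))  # butter
    ```

    Large language models are vastly more sophisticated than this, but the principle is the same: the output is a statistical prediction drawn from data, not the result of seeing or knowing anything.
    
    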

    When talking to young children about AI, try to avoid using pronouns such as ‘she’ or ‘he’. Where possible, avoid giving devices human names, and instead call them “computer”, to reinforce the idea that humans and computers are very different. Let’s imagine that a child in your class says, “Alexa told me a joke at the weekend — she’s funny!” You could respond, “I love using computers to find new jokes! What was it?” This is just a micro-conversation, but with it, you are helping to surreptitiously challenge the child’s perception of Alexa and the role of AI in it.

    Where possible, avoid giving devices human names, and instead call them ‘computer’, to reinforce the idea that humans and computers are very different.

    Another good approach is to remember to keep your emotions separate from computers, so as not to give them human-like characteristics: don’t say that the computer ‘hates’ you, or is ‘deliberately ignoring’ you, and remember that it’s only ‘helpful’ because it was told to be. Language is important, and we need to continually practise avoiding anthropomorphism.

    AI isn’t just robots (actually, it rarely is)

    The media plays a huge role in what we imagine when we talk about AI. For the media, the challenge is how to make lines of code and data inside a computer look exciting and recognisable to their audiences. The answer? Robots! When learners hear about AI taking over the world, it’s easy for them to imagine robots like those you’d find in a Marvel movie. Yet the majority of AI exists within systems they’re already aware of and are using — you might just need to help draw their attention to it.

    Even better than just calling out uses of AI: try to have conversations about when things go wrong and AI systems suggest silly options.

    For example, when using a word processor, you can highlight to learners that the software sometimes predicts what word you want to type next, and that this is an example of the computer using AI. When learners are using streaming services for music or TV and the service predicts something that they might want to watch or listen to next, point out that this is using AI technology. When they see their parents planning a route using a satnav, explain that the satnav system uses data and AI to plan the best route.

    Even better than just calling out uses of AI: try to have conversations about when things go wrong and AI systems suggest silly options. This is a great way to build young people’s critical thinking around the use of computers. AI systems don’t always know best, because they’re just making predictions, and predictions can always be wrong.

    AI complements humans

    There’s a delicate balance between acknowledging the limitations of AI and portraying it as a problematic tool that we shouldn’t use. AI offers us great opportunities to improve the way we work, to get us started on a creative project, or to complete mundane tasks. However, it is just a tool, and tools complement the range of skills that humans already have. For example, if you gave an AI chatbot app the prompt, ‘Write a setting description using these four phrases: dark, scary, forest, fairy tale’, the first output from the app probably wouldn’t make much sense. As a human, though, you’d probably have to do far less work to edit the output than if you had had to write the setting description from scratch. Now, say you had the perfect example of a setting description, but you wanted 29 more examples, a different version for each learner in your class. This is where AI can help: completing a repetitive task and saving time for humans. 

    To help children understand how AI and humans complement each other, ask them the question, ‘What can’t a computer do?’ Answers that I have received before include, ‘Give me a hug’, ‘Make me laugh’, and ‘Paint a picture’, and these are all true. Can Alexa tell you a joke that makes you laugh? Yes — but a human created that joke. The computer is just the way in which it is being shared. Even with AI ‘creating’ new artwork, it is really only using data from something that someone else created. Humans are required. 

    Overall, we must remember that young children are part of a world that uses AI, and that it is likely to be ever more present in the future. We need to ensure that they know how to use AI responsibly, by minimising their alternate conceptions. With our youngest learners, this means taking care with the language you choose and the examples you use, and explaining AI’s role as a tool.

    To help children understand how AI and humans complement each other, ask them the question, ‘What can’t a computer do?’

    These simple approaches are the first steps to empowering children to go on to harness this technology. They also pave the way for you to simply introduce the core concepts of AI in later computing lessons without first having to untangle a web of alternate conceptions.


    This article also appears in issue 22 of Hello World, which is all about teaching and AI. Download your free PDF copy now.

    If you’re an educator, you can use our free Experience AI Lessons to teach your learners the basics of how AI works, whatever your subject area.

    Website: LINK

  • Experience AI: Making AI relevant and accessible

    Experience AI: Making AI relevant and accessible

    Reading Time: 7 minutes

    Google DeepMind’s Aimee Welch discusses our partnership on the Experience AI learning programme and why equal access to AI education is key. This article also appears in issue 22 of Hello World on teaching and AI.

    From AI chatbots to self-driving cars, artificial intelligence (AI) is here and rapidly transforming our world. It holds the potential to solve some of the biggest challenges humanity faces today — but it also has many serious risks and inherent challenges, like reinforcing existing patterns of bias or “hallucinating”, a term that describes AI making up false outputs that do not reflect real events or data.

    A teenager learning computer science.
    Young people need the knowledge and skills to navigate and shape AI.

    Teachers want to build young people’s AI literacy

    As AI becomes an integral part of our daily lives, it’s essential that younger generations gain the knowledge and skills to navigate and shape this technology. Young people who have a foundational understanding of AI are able to make more informed decisions about using AI applications in their daily lives, helping ensure safe and responsible use of the technology. This has been recognised, for example, by the UK government’s AI Council, whose AI Roadmap sets out the goal of ensuring that every child in the UK leaves school with a basic sense of how AI works.

    Learner in a computing classroom.
    Every young person should have access to learning AI literacy.

    But while AI literacy is a key skill in this new era, not every young person currently has access to sufficient AI education and resources. In a recent survey by the EdWeek Research Center in the USA, only one in 10 teachers said they knew enough about AI to teach its basics, and very few reported receiving any professional development related to the topic. Similarly, our work with the Raspberry Pi Computing Education Research Centre has suggested that UK-based teachers are eager to understand more about AI and how to engage their students in the topic.

    Bringing AI education into classrooms

    Ensuring broad access to AI education is also important for improving diversity in the field of AI, which in turn supports the safe and responsible development of the technology. There are currently stark disparities in the field, and these begin early, with school-level barriers contributing to the underrepresentation of certain groups of people. By increasing diversity in AI, we bring diverse values, hopes, and concerns into the design and deployment of the technology — something that’s critical for AI to benefit everyone.

    Kenyan children work on a physical computing project.
    Bringing diverse values into AI is critical.

    By focusing on AI education from a young age, there is an opportunity to break down some of these long-standing barriers. That’s why we partnered with the Raspberry Pi Foundation to co-create Experience AI, a new learning programme with free lesson plans, slide decks, worksheets, and videos, to address gaps in AI education and support teachers in engaging and inspiring young people in the subject.

    The programme aims to help young people aged 11–14 take their first steps in understanding the technology, making it relevant to diverse learners, and encouraging future careers in the field. All Experience AI resources are freely available to every school across the UK and beyond.

    A woman teacher helps a young person with a coding project.
    The Experience AI resources are free for every school.

    The partnership is built on a shared vision to make AI education more inclusive and accessible. Bringing together the Foundation’s expertise in computing education and our cutting-edge technical knowledge and industry insights has allowed us to create a holistic learning experience that connects theoretical concepts and practical applications.

    Experience AI: Informed by AI experts

    A group of 15 research scientists and engineers at Google DeepMind contributed to the development of the lessons. From drafting definitions for key concepts, to brainstorming interesting research areas to highlight, and even featuring in the videos included in the lessons, the group played a key role in shaping the programme in close collaboration with the Foundation’s educators and education researchers.

    Interview for Experience AI at Google DeepMind.
    Interviews with AI scientists and engineers at Google DeepMind are part of Experience AI.

    To bring AI concepts to life, the lessons include interactive activities as well as real-life examples, such as a project where Google DeepMind collaborated with ecologists and conservationists to develop machine learning methods to study the behaviour of an entire animal community in the Serengeti National Park and Grumeti Reserve in Tanzania.

    Elephants in the Serengeti.
    One of the Experience AI lessons focuses on an AI-enabled research project in the Serengeti.

    Member of the working group, Google DeepMind Research Scientist Petar Veličković, shares: “AI is a technology that is going to impact us all, and therefore educating young people on how to interact with this technology is likely going to be a core part of school education going forward. The project was eye-opening and humbling for me, as I learned of the challenges associated with making such a complex topic accessible — not only to every pupil, but also to every teacher! Observing the thoughtful approach undertaken by the Raspberry Pi Foundation left me deeply impressed, and I’m taking home many useful ideas that I hope to incorporate in my own AI teaching efforts going forward.”

    The lessons have been carefully developed to:

    • Follow a clear learning journey, underpinned by the SEAME framework which guides learners sequentially through key concepts and acts as a progression framework.
    • Build foundational knowledge and provide support for teachers. Focus on teacher training and support is at the core of the programme.
    • Embed ethics and responsibility. Crucially, key concepts in AI ethics and responsibility are woven into each lesson and progressively built on. Students are introduced to concepts like data bias, user-focused approaches, model cards, and how AI can be used for social good. 
    • Ensure cultural relevance and inclusion. Experience AI was designed with diverse learners in mind and includes a variety of activities to enable young people to pick topics that most interest them. 

    What teachers say about the Experience AI lessons

    To date, we estimate the resources have reached 200,000+ students in the UK and beyond. We’re thrilled to hear from teachers already using the resources about the impact they are having in the classroom, such as Mrs J Green from Waldegrave School in London, who says: “I thought that the lessons covered a really important topic. Giving the pupils an understanding of what AI is and how it works will become increasingly important as it becomes more ubiquitous in all areas of society. The lessons that we trialled took some of the ‘magic’ out of AI and started to give the students an understanding that AI is only as good as the data that is used to build it. It also started some really interesting discussions with the students around areas such as bias.”

    An educator points to an image on a student's computer screen.
    Experience AI offers support for teachers.

    At North Liverpool Academy, teacher Dave Cross tells us: “AI is such a current and relevant topic in society that [these lessons] will enable Key Stage 3 computing students [ages 11–14] to gain a solid foundation in something that will become more prevalent within the curriculum, and wider subjects too as more sectors adopt AI and machine learning as standard. Our Key Stage 3 computing students now feel immensely more knowledgeable about the importance and place that AI has in their wider lives. These lessons and activities are engaging and accessible to students and educators alike, whatever their specialism may be.”

    A stronger global AI community

    Our hope is that the Experience AI programme instils confidence in both teachers and students, helping to address some of the critical school-level barriers leading to underrepresentation in AI and playing a role in building a stronger, more inclusive AI community where everyone can participate irrespective of their background. 

    Children in a Code Club in India.

    Today’s young people are tomorrow’s leaders — and as such, educating and inspiring them about AI is valuable for everybody.

    Teachers can visit experience-ai.org to download all Experience AI resources for free.

    We are now building a network of educational organisations around the world to tailor and translate the Experience AI resources so that more teachers and students can engage with them and learn key AI literacy skills. Find out more.

    Website: LINK

  • AI literacy for teachers and students all over the world

    AI literacy for teachers and students all over the world

    Reading Time: 5 minutes

    I am delighted to announce that the Raspberry Pi Foundation and Google DeepMind are building a global network of educational organisations to bring AI literacy to teachers and students all over the world, starting with Canada, Kenya, and Romania.

    Learners in a classroom in Kenya.
    Learners around the world will gain AI literacy skills through Experience AI.

    Experience AI 

    We launched Experience AI in September 2022 to help teachers and students learn about AI technologies and how they are changing the world. 

    Developed by the Raspberry Pi Foundation and Google DeepMind, Experience AI provides everything that teachers need to confidently deliver engaging lessons that will inspire and educate young people about AI and the role that it could play in their lives.

    A group of young people investigate computer hardware together.
    Experience AI is designed to inspire learners about AI through real-world contexts.

    We provide lesson plans, classroom resources, worksheets, hands-on activities, and videos that introduce a wide range of AI applications and the underlying technologies that make them work. The materials are designed to be relatable to young people and can be taught by any teacher, whether or not they have a technical background. Alongside the classroom resources, we provide teacher professional development, including an online course that provides an introduction to machine learning and AI. 

    Part of Experience AI are video interviews with AI developers at Google DeepMind.

    The materials are grounded in real-world contexts and emphasise the potential for young people to positively change the world through a mastery of AI technologies. 

    Since launching the first resources, we have seen significant demand from teachers and students all over the world, with over 200,000 students already learning with Experience AI. 

    Experience AI network

    Building on that initial success and in response to huge demand, we are now building a global network of educational organisations to expand the reach and impact of Experience AI by translating and localising the materials, promoting them to schools, and supporting teacher professional development.

    Obum Ekeke OBE, Head of Education Partnerships at Google DeepMind, says:

    “We have been blown away by the interest we have seen in Experience AI since its launch and are thrilled to be working with the Raspberry Pi Foundation and local partners to expand the reach of the programme. AI literacy is a critical skill in today’s world, but not every young person currently has access to relevant education and resources. By making AI education more inclusive, we can help young people make more informed decisions about using AI applications in their daily lives, and encourage safe and responsible use of the technology.”

    Learner in a computing classroom.
    Experience AI helps learners understand how they might use AI to positively change the world.

    Today we are announcing the first three organisations that we are working with, each of which is already doing fantastic work to democratise digital skills in their part of the world. All three are already working in partnership with the Raspberry Pi Foundation and we are excited to be deepening and expanding our collaboration to include AI literacy.

    Digital Moment, Canada

    Digital Moment is a Montreal-based nonprofit focused on empowering young changemakers through digital skills. Founded in 2013, Digital Moment has a track record of supporting teachers and students across Canada to learn about computing, coding, and AI literacy, including through supporting one of the world’s largest networks of Code Clubs.

    Digital Moment logo.

    “We’re excited to be working with the Raspberry Pi Foundation and Google DeepMind to bring Experience AI to teachers across Canada. Since 2018, Digital Moment has been introducing rich training experiences and educational resources to make sure that Canadian teachers have the support to navigate the impacts of AI in education for their students. Through this partnership, we will be able to reach more teachers and with more resources, to keep up with the incredible pace and disruption of AI.”

    Indra Kubicek, President, Digital Moment

    Tech Kidz Africa, Kenya

    Tech Kidz Africa is a Mombasa-based social enterprise that nurtures creativity in young people across Kenya through digital skills including coding, robotics, app and web development, and creative design thinking.

    Tech Kidz Africa logo.

    “With the retooling of teachers as a key objective of Tech Kidz Africa, working with Google DeepMind and the Raspberry Pi Foundation will enable us to build the capacity of educators to empower the 21st century learner, enhancing the teaching and learning experience to encourage innovation and prepare the next generation for the future of work.”

    Grace Irungu, CEO, Tech Kidz Africa

    Asociația Techsoup, Romania

    Asociația Techsoup works with teachers and students across Romania and Moldova, training Computer Science, ICT, and primary school teachers to build their competencies around coding and technology. A longstanding partner of the Raspberry Pi Foundation, they foster a vibrant community of CoderDojos and support young people to participate in Coolest Projects and the European Astro Pi Challenge.

    Asociata Techsoup logo.

    “We are enthusiastic about participating in this global partnership to bring high-quality AI education to all students, regardless of their background. Given the current exponential growth of AI tools and instruments in our daily lives, it is crucial to ensure that students and teachers everywhere comprehend and effectively utilise these tools to enhance their human, civic, and professional potential. Experience AI is the best available method for AI education for middle school students. We couldn’t be more thrilled to work with the Raspberry Pi Foundation and Google DeepMind to make it accessible in Romanian for teachers in Romania and the Republic of Moldova, and to assist teachers in fully integrating it into their classes.”

    Elena Coman, Director of Development, Asociația Techsoup

    Get involved

    These are the first of what will become a global network of organisations supporting tens of thousands of teachers to equip millions of students with a foundational understanding of AI technologies through Experience AI. If you want to get involved in inspiring the next generation of AI leaders, we would love to hear from you.

    Website: LINK