Tag: AI literacy

  • The Experience AI Challenge: Make your own AI project

    The Experience AI Challenge: Make your own AI project

    Reading Time: 4 minutes

We are pleased to announce a new AI-themed challenge for young people: the Experience AI Challenge invites and supports young people aged up to 18 to design and make their own AI applications. This is their chance to get creative with the powerful technology of machine learning. And equally exciting: every young creator will get feedback and encouragement from us at the Raspberry Pi Foundation.

    As you may have heard, we recently launched a series of classroom lessons called Experience AI in partnership with Google DeepMind. The lesson materials make it easy for teachers of all subjects to teach their learners aged up to 18 about artificial intelligence and machine learning. Now the Experience AI Challenge gives young people the opportunity to develop their skills further and build their own AI applications.

    Key information

    • Starts on 08 January 2024
    • Free to take part in
    • Designed for beginners, based on the tools Scratch and Machine Learning for Kids
    • Open for official submissions made by UK-based young people aged up to 18 and their mentors 
    • Young people and their mentors around the world are welcome to access the Challenge resources and make AI projects
    • Tailored resources for young people and mentors to support you to take part
    • Register your interest and we’ll send you a reminder email on the launch day

    The Experience AI Challenge

For the Experience AI Challenge, you and the young people you work with will learn how to make a machine learning (ML) classifier that sorts data such as audio, text, or images into different groups that you specify.

    A girl points excitedly at a project on the Raspberry Pi Foundation's projects site.

    The Challenge resources show young people the basic principles of using the tools and training ML models. Then they will use these new skills to create their own projects, and it’s a chance for their imaginations to run free. Here are some examples of projects your young tech creators could make:

    • An instrument classifier to identify the type of musical instrument being played in pieces of music
    • An animal sound identifier to determine which animal is making a particular sound
    • A voice command recogniser to detect voice commands like ‘stop’, ‘go’, ‘left’, and ‘right’
    • A photo classifier to identify what kind of food is shown in a photograph

    All creators will receive expert feedback on their projects.

    To make the Experience AI Challenge as familiar and accessible as possible for young people who may be new to coding, we designed it for beginners. We chose the free, easy-to-use, online tool Machine Learning for Kids for young people to train their machine learning models, and Scratch as the programming environment for creators to code their projects. If you haven’t used these tools before, don’t worry. The Challenge resources will provide all the support you need to get up to speed.
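
To give you a feel for what is going on under the hood, here is a minimal sketch in Python using scikit-learn. It is purely illustrative and not part of the Challenge materials, which use Machine Learning for Kids and Scratch rather than Python, and the training phrases below are made up for the example. The idea is the same, though: a classifier is trained on labelled examples and then asked to predict the label of data it has never seen.

# A minimal, illustrative sketch of training a classifier on labelled
# examples and using it to classify new data. The Experience AI Challenge
# itself uses Machine Learning for Kids and Scratch, not Python.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labelled training examples: short phrases and the command they represent
phrases = [
    "please stop now", "stop the robot", "halt and stop",
    "go forward", "start going", "go go go",
    "turn left please", "take a left", "left turn",
    "turn right", "take a right now", "right turn ahead",
]
labels = ["stop"] * 3 + ["go"] * 3 + ["left"] * 3 + ["right"] * 3

# Turn each phrase into word counts, then train a simple Naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(phrases, labels)

# Ask the trained model to classify phrases it has never seen before
for phrase in ["robot, please go", "could you stop"]:
    print(phrase, "->", model.predict([phrase])[0])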

    Training an ML model and creating a project with it teaches many skills beyond coding, including computational thinking, ethical programming, data literacy, and developing a broader understanding of the influence of AI on society.

    The three Challenge stages

    Our resources for creators and mentors walk you through the three stages of the Experience AI Challenge.

    Stage 1: Explore and discover

    The first stage of the Challenge is designed to ignite young people’s curiosity. Through our resources, mentors let participants explore the world of AI and ML and discover how these technologies are revolutionising industries like healthcare and entertainment.

    Stage 2: Get hands-on

    In the second stage, young people choose a data type and embark on a guided example project. They create a training dataset, train an ML model, and develop a Scratch application as the user interface for their model. 

    Stage 3: Design and create

    In the final stage, mentors support young people to apply what they’ve learned to create their own ML project that addresses a problem they’re passionate about. They submit their projects to us online and receive feedback from our expert panel.

    Things to do today

    1. Visit our new Experience AI Challenge homepage to find out more details
    2. Register your interest so you receive a reminder email on launch day, 8 January
    3. Get your young people excited and thinking about what kind of AI project they might like to create

    We can’t wait to see how you and your young creators choose to engage with the Experience AI Challenge!

    Website: LINK

  • Hello World #22 out now: Teaching and AI

    Hello World #22 out now: Teaching and AI

    Reading Time: 2 minutes

    Recent developments in artificial intelligence are changing how the world sees computing and challenging computing educators to rethink their approach to teaching. In the brand-new issue of Hello World, out today for free, we tackle some big questions about AI and computing education. We also get practical with resources for your classroom.

    Cover of Hello World issue 22.

    Teaching and AI

In their articles for issue 22, educators explore a range of topics related to teaching and AI, including what AI literacy is and how we teach it; gender bias in AI and what we can do about it; how to speak to young children about AI; and why anthropomorphism hinders learners’ understanding of AI.

    Our feature articles also include a research digest on AI ethics for children, and of course hands-on examples of AI lessons for your learners.

    A snapshot of AI education

    Hello World issue 22 is a comprehensive snapshot of the current landscape of AI education. Ben Garside, Learning Manager for our Experience AI programme and guest editor of this issue, says:

    “When I was teaching in the classroom, I used to enjoy getting to grips with new technological advances and finding ways in which I could bring them into school and excite the students I taught. Occasionally, during the busiest of times, I’d also look longingly at other subjects and be jealous that their curriculum appeared to be more static than ours (probably a huge misconception on my behalf).”

    “It’s inspiring for me to see how the education community is reacting to the opportunities that AI can provide. Of course, there are elements of AI where we need to tread carefully and be very cautious in our approach, but what you’ll see in this magazine is educators who are thinking creatively in this space.”

    Download Hello World issue 22 for free

    AI is a topic we’ve addressed before in Hello World, and we’ll keep covering this rapidly evolving area in future. We hope this issue gives you plenty of ideas to take away and build upon.

    Also in issue 22:

    • Vocational training for young people
    • Making the most of online educator training
    • News about BBC micro:bit
    • An insight into the WiPSCE 2023 conference for teachers and educators
    • And much, much more

You can download your free PDF issue now, or purchase a print copy from our store. UK-based subscribers to the free print edition can expect their copies to arrive in the mail this week.

    Send us a message or tag us on social media to let us know which articles have made you think and, most importantly, which will help you with your teaching.

    Website: LINK

  • What does AI mean for computing education?

    What does AI mean for computing education?

    Reading Time: 9 minutes

    It’s been less than a year since ChatGPT catapulted generative artificial intelligence (AI) into mainstream public consciousness, reigniting the debate about the role that these powerful new technologies will play in all of our futures.

    ‘Will AI save or destroy humanity?’ might seem like an extreme title for a podcast, particularly if you’ve played with these products and enjoyed some of their obvious limitations. The reality is that we are still at the foothills of what AI technology can achieve (think World Wide Web in the 1990s), and lots of credible people are predicting an astonishing pace of progress over the next few years, promising the radical transformation of almost every aspect of our lives. Comparisons with the Industrial Revolution abound.

    At the same time, there are those saying it’s all moving too fast; that regulation isn’t keeping pace with innovation. One of the UK’s leading AI entrepreneurs, Mustafa Suleyman, said recently: “If you don’t start from a position of fear, you probably aren’t paying attention.”

    In a computing classroom, a girl looks at a computer screen.
    What is AI literacy for young people?

    What does all this mean for education, and particularly for computing education? Is there any point trying to teach children about AI when it is all changing so fast? Does anyone need to learn to code anymore? Will teachers be replaced by chatbots? Is assessment as we know it broken?

    If we’re going to seriously engage with these questions, we need to understand that we’re talking about three different things:

    1. AI literacy: What it is and how we teach it
    2. Rethinking computer science (and possibly some other subjects)
    3. Enhancing teaching and learning through AI-powered technologies

    AI literacy: What it is and how we teach it

    For young people to thrive in a world that is being transformed by AI systems, they need to understand these technologies and the role they could play in their lives.

    In a computing classroom, a smiling girl raises her hand.
    Our SEAME model articulates the concepts, knowledge, and skills that are essential ingredients of any AI literacy curriculum.

    The first problem is defining what AI literacy actually means. What are the concepts, knowledge, and skills that it would be useful for a young person to learn?

In the past couple of years there has been a huge explosion in resources that claim to help young people develop AI literacy. Our research team has mapped and categorised over 500 resources, and undertaken a systematic literature review to understand what research has been done on K–12 AI classroom interventions (spoiler: not much).

    The reality is that — with a few notable exceptions — the vast majority of AI literacy resources available today are probably doing more harm than good. For example, in an attempt to be accessible and fun, many materials anthropomorphise AI systems, using human terms to describe them and their functions and thereby perpetuating misconceptions about what AI systems are and how they work.

    What emerged from this work at the Raspberry Pi Foundation is the SEAME model, which articulates the concepts, knowledge, and skills that are essential ingredients of any AI literacy curriculum. It separates out the social and ethical, application, model, and engine levels of AI systems — all of which are important — and gets specific about age-appropriate learning outcomes for each. 

This research has formed the basis of Experience AI (experience-ai.org), a suite of resources, lesson plans, videos, and interactive learning experiences created by the Raspberry Pi Foundation in partnership with Google DeepMind, which is already being used in thousands of classrooms.

    Defining AI literacy and developing resources is part of the challenge, but that doesn’t solve the problem of how we get them into the hands and minds of every young person. This will require policy change. We need governments and education system leaders to grasp that a foundational understanding of AI technologies is essential for creating economic opportunity, ensuring that young people have the mindsets to engage positively with technological change, and avoiding a widening of the digital divide. We’ve messed this up before with digital skills. Let’s not do it again.

    Two smiling adults learn about computing at desktop computers.
    Teacher professional development is key to AI literacy for young people.

    More than anything, we need to invest in teachers and their professional development. While there are some fantastic computing teachers with computer science qualifications, the reality is that most of the computing lessons taught anywhere on the planet are taught by a non-specialist teacher. That is even more so the case for anything related to AI. If we’re serious about AI literacy for young people, we have to get serious about AI literacy for teachers. 

    Rethinking computer science 

    Alongside introducing AI literacy, we also need to take a hard look at computer science. At the very least, we need to make sure that computer science curricula include machine learning models, explaining how they constitute a new paradigm for computing, and give more emphasis to the role that data will play in the future of computing. Adding anything new to an already packed computer science curriculum means tough choices about what to deprioritise to make space.

    Elephants in the Serengeti.
One of our Experience AI Lessons revolves around the use of AI technology to study the Serengeti ecosystem.

    And, while we’re reviewing curricula, what about biology, geography, or any of the other subjects that are just as likely to be revolutionised by big data and AI? As part of Experience AI, we are launching some of the first lessons focusing on ecosystems and AI, which we think should be at the heart of any modern biology curriculum. 

    There is already a lively debate about the extent to which the new generation of AI technologies will make programming as we know it obsolete. In January, the prestigious ACM journal ran an opinion piece from Matt Welsh, founder of an AI-powered programming start-up, in which he said: “I believe the conventional idea of ‘writing a program’ is headed for extinction, and indeed, for all but very specialised applications, most software, as we know it, will be replaced by AI systems that are trained rather than programmed.”

    Computer science students at a desktop computer in a classroom.
    Writing computer programs is an essential part of learning how to analyse problems in computational terms.

    With GitHub (now part of Microsoft) claiming that their pair programming technology, Copilot, is now writing 46 percent of developers’ code, it’s perhaps not surprising that some are saying young people don’t need to learn how to code. It’s an easy political soundbite, but it just doesn’t stand up to serious scrutiny. 

Even if AI systems can improve to the point where they generate consistently reliable code, it seems to me that it is just as likely that this will increase the demand for more complex software, leading to greater demand for more programmers. There is historical precedent for this: the invention of high-level programming languages such as Python dramatically simplified the act of humans providing instructions to computers, leading to more complex software and a much greater demand for developers.

    A child codes a Spiderman project at a laptop during a Code Club session.
    Learning to program will help young people understand how the world around them is being transformed by AI systems.

    However these AI-powered tools develop, it will still be essential for young people to learn the fundamentals of programming and to get hands-on experience of writing code as part of any credible computer science course. Practical experience of writing computer programs is an essential part of learning how to analyse problems in computational terms; it brings the subject to life; it will help young people understand how the world around them is being transformed by AI systems; and it will ensure that they are able to shape that future, rather than it being something that is done to them.

    Enhancing teaching and learning through AI-powered technologies

    Technology has already transformed learning. YouTube is probably the most important educational innovation of the past 20 years, democratising both the creation and consumption of learning resources. Khan Academy, meanwhile, integrated video instruction into a learning experience that gamified formative assessment. Our own edtech platform, Ada Computer Science, combines comprehensive instructional materials, a huge bank of questions designed to help learning, and automated marking and feedback to make computer science easier to teach and learn. Brilliant though these are, none of them have even begun to harness the potential of AI systems like large language models (LLMs).

    One area where I think we’ll see huge progress is feedback. It’s well-established that good-quality feedback makes a huge difference to learning, but a teacher’s ability to provide feedback is limited by their time. No one is seriously claiming that chatbots will replace teachers, but — if we can get the quality right — LLM applications could provide every child with unlimited, on-demand feedback. AI-powered feedback — not giving students the answers, but coaching, suggesting, and encouraging in the way that great teachers already do — could be transformational.

    Two adults learn about computing at desktop computers.
    The challenge for all of us working in education is how we ensure that ethics and privacy are at the centre of the development of AI-powered edtech.

    We are already seeing edtech companies racing to bring new products and features to market that leverage LLMs, and my prediction is that the pace of that innovation is going to increase exponentially over the coming years. The challenge for all of us working in education is how we ensure that ethics and privacy are at the centre of the development of these technologies. That’s important for all applications of AI, but especially so in education, where these systems will be unleashed directly on young people. How much data from students will an AI system need to access? Can that data — aggregated from millions of students — be used to train new models? How can we communicate transparently the limitations of the information provided back to students?

Ultimately, we need to think about how parents, teachers, and education systems (the purchasers of edtech products) will be able to make informed choices about what to put in front of students. Standards will have an important role to play here, and I think we should be exploring ideas such as an AI kitemark for edtech products that communicates whether they meet a set of standards around bias, transparency, and privacy.

    Realising potential in a brave new world

    We may very well be entering an era in which AI systems dramatically enhance the creativity and productivity of humanity as a species. Whether the reality lives up to the hype or not, AI systems are undoubtedly going to be a big part of all of our futures, and we urgently need to figure out what that means for education, and what skills, knowledge, and mindsets young people need to develop in order to realise their full potential in that brave new world. 

    That’s the work we’re engaged in at the Raspberry Pi Foundation, working in partnership with individuals and organisations from across industry, government, education, and civil society.

    If you have ideas and want to get involved in shaping the future of computing education, we’d love to hear from you.


    This article will also appear in issue 22 of Hello World magazine, which focuses on teaching and AI. We are publishing this new issue on Monday 23 October. Sign up for a free digital subscription to get the PDF straight to your inbox on the day.

    Website: LINK

  • Experience AI: Teach about AI, chatbots, and biology

    Experience AI: Teach about AI, chatbots, and biology

    Reading Time: 4 minutes

    New artificial intelligence (AI) tools have had a profound impact on many areas of our lives in the past twelve months, including on education. Teachers and schools have been exploring how AI tools can transform their work, and how they can teach their learners about this rapidly developing technology. As enabling all schools and teachers to help their learners understand computing and digital technologies is part of our mission, we’ve been working hard to support educators with high-quality, free teaching resources about AI through Experience AI, our learning programme in partnership with Google DeepMind.

    ""

    In this article, we take you through the updates we’ve made to the Experience AI Lessons based on teachers’ feedback, reveal two new lessons on large language models (LLMs) and biology, and give you the chance to shape the future of the Experience AI programme. 

    Updated lessons based on your feedback

    In April we launched the first Experience AI Lessons as a unit of six lessons for secondary school students (ages 11 to 14, Key Stage 3) that gives you everything you need to teach AI, including lesson plans, slide decks, worksheets, and videos. Since the launch, we’ve worked closely with teachers and learners to make improvements to the lesson materials.

    The first big update you’ll see now is an additional project for students to do across Lesson 5 and Lesson 6. Before, students could choose between two projects to create their own machine learning model, either to classify data from the world’s oceans or to identify fake news. The new project we’ve added gives students the chance to use images to train a machine learning model to identify whether or not an item is biodegradable and therefore suitable to be put in a food waste bin.

    Two teenagers sit at laptops and do coding activities.

    Our second big update is a new set of teacher-focused videos that summarise each lesson and highlight possible talking points. We hope these videos will help you feel confident and ready to deliver the Experience AI Lessons to your learners.

    A new lesson on large language models

    As well as updating the six existing lessons, we’ve just released a new seventh lesson consisting of a set of activities to help students learn about the capabilities, opportunities, and downsides of LLMs, the models that AI chatbots are based on.

    With the LLM lesson’s activities you can help your learners to:

    • Explore the purpose and functionality of LLMs and examine the critical aspect of trustworthiness of these models’ outputs
    • Examine the reasons why the output of LLMs may not always be reliable and understand that LLMs are machines that make predictions
    • Compare LLMs to other technologies to assess their suitability for different purposes
    • Evaluate the appropriateness of using LLMs in a variety of authentic scenarios
    A slide from an Experience AI Lesson about large language models.
    An example activity in our new LLM unit.

    All Experience AI Lessons are designed to be cross-curricular, and for England-based teachers, the LLM lesson is particularly useful for teaching PSHE (Personal, Social, Health and Economic education).

    The LLM lesson is designed as a set of five 10-minute activities, so you have the flexibility to teach the material as a single lesson or over a number of sessions. While we recommend that you teach the activities in the order they come, you can easily adapt them for your learners’ interests and needs. Feel free to take longer than our recommended time and have fun with them.
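
A key idea running through these activities is that LLMs are machines that make predictions based on patterns in the text they were trained on. If you would like a concrete, low-tech way to illustrate that idea with your learners, here is a toy sketch in Python (our own illustration, not part of the lesson materials): it ‘predicts’ the next word simply by counting which words follow which in a short piece of text. Real LLMs are enormously more sophisticated, but the underlying principle of predicting likely continuations is the same.

# A toy next-word predictor: count which word follows which in some text,
# then 'predict' the most frequent follower. This only illustrates the
# idea that language models make predictions from data.
from collections import Counter, defaultdict

text = (
    "the cat sat on the mat the cat chased the mouse "
    "the cat ran under the mat"
)
words = text.split()

# For each word, count how often each other word follows it
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

# 'Predict' the word most likely to follow 'the'
prediction = next_word_counts["the"].most_common(1)[0][0]
print("After 'the', the model predicts:", prediction)
print("All counts:", dict(next_word_counts["the"]))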

    A new lesson on biology: AI for the Serengeti

    We have also been working on an exciting new lesson to introduce AI to secondary school students (ages 11 to 14, Key Stage 3) in the biology classroom. This stand-alone lesson focuses on how AI can help conservationists with monitoring an ecosystem in the Serengeti.

    Elephants in the Serengeti.

    We worked alongside members of the Biology Education Research Group (BERG) at the UK’s Royal Society of Biology to make sure the lesson is relevant and accessible for Key Stage 3 teachers and their learners.

    Register your interest if you would like to be one of the first teachers to try out this thought-provoking lesson.  

    Webinars to support your teaching

    If you want to use the Experience AI materials but would like more support, our new webinar series will help you. You will get your questions answered by the people who created the lessons. Our first webinar covered the six-lesson unit and you can watch the recording now:

https://www.youtube.com/watch?v=-65B3q-Jtok

    September’s webinar: How to use Machine Learning for Kids

    Join us to learn how to use Machine Learning for Kids (ML4K), a child-friendly tool for training AI models that is used for project work throughout the Experience AI Lessons. The September webinar will be with Dale Lane, who has spent his career developing AI technology and is the creator of ML4K.

    Help shape the future of AI education

    We need your feedback like a machine learning model needs data. Here are two ways you can share your thoughts:

    1. Fill in our form to tell us how you’ve used the Experience AI materials.
    2. Become part of our teacher feedback panel. We meet every half term, and our first session will be held mid-October. Email us to register your interest and we’ll be in touch.

    To find out more about how you can use Experience AI to teach AI and machine learning to your learners this school year, visit the Experience AI website.

    Website: LINK

  • How we’re learning to explain AI terms for young people and educators

    How we’re learning to explain AI terms for young people and educators

    Reading Time: 6 minutes

    What do we talk about when we talk about artificial intelligence (AI)? It’s becoming a cliche to point out that, because the term “AI” is used to describe so many different things nowadays, it’s difficult to know straight away what anyone means when they say “AI”. However, it’s true that without a shared understanding of what AI and related terms mean, we can’t talk about them, or educate young people about the field.

    A group of young people demonstrate a project at Coolest Projects.

    So when we started designing materials for the Experience AI learning programme in partnership with leading AI unit Google DeepMind, we decided to create short explanations of key AI and machine learning (ML) terms. The explanations are doubly useful:

    1. They ensure that we give learners and teachers a consistent and clear understanding of the key terms across all our Experience AI resources. Within the Experience AI Lessons for Key Stage 3 (age 11–14), these key terms are also correlated to the target concepts and learning objectives presented in the learning graph. 
    2. They help us talk about AI and AI education in our team. Thanks to sharing an understanding of what terms such as “AI”, “ML”, “model”, or “training” actually mean and how to best talk about AI, our conversations are much more productive.

    As an example, here is our explanation of the term “artificial intelligence” for learners aged 11–14:

    Artificial intelligence (AI) is the design and study of systems that appear to mimic intelligent behaviour. Some AI applications are based on rules. More often now, AI applications are built using machine learning that is said to ‘learn’ from examples in the form of data. For example, some AI applications are built to answer questions or help diagnose illnesses. Other AI applications could be built for harmful purposes, such as spreading fake news. AI applications do not think. AI applications are built to carry out tasks in a way that appears to be intelligent.

    You can find 32 explanations in the glossary that is part of the Experience AI Lessons. Here’s an insight into how we arrived at the explanations.

    Reliable sources

In order to ensure the explanations are as precise as possible, we first identified a set of reliable sources to base them on.

    Explaining AI terms to Key Stage 3 learners: Some principles

    Vocabulary is an important part of teaching and learning. When we use vocabulary correctly, we can support learners to develop their understanding. If we use it inconsistently, this can lead to alternate conceptions (misconceptions) that can interfere with learners’ understanding. You can read more about this in our Pedagogy Quick Read on alternate conceptions.

    Some of our principles for writing explanations of AI terms were that the explanations need to: 

    • Be accurate
    • Be grounded in education research best practice
    • Be suitable for our target audience (Key Stage 3 learners, i.e. 11- to 14-year-olds)
    • Be free of terms that have alternative meanings in computer science, such as “algorithm”

    We engaged in an iterative process of writing explanations, gathering feedback from our team and our Experience AI project partners at Google DeepMind, and adapting the explanations. Then we went through the feedback and adaptation cycle until we all agreed that the explanations met our principles.

    A real banana and an image of a banana shown on the screen of a laptop are both labelled "Banana".
    Image: Max Gruber / Better Images of AI / Ceci n’est pas une banane / CC-BY 4.0

    An important part of what emerged as a result, aside from the explanations of AI terms themselves, was a blueprint for how not to talk about AI. One aspect of this is avoiding anthropomorphism, detailed by Ben Garside from our team here.

As part of designing the Experience AI Lessons, creating the explanations helped us to:

    • Decide which technical details we needed to include when introducing AI concepts in the lessons
    • Figure out how to best present these technical details
    • Settle debates about where it would be appropriate, given our understanding and our learners’ age group, to abstract or leave out details

    Using education research to explain AI terms

    One of the ways education research informed the explanations was that we used semantic waves to structure each term’s explanation in three parts: 

    1. Top of the wave: The first one or two sentences are a high-level abstract explanation of the term, kept as short as possible, while introducing key words and concepts.
    2. Bottom of the wave: The middle part of the explanation unpacks the meaning of the term using a common example, in a context that’s familiar to a young audience. 
    3. Top of the wave: The final one or two sentences repack what was explained in the example in a more abstract way again to reconnect with the term. The end part should be a repeat of the top of the wave at the beginning of the explanation. It should also add further information to lead to another concept. 

    Most explanations also contain ‘middle of the wave’ sentences, which add additional abstract content, bridging the ‘bottom of the wave’ concrete example to the ‘top of the wave’ abstract content.

    Here’s the “artificial intelligence” explanation broken up into the parts of the semantic wave:

    • Artificial intelligence (AI) is the design and study of systems that appear to mimic intelligent behaviour. (top of the wave)
    • Some AI applications are based on rules. More often now, AI applications are built using machine learning that is said to ‘learn’ from examples in the form of data. (middle of the wave)
• For example, some AI applications are built to answer questions or help diagnose illnesses. Other AI applications could be built for harmful purposes, such as spreading fake news. (bottom of the wave)
    • AI applications do not think. (middle of the wave)
    • AI applications are built to carry out tasks in a way that appears to be intelligent. (top of the wave)
    Our "artificial intelligence" explanation broken up into the parts of the semantic wave.
    Our “artificial intelligence” explanation broken up into the parts of the semantic wave. Red = top of the wave; yellow = middle of the wave; green = bottom of the wave

    Was it worth our time?

    Some of the explanations went through 10 or more iterations before we agreed they were suitable for publication. After months of thinking about, writing, correcting, discussing, and justifying the explanations, it’s tempting to wonder whether I should have just prompted an AI chatbot to generate the explanations for me.

An artwork contrasting a photograph of a tree with two simplified versions of it, generated by a decision tree model trained to predict pixel colour values from the original photo.
Rens Dimmendaal & Johann Siemens / Better Images of AI / Decision Tree reversed / CC-BY 4.0

    I tested this idea by getting a chatbot to generate an explanation of “artificial intelligence” using the prompt “Explain what artificial intelligence is, using vocabulary suitable for KS3 students, avoiding anthropomorphism”. The result included quite a few inconsistencies with our principles, as well as a couple of technical inaccuracies. Perhaps I could have tweaked the prompt for the chatbot in order to get a better result. However, relying on a chatbot’s output would mean missing out on some of the value of doing the work of writing the explanations in collaboration with my team and our partners.

The visible result of that work is the explanations themselves. The invisible result is the knowledge we all gained, and the coherence we reached as a team, both of which enabled us to create high-quality resources for Experience AI. We wouldn’t have known which resources we wanted to write without writing the explanations ourselves and improving them over and over. So yes, it was worth our time.

    What do you think about the explanations?

    The process of creating and iterating the AI explanations highlights how opaque the field of AI still is, and how little we yet know about how best to teach and learn about it. At the Raspberry Pi Foundation, we now know just a bit more about that and are excited to share the results with teachers and young people.

    You can access the Experience AI Lessons and the glossary with all our explanations at experience-ai.org. The glossary of AI explanations is just in its first published version: we will continue to improve it as we find out more about how to best support young people to learn about this field.

    Let us know what you think about the explanations and whether they’re useful in your teaching. Onwards with the exciting work of establishing how to successfully engage young people in learning about and creating with AI technologies.

    Website: LINK

  • Experience AI: The excitement of AI in your classroom

    Experience AI: The excitement of AI in your classroom

    Reading Time: 4 minutes

    We are delighted to announce that we’ve launched Experience AI, our new learning programme to help educators to teach, inspire, and engage young people in the subject of artificial intelligence (AI) and machine learning (ML).

    Experience AI is a new educational programme that offers cutting-edge secondary school resources on AI and machine learning for teachers and their students. Developed in partnership by the Raspberry Pi Foundation and DeepMind, the programme aims to support teachers in the exciting and fast-moving area of AI, and get young people passionate about the subject.

Video: https://www.youtube.com/watch?v=_YiauUzfxrQ

    The importance of AI and machine learning education

    Artificial intelligence and machine learning applications are already changing many aspects of our lives. From search engines, social media content recommenders, self-driving cars, and facial recognition software, to AI chatbots and image generation, these technologies are increasingly common in our everyday world.

    Young people who understand how AI works will be better equipped to engage with the changes AI applications bring to the world, to make informed decisions about using and creating AI applications, and to choose what role AI should play in their futures. They will also gain critical thinking skills and awareness of how they might use AI to come up with new, creative solutions to problems they care about.

    The AI applications people are building today are predicted to affect many career paths. In 2020, the World Economic Forum estimated that AI would replace some 85 million jobs by 2025 and create 97 million new ones. Many of these future jobs will require some knowledge of AI and ML, so it’s important that young people develop a strong understanding from an early age.

    A group of young people investigate computer hardware together.
     Develop a strong understanding of the concepts of AI and machine learning with your learners.

    Experience AI Lessons

    Something we get asked a lot is: “How do I teach AI and machine learning with my class?”. To answer this question, we have developed a set of free lessons for secondary school students (age 11 to 14) that give you everything you need including lesson plans, slide decks, worksheets, and videos.

    The lessons focus on relatable applications of AI and are carefully designed so that teachers in a wide range of subjects can use them. You can find out more about how we used research to shape the lessons and how we aim to avoid misconceptions about AI.

    The lessons are also for you if you’re an educator or volunteer outside of a school setting, such as in a coding club.

    The six lessons

1. What is AI?: Learners explore the current context of artificial intelligence (AI) and how it is used in the world around them. Looking at the differences between rule-based and data-driven approaches to programming, they consider the benefits and challenges that AI could bring to society.
2. How computers learn: Learners focus on the role of data-driven models in AI systems. They are introduced to machine learning and find out about three common approaches to creating ML models. Finally, the learners explore classification, a specific application of ML.
3. Bias in, bias out: Learners create their own machine learning model to classify images of apples and tomatoes. They discover that a limited dataset is likely to lead to a flawed ML model. Then they explore how bias can appear in a dataset, resulting in biased predictions produced by an ML model.
4. Decision trees: Learners take their first in-depth look at a specific type of machine learning model: decision trees. They see how different training datasets result in the creation of different ML models, experiencing first-hand what the term ‘data-driven’ means (there is a short illustrative sketch of this idea after this list).
5. Solving problems with ML models: Learners are introduced to the AI project lifecycle and use it to create a machine learning model. They apply a human-focused approach to working on their project, train an ML model, and finally test their model to find out its accuracy.
6. Model cards and careers: Learners finish the AI project lifecycle by creating a model card to explain their machine learning model. To finish off the unit, they explore a range of AI-related careers, hear from people working in AI research at DeepMind, and explore how they might apply AI and ML to their interests.
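
For lesson 4, if you would like to see the ‘data-driven’ idea outside the classroom tools, here is a small illustrative sketch in Python using scikit-learn (not part of the lesson materials, which use Machine Learning for Kids, and with made-up feature values): training the same kind of model on two different datasets produces two different decision trees.

# Training the same model type on different datasets gives different trees.
# The feature values below are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each example is [weight in grams, smooth skin (1) or fuzzy skin (0)]
features_a = [[150, 1], [170, 1], [140, 0], [130, 0]]
labels_a = ["apple", "apple", "peach", "peach"]

features_b = [[150, 1], [90, 1], [160, 1], [80, 1]]
labels_b = ["apple", "plum", "apple", "plum"]

for name, features, labels in [("dataset A", features_a, labels_a),
                               ("dataset B", features_b, labels_b)]:
    tree = DecisionTreeClassifier().fit(features, labels)
    print(f"Tree trained on {name}:")
    print(export_text(tree, feature_names=["weight", "smooth_skin"]))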

    As part of this exciting first phase, we’re inviting teachers to participate in research to help us further develop the resources. All you need to do is sign up through our website, download the lessons, use them in your classroom, and give us your valuable feedback.

    An educator points to an image on a student's computer screen.
     Ben Garside, one of our lead educators working on Experience AI, takes a group of students through one of the new lessons.

    Support for teachers

    We’ve designed the Experience AI lessons with teacher support in mind, and so that you can deliver them to your learners aged 11 to 14 no matter what your subject area is. Each of the lesson plans includes a section that explains new concepts, and the slide decks feature embedded videos in which DeepMind’s AI researchers describe and bring these concepts to life for your learners.

    We will also be offering you a range of new teacher training opportunities later this year, including a free online CPD course — Introduction to AI and Machine Learning — and a series of AI-themed webinars.

    Tell us your feedback

    We will be inviting schools across the UK to test and improve the Experience AI lessons through feedback. We are really looking forward to working with you to shape the future of AI and machine learning education.

    Visit the Experience AI website today to get started.

    Website: LINK

  • AI education resources: What do we teach young people?

    AI education resources: What do we teach young people?

    Reading Time: 6 minutes

    People have many different reasons to think that children and teenagers need to learn about artificial intelligence (AI) technologies. Whether it’s that AI impacts young people’s lives today, or that understanding these technologies may open up careers in their future — there is broad agreement that school-level education about AI is important.

    A young person writes Python code.

    But how do you actually design lessons about AI, a technical area that is entirely new to young people? That was the question we needed to answer as we started Experience AI, our exciting collaboration with DeepMind, a leading AI company.

    Our approach to developing AI education resources

    As part of Experience AI, we are creating a free set of lesson resources to help teachers introduce AI and machine learning (ML) to KS3 students (ages 11 to 14). In England this area is not currently part of the national curriculum, but it’s starting to appear in all sorts of learning materials for young people. 

    Two learners and a teacher in a physical computing lesson.

    While developing the six Experience AI lessons, we took a research-informed approach. We built on insights from the series of research seminars on AI and data science education we had hosted in 2021 and 2022, and on research we ourselves have been conducting at the Raspberry Pi Computing Education Research Centre.

    As part of this research, we reviewed over 500 existing resources that are used to teach AI and ML. We found that the vast majority of them were one-off activities, and many claimed to be appropriate for learners of any age. There were very few sets of lessons, or units of work, that were tailored to a specific age group. Activities often had vague learning objectives, or none at all. We rarely found associated assessment activities. These were all shortcomings we wanted to avoid in our set of lessons.

    To analyse the content of AI education resources, we use a simple framework called SEAME. This framework is based on work I did in 2018 with Professor Paul Curzon at Queen Mary University of London, running professional development for educators on teaching machine learning.

    The SEAME framework gives you a simple way to group learning objectives and resources related to teaching AI and ML, based on whether they focus on social and ethical aspects (SE), applications (A), models (M), or engines (E, i.e. how AI works). We hope that it will be a useful tool for anyone who is interested in looking at resources to teach AI. 

    What do AI education resources focus on?

    The four levels of the SEAME framework do not indicate a hierarchy or sequence. Instead, they offer a way for teachers, resource developers, and researchers to talk about the focus of AI learning activities.

    Social and ethical aspects (SE)

    The SE level covers activities that relate to the impact of AI on everyday life, and to its implications for society. Learning objectives and their related resources categorised at this level introduce students to issues such as privacy or bias concerns, the impact of AI on employment, misinformation, and the potential benefits of AI applications.

    A slide from a lesson about AI that describes an AI application related to timetables.
    An example activity in the Experience AI lessons where learners think about the social and ethical issues of an AI application that predicts what subjects they might want to study. This activity is mostly focused on the social and ethical level of the SEAME framework, but also links to the applications and models levels.

    Applications (A)

    The A level refers to activities related to applications and systems that use AI or ML models. At this level, learners do not learn how to train models themselves, or how such models work. Learning objectives at this level include knowing a range of AI applications and starting to understand the difference between rule-based and data-driven approaches to developing applications.

    Models (M)

    The M level concerns the models underlying AI and ML applications. Learning objectives at this level include learners understanding the processes used to train and test models. For example, through resources focused on the M level, students could learn about the different learning paradigms of ML (i.e., supervised, unsupervised, or reinforcement learning).

    A slide from a lesson about AI that describes an ML model to classify animals.
    An example activity in the Experience AI lessons where students learn about classification. This activity is mostly focused on the models level of the SEAME framework, but also links to the social and ethical and the applications levels.

    Engines (E)

    The E level is related to the engines that make AI models work. This is the most hidden and complex level, and for school-aged learners may need to be taught using unplugged activities and visualisations. Learning objectives could include understanding the basic workings of systems such as data-driven decision trees and artificial neural networks.

    Covering the four levels

    Some learning activities may focus on a single level, but activities can also span more than one level. For example, an activity may start with learners trying out an existing ‘rock-paper-scissors’ application that uses an ML model to recognise hand shapes. This would cover the applications level. If learners then move on to train the model to improve its accuracy by adding more image data, they work at the models level.
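
As a rough illustration of that models-level step, the sketch below shows how the same classifier typically becomes more accurate as it is trained on more data. It uses Python and scikit-learn, with scikit-learn’s small built-in digits dataset standing in for hand-shape images; it is our own illustration rather than part of any particular classroom activity.

# The same model, trained on more data, usually becomes more accurate.
# scikit-learn's digits dataset stands in for images of hand shapes.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

for n in [20, 100, 500]:  # train on ever larger slices of the training data
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n} images: accuracy = {model.score(X_test, y_test):.2f}")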

    A teacher helps a young person with a coding project.

    Other activities cover several SEAME levels to address a specific concept. For example, an activity focussed on bias might start with an example of the societal impact of bias (SE level). Learners could then discuss the AI applications they use and reflect on how bias impacts them personally (A level). The activity could finish with learners exploring related data in a simple ML model and thinking about how representative the data is of all potential application users (M level).

    The set of Experience AI lessons we are developing in collaboration with DeepMind covers all four levels of SEAME. The lessons are based on carefully designed learning objectives and specifically targeted to KS3 students. Lesson materials include presentations, videos, student activities, and assessment questions.

    We’re releasing the Experience AI lessons very soon — if you want to be the first to hear news about them, please sign up here.

    The SEAME framework as a tool for research on AI education

    For researchers, we think the SEAME framework will, for example, be useful to analyse school curriculum material to see whether some age groups have more learning activities available at one level than another, and whether this changes over time. We may find that primary school learners work mostly at the SE and A levels, and secondary school learners move between the levels with increasing clarity as they develop their knowledge. It may also be the case that some learners or teachers prefer activities focused on one level rather than another. However, we can’t be sure: research is needed to investigate the teaching and learning of AI and ML across all year groups.

    That’s why we’re excited to welcome Salomey Afua Addo to the Raspberry Pi Computing Education Research Centre. Salomey joined the Centre as a PhD student in January, and her research will focus on approaches to the teaching and learning of AI. We’re looking forward to seeing the results of her work.

    Website: LINK

  • Experience AI with the Raspberry Pi Foundation and DeepMind

    Experience AI with the Raspberry Pi Foundation and DeepMind

    Reading Time: 3 minutes

    I am delighted to announce a new collaboration between the Raspberry Pi Foundation and a leading AI company, DeepMind, to inspire the next generation of AI leaders.

    Young people work together to investigate computer hardware.

    The Raspberry Pi Foundation’s mission is to enable young people to realise their full potential through the power of computing and digital technologies. Our vision is that every young person — whatever their background — should have the opportunity to learn how to create and solve problems with computers.

    With the rapid advances in artificial intelligence — from machine learning and robotics, to computer vision and natural language processing — it’s increasingly important that young people understand how AI is affecting their lives now and the role that it can play in their future. 

    DeepMind logo.

    Experience AI is a new collaboration between the Raspberry Pi Foundation and DeepMind that aims to help young people understand how AI works and how it is changing the world. We want to inspire young people about the careers in AI and help them understand how to access those opportunities, including through their subject choices. 

    Experience AI 

    More than anything, we want to make AI relevant and accessible to young people from all backgrounds, and to make sure that we engage young people from backgrounds that are underrepresented in AI careers. 

The programme has two strands: Inspire and Experiment.

    Inspire: To engage and inspire students about AI and its impact on the world, we are developing a set of free learning resources and materials including lesson plans, assembly packs, videos, and webinars, alongside training and support for educators. This will include an introduction to the technologies that enable AI; how AI models are trained; how to frame problems for AI to solve; the societal and ethical implications of AI; and career opportunities. All of this will be designed around real-world and relatable applications of AI, engaging a wide range of diverse interests and useful to teachers from different subjects.

    In a computing classroom, two girls concentrate on their programming task.

    Experiment: Building on the excitement generated through Inspire, we are also designing an AI challenge that will support young people to experiment with AI technologies and explore how these can be used to solve real-world problems. This will provide an opportunity for students to get hands-on with technology and data, along with support for educators. 

    Our initial focus is learners aged 11 to 14 in the UK. We are working with teachers, students, and DeepMind engineers to ensure that the materials and learning experiences are engaging and accessible to all, and that they reflect the latest AI technologies and their application.

    A woman teacher helps a young person with a coding project.

    As with all of our work, we want to be research-led and the Raspberry Pi Foundation research team has been working over the past year to understand the latest research on what works in AI education.

    Next steps 

    Development of the Inspire learning materials is underway now, and we will release the whole set of resources early in 2023. Throughout 2023, we will design and pilot the Experiment challenge.

    If you want to stay up to date with Experience AI, or if you’d like to be involved in testing the materials, fill in this form to register your interest.

    Website: LINK

  • AI literacy research: Children and families working together around smart devices

    AI literacy research: Children and families working together around smart devices

    Reading Time: 6 minutes

Between September 2021 and March 2022, we partnered with The Alan Turing Institute to host a series of free research seminars about how to teach young people about AI and data science.

    In the final seminar of the series, we were excited to hear from Stefania Druga from the University of Washington, who presented on the topic of AI literacy for families. Stefania’s talk highlighted the importance of families in supporting children to develop AI literacy. Her talk was a perfect conclusion to the series and very well-received by our audience.

Stefania Druga, University of Washington

    Stefania is a third-year PhD student who has been working on AI literacy in families, and since 2017 she has conducted a series of studies that she presented in her seminar talk. She presented some new work to us that was to be formally shared at the HCI conference in April, and we were very pleased to have a sneak preview of these results. It was a fascinating talk about the ways in which the interactions between parents and children using AI-based devices in the home, and the discussions they have while learning together, can facilitate an appreciation of the affordances of AI systems. You’ll find my summary as well as the seminar recording below.

    “AI literacy practices and skills led some families to consider making meaningful use of AI devices they already have in their homes and redesign their interactions with them. These findings suggest that family has the potential to act as a third space for AI learning.”

    – Stefania Druga

    AI literacy: Growing up with AI systems, growing used to them

Back in 2017, public interest in Alexa and other so-called ‘smart’, AI-based devices was only just developing, and such devices would have been very novel to most people. That year, Stefania and colleagues conducted a first pilot study of children’s and their parents’ interactions with ‘smart’ devices, including robots, talking dolls, and the sort of voice assistants we are used to now.

A slide from Stefania’s AI literacy seminar.

    Working directly with families, the researchers explored the level of understanding that children had about ‘smart’ devices, and were surprised by the level of insight very young children had into the potential of this type of technology.

    In this AI literacy pilot study, Stefania and her colleagues found that:

    • Children perceived AI-based agents (i.e. ‘smart’ devices) as friendly and truthful
    • They treated different devices (e.g. two different Alexas) as completely independent
    • How ‘smart’ they found the device was dependent on age, with older children more likely to describe devices as ‘smart’

    AI literacy: Influence of parents’ perceptions, influence of talking dolls

    Stefania’s next study, undertaken in 2018, showed that parents’ perceptions of the implications and potential of ‘smart’ devices shaped what their children thought. Even when parents and children were interviewed separately, if the parent thought that, for example, robots were smarter than humans, then the child did too.

A slide from Stefania’s AI literacy seminar.

    Another part of this study showed that talking dolls could influence children’s moral decisions (e.g. “Should I give a child a pillow?”). In some cases, these ‘smart’ toys would influence the child more than another human. Some ‘smart’ dolls have been banned in some European countries because of security concerns. In the light of these concerns, Stefania pointed out how important it is to help children develop a critical understanding of the potential of AI-based technology, and what its fallibility and the limits of its guidance are.

A slide from Stefania’s AI literacy seminar.

    AI literacy: Programming ‘smart’ devices, algorithmic bias

    Another study Stefania discussed involved children who programmed ‘smart’ devices. She used the children’s drawings to find out about their mental models of how the technology worked.

    She found that when children had the opportunity to train machine learning models or ‘smart’ devices, they became more sceptical about the appropriate use of these technologies and asked better questions about when and for what they should be used. Another finding was that children and adults had different ideas about algorithmic bias, particularly relating to the meaning of fairness.

    A parent and child work together at a Raspberry Pi computer.

    AI literacy: Kinaesthetic activities, sharing discussions

The final study Stefania talked about was conducted with families online during the pandemic, when children were learning at home. Fifteen families, comprising 18 children (ages 5 to 11) and 16 parents, participated in five weekly sessions. Each session was made up of a number of learning activities demonstrating features of AI. These are all available at aiplayground.me.

A slide from Stefania’s AI literacy seminar, describing two research questions: how children and parents learn about AI together, and how to design learning supports for family AI literacies.

    The fact that children and parents, or other family members, worked through the activities together seemed to generate fruitful discussions about the usefulness of AI-based technology. Many families were concerned about privacy and what was happening to their personal data when they were using ‘smart’ devices, and also expressed frustration with voice assistants that couldn’t always understand the way they spoke.

A slide from Stefania’s AI literacy seminar.

    In one of the sessions, with a focus on machine learning, families were introduced to a kinaesthetic activity involving moving around their home to train a model. Through this activity, parents and children had more insight into the constraints facing machine learning. They used props in the home to experiment and find out ways of training the model better. In another session, families were encouraged to design their own devices on paper, and Stefania showed some examples of designs children had drawn.

A slide from Stefania’s AI literacy seminar.

    This study identified a number of different roles that parents or other adults played in supporting children’s learning about AI, and found that embodied and tangible activities worked well for encouraging joint work between children and their families.

    Find out more

    You can catch up with Stefania’s seminar below in the video, and download her presentation slides.

You can learn more about Stefania’s work in her paper on children’s training of ML models, and in her latest paper about the five weekly AI literacy sessions with families.

    Recordings and slides of all our previous seminars on AI education are available online for you, and you can see the list of AI education resources we’ve put together based on recommendations from seminar speakers and participants.

    Join our next free research seminar

We are delighted to start a new seminar series on cross-disciplinary computing, with seminars in May, June, July, and September to look forward to. It’s not long now before we begin: Mark Guzdial will speak to us about task-specific programming languages (TSP) in history and mathematics classes on 3 May, 17:00 to 18:30 UK time. I can’t wait!

    Sign up to receive the Zoom details for the seminar with Mark:

    Website: LINK