Schlagwort: Artificial Intelligence

  • Should we teach AI and ML differently to other areas of computer science? A challenge


    Reading Time: 6 minutes

    Between September 2021 and March 2022, we’re partnering with The Alan Turing Institute to host a series of free research seminars about how to teach AI and data science to young people.

    In the second seminar of the series, we were excited to hear from Professor Carsten Schulte, Yannik Fleischer, and Lukas Höper from the University of Paderborn, Germany, who presented on the topic of teaching AI and machine learning (ML) from a data-centric perspective. Their talk raised the question of whether and how AI and ML should be taught differently from other themes in the computer science curriculum at school.

    Machine behaviour — a new field of study?

The rationale behind the speakers’ work is a concept they call hybrid interaction system, referring to the way that humans and machines interact. To explain this concept, Carsten referred to a 2019 article published in Nature by Iyad Rahwan and colleagues: Machine behaviour. The article’s authors propose that the study of AI agents (complex and simple algorithms that make decisions) should be a separate, cross-disciplinary field of study, because of the ubiquity and complexity of AI systems, and because these systems can have both beneficial and detrimental impacts on humanity, which can be difficult to evaluate. (Our previous seminar by Mhairi Aitken highlighted some of these impacts.) The authors state that to study this field, we need to draw on scientific practices from across different fields, as shown below:

    Machine behaviour as a field sits at the intersection of AI engineering and behavioural science. Quantitative evidence from machine behaviour studies feeds into the study of the impact of technology, which in turn feeds questions and practices into engineering and behavioural science.
    The interdisciplinarity of machine behaviour. (Image taken from Rahwan et al [1])

    In establishing their argument, the authors compare the study of animal behaviour and machine behaviour, citing that both fields consider aspects such as mechanism, development, evolution and function. They describe how part of this proposed machine behaviour field may focus on studying individual machines’ behaviour, while collective machines and what they call ‘hybrid human-machine behaviour’ can also be studied. By focusing on the complexities of the interactions between machines and humans, we can think both about machines shaping human behaviour and humans shaping machine behaviour, and a sort of ‘co-behaviour’ as they work together. Thus, the authors conclude that machine behaviour is an interdisciplinary area that we should study in a different way to computer science.

    Carsten and his team said that, as educators, we will need to draw on the parameters and frameworks of this machine behaviour field to be able to effectively teach AI and machine learning in school. They argue that our approach should be centred on data, rather than on code. I believe this is a challenge to those of us developing tools and resources to support young people, and that we should be open to these ideas as we forge ahead in our work in this area.

    Ideas or artefacts?

In her 2006 article that popularised computational thinking, Jeannette Wing introduces computational thinking as being about ‘ideas, not artefacts’. When we, the computing education community, started to think about computational thinking, we moved from focusing on specific technology — and how to understand and use it — to the ideas or principles underlying the domain. The challenge now is: have we gone too far in that direction?

Carsten argued that, if we are to understand machine behaviour, and in particular, human-machine co-behaviour, which he refers to as the hybrid interaction system, then we need to be studying artefacts as well as ideas.

    Throughout the seminar, the speakers reminded us to keep in mind artefacts, issues of bias, the role of data, and potential implications for the way we teach.

    Studying machine learning: a different focus

    In addition, Carsten highlighted a number of differences between learning ML and learning other areas of computer science, including traditional programming:

    1. The process of problem-solving is different. Traditionally, we might try to understand the problem, derive a solution in terms of an algorithm, then understand the solution. In ML, the data shapes the model, and we do not need a deep understanding of either the problem or the solution.
    2. Our tolerance of inaccuracy is different. Traditionally, we teach young people to design programs that lead to an accurate solution. However, the nature of ML means that there will be an error rate, which we strive to minimise. 
    3. The role of code is different. Rather than the code doing the work as in traditional programming, the code is only a small part of a real-world ML system. 

    These differences imply that our teaching should adapt too.
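The first two differences can be made concrete with a toy example. The sketch below is my own illustration (not code from the ProDaBi materials): it ‘trains’ a one-rule classifier by picking, from labelled data, the threshold that makes the fewest mistakes, then reports its error rate. The data shapes the model, and some residual error is expected rather than being a bug.

```python
def train_threshold(samples):
    """samples: list of (value, label) pairs with label 0 or 1.
    Returns the threshold with the fewest training errors, plus its error rate."""
    candidates = sorted({v for v, _ in samples})
    best_t, best_errors = None, len(samples) + 1
    for t in candidates:
        # predict class 1 when value >= t; count disagreements with the labels
        errors = sum((v >= t) != bool(label) for v, label in samples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t, best_errors / len(samples)

# Noisy training data: small values are mostly class 0, large values mostly
# class 1, but one mislabelled point means no threshold is perfect.
data = [(1, 0), (2, 0), (3, 0), (4, 1), (6, 1), (7, 1), (8, 1), (9, 0)]
threshold, error_rate = train_threshold(data)
print(threshold, error_rate)  # 4 0.125
```

Note that nobody wrote the rule “predict 1 when the value is at least 4”: the data produced it, and the 12.5% error rate is minimised, not eliminated.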

    A graphic demonstrating that in machine learning as compared to other areas of computer science, the process of problem-solving, tolerance of inaccuracy, and role of code is different.

    ProDaBi: a programme for teaching AI, data science, and ML in secondary school

    In Germany, education is devolved to state governments. Although computer science (known as informatics) was only last year introduced as a mandatory subject in lower secondary schools in North Rhine-Westphalia, where Paderborn is located, it has been taught at the upper secondary levels for many years. ProDaBi is a project that researchers have been running at Paderborn University since 2017, with the aim of developing a secondary school curriculum around data science, AI, and ML.

    The ProDaBi curriculum includes:

    • Two modules for 11- to 12-year-olds covering decision trees and data awareness (ethical aspects), introduced this year
    • A short course for 13-year-olds covering aspects of artificial intelligence, through the game Hexapawn
    • A set of modules for 14- to 15-year-olds, covering data science, data exploration, decision trees, neural networks, and data awareness (ethical aspects), using Jupyter notebooks
• A project-based course for 18-year-olds, including the above topics at a more advanced level, using CODAP and Jupyter notebooks to develop practical skills through projects; this course has been running the longest and is currently in its fourth iteration

    Although the ProDaBi project site is in German, an English translation is available.

Modules developed as part of the ProDaBi project

    Our speakers described example activities from three of the modules:

• Hexapawn, a two-player game inspired by the work of Donald Michie in 1961. The purpose of this activity is to support learners in reflecting on the way the machine learns. Children can then relate the activity to the behaviour of AI agents such as autonomous cars. An English version of the activity is available. 
    • Data cards, a series of activities to teach about decision trees. The cards are designed in a ‘Top Trumps’ style, and based on food items, with unplugged and digital elements. 
    • Data awareness, a module focusing on the amount of data an individual can generate as they move through a city, in this case through the mobile phone network. Children are encouraged to reflect on personal data in the context of the interaction between the human and data-driven artefact, and how their view of the world influences their interpretation of the data that they are given.
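Michie’s 1961 machine learned noughts and crosses using matchboxes of beads, and the Hexapawn activity applies the same idea. The sketch below is my own illustration of that core learning rule, deliberately using a much simpler game than Hexapawn (players alternately take one or two counters from a pile; whoever takes the last counter wins) so that it fits in a few lines: each game state gets a ‘matchbox’ of weighted moves, with beads added after wins and removed after losses.

```python
import random

MOVES = (1, 2)  # a player may take one or two counters

def new_brain(start):
    # one "matchbox" per pile size, holding beads (weights) for each legal move
    return {n: {m: 3 for m in MOVES if m <= n} for n in range(1, start + 1)}

def pick(brain, pile, rng):
    # draw a move with probability proportional to its bead count
    moves, beads = zip(*brain[pile].items())
    return rng.choices(moves, weights=beads)[0]

def play_one_game(brain, start, rng):
    pile, history, learners_turn = start, [], True
    while True:
        if learners_turn:
            move = pick(brain, pile, rng)
            history.append((pile, move))   # remember the learner's choices
        else:
            move = rng.choice([m for m in MOVES if m <= pile])  # random opponent
        pile -= move
        if pile == 0:
            return learners_turn, history  # whoever took the last counter wins
        learners_turn = not learners_turn

def train(start=5, games=3000, seed=0):
    rng = random.Random(seed)
    brain = new_brain(start)
    for _ in range(games):
        learner_won, history = play_one_game(brain, start, rng)
        for pile, move in history:
            if learner_won:
                brain[pile][move] += 1              # reinforce: add a bead
            elif brain[pile][move] > 1:
                brain[pile][move] -= 1              # punish: remove a bead
    return brain

brain = train()
```

After a few thousand games against a random opponent, the beads for the winning moves (for instance, taking both counters when two remain) heavily outweigh the alternatives, even though no strategy was ever programmed in. This mirrors what children observe with the Hexapawn activity.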

    Questioning how we should teach AI and ML at school

    There was a lot to digest in this seminar: challenging ideas and some new concepts, for me anyway. An important takeaway for me was how much we do not yet know about the concepts and skills we should be teaching in school around AI and ML, and about the approaches that we should be using to teach them effectively. Research such as that being carried out in Paderborn, demonstrating a data-centric approach, can really augment our understanding, and I’m looking forward to following the work of Carsten and his team.

    Carsten and colleagues ended with this summary and discussion point for the audience:

    “‘AI education’ requires developing an adequate picture of the hybrid interaction system — a kind of data-driven, emergent ecosystem which needs to be made explicitly to understand the transformative role as well as the technological basics of these artificial intelligence tools and how they are related to data science.”

    You can catch up on the seminar, including the Q&A with Carsten and his colleagues, here:

    [youtube https://www.youtube.com/watch?v=MlK7SgOTiCo?feature=oembed&w=500&h=281]

    Join our next seminar

    This seminar really extended our thinking about AI education, and we look forward to introducing new perspectives from different researchers each month. At our next seminar on Tuesday 2 November at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PDT / 18:00–19:30 CEST, we will welcome Professor Matti Tedre and Henriikka Vartiainen (University of Eastern Finland). The two Finnish researchers will talk about emerging trajectories in ML education for K-12. We look forward to meeting you there.

Carsten and his colleagues are also running a series of seminars on AI and data science: you can find out about these on their registration page.

    You can increase your own understanding of machine learning by joining our latest free online course!


    [1] Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., … & Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477-486.

    Website: LINK


  • Educating young people in AI, machine learning, and data science: new seminar series


    Reading Time: 6 minutes

    A recent Forbes article reported that over the last four years, the use of artificial intelligence (AI) tools in many business sectors has grown by 270%. AI has a history dating back to Alan Turing’s work in the 1940s, and we can define AI as the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.

    A woman explains a graph on a computer screen to two men.
    Recent advances in computing technology have accelerated the rate at which AI and data science tools are coming to be used.

    Four key areas of AI are machine learning, robotics, computer vision, and natural language processing. Other advances in computing technology mean we can now store and efficiently analyse colossal amounts of data (big data); consequently, data science was formed as an interdisciplinary field combining mathematics, statistics, and computer science. Data science is often presented as intertwined with machine learning, as data scientists commonly use machine learning techniques in their analysis.

    Venn diagram showing the overlaps between computer science, AI, machine learning, statistics, and data science.
    Computer science, AI, statistics, machine learning, and data science are overlapping fields. (Diagram from our forthcoming free online course about machine learning for educators)

    AI impacts everyone, so we need to teach young people about it

AI and data science have recently received huge amounts of attention in the media, as machine learning systems are now used to make decisions in areas such as healthcare, finance, and employment. These AI technologies cause many ethical issues, for example as explored in the film Coded Bias. This film describes the fallout of researcher Joy Buolamwini’s discovery that facial recognition systems do not identify dark-skinned faces accurately, and her journey to push for the first-ever piece of legislation in the USA to guard against bias in the algorithms that impact our lives. Many other ethical issues concerning AI exist and, as highlighted by UNESCO’s examples of AI’s ethical dilemmas, they impact each and every one of us.

    Three female teenagers and a teacher use a computer together.
    We need to make sure that young people understand AI technologies and how they impact society and individuals.

So how do such advances in technology impact the education of young people? In the UK, a recent Royal Society report on machine learning recommended that schools should “ensure that key concepts in machine learning are taught to those who will be users, developers, and citizens” — in other words, every child. The AI Roadmap published by the UK AI Council in 2020 declared that “a comprehensive programme aimed at all teachers and with a clear deadline for completion would enable every teacher confidently to get to grips with AI concepts in ways that are relevant to their own teaching.” As yet, very few countries have incorporated any study of AI and data science into their school curricula or computing programmes of study.

    A teacher and a student work on a coding task at a laptop.
    Our seminar speakers will share findings on how teachers can help their learners get to grips with AI concepts.

    Partnering with The Alan Turing Institute for a new seminar series

    Here at the Raspberry Pi Foundation, AI, machine learning, and data science are important topics both in our learning resources for young people and educators, and in our programme of research. So we are delighted to announce that starting this autumn we are hosting six free, online seminars on the topic of AI, machine learning, and data science education, in partnership with The Alan Turing Institute.

    A woman teacher presents to an audience in a classroom.
    Everyone with an interest in computing education research is welcome at our seminars, from researchers to educators and students!

    The Alan Turing Institute is the UK’s national institute for data science and artificial intelligence and does pioneering work in data science research and education. The Institute conducts many different strands of research in this area and has a special interest group focused on data science education. As such, our partnership around the seminar series enables us to explore our mutual interest in the needs of young people relating to these technologies.


    Dr Matt Forshaw, National Skills Lead at The Alan Turing Institute and Senior Lecturer in Data Science at Newcastle University, says: “We are delighted to partner with the Raspberry Pi Foundation to bring you this seminar series on AI, machine learning, and data science. This promises to be an outstanding series drawing from international experts who will share examples of pedagogic best practice and cover critical topics in education, highlighting ethical, fair, and safe use of these emerging technologies.”

    Our free seminar series about AI, machine learning, and data science

    At our computing education research seminars, we hear from a range of experts in the field and build an international community of researchers, practitioners, and educators interested in this important area. Our new free series of seminars runs from September 2021 to February 2022, with some excellent and inspirational speakers:

    • Tues 7 September: Dr Mhairi Aitken from The Alan Turing Institute will share a talk about AI ethics, setting out key ethical principles and how they apply to AI before discussing the ways in which these relate to children and young people.
    • Tues 5 October: Professor Carsten Schulte, Yannik Fleischer, and Lukas Höper from Paderborn University in Germany will use a series of examples from their ProDaBi programme to explore whether and how AI and machine learning should be taught differently from other topics in the computer science curriculum at school. The speakers will suggest that these topics require a paradigm shift for some teachers, and that this shift has to do with the changed role of algorithms and data, and of the societal context.
    • Tues 3 November: Professor Matti Tedre and Dr Henriikka Vartiainen from the University of Eastern Finland will focus on machine learning in the school curriculum. Their talk will map the emerging trajectories in educational practice, theory, and technology related to teaching machine learning in K-12 education.
    • Tues 7 December: Professor Rose Luckin from University College London will be looking at the breadth of issues impacting the teaching and learning of AI.
    • Tues 11 January: We’re delighted that Dr Dave Touretzky and Dr Fred Martin (Carnegie Mellon University and University of Massachusetts Lowell, respectively) from the AI4K12 Initiative in the USA will present some of the key insights into AI that the researchers hope children will acquire, and how they see K-12 AI education evolving over the next few years.
    • Tues 1 February: Speaker to be confirmed

    How you can join our online seminars

    All seminars start at 17:00 UK time (18:00 Central European Time, 12 noon Eastern Time, 9:00 Pacific Time) and take place in an online format, with a presentation, breakout discussion groups, and a whole-group Q&A.

    Sign up now and we’ll send you the link to join on the day of each seminar — don’t forget to put the dates in your diary!

    In the meantime, you can explore some of our educational resources related to machine learning and data science:

    Website: LINK

  • Machine learning and depth estimation using Raspberry Pi


    Reading Time: 3 minutes

    One of our engineers, David Plowman, describes machine learning and shares news of a Raspberry Pi depth estimation challenge run by ETH Zürich (Swiss Federal Institute of Technology).

    Spoiler alert – it’s all happening virtually, so you can definitely make the trip and attend, or maybe even enter yourself.

    What is Machine Learning?

    Machine Learning (ML) and Artificial Intelligence (AI) are some of the top engineering-related buzzwords of the moment, and foremost among current ML paradigms is probably the Artificial Neural Network (ANN).

ANNs involve millions of tiny calculations, merged together in a giant biologically inspired network – hence the name. These networks typically have millions of parameters that control each calculation, and they must be optimised for every different task at hand.

    This process of optimising the parameters so that a given set of inputs correctly produces a known set of outputs is known as training, and is what gives rise to the sense that the network is “learning”.
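As a toy illustration of that training loop (my own sketch, not any particular framework’s API), the snippet below optimises a single parameter w so that the model y = w · x reproduces known input-output pairs, by repeatedly nudging w against the gradient of the squared error:

```python
# Fit y = w * x to known (input, output) pairs by gradient descent.
# Real networks do the same thing, but with millions of parameters at once.
data = [(x, 2.0 * x) for x in range(1, 6)]   # "known outputs" generated with w = 2

w = 0.0                                       # start from an arbitrary guess
learning_rate = 0.01
for _ in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad                 # step downhill

print(round(w, 4))  # 2.0
```

The loop never “knows” the answer is 2; it only measures how wrong the current guess is and moves to reduce that error, which is exactly the sense in which the network “learns”.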

    A popular type of ANN used for processing images is the Convolutional Neural Network. Many small calculations are performed on groups of input pixels to produce each output pixel
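As a rough sketch of what “many small calculations on groups of input pixels” means (illustrative code, not from the post), here is a plain-Python 2D convolution: each output pixel is a weighted sum of a small neighbourhood of input pixels, and a vertical-edge kernel responds only where the image brightness changes from left to right.

```python
def convolve(image, kernel):
    """Valid (no-padding) 2D convolution: each output pixel is the weighted
    sum of the input patch sitting under the kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

# A tiny image: dark on the left, bright on the right.
image = [[0, 0, 0, 1, 1]] * 4

# A vertical-edge kernel: zero on uniform regions, large at the boundary.
kernel = [[-1, 0, 1]] * 3

print(convolve(image, kernel))  # strongest response at the dark/bright edge
```

A CNN stacks many such filters, and training adjusts the kernel weights rather than a human choosing them by hand.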

    Machine Learning frameworks

A number of well-known companies produce free ML frameworks that you can download and use on your own computer. The network training procedure runs best on machines with powerful CPUs and GPUs, but even running one of these pre-trained networks (a process known as inference) can be computationally expensive.

    One of the most popular frameworks is Google’s TensorFlow (TF), and since this is rather resource intensive, they also produce a cut-down version optimised for less powerful platforms. This is TensorFlow Lite (TFLite), which can be run effectively on Raspberry Pi.

    Depth estimation

    ANNs have proven very adept at a wide variety of image processing tasks, most notably object classification and detection, but also depth estimation. This is the process of taking one or more images and working out how far away every part of the scene is from the camera, producing a depth map.

    Here’s an example:

    Depth estimation example using a truck

    The image on the right shows, by the brightness of each pixel, how far away the objects in the original (left-hand) image are from the camera (darker = nearer).

    We distinguish between stereo depth estimation, which starts with a stereo pair of images (taken from marginally different viewpoints; here, parallax can be used to inform the algorithm), and monocular depth estimation, working from just a single image.
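For rectified stereo pairs, the parallax mentioned above gives depth through a standard relation: a point that shifts by a disparity of d pixels between the two views lies at depth Z = f · B / d, where f is the focal length in pixels and B is the baseline distance between the cameras. A minimal sketch, with made-up camera numbers:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified stereo: depth Z = f * B / d. Nearer objects shift more
    between the two views, so larger disparity means smaller depth."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical camera: 1000 px focal length, 10 cm baseline.
print(depth_from_disparity(50, 1000, 0.10))   # 2.0 (metres)
print(depth_from_disparity(100, 1000, 0.10))  # 1.0 (nearer, so bigger shift)
```

Monocular depth estimation has no second view to measure disparity from, which is why it relies on a trained network’s learned cues instead.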

    The applications of such techniques should be clear, ranging from robots that need to understand and navigate their environments, to the fake bokeh effects beloved of many modern smartphone cameras.

    Depth Estimation Challenge


    We were very interested then to learn that, as part of the CVPR (Computer Vision and Pattern Recognition) 2021 conference, Andrey Ignatov and Radu Timofte of ETH Zürich were planning to run a Monocular Depth Estimation Challenge. They are specifically targeting the Raspberry Pi 4 platform running TFLite, and we are delighted to support this effort.

    For more information, or indeed if any technically minded readers are interested in entering the challenge, please visit:

    The conference and workshops are all taking place virtually in June, and we’ll be sure to update our blog with some of the results and models produced for Raspberry Pi 4 by the competing teams. We wish them all good luck!

    Website: LINK

  • Raspberry Pi LEGO sorter

    Raspberry Pi LEGO sorter

    Reading Time: 3 minutes

    Raspberry Pi is at the heart of this AI-powered, automated sorting machine that is capable of recognising and sorting any LEGO brick.

    And its maker Daniel West believes it to be the first of its kind in the world!

    [youtube https://www.youtube.com/watch?v=04JkdHEX3Yk?feature=oembed&w=500&h=281]

    Best ever

    This mega-machine was two years in the making and is a LEGO creation itself, built from over 10,000 LEGO bricks.

    A beast of 10,000 bricks

    It can sort any LEGO brick you place in its input bucket into one of 18 output buckets, at the rate of one brick every two seconds.

    While Daniel was inspired by previous LEGO sorters, his creation is a huge step up from them: it can recognise absolutely every LEGO brick ever created, even bricks it has never seen before. Hence the ‘universal’ in the name ‘universal LEGO sorting machine’.

    Hardware

    There we are, tucked away, just doing our job

    Software

    The artificial intelligence algorithm behind the LEGO sorting is a convolutional neural network, the go-to for image classification.

    What makes Daniel’s project a ‘world first’ is that he trained his classifier using 3D model images of LEGO bricks, which is how the machine can classify absolutely any LEGO brick it’s faced with, even if it has never seen it in real life before.

    [youtube https://www.youtube.com/watch?v=-UGl0ZOCgwQ?feature=oembed&w=500&h=281]

    We LOVE a thorough project video, and we love TWO of them even more

    Daniel has made a whole extra video (above) explaining how the AI in this project works. He shouts out all the open-source software he used to run the Raspberry Pi Camera Module, access 3D training images, and more at this point in the video.

    LEGO brick separation

    The vibration plate in action, feeding single parts into the scanner

    Daniel needed the input bucket to carefully pick out a single LEGO brick from the mass he chucks in at once.

    This is achieved with a primary and secondary belt slowly pushing parts onto a vibration plate. The vibration plate uses a super fast LEGO motor to shake the bricks around so they aren’t sitting on top of each other when they reach the scanner.

    Scanning and sorting

    A side view of the LEGO sorting machine showing a large white chute built from LEGO bricks
    The underside of the beast

    A Raspberry Pi Camera Module captures video of each brick, which Raspberry Pi 3 Model B+ then processes and wirelessly sends to a more powerful computer able to run the neural network that classifies the parts.

    The classification decision is then sent back to the sorting machine so it can spit the brick, using a series of servo-controlled gates, into the right output bucket.

    Extra-credit homework

    A front view of the LEGO sorter with the sorting boxes visible underneath
    In all its bricky beauty, with the 18 output buckets visible at the bottom

    Daniel is such a boss maker that he wrote not one, but two further reading articles for those of you who want to deep-dive into this mega LEGO creation:

    Website: LINK

  • AI-Man: a handy guide to video game artificial intelligence

    AI-Man: a handy guide to video game artificial intelligence

    Reading Time: 9 minutes

    Discover how non-player characters make decisions by tinkering with this Unity-based Pac-Man homage. Paul Roberts wrote this for the latest issue of Wireframe magazine.

    From the first video game to the present, artificial intelligence has been a vital part of the medium. While most early games had enemies that simply walked left and right, like the Goombas in Super Mario Bros., there were also games like Pac-Man, where each ghost appeared to move intelligently. But from a programming perspective, how do we handle all the different possible states we want our characters to display?

    Here’s AI-Man, our homage to a certain Namco maze game. You can switch between AI types to see how they affect the ghosts’ behaviours.

    For example, how do we control whether a ghost is chasing Pac-Man, or running away, or even returning to its home? To explore these behaviours, we’ll be tinkering with AI-Man, a Pac-Man-style game developed in Unity. It will show you how the approaches discussed in this article are implemented, and there’s code available for you to modify and add to. You can freely download the AI-Man project here.

    One solution to managing the different states a character can be in, which has been used for decades, is a finite state machine, or FSM for short. It’s an approach that describes the high-level actions of an agent, and takes its name simply from the fact that there are a finite number of states to transition between, with each state only ever doing one thing.

    Altered states

    To explain what’s meant by high level, let’s take a closer look at the ghosts in Pac-Man. The high-level state of a ghost is to ‘Chase’ Pac-Man, but the low level is how the ghost actually does this. In Pac-Man, each ghost has its own behaviour in which it hunts the player down, but they’re all in the same high-level state of ‘Chase’. Looking at Figure 1, you can see how the overall behaviour of a ghost can be depicted extremely easily, but there’s a lot of hidden complexity. At what point do we transition between states? What are the conditions on moving between states across the connecting lines? Once we have this information, the diagram can be turned into code with relative ease. You could use simple switch statements to achieve this, or achieve the same with an object-oriented approach.

    Figure 1: A finite state machine

    Using switch statements can quickly become cumbersome the more states we add, so I’ve used the object-oriented approach in the accompanying project, and an example code snippet can be seen in Code Listing 1. Each state handles whether it needs to transition into another state, and lets the state machine know. If a transition’s required, the Exit() function is called on the current state, before calling the Enter() function on the new state. This ensures any setup or cleanup is carried out, after which the Update() function is called on whatever the current state is. The Update() function is where the low-level code for completing the state is processed. For a project as simple as Pac-Man, this only involves setting a different position for the ghost to move to.
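The accompanying project is written in C# for Unity, but the Enter()/Exit()/Update() pattern can be sketched in a few lines of Python; the Ghost class, state names, and transition logic below are simplified stand-ins, not the project’s actual code:

```python
# Minimal object-oriented FSM: each state does its own low-level work
# in update() and reports the next state (or None to stay put); the
# machine handles the Exit/Enter calls around each transition.

class Ghost:
    def __init__(self):
        self.player_powered_up = False
        self.target = None

class State:
    def enter(self, ghost):
        pass
    def exit(self, ghost):
        pass
    def update(self, ghost):
        """Do the state's low-level work; return the next State, or None to stay."""
        return None

class Chase(State):
    def update(self, ghost):
        ghost.target = "player"                    # head towards the player
        return Evade() if ghost.player_powered_up else None

class Evade(State):
    def update(self, ghost):
        ghost.target = "far_corner"                # run away
        return None if ghost.player_powered_up else Chase()

class StateMachine:
    def __init__(self, initial):
        self.current = initial

    def update(self, ghost):
        next_state = self.current.update(ghost)
        if next_state is not None:                 # a transition is required
            self.current.exit(ghost)               # clean up the old state
            self.current = next_state
            self.current.enter(ghost)              # set up the new state

ghost = Ghost()
machine = StateMachine(Chase())
machine.update(ghost)
print(ghost.target)  # player
```

Adding a new behaviour means adding a new State subclass, rather than growing an ever-larger switch statement.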

    Hidden complexity

    Extending this approach, it’s reasonable for a state to call multiple states from within. This is called a hierarchical finite state machine, or HFSM for short. An example is an agent in Call of Duty: Strike Team being instructed to seek a stealthy position, so the high-level state is ‘Find Cover’, but within that, the agent needs to exit the dumpster he’s currently hiding in, find a safe location, calculate a safe path to that location, then repeatedly move between points on that path until he reaches the target position.

    FSMs can appear somewhat predictable, as the agent will always transition into the same state. This can be mitigated by having multiple options that achieve the same goal. For example, when the ghosts in our Unity project are in the ‘Chase’ state, they can either move to the player, get in front of the player, or move to a position behind the player. There’s also an option to move to a random position. The FSM implemented has each ghost do one of these, whereas the behaviour tree allows all ghosts to switch between the options every ten seconds.

    A limitation of the FSM approach is that you can only ever be in a single state at a particular time. Imagine a tank battle game where multiple enemies can be engaged. Simply being in the ‘Retreat’ state doesn’t look smart if you’re about to run into the sights of another enemy. The worst-case scenario would be our tank transitioning between ‘Attack’ and ‘Retreat’ states on each frame – an issue known as state thrashing – getting stuck, and seemingly confused about what to do. What we need is a way to be in multiple states at the same time: ideally retreating from tank A, whilst attacking tank B. This is where fuzzy finite state machines, or FFSMs for short, come in useful.

    This approach allows you to be in a particular state to a certain degree. For example, my tank could be 80% committed to the Retreat state (avoid tank A), and 20% committed to the Attack state (attack tank B). This allows us to both Retreat and Attack at the same time. To achieve this, on each update, your agent needs to check each possible state to determine its degree of commitment, and then call each of the active states’ updates. This differs from a standard FSM, where you can only ever be in a single state. FFSMs can be in none, one, two, or however many states you like at one time. This can prove tricky to balance, but it does offer an alternative to the standard approach.
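As a rough illustration of the fuzzy idea (not code from the article’s project), each state can score its own degree of commitment, and every state scoring above zero gets an update call in the same frame; the state names and threat values are invented:

```python
# Fuzzy FSM sketch: the tank checks every state's degree of commitment
# each frame and updates all the active ones, so it can retreat from
# one enemy while attacking another.

class FuzzyState:
    name = "base"
    def commitment(self, world):
        return 0.0
    def update(self, world, degree):
        world["orders"].append((self.name, round(degree, 2)))

class Retreat(FuzzyState):
    name = "Retreat"
    def commitment(self, world):
        return world["threat_ahead"]   # flee the most direct danger

class Attack(FuzzyState):
    name = "Attack"
    def commitment(self, world):
        return world["threat_behind"]  # engage the softer target

def fuzzy_update(world, states):
    # Unlike a standard FSM, zero, one, or many states may run per frame.
    for state in states:
        degree = state.commitment(world)
        if degree > 0:
            state.update(world, degree)

world = {"threat_ahead": 0.8, "threat_behind": 0.2, "orders": []}
fuzzy_update(world, [Retreat(), Attack()])
print(world["orders"])  # [('Retreat', 0.8), ('Attack', 0.2)]
```

The balancing work mentioned above lives in the commitment functions: tune how each state scores the world and you tune the blend of behaviours.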

    No memory

    Another potential issue with an FSM is that the agent has no memory of what they were previously doing. Granted, this may not be important: in the example given, the ghosts in Pac-Man don’t care about what they were doing, they only care about what they are doing, but in other games, memory can be extremely important. Imagine instructing a character to gather wood in a game like Age of Empires, and then the character gets into a fight. It would be extremely frustrating if the characters just stood around with nothing to do after the fight had concluded, and for the player to have to go back through all these characters and reinstruct them after the fight is over. It would be much better for the characters to return to their previous duties.

    “FFSMs can be in one, none, two, or however many states you like.”

    We can incorporate the idea of memory quite easily by using the stack data structure. The stack will hold AI states, with only the top-most element receiving the update. This in effect means that when a state is completed, it’s removed from the stack and the previous state is then processed. Figure 2 depicts how this was achieved in our Unity project. To differentiate them from the FSM approach, I’ve called the states tasks for the stack-based implementation. Reading Figure 2 from the bottom: the ghost was chasing the player, then the player collected a power pill, which resulted in the AI adding an Evade_Task – this now gets the update call, not the Chase_Task. While evading the player, the ghost was then eaten.

    At this point, the ghost needed to return home, so the appropriate task was added. Once home, the ghost needed to exit this area, so again, the relevant task was added. At the point the ghost exited home, the ExitHome_Task was removed, which drops processing back to MoveToHome_Task. This was no longer required, so it was also removed. Back in the Evade_Task, if the power pill was still active, the ghost would return to avoiding the player, but if it had worn off, this task, in turn, got removed, putting the ghost back in its default task of Chase_Task, which will get the update calls until something else in the world changes.
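The stack-of-tasks behaviour described above can be sketched like this in Python; only Chase_Task and Evade_Task are shown, and the logic is a simplified stand-in for the Unity project’s implementation:

```python
# Only the top-most task receives the update; a finished task pops
# itself off the stack, so control falls back to the task beneath it.

class Ghost:
    def __init__(self):
        self.power_pill_active = True
        self.log = []

class Chase_Task:
    def update(self, ghost, stack):
        ghost.log.append("chasing")        # default behaviour at the bottom

class Evade_Task:
    def update(self, ghost, stack):
        if ghost.power_pill_active:
            ghost.log.append("evading")
        else:
            stack.pop()                    # pill worn off: back to Chase_Task

def tick(ghost, stack):
    stack[-1].update(ghost, stack)         # only the top of the stack runs

ghost = Ghost()
stack = [Chase_Task(), Evade_Task()]       # power pill collected, Evade on top
tick(ghost, stack)
print(ghost.log)  # ['evading']
```

Pushing MoveToHome_Task and ExitHome_Task onto the same stack would reproduce the eaten-ghost sequence in Figure 2.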

    Figure 2: Stack-based finite state machine.

    Behaviour trees

    In 2002, Halo 2 programmer Damian Isla expanded on the idea of HFSMs in a way that made them more scalable and modular for the game’s AI. This became known as the behaviour tree approach, and it’s now a staple in AI game development. The behaviour tree is made up of nodes, which can be one of three types – composite, decorator, or leaf nodes. Each has a different function within the tree and affects the flow through it. Figure 3 shows how this approach is set up for our Unity project.

    The states we’ve explored so far are called leaf nodes. Leaf nodes end a particular branch of the tree and don’t have child nodes – these are where the AI behaviours are located. For example, Leaf_ExitHome, Leaf_Evade, and Leaf_MoveAheadOfPlayer all tell the ghost where to move to. Composite nodes can have multiple child nodes and are used to determine the order in which the children are called. This could be the order in which they’re described by the tree, or by selection, where the child nodes compete, with the parent node selecting which child gets the go-ahead. Selector_Chase allows the ghost to select a single path down the tree by choosing a random option, whereas Sequence_GoHome has to complete all the child paths to complete its behaviour.

    Code Listing 2 shows how simple it is to choose a random behaviour to use – just be sure to store the index for the next update. Code Listing 3 demonstrates how to go through all child nodes, and to return SUCCESS only when all have completed, otherwise the status RUNNING is returned. FAILURE only gets returned when a child node itself returns a FAILURE status.
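The project’s own listings aren’t reproduced here, but the two composite behaviours can be sketched in Python; the names and structure are illustrative. A selector picks one child at random and remembers the index between updates; a sequence succeeds only once every child has succeeded, reporting RUNNING in the meantime:

```python
import random

SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

class Selector:
    """Pick one child at random and stick with it until it finishes."""
    def __init__(self, children):
        self.children = children
        self.index = None

    def tick(self):
        if self.index is None:
            # Choose a random child and store the index for the next update
            self.index = random.randrange(len(self.children))
        status = self.children[self.index].tick()
        if status != RUNNING:
            self.index = None              # finished: choose afresh next time
        return status

class Sequence:
    """Run children in order; SUCCESS only when every child has succeeded."""
    def __init__(self, children):
        self.children = children
        self.index = 0

    def tick(self):
        status = self.children[self.index].tick()
        if status == FAILURE:              # one failing child fails the sequence
            self.index = 0
            return FAILURE
        if status == SUCCESS:
            self.index += 1
            if self.index == len(self.children):
                self.index = 0
                return SUCCESS             # every child has completed
        return RUNNING
```

A leaf node in this sketch is anything with a tick() method returning one of the three statuses.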

    Complex behaviours

    Although not used in our example project, behaviour trees can also have nodes called decorators. A decorator node can only have a single child, and can modify the result returned. For example, a decorator may iterate the child node for a set period, perhaps indefinitely, or even flip the result returned from being a success to a failure. From what first appears to be a collection of simple concepts, complex behaviours can then develop.

    Figure 3: Behaviour tree

    Video game AI is all about the illusion of intelligence. As long as the characters are believable in their context, the player should maintain their immersion in the game world and enjoy the experience we’ve made. Hopefully, the approaches introduced here highlight how even simple approaches can be used to develop complex characters. This is just the tip of the iceberg: AI development is a complex subject, but it’s also fun and rewarding to explore.

    Wireframe #43, with the gorgeous Sea of Stars on the cover.

    The latest issue of Wireframe Magazine is out now, available in print from the Raspberry Pi Press online store, your local newsagents, and the Raspberry Pi Store, Cambridge.

    You can also download the PDF directly from the Wireframe Magazine website.

    Website: LINK

  • A hand-following AI task lamp for your desk

    A hand-following AI task lamp for your desk

    Reading Time: < 1 minute

    Arduino Team, July 15th, 2020

    As you work on a project, lighting needs change dynamically. This can mean manual adjustment after manual adjustment, making do with generalized lighting, or having a helper hold a flashlight. Harry Gao, however, has a different solution in the form of a novel robotic task lamp.

    Gao’s 3D-printed device uses a USB camera to take images of the work area, and a Python image processing routine running on a PC to detect hand positions. This sends instructions to an Arduino Nano, which commands a pair of small stepper motors to extend and rotate the light fixture via corresponding driver boards.

    The solution means that he’ll always have proper illumination, as long as he stays within the light-bot’s range!

    [youtube https://www.youtube.com/watch?v=yNJGleCFyB8?feature=oembed&w=500&h=281]

    Website: LINK

  • Learning AI at school — a peek into the black box

    Learning AI at school — a peek into the black box

    Reading Time: 5 minutes

    “In the near future, perhaps sooner than we think, virtually everyone will need a basic understanding of the technologies that underpin machine learning and artificial intelligence.” — from the 2018 Informatics Europe & EUACM report about machine learning

    As the quote above highlights, AI and machine learning (ML) are increasingly affecting society and will continue to change the landscape of work and leisure — with a huge impact on young people in the early stages of their education.

    But how are we preparing our young people for this future? What skills do they need, and how do we teach them these skills? This was the topic of last week’s online research seminar at the Raspberry Pi Foundation, with our guest speaker Juan David Rodríguez Garcia. Juan’s doctoral studies around AI in school complement his work at the Ministry of Education and Vocational Training in Spain.

    Juan David Rodríguez Garcia

    Juan’s LearningML tool for young people

    Juan started his presentation by sharing numerous current examples of AI and machine learning, which young people can easily relate to and be excited to engage with, and which will bring up ethical questions that we need to be discussing with them.

    Of course, it’s not enough for learners to be aware of AI applications. While machine learning is a complex field of study, we need to consider what aspects of it we can make accessible to young people to enable them to learn about the concepts, practices, and skills underlying it. During his talk Juan demonstrated a tool called LearningML, which he has developed as a practical introduction to AI for young people.

    Screenshot of a demo of Juan David Rodríguez Garcia's LearningML tool

    Juan demonstrates image recognition with his LearningML tool

    LearningML takes inspiration from some of the other in-development tools around machine learning for children, such as Machine Learning for Kids, and it is available in one integrated platform. Juan gave an enticing demo of the tool, showing how to use visual image data (lots of pictures of Juan with hats, glasses on, etc.) to train and test a model. He then demonstrated how to use Scratch programming to also test the model and apply it to new data. The seminar audience was very positive about LearningML, and of course we’d like it translated into English!

    Juan’s talk generated many questions from the audience, from technical questions to the key question of the way we use the tool to introduce children to bias in AI. Seminar participants also highlighted opportunities to bring machine learning to other school subjects such as science.

    AI in schools — what and how to teach

    Machine learning demonstrates that computers can learn from data. This is just one of the five big ideas in AI that the AI4K12 group has identified for teaching AI in school in order to frame this broad domain:

    1. Perception: Computers perceive the world using sensors
    2. Representation & reasoning: Agents maintain models/representations of the world and use them for reasoning
    3. Learning: Computers can learn from data
    4. Natural interaction: Making agents interact comfortably with humans is a substantial challenge for AI developers
    5. Societal impact: AI applications can impact society in both positive and negative ways

    One general concern I have is that in our teaching of computing in school (if we touch on AI at all), we may only focus on the fifth of the ‘big AI ideas’: the implications of AI for society. Being able to understand the ethical, economic, and societal implications of AI as this technology advances is indeed crucial. However, the principles and skills underpinning AI are also important, and how we introduce these at an age-appropriate level remains a significant question.

    Illustration of AI, Image by Seanbatty from Pixabay

    There are some great resources for developing a general understanding of AI principles, including unplugged activities from Computer Science For Fun. Yet there’s a large gap between understanding what AI is and has the potential to do, and actually developing the highly mathematical skills to program models. It’s not an easy issue to solve, but Juan’s tool goes a little way towards this. At the Raspberry Pi Foundation, we’re also developing resources to bridge this educational gap, including new online projects building on our existing machine learning projects, and an online course. Watch this space!

    AI in the school curriculum and workforce

    All in all, we seem to be a long way off introducing AI into the school curriculum. Looking around the world, in the USA, Hong Kong, and Australia there have been moves to introduce AI into K-12 education through pilot initiatives, and hopefully more will follow. In England, with a computing curriculum that was written in 2013, there is no requirement to teach any AI or machine learning, or even to focus much on data.

    Let’s hope England doesn’t get left too far behind, as there is a massive AI skills shortage, with millions of workers needing to be retrained in the next few years. Moreover, a recent House of Lords report outlines that introducing all young people to this area of computing also has the potential to improve diversity in the workforce — something we should all be striving towards.

    We look forward to hearing more from Juan and his colleagues as this important work continues.

    Next up in our seminar series

    If you missed the seminar, you can find Juan’s presentation slides and a recording of his talk on our seminars page.

    In our next seminar on Tuesday 2 June at 17:00–18:00 BST / 12:00–13:00 EDT / 9:00–10:00 PDT / 18:00–19:00 CEST, we’ll welcome Dame Celia Hoyles, Professor of Mathematics Education at University College London. Celia will be sharing insights from her research into programming and mathematics. To join the seminar, simply sign up with your name and email address and we’ll email the link and instructions. If you attended Juan’s seminar, the link remains the same.

    Website: LINK

  • Machine learning and rock, paper, scissors

    Machine learning and rock, paper, scissors

    Reading Time: 3 minutes

    Use a Raspberry Pi and a Pi Camera Module to build your own machine learning–powered rock paper scissors game!

    Rock-Paper-Scissors game using computer vision and machine learning on Raspberry Pi

    A Rock-Paper-Scissors game using computer vision and machine learning on the Raspberry Pi. Project GitHub page: https://github.com/DrGFreeman/rps-cv PROJECT ORIGIN: This project results from a challenge my son gave me when I was teaching him the basics of computer programming by making a simple text-based Rock-Paper-Scissors game in Python.

    Virtual rock paper scissors

    Here’s why you should always leave comments on our blog: this project from Julien de la Bruère-Terreault instantly had our attention when he shared it on our recent Android Things post.

    Julien and his son were building a text-based version of rock paper scissors in Python when his son asked him: “Could you make a rock paper scissors game that uses the camera to detect hand gestures?” Obviously, Julien really had no choice but to accept the challenge.

    “The game uses a Raspberry Pi computer and Raspberry Pi Camera Module installed on a 3D-printed support with LED strips to achieve consistent images,” Julien explains in the tutorial for the build. “The pictures taken by the camera are processed and fed to an image classifier that determines whether the gesture corresponds to ‘Rock’, ‘Paper’, or ‘Scissors’ gestures.”

    How does it work?

    Physically, the build uses a Pi 3 Model B and a Camera Module V2 alongside 3D-printed parts. The parts are all green, since a consistent colour allows easy subtraction of background from the captured images. You can download the files for the setup from Thingiverse.
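To show why a uniform green background makes this easy, here’s a toy Python sketch of colour-based background subtraction; the threshold and pixel values are invented, and the real project uses proper computer vision tooling rather than this hand-rolled check:

```python
# With a known, uniform background colour, separating the hand from
# the background reduces to a per-pixel colour test. Pixels here are
# (r, g, b) tuples.

def is_background(pixel, threshold=60):
    """Treat strongly green pixels as background."""
    r, g, b = pixel
    return g - max(r, b) > threshold

def hand_mask(image):
    """Return a binary mask: 1 wherever a non-green (hand) pixel sits."""
    return [[0 if is_background(px) else 1 for px in row] for row in image]

row = [(20, 200, 30), (180, 150, 120)]  # background green, then a skin tone
print(hand_mask([row]))  # [[0, 1]]
```

The masked-out hand region is what gets passed on to the image classifier.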

    rock paper scissors raspberry pi

    To illustrate how the software works, Julien has created a rather delightful pipeline demonstrating where computer vision and machine learning come in.

    rock paper scissors using raspberry pi

    The way the software works means the game doesn’t need to be limited to the standard three hand signs. If you wanted to, you could add other signs such as ‘lizard’ and ‘Spock’! Or ‘fire’ and ‘water balloon’. Or any other alterations made to the game in your pop culture favourites.

    rock paper scissors lizard spock

    Check out Julien’s full tutorial on his GitHub to build your own AI-powered rock paper scissors game. Massive kudos to Julien for spending a year learning the skills required to make it happen. And a massive thank you to Julien’s son for inspiring him! This is why it’s great to do coding and digital making with kids — they have the best project ideas!

    Sharing is caring

    If you’ve built your own project using Raspberry Pi, please share it with us in the comments below, or via social media. As you can tell from today’s blog post, we love to see them and share them with the whole community!

    Website: LINK