Category: PC

  • Ocean Prompting Process: How to get the results you want from an LLM


    Reading Time: 5 minutes

    Have you heard of ChatGPT, Gemini, or Claude, but haven’t tried any of them yourself? Navigating the world of large language models (LLMs) might feel a bit daunting. However, with the right approach, these tools can really enhance your teaching and make classroom admin and planning easier and quicker. 

    That’s where the OCEAN prompting process comes in: it’s a straightforward framework designed to work with any LLM, helping you reliably get the results you want. 

    The great thing about the OCEAN process is that it takes the guesswork out of using LLMs. It helps you move past that ‘blank page syndrome’ — that moment when you can ask the model anything but aren’t sure where to start. By focusing on clear objectives and guiding the model with the right context, you can generate content that is spot on for your needs, every single time.

    5 ways to make LLMs work for you using the OCEAN prompting process

    OCEAN’s name is an acronym: objective, context, examples, assess, negotiate — so let’s begin at the top.

    1. Define your objective

    Think of this as setting a clear goal for your interaction with the LLM. A well-defined objective ensures that the responses you get are focused and relevant.

    Maybe you need to:

    • Draft an email to parents about an upcoming school event
    • Create a beginner’s guide for a new Scratch project
    • Come up with engaging quiz questions for your next science lesson

    By knowing exactly what you want, you can give the LLM clear directions to follow, turning a broad idea into a focused task.

    2. Provide some context 

    This is where you give the LLM the background information it needs to deliver the right kind of response. Think of it as setting the scene and providing some of the important information about why, and for whom, you are making the document.

    You might include:

    • The length of the document you need
    • Who your audience is — their age, profession, or interests
    • The tone and style you’re after, whether that’s formal, informal, or somewhere in between

    All of this helps the LLM include the bigger picture in its analysis and tailor its responses to suit your needs.

    3. Include examples

    By showing the LLM what you’re aiming for, you make it easier for the model to deliver the kind of output you want. This is called one-shot, few-shot, or many-shot prompting, depending on how many examples you provide.

    You can:

    • Include links (URLs)
    • Upload documents and images (some LLMs don’t have this feature)
    • Copy and paste other text examples into your prompt

    Without any examples at all (zero-shot prompting), you’ll still get a response, but it might not be exactly what you had in mind. Providing examples is like giving a recipe to follow that includes pictures of the desired result, rather than just vague instructions — it helps to ensure the final product comes out the way you want it.
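For readers who script their LLM use rather than typing into a chat window, zero-, one-, and few-shot prompting simply describe how many worked examples you pack into the request. The sketch below is a generic illustration in Python: the role/content message format is the convention used by most chat-style LLM APIs, and `build_prompt` is a hypothetical helper, not part of any particular SDK.

```python
def build_prompt(objective, context, examples=None):
    """Assemble a chat-style message list for an LLM request.

    examples: list of (sample_request, ideal_reply) pairs.
    Zero pairs = zero-shot, one = one-shot, several = few-shot.
    """
    messages = [{"role": "system", "content": context}]
    for sample_request, ideal_reply in (examples or []):
        # Each worked example appears as a prior user/assistant exchange.
        messages.append({"role": "user", "content": sample_request})
        messages.append({"role": "assistant", "content": ideal_reply})
    messages.append({"role": "user", "content": objective})
    return messages

# One-shot: a single worked example guides the style of the answer.
msgs = build_prompt(
    objective="Write a quiz question about photosynthesis.",
    context="You are helping a teacher write science quizzes for 12-year-olds.",
    examples=[("Write a quiz question about gravity.",
               "Which force pulls objects towards the centre of the Earth?")],
)
print(len(msgs))  # system + example pair + final request = 4 messages
```

With `examples=[]` the same helper produces a zero-shot prompt: just the context and the request.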

    4. Assess the LLM’s response

    This is where you check whether what you’ve got aligns with your original goal and meets your standards.

    Keep an eye out for:

    • Hallucinations: incorrect information that’s presented as fact
    • Misunderstandings: did the LLM interpret your request correctly?
    • Bias: make sure the output is fair and aligned with diversity and inclusion principles

    A good assessment ensures that the LLM’s response is accurate and useful. Remember, LLMs don’t make decisions — they just follow instructions, so it’s up to you to guide them. This brings us neatly to the next step: negotiate the results.

    5. Negotiate the results

    If the first response isn’t quite right, don’t worry — that’s where negotiation comes in. You should give the LLM frank and clear feedback and tweak the output until it’s just right. (Don’t worry, it doesn’t have any feelings to be hurt!) 

    When you negotiate, tell the LLM if it made any mistakes, and what you did and didn’t like in the output. Tell it to ‘Add a bit at the end about …’ or ‘Stop using the word “delve” all the time!’ 
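In scripted use, negotiation is just extending the same conversation: you append the model's draft and your feedback to the message list and send the whole thing again, so the model sees its earlier attempt alongside your critique. A minimal sketch, where the `negotiate` helper and the role/content message format are illustrative rather than any vendor's actual API:

```python
def negotiate(messages, llm_reply, feedback):
    """Extend the conversation with the model's reply and your critique."""
    messages.append({"role": "assistant", "content": llm_reply})
    messages.append({"role": "user", "content": feedback})
    return messages

conversation = [
    {"role": "user",
     "content": "Draft a 100-word email to parents about sports day."},
]
# ...send `conversation` to your LLM of choice, get a draft back, then:
conversation = negotiate(
    conversation,
    llm_reply="Dear parents, I am delighted to delve into the details...",
    feedback="Stop using the word 'delve', and add a bit at the end about parking.",
)
print(len(conversation))  # request + draft + feedback = 3 turns so far
```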

    How to get the tone of the document just right

    Another excellent tip is to use descriptors for the desired tone of the document in your negotiations with the LLM, such as, ‘Make that output slightly more casual.’

    In this way, you can guide the LLM to be:

    • Approachable: the language will be warm and friendly, making the content welcoming and easy to understand
    • Casual: expect laid-back, informal language that feels more like a chat than a formal document
    • Concise: the response will be brief and straight to the point, cutting out any fluff and focusing on the essentials
    • Conversational: the tone will be natural and relaxed, as if you’re having a friendly conversation
    • Educational: the language will be clear and instructive, with step-by-step explanations and helpful details
    • Formal: the response will be polished and professional, using structured language and avoiding slang
    • Professional: the tone will be business-like and precise, with industry-specific terms and a focus on clarity

    Remember: LLMs have no idea what their output says or means; they are literally just very powerful autocomplete tools, just like those in text messaging apps. It’s up to you, the human, to make sure they are on the right track. 

    Don’t forget the human edit 

    Even after you’ve refined the LLM’s response, it’s important to do a final human edit. This is your chance to make sure everything’s perfect, checking for accuracy, clarity, and anything the LLM might have missed. LLMs are great tools, but they don’t catch everything, so your final touch ensures the content is just right.

    At a certain point it’s also simpler and less time-consuming to alter individual words in the output yourself, or use your unique expertise to massage the language for just the right tone and clarity, than to go back to the LLM for a further iteration. 

    Ready to dive in? 

    Now it’s time to put the OCEAN process into action! Log in to your preferred LLM platform, take a simple prompt you’ve used before, and see how the process improves the output. Then share your findings with your colleagues. This hands-on approach will help you see the difference the OCEAN method can make!

    Sign up for a free account at one of these platforms:

    • ChatGPT (chat.openai.com)
    • Gemini (gemini.google.com)

    By embracing the OCEAN prompting process, you can quickly and easily make LLMs a valuable part of your teaching toolkit. The process helps you get the most out of these powerful tools, while keeping things ethical, fair, and effective.

    If you’re excited about using AI in your classroom preparation, and want to build more confidence in integrating it responsibly, we’ve got great news for you. You can sign up for our totally free online course on edX called ‘Teach Teens Computing: Understanding AI for Educators’ (helloworld.cc/ai-for-educators). In this course, you’ll learn all about the OCEAN process and how to better integrate generative AI into your teaching practice. It’s a fantastic way to ensure you’re using these technologies responsibly and ethically while making the most of what they have to offer. Join us and take your AI skills to the next level!

    A version of this article also appears in Hello World issue 25.

    Website: LINK

  • Putting AI to use


    Reading Time: < 1 minute

    Lucy Hattersley has all the AI kit and an urge to build something real

  • Indie Selects for November (and Some Cool Announcements!)


    Reading Time: 14 minutes

    The holiday season is upon us, and the ID@Xbox team is thrilled to showcase this month’s picks for Indie Selects. But first, we want to extend our heartfelt thanks to everyone for supporting independent games on Xbox over the past year.  Indie Selects is our special platform to celebrate the incredibly weird and unique. Someone’s perfect game isn’t necessarily a massive blockbuster IP, but may be a polished and emotionally resonant story, a brand new spin on an existing idea, or just something truly unusual. Thank you for checking out our recommendations and supporting amazing game creators on our platform.

    Secondly, instead of giving you a one-week December Selects collection, we decided to build out 3 weeks’ worth of 2024 Indie Select Highlights.  Each week, starting December 11, we’ll be highlighting and promoting 18 of our favorite titles that launched this last year.  Be sure to check out the Indie Selects section in the Xbox Store (head to the Games Home tab and a few rows down) every Wednesday for more must-have indies.

    November Indie Selects Image

    Finally, on January 28, 2025, we will celebrate the one-year anniversary of this program with a 2024 Anniversary Collection. Our team will pick six top Indie Selects from the previous year that we believe are absolute must-haves and place them in this prestigious shrine for all to witness. We have more surprises, new sales, and new collections planned for 2025, but for now, let’s focus on the present.

    This month is stuffed. There were so many titles to choose from that truly pushed the envelope in terms of unique design, mysterious mechanics, dramatic storytelling, and the art of fun. We all made some strong cases for what we wanted to champion. Some of us were still in our October bag and needed something terrifying to get the heart pumping. A lot of us, understandably, wanted to build a bond with a giant wolf that we could ride on to our next adventure. Several of us wanted to get lost in something: a mysterious world, impossible choices, or maybe just our own creativity. We ended up with a list that thankfully kept everyone happy, and we’re sure there’s something here that you won’t soon forget. Here’s what we’ve got for you this month (in no particular order):

    Neva Art

    I don’t know if any of you are looking for more reasons to get emotional, but Neva will hit you right in the feels. I played through last weekend and found myself in awe of its beautiful presentation, visual splendor, and haunting soundtrack. I was also reduced to tears on occasion. It’s an incredibly moving piece of software that also happens to be quite fun and compelling to play.

    This follow-up to the gorgeous and emotionally charged Gris from Nomada Studio shows a studio honing its craft. Gris was an impactful work of art; Neva takes that core formula and adds satisfying yet simple platforming and combat to the mix. The game is only about five hours from start to finish, but those hours will make you reflect inwardly and touch your heart in ways that are rare for games. If the combat scenarios get a bit tricky for you, the game offers a “Story Mode” difficulty that removes the health bar entirely and allows you to play without the fear of dying.

    Neva

    Devolver Digital


    $19.99

    Neva is an emotionally-charged action adventure from the visionary team behind the critically acclaimed GRIS. Neva chronicles the story of Alba, a young woman bound to a curious wolf cub following a traumatic encounter with dark forces. Together they embark on a perilous journey through a once-beautiful world as it slowly decays around them. Over time, their relationship will evolve as they learn to work together, helping one another to brave increasingly dangerous situations. The wolf will grow from a rebellious cub to an imposing adult seeking to forge his own identity, testing Alba’s love and their commitment to one another. As the cursed world threatens to overwhelm them, Alba and her courageous companion will do whatever it takes to survive and make a new home, together.

    Animal Well Art

    Animal Well is an elusive, beautiful, 2D pixel-art Metroidvania full of abstract puzzles in haunting, living environments. You play as a squishy little blob in a dark and wet world that offers up its own interpretive language for how to proceed to the next area. As you progress through new areas, you’ll gain access to more routes as you find mysterious tools and understand their mechanics. The creatures populating the well create an interesting dynamic as you discover their purpose. Some animals are essential to solving complex problems, and because there’s no real combat, other animals often must be cleverly avoided, which in and of itself can be its own puzzle. Even as you traverse area after area, the game is packed with so much to do and so many more secrets to find – well after the “ending”. 

    This is one of those special games that leaves an imprint. It took me some time to nail down what exactly it was that gave me goosebumps during each minute of my playthrough.  Finally, it hit me.  Your tiny, bright little blob, though very alive and cute, isn’t really the main character: The environment is! There are so many small moments and interactions with the environment that are so wonderfully satisfying, and they just keep happening one after the other. Delving into the deepest mysteries of the environment felt like I was asking the developer directly, “What were you trying to do here?” At some point, my goals for what I wanted to get out of the game shifted as I uncovered more and more secrets and felt rewarded for examining every pixel shift.  If you relish finding everything in a game without looking for help online, prepare to put your skills to the test and enjoy the ride.

    ANIMAL WELL

    Bigmode


    $24.99

    Hatch from your flower and spelunk through the beautiful and sometimes haunting world of Animal Well, a pixelated wonder rendered in intricate audio and visual detail. Encounter lively creatures small and large, helpful and ominous as you discover unconventional upgrades and unravel the well’s secrets. This is a truly unique experience that can make you laugh in fear, surprise, or delight.

    Planet Coaster 2 Art

    Planet Coaster 2 is gratifying for anyone who loves theme park and strategy simulations. As someone who loves both strategy and creativity, this game hits me in all the right ways. From the beginning, you enter a very vibrant world with spectacular visuals. The in-game characters are great at guiding you through the UI, tasks, and management of the park. There are so many possibilities for how you can design and build your dream theme park – and I can spend hours doing this! The coaster builder is very intuitive and allows for lots of creativity, letting you craft rides that are as crazy and wild as your heart desires. The management aspect pulls you right back to reality as a park owner, as you need to balance staff, finances, and guest satisfaction to keep everything running smoothly.

    Being able to share your creations with other players – and to download theirs – adds another level of collaboration and fun to the community. It’s astonishing to see just how creative others can be! And even better than that, you can incorporate their vision into your own park. Whether you’ve spent endless hours managing parks before or are new to the genre, Planet Coaster 2 provides the perfect mix of community, strategy, and creativity!

    Planet Coaster 2

    Frontier Developments


    $49.99

    Create a splash with Planet Coaster 2 – sequel to the world’s best coaster park simulator! Reach new heights of creativity, management, and sharing as you construct the theme parks of your dreams, combining epic water rides and coasters to delight and thrill your park guests. Dive in now!

    EMBRACE CREATIVITY

    Expand your imagination: Enhanced and improved building and pathing tools let you create spectacular, true-to-life theme parks, complete with sprawling plazas. Make your mark on the world as you shape terrain and populate your park with stunning swimming pools and thrilling rides. It’s never been easier to bring your dream park to life.
    Next-level customisation: Embark on the coaster park experience of a lifetime as you push the boundaries of creativity like never before! Unleash your imagination as you build the most awe-inspiring themed creations with unparalleled customisation tools! For the first time, intuitively add scalable scenery and objects to every ride to elevate your park and give your guests a day to remember.
    Combine epic water and coaster rides: Create the ultimate experience for your thrill-seeking guests using piece-by-piece construction. Build a vibrant array of exhilarating rollercoasters and jaw-dropping thrill rides. Design a seamlessly interconnected park paradise with glistening swimming pools and twisting water flumes.
    Create unforgettable experiences: Surprise and amaze as you bring the dramatic flair of real-world theme parks to your guests with the brand-new event sequencer tool, while sophisticated global illumination lighting brings the theme park experience to your home with stunning visuals and striking authenticity.

    MASTER MANAGEMENT

    Find the secret to success: Balance thrilling your guests with managing your budget – populate your park with amazing, efficiently powered attractions and the right amenities to boost your rating and become a theme park master.
    Take care: Provide for your guests’ health, well-being, and happiness so your park can thrive. With all-new waterparks comes the need for shade, sunscreen, lifeguards, changing rooms, and more to ensure everyone has their best day both in and out of the pool.
    Satisfaction guaranteed: Understanding the wants and needs of your guests has never been easier! The new ‘heat maps’ feature lets you understand the wider park’s needs at a glance before diving deeper into the details – simply select any guest to see how they and their immediate group are enjoying your theme park.

    SHARE THE RIDE

    Create together: Unleash your collective creativity in Sandbox Mode. Take turns working on a park with players around the world using a shared cross-platform save!
    Play together: For the first time ever, collaborate with friends in Franchise Mode to create the best theme park empire and compete with others to reach the top of worldwide leaderboards!
    Visit together: Soak up the fun and get up close and personal as you experience other players’ theme park creations first-hand – explore their worlds in first person as a park guest!
    Share creations: Share anything and everything! Upload coasters, rides, or even full theme parks for other players, and download and delight in others’ creations. All are easily accessible from the cross-platform Frontier Workshop, both in-game and online. 

    Phasmophobia Art

    Phasmophobia is a truly engaging horror experience in which you need to figure out what kind of ghost is haunting the locations you are investigating, using various ghost-detecting tools. Each type of ghost creates specific supernatural phenomena that you need to identify and puzzle out in your journal. If you get the correct ghost type, you win.

    The secret sauce is in the experience you have while searching for the clues. The game masterfully uses classical horror cues like sound or light to create the feeling of a haunted house, reacting to your actions and making you always fear for your life. As you cannot really fight back, your only option is to get things done as quickly as possible while avoiding the danger.

    Finally, the game is a great one to play with a few friends. Nothing beats hearing your friends screaming in voice chat that they are about to die – and the silence afterwards. That’s both the funniest and the scariest part of the game.

    Phasmophobia (Game Preview)

    Kinetic Games


    $19.99

    Free Trial

    This game is a work in progress. It may or may not change over time or release as a final product. Purchase only if you are comfortable with the current state of the unfinished game.

    INVESTIGATE

    Immersive Experience: Realistic graphics and sounds as well as a minimal user interface ensure a totally immersive experience that will keep you on your toes.
    Unique Ghosts: Identify over 20 different ghost types, each with unique traits, personalities, and abilities to make each investigation feel different from the last.
    Equipment: Use well-known ghost-hunting equipment such as EMF Readers, Spirit Boxes, Thermometers, and Night Vision Cameras to find clues and gather as much paranormal evidence as you can. Find Cursed Possessions that grant information or abilities in exchange for your sanity.

    PLAY YOUR WAY

    Locations: Choose from over 10 different haunted locations, each with unique twists, hiding spots, and layouts.
    Game Modes: With 5 default difficulties and daily and weekly challenges, there are plenty of ways to test your skills.
    Teamwork: Dive in head first and get your hands dirty searching for evidence while fighting for your life. If you’re not feeling up to the task, play it safe and support your team from the truck by monitoring the investigation with CCTV and motion sensors.
    Custom Difficulty: Create your own games to tailor the difficulty to your or your group’s needs, with proportional rewards – and come up with crazy game modes of your own!

    MULTIPLAYER

    Co-operate: Play alongside your friends with up to 4 players in this co-op horror where teamwork is key to your success.
    Play together: Phasmophobia supports all players together – play with your friends using any combination of input types.
    Cross-play: Play alongside your friends on other platforms. Full details on the latest status of the game, and how you can give feedback and report issues, can be found at https://kineticgames.co.uk/.

    Fear the Spotlight Art

    Fear the Spotlight is a retro, ’90s-inspired horror game focused on puzzle solving, a gripping story, and avoiding detection. Vivian and Amy sneak into their school at night for a séance which, of course, goes horribly wrong, and Amy disappears. Now Vivian must search the school for her friend and uncover the mystery of the school’s past. 

    During the search you’ll be stalked by a monster – more specifically, a man with a spotlight for a head. There’s no combat, so you’ll need to avoid detection at all costs while you explore and solve puzzles that allow you to acquire key items to progress further. This isn’t the type of horror game that is full of heart-pounding jump scares, but a slow build of creeping, uneasy tension that adds to a great narrative experience.

    Fear the Spotlight

    Blumhouse Games


    $19.99

    Fear the Spotlight is an atmospheric third-person horror adventure with a disturbing mystery to unravel. Sneak into school after hours with Vivian and Amy, survive a séance gone wrong, solve tactile puzzles, and, whatever you do, stay out of the spotlight… Sunnyside High has a dark history. When Vivian enters the deserted corridors for a séance with the rebellious Amy, she suddenly ends up alone, and at the mercy of the monster who wanders the halls. Vivian must avoid its gaze, find her friend, and uncover the disturbing, murderous truth of a decades-old tragedy. Fear the Spotlight is a creepy love letter to classic ’90s horror experiences with a focus on rich storytelling, puzzle solving, and a tense atmosphere. This is a perfect narrative horror game for those new to the genre.

    ESCAPE THE NIGHTMARE

    The séance went terribly wrong. Amy has disappeared, and the school has transformed into a nightmarish version of its former self. Play as Vivian to uncover a dark hidden past while attempting to save both yourself and your friend.

    DELIGHTFULLY HORRIFYING GAMEPLAY

    * Sneak to avoid detection from an unknown entity in tense hand-crafted stealth moments as Vivian explores the ominous darkness of the school corridors
    * Solve highly tactile puzzles that pay homage to classic genre favorites while evolving gameplay to satisfy modern horror fans
    * Explore the eerie setting of a derelict Sunnyside High with a flashlight, screwdriver, wrench, and more to uncover a web of mysteries

    1990s ATMOSPHERIC DREAD

    Explore the hellish version of Sunnyside High as Vivian in this ode to classic ’90s teen horror. A retro art style, tense audio design, and a nostalgic setting will delight both veteran fans and those new to the horror genre.

    AN EXPANDED STORY

    Dive even deeper into the dark world of Fear the Spotlight with an extensive additional storyline. Fear the Spotlight is the first title to be published by Blumhouse Games, a video games label within Blumhouse Productions focused on championing the most creative and unique takes on the horror genre.

    Slay the Princess Art

    This is a love story. Right? Slay the Princess is a visual novel… but more like a choose-your-own-adventure with the goal of slaying the princess. But why? That’s what you need to find out by exploring several dozen prompts from the voice of a “narrator”, who sheds more light on the situation. The genius here is that your approach will determine your path, invite more voices to join the narrator, and alter the perception of each situation.

    As your behavior becomes more defined, the princess’s physical form will manifest accordingly. This all culminates in the ending of one story and the beginning of another – again and again – as you progress through the hidden story beneath it all. The black and white pencil art is fantastic, the musical score is intense, and the voice acting is top of its class.  It’s dark, creepy, charming and incredibly intriguing. Oh, and you’ll die a lot. Don’t worry about that. Just keep going. It’s worth it.

    Slay the Princess – The Pristine Cut

    Serenity Forge


    $17.99

    You’re on a path in the woods, and at the end of that path is a cabin. And in the basement of that cabin is a Princess. You’re here to slay her. If you don’t, it will be the end of the world. She will do everything in her power to stop you. She’ll charm, and she’ll lie, and she’ll promise you the world, and if you let her, she’ll kill you a dozen times over. You can’t let that happen. You do care about the fate of the world, right? Slay the Princess – The Pristine Cut is a fully voice-acted horror tragicomedy featuring the impeccable talents of Jonathan Sims and Nichole Goodnight, with lovingly hand-penciled frames by Ignatz-winning graphic novelist Abby Howard. This game features a princess. She’s very bad and you have to get rid of her for all our sakes. She’s just an ordinary human Princess, and you can definitely slay her as long as you put your mind to it. Hopefully you won’t die. But if you do, you’ll die a lot. Be careful and stay focused on the task at hand! What you say and what you believe will shift the story in dramatic ways. Ah, but what’s in the Pristine Cut? Are you sure you want to find out? If you do, you’re likely to discover things that are best left experienced directly. Seriously, turn back. … Oh, you’re still reading? Very well, I shall tell you. The Pristine Cut contains, precisely:
    – 3 brand new chapters replete with mysteries – and consequences.
    – Never-before-seen Princesses who will all murder you without a second thought.
    – Expansions to familiar routes: The Den, The Apotheosis, and The Fury have each more than doubled in length.
    – Over 35% more content overall – and all of it filled with great opportunities to listen to me and do your job.
    – A new ending – and hopefully one that saves the world rather than damns it.
    – A gallery to track your progress: cherish your memories, relive your exploits, and uncover deeply hidden secrets with the new gallery feature.
    – Over 1,200 new hand-penciled frames illustrated by Abby Howard.
    – Over 2,500 new lines of dialogue, fully voiced by the impeccable Jonathan Sims and Nichole Goodnight.
    Yes, prepare yourself for an expanded journey filled with new perils, difficult choices, and unforgettable encounters. Will you stay on the path to slay the Princess, or will the new chapters alter your fate? (That said, while I wish free will weren’t a thing here, I cannot emphasize this enough: you have to slay her. Please.)

    Website: LINK

  • PiDog robot review


    Reading Time: 3 minutes

    The first thing to decide is which Raspberry Pi model to use before assembling the kit. PiDog will work with Raspberry Pi 4, 3B+, 3B, and Zero 2 W. Using a Raspberry Pi 5 is not recommended since its extra power requirements put too much of a strain on the battery power – PiDog uses a lot of current when standing or moving – so it’s likely to suffer from under-voltage. We opted for a Raspberry Pi 4, although even then we did have a few issues with crashes when the battery level was low.

    Canine construction

    With a kit comprising a huge array of parts, building a PiDog is no mean feat. We reckon it took us around five to six hours, although we were taking our time to get it right. The printed diagram-based instructions are easy to follow, however, and there are online videos if you get stuck. Apart from a few fiddly bits, including manipulating some tiny screws and nuts, it’s an enjoyable process. Helpfully, the fixtures and fittings – including numerous sizes of screws and plastic rivets – come in labelled bags. The kit includes a couple of screwdrivers too.

    The main chassis is built from aluminium alloy panels, giving this dog a shiny and robust ‘coat’. There are also several acrylic pieces, including some to build a stand to place PiDog on when calibrating its leg servos. A nice touch.

    PiDog takes a while to build from the kit, but is a lot of fun to play with and program in Python

    The Raspberry Pi sits on a sound direction sensor module and is topped with a Robot HAT, which handles all the servos (via PWM pins), sensor inputs, and battery management. Portable power is supplied by a custom battery pack comprising two 18650 cells with a capacity of 2000mAh, which takes a couple of hours to charge fully.
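As a rough illustration of what driving servos “via PWM pins” means: a hobby servo is positioned by the width of a repeating pulse, commonly 500–2500 µs at 50 Hz. The sketch below is a generic angle-to-pulse conversion, not PiDog’s actual driver code, and the pulse range is an assumption that varies between servo models.

```python
def angle_to_pulse_us(angle_deg, min_us=500.0, max_us=2500.0, max_angle=180.0):
    """Map a servo angle (0..max_angle degrees) to a PWM pulse width in microseconds.

    Hobby servos expect a pulse roughly every 20 ms (50 Hz); the width of
    that pulse, typically 500-2500 us, selects the shaft position.
    """
    if not 0 <= angle_deg <= max_angle:
        raise ValueError("angle out of range")
    return min_us + (max_us - min_us) * angle_deg / max_angle

print(angle_to_pulse_us(0))    # shortest pulse -> 500.0
print(angle_to_pulse_us(90))   # mid position  -> 1500.0
```

Calibration, in these terms, amounts to adding a small per-servo offset to the requested angle so that “zero” really is straight ahead.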

    Doggy-do code

    Once you’ve assembled the kit, it’s time to fine-tune the calibration of the servos with a script. You’ll have used a zeroing script during assembly to get the rough positions right, so you’ll already have the PiDog libraries and software installed in Raspberry Pi OS.

    Detailed online documentation guides you through everything, including running a script to enable I2S sound from the robot’s speaker. It also covers a good range of Python example programs that showcase what PiDog can do.

    In patrol mode, for instance, PiDog walks forward and stops to bark when it detects something ahead. The react demo sees it rear up and bark when approached from the front, but roll its head and wag its tail when you pet the touch sensor on its neck. There’s also a balance demo to showcase its 6DOF IMU module that enables PiDog to self-balance when walking on a tilting tabletop.

    Control PiDog remotely from an app, with a customisable widget layout, and view its camera feed

    There are a few examples using the camera module with OpenCV computer vision. A face-tracking demo generates a web server, enabling you to see the camera view on a web page. There’s also the option to control PiDog with an iOS or Android app, complete with live camera feed.

    You can even communicate with your PiDog via GPT-4o AI, using text or spoken commands – with a USB mic (not supplied) attached. It takes a bit of setting up, using an API key, but the online guide takes you through the process.

    Verdict

    9/10

    Great fun to play with, this smart canine companion has an impressive feature set and lots of possibilities for further training.

    Specs

    Features: 12 × metal-gear servos, Robot HAT, camera module, RGB LED strip

    Sensors: Sound direction, 6-DOF IMU, dual touch, ultrasonic distance

    Works with: Raspberry Pi 4, 3B+, 3B, Zero 2 W

    Power: USB-C, rechargeable 2 × 18650 battery pack

  • Crescent County Is the Witch-Tech Racing-Delivery-Life-Sim You Didn’t Know You Needed

    Crescent County Is the Witch-Tech Racing-Delivery-Life-Sim You Didn’t Know You Needed

    Reading Time: 8 minutes

    Before I played an early version of Crescent County, I didn’t know that an in-game version of a motorized, magical broomstick could feel right. But here I am, drawing wide arcs through the gently swaying grass of the Isle of Morah, identifying the perfect hillock to glide off from, and intuitively following paths of flowers to shortcuts across its open world. You’d think there’d be no right way to depict something as bizarre as this, but as I feel a leyline-powered boost ignite the rumble of my controller, I start to think, “well, maybe.”

    The debut game from Electric Saint – a two-person development team made up of Anna Hollinrake (Fall Guys) and Pavle Mihajlović (Erica) – Crescent County is part-open world exploration, part-dating game, part-gig economy delivery challenge, part-racer, part-life sim, and all centered around that motorbroom experience. It’s ambitious, and with so many moving parts, you might expect it to have come together in pieces – but the real origin point was straightforward.

    Crescent County Concept Art
    One of the earliest pieces of art that inspired Crescent County. Credit: Anna Hollinrake

    Hollinrake has been painting images of what she dubs “witch-tech” for years, building a following as people fall in love with the bright, curious worlds she creates in static form. When she chose to leave the world of AAA development behind and contacted Mihajlović to create the tech, there was only one setting the pair wanted to bring to life together.

    “The number one piece of feedback I get when I’m at conventions selling art based on this world, or on social media when I post images of it, is that people wish that they could live in the paintings I create,” she tells me. “I’m an art generalist for games and have worked along the whole art pipeline, but my specialty is infusing moreish worldbuilding into my work, from concepts to full 3D environments, that give a sense of place, with little story hints throughout. I really want to give people the opportunity to step into a lovingly crafted, painterly space that feels both joyful and a little melancholy and that, critically, they feel at home within.”

    It means that Crescent County wasn’t built out of disparate mechanical ideas that the developers wanted to jam into one playspace – every choice has been made because it fits the theme. Even in the early form of the game I play, that comes across. As main character Lu, your motorbroom is key to everything you do – you arrive on the island and take part in a race, then become the island’s delivery courier. That job allows you to meet characters you can get to know (and romance), afford furniture for your apartment, and to customize your broom and go further, faster. In Crescent County’s world, motorbrooms aren’t a vehicle, they’re a culture.

    “Motorbroom racing is an underground sport, practised by a small group of the coolest people you know,” says Hollinrake. “It’s very inspired by roller derby and the roller skating community (I’m an avid quad skater myself!), and we wanted to capture that punk, do-it-yourself attitude within motorbroom subculture.”

    “In terms of racing though, it’s more about friends challenging each other in playful ways (like seeing who can get up the mountain first), than it is about big formal races with sponsors and crowds,” adds Mihajlović. “If you win, you can expect to learn some secrets about the island, or maybe get a hot tip on how to get a particular broom part, but you can also choose to lay back and spend some more quality time with a racer you have a crush on.”

    That idea, that every activity can affect another, seems key to Crescent County. You’re building a life for Lu on the island – a race can lead to romance, a delivery job could net you new decorations, and even the house creation element (so often a side activity) can have effects on the wider game.

    “We’re really interested in how we can take classic, cozy house decoration and make it push our story forward rather than being purely for aesthetics,” explains Hollinrake. “In that classic scouring-Facebook-Marketplace way, you can do jobs around the island for people who’ll pay you back with a couch they have in the shed and aren’t using. Inspired by our own experiences living in crappy house shares in our early twenties, we know how big of a difference each single piece of furniture can have on your social life or sense of place – you can’t have a dinner party without a dinner table, and getting your new friends around it lets you chat late into the night and deepen those relationships. Even if the spaghetti bolognese you made was terrible.”

    It leads to what promises to be a very satisfying loop – the more you play, and the more you engage, the more opportunities await you. Again, building Crescent County as a living world rather than a sandbox is the key. The game’s organized into days and nights that pass based on what you choose to do (you can deliver by day, and race by night), rather than through a linear cycle, which gives you an incentive to choose the interesting thing rather than the efficient thing.

    Crescent County Screenshot

    “Each day brings a host of new opportunities to earn some cash, make your flat less sad, and learn more island drama,” says Mihajlović. “You’ll get to pick who you want to help that day – whether it’s because you want to know a particular bit of gossip, you want to get a specific broom upgrade, or because your friend Rava has promised you she’ll give you her unwanted and admittedly ugly couch if you help round up her wayward sheep. You can either plan out your route carefully, or take it a bit more casually and ride around and see what you get up to. At the end of the day you can take your weird couch home, pick where to place it, and invite your friends over for a movie night – who point out you don’t actually own a TV.”

    All of this would be moot if the brooms themselves didn’t feel quite so good, and the Isle of Morah wasn’t quite so fascinating a location. That connection to Hollinrake’s art means that this world is a deeply interesting place – unfamiliar silhouettes clutter the horizon, and the sheer fun meant that I spent as much time simply going places as doing, well, Lu’s job. The final piece in the puzzle, then, is in creating a motorbroom that suits you.

    “Broom customization is both about building a motorbroom that looks amazing and feels just like you, but it’s also in how you decide how you’re going to navigate the island,” says Mihajlović. “Whether you want to speed down the straights, cut across a field, or glide over a canyon, different broom setups will open different paths and playstyles. You can also put Sigil Stickers on your broom that give you weird and wonderful powers, like an offensive sideways phase shift that you can use to bump your rivals off the track, or a more forgiving 10 second rewind that lets you retake a corner if you didn’t get it quite right.”

    Crescent County Screenshot

    The way the team is winding together mechanical and narrative benefits to the player isn’t just fascinating – it’s unusual. It’s the kind of thing that might have been hard to pitch at their previous studios, meaning self-publishing with ID@Xbox has been a boon:

    “We’re huge fans of the ID@Xbox program – and before that Xbox Live Arcade,” says Mihajlović. “It birthed or enabled so many of the games that we love, and in a lot of ways the whole indie wave that got us into the games industry in the first place. I actually remember the first Summer of Arcade, and how exciting and validating it was as a teenager to see indie games on a console, so it’s incredible to now be a part of the program.”

    With a two-person team, the game still has a way to go until release – and they’re not set on a release date just yet – but the early version I play makes abundantly clear how wild, weird, and ambitious Electric Saint is getting. Just like its motorbrooms, Crescent County might be unfamiliar, but it’s feeling just right already.

    Crescent County is coming to Xbox Series X|S, Xbox One, and PC. You can wishlist the game now.

    Crescent County

    Electric Saint

    Discover this beautiful open world, racing on the back of your very own motorbroom. In Crescent County you play as Lu, as you move to the island under false pretences, eager to start afresh. It’s a game about finding home in a brand-new witch-tech universe. During the day you’re a motorbroom courier: delivering packages, herding sheep, and setting off fireworks. You find yourself building a life through helping the locals, getting to know their struggles and their island home. Plan your day every morning by picking your jobs, and then zoom around the island getting things done! The better your broom is, the more you can do, and the more of the gossip you uncover. After you’ve made some change, head down to Bo’s workshop to upgrade your motorbroom and make it your own! Replace parts to improve your broom’s handling, top speed, or gliding ability, and pop some Sigil Stickers on it to enable special powers such as Phase Shifting and Time Rewind. At night, use your customised motorbroom to defeat your new friends in improvised races around the island. Discover shortcuts on ancient ley lines, and sprint through abandoned power stations in rebellious, secret races to win new broom parts. Start in your cousin’s empty bedsit and collect furniture from the locals to scrape together a cosy new life for yourself on the island. The better your home is, the more activities you can do with your new friends. Can’t have a dinner party without a table, or a date without a couch!

    Website: LINK

  • Indiana Jones and the Great Circle: Bringing ’80s Movie Magic to a 2024 Game

    Indiana Jones and the Great Circle: Bringing ’80s Movie Magic to a 2024 Game

    Reading Time: 14 minutes

    Indiana Jones has a feeling. It’s not just in the more tangible elements – the stories, the hero, or the music – it’s also in the way it was filmed, the minutiae of choreography, and the tone. Those ineffable qualities are what have made this series so beloved, and so lasting. And that’s a very difficult thing to recreate in a video game.

    It presented Indiana Jones and the Great Circle developer MachineGames with an extra challenge – not only did the team have to create a fantastic, modern-feeling game, but one that simultaneously captures the magic that swirls around the movies. It comes down to a question of balance: making a compelling game that still looks, feels, acts, and sounds like the movies it’s drawing inspiration from.

    In speaking to developers across MachineGames, it’s fascinating to hear how that was achieved, mixing modern game design with traditional filmmaking techniques, all in service of creating something that hits the sweet spot MachineGames has been striving for.

    [youtube https://www.youtube.com/watch?v=Qj9KoBhp11M?feature=oembed&w=500&h=281]

    Perhaps one of the best examples of the deep thinking applied comes out of a single scene – one you might even describe as incidental.

    ‘Indiana Jones and the Raiders of the Lost Ark’ contains perhaps one of cinema’s best-known jokes. A crowd parts, and our hero is faced with a menacing swordsman, brandishing a scimitar. He chuckles darkly, passing his sword from hand to hand, before twirling it with expert precision – a show of how tough this fight will be. Indy grimaces, pulls out his revolver, and drops him with a single shot. What we thought was about to be a fight scene becomes a punchline. It’s perfect.

    And it’s exactly the kind of scene that shouldn’t work in a video game. This is effectively the intro to a boss battle – this guy should have multiple attack patterns, three different health bars, the works. As it turns out, that very scene may have started as the challenge the team faced – but it became part of the solution:

    “That scene is a very good example of the type of humor one can experience in the classic Indy movies – priceless!” says Creative Director Axel Torvenius. “What we absolutely have been inspired by from that, and similar scenes, is that very humor. To have varied, engaging and rewarding combat encounters has been very important – but making sure we spice them up with the Indy humor has been equally important.”

    Taken on a wider level, this tells us a lot about MachineGames’ approach – in almost every regard, the team has gone the extra mile to capture the movies’ magic in a new form, even when it isn’t an immediately natural fit for gaming. And as you’ll see, this is just the tip of the iceberg.

    Indiana Jones Screenshot

    Matinée Idol

    “Sitting as close as possible to the original look and feel of ’80s cinema was something we wanted to get right from the beginning,” explains Torvenius. “There was never an interest in reinventing the look or feel of Indiana Jones – the core ambition was always to make sure it really hit home in terms of having a style close to ‘Raiders of the Lost Ark’.”

    You might be surprised at just how deep that effort goes. The team scrutinized the early films, not just for their tone and writing, but for technical detail. What color palettes and film grading were used? Which kind of film stock was in the cameras? How did the original audio team record sound effects? What kind of stunt work was done? And from there, the hard work began – translating those original techniques into not just a modern context, but an entirely different medium.

    Some of the stories here are fascinating. Torvenius explains that the team studied how the original film teams created their sets, and applied those rules to locations in the game:

    “Obviously in games, the big challenge is that you can constantly peek behind the curtain and go ‘backstage’ – you can roam freely and break the composition. But there are many locations throughout the game where we know from which direction the player will come, or where they will exit and what type of scenery they will see. So we identified those early and pushed those further so we can set the scene more in certain places.”

    For cutscenes, which are naturally more controlled, the team could go further: “Another big thing we did for this project was to have a Director of Photography on set for all the cinematic filming in the motion capture studio,” continues Torvenius. “We had the talented Kyle Klütz helping us and working in the mocap studio with this huge, heavy camera dolly rolling around to make sure we captured the right amount of velocity in pan, angular movement, composition and framing. Once we transfer this data into the cutscene shots in the game engine, it gives us a very solid start in terms of a camera work that feels reminiscent of the early Indiana Jones movies.”

    Indiana Jones Screenshot

    Pitch Perfect

    Sound is just as important as look for Indiana Jones, of course. From the iconic John Williams score, to the “feel” of its effects, to the iconic Wilhelm Scream (yes, it’s in the game), the soundscape of the movies is just as nostalgic as the look and story.

    “The first thing we did was try to identify the core elements of that Indiana Jones sound,” says Audio Director, Pete Ward. “What did we have to nail to evoke the feeling of playing as Indy, in a cinematic way? We sat down as a team and watched all the Indy movies again, and we realized there were several things we absolutely had to get right – Indy’s voice likeness, the musical score, the whip, the revolver, and the punches. There were other things too, like the sound of the puzzles, and the fantastical elements, where we constantly referenced the original movies and [original Indiana Jones sound designer] Ben Burtt’s sound design.”

    It led Ward’s team down some unexpected paths. The aim wasn’t to reuse sound effects directly from the movies, but to reproduce them as faithfully as possible to serve the game’s needs – which in some cases meant returning to techniques used by the original team more than 40 years ago.

    “We did hundreds of hours of original recordings, using props like the whip, the fedora, the leather jacket, and lots of different shoe types on lots of different surfaces,” continues Ward. “For impacts in particular, we also used techniques originally used by Ben Burtt and his team, like beating up leather jackets with baseball bats. We also used practical effects where possible, like plucking metal springs with contact mics attached, to get some of that old-school vibe in our spectacular set pieces.”

    The result is a game that sounds reminiscent of an ’80s movie – it’s still naturalistic, but listen closely and you’ll find it comes across in a different way from most modern games.

    The same went for the score – John Williams’ soundtracks are among the most recognizable in cinema history, but the aim was never simply to impersonate them. MachineGames brought in composer Gordy Haab to achieve that – a fitting choice given that he’s won awards for his work on multiple Star Wars games by drawing heavy inspiration from Williams, while making them his own.

    “Gordy was such a great composer to work with for this project – he really nailed the style and tone, and was able to emulate and seamlessly extend the original score where needed, while also creating entirely new themes for our story and characters that fit perfectly within the Indiana Jones universe,” enthuses Ward. “We were very careful about where and when we first hear certain themes as well – the Raiders March is the iconic, instantly recognizable theme for Indiana Jones, and we wanted to incorporate it at the right moments, but also develop our own musical story with our own new themes.”

    But the risk of creating new elements amid such an iconic score is that they’ll stick out – and again, MachineGames went the extra mile to ensure that this didn’t happen. Haab and Ward researched how the original soundtracks were recorded, and even recorded in the same studio, Abbey Road. Amazingly, they even found out that they’d created accidental connections to the original along the way:

    “We even had a couple of session musicians who played on the original sessions for Raiders,” explains Ward. “It was a lovely moment when they came to the control room after the session was finished and told us that!”

    Indiana Jones Screenshot

    Telling the Tale

    But where look and sound allowed the team to look back at what had come before, Indiana Jones and the Great Circle’s story needed to be something entirely new, yet totally fitting for both the franchise and the game’s setting between ‘Raiders of the Lost Ark’ and ‘The Last Crusade’. For Lead Narrative Designer Tommy Tordsson Björk, it required a different kind of research.

    “Indiana Jones has an incredibly rich lore with movies, comic books, games and more that we could dig into and use in different ways, not only for immersing the player in Indy’s world, but also to connect the different stories and characters. In this regard, our great working relationship with Lucasfilm Games helped us enormously.

    “From there, a lot of our work when developing the worldbuilding has been devoted to researching the 1930s, and then filtering it through the lens of what we call an ’Indy matinée adventure’ to make it feel both authentic and true to the story of this world.”

    You’ll see that commitment not just to the Indy series itself, but to the time period in which it’s set, in the way characters talk, the world around you, and even down to the era-appropriate spelling of Gizeh. MachineGames’ heritage – many of its developers, including Björk, came from the acclaimed Starbreeze – means the team has a lot of experience working with established franchises, from The Chronicles of Riddick to The Darkness, and it’s an experience that guided them in this new endeavour.

    “The approach that we’ve had on all of our games is to make them as true to what made the originals so great. We don’t want to retread what has already been told, but instead move into new territory that evokes the same tone and spirit,” says Björk. “I think what the development of Indy has taught us is the importance of letting the character control the path of both the story and the gameplay, because this franchise is so much defined by Indy and who he is to an even greater extent than the previous games we’ve worked on.”

    Indiana Jones Screenshot

    Playing With History

    And that leads us to the final piece of the development puzzle – turning the history of a movie series into a playable experience. How do you capture the excitement of a tightly-edited, linear movie in an interactive experience, where every player will choose to do things slightly differently, and take their Indy in different directions?

    Part of that is in returning to the movie making of it all, by grounding so much of what we play in real-life performance:

    “We have done so much motion capture for this game! I think this is the most motion capture and stunts we have ever done,” says Torvenius. “And some of the scenes we have in the game are quite wild from a stunt perspective. We shot a number of scenes at Goodbye Kansas in Stockholm, which has a ceiling height of almost 8 meters, just because some scenes required stunts to be performed from that height.

    “We’ve been working with some very talented stuntmen and stuntwomen throughout the production, and together with our talent director Tom Keegan I dare say we have some of the strongest action scenes from a MachineGames perspective yet. When it comes to capturing the look and feel of the stunts and action sequences in the early Indiana Jones movies, it has been a combined effort from various members within MG; obviously our Animation Director Henrik Håkansson and Cinematic Director Markus Söderqvist have an important part to play here for the look and feel of animations. And then the audio work from Audio Director Pete Ward and his department also plays an important part in making sure everything sounds true to the movies.”

    But even the smallest elements have been scrutinized – throwing a simple punch, for example:

    “It has been very important to make sure the combat feels fun and rewarding – easy enough to get drawn into, but hard to master for those who like to crank up the difficulty settings,” explains Torvenius. “We definitely wanted to capture the cinematic feel of the melee combat! Getting those heavy cinematic impact sounds in, having a good response from the spray of sweat and saliva as you punch someone in the face, interesting animations, and the behaviour of a hulking opponent coming towards you.”

    This depth of thought is everywhere in the game. Puzzles have been designed with an eye to whether they might feel right in the movies; locations are given the buzz not just of real life, but of a film set; and even the ability to use almost any disposable item as both a distraction and a weapon is drawn from the comic spirit of the movies.

    “One of the core ingredients in Indiana Jones is definitely humour. It is something we have worked hard on across every aspect of the game: environmental storytelling, script and VO, in cutscenes and story beats, and it absolutely needs to be conveyed in the minute-to-minute gameplay, such as combat. And it is not only the tools you use but also a lot of hard work from the engineering and animation teams to make sure we have interesting, rewarding and fun takedown animations. And on top of all of that you also need the best possible audio! And when that cocktail is shaken just the right amount, voilà – out comes something very delicious and fun!”

    Which brings us all the way back round to that iconic scene with the swordsman. In a normal game, no, that scene might not make sense when translated to a video game context. But in Indiana Jones and the Great Circle? Well, MachineGames has put in the research, the work, and the commitment to ensure that, while you’re playing this game – from solving spectacular puzzles to near-slapstick combat – it’ll feel worthy of those classic movies.


    Indiana Jones and the Great Circle comes to Xbox Series X|S and Windows PC (with Game Pass), or Steam on December 9. Premium and Collector’s Editions will offer up to 3 days of early access from December 6. 

    Indiana Jones and the Great Circle™: Digital Premium Edition

    Bethesda Softworks

    $99.99

    Pre-order now or Play on Game Pass* to receive The Last Crusade™ Pack with the Traveling Suit Outfit and Lion Tamer Whip, as seen in The Last Crusade™.

    Live the adventure with the Premium Edition of Indiana Jones and the Great Circle™! INCLUDES:
    • Base Game (digital code)
    • Up to 3-Day Early Access**
    • Indiana Jones and the Great Circle: The Order of Giants Story DLC†
    • Digital Artbook
    • Temple of Doom™ Outfit

    Uncover one of history’s greatest mysteries in Indiana Jones and the Great Circle, a first-person, single-player adventure set between the events of Raiders of the Lost Ark™ and The Last Crusade. The year is 1937, sinister forces are scouring the globe for the secret to an ancient power connected to the Great Circle, and only one person can stop them – Indiana Jones™. You’ll become the legendary archaeologist in this cinematic action-adventure game from MachineGames, the award-winning studio behind the recent Wolfenstein series, and executive produced by Hall of Fame game designer Todd Howard.

    YOU ARE INDIANA JONES
    Live the adventure as Indy in a thrilling story full of exploration, immersive action, and intriguing puzzles. As the brilliant archaeologist – famed for his keen intellect, cunning resourcefulness, and trademark humor – you will travel the world in a race against enemy forces to discover the secrets to one of the greatest mysteries of all time.

    A WORLD OF MYSTERY AWAITS
    Travel from the halls of Marshall College to the heart of the Vatican, the pyramids of Egypt, the sunken temples of Sukhothai, and beyond. When a break-in in the dead of night ends in a confrontation with a mysterious colossal man, you must set out to discover the world-shattering secret behind the theft of a seemingly unimportant artifact. Forging new alliances and facing familiar enemies, you’ll engage with intriguing characters, use guile and wits to solve ancient riddles, and survive intense set-pieces.

    WHIP-CRACKING ACTION
    Indiana’s trademark whip remains at the heart of his gear and can be used to distract, disarm, and attack enemies. But the whip isn’t just a weapon, it’s Indy’s most valuable tool for navigating the environment. Swing over unsuspecting patrols and scale walls as you make your way through a striking world. Combine stealth infiltration, melee combat, and gunplay to combat the enemy threat and unravel the mystery.

    THE SPIRIT OF DISCOVERY
    Venture through a dynamic mix of linear, narrative-driven gameplay and open-area maps. Indulge your inner explorer and unearth a world of fascinating secrets, deadly traps and fiendish puzzles, where anything could potentially hide the next piece of the mystery – or snakes. Why did it have to be snakes?

    *Game Pass members get access to all pre-order content as long as Game Pass subscription is active. **Actual play time depends on purchase date and applicable time zone differences, subject to possible outages. †DLC availability to be provided at a later date.

    Indiana Jones and the Great Circle™ Standard Edition

    Bethesda Softworks

    $69.99

    Pre-order now or Play on Game Pass* to receive The Last Crusade™ Pack with the Traveling Suit Outfit and Lion Tamer Whip, as seen in The Last Crusade™.

    Uncover one of history’s greatest mysteries in Indiana Jones and the Great Circle™, a first-person, single-player adventure set between the events of Raiders of the Lost Ark™ and The Last Crusade. The year is 1937, sinister forces are scouring the globe for the secret to an ancient power connected to the Great Circle, and only one person can stop them – Indiana Jones™. You’ll become the legendary archaeologist in this cinematic action-adventure game from MachineGames, the award-winning studio behind the recent Wolfenstein series, and executive produced by Hall of Fame game designer Todd Howard.

    YOU ARE INDIANA JONES
    Live the adventure as Indy in a thrilling story full of exploration, immersive action, and intriguing puzzles. As the brilliant archaeologist – famed for his keen intellect, cunning resourcefulness, and trademark humor – you will travel the world in a race against enemy forces to discover the secrets to one of the greatest mysteries of all time.

    A WORLD OF MYSTERY AWAITS
    Travel from the halls of Marshall College to the heart of the Vatican, the pyramids of Egypt, the sunken temples of Sukhothai, and beyond. When a break-in in the dead of night ends in a confrontation with a mysterious colossal man, you must set out to discover the world-shattering secret behind the theft of a seemingly unimportant artifact. Forging new alliances and facing familiar enemies, you’ll engage with intriguing characters, use guile and wits to solve ancient riddles, and survive intense set-pieces.

    WHIP-CRACKING ACTION
    Indiana’s trademark whip remains at the heart of his gear and can be used to distract, disarm, and attack enemies. But the whip isn’t just a weapon, it’s Indy’s most valuable tool for navigating the environment. Swing over unsuspecting patrols and scale walls as you make your way through a striking world. Combine stealth infiltration, melee combat, and gunplay to combat the enemy threat and unravel the mystery.

    THE SPIRIT OF DISCOVERY
    Venture through a dynamic mix of linear, narrative-driven gameplay and open-area maps. Indulge your inner explorer and unearth a world of fascinating secrets, deadly traps and fiendish puzzles, where anything could potentially hide the next piece of the mystery – or snakes. Why did it have to be snakes?

    *Game Pass members get access to all pre-order content as long as Game Pass subscription is active.

    Website: LINK

  • Fably bedtime storyteller

    Fably bedtime storyteller

    Reading Time: 3 minutes

    Childhood wonder

    Stefano’s first computer, a Commodore VIC-20, was something he could program himself and opened up a world of possibilities. Most importantly, this first computer awakened Stefano to the idea of tinkering and eventually led to him pursuing a degree in electronic engineering. Over the past 20 years he has worked with many tech startups and software companies, often with Apache Frontier Foundation, where he became a fellow and met many passionate inventors. Fably, however, was very much inspired by Stefano’s own family, particularly his nine-year-old daughter who kept asking him to invent new stories.

    Stefano had encountered LLMs (large language models) while working at Google Research and wondered whether he could use one to create a storytelling machine. Stefano found the command of language impressive but the LLM “felt like talking to a person that spoke like a college professor but had the understanding of the world of a five-year-old. It was a jarring experience especially when they confidently made stuff up.” The phenomenon is often referred to as ‘hallucination’ but Stefano says some colleagues at Google call it ‘fabulism’. He prefers this term and it is the origin of his Raspberry Pi project’s name. Importantly, ‘fably’ is also a word the text-to-speech synthesis API can pronounce.

    As well as making more sense than an overconfident LLM, the smart storyteller needed to come up with compelling stories that engaged the listener, and to be sufficiently autonomous that it could be used without continuous adult supervision. Being an ambitious, entrepreneurial type, Stefano also wondered about the commercial possibilities and whether Fably could be made at a sufficiently low cost to build a community around it. He notes that children are demanding users, being both “impatient and used to interactivity as a foundational aspect of learning”. It would be critical that the “time to first speech” (the time between the last word the child says and the first word coming out of the machine) be no more than a few seconds.

    Every cloud

    Since LLMs are very resource-intensive (as he knew from working on machine learning at Google), Stefano chose a cloud API-based approach to address the need for speed, and Raspberry Pi to keep costs down so other technically minded makers could create their own. Raspberry Pi felt like the best choice because of its price, availability, fantastic and very active community, and because it runs Linux directly – a development environment Stefano felt right at home in. Additional hardware such as a microphone could also be added easily. Stefano praised Raspberry Pi’s “relatively stable” I/O pinout across versions in ensuring “a healthy and diverse ecosystem of extension boards”, which could prove important should Fably become a commercial product.

    Fably makes full use of OpenAI cloud APIs, alongside a text-to-speech synthesiser with a warm and cosy voice. Stefano’s daughter enjoys the fact that she hears a slightly different story even if she makes the same request. Using a cloud setup means each story costs a few cents, but Fably can be set up to cache stories as well as to cap cloud costs.
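As a rough illustration of the caching and cost-cap idea described above, here is a minimal sketch. The class, prices, and behaviour are hypothetical stand-ins, not Fably's real code; the cloud call is injected as a plain function so the logic stays independent of any particular API.

```python
import hashlib

class StoryCache:
    """Sketch of cloud-story caching with a hard spending cap (hypothetical)."""

    def __init__(self, generate, cost_per_story=0.03, budget=1.00):
        self.generate = generate              # callable: prompt -> story text
        self.cost_per_story = cost_per_story  # rough cost of one cloud call
        self.budget = budget                  # cap on total cloud spend
        self.spent = 0.0
        self._cache = {}

    def get_story(self, prompt):
        # Normalise the request so "A dragon" and "a dragon" share a story
        key = hashlib.sha256(prompt.lower().encode()).hexdigest()
        if key in self._cache:
            return self._cache[key]           # cached replay costs nothing
        if self.spent + self.cost_per_story > self.budget:
            raise RuntimeError("cloud budget exhausted; cached stories only")
        self.spent += self.cost_per_story
        story = self.generate(prompt)         # e.g. a cloud LLM completion
        self._cache[key] = story
        return story
```

Repeated requests replay from the cache for free, which is how a setup like this can keep per-story costs down to a few cents while still capping the total bill.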

  • Gear Guide 2025 in The MagPi magazine issue 148

    Gear Guide 2025 in The MagPi magazine issue 148

    Reading Time: 2 minutes

    Gear Guide 2025

    Our Gear Guide 2025! has your back. Discover a treasure trove of Raspberry Pi devices and great accessories taking us into a glittering new year.

    Gift a project

    Sometimes the perfect gift is one you made yourself. Christmas Elf Rob Zwetsloot has a fantastic feature on constructing gifts using Raspberry Pi technology. On a budget? These projects break down the pricing so you can decide which project to put together.

    Bumpin' Sticker

    This issue is packed with amazing projects. Our favourite is this Bumpin’ Sticker, which attaches an 11.3-inch LCD display to the bumper of a car and hooks up to the car radio. It displays the song and artist that you are listening to by scraping data from last.fm. It’s fun, but also a serious demonstration of different technologies.
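The core of a project like this, fetching the current track from last.fm, could look something like the sketch below. It parses a dict shaped like the JSON returned by last.fm's public user.getRecentTracks endpoint; treat the exact response structure as an assumption here rather than a guarantee, and note this is not the maker's actual code.

```python
def now_playing(payload):
    """Return (artist, title) for the track flagged as now playing, or None.

    `payload` mirrors the JSON shape of last.fm's user.getRecentTracks
    response (assumed structure, for illustration only).
    """
    tracks = payload.get("recenttracks", {}).get("track", [])
    for track in tracks:
        # last.fm marks the in-progress track with a "nowplaying" attribute
        if track.get("@attr", {}).get("nowplaying") == "true":
            return track["artist"]["#text"], track["name"]
    return None
```

A real build would poll the endpoint every few seconds with an API key and push the result to the bumper display.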

    Bluetooth Bakelite phone headset

    This Bluetooth headset is built into the body of a Dutch phone from 1950, simply called a ‘type 1950’. It’s powered by an ESP32 development board, and it works well enough that its creator, Jouke Waleson, can use it in a professional setting.

    PiDog

    Featuring 12 servos, PiDog is a metal marvel that can do (almost) anything a real dog can do. Walk, sit, lie down, doze, bark, howl, pant, scratch, shake a paw… Equipped with a bunch of sensors, it can self-balance, discern the direction of sounds, detect obstacles, and see where it’s going. You can even get a dog’s-eye view from its nose-mounted camera via a web page or companion app.

    You’ll find all this and much more in the latest edition of The MagPi magazine. Pick up your copy today from our store, or subscribe to get every issue delivered to your door.

  • Win one of three Thumby Color game systems

    Win one of three Thumby Color game systems

    Reading Time: < 1 minute

    A ton of supporting products launched alongside Raspberry Pi Pico 2 and the RP2350, including many powered by RP2350 itself. One of these was the excellent Thumby Color game system, and we finally have a few for a competition – enter below…

  • Ada Computer Science: What have we learnt so far

    Ada Computer Science: What have we learnt so far

    Reading Time: 3 minutes

    It’s been over a year since we launched Ada Computer Science, and we continue to see the numbers of students and teachers using the platform all around the world grow. Our recent year in review shared some of the key developments we’ve made since launching, many of which are a direct result of feedback from our community.

    Today, we are publishing an impact report that includes some of this feedback, along with what users are saying about the impact Ada Computer Science is having.

    Computer science students at a desktop computer in a classroom.

    Evaluating Ada Computer Science

    Ada Computer Science is a free learning platform for computer science students and teachers. It provides high-quality, online learning materials to use in the classroom, for homework, and for revision. Our experienced team has created resources that cover every topic in the leading GCSE and A level computer science specifications.

    From May to July 2024, we invited users to provide feedback via an online survey, and we got responses from 163 students and 27 teachers. To explore the feedback further, we also conducted in-depth interviews with three computer science teachers in September 2024.

    How is Ada being used?

    The most common ways students use Ada Computer Science — as reported by more than two thirds of respondents — are for revision and/or to complete work set by their teacher. Similarly, teachers most commonly said that they direct students to use Ada outside the classroom.

    “I recommend my students use Ada Computer Science as their main textbook.” — Teacher

    What is users’ experience of using Ada?

    Most respondents agreed or strongly agreed that Ada is useful for learning (82%) and high quality (79%).

    “Ada Computer Science has been very effective for independent revision, I like how it provides hints and pointers if you answer a question incorrectly.” — Student

    Ada users were generally positive about their overall experience of the platform and using it to find the information they were looking for.

    “Ada is one of the best for hitting the nail on the head. They’ve really got it in tune with the depth that exam boards want.” — Ian Robinson, computer science teacher (St Alban’s Catholic High School, UK)

    What impact is Ada having?

    Around half of the teachers agreed that Ada had reduced their workload and/or increased their subject knowledge. Across all respondents, teachers estimated that the average weekly time saving was 1 hour 8 minutes.

    Additionally, 81% of students agreed that as a result of using Ada, they had become better at understanding computer science concepts. Other benefits were reported too, with most students agreeing that they had become better problem-solvers, for example.

    “I love Ada! It is an extremely helpful resource… The content featured is very comprehensive and detailed, and the visual guides… are particularly helpful to aid my understanding.” — Student

    Future developments

    Since receiving this feedback, we have already released updated site navigation and new question finder designs. In 2025, we are planning improvements to the markbook (for example, giving teachers an overview of the assignments they’ve set) and to how assignments can be created.

    If you’d like to read more about the findings, there’s a full report for you to download. Thank you to everyone who took the time to take part — we really value your feedback!

    Website: LINK

  • Celebrating the community: Prabhath

    Celebrating the community: Prabhath

    Reading Time: 5 minutes

    We love hearing from members of the community and sharing the stories of amazing young people, volunteers, and educators who are using their passion for technology to create positive change in the world around them.

    An educator sits in a library.

    Prabhath, the founder of the STEMUP Educational Foundation, began his journey into technology at an early age, influenced by his cousin, Harindra.

    “He’s the one who opened up my eyes. Even though I didn’t have a laptop, he had a computer, and I used to go to their house and practise with it. That was the turning point in my life.”

    Watch the video: https://www.youtube.com/watch?v=gNRn6SmdBek

    This early exposure to technology, combined with support from his parents to leave his rural home in search of further education, set Prabhath on a path to address a crucial issue in Sri Lanka’s education system: the gap in opportunities for students, especially in STEM education. 

    “There was a gap between the kids who are studying in Sri Lanka versus the kids in other developed markets. We tried our best to see how we can bridge this gap with our own capacity, with our own strengths.” 

    Closing the gap through STEMUP

    Recognising the need to close this gap in opportunities, Prabhath, along with four friends who worked with him in his day job as a Partner Technology Strategist, founded the STEMUP Educational Foundation in 2016.  STEMUP’s mission is straightforward but ambitious — it seeks to provide Sri Lankan students with equal access to STEM education, with a particular focus on those from underserved communities.

    A group of people stands together, engaged in a lively discussion.

    To help close the gap, Prabhath and his team sought to establish coding clubs for students across the country. Noting the lack of infrastructure and access to resources in many parts of Sri Lanka, they partnered with Code Club at the Raspberry Pi Foundation to get things moving. 

    Their initiative started small with a Code Club in the Colombo Public Library, but things quickly gained traction. 

    What began with just a handful of friends has now grown into a movement involving over 1,500 volunteers who are all working to provide free education in coding and emerging technologies to students who otherwise wouldn’t have access.

    An educator helps a young person at a Code Club.

    A key reason for STEMUP’s reach has been the mobilisation of university students to serve as mentors at the Code Clubs. Prabhath believes this partnership has not only helped the success of Code Club Sri Lanka, but also given the university students themselves a chance to grow, granting them opportunities to develop the life skills needed to thrive in the workforce. 

    “The main challenge we see here today, when it comes to graduate students, is that they have the technology skills, but they don’t have soft skills. They don’t know how to do a presentation, how to manage a project from A to Z, right? By being a volunteer, that particular student can gain 360-degree knowledge.” 

    Helping rural communities

    STEMUP’s impact stretches beyond cities and into rural areas, where young people often have even fewer opportunities to engage with technology. The wish to address this imbalance  is a big motivator for the student mentors.

    “When we go to rural areas, the kids don’t have much exposure to tech. They don’t know about the latest technologies. What are the new technologies for that development? And what subjects can they  study for the future job market? So I think I can help them. So I actually want to teach someone what I know.” – Kasun, Student and Code Club mentor

    This lack of access to opportunities is precisely what STEMUP aims to change, giving students a platform to explore, innovate, and connect with the wider world.

    Coolest Projects Sri Lanka

    STEMUP recently held the first Coolest Projects Sri Lanka, a showcase for the creations of young learners. Prabhath first encountered Coolest Projects while attending the Raspberry Pi Foundation Asia Partner summit in Malaysia. 

    “That was my first experience with the Coolest Projects,” says Prabhath, “and when I came back, I shared the idea with our board and fellow volunteers. They were all keen to bring it to Sri Lanka.” 

    For Prabhath, the hope is that events like these will open students’ eyes to new possibilities. The first event certainly lived up to his hope. There was a lot of excitement, especially in rural areas, with multiple schools banding together and hiring buses to attend the event. 

    “That kind of energy… because they do not have these opportunities to showcase what they have built, connect with like minded people, and connect with the industry.”

    Building a better future

    Looking ahead, Prabhath sees STEMUP’s work as a vital part of shaping the future of education in Sri Lanka. By bringing technology to public libraries, engaging university students as mentors, and giving kids hands-on experience with coding and emerging technologies, STEMUP is empowering the next generation to thrive in a digital world. 

    “These programmes are really helpful for kids to win the future, be better citizens, and bring this country forward.”

    Young people showcase their tech creations at Coolest Projects.

    STEMUP is not just bridging a gap — it’s building a brighter, more equitable future for all students in Sri Lanka. We can’t wait to see what they achieve next!

    Inspire the next generation of young coders

    To find out how you and young creators you know can get involved in Coolest Projects, visit coolestprojects.org. If the young people in your community are just starting out on their computing journey, visit our projects site for free, fun beginner coding projects.

    For more information to help you set up a Code Club in your community, visit codeclub.org.

    Help us celebrate Prabhath and his inspiring journey with STEMUP by sharing this story on X, LinkedIn, and Facebook.

    Website: LINK

  • Pibo the bipedal robot review

    Pibo the bipedal robot review

    Reading Time: 2 minutes

    It comes fully assembled, which is very nice, as putting the various motors and other components together correctly has been a pain with similar products in the past. All you need to do is turn it on and get it connected to your Wi-Fi network, either via a wireless access point the robot creates, or via a wired connection if you have a USB to Ethernet adapter handy.

    The whole thing is powered by a Raspberry Pi Compute Module 4, so it has plenty of oomph – especially needed for the computer vision and voice recognition tasks.

    A cute little robot – well, it’s 40cm tall which isn’t that little

    I have control

    The robot itself is made in Korea, and most of the surrounding documentation and such are in Korean as a result. However, the tools and IDE (integrated development environment) can be switched to English just fine, and we didn’t experience any language issues.

    The tools allow you to play around with the various functions of the robot. Changing the colours of the eyes (independently if you wish), checking if the motion-sensing and touch inputs are working, recording sounds, playing sounds, moving the various motors – you can get a great feel for what the robot can do. With a solid grasp of this, you can then start programming the robot in the IDE.

    There are a couple of programming methods – one is a block-based flow a little like Node-RED, which also helps you understand the coding logic and variables of Pibo, and then there’s the Python programming mode which allows for full control.

    The functionality is huge, and we were really impressed by the object detection built into the camera. We also like making little messages and images on small LED screens, so having interactive elements that worked with the 128×64 display scratched a specific itch for us.

    Pibo comes pre-made in this fancy briefcase. Just pop on the antenna

    Learning for all ages

    While the whole system may not be suitable for teaching people taking their very first steps into coding, or even robotics, it’s a great next step thanks to its intuitive design that lets you play with its features, and block-based programming that can lead into Python. The price is a little hefty, and some English features are still incoming, but we had a great time using Pibo either way – one for the little desk display, we think.

    Specs

    Dimensions: 250(w) × 395(h) × 125(d) mm, 2.2kg

    Inputs: Touch sensor, MEMS microphone, PIR sensor, USB 2.0 port

    Outputs: 2x speakers, 128×64 OLED display, USB 2.0 port

    Verdict

    9/10

    A cute and very easy to use robot with a ton of functionality that will take some time to fully discover.

  • Exploring how well Experience AI maps to UNESCO’s AI competency framework for students

    Exploring how well Experience AI maps to UNESCO’s AI competency framework for students

    Reading Time: 9 minutes

    During this year’s annual Digital Learning Week conference in September, UNESCO launched their AI competency frameworks for students and teachers. 

    What is the AI competency framework for students? 

    The UNESCO competency framework for students serves as a guide for education systems across the world to help students develop the necessary skills in AI literacy and to build inclusive, just, and sustainable futures in this new technological era.

    It is an exciting document because, as well as being comprehensive, it’s the first global framework of its kind in the area of AI education.

    The framework serves three specific purposes:

    • It offers a guide on essential AI concepts and skills for students, which can help shape AI education policies or programs at schools
    • It aims to shape students’ values, knowledge, and skills so they can understand AI critically and ethically
    • It suggests a flexible plan for when and how students should learn about AI as they progress through different school grades

    The framework is a starting point for policy-makers, curriculum developers, school leaders, teachers, and educational experts to look at how it could apply in their local contexts. 

    It is not possible to create a single curriculum suitable for all national and local contexts, but the framework flags the necessary competencies for students across the world to acquire the values, knowledge, and skills necessary to examine and understand AI critically from a holistic perspective.

    How does Experience AI compare with the framework?

    A group of researchers and curriculum developers from the Raspberry Pi Foundation, all with a focus on AI literacy, attended the conference, and afterwards we tasked ourselves with taking a deep dive into the student framework and mapping our Experience AI resources to it. Our aims were to:

    • Identify how the framework aligns with Experience AI
    • See how the framework aligns with our research-informed design principles
    • Identify gaps or next steps

    Experience AI is a free educational programme that offers cutting-edge resources on artificial intelligence and machine learning for teachers, and their students aged 11 to 14. Developed in collaboration with the Raspberry Pi Foundation and Google DeepMind, the programme provides everything that teachers need to confidently deliver engaging lessons that will teach, inspire, and engage young people about AI and the role that it could play in their lives. The current curriculum offering includes a ‘Foundations of AI’ 6-lesson unit, 2 standalone lessons (‘AI and ecosystems’ and ‘Large language models’), and the 3 newly released AI safety resources. 

    Working through each lesson objective in the Experience AI offering, we compared them with each curricular goal to see where they overlapped. We have made this mapping publicly available so that you can see this for yourself: Experience AI – UNESCO AI Competency framework students – learning objective mapping (rpf.io/unesco-mapping)

    The first thing we discovered was that the mapping of the objectives did not have a 1:1 basis. For example, when we looked at a learning objective, we often felt that it covered more than one curricular goal from the framework. That’s not to say that the learning objective fully met each curricular goal, rather that it covers elements of the goal and in turn the student competency. 

    Once we had completed the mapping process, we analysed the results by totalling the number of objectives that had been mapped against each competency aspect and level within the framework.
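The totalling step can be sketched in a few lines of Python. The objectives and goals below are invented stand-ins, not the real mapping data; the point is only that each objective can link to several (aspect, level) pairs and the links are tallied per pair.

```python
from collections import Counter

# Invented stand-in data: each learning objective maps to one or more
# (competency aspect, progression level) pairs from the framework.
mapping = {
    "LO1: describe how an AI model is trained": [
        ("ML techniques and applications", "Understand"),
        ("Human-centred mindset", "Understand"),
    ],
    "LO2: identify bias in a training dataset": [
        ("Ethics of AI", "Understand"),
        ("Ethics of AI", "Apply"),
    ],
}

def totals(mapping):
    """Tally how many objective-to-goal links land on each (aspect, level)."""
    counts = Counter()
    for goals in mapping.values():
        counts.update(goals)
    return counts
```

Summing the tallies per aspect or per level is then enough to draw the kind of bar charts and heatmap discussed later.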

    This provided us with an overall picture of where our resources are positioned against the framework. Whilst the majority of the objectives for all of the resources are in the ‘Human-centred mindset’ category, the analysis showed that there is still a relatively even spread of objectives in the other three categories (Ethics of AI, ML techniques and applications, and AI system design). 

    As the current resource offering is targeted at the entry level to AI literacy, it is unsurprising to see that the majority of the objectives were at the level of ‘Understand’. It was, however, interesting to see how many objectives were also at the ‘Apply’ level. 

    It is encouraging to see that the different resources from Experience AI map to different competencies in the framework. For example, the 6-lesson foundations unit aims to give students a basic understanding of how AI systems work and the data-driven approach to problem solving. In contrast, the AI safety resources focus more on the principles of Fairness, Accountability, Transparency, Privacy, and Security (FATPS), most of which fall more heavily under the ethics of AI and human-centred mindset categories of the competency framework. 

    What did we learn from the process? 

    Our principles align 

    We built the Experience AI resources on design principles based on the knowledge curated by Jane Waite and the Foundation’s researchers. One of our aims of the mapping process was to see if the principles that underpin the UNESCO competency framework align with our own.

    Avoiding anthropomorphism 

    Anthropomorphism refers to the concept of attributing human characteristics to objects or living beings that aren’t human. For reasons outlined in the blog I previously wrote on the issue, a key design principle for Experience AI is to avoid anthropomorphism at all costs. In our resources, we are particularly careful with the language and images that we use. Putting the human in the process is a key way in which we can remind students that it is humans who design and are responsible for AI systems. 

    Young people use computers in a classroom.

    It was reassuring to see that the UNESCO framework has many curricular goals that align closely to this, for example:

    • Foster an understanding that AI is human-led
    • Facilitate an understanding on the necessity of exercising sufficient human control over AI
    • Nurture critical thinking on the dynamic relationship between human agency and machine agency

    SEAME

    The SEAME framework created by Paul Curzon and Jane Waite offers a way for teachers, resource developers, and researchers to talk about the focus of AI learning activities by separating them into four layers: Social and Ethical (SE), Application (A), Models (M), and Engines (E). 

    The SEAME model and the UNESCO AI competency framework take two different approaches to categorising AI education — SEAME describes levels of abstraction for conceptual learning about AI systems, whereas the competency framework separates concepts into strands with progression. We found that although the alignment between the frameworks is not direct, the same core AI and machine learning concepts are broadly covered across both. 

    Computational thinking 2.0 (CT2.0)

    The concept of computational thinking 2.0 (a data-driven approach) stems from research by Professor Matti Tedre and Dr Henriikka Vartiainen from the University of Eastern Finland. The essence of this approach establishes AI as a different way to solve problems using computers compared to a more traditional computational thinking approach (a rule-based approach). This does not replace the traditional computational approach, but instead requires students to approach the problem differently when using AI as a tool. 

    An educator points to an image on a student's computer screen.

    The UNESCO framework includes many references within their curricular goals that places the data-driven approach at the forefront of problem solving using AI, including:

    • Develop conceptual knowledge on how AI is trained based on data 
    • Develop skills on assessing AI systems’ need for data, algorithms, and computing resources

    Where we slightly differ in our approach is the framework’s regular use of the term ‘algorithm’, particularly in the Understand and Apply levels. We have chosen to differentiate AI systems from traditional computational thinking approaches by avoiding the term ‘algorithm’ at the foundational stage of AI education. We believe learners need a firm mental model of data-driven systems before they can understand that the Models (M) and Engines (E) layers of the SEAME model refer to algorithms (which would possibly correspond to the Create stage of the UNESCO framework). 

    We can identify areas for exploration

    As part of the international expansion of Experience AI, we have been working with partners from across the globe to bring AI literacy education to students in their settings. Part of this process has involved working with our partners to localise the resources, but also to provide training on the concepts covered in Experience AI. During localisation and training, our partners often have lots of queries about the lesson on bias. 

    As a result, we decided to see if mapping taught us anything about this lesson in particular, and if there was any learning we could take from it. At close inspection, we found that the lesson covers two out of the three curricular goals for the Understand element of the ‘Ethics of AI’ category (Embodied ethics). 

    Specifically, we felt the lesson:

    • Illustrates dilemmas around AI and identifies the main reasons behind ethical conflicts
    • Facilitates scenario-based understandings of ethical principles on AI and their personal implications

    What we felt isn’t covered in the lesson is:

    • Guide the embodied reflection and internalisation of ethical principles on AI

    Exploring this further, the framework describes this curricular goal as:

    Guide students to understand the implications of ethical principles on AI for their human rights, data privacy, safety, human agency, as well as for equity, inclusion, social justice and environmental sustainability. Guide students to develop embodied comprehension of ethical principles; and offer opportunities to reflect on personal attitudes that can help address ethical challenges (e.g. advocating for inclusive interfaces for AI tools, promoting inclusion in AI and reporting discriminatory biases found in AI tools).

    We realised that this doesn’t mean that the lesson on bias is ineffective or incomplete, but it does help us to think more deeply about the learning objective for the lesson. This may be something we will look to address in future iterations of the foundations unit or even in the development of new resources. What we have identified is a process that we can follow, which will help us with our decision making in the next phases of resource development. 

    How does this inform our next steps?

    As part of the analysis of the resources, we created a simple heatmap of how the Experience AI objectives relate to the UNESCO progression levels. As with the bar charts, the heatmap indicated that the majority of the objectives sit within the Understand level of progression, with fewer in Apply, and fewest in Create. As previously mentioned, this is to be expected with the resources being “foundational”. 

    The heatmap has, however, helped us to identify some interesting points about our resources that warrant further thought. For example, under the ‘Human-centred mindset’ competency aspect, there are more objectives under Apply than there are Understand. For ‘AI system design’, architecture design is the least covered aspect of Apply. 

    By identifying these areas for investigation, again it shows that we’re able to add the learnings from the UNESCO framework to help us make decisions.

    What next? 

    This mapping process has been a very useful exercise in many ways for those of us working on AI literacy at the Raspberry Pi Foundation. The process of mapping the resources gave us an opportunity to have deep conversations about the learning objectives and question our own understanding of our resources. It was also very satisfying to see that the framework aligns well with our own research-informed design principles, such as the SEAME model and avoiding anthropomorphisation. 

    The mapping process has been a good starting point for us to understand UNESCO’s framework and we’re sure that it will act as a useful tool to help us make decisions around future enhancements to our foundational units and new free educational materials. We’re looking forward to applying what we’ve learnt to our future work! 

    Website: LINK

  • Using generative AI to teach computing: Insights from research

    Using generative AI to teach computing: Insights from research

    Reading Time: 7 minutes

    As computing technologies continue to rapidly evolve in today’s digital world, computing education is becoming increasingly essential. Arto Hellas and Juho Leinonen, researchers at Aalto University in Finland, are exploring how innovative teaching methods can equip students with the computing skills they need to stay ahead. In particular, they are looking at how generative AI tools can enhance university-level computing education. 

    In our monthly seminar in September, Arto and Juho presented their research on using AI tools to provide personalised learning experiences and automated responses to help requests, as well as their findings on teaching students how to write effective prompts for generative AI systems. While their research focuses primarily on undergraduate students — given that they teach such students — many of their findings have potential relevance for primary and secondary (K-12) computing education. 

    Students attend a lecture at a university.

    Generative AI consists of algorithms that can generate new content, such as text, code, and images, based on the input received. Ever since large language models (LLMs) such as ChatGPT and Copilot became widely available, there has been a great deal of attention on how to use this technology in computing education. 

    Arto and Juho described generative AI as one of the fastest-moving topics they had ever worked on, and explained that they were trying to see past the hype and find meaningful uses of LLMs in their computing courses. They presented three studies in which they used generative AI tools with students in ways that aimed to improve the learning experience. 

    Using generative AI tools to create personalised programming exercises

    An important strand of computing education research investigates how to engage students by personalising programming problems based on their interests. The first study in Arto and Juho’s research took place within an online programming course for adult students. It involved developing a tool that used GPT-4 (the latest GPT model available at that time) to generate exercises with personalised aspects. Students could select a theme (e.g. sports, music, video games), a topic (e.g. a specific word or name), and a difficulty level for each exercise.
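A tool like this might thread the student's three choices into a generation request roughly as follows. The prompt wording here is invented for illustration; it is not the prompt used in the study.

```python
def build_exercise_prompt(theme, topic, difficulty):
    """Assemble an LLM request for a personalised programming exercise.

    Illustrative only: shows how the three student choices (theme, topic,
    difficulty) could be injected into a prompt, not the study's wording.
    """
    return (
        f"Write a {difficulty}-level Python programming exercise "
        f"on the theme of {theme}. Weave '{topic}' throughout the whole "
        f"problem statement rather than mentioning it in a single sentence, "
        f"and include a clear task description with example input and output."
    )
```

Asking the model to use the topic "throughout the whole problem statement" is one plausible way to push it towards the deep rather than shallow personalisation discussed below.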

    A student in a computing classroom.

    Arto, Juho, and their students evaluated the personalised exercises that were generated. Arto and Juho used a rubric to evaluate the quality of the exercises and found that they were clear and had the themes and topics that had been requested. Students’ feedback indicated that they found the personalised exercises engaging and useful, and preferred these over randomly generated exercises. 

    However, when Arto and Juho evaluated the personalisation itself, they found that exercises were often only shallowly personalised. In shallow personalisations, the personalised content appeared in just one sentence, whereas in deep personalisations it ran throughout the whole problem statement. It should be noted that in the examples taken from the seminar below, the terms 'shallow' and 'deep' do not judge the worthiness of the topic itself; they describe whether the personalisation was somewhat tokenistic or more meaningful within the exercise. 

    In these examples from the study, the shallow personalisation contains only one sentence to contextualise the problem, while in the deep example the whole problem statement is personalised. 

    The findings suggest that this personalised approach may be particularly effective on large university courses, where instructors might struggle to give one-on-one attention to every student. The findings further suggest that generative AI tools can be used to personalise educational content and help ensure that students remain engaged. 

    How might all this translate to K-12 settings? Learners in primary and secondary schools often have a wide range of prior knowledge, lived experiences, and abilities. Personalised programming tasks could help diverse groups of learners engage with computing, and give educators a deeper understanding of the themes and topics that are interesting for learners. 

    Responding to help requests using large language models

    Another key aspect of Arto and Juho's work is exploring how LLMs can be used to generate responses to students' requests for help. They conducted a study using an online platform containing programming exercises for students. Every time a student struggled with a particular exercise, they could submit a help request, which went into a queue for a teacher to review, comment on, and return to the student. 

    The study aimed to investigate whether an LLM could effectively respond to these help requests and reduce the teachers’ workloads. An important principle was that the LLM should guide the student towards the correct answer rather than provide it. 
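    That guide-don't-solve principle can be expressed directly in the prompt sent to the model. The sketch below is a hedged illustration of the approach; the function names and wording are assumptions, not the study's implementation.

    ```python
    # Illustrative sketch: a help-request prompt that asks the model to guide
    # the student rather than reveal the solution. Names are assumptions.

    def build_help_messages(exercise: str, student_code: str) -> list:
        system = (
            "You are a teaching assistant. Point out problems in the student's "
            "code and suggest what to look at next. Never write the corrected "
            "code or state the full solution."
        )
        user = f"Exercise:\n{exercise}\n\nStudent's code:\n{student_code}"
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ]

    messages = build_help_messages(
        "Print the numbers 1 to 5.",
        "for i in range(5):\n    print(i)",
    )
    ```

    In a real system these messages would be sent to the LLM, and a teacher might still review the response before it reaches the student.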

    The study used GPT-3.5, which was the newest version available at the time. The results showed that the LLM was able to analyse and detect logical and syntactical errors in code, but concerningly, its responses also addressed some non-existent problems! This is an example of hallucination, where an LLM outputs something false that does not reflect the input it was given. 

    An example of how an LLM was able to detect a logical error in code, but also hallucinated and provided an unhelpful, false response about a non-existent syntactical error. 

    The finding that LLMs often generated both helpful and unhelpful problem-solving strategies suggests that this is not a technology to rely on in the classroom just yet. Arto and Juho intend to track the effectiveness of LLMs as newer versions are released, and explained that GPT-4 seems to detect errors more accurately, but there is no systematic analysis of this yet. 

    In primary and secondary computing classes, young learners often face similar challenges to those encountered by university students, such as struggling to write error-free code and debug programs. LLMs seemingly have a lot of potential to support young learners in overcoming such challenges, while also being valuable educational tools for teachers without strong computing backgrounds. Instant feedback is critical for young learners who are still developing their computational thinking skills. LLMs can provide such feedback, and could be especially useful for teachers who lack the resources to give individualised attention to every learner. Again, though, further research into LLM-based feedback systems is needed before they can be implemented en masse in classroom settings. 

    Teaching students how to prompt large language models

    Finally, Arto and Juho presented a study where they introduced the idea of ‘Prompt Problems’: programming exercises where students learn how to write effective prompts for AI code generators using a tool called Promptly. In a Prompt Problem exercise, students are presented with a visual representation of a problem that illustrates how input values will be transformed to an output. Their task is to devise a prompt (input) that will guide an LLM to generate the code (output) required to solve the problem. Prompt-generated code is evaluated automatically by the Promptly tool, helping students to refine the prompt until it produces code that solves the problem.
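    The automatic evaluation step can be imagined as running the LLM-generated code against known input/output pairs. Below is a minimal sketch of that idea, assuming a simple test-case format; it is not Promptly's actual implementation.

    ```python
    # Minimal sketch of how a Prompt Problem checker might grade
    # LLM-generated code: run it against known input/output pairs.
    # This is an assumption about the approach, not Promptly's code.

    def passes_all_tests(code: str, func_name: str, cases: list) -> bool:
        namespace = {}
        try:
            exec(code, namespace)          # define the generated function
            func = namespace[func_name]
            return all(func(*args) == expected for args, expected in cases)
        except Exception:
            return False                   # broken code fails the problem

    generated = "def double(x):\n    return x * 2"
    cases = [((2,), 4), ((5,), 10)]
    print(passes_all_tests(generated, "double", cases))  # prints True
    ```

    If the generated code fails, the student revises the prompt and tries again, which is exactly the refinement loop described above.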

    Feedback from students suggested that using Prompt Problems was a good way for them to gain experience of using new programming concepts and develop their computational thinking skills. However, students were frustrated that bugs in the code had to be fixed by amending the prompt — it was not possible to edit the code directly. 

    How these findings relate to K-12 computing education is still to be explored, but they indicate that Prompt Problems with text-based programming languages could be valuable exercises for older pupils with a solid grasp of foundational programming concepts. 

    Balancing the use of AI tools with fostering a sense of community

    At the end of the presentation, Arto and Juho summarised their work and hypothesised that as society develops more and more AI tools, computing classrooms may lose some of their community aspects. They posed a very important question for all attendees to consider: “How can we foster an active community of learners in the generative AI era?” 

    In our breakout groups and the subsequent whole-group discussion, we began to think about the role of community. Some points raised highlighted the importance of working together to accurately identify and define problems, and sharing ideas about which prompts would work best to accurately solve the problems. 

    As AI technology continues to evolve, its role in education will likely expand. There was general agreement in the question and answer session that keeping a sense of community at the heart of computing classrooms will be important. 

    Arto and Juho asked seminar attendees to think about encouraging a sense of community. 

    Further resources

    The Raspberry Pi Computing Education Research Centre and Faculty of Education at the University of Cambridge have recently published a teacher guide on the use of generative AI tools in education. The guide provides practical guidance for educators who are considering using generative AI tools in their teaching. 

    Join our next seminar

    In our current seminar series, we are exploring how to teach programming with and without AI technology. Join us at our next seminar on Tuesday, 12 November at 17:00–18:30 GMT to hear Nicholas Gardella (University of Virginia) discuss the effects of using tools like GitHub Copilot on the motivation, workload, emotion, and self-efficacy of novice programmers. To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • Teaching about AI in schools: Take part in our Research and Educator Community Symposium

    Teaching about AI in schools: Take part in our Research and Educator Community Symposium

    Reading Time: 4 minutes

    Worldwide, the use of generative AI systems and related technologies is transforming our lives. From marketing and social media to education and industry, these technologies are being used everywhere, even if it isn’t obvious. Yet, despite the growing availability and use of generative AI tools, governments are still working out how and when to regulate such technologies to ensure they don’t cause unforeseen negative consequences.

    How, then, do we equip our young people to deal with the opportunities and challenges that they are faced with from generative AI applications and associated systems? Teaching them about AI technologies seems an important first step. But what should we teach, when, and how?

    A teacher aids children in the classroom

    Researching AI curriculum design

    Researchers at the Raspberry Pi Foundation have been reviewing research to help inform curriculum design and resource development for teaching about AI in school. As part of this work, a number of research themes have been established, which we would like to explore with educators at a face-to-face symposium. 

    These research themes include the SEAME model, a simple way to analyse learning experiences about AI technology, as well as anthropomorphisation and how this might influence the formation of mental models about AI products. These research themes have become the cornerstone of the Experience AI resources we’ve co-developed with Google DeepMind. We will be using these materials to exemplify how the research themes can be used in practice as we review the recently published UNESCO AI competencies.

    A group of educators at a workshop.

    Most importantly, we will also review how we can help teachers and learners move from a rule-based view of problem solving to a data-driven view, from computational thinking 1.0 to computational thinking 2.0.

    A call for teacher input on the AI curriculum

    Over ten years ago, teachers in England experienced a large-scale change in what they needed to teach in computing lessons when programming was more formally added to the curriculum. As we enter a similar period of change — this time to introduce teaching about AI technologies — we want to hear from teachers as we collectively start to rethink our subject and curricula. 

    We think it is imperative that educators’ voices are heard as we reimagine computer science and add data-driven technologies into an already densely packed learning context. 

    Educators at a workshop.

    Join our Research and Educator Community Symposium

    On Saturday, 1 February 2025, we are running a Research and Educator Community Symposium in collaboration with the Raspberry Pi Computing Education Research Centre.

    In this symposium, we will bring together UK educators and researchers to review research themes, competency frameworks, and early international AI curricula and to reflect on how to advance approaches to teaching about AI. This will be a practical day of collaboration to produce suggested key concepts and pedagogical approaches and highlight research needs. 

    Educators and researchers at an event.

    This symposium focuses on teaching about AI technologies, so we will not be looking at which AI tools might be used in general teaching and learning or how they may change teacher productivity. 

    It is vitally important for young people to learn how to use AI technologies in their daily lives so they can become discerning consumers of AI applications. But how should we teach them? Please help us start to consider the best approach by signing up for our Research and Educator Community Symposium by 9 December 2024.

    Information at a glance

    When: Saturday, 1 February 2025 (10am to 5pm)

    Where: Raspberry Pi Foundation Offices, Cambridge

    Who: If you have started teaching about AI, are creating related resources, are providing professional development about AI technologies, or if you are planning to do so, please apply to attend our symposium. Travel funding is available for teachers in England.

    Please note we expect to be oversubscribed, so book early and tell us about why you are interested in taking part. We will notify all applicants of the outcome of their application by 11 December.

    Website: LINK

  • Introducing new artificial intelligence and machine learning projects for Code Clubs

    Introducing new artificial intelligence and machine learning projects for Code Clubs

    Reading Time: 4 minutes

    We’re pleased to share a new collection of Code Club projects designed to introduce creators to the fascinating world of artificial intelligence (AI) and machine learning (ML). These projects bring the latest technology to your Code Club in fun and inspiring ways, making AI and ML engaging and accessible for young people. We’d like to thank Amazon Future Engineer for supporting the development of this collection.

    A man on a blue background, with question marks over his head, surrounded by various objects and animals, such as apples, planets, mice, a dinosaur and a shark.

    The value of learning about AI and ML

    By engaging with AI and ML at a young age, creators gain a clearer understanding of the capabilities and limitations of these technologies, helping them to challenge misconceptions. This early exposure also builds foundational skills that are increasingly important in various fields, preparing creators for future educational and career opportunities. Additionally, as AI and ML become more integrated into educational standards, having a strong base in these concepts will make it easier for creators to grasp more advanced topics later on.

    What’s included in this collection

    We’re excited to offer a range of AI and ML projects that feature both video tutorials and step-by-step written guides. The video tutorials are designed to guide creators through each activity at their own pace and are captioned to improve accessibility. The step-by-step written guides support creators who prefer learning through reading. 

    The projects are crafted to be flexible and engaging. The main part of each project can be completed in just a few minutes, leaving lots of time for customisation and exploration. This setup allows for short, enjoyable sessions that can easily be incorporated into Code Club activities.

    The collection is organised into two distinct paths, each offering a unique approach to learning about AI and ML:

    Machine learning with Scratch introduces foundational concepts of ML through creative and interactive projects. Creators will train models to recognise patterns and make predictions, and explore how these models can be improved with additional data.

    The AI Toolkit introduces various AI applications and technologies through hands-on projects using different platforms and tools. Creators will work with voice recognition, facial recognition, and other AI technologies, gaining a broad understanding of how AI can be applied in different contexts.

    Inclusivity is a key aspect of this collection. The projects cater to various skill levels and are offered alongside an unplugged activity, ensuring that everyone can participate, regardless of available resources. Creators will also have the opportunity to stretch themselves — they can explore advanced technologies like Adobe Firefly and practical tools for managing Ollama and Stable Diffusion models on Raspberry Pi computers.

    Project examples

    A piece of cheese is displayed on a screen. There are multiple mice around the screen.

    One of the highlights of our new collection is Chomp the cheese, which uses Scratch Lab’s experimental face recognition technology to create a game students can play with their mouth! This project offers a playful introduction to facial recognition while keeping the experience interactive and fun. 

    A big orange fish on a dark blue background, with green leaves surrounding the fish.

    Fish food uses Machine Learning for Kids, with creators training a model to control a fish using voice commands.

    An illustration of a pink brain is displayed on a screen. There are two hands next to the screen playing the 'Rock paper scissors' game.

    In Teach a machine, creators train a computer to recognise different objects such as fingers or food items. This project introduces classification in a straightforward way using the Teachable Machine platform, making the concept easy to grasp. 

    Two men on a blue background, surrounded by question marks, a big green apple and a red tomato.

    Apple vs tomato also uses Teachable Machine, but this time creators are challenged to train a model to differentiate between apples and tomatoes. Initially, the model exhibits bias due to limited data, prompting discussions on the importance of data diversity and ethical AI practices. 

    Three people on a light blue background, surrounded by music notes and a microbit.

    Dance detector allows creators to use accelerometer data from a micro:bit to train a model to recognise dance moves like Floss or Disco. This project combines physical computing with AI, helping creators explore movement recognition technology they may have experienced in familiar contexts such as video games. 

    A green dinosaur in a forest is being observed by a person hiding in the bush holding the binoculars.

    Dinosaur decision tree is an unplugged activity where creators use a paper-based branching chart to classify different types of dinosaurs. This hands-on project introduces the concept of decision-making structures, where each branch of the chart represents a choice or question leading to a different outcome. By constructing their own decision tree, creators gain a tactile understanding of how these models are used in ML to analyse data and make predictions. 
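    In code, the same branching chart maps naturally onto nested conditionals: each question on the paper chart becomes one if/else. The dinosaur features and names below are illustrative examples, not taken from the activity itself.

    ```python
    # Illustrative sketch of the unplugged activity as code: each branch
    # is one question on the paper chart. Features and names are made up.

    def classify_dinosaur(eats_meat: bool, walks_on_two_legs: bool,
                          has_long_neck: bool) -> str:
        if eats_meat:
            # Carnivore branch: next question splits by posture
            return "Tyrannosaurus" if walks_on_two_legs else "Spinosaurus"
        # Herbivore branch: next question splits by neck length
        return "Diplodocus" if has_long_neck else "Triceratops"

    print(classify_dinosaur(eats_meat=True, walks_on_two_legs=True,
                            has_long_neck=False))  # prints Tyrannosaurus
    ```

    Real ML decision trees are learned from data rather than written by hand, but the structure of questions and outcomes is the same.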

    These AI projects are designed to support young people to get hands-on with AI technologies in Code Clubs and other non-formal learning environments. Creators can also enter one of their projects into Coolest Projects by taking a short video showing their project and any code used to make it. Their creation will then be showcased in the online gallery for people all over the world to see.

    Website: LINK

  • AI special edition in The MagPi 147

    AI special edition in The MagPi 147

    Reading Time: 2 minutes

    Discover the best AI projects for Raspberry Pi

    AI Projects

    Discover a range of practical AI projects that put Raspberry Pi’s AI smarts to good use. We’ve got people detectors, ANPR trackers, pose detectors, text generators, music generators, and an intelligent pill dispenser.

    Handheld gaming with Raspberry Pi

    Handheld gaming

    Retro gaming on the move can be fun and creative. PJ Evans grabs some spare batteries and builds a handheld gaming console.

    DIY CNC Lathe and custom G-codes

    DIY CNC Lathe

    Being able to write G-codes enables all kinds of custom machines. In this tutorial, Jo Hinchliffe looks at a simple small CNC lathe conversion.

    Buttons and fastenings in The MagPi 147

    Buttons and fastenings

    Where would we be without buttons and fasteners? Nicola King takes a deep dive into the types of fastenings that you can use in your crafting projects.

    DEC Flip-Chip tester

    DEC Flip-Chip tester

    Rebuilding an old PDP-9 computer with a Raspberry Pi-based device that tests hundreds of components.

    How to build a Nixie-style clock with Raspberry Pi and LEDs

    Pixie clock

    This project recreates an old Nixie tube clock using only ultra-modern (and vastly safer) LED lights.

    You’ll find all this and much more in the latest edition of The MagPi magazine. Pick up your copy today from our store, or subscribe to get every issue delivered to your door.

  • Implementing a computing curriculum in Telangana

    Implementing a computing curriculum in Telangana

    Reading Time: 4 minutes

    Last year we launched a partnership with the Government of Telangana Social Welfare Residential Educational Institutions Society (TGSWREIS) in Telangana, India to develop and implement a computing curriculum at their Coding Academy School and Coding Academy College. Our impact team is conducting an evaluation. Read on to find out more about the partnership and what we’ve learned so far.

    Aim of the partnership 

    The aim of our partnership is to enable students in the school and undergraduate college to learn about coding and computing by providing the best possible curriculum, resources, and training for teachers. 

    Students sit in a classroom and watch the lecture slides.

    As both institutions are government institutions, education is provided for free, with approximately 800 high-performing students from disadvantaged backgrounds currently benefiting. The school is co-educational up to grade 10 and the college is for female undergraduate students only. 

    The partnership is strategically important for us at the Raspberry Pi Foundation because it helps us to test curriculum content in an Indian context, and specifically with learners from historically marginalised communities with limited resources.

    Adapting our curriculum content for use in Telangana

    Since our partnership began, we’ve developed curriculum content for students in grades 6–12 in the school, which is in line with India’s national education policy requiring coding to be introduced from grade 6. We’ve also developed curriculum content for the undergraduate students at the college. 

    Students and educators engage in digital making.

    In both cases, the content was developed based on an initial needs assessment — we used the assessment to adapt content from our previous work on The Computing Curriculum. Local examples were integrated to make the content relatable and culturally relevant for students in Telangana. Additionally, we tailored the content for different lesson durations and to allow a higher frequency of lessons. We captured impact and learning data through assessments, lesson observations, educator interviews, student surveys, and student focus groups.

    Curriculum well received by educators and students

    We have found that the partnership is succeeding in meeting many of its objectives. The curriculum resources have received lots of positive feedback from students, educators, and observers.

    Students and educators engage in digital making.

    In our recent survey, 96% of school students and 85% of college students reported that they’ve learned new things in their computing classes. This was backed up by assessment marks, with students scoring an average of 70% in the school and 69% in the college for each assessment, compared to a pass mark of 40%. Students were also positive about their experiences of the computing and coding classes, and particularly enjoyed the practical components.

    “My favourite thing in this computing classes [sic] is doing practical projects. By doing [things] practically we learnt a lot.” – Third year undergraduate student, Coding Academy College

    “Since their last SA [summative assessment] exam, students have learnt spreadsheet [concepts] and have enjoyed applying them in activities. Their favourite part has been example codes, programming, and web-designing activities.” – Student focus group facilitator, grade 9 students, Coding Academy School

    However, we also found some variation in outcomes for different groups of students and identified some improvements that are needed to ensure the content is appropriate for all. For example, educators and students felt improvements were needed to the content for undergraduates specialising in data science — there was a wish for the content to be more challenging and to more effectively prepare students for the workplace. Some amendments have been made to this content and we will continue to keep this under review. 

    In addition, we faced some challenges with the equipment and infrastructure available. For example, there were instances of power cuts and unstable internet connections. These issues have been addressed as far as possible with Wi-Fi dongles and educators adapting their delivery to work with the equipment available.

    Our ambition for India

    Our team has already made some improvements to our curriculum content in preparation for the new academic year. We will also make further improvements based on the feedback received. 

    Students and educators engage in digital making.

    The long-term vision for our work in India is to enable any school in India to teach students about computing and creating with digital technologies. Over our five-year partnership, we plan to work with TGSWREIS to roll out a computing curriculum to other government schools within the state. 

    Through our work in Telangana and Odisha, we are learning about the unique challenges faced by government schools. We’re designing our curriculum to address these challenges and ensure that every student in India has the opportunity to thrive in the 21st century. If you would like to know more about our work and impact in India, please reach out to us at india@raspberrypi.org.

    We take the evaluation of our work seriously and are always looking to understand how we can improve and increase the impact we have on the lives of young people. To find out more about our approach to impact, you can read about our recently updated theory of change, which supports how we evaluate what we do.

    Website: LINK

  • Win! One of five brand new Raspberry Pi AI Cameras

    Win! One of five brand new Raspberry Pi AI Cameras

    Reading Time: < 1 minute


  • Open Source Hardware Camp

    Open Source Hardware Camp

    Reading Time: 3 minutes

    Friday kicked off with a talk on Dina St Johnston, founder of the UK’s first independent software company, which she started in 1959. After that came computing with human-worn sensors; mainframes; human creativity in the age of AI; and a look at Raftabar the robot, which uses facial recognition (and two Raspberry Pi boards) to attempt to engage humans in conversation. The day also featured an exploration of modular synthesis by musician Loula Yorke; how to poke holes in things with prototypes; and a look at the work being done by Open Innovations, an organisation that’s applying open data to policy recommendations in the north of England.

    Sunday was filled with hands-on workshops

    Saturday was the start of Open Source Hardware Camp, and featured a brilliant range of projects. HackSpace contributor Jo Hinchliffe gave a talk on open-source rocketry and the tools he uses to build flying machines, with particular reference to the open source design software KiCad. Omer Kilic and Stuart Childs taught us how to go from 10 units to 10,000 with their Adventures in Manufacturing talk. As DIY electronics enthusiasts we often wonder if we could invent the Next Big Thing, and this talk explored “the strange space between engineers, product owners and factories – setting up production lines and working with a variety of suppliers, from prototypes to mass production”.

    There was plenty for fans of vintage computing: Tony Abbey is part of the team that rebuilt the EDSAC computer at the National Museum of Computing in Bletchley, and he was there to tell us all about that project. EDSAC was one of the first general-purpose computers, built in 1949, and even though the clunking electromechanical technology of those days has been far superseded by microcontrollers that you can buy for pennies, the lessons learned by rebuilding an early computer are well worth a look.

    You too can print classical artworks on to your PCBs

    Andy Bennett shared his steampunk sunflower (left), which taught us that getting organic shapes to fit on PCBs isn’t quite as easy as it looks. He’s influenced by the work of Mohit Bhoite and Jiri Praus, both wonderful makers who have documented their build process to produce stunning open circuit sculptures. In the next talk, Roger Light explained how he built a digital camera sensor, spending £50,000 to make a device capable of capturing images at a resolution of 256×256 pixels.

    The LEDs that represent the sunflower seeds are arranged according to the Fibonacci number series, which makes them a challenge to put on a PCB

    Our favourite talk, and one which really encapsulates the brilliance of the open hardware movement, was by Spencer Owen. In 2013, Spencer built a clone of a Z80 computer on a breadboard, which went on to become the RC2014 kit computer. His talk this year was on dye sublimation printing onto PCBs. He’s worked out that with the same hardware you might use to print on to mugs and T-shirts, you can print on to the silkscreen layer of a PCB, opening up all sorts of colours and designs. Our favourite bit of Spencer’s talk is that he used the process to make a computer with rainbow PCBs, which he sold to raise money for LGBT charities; our second favourite bit of the talk is that, as JLCPCB now offers full-colour silkscreens, he wouldn’t have bothered with sublimation printing if he were starting today, but he did it anyway.

    Open source rockets designed on open source software

    That’s something we love about open source hardware – very often, the point isn’t that you can do it better, or cheaper, but that you’ve done it for yourself. And we love it that events like this keep happening, where we share the knowledge and enthusiasm that keeps communities thriving.

  • HDSP wristwatch

    HDSP wristwatch

    Reading Time: < 1 minute

    With a six-digit, seven-segment display such as the HDSP-2000 (itself an unusual choice – he hasn’t made this easy), Vitalii needed to find a way to multiplex the signals coming out of the chip, multiplying the I/O signals with transistors until he had enough to control each of the segments in the display. The result is this wonderful wristwatch, the custom PCB that enables the ATtiny85 to control the display, and a great deep dive into multiplexing written up on Hackaday.io.
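    The core multiplexing idea, driving only one digit at a time and cycling through the digits fast enough that all appear lit, can be simulated in a few lines. For simplicity this sketch assumes a plain seven-segment digit rather than the HDSP-2000's own layout; the patterns and function names are illustrative.

    ```python
    # Rough simulation of time multiplexing: only one digit is driven in any
    # time slot, and the driver cycles through the slots rapidly. Segment
    # patterns are for a standard seven-segment digit (bits: a b c d e f g).

    SEGMENTS = {
        "0": 0b1111110, "1": 0b0110000, "2": 0b1101101, "3": 0b1111001,
        "4": 0b0110011, "5": 0b1011011, "6": 0b1011111, "7": 0b1110000,
        "8": 0b1111111, "9": 0b1111011,
    }

    def scan_frame(display: str) -> list:
        """One multiplex cycle: (digit_select, segment_bits) pairs,
        with exactly one digit enabled per time slot."""
        return [(pos, SEGMENTS[ch]) for pos, ch in enumerate(display)]

    for pos, bits in scan_frame("123456"):
        print(f"slot {pos}: segments {bits:07b}")
    ```

    On real hardware each slot lasts a millisecond or two, and persistence of vision does the rest; the transistors mentioned above provide the per-digit and per-segment drive lines the microcontroller lacks.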

    We’re seriously impressed by this feat of electronic engineering. If you are too and you want to try it yourself, we’d suggest that you start with a single seven-segment display, a breadboard, and go from there – this tiny form factor presents loads of difficulties, all of which have been overcome here with aplomb.

  • Introducing picamzero: Simplifying Raspberry Pi Camera projects for beginners

    Introducing picamzero: Simplifying Raspberry Pi Camera projects for beginners

    Reading Time: 3 minutes

    Thousands of learners worldwide take their first steps into text-based programming using the Python programming language. Python is not only beginner-friendly, but is also used extensively in industry.

    An educator helps two young learners with a coding project in a classroom.

    In 2015, Python developer Daniel Pope, who has a keen interest in education, noticed that beginners often have great ideas for creating projects but struggle because the software libraries they need to use are aimed at more confident programmers. To address this, he created Pygame Zero, a simplified version of the popular Pygame library. Since then, various developers have expanded the range of ‘zero’ libraries for Python.

    How Python zero libraries help beginner programmers

    The Raspberry Pi Foundation has a long history of supporting Python zero libraries. GPIO Zero was launched back in 2015, followed by guizero and then picozero. The goal of all ‘zero’ libraries is the same: to help beginner programmers create amazing projects using simple, understandable code, supported by useful documentation. 

    The Picamera2 library is a powerful tool for advanced users, but beginners — such as Astro Pi: Mission Space Lab programme participants — would benefit from a zero library to allow them to use the Raspberry Pi Camera module. 

    The Astro Pi Mark II units.
    Image taken by Astro Pi: Mission Space Lab programme participants

    Picamzero: how to get started

    The Code Club Projects and Youth Programmes teams at the Raspberry Pi Foundation have joined forces to create picamzero: a new library that makes it simple for beginners to use the Raspberry Pi Camera board.

    As with the other ‘zero’ libraries, it’s straightforward to get started. You can install picamzero by typing two commands in your Raspberry Pi’s terminal:

    sudo apt update

    sudo apt install python3-picamzero

    Once it’s installed, setting up your program to communicate with your camera is easy:

    from picamzero import Camera

    cam = Camera()

    You can ask picamzero to take a time-lapse sequence and make a video of your images using a single line of code.

    cam.capture_sequence("mysequence.jpg", make_video=True)

    Picamzero also makes it easy to add text and image overlays to your images.

    A Lego scene captured using picamzero.

    We’ve written beginner-friendly documentation for the new library so that you can explore what you can create using just a few lines of code. We’ve also updated our resources so that you can start making exciting projects using picamzero straight away:

    We hope you enjoy using picamzero. Please get in touch if you have any feedback or suggestions. Happy coding!

    Website: LINK