If you are a pet owner, you know how important it is to keep furry companions fed and happy – even when life gets busy! With the Arduino Plug and Make Kit, you can now build a customizable, smart pet feeder that dispenses food on schedule and can be controlled remotely. It’s the perfect blend of functionality and creativity, designed to simplify your life and delight your cat, dog, rabbit, hamster, or cute creature of choice.
Here’s everything you need to automate feeding your pet
This intermediate project is packed with advanced features, made easy by the intuitive Plug and Make Kit. With its modular components, creating your own smart pet feeder is straightforward, fun, and easy to customize.
Here’s what you’ll need:
Arduino Plug and Make Kit, which already includes UNO R4 WiFi, Modulino Distance, Modulino Buttons, Modulino Pixels, and Qwiic cables
A continuous servo motor (such as this one, for example)
Some jumper wires and screws for assembly
A 3D printer (to create the case either with the files we provide, or with your own designs!)
Once the setup is complete, you can remotely control the feeder via a ready-to-use Arduino Cloud dashboard, where you’ll set dispensing schedules, adjust portion sizes, and even customize LED lights to match your pet’s mood.
The Modulino Distance sensor ensures food comes out only when needed, while the Modulino Buzzer adds some audio feedback for a playful touch.
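The core of that behavior is a simple gate: only dispense when the schedule says so and the distance sensor reports the bowl is low. Here is a minimal Python sketch of that logic; the function name, threshold, and parameters are illustrative assumptions, not taken from the project's actual Arduino code.

```python
# Hypothetical sketch of the feeder's gating logic: dispense only when the
# distance sensor reports the bowl is low on food. Names and the threshold
# are illustrative, not from the real project code.

BOWL_EMPTY_MM = 60  # a reading above this means the food level is low

def should_dispense(distance_mm, scheduled):
    """Dispense only on schedule AND when the bowl reads as low."""
    return scheduled and distance_mm > BOWL_EMPTY_MM

# A full bowl sits close to the sensor, so no food is released:
print(should_dispense(25, scheduled=True))    # False
# An empty bowl reads far away, so the scheduled portion is dispensed:
print(should_dispense(80, scheduled=True))    # True
# Outside the schedule, nothing happens regardless of the sensor:
print(should_dispense(80, scheduled=False))   # False
```

The same two-condition check translates directly into the Arduino sketch that drives the servo.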
Make it the cat’s meow!
As you know, the Plug and Make Kit’s versatility allows for endless possibilities. Feel free to expand this pet feeder project with additional features! For example, you can add a motion-activated camera to capture your pet’s activities, or a real-time weight monitor to track how much food is consumed. You can even activate voice commands for an interactive feeding experience (maybe skip this one if you have a parrot!).
Now you have all the info you need to build your own smart pet feeder: time to grab your Arduino Plug and Make Kit and get started. The template we’ve created simplifies the process, letting you focus on the fun parts of building and experimenting.
Be sure to share your creations with us – upload them to Project Hub or email creators@arduino.cc to get in touch. We can’t wait to see how you make the small daily routine of feeding your pet smarter, and a lot more fun, with Arduino!
In 150 issues, we’ve seen a huge range of epic builds with Raspberry Pi computers at their heart. We’ve got everything from machine-learning prosthetic arms to underwater archaeology submarines; old-school equipment and futuristic robots. Over 20 pages with 150 incredible project ideas await you.
Archiving old floppy disks
Graham Hooley has converted an old floppy disk duplicator into an archiving machine that makes light work of preserving old files. The device uses the mechanical parts from an old disk duplicator, along with Raspberry Pi and a Camera Module. Disk images are scanned, snapped, and saved to a USB flash drive.
Mow the lawn automatically
Lawny is the brainchild of Eugene Tkachenko. This robot mower is built with windscreen wiper motors controlled by Raspberry Pi. A Raspberry Pi Camera provides a first-person view as Lawny rolls around the garden.
Photon 2 Lander
This is the latest circuit sculpture in a series inspired by planetary landing craft, made by the artist and engineer Mohit Bhoite.
Custom CNC machine: A carbon filament winder
“There comes a time in every maker’s life where the urge to build a completely custom CNC machine kicks in!” Or so says Jo Hinchliffe. This month, Jo looks at an increasingly approachable project area, making a prototype carbon fibre filament winding machine.
Raspberry Pi Audio
Raspberry Pi hardware is the ideal choice for home studios and audio systems. You can quickly drop a Raspberry Pi into a recording environment and use it alongside professional audio gear. This month, maker KG Orphanides puts the powerful-yet-silent Raspberry Pi 500 at the heart of their audio studio build.
In 2024, we continued to invest in more ways to protect our community and fight bad actors, so billions of people can trust the apps they download from Google Play and millions of developers can build thriving businesses.
AI-powered threat detection, stronger privacy policies, enhanced tools for app developers and more have enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting people from harmful or malicious apps before they can cause any damage.
To learn more about how we’re helping keep Android users safe on Google Play and beyond, read the Google Security Blog.
Many modern video games may put your character inside of a virtual 3D environment, but you aren’t seeing that in three dimensions — your TV’s screen is only a 2D display, after all. 3D displays/glasses and VR goggles make it feel more like you’re in the 3D world, but it isn’t quite the same as you have no control over focus. What would gaming look like in true 3D? Greg Brault built this 9x9x9 LED cube as a video game display to find out.
Brault actually built a similar 8×8×8 LED cube with some games 10 years ago, but this new version is a lot better. Not only does it have an additional 217 LEDs, but Brault also took the time to create a kind of graphics engine to make game development easier. That engine is so capable that he was able to program a version of Doom that runs on the cube!
The new cube contains 729 WS2811 individually addressable RGB LEDs on custom PCBs and those are much easier to control than the standard RGB LEDs Brault used in the original cube. An Arduino Nano ESP32 board controls those LEDs on one ESP32-S3 core and the gameplay on the other core. It can play sound effects via a DFPlayer Mini board.
But the real power is in Brault’s custom 3D rendering engine. Building on the FastLED library, it has all kinds of functions and objects useful for programming graphics on the unique cube display. It is efficient enough to run games at a playable “framerate.”
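One of the first jobs any engine like this must do is map a voxel's 3D coordinates onto the single LED chain that FastLED addresses. As a rough illustration (the actual wiring of Brault's cube isn't documented here, so the serpentine layout below is an assumption), the mapping for a 9×9×9 cube might look like this:

```python
# Illustrative voxel-to-index mapping for a 9x9x9 cube driven as one long
# WS2811 chain. Real cube wiring varies; this assumes a serpentine layout
# (every other horizontal run reversed) purely to show what a cube
# "graphics engine" must handle before any drawing can happen.

SIZE = 9  # 9 x 9 x 9 = 729 LEDs

def voxel_to_index(x, y, z):
    """Map 3D voxel coordinates to a position on the LED chain."""
    row = z * SIZE + y                   # which horizontal run of 9 LEDs
    if row % 2 == 0:                     # even runs are wired left-to-right...
        return row * SIZE + x
    return row * SIZE + (SIZE - 1 - x)   # ...odd runs snake back the other way

print(voxel_to_index(0, 0, 0))  # 0
print(voxel_to_index(8, 8, 8))  # 728
```

With a bijection like this in place, the rest of the engine can think purely in (x, y, z) space and set colors without caring how the strip is physically routed.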
With that engine at his disposal, Brault was able to make a handful of games: Pong, Space Invaders, Pole Position, Snake, Asteroids, and even Doom. Seeing those reimagined to work in 3D is a real treat, so be sure to watch Brault’s demonstration videos.
2025 brings us the Year of the Snake! Just like the Wood Snake in today’s Doodle, this year invites us to embrace the twists and turns and to have a little fun as we embark on new adventures. Ready to celebrate the Year of the Snake in style? Here’s how Google can help:
Coil up for some Lunar New Year Entertainment: Google TV’s special Lunar New Year collection is curated to celebrate the Year of the Snake. From snake-centric movies like The Jungle Book and Raiders of the Lost Ark to stories of renewal like The Secret Life of Walter Mitty to movies from superstars born in the Year of the Snake like The Eras Tour (Taylor’s Version), there is something for everyone. Google Play’s Lunar New Year hub is also here, with exciting games and apps including Candy Crush Saga, Dramabax, Pokémon TCG Pocket and Weplay. Discover new favorites and revisit beloved classics for a truly festive experience.
Dress up your digital den: Chrome users can update their browser backgrounds with designs created by Asian and Pacific Islander artists or by searching “snakes” within the Themes section of the Chrome Web Store. And during your next Google Meet meeting, check out the new Lunar New Year background featuring red envelopes to bring good luck, fresh fruit for good health, cherry blossoms to symbolize new beginnings and pops of red to bring prosperity.
Explore the hiss-tory of the traditions: Google Arts & Culture has fresh Lunar New Year content, offering a fascinating glimpse into the cultural and artistic significance of Lunar New Year and its various celebrations. Visit virtual exhibits, discover captivating stories, and deepen your appreciation for this vibrant holiday.
Rustle up the flavors of Lunar New Year: Use Google Search and Maps to identify Asian-owned local businesses near you, and discover local restaurants and markets that offer the most delicious festival treats like yi mein (longevity noodles), tang yuan (sweet rice balls), and banh chung (chung cake) in your town.
Save 35% off the cover price with a subscription to The MagPi. UK subscribers get three issues for just £10 and a FREE Raspberry Pi Pico W, then pay £30 every six issues. You’ll save money and get a regular supply of in-depth reviews, features, guides and other Raspberry Pi enthusiast goodness delivered directly to your door every month.
As an organisation with global reach, translation and localisation have been part of the Raspberry Pi Foundation’s activities from the start. Code Clubs and educational partners all over the world are helping young people learn about computing in their own language. We’ve already published over 1,900 translated learning resources, covering up to 32 languages, thanks to the work of our talented localisation team and our amazing community of volunteer translators.
How our approach to translation considers design, process and people
English is seen by many as the language of computing, and in many countries, it’s also either the language of education or a language that young people aspire to learn. However, English is, in some instances, a barrier to learning: young people in many communities don’t have enough knowledge of English to use it to learn about digital technologies, or even if they do, the language of communication with other students, teachers, or volunteers may not be English.
Our ‘Space Talk’ project in Latin American Spanish
In a world where browsers can instantly translate web pages and large language models can power seemingly perfect conversations in virtually any language, it’s easy to assume that translation just happens and that somehow, technology takes care of it. Unfortunately, that’s not the case. Technology is certainly crucial to translation, but there’s much more to it than that. Our approach to translation involves considering design, process, and people to ensure that localised materials truly help young people with their learning journey.
Localisation or translation?
Localisation and translation are similar terms that are often used interchangeably. Localisation normally refers to adapting a product to suit a local market, whereas translation is a subset of localisation that involves changing the language of the text. For instance, localisation includes currencies, measurements, formatting dates and numbers, and contextual references. Meanwhile, translation involves only changing the language of the text, such as from English to French.
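The distinction is easy to see in code. In the hand-rolled sketch below (the locale rules are written out manually for illustration; real projects would use a library such as Babel or ICU), the words never change, yet the same date and number come out differently per locale:

```python
# A minimal illustration of why localisation is more than translation:
# the same date and amount need different *formatting*, not different
# words, per locale. Rules here are hand-written for the example.

from datetime import date

LOCALE_RULES = {
    "en-US": {"date": "{m}/{d}/{y}", "decimal": ".", "thousands": ","},
    "fr-FR": {"date": "{d}/{m}/{y}", "decimal": ",", "thousands": " "},
}

def localise(d, amount, locale):
    rules = LOCALE_RULES[locale]
    date_str = rules["date"].format(d=f"{d.day:02}", m=f"{d.month:02}", y=d.year)
    amount_str = f"{amount:,.2f}"             # always 'en-US' style first
    amount_str = (amount_str.replace(",", "\x00")   # swap separators safely
                            .replace(".", rules["decimal"])
                            .replace("\x00", rules["thousands"]))
    return date_str, amount_str

print(localise(date(2025, 1, 29), 1234.5, "en-US"))  # ('01/29/2025', '1,234.50')
print(localise(date(2025, 1, 29), 1234.5, "fr-FR"))  # ('29/01/2025', '1 234,50')
```

Day-before-month, comma decimals, and space thousands separators are all localisation concerns that no amount of pure translation would catch.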
At the Raspberry Pi Foundation, we see translation as an enabler. It enables volunteers to reach learners, learners to succeed in their educational goals, and the Foundation to achieve its mission all over the world.
Four key ways the Foundation maximises the impact and reach of our translated materials
1. Create with localisation in mind
Regardless of whether learning materials are intended for English-speaking or global audiences, it’s important to create and design them with localisation in mind. That way, they can be used in a variety of places, and any piece of content (text, graphics, or illustrations) can be modified to meet the needs of the target audience. Keeping localisation in mind might include allowing space for text expansion, being mindful of any text embedded in graphic elements, and even making sure the context is understandable for a variety of audiences. Making a piece of content localisable at the creation stage is virtually cost-free. Modifying fully built assets to translate them or to use them in other markets can be expensive and extremely time-consuming!
2. Always have user needs and priorities upfront
Before investing in localising or translating any materials, we seek to understand the needs and priorities of our users. In many countries where English is not the usual language of communication, materials in English are a barrier, even if some of the users have a working knowledge of English. Making materials available in local languages directly results in additional reach and enhanced learning outcomes. In other communities where English has a certain status, a more selective approach may be more appropriate. A full translation may not be expected, but translating or adapting elements within them, such as introductions, videos, infographics, or glossaries, can help engage new learners.
3. Maximise the use of technology
While it’s possible to translate with pen and paper, translation is only scalable with the use of technology. Computer-assisted translation tools, translation memories, terminology databases, machine translation, large language models, and so on are all technologies that play their part in making the translation process more efficient and scalable.
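To make the translation-memory idea concrete, here is a toy lookup using Python's standard-library `difflib`; the memory contents and the 0.8 threshold are invented for the example, but the principle, offering translators close "fuzzy" matches from previously translated segments, is exactly what CAT tools do at scale:

```python
# A toy translation memory: new source segments are matched against
# previously translated ones, and close ("fuzzy") matches are offered
# to the translator for reuse. Data and threshold are invented.

from difflib import SequenceMatcher

memory = {
    "Click the green flag to start.": "Haz clic en la bandera verde para empezar.",
    "Save your project before closing.": "Guarda tu proyecto antes de cerrar.",
}

def tm_lookup(segment, threshold=0.8):
    """Return (match ratio, stored translation) for the best fuzzy match."""
    best = max(memory, key=lambda s: SequenceMatcher(None, segment, s).ratio())
    ratio = SequenceMatcher(None, segment, best).ratio()
    return (ratio, memory[best]) if ratio >= threshold else (ratio, None)

ratio, suggestion = tm_lookup("Click the green flag to begin.")
print(round(ratio, 2), suggestion)
```

Even a one-word change still scores above the threshold, so the translator is shown the earlier Spanish translation to adapt rather than retranslating from scratch.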
At the Foundation, we make use of a variety of translation technologies and also, crucially, work very closely with our content and development teams to integrate their tools and processes into the overall localisation workflow.
4. Take great care of the people
Even with the best technology and the smoothest integrations, there is a human element that is absolutely essential. Our amazing community of volunteers and partners work very closely with learners in their communities. They understand the needs of those learners and have a wealth of information and insights. We work with them to prioritise, translate, review and test the learning materials. They are key to ensuring that our learning materials help our users reach their learning goals.
In summary
Thinking about localisation from the moment we start creating learning materials, understanding the needs of users when creating our end goals, maximising the use of technology, and taking good care of our people and partners are the key principles that drive our translation effort.
If you’d like to find out more about translation at the Raspberry Pi Foundation or would like to contribute to the translation of our learning materials, feel free to contact us at translation@raspberrypi.org.
A version of this article also appears in Hello World issue 23.
We are proud to announce the Made-in-India UNO Ek R4! Available exclusively in India in both WiFi and Minima variants, it is born to meet the needs of the country’s growing maker and innovation ecosystem, by combining all the powerful features of the UNO R4 with the benefits of local manufacturing, enhanced availability, and dedicated support for Indian users.
Uno, one, Ek! In case you are wondering, Ek means “one” in Hindi, symbolizing unity and simplicity. It represents the Arduino UNO’s position as the foundation of countless maker projects – simple yet powerful, and always the first step toward innovation. To pronounce Ek, say “ake” (rhymes with “bake”) with a soft “k” sound at the end.
Supporting innovation in India
The two new boards were developed under the “Make in India” campaign, launched to make India the global design and manufacturing hub, and are being launched as part of the country’s Republic Day celebrations. They were first unveiled at the World Economic Forum 2025 in Davos, where they were presented to Shri Ashwini Vaishnaw, India’s incumbent Minister of Electronics and Information Technology, and Mr Jayant Chaudhary, Minister of State (IC) for the Ministry of Skill Development & Entrepreneurship. The event was an outstanding opportunity to reflect on India’s huge role in technological innovation and open-source initiatives, with a focus on fostering STEM education and advancing the maker community.
Fabio Violante, CEO (right), and Guneet Bedi, SVP and General Manager (left) with Shri Ashwini Vaishnaw, Minister of Electronics and IT (center).
Fabio Violante, CEO (right), and Guneet Bedi, SVP and General Manager (left) with Mr Jayant Chaudhary, Minister of State (IC) for the Ministry of Skill Development & Entrepreneurship (center).
We are committed to empowering the thriving maker and engineering community in India – the second country in the world for Arduino IDE downloads, just to mention one important statistic! As our CEO Fabio Violante shares, “Arduino’s decision to manufacture in India reflects the nation’s immense potential as a rising global leader in technology. This step embodies our deep belief in the power of collaboration and community. By joining forces with Indian manufacturers, we aim to ignite a culture of innovation that resonates far beyond borders, inspiring creators and visionaries worldwide.”
Why choose UNO Ek R4 boards?
The UNO Ek R4 WiFi and UNO Ek R4 Minima offer the same powerful performance as their global counterparts, featuring a 32-bit microprocessor with enhanced speed, memory, and connectivity options. But the Made-in-India editions come with added benefits tailored specifically for Indian users, including:
Faster delivery: Locally manufactured boards with extensive stock ensure reduced lead times for projects of all sizes.
Affordable pricing: Genuine Arduino products made accessible at competitive prices.
Local support: Indian users gain access to official technical assistance alongside Arduino’s vast library of global resources.
Sustainable manufacturing: Produced ethically with eco-friendly packaging and certified to SA8000 and FSC standards.
Guneet Bedi, Arduino’s Senior Vice President and General Manager of the Americas, comments: “By adding the Arduino UNO Ek R4 WiFi and Arduino UNO Ek R4 Minima to our product line, Arduino is helping to drive adoption of connected devices and STEM education around the world. We’re excited to see the creative projects this community can create with these new boards.”
The past and the future are Ek
The strong legacy of the UNO concept finds a new interpretation, ready to leverage trusted Arduino quality and accessibility to serve projects of any complexity – from IoT to educational applications to AI.
Catering more closely to local needs, UNO Ek R4 WiFi and UNO Ek R4 Minima are equipped to drive the next wave of innovation in India. Both will be available through authorized distributors across the country: sign up here to get all the updates about the release!
The Standard kit features the robotic arm, breakout board (for Raspberry Pi 4 or 5), power supply, paper ‘map’, wooden blocks, coloured balls, and tags. The Advanced version adds some flat-pack shelving for ‘warehousing’ operations.
A smartphone companion app is the easiest way to try out AI modes such as object tracking and face recognition. But there’s a lot more you can do: by following an extensive array of online tutorials, you’ll learn how to program it with Python, use OpenCV for image recognition, and much more.
Verdict
9/10
A sturdy robotic arm with 6DOF and computer vision. Price: £236 / $300
Server rooms are built for the comfort of servers — not people. But those servers need maintenance, which means they need to be accessible. The resulting access corridors take up room that could be filled with more servers, which is why Jdw447 designed a claw machine-esque ‘modular server room’ and built a working scale model to demonstrate the concept.
This isn’t necessarily a serious proof of concept, as there would be a lot more to consider on top of simply moving the servers to an accessible location. But it is a novel idea that Jdw447 actually brought to life in the form of a relatively small model based on Oliver Shory’s gantry design. It is a bit like a claw machine mixed with a plotter. At actual server room scale, it would look like an overhead gantry crane. Here, it looks a bit like the motion system on a laser cutter.
The gantry is made of aluminum extrusion and 3D-printed joints. It has four stepper motors to move the gantry and to actuate the lifting mechanism, which grabs the model server racks using magnets. An Arduino UNO Rev3 board controls those motors through ULN2003 drivers, and the operator directs the movement using joysticks monitored by the Arduino.
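The joystick-to-motor mapping at the heart of that control scheme is straightforward. On the real build it lives in the Arduino sketch, but as a platform-neutral sketch (the ADC range, dead-zone width, and step-rate scaling below are all illustrative assumptions, not values from Jdw447's code), it amounts to:

```python
# Platform-neutral sketch of joystick-to-stepper control: one 10-bit ADC
# axis reading becomes a direction and a stepping speed. The constants
# are illustrative, not taken from the actual build.

CENTER = 512       # mid-point of a 10-bit ADC joystick axis (0-1023)
DEAD_ZONE = 40     # ignore small wobble around centre
MAX_SPS = 400      # top stepping speed (steps per second)

def axis_to_motion(adc_value):
    """Convert one joystick axis reading into (direction, steps_per_second)."""
    offset = adc_value - CENTER
    if abs(offset) <= DEAD_ZONE:
        return (0, 0)                    # inside the dead zone: hold still
    direction = 1 if offset > 0 else -1
    speed = MAX_SPS * (abs(offset) - DEAD_ZONE) / (CENTER - DEAD_ZONE)
    return (direction, round(speed))

print(axis_to_motion(512))   # centred stick -> (0, 0)
print(axis_to_motion(0))     # full deflection one way -> (-1, 400)
print(axis_to_motion(1023))  # full deflection the other -> (1, 399)
```

Proportional speed with a dead zone like this is why the gantry can creep precisely up to a rack instead of lurching at full speed the moment the stick moves.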
This motion system sits in a large MDF box representing a server room, with several 3D-printed blocks representing the server racks arranged in a grid. When a “server” needs maintenance, the operator can use the gantry to pick it up and move it to the desired location.
Today we’re rolling out a handful of updates to make Android’s hearing aid and screenreader experiences even more accessible.
Starting with the Samsung Galaxy S25, we’re bringing the benefits of the next generation of Bluetooth to GN Hearing and Oticon Intent hearing aids, using new LE Audio technology. With LE Audio compatibility, people can now access easy hearing aid management — including hands-free calling, preset changes via native settings, and lower-latency Bluetooth connections. This new integration will also be available on the Pixel 9 with the Android 16 beta and come to the Galaxy S24 with Android 15 in the coming weeks.
And we’re beginning to roll out new updates to TalkBack, Android’s screenreader, to make devices even more accessible for people who are blind or have low vision. Starting with Samsung Galaxy S25 devices, anyone who uses braille will be able to use their displays via HID, a way to connect to Bluetooth devices. Over the coming months this functionality will begin to work on any phone or tablet using Android 15. TalkBack will also provide more detailed image descriptions, powered by Gemini models, on Galaxy S25 devices in the coming weeks.
Together with Samsung, today we unveiled a handful of new Android features. The new Samsung Galaxy S25 comes with Gemini built in and available with the press of a button. Don’t miss the just-launched, kid-friendly smartwatch experience that puts parents in control. And there are plenty of other Gemini updates, new ways to search, accessibility tools and more to check out.
And for people who are blind or have low vision, TalkBack 15 on Galaxy S25 devices will now be compatible with braille displays that use HID, a popular way to connect to Bluetooth devices. Following feedback from the community, this compatibility will ensure people can use their braille displays without additional steps, making Galaxy S25 devices even more accessible. In the coming weeks, TalkBack on Galaxy S25 devices will also provide more detailed image descriptions, powered by Gemini models.
5. Stay connected with the Galaxy Watch for Kids experience
With Google Family Link, parents can use their phone to set up and manage Galaxy Watch7 LTE smartwatches with a Galaxy Watch for Kids experience. This allows parents to approve contacts, monitor their child’s watch’s location, manage apps and set up school time to limit distractions during school hours.
This update starts rolling out today in the U.S. with support from major carriers including AT&T, T-Mobile and Verizon.
Last year, we introduced Circle to Search to help you easily circle, scribble or tap anything you see on your Android screen, and find information from the web without switching apps. Now we’re introducing two improvements that make Circle to Search even more helpful.
First, we’re expanding AI Overviews to more kinds of visual search results for places, trending images, unique objects and more. Inspired by a piece of art? Circle it and see a gen AI snapshot of helpful information with links to dig deeper and learn more from the web.
Second, we’re making it easier for you to get things done on your phone. Circle to Search will now quickly recognize numbers, email addresses and URLs you see on your screen so you can take action with a single tap.
The lightweight gadget has a month-long standby battery life and recharges via its USB-C connector in two or three hours. The clever design hides the USB-C port at one end, revealed when you firmly yank off the silver plastic retaining clip.
The NeoRulerGO (£55 [£47 now] / $59 [was $69 at launch]) has three modes: Ruler, Scale Ruler, and Customized Scale Ruler, plus a Settings menu. Cycle through metres, inches, feet, centimetres and millimetres, or select ‘fit in’ using the NeoRulerGO’s hard plastic buttons. Two red laser beams emitted at right angles are used to locate the start point. Roll along, keeping the device perpendicular for the most accurate reading, and lift the NeoRulerGO off the surface to lock in the reading.
There is a Corner to Corner option to measure internal corners, for which the NeoRulerGO needs to begin and finish at 45 degrees, swinging along the length. The 93 built-in scales translate measurements at ratios from 100,000:1 down to 1:100,000, with an accuracy of 1 mm based on the markings on the original drawing, impressing an architect friend who uses a £500 Leica DISTO professional laser measure.
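The arithmetic behind scale-ruler mode is simply measured length multiplied by the scale factor; a quick sanity check (figures made up for the example):

```python
# Scale-ruler arithmetic: a length measured on a scaled drawing times the
# scale factor recovers the real-world size. Example figures are made up.

def real_size_mm(measured_mm, scale):
    """Convert a length measured on a 1:scale drawing to real-world mm."""
    return measured_mm * scale

# 45 mm rolled along a 1:50 floor plan corresponds to 2.25 m in the room:
print(real_size_mm(45, 50))          # 2250 (mm)
print(real_size_mm(45, 50) / 1000)   # 2.25 (m)
```

The NeoRulerGO just performs this multiplication live as the wheel rolls, which is what makes it so quick against paper plans.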
Petite but precise
Hozo packs plenty of features into the NeoRulerGO, but the trade-off for its teeniness is that it’s fiddly to use. Deviations and bumps in the course of rolling can also cause measuring to stop and start again, so make sure you sense-check the reading. Readings can be exported to the Meazor Android or iOS app for inclusion in a project, or simply saved as a list. Hozo helpfully includes configuration options on the NeoRulerGO and within the app to change the screen orientation and left- or right-handed use, so it’s a matter of working out which settings work best for you.
We used the NeoRulerGO to take accurate measurements for bathroom spaces and fittings, including the trim needed for the circumference of a partially curved mirror. Its precise measurements were also helpful when stretching and blocking hand-knitted pieces that needed to be a fixed size and accurately sewn together, and when trying to design an enclosure for a Raspberry Pi to be fashioned from assorted materials of varying thicknesses and flex. It was really handy being able to simply cycle through measurements to see how a reading translated between metric and imperial measurements down to the nearest ±1 mm, reassuring us when sourcing components.
Verdict
9/10
Despite a few handling issues, we found NeoRulerGO ideal for measuring awkward spaces and shapes, including curved surfaces, with none of the jeopardy of using a retractable metal ruler that might spring back painfully at any moment.
Specs
Weight: 45 g | Dimensions: 31×18×146 mm | Screen: 1.14 in | Wheel: 30 mm | Battery: 300 mAh | Resolution: ±0.02 in (0.5 mm) | Accuracy: ±0.04 inch (1 mm) + (Dx0.5%) in ideal circumstances | Features: Inches, feet, metres, centimetres, millimetres; 93 built-in scales, customisable scales (100K:1 to 1:100K)
As our lives become increasingly intertwined with AI-powered tools and systems, it’s more important than ever to equip young people with the skills and knowledge they need to engage with AI safely and responsibly. AI literacy isn’t just about understanding the technology — it’s about fostering critical conversations on how to integrate AI tools into our lives while minimising potential harm — otherwise known as ‘AI safety’.
The UK AI Safety Institute defines AI safety as: “The understanding, prevention, and mitigation of harms from AI. These harms could be deliberate or accidental; caused to individuals, groups, organisations, nations or globally; and of many types, including but not limited to physical, psychological, social, or economic harms.”
As a result of this growing need, we’re thrilled to announce the latest addition to our AI literacy programme, Experience AI — ‘AI safety: responsibility, privacy, and security’. Co-developed with Google DeepMind, this comprehensive suite of free resources is designed to empower 11- to 14-year-olds to understand and address the challenges of AI technologies. Whether you’re a teacher, youth leader, or parent, these resources provide everything you need to start the conversation.
Linking old and new topics
AI technologies are providing huge benefits to society, but as they become more prevalent we cannot ignore the challenges AI tools bring with them. Many of the challenges aren’t new, such as concerns over data privacy or misinformation, but AI systems have the potential to amplify these issues.
Our resources use familiar online safety themes — like data privacy and media literacy — and apply AI concepts to start the conversation about how AI systems might change the way we approach our digital lives.
Each session explores a specific area:
Your data and AI: How data-driven AI systems use data differently to traditional software and why that changes data privacy concerns
Media literacy in the age of AI: The ease of creating believable, AI-generated content and the importance of verifying information
Using AI tools responsibly: Encouraging critical thinking about how AI is marketed and understanding personal and developer responsibilities
Each topic is designed to engage young people to consider both their own interactions with AI systems and the ethical responsibilities of developers.
Designed to be flexible
Our AI safety resources have flexibility and ease of delivery at their core, and each session is built around three key components:
Animations: Each session begins with a concise, engaging video introducing the key AI concept using sound pedagogy — making it easy to deliver and effective. The video then links the AI concept to the online safety topic and opens threads for thought and conversation, which the learners explore through the rest of the activities.
Unplugged activities: These hands-on, screen-free activities — ranging from role-playing games to thought-provoking challenges — allow learners to engage directly with the topics.
Discussion questions: Tailored for various settings, these questions help spark meaningful conversations in classrooms, clubs, or at home.
Experience AI has always been about allowing everyone — including those without a technical background or specialism in computer science — to deliver high-quality AI learning experiences, which is why we often use videos to support conceptual learning.
In addition, we want these sessions to be impactful in many different contexts, so we included unplugged activities so that you don’t need a computer room to run them! There is also advice on shortening the activities or splitting them so you can deliver them over two sessions if you want.
The discussion topics provide a time-efficient way of exploring some key implications with learners, which we think will be more effective in smaller groups or more informal settings. They also highlight topics that we feel are important but may not be appropriate for every learner, for example, the rise of inappropriate deepfake images, which you might discuss with a 14-year-old but not an 11-year-old.
A modular approach for all contexts
Our previous resources have all followed a format suitable for delivery in a classroom, but for these resources, we wanted to widen the potential contexts in which they could be used. Instead of prescribing the exact order to deliver them, educators are encouraged to mix and match activities that they feel would be effective for their context.
We hope this will empower anyone, no matter their surroundings, to have meaningful conversations about AI safety with young people.
The modular design ensures maximum flexibility. For example:
A teacher might combine the video with an unplugged activity and follow-up discussion for a 60-minute lesson
A club leader could show the video and run a quick activity in a 30-minute session
A parent might watch the video and use the discussion questions during dinner to explore how generative AI shapes the content their children encounter
The importance of AI safety education
With AI becoming a larger part of daily life, young people need the tools to think critically about its use. From understanding how their data is used to spotting misinformation, these resources are designed to build confidence and critical thinking in an AI-powered world.
AI safety is about empowering young people to be informed consumers of AI tools. By using these resources, you’ll help the next generation not only navigate AI, but shape its future. Dive into our materials, start a conversation, and inspire young minds to think critically about the role of AI in their lives.
Ready to get started? Explore our AI safety resources today: rpf.io/aisafetyblog. Together, we can empower every child to thrive in a digital world.
If you’ve been exploring MicroPython on Arduino, you already know how powerful and flexible this Python-based language can be for microcontroller programming. Whether you’re a pro or just starting out, MicroPython opens up a new world of quick prototyping and clean, readable code.
Now, we’re making it even easier to get started and manage your MicroPython projects with the brand new MicroPython Package Installer for Arduino!
What’s the MicroPython Package Installer?
Installing libraries and managing MicroPython code on your Arduino boards can sometimes feel like a chore. Hunting down the right libraries, uploading files manually – let’s be honest, it takes time.
The MicroPython Package Installer streamlines the entire process:
Find packages: Search for libraries directly from Arduino’s official MicroPython package index.
Install in seconds: Connect your board, pick a package, and install it with a single click.
Custom installations: Want to add a package from a GitHub URL? You can do that too.
Plus, it automatically converts files into the efficient .mpy format, optimizing size and speed on your microcontroller.
Why is this a big deal?
As MicroPython gains importance in the Arduino ecosystem, so does the need for tools that make it accessible and fun. Here’s how the MicroPython Package Installer does that:
Automated package installation: No need to worry about manual file management – installing libraries is quick and straightforward.
Works on any platform: Whether you’re using Windows, macOS, or Linux, the tool is ready for you.
Perfect for beginners: No complicated workflows – just search, install, and start coding.
With the Arduino MicroPython Package Installer, you can spend less time setting things up and more time building your projects.
What packages can I find?
The MicroPython Package Installer connects to the growing Arduino MicroPython package index where you can find:
Official Arduino MicroPython libraries: A collection of packages curated and maintained by Arduino for common hardware and tasks.
Community-contributed libraries: Useful libraries contributed by the MicroPython community, including sensors, drivers, and more.
MicroPython standard libraries: All the official MicroPython libraries from the micropython-lib repository are also available for installation.
We’re excited to see this registry grow! If you’ve created a library that could help others, consider contributing to the package index on GitHub. Let’s build this ecosystem together!
How to get started
Here’s your step-by-step guide to running MicroPython on Arduino:
1. Install MicroPython on your board
If your board doesn’t have MicroPython installed yet, start with the Arduino MicroPython Installer. It automatically detects your connected board, downloads the latest firmware, and installs MicroPython with a single click.
2. Write and upload code
Once MicroPython is running, you’ll need a lightweight editor to write and manage your programs. Arduino Lab for MicroPython is the perfect tool for the job. Connect to your board, write your MicroPython code, upload files, and interact with the REPL shell to test your scripts in real time.
3. Manage MicroPython packages
Finally, use the MicroPython Package Installer to find and install libraries directly to your board. Search for packages, install them in seconds, or add custom ones from a GitHub URL.
Ready to dive in?
MicroPython has been part of the Arduino ecosystem for a while now, but with these tools, the experience is smoother and more beginner-friendly than ever before.
So, what’s stopping you? Grab your Arduino board, follow the steps above, and start experimenting with MicroPython today. Whether it’s a quick sensor readout, an IoT project, or a creative prototype, you’ll be up and running in no time.
Allie has form with Adventure Time builds, having created a life-size BMO games console to house an OctoPrint 3D printer (see Allie’s GitHub page).
“My technical background is incredibly diverse, but when it comes to electronics I am completely self-taught,” reveals Allie. “I got interested in the Raspberry Pi because of how incredibly powerful it was (at a really good price point!) and the community behind it.”
Allie chose Raspberry Pi for this “incredibly silly and frivolous” prop project since it would “cover everything needed without me needing to spend tons of time looking for usable peripherals and testing things to make sure that they worked. It was also a chance to try Raspberry Pi 5 for the first time… [I] knew that it would demolish anything I threw at it; [I] didn’t want to worry about lag or usability.”
Since Allie can’t play the bass guitar, it was time for a creative solution that involved real musical instrument hardware and a means of making it play on demand. Allie designed a guitar case to house the electronics, cannibalised small speakers for their innards, and found a way to fool Raspberry Pi 5 into thinking it was drawing the mandated 5 amps, allowing for residual power to connect up a portable battery pack and a generic touchscreen.
Time trial
Allie says the time constraint was by far the biggest challenge, since inspiration came only two months before the DragonCon cosplay event at which it was to debut. “It was a huge undertaking to get everything done in time.”
Allie designed their take on Marceline’s guitar in Fusion 360, with custom speaker enclosures for the Dayton Audio boards, electronics attachments, and detachable parts plus a sliding panel. Allie says the software side was pretty easy. “Raspberry Pi provides most useful things baked right into the OS. I only had to write some simple Python code to create the custom song buttons.”
Although some tweaks were needed – “what project would be complete without a couple of iterations?” – these were mainly related to the sliding panel that covers the touchscreen when it’s not in use and which needed to be 3D-printed and painted and still be able to slide smoothly. Allie also tried to find an alternative solution to simply playing Spotify in the Chromium browser, feeling certain there would be a Python library for it, “but alas, there was not!”
Although designing and creating the Adventure Time Self-Playing Guitar was a considerable task, Allie says the key to any successful build is breaking it into achievable bite-sized pieces. “When tackling a large project, especially if it has elements that are new to you, it’s really easy to get a bit overwhelmed and not know where to start or what to do next. Figuring out the broad strokes of a project first, then separating them into smaller and smaller pieces really helps make things feel a lot more manageable. Also, good sandpaper will save your life!”
2011’s Real Steel may have vanished from the public consciousness in a remarkably short amount of time, but the concept was pretty neat. There is something exciting about the idea of fighting vicariously through motion-controlled humanoid robots. That is completely possible today — it would just be wildly expensive at the scale seen in the movie. But MPuma made it affordable by scaling the concept down to Rock ‘Em Sock ‘Em Robots.
The original Rock ‘Em Sock ‘Em Robots toy was purely mechanical, with the players controlling their respective robots through linkages. In this project, MPuma modernized the toy with servo motors controlled via player motion.
As designed, the motion-controlled robot has three servo motors: one for the torso rotation, one for the shoulder, and one for the elbow. If desired, the builder can equip both robots in that manner. An Arduino UNO Rev3 board controls those motors, making them match the player’s movement.
The Arduino detects player movement through three potentiometers — one for each servo motor. Twisting the elbow potentiometer will, for example, cause the robot’s elbow servo motor to move by the same angle. That arrangement is very responsive, because analog potentiometer readings are quick. It is, therefore, suitable for combat.
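MPuma’s sketch isn’t reproduced here, but the pot-to-servo mapping described above is the same linear scaling that Arduino’s `map()` function performs. A minimal Python sketch of that logic (the function name and ranges are my own, not from the project):

```python
def pot_to_angle(raw, in_min=0, in_max=1023, out_min=0, out_max=180):
    """Map a 10-bit ADC reading (potentiometer) to a servo angle.

    Mirrors Arduino's map() with clamping added for noisy readings.
    """
    raw = max(in_min, min(in_max, raw))  # clamp to the valid ADC range
    return out_min + (raw - in_min) * (out_max - out_min) // (in_max - in_min)

print(pot_to_angle(0))     # 0   (pot fully one way -> servo at 0 degrees)
print(pot_to_angle(512))   # 90  (mid-travel)
print(pot_to_angle(1023))  # 180 (full deflection)
```

On the UNO itself, the equivalent loop would read `analogRead()` on each potentiometer pin and feed the mapped angle to `Servo.write()` for the matching joint.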
The final piece of the puzzle is attaching the potentiometers to the player’s body. MPuma didn’t bother with anything complicated or fancy, they just mounted the potentiometers to pieces of cardboard and strapped those to the player’s arm.
This may not be as cinematic as Real Steel’s robots, but you can recreate MPuma’s project for less than you spent to see that movie in theaters.
Robotic vehicles can have a wide variety of drive mechanisms that range from a simple tricycle setup all the way to crawling legs. Alex Le’s project leverages the reliability of LEGO blocks with the customizability of 3D-printed pieces to create a highly mobile omnidirectional robot called Swervebot, which is controllable over Wi-Fi thanks to an Arduino Nano ESP32.
The base mechanism of a co-axial swerve drive is the swerve module, which uses one axle and motor to spin the wheel and a second axle and motor to steer it. Combining several swerve modules in a single chassis lets the Swervebot perform complex maneuvers, such as spinning while translating in a particular direction. For each module, a pair of DC motors was mounted in a custom, LEGO-compatible enclosure and attached to a series of gears that transfer their motion to the wheel. Once the modules were assembled into a 2×2 layout, Le moved on to wiring and programming the robot.
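Le’s firmware isn’t shown here, but the standard co-axial swerve math behind maneuvers like “spin while translating” can be sketched as follows; the function and parameter names are mine, not from the project:

```python
import math

def swerve_module_state(vx, vy, omega, mx, my):
    """Compute (speed, angle_deg) for one swerve module.

    vx, vy -- desired chassis translation velocity
    omega  -- desired rotation rate (rad/s, counter-clockwise)
    mx, my -- module position relative to the chassis center
    """
    # Rotation adds a tangential velocity component at the module's position.
    wx = vx - omega * my
    wy = vy + omega * mx
    speed = math.hypot(wx, wy)               # how fast to spin the wheel
    angle = math.degrees(math.atan2(wy, wx)) # where to steer it
    return speed, angle

# Pure translation: every module steers the same way.
print(swerve_module_state(1.0, 0.0, 0.0, 0.1, 0.1))  # (1.0, 0.0)
```

Running this for all four modules of a 2×2 layout (each with its own `mx`, `my`) gives the per-module wheel speed and steering angle that the two motors in each module must track.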
The Nano ESP32 is attached to two TB6612 motor drivers and a screen for displaying fun, animated eyes while the robot is in motion or idling. Controlling the Swervebot is easy too, as the ESP32 hosts a webpage full of buttons and other inputs for setting speeds and directions.
If your car was made in the last decade, its dash probably has several displays, gauges, and indicator lights. But how many of those do you actually look at on a regular basis? Likely only one or two, like the speedometer and gas gauge. Knowing that, John Sutley embraced minimalism to use a Game Boy as the dash for his car.
Unlike most modern video game consoles, which load assets into memory before using them, the original Nintendo Game Boy used a more direct tie between the console and the game cartridge. They shared memory, with the Game Boy accessing the cartridge’s ROM chip at the times necessary to load just enough of the game to continue. That access was relatively fast, which helped to compensate for the small amount of available system RAM.
Sutley’s hack works by updating the data in a custom “cartridge’s” equivalent of ROM (which is rewritable in this case, and therefore not actually read-only). When the Game Boy updates the running “game,” it will display the data it sees on the “ROM.” Sutley just needed a way to update that data with information from the car, such as speed.
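Sutley’s exact memory layout isn’t documented here, so treat the following as a hypothetical sketch of the general idea: render a number into the digit tile indices that the Game Boy reads out of the rewritable “ROM” tile map. The tile offset and function name are made up for illustration:

```python
DIGIT_TILE_BASE = 0x30  # hypothetical: tile index of the "0" glyph in the tileset
BLANK_TILE = 0x00       # hypothetical: an empty background tile

def render_speed(speed, width=3):
    """Encode a speed as right-aligned digit tile indices for one tile-map row."""
    text = str(min(speed, 10**width - 1)).rjust(width)  # cap at the field width
    return bytes(BLANK_TILE if ch == " " else DIGIT_TILE_BASE + int(ch)
                 for ch in text)

print(render_speed(65).hex())  # '003635' -- blank tile, then glyphs "6" and "5"
```

The Arduino side would write bytes like these into the cartridge’s tile-map region, and the running “game” would simply draw whatever tiles it finds there on its next frame.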
The car in question is a second-generation Hyundai Santa Fe. Like all vehicles available in the US after 1998, it has an OBD-II port, and Sutley was able to tap into that to access the CAN bus that the car uses to send data between different systems. That data includes pertinent information, such as speed.
Sutley used an Arduino paired with a CAN shield to sniff and parse that data. The Arduino then writes to the “ROM” with whatever Sutley wants to display on the Game Boy’s screen, such as speed.
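The speed decoding itself follows the standard OBD-II mode 01, PID 0x0D response, in which a single data byte carries the vehicle speed in km/h. A minimal Python parser for the ECU’s CAN reply (a sketch of the standard format; Sutley’s own code may differ):

```python
def parse_vehicle_speed(frame_data):
    """Decode vehicle speed (km/h) from an OBD-II mode-01 PID 0x0D reply.

    frame_data -- the 8 data bytes of the ECU's CAN frame, laid out as
    [payload length, 0x41 (mode 01 response), 0x0D (PID), speed, ...].
    """
    length, mode, pid = frame_data[0], frame_data[1], frame_data[2]
    if length < 3 or mode != 0x41 or pid != 0x0D:
        raise ValueError("not a vehicle-speed response")
    return frame_data[3]  # single byte A = speed in km/h

# 0x4B = 75 km/h
print(parse_vehicle_speed(bytes([0x03, 0x41, 0x0D, 0x4B, 0, 0, 0, 0])))  # 75
```

On the Arduino, the CAN shield would deliver frames like this after a mode-01 request, and the decoded value is what gets written into the “ROM” for display.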
This is, of course, a remarkably poor dash. The original Game Boy didn’t even have a backlight for the screen, so this would be downright unsafe at night. But we can all agree that it is very cool.
Earlier this week, the UK Government published its AI Opportunities Action Plan, which sets out an ambitious vision to maintain the UK’s position as a global leader in artificial intelligence.
Whether you’re from the UK or not, it’s a good read, setting out the opportunities and challenges facing any country that aspires to lead the world in the development and application of AI technologies.
In terms of skills, the Action Plan highlights the need for the UK to train tens of thousands more AI professionals by 2030 and sets out important goals to expand education pathways into AI, invest in new undergraduate and master’s scholarships, tackle the lack of diversity in the sector, and ensure that the lifelong skills agenda focuses on AI skills.
This is all very important, but the Action Plan fails to mention what I think is one of the most important investments we need to make, which is in schools.
“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.”
While reading the section of the Action Plan that dealt with AI skills, I was reminded of this quote attributed to Bill Gates, which was adapted from Roy Amara’s law of technology. We tend to overestimate what we can achieve in the short term and underestimate what we can achieve in the long term.
In focusing on the immediate AI gold rush, there is a risk that the government overlooks the investments we need to make right now in schools, which will yield huge returns — for individuals, communities, and economies — over the long term. Realising the full potential of a future where AI technologies are ubiquitous requires genuinely long-term thinking, which isn’t always easy for political systems that are designed around short-term results.
But what are those investments? The Action Plan rightly points out that the first step for the government is to accurately assess the size of the skills gap. As part of that work, we need to figure out what needs to change in the school system to build a genuinely diverse and broad pipeline of young people with AI skills. The good news is that we’ve already made a lot of progress.
AI literacy
Over the past three years, the Raspberry Pi Foundation and our colleagues in the Raspberry Pi Computing Education Research Centre at the University of Cambridge have been working to understand and define what AI literacy means. That led us to create a research-informed model for AI literacy that unpacks the concepts and knowledge that constitute a foundational understanding of AI.
In partnership with one of the leading UK-based AI companies, Google DeepMind, we used that model to create Experience AI. This suite of classroom resources, teacher professional development, and hands-on practical activities enables non-specialist teachers to deliver engaging lessons that help young people build that foundational understanding of AI technologies.
We’ve seen huge demand from UK schools already, with thousands of lessons taught, and we’re delighted to be working with Parent Zone to support a wider rollout in the UK, along with free teacher professional development.
CEO Philip Colligan and Prime Minister Keir Starmer at the UK launch of Experience AI.
With the generous support of Google.org, we are working with a global network of education partners — from Nigeria to Nepal — to localise and translate these resources, and deliver locally organised teacher professional development. With over 1 million young people reached already, Experience AI can plausibly claim to be the most widely used AI literacy curriculum in the world, and we’re improving it all the time.
All of the materials are available for anyone to use and can be found on the Experience AI website.
There is no AI without CS
With the CEO of GitHub claiming that it won’t be long before 80% of code is written by AI, it’s perhaps not surprising that some people are questioning whether we still need to teach kids how to code.
I’ll have much more to say on this in a future blog post, but the short answer is that computer science and programming is set to become more — not less — important in the age of AI. This is particularly important if we want to tackle the lack of diversity in the tech sector and ensure that young people from all backgrounds have the opportunity to shape the AI-enabled future that they will be living in.
The simple truth is that there is no artificial intelligence without computer science. The rapid advances in AI are likely to increase the range of problems that can be solved by technology, creating demand for more complex software, which in turn will create demand for more programmers with increasingly sophisticated and complex skills.
That’s why we’ve set ourselves the ambition that we will inspire 10 million more young people to learn how to get creative with technology over the next 10 years through Code Club.
Curriculum reform
But we also need to think about what needs to change in the curriculum to ensure that schools are equipping young people with the skills and knowledge they need to thrive in an AI-powered world.
That will mean changes to the computer science curriculum, providing different pathways that reflect young people’s interests and passions, but ensuring that every child leaves school with a qualification in computer science or applied digital skills.
It’s not just computer science courses. We need to modernise mathematics and figure out what a data science curriculum looks like (and where it fits). We also need to recognise that AI skills are just as relevant to biology, geography, and languages as they are to computer science.
To be clear, I am not talking about how AI technologies will save teachers time, transform assessments, or be used by students to write essays. I am talking about the fundamentals of the subjects themselves and how AI technologies are revolutionising the sciences and humanities in practice in the real world.
These are all areas where the Raspberry Pi Foundation is engaged in original research and experimentation. Stay tuned.
Supporting teachers
All of this needs to be underpinned by a commitment to supporting teachers, including through funding and time to engage in meaningful professional development. This is probably the biggest challenge for policy makers at a time when budgets are under so much pressure.
For any nation to plausibly claim that it has an Action Plan to be an AI superpower, it needs to recognise the importance of making the long-term investment in supporting our teachers to develop the skills and confidence to teach students about AI and the role that it will play in their lives.
I’d love to hear what you think and if you want to get involved, please get in touch.