Nitrogen is critical for farming at scale; without some form of nitrogen to enrich the soil, we couldn’t grow staple crops efficiently enough to feed our large global population. Serious science goes into the production of fertilizers, and the Birkeland-Eyde process was one early example: it uses electrical arcs to turn nitrogen in the air into nitric acid. Marb is an enthusiastic citizen scientist who built his own experimental reactor to harness the Birkeland-Eyde process.
The Birkeland-Eyde process was largely phased out a century ago because of its high energy requirements: creating the electric arcs takes too much energy to be practical at the scale necessary for modern industrial farming. But efficiency isn’t a major concern for Marb, who is more interested in the science than in fertilizer production.
Creating an electrical arc isn’t very difficult, but controlling it is more challenging. For that reason, Marb used an Arduino UNO Rev3 to oversee his DIY reactor. Through a breakout shield, the Arduino controls the flow of power to the arc electrodes. That requires a large power supply, transformers, and a boost converter.
The rest of the reactor is devoted to the containment, preparation, and flow of air. The Birkeland-Eyde process works best with dry air, so Marb’s design pumps air through a desiccant-packed tube and into the reaction chamber where the electrodes meet. Sensors, like a temperature sensor, help the Arduino gain feedback on the conditions.
Marb’s video ends with a demonstration, but he hasn’t yet refined the reaction process for maximum yields. If there is enough interest, Marb says that he’ll make a follow-up video with more detail.
With just a few days to go, Arduino Days 2025 promises to be one of the biggest and most exciting events in our 20-year history! Join us for two days of live-streamed content on March 21st-22nd, featuring inspiring talks, major product announcements, and community showcases from makers, educators, and industry leaders worldwide.
But that’s just the beginning: tune in to be the first to hear about brand-new announcements, including exciting developments around Arduino Cloud!
And because we’re celebrating 20 years of Arduino, we’ve got something special for you: exclusive discounts on the Arduino Store throughout the event.
A packed lineup of speakers and topics
This year’s live-streamed event brings together an incredible mix of voices, from Arduino users presenting their ideas to startups and multinational partners sharing their success stories.
Expect sessions covering robotics, generative AI, building automation, and K-12 education, with insights from some of the most influential figures in open-source hardware, IoT, and embedded technology.
Eben Upton (Raspberry Pi), Limor Fried (Adafruit), and Zach Shelby (Edge Impulse) will discuss the future of connected devices and how open-source platforms continue to shape innovation.
For those interested in IoT and connectivity, we’ll have key insights from Swee Ann Teo (Espressif Systems), Matt Johnson (Silicon Labs), and Jonathan Beri (Golioth), covering how hardware, cloud, and AI are coming together to power the next generation of smart devices.
You’ll find plenty of inspiration for your projects thanks to guests that run the gamut from custom electric cars (Charly Bosch) to interactive art (Mónica Rikic).
Of course, you’ll also hear from Arduino’s own leaders, including CEO Fabio Violante, co-founders Massimo Banzi and David Cuartielles, and team members from Turin, Lugano, Malmö, Austin, and beyond.
Arduino Days isn’t just about us, it’s about you! Around the world, organizations and Arduino fans are hosting their own events to celebrate. Check out the map on the Arduino Days website to see what’s happening near you.
Visit the Arduino Days website to find all the latest updates, the full schedule, and details on how to join the live stream. We can’t wait to celebrate with you!
Sous vide (which means “under vacuum” in French) is a cooking technique in which food is sealed in a plastic bag (or another container) and immersed in warm water for a long period of time. It is great for meat, like steak, because it ensures the food is an even temperature throughout. For a steak, you would then quickly sear the outside for beefy perfection. If that intrigues you, Rob Cai has a guide that will walk you through the construction of a sous vide cooker.
You can, of course, purchase a sous vide cooker and they’re quite affordable these days. But building your own is a fun project and it gives you complete control over the cooker’s functionality.
Closed-loop feedback is critical for sous vide cooking. The cooker needs to keep the water at a precise temperature, which means it needs to monitor the temperature while heating.
In this case, an Arduino Nano oversees that process. An LCD screen and pair of potentiometers let the user set the temperature and cook time. All of those components go in a basic enclosure for protection. The Arduino then toggles AC power to an immersion heater via a relay and monitors the water with a DS18B20 temperature sensor.
This doesn’t require any kind of tricky PID control that would need tuning, because water is relatively slow to change temperature. Therefore, the provided Arduino sketch is easy to understand and modify to get the exact performance you want.
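To make that concrete, here is a rough, hypothetical sketch (in Python, not the project's actual Arduino sketch) of the kind of simple on/off control with a small hysteresis band that such a cooker can use instead of PID; the setpoint, hysteresis value, and thermal model are all illustrative assumptions:

```python
# Hypothetical on/off (bang-bang) control with hysteresis: the relay
# energises the heater below the setpoint minus a small band, and
# switches it off once the setpoint is reached. No PID tuning needed.

def heater_state(current_temp, setpoint, heating, hysteresis=0.5):
    """Return True if the relay should energise the heater."""
    if current_temp < setpoint - hysteresis:
        return True          # too cold: turn (or keep) the heater on
    if current_temp >= setpoint:
        return False         # at temperature: turn the heater off
    return heating           # inside the band: keep the previous state

# Simulate a pot of water drifting toward a 57 C medium-rare setpoint
# (a crude thermal model, purely for illustration)
temp, heating = 20.0, False
for _ in range(200):
    heating = heater_state(temp, 57.0, heating)
    temp += 0.5 if heating else -0.1
print(round(temp, 1))  # settles near the setpoint
```

Because water heats and cools slowly, the temperature only oscillates within a degree or so of the setpoint, which is plenty for sous vide.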
For people who use hearing aids, the inability to stream audio directly to their devices can leave them out or create barriers to important information, whether it’s in a classroom, train station or concert.
Now, Android supports Auracast, a new Bluetooth technology that uses your phone to enable a direct connection from hearing aids to audio broadcasts in crowded and public venues. This also means hearing aid presets, available within your phone settings, can be conveniently applied to broadcasts to personalize streams to your hearing. And expanding on our work with Bluetooth SIG to use the latest LE Audio technology, we’re bringing to Pixel 9 devices the ability to connect to broadcasts through QR codes – removing the need to go into your settings.
To use Auracast, pair LE Audio compatible hearing aids from companies like GN Hearing and Starkey with Samsung Galaxy devices running One UI 7 and Android 15, or with Pixel 9 devices running the Android 16 beta, and tune into Auracast broadcasts from compatible TV streamers or public venues.
Google Play Games on PC was designed to give you more flexibility for how you play your favorite games. Whether you’re a mobile gamer who’s always wanted to try out a popular PC title, or a PC enthusiast wanting to experience your favorite Android games on your desktop, there’s something exciting for everyone. Google Play Games on PC will be moving into general availability later this year, so be on the lookout for even more multiplatform games and upgrades.
Enjoy smooth and improved gameplay powered by Android
We know a key piece of a great gaming experience is how well a game performs. We’re equipping developers with a modern graphics API called Vulkan and enhancing the Android Dynamic Performance Framework (ADPF). Vulkan allows games to make better use of a device’s graphics for smoother frame rates and more realistic visuals. And ADPF will help game developers optimize the device performance so that you can have a more stable and responsive gaming experience.
Earlier this month, young creators gathered at the Sport Ireland Campus National Indoor Training Centre in Dublin for Coolest Projects Ireland 2025, an inspiring showcase of creativity, coding, and problem solving. With more than 80 participants sharing over 60 incredible projects, this year’s event highlighted the passion and innovation of young creators from across Northern Ireland and the Republic of Ireland.
The day offered the chance for young people to share their digital projects, engage with a like-minded community, chat with VIP judges, and take part in exciting coding activities like Astro Pi Mission Zero. The event was once again supported by Meta, who sponsored the new AI category, continuing their commitment to promoting digital skills among young people.
Celebrating creativity
Coolest Projects is a space for all digital projects, across all levels and categories, from hardware inventions to AI to Scratch. The event celebrates not just the finished products, but also the learning journeys of young creators and skills such as problem solving and creativity.
Helen Gardner, Programme Manager at the Raspberry Pi Foundation, shared her enthusiasm about this year’s showcase:
“Returning to Dublin for Coolest Projects is always such a joy! It’s incredible to see the enthusiasm, creativity, and talent of young creators as they bring their ideas to life. This event is all about celebrating the community and inspiring the next generation of problem-solvers. It’s always so inspiring to witness their amazing projects and the energy they bring to the day!”
Participants at Coolest Projects Ireland included young people from schools, coding clubs such as Code Club and CoderDojo, and independent makers. Many were returning participants, excited to showcase their latest projects and connect with fellow creators. The sense of community and encouragement was felt throughout the event, with mentors, parents, and judges offering valuable encouragement and feedback to support growth and celebrate achievements.
Spotlight on the judges’ favourites
This year, judges were particularly impressed with the originality and impact of the projects. We caught up with four of the creators to find out why being involved in Coolest Projects Ireland was important to them.
Sister duo, Riddhiba and Aarushiba, created Innovaid, a project that uses technology to improve safety at events.
“We wanted to solve a problem that was affecting a large number of people. Having read news articles, and having talked to people who have had bad experiences at concerts and large events, we wanted to solve this problem that has been ongoing for many years. Although technology has advanced rapidly in the past years, there are still flaws in large event management leading to incidents and deaths. We wanted to incorporate safety, medical aid, crowd management, and inclusivity.”
“Coolest Projects Ireland was an amazing experience for both of us, we got the opportunity to meet with so many people that were so passionate about technology and coding. We met many people who also wanted to make a change in society, or wanted to solve problems.”
Coolest Projects also welcomed an AI category, supported by Meta, for the first time; it included Kirsty’s entry, A haon, dó, trí – Learn with me, which used machine learning to help learners master the Irish language in an engaging and interactive way. Kirsty shared a little about her journey with the project:
“I really enjoyed some of the machine learning with Scratch projects on the Raspberry Pi site. While doing the ‘Alien language’ project, it occurred to me that I could use a similar approach to build a game to help young kids learn Irish.”
“I had to build my own Irish language training data set so I recorded lots of speech samples from my school friends. However, I go to an all-girls school, which would have meant my training data would have been very limited! So I recorded some boys’ voices at my CoderDojo to make my data set more varied and balanced.”
In the Games category, Timi was recognised for his project, Stakes & Laughters Maximus.
“I got the idea from a story my dad told me about when I was younger. I apparently got really upset when I lost a game of Snakes and Ladders. So, I wanted to make a Snakes and Ladders game that wasn’t just about luck. I wanted players to have to think strategically about how to use their luck.”
“There were many challenges! Everything from the character movement to the turn system and the items presented roadblocks. But I broke through them by carefully thinking about what I wanted to achieve and then using code to create the logic for it.”
Get involved
The Coolest Projects online showcase is open for entries, providing young people worldwide the opportunity to share their digital creations.
We also have upcoming in-person events in the US, UK, and around the world thanks to our partner organisations. You can find out more and get involved with these through the Coolest Projects website.
Finally, we want to say a huge thank you to everyone who made Coolest Projects Ireland 2025 such a fantastic experience! We can’t wait to see what young innovators create next year.
Guinness is one of those beers (specifically, a stout) that people take seriously and the Guinness brand has taken full advantage of that in their marketing. They even sell a glass designed specifically for enjoying their flagship creation, which has led to a trend that the company surely appreciates: “splitting the G.” But that’s difficult for many to pull off, so Eamon Magd built this device that makes the trick easy to master.
“Splitting the G” refers to taking the initial gulp of stout in precisely the right amount to leave the line between liquid and foam in the middle of the “G” on the Guinness logo on a standard Guinness pint glass. Not too difficult for frequent imbibers, but Magd doesn’t usually drink and hasn’t had the practice.
This device solves that problem by vibrating when Magd sips just enough Guinness to result in a split G. It does that with an Arduino UNO Rev3 that monitors the stout in the glass with a non-contact liquid level sensor.
Traditional liquid level sensors, like floats, require physical contact with the contents of the vessel, which can be unsanitary. The sensor chosen by Magd doesn’t, as it relies on capacitive measurements. It attaches to the outside of the glass and can tell if liquid inside the glass is above or below its level.
Magd just had to find the right spot on the glass to attach that sensor and then programmed an Arduino sketch to run the vibration motor when the sensor fails to detect liquid. Magd even plans to put that to the test at the Guinness Storehouse in Ireland.
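The trigger logic amounts to simple edge detection on the sensor reading. Here is a hypothetical Python sketch of that idea (the actual device implements it in an Arduino sketch, and the function name is ours):

```python
# Hypothetical trigger logic: the capacitive sensor sits at the
# mid-"G" line, and the drinker should feel a buzz the moment the
# reading goes from "liquid present" to "no liquid" at that height.

def should_buzz(prev_reading, curr_reading):
    """Buzz only on the falling edge of the liquid-level reading."""
    return prev_reading and not curr_reading

readings = [True, True, True, False, False]  # sipping past the sensor
buzzes = [should_buzz(a, b) for a, b in zip(readings, readings[1:])]
print(buzzes)  # → [False, False, True, False]
```

Reacting only to the falling edge means the motor buzzes once at the right moment instead of vibrating continuously while the glass sits below the line.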
We’re gearing up for Embedded World, the leading event for embedded systems, industrial automation, and IoT technology, taking place March 11th-13th in Nuremberg. Visit us in Hall 3A, Booth 313 to explore our latest innovations and experience more live demos than ever, thanks to key collaborations across the industrial landscape. This year, we’re demonstrating just how far Arduino has come in bridging the gap between prototyping and industrial deployment.
Explore the forefront of innovation with us
At this year’s Arduino booth, we’re turning ideas into reality with groundbreaking solutions for smart industries, automotive prototyping, and next-gen IoT applications. Here’s a glimpse of what you’ll find when you visit:
The future of automotive – Learn about the E/E Starter Kit, developed as part of our partnership with Bosch for the digital.auto initiative. This cutting-edge platform empowers developers, startups, and universities to prototype software-defined vehicles (SDVs) with real-world applications in mind.
Ultra-wideband (UWB) technology in action – We’re unveiling two new UWB-powered products, developed with Truesense, to enable next-level precision tracking, seamless connectivity with cloud platforms, and secure data transmission.
Game-changing product launches – Be among the first to see our newest hardware innovations, designed to streamline industrial development and accelerate time to market.
AI-powered warehouse and logistics automation – See how computer vision and edge computing can revolutionize inventory management, predictive maintenance, and smart logistics thanks to an Arduino-based solution by our partner System Electronics.
Advanced robotics & AGVs – Get hands-on with the Portenta AGV Kit, developed with Analog Devices, Inc., to explore automated guided vehicles (AGVs) with real-time location tracking, motor control, and 3D mapping – perfect for factory automation, research, and education.
Single Pair Ethernet (SPE) solutions – Discover how next-gen industrial connectivity is simplifying communication for automation and sensor networks.
Environmental monitoring & motion-based control – Check out live demos that showcase intelligent sensing solutions for industrial environments, smart buildings, and more.
Embedded World 2025 is your chance to experience Arduino Pro’s industrial-grade solutions up close and see how our open-source ecosystem is shaping the future of embedded technology.
Celebrate our 20th with a free ticket!
Arduino is turning 20 this year, and we’re excited to kick off the celebrations at Embedded World!
While Arduino Day 2025 (March 21st-22nd) will be the main event, we want to start the party early – so we’re giving you a free ticket! Just register for Embedded World using our voucher code ew25542980.
Visit Hall 3A, Booth 313 to say hello, check out our latest technology, and meet the team. See you in Nuremberg!
Generative AI (GenAI) tools like GitHub Copilot and ChatGPT are rapidly changing how programming is taught and learnt. These tools can solve assignments with remarkable accuracy. GPT-4, for example, scored an impressive 99.5% on an undergraduate computer science exam, compared to Codex’s 78% just two years earlier. With such capabilities, researchers are shifting from asking, “Should we teach with AI?” to “How do we teach with AI?”
Leo Porter from UC San Diego
Daniel Zingaro from the University of Toronto
Leo Porter and Daniel Zingaro have spearheaded this transformation through their groundbreaking undergraduate programming course. Their innovative curriculum integrates GenAI tools to help students tackle complex programming tasks while developing critical thinking and problem-solving skills.
Leo and Daniel presented their work at the Raspberry Pi Foundation research seminar in December 2024. During the seminar, it became clear that much could be learnt from their work, with their insights having particular relevance for teachers in secondary education thinking about using GenAI in their programming classes.
Practical applications in the classroom
In 2023, Leo and Daniel introduced GitHub Copilot in their introductory programming CS1-LLM course at UC San Diego with 550 students. The course included creative, open-ended projects that allowed students to explore their interests while applying the skills they’d learnt. The projects covered the following areas:
Data science: Students used Kaggle datasets to explore questions related to their fields of study — for example, neuroscience majors analysed stroke data. The projects encouraged interdisciplinary thinking and practical applications of programming.
Image manipulation: Students worked with the Python Imaging Library (PIL) to create collages and apply filters to images, showcasing their creativity and technical skills.
Game development: A project focused on designing text-based games encouraged students to break down problems into manageable components while using AI tools to generate and debug code.
Students consistently reported that these projects were not only enjoyable but also deepened their understanding of programming concepts. A majority (74%) found the projects helpful or extremely helpful for their learning. One student noted:
“Programming projects were fun and the amount of freedom that was given added to that. The projects also helped me understand how to put everything that we have learned so far into a project that I could be proud of.”
Core skills for programming with Generative AI
Leo and Daniel emphasised that teaching programming with GenAI involves fostering a mix of traditional and AI-specific skills.
Writing software with GenAI applications, such as Copilot, needs to be approached differently to traditional programming tasks
Their approach centres on six core competencies:
Prompting and function design: Students learn to articulate precise prompts for AI tools, honing their ability to describe a function’s purpose, inputs, and outputs, for instance. This clarity improves the output from the AI tool and reinforces students’ understanding of task requirements.
Code reading and selection: AI tools can produce any number of solutions, and each will be different, requiring students to evaluate the options critically. Students are taught to identify which solution is most likely to solve their problem effectively.
Code testing and debugging: Students practise open- and closed-box testing, learning to identify edge cases and debug code using tools like doctest and the VS Code debugger.
Problem decomposition: Breaking down large projects into smaller functions is essential. For instance, when designing a text-based game, students might separate tasks into input handling, game state updates, and rendering functions.
Leveraging modules: Students explore new programming domains and identify useful libraries through interactions with Copilot. This prepares them to solve problems efficiently and creatively.
Ethical and metacognitive skills: Students engage in discussions about responsible AI use and reflect on the decisions they make when collaborating with AI tools.
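To illustrate the testing competency above, here is a small example, written by us rather than taken from the course materials, of a function whose edge cases (empty input, a tie) are written directly into the docstring where Python's doctest module can check them:

```python
# Illustrative doctest-style testing: edge cases live in the docstring
# and can be verified automatically with the doctest module.

def longest_word(words):
    """Return the longest word in a list, or '' for an empty list.

    >>> longest_word(['hat', 'house', 'hi'])
    'house'
    >>> longest_word([])
    ''
    >>> longest_word(['tie', 'toe'])  # first of equal lengths wins
    'tie'
    """
    return max(words, key=len, default='')

if __name__ == '__main__':
    import doctest
    doctest.testmod(verbose=False)  # silently passes if all examples hold
```

Writing the tests before (or while) prompting an AI tool gives students a concrete way to judge whether generated code actually solves their problem.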
Adapting assessments for the AI era
The rise of GenAI has prompted educators to rethink how they assess programming skills. In the CS1-LLM course, traditional take-home assignments were de-emphasised in favour of assessments that focused on process and understanding.
Leo and Daniel chose several types of assessments — some required students to complete programming tasks with the help of GenAI tools, while others had to be completed without them.
Quizzes and exams: Students were evaluated on their ability to read, test, and debug code — skills critical for working effectively with AI tools. Final exams included both tasks that required independent coding and tasks that required use of Copilot.
Creative projects: Students submitted projects alongside a video explanation of their process, emphasising problem decomposition and testing. This approach highlighted the importance of critical thinking over rote memorisation.
Challenges and lessons learnt
While Leo and Daniel reported that the integration of AI tools into their course has been largely successful, it has also introduced challenges. Surveys revealed that some students felt overly dependent on AI tools, expressing concerns about their ability to code independently. Addressing this will require striking a balance between leveraging AI tools and reinforcing foundational skills.
Additionally, ethical concerns around AI use, such as plagiarism and intellectual property, must be addressed. Leo and Daniel incorporated discussions about these issues into their curriculum to ensure students understand the broader implications of working with AI technologies.
A future-oriented approach
Leo and Daniel’s work demonstrates that GenAI can transform programming education, making it more inclusive, engaging, and relevant. Their course attracted a diverse cohort, including students traditionally underrepresented in computer science — 52% of the students were female and 66% were not majoring in computer science — highlighting the potential of AI-powered learning to broaden participation in the field.
By embracing this shift, educators can prepare students not just to write code but also to think critically, solve real-world problems, and effectively harness the AI innovations shaping the future of technology.
If you’re an educator interested in using GenAI in your teaching, we recommend checking out Leo and Daniel’s book, Learn AI-Assisted Python Programming, as well as their course resources on GitHub. You may also be interested in our own Experience AI resources, which are designed to help educators navigate the fast-moving world of AI and machine learning technologies.
Join us at our next online seminar on 11 March
Our 2025 seminar series is exploring how we can teach young people about AI technologies and data science. At our next seminar on Tuesday, 11 March at 17:00–18:00 GMT, we’ll hear from Lukas Höper and Carsten Schulte from Paderborn University. They’ll be discussing how to teach school students about data-driven technologies and how to increase students’ awareness of how data is used in their daily lives.
To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.
Park, play and stay entertained with more gaming apps available on Android Auto. Whether you’re looking for a quick puzzle or an adrenaline-pumping race, you can now use your car’s screen while you’re parked to play games like Farm Heroes Saga, Candy Crush Soda Saga, Angry Birds 2 and Beach Buggy Racing. Simply download a game to your mobile device to access it in Android Auto while you’re waiting in the car, and level up in your downtime.
Home file servers can be very useful for people who work across multiple devices and want easy access to their documents. And there are a lot of DIY build guides out there. But most of them are full-fledged NAS (network-attached storage) devices and they tend to rely on single-board computers. Those take a long time to boot and consume quite a lot of power. This lightweight file server by Zombieschannel is different, because it runs entirely on an Arduino.
An ESP32 is a microcontroller with built-in connectivity (Wi-Fi and Bluetooth). Like all MCUs, it can “boot” and start running its firmware almost instantly. And while it runs, it will consume much less power than a conventional PC or a single-board computer. Zombieschannel’s project proves that the Arduino Nano ESP32 is suitable for a file server — if your expectations are modest.
The hardware for this project consists of a Nano ESP32, an SD card reader module, and a small monochrome OLED screen. The SD card provides file storage and the OLED shows status information.
Most of the work went into writing the firmware, which Zombieschannel did with assistance from ChatGPT. That has the Arduino hosting a basic web interface that local users can access to upload or download files. Zombieschannel also created a command line interface that provides more comprehensive access via a serial connection.
This does have limitations and the transfer speeds are quite slow by modern standards. But the file server seems useful for small files, like text documents. Zombieschannel plans to design an enclosure for the device and it should tuck unobtrusively into a corner, where it can run without drawing much power.
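For a sense of how such a server fits together, here is a heavily simplified, hypothetical Python model of its two interfaces (the real firmware runs on the ESP32, stores files on the SD card, and serves a web UI; the class and command names here are our own):

```python
# Hypothetical model of the file server's two interfaces: the web UI
# offers upload and download, and a command-line interface (here, a
# tiny command dispatcher) adds listing and deletion. A dict stands in
# for the SD card.

class TinyFileServer:
    def __init__(self):
        self.storage = {}              # filename -> bytes ("SD card")

    def upload(self, name, data):
        self.storage[name] = data

    def download(self, name):
        return self.storage.get(name)  # None if the file is absent

    def handle_command(self, line):
        """Dispatch a CLI line such as 'ls' or 'rm notes.txt'."""
        cmd, *args = line.split()
        if cmd == 'ls':
            return sorted(self.storage)
        if cmd == 'rm' and args:
            return self.storage.pop(args[0], None) is not None
        return 'unknown command'

server = TinyFileServer()
server.upload('notes.txt', b'hello')
print(server.download('notes.txt'))      # → b'hello'
print(server.handle_command('ls'))       # → ['notes.txt']
```

The same separation, a thin transport layer over a simple storage API, is what lets the project expose both a browser interface and a serial CLI without duplicating logic.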
If you hear the term “generative art” today, you probably subconsciously add “AI” to the beginning without even thinking about it. But generative art techniques existed long before modern AI came along — they even predate digital computing altogether. Despite that long history, generative art remains interesting as consumers attempt to identify patterns in the underlying algorithms. And thanks to the “Generative Art 1€” vending machine built by Niklas Roy, you can experience that for yourself by spending a single euro.
Roy built this vending machine to display at the “Intelligence, it’s automatic” exhibit, hosted at Zebrastraat in Belgium. Rather than AI, Roy gave the machine more traditional algorithms to generate abstract pieces of line art. Each piece uses the current time as the “seed” for the algorithms, so it will be unique and an identical piece will never appear again. And the current piece, shown on a screen in the machine, always evolves as time passes. If a viewer sees something they like, they’ll need to insert a euro coin immediately or risk losing the opportunity to secure the art.
Once paid, the machine will use a built-in pen plotter to draw the line art on a piece of paper. It will also label the art with a unique identifier: the seed number. Then, it will stamp the paper for authenticity. Finally, it will cut that piece from the roll of paper and dispense the art through a chute at the bottom.
That all happens under the direction of an Arduino Mega 2560 board. It controls the pen plotter, which is a repurposed model called Artima Colorgraf. The coin-op mechanism is an off-the-shelf unit and a Python script, running on a connected laptop, performs the art generation. What message is this vending machine meant to convey? Maybe that art is ethereal or that it has little value — just a euro — to modern society. Whatever the case, it is a work of art in its own right.
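The seeding scheme can be illustrated with a short Python sketch (ours, not Roy's actual code): seeding a pseudo-random generator with the current time makes each moment's artwork unique, while the printed seed number makes any piece exactly reproducible later.

```python
# Illustrative time-seeded generation: the same seed always produces
# the same set of line segments, so the seed printed on the artwork
# doubles as a certificate of its identity.

import random
import time

def generate_art(seed, n_lines=5, size=100):
    rng = random.Random(seed)          # deterministic for a given seed
    return [(rng.randrange(size), rng.randrange(size),
             rng.randrange(size), rng.randrange(size))
            for _ in range(n_lines)]   # line segments (x1, y1, x2, y2)

seed = int(time.time())                # "now" becomes the artwork's ID
piece = generate_art(seed)
print(generate_art(seed) == piece)     # → True: same seed, same art
```

Using a dedicated `random.Random(seed)` instance (rather than the global generator) keeps the artwork reproducible even if other code draws random numbers in between.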
This week at MWC in Barcelona, we’re showing how AI on Android can help you day-to-day with fun, interactive demos for attendees. You can learn how Gemini Live can help you with complex topics (in multiple languages!), use Circle to Search to translate a menu and check how the latest partner devices from Android are bringing these experiences to life. We’re also showing new live video and screen-sharing capabilities in Gemini Live, which will start rolling out to Gemini Advanced subscribers as part of the Google One AI Premium plan on Android devices later this month.
If you’re on the ground, come visit Android Avenue between Halls 2 and 3 and try out these demos, and explore the show floor to collect our special edition Android pins at our partners’ booths!
A small startup called K-Scale Labs is in the process of developing an affordable, open-source humanoid robot and Mike Rigsby wanted to build a compatible hand. This three-fingered robot hand is the result, and it makes use of serial bus servos from Waveshare.
Most Arduino users are familiar with full-duplex serial communication, which requires two data lines. The first carries data in one direction, while the second carries data in the other. As such, devices can send and receive data at the same time — they don’t have to wait until the line is “free” to send data.
But half-duplex serial communication is also possible. Each device just has to wait its turn to send data. That is less common, but it does have some benefits. In this case, Rigsby used Waveshare servo motors that communicate via a half-duplex serial bus. The benefit is that users can daisy-chain multiple servos together, connecting to a single serial pin on the host device. These particular servo motors also have magnetic encoders, which are more reliable than potentiometers.
Five of those servos actuate the 3D-printed fingers on Rigsby’s robot hand (the top two fingers have two joints each). He used an Arduino UNO Rev3 board to control them, but couldn’t use the typical RX and TX (0 and 1) pins for communication over the serial bus. For that reason, he included a serial bus module meant specifically for driving servos like these.
This seems to work pretty well and the motors move smoothly — though they currently lack sensors that would enable force/pressure control.
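To see why daisy-chaining works on a half-duplex bus, consider this hypothetical Python sketch of a command frame. Every servo sees every packet, but only the one whose ID matches acts on it. The layout shown (header, ID, length, instruction, parameters, inverted-sum checksum) is typical of hobby serial-bus servos, though Waveshare's exact protocol may differ in its details:

```python
# Illustrative serial-bus servo framing: one shared wire, per-packet
# addressing by servo ID, and a one's-complement checksum. This is a
# sketch of the general scheme, not Waveshare's documented protocol.

def build_packet(servo_id, instruction, params):
    body = [servo_id, len(params) + 2, instruction, *params]
    checksum = (~sum(body)) & 0xFF     # one's-complement of the sum
    return bytes([0xFF, 0xFF, *body, checksum])

def addressed_servo(packet, servo_ids):
    """Return which servo on the shared bus should act on the packet."""
    return packet[2] if packet[2] in servo_ids else None

pkt = build_packet(servo_id=3, instruction=0x03, params=[0x2A, 0x00])
print(addressed_servo(pkt, servo_ids={1, 2, 3, 4, 5}))  # → 3
```

Because addressing happens in the packet rather than in the wiring, adding a sixth servo to the hand would mean splicing it onto the same bus and giving it an unused ID, with no extra pins on the UNO.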
Arduino and System Electronics are joining forces to create cutting-edge solutions for industrial and building automation, focusing specifically on edge computing to deliver real-time processing, predictive analytics, and seamless integration for AI-driven inventory management and logistics.
System Electronics – an “innovation accelerator” for over 50 years – brings to the table the electronics and mechatronics expertise to facilitate innovation and make a decisive contribution to their clients’ competitive edge. “Our collaboration with System Electronics merges their deep industrial expertise with Arduino’s focus on edge computing, enabling businesses to deploy intelligent, scalable automation solutions faster than ever. From AI-driven inventory management to predictive maintenance and real-time quality control, this partnership empowers industrial players to optimize operations, reduce downtime, and drive efficiency with cutting-edge technology,” our CEO Fabio Violante announced. “This is another step toward making advanced AI and automation accessible to industries of all sizes, accelerating the transformation of smart manufacturing and logistics.”
Andrea Gozzi, General Manager of System Electronics added, “At System Electronics, we believe in developing solutions that bring real value to industrial automation. By partnering with Arduino, we’ll create powerful and flexible AI-driven solutions that meet the needs of modern manufacturing, robotics, and smart infrastructure: innovations built for reliability, scalability, and performance.”
The benefits of a partnership for innovation
AI-driven inventory and warehouse management leverage computer vision and automation to make predictive maintenance and autonomous decision-making faster and more efficient than ever.
To fully explore these opportunities, Arduino and System Electronics are integrating acceleration technology from Hailo, a leader in Edge AI processors. By combining Hailo’s AI acceleration, Arduino’s open ecosystem, and System Electronics’ industrial-grade expertise, this collaboration aims to deliver real-time, high-performance automation solutions with enhanced precision and speed – bringing intelligence directly to where it’s needed most.
Why explore smart inventory and logistics
Smart factories and connected warehouses leverage powerful, real-time AI computing at the edge to improve efficiency and quality standards.
For example, computer vision can be employed at the edge to detect defects and irregularities, as well as to automate warehouse logistics with precise item tracking and storage optimization; sensor data can be analyzed to implement predictive maintenance and tackle anomalies before failures occur, reducing downtime; motion planning and obstacle avoidance can be optimized for AGVs and robotic arms thanks to intelligent control based on data and AI.
See the demo in action at Embedded World
Within their collaboration, Arduino and System Electronics have developed an innovative solution for Modula, a leader in warehouse automation: “We are committed to revolutionizing intralogistics through automation and smart technology that make processes easier. Harnessing the power of AI-driven intelligence allows us to enhance efficiency, optimize workflows, and deliver even greater value to our customers without adding complexities for them,” said Franco Stefani, founder of Modula.
This demo for a smart inventory solution will debut at Embedded World 2025 (Nuremberg, March 11th-13th): visit Arduino in Hall 3A, Booth #313, to see it in action and connect with the experts behind its development.
In April 2023, we launched our first Experience AI resources, developed in partnership with Google DeepMind to support educators to engage their students in learning about the topic of AI. Since then, the Experience AI programme has grown rapidly, reaching thousands of educators all over the world. Read on to find out more about the impact of our resources, and what we are learning.
The Experience AI resources
The Experience AI resources are designed to help educators introduce AI and AI safety to 11- to 14-year-olds. They consist of:
Foundations of AI: a comprehensive unit of six lessons including lesson plans, slide decks, activities, videos, and more to support educators to introduce AI and machine learning to young people
Large language models (LLMs): a lesson designed to help young people discover how large language models work, their benefits, and why their outputs are not always reliable
Ecosystems and AI — Biology: a lesson providing an opportunity for young people to explore how AI applications are supporting animal conservation
AI safety: a set of resources with a flexible design to support educators in a range of settings to equip young people with the knowledge and skills to responsibly and safely navigate the challenges associated with AI
The launch of Experience AI came at an important time: AI technologies are playing an ever-growing role in our everyday lives, so it is crucial for young people to gain the understanding and skills they need to critically engage with these technologies. While the resources were initially designed for use by educators in the UK, they immediately attracted interest from educators across the world, as well as individuals wanting to learn about AI. The resources have now been downloaded over 325,000 times by people from over 160 countries. This includes downloads from over 7000 educators worldwide, who will collectively reach an estimated 1.2 million young people.
Thanks to funding from Google DeepMind and Google.org, we have also been working with partners from across the globe to localise and translate the resources for learners and educators in their countries, and provide training to support local educators to deliver the lessons. The educational resources are now available in up to 15 languages, and to date, we have trained over 100 representatives from 20 international partner organisations, who will go on to train local educators. Five of these organisations have begun onward training already, collectively training over 1500 local educators so far.
The impact of Experience AI
The Experience AI resources have been well received by students and educators. Based on responses to our follow-up surveys, in countries where we have partners:
95% of educators agreed that the Experience AI sessions have increased their students’ knowledge of AI concepts
90% of young people (including young people in formal and non-formal education settings and learning independently) indicated that they better understand what AI and machine learning are
This is backed up by qualitative feedback from surveys and interviews.
“Students’ perception and understanding of AI has improved and corrected. They realised they can contribute and be a part of the [development], instead of only users.” – Noorlaila, educator, SMK Cyberjaya, Malaysia
“[Students] found it interesting in the sense that it’s relevant information and they didn’t know what information was used for training models.” – Teacher, Liceul Tehnologic “Crisan” Criscior, Romania
“Based on my knowledge and learning about AI, I now appreciate the definition of AI as well as its implementation.” – Student, Changamwe JSS, Kenya
The training and resources also support educators to feel more confident to teach about AI:
93% of international partner representatives who participated in our training agreed that the training increased their knowledge of AI concepts
88% of educators receiving onward training by our international partners agreed that the training increased their confidence to teach AI concepts
87% of educator respondents from our ‘Understanding AI for educators’ online course agreed that the course was useful for supporting young people
“It was a wonderful experience for me to join this workshop. Truly I was able to learn a lot about AI and I feel more confident now to teach the kids back at school about this new knowledge.” – Nur, educator, SMK Bandar Tasek Mutiara, trained by our partner Penang Science Cluster, Malaysia
“This was one of the best information sessions I’ve been to! So, so helpful!” – Meagan, educator, University of Alberta, trained by our partner Digital Moment, Canada
“The layout of the course in terms of content structuring is amazing. I love the discussion forum and the insightful yet empathetic responses by the course moderators on the discussion board. Honestly, I am really glad I started my AI in education journey with you.” – Priyanka, head teacher (primary level), United Arab Emirates, online course participant
What are we learning?
We are committed to continually improving our resources based on feedback from users. A recent review of feedback from educators highlighted key aspects of the resources that educators value most, as well as some challenges educators are facing and possible areas for improvement. For example, educators particularly like the interactive aspects, the clear structure and explanations, and the videos featuring professionals from the AI industry. We are continuing to look for ways we can better support educators to adapt the content and language to better support students in their context, fit Experience AI into their school timetables, and overcome technical barriers.
If you would like to try out our Experience AI resources, head to experience-ai.org, where you can find our free resources and online course, as well as information about local partners in your area.
We’re hugely proud of the new magazine. It’s got all the amazing features that made The MagPi such a success, but with a new design that’s easier to read, better at displaying code, and more in sync with Raspberry Pi’s amazing documentation and tutorials.
Thank you again to everybody who supports us by subscribing to the magazine or contributing to our endeavour. We really can’t do it without you.
We’ve worked incredibly hard to make this issue one to remember. Inside the inaugural Raspberry Pi Official Magazine, you will discover…
Raspberry Pi problem solving
Our lead feature this month is a huge analysis of Raspberry Pi troubleshooting. We’ve gathered a vast amount of documentation on power requirements, SD card performance, Raspberry Pi OS customisation, boot problems, audio and video fixes, and hardware enhancements.
The maker toolset
It’s incredibly important to make things. Making is rewarding, fun, and practical. In this month’s magazine, you’ll discover everything you need to set up your makerspace. Our maker toolset has the full range from simple circuits and humble sewing up to 3D printing and metalwork.
HexBoard
Raspberry Pi Official Magazine is packed with all the best projects from around the globe. Jared DeCook shares his incredible HexBoard musical instrument with us. Instead of piano-style keys, it features hexagonal buttons and RGB LEDs, all controlled by Raspberry Pi RP2040.
Raspberry Pi Chess Board
Imagine playing chess against a robot. That’s what high school student Tamerlan Goglichidze has created. With a stepper motor and magnets, it moves the chess pieces around the board.
Sense HAT V2
Sense HAT is a great way to discover coding and data gathering. In this tutorial we’ll show you how to attach a Sense HAT to your Raspberry Pi and start controlling the LED display.
Custom CNC Machine
Jo Hinchliffe brings together various parts for a custom CNC machine that acts as a carbon filament winding machine for making custom carbon-fibre tubes.
10 Amazing Accessories
Power up your Raspberry Pi 5 with these incredible add-ons that enable extra functionality. We’ve got everything from USB sound cards to overpowered cooling systems.
You’ll find all this and much more in the latest edition of Raspberry Pi Official Magazine. Pick up your copy today from our store, or subscribe to get every issue delivered to your door.
Ever been in a hotel or Airbnb with rubbish Wi-Fi? Make it better with PiFi, a powerful dongle that lets you turn a Raspberry Pi into a secure wireless router – including VPN capabilities. We have five to give away, and you can enter below…
AI has become a pervasive term, heard with trepidation, excitement, and often a furrowed brow in school staffrooms. For educators, there is pressure to use AI applications for productivity: to save time, to help create lesson plans, to write reports, to answer emails, and so on. There is also a lot of interest in using AI tools in the classroom, for example to personalise or augment teaching and learning. However, without an understanding of AI technology, neither productivity nor personalisation is likely to succeed, because teachers and students alike must become critical consumers of these new ways of working in order to use them productively.
Fifty teachers and researchers share knowledge about teaching about AI.
Both in England and globally, few new AI-based curricula are being introduced, and the drive for teachers and students to learn about AI in schools is lagging, with limited initiatives supporting teachers in what to teach and how to teach it. At the Raspberry Pi Foundation and Raspberry Pi Computing Education Research Centre, we decided it was time to investigate this missing link of teaching about AI, and specifically to discover what the teachers who are leading the way on this topic are doing in their classrooms.
A day of sharing and activities in Cambridge
We organised a day-long, face-to-face symposium with educators who have already started to think deeply about teaching about AI, have started to create teaching resources, and are starting to teach about AI in their classrooms. The event was held in Cambridge, England, on 1 February 2025, at the head office of the Raspberry Pi Foundation.
Teachers collaborated and shared their knowledge about teaching about AI.
Over 150 educators and researchers applied to take part in the symposium. With only 50 places available, we followed a detailed selection protocol, whereby those with the most experience of teaching about AI in schools were selected. We also made sure that educators and researchers from different teaching contexts were represented, so that there was a good mix of phases, from primary to further education. Educators and researchers from England, Scotland, and the Republic of Ireland gathered to share their experiences. One of our main aims was to build a community of early adopters who have started along the road of classroom-based AI curriculum design and delivery.
Inspiration, examples, and expertise
To give the attendees an international perspective on the topics being discussed, Professor Matti Tedre, a visiting academic from Finland, gave a brief overview of the approach and resources his research team have developed for teaching about AI. In Finland, there is no compulsory standalone computing subject, so AI is taught within other subjects, such as history. Matti showcased tools and approaches developed in the Generation AI research programme in Finland. You can read about the Finnish research programme and Matti’s two-month visit to the Raspberry Pi Computing Education Research Centre in our blog.
A Finnish perspective to teaching about AI.
Attendees were asked to talk about, share, and analyse their teaching materials. To model how to analyse resources, Ben Garside from the Raspberry Pi Foundation demonstrated the activities using the Experience AI resources as an example. The Experience AI materials, co-created with Google DeepMind, are a suite of free classroom resources, teacher professional development, and hands-on activities designed to help teachers confidently deliver AI lessons. Aimed at learners aged 11 to 14, the materials are informed by the AI education framework developed at the Raspberry Pi Computing Education Research Centre and are grounded in real-world contexts. We’ve recently released new lessons on AI safety, and we’ve localised the resources for use in many countries across Africa, Asia, Europe, and North America.
In the morning session, Ben exemplified how to talk about and share learning objectives, concepts, and research underpinning materials using the Experience AI resources and in the afternoon he discussed how he had mapped the Experience AI materials to the UNESCO AI competency framework for students.
UNESCO provide important expertise.
Kelly Shiohira from UNESCO kindly attended our session and gave invaluable insight into the UNESCO AI competency framework for students. Kelly is one of the framework’s authors, and her presentation helped teachers understand how the materials had been developed. The attendees then used the framework to analyse their resources, to identify gaps, and to explore what progression might look like in the teaching of AI.
Teachers shared their knowledge about teaching about AI.
Throughout the day, the teachers worked together to share their experience of teaching about AI. They considered the concepts and learning objectives taught, what progression might look like, the challenges and opportunities of teaching about AI, what research informed their resources, and what research needs to be done to help improve the teaching and learning of AI.
What next?
We are now analysing the vast amount of data that we gathered from the day and we will share this with the symposium participants before we share it with a wider audience. What is clear from our symposium is that teachers have crucial insights into what should be taught to students about AI, and how, and we are greatly looking forward to continuing this journey with them.
As well as running the symposium, we are conducting academic research in this area; you can read more about it in our Annual Report and on our research webpages. We will also be consulting with teachers and AI experts. If you’d like to be sent links to these blog posts, sign up to our newsletter. If you’d like to take part in our research and potentially be interviewed about your perspectives on AI curricula, contact us at: rpcerc-enquiries@cst.cam.ac.uk
We are also sharing research, our own and that of other researchers in the field, at our research seminars. This year, our seminar series is on teaching about AI and data science in schools. Please do sign up and come along, or watch some of the presentations already delivered by the amazing research teams endeavouring to discover what we should teach about AI in schools, and how.
Cirrhosis of the liver is an extremely serious condition that requires extensive medical monitoring and often intervention. Progression of the condition can be fatal, so even if caught early it must be monitored closely. But, like most things in medicine, that gets expensive. That’s why Marb built his own DIY “micro lab” to analyze ammonia levels in blood and urine.
Disclaimer: Don’t rely on YouTube videos for your medical needs!
The severity of Marb’s condition correlates with increased ammonia production, which is common for cirrhosis of the liver. More ammonia in the blood and urine indicates progression of the disease and a need for immediate medical intervention. Marb’s micro lab lets him monitor his own ammonia levels at home.
The central detection mechanism of this micro lab relies on Berthelot’s reagent, which becomes a blue-green color in the presence of ammonia. To make use of that, the micro lab needs to properly expose the sample to Berthelot’s reagent and look at the resulting color change.
An Arduino Nano board controls the whole process through a custom PCB. The process starts with heating the sample in a vial to release ammonia vapor. The vapor travels via a tube, through a gas diffuser, into another vial containing Berthelot’s reagent. A magnetic stirrer beneath the vial mixes the gas into the reagent. A 660 nm (deep red) laser shines through that vial onto a photodiode on the other side, which the Arduino monitors through a preamp.
If a lot of the red light passes through, the reagent didn’t turn very blue and there is little to no ammonia present. If hardly any red light passes through, the reagent is very blue, indicating a high level of ammonia.
The amount of light detected, between those two extremes, provides a reasonably accurate measure of Marb’s ammonia levels, so he can keep track of his condition’s progression.
Starting this week, cars with Google built-in will get dozens of new apps to enjoy while parked.
This launch is part of our car ready mobile apps program, which we introduced last year to make it easier for developers to distribute their large-screen-optimized mobile apps to car screens.
Whether you’re waiting at a charging station or in line at school pickup, you’ll be able to stay entertained with popular streaming and gaming apps like Farm Heroes Saga and F1 TV.
Availability will begin with select Volvo cars and Polestar models, with more makes and models coming soon. Check out the full list of apps on the Google Play Store.
Every year, we take a moment to reflect on the contributions we made to the open source movement, and the many ways our community has made a huge difference. As we publish the latest Open Source Report, we are proud to say 2024 was another year of remarkable progress and achievements.
A year of growth and collaboration
At Arduino, we continued pushing the boundaries of open hardware and software.
We kept improving Arduino IDE 2, Arduino CLI, and more!
These updates ensure a more flexible and robust ecosystem for developers, educators, and makers worldwide.
But what truly makes open source thrive is the community behind it! Over the past year, Arduino users contributed 1,198 new libraries to the Library Manager (+18% YoY growth!), shared hundreds of open-source tutorials, and actively engaged in thousands of discussions and collaborations on GitHub and Project Hub. These collective efforts fuel innovation, making the Arduino ecosystem more dynamic, inclusive, and powerful than ever.
How can you contribute to open source?
We believe open-source success is built on collaboration. Every original Arduino purchase, Arduino Cloud subscription, and community contribution helps support and expand this shared ecosystem. Donations of course are also welcome, and play a great part in everything we do!
Download the 2024 Open Source Report to explore the milestones we’ve achieved together. Here’s to another year of openness, creativity, and progress!