Tag: Machine Learning

  • tinyML in Malawi: Empowering local communities through technology

    Reading Time: 3 minutes

    Dr. David Cuartielles, co-founder of Arduino, recently participated in a workshop titled “TinyML for Sustainable Development” in Zomba, organized by the International Centre for Theoretical Physics (ICTP), a category 1 UNESCO institute, and the University of Malawi. Bringing together students, educators, and professionals from Malawi and neighboring countries, as well as international experts from Brazil, Slovenia, Italy, and Sweden, the event aimed to introduce participants to tiny machine learning (tinyML) and its applications in addressing global challenges, taking cutting-edge technology to new frontiers.

    The workshop was supported by various global organizations and companies, including RAiDO, ICTP, NAiXus, UNESCO’s IRC-AI, the EDGE AI FOUNDATION, ITU’s AI-4-Good, CRAFS, and the Ministry of Education of Malawi. As part of our commitment to supporting educational initiatives that promote technological empowerment and sustainable development worldwide, Arduino contributed by donating equipment for the hands-on sessions, enabling participants to gain practical experience with embedded systems and machine learning.

    Cuartielles – who centered his session on an introduction to Nicla Vision – is a long-time advocate of providing access to advanced technologies in regions with limited resources. He believes that such communities can leapfrog traditional development stages by adopting innovative solutions tailored to their specific needs. During the workshop, participants engaged in projects focusing on agriculture, health, and environmental monitoring, demonstrating the potential of tinyML in improving local livelihoods.

    “You cannot imagine the pride of seeing things work, when students and teachers from different countries or regions join to learn about our technology, and about how they can apply it in their own education programs or everyday implementation cases,” Cuartielles says.

    For those interested in learning more about the workshop and its content, all presentation slides and resources are available online.

    The post tinyML in Malawi: Empowering local communities through technology appeared first on Arduino Blog.

    Website: LINK

  • How can we teach students about AI and data science? Join our 2025 seminar series to learn more about the topic

    Reading Time: 4 minutes

    AI, machine learning (ML), and data science infuse our daily lives, from the recommendation functionality on music apps to technologies that influence our healthcare, transport, education, defence, and more.

    Which jobs will be affected by AI, ML, and data science remains to be seen, but it is increasingly clear that students will need to learn something about these topics. There will be new concepts to be taught, new instructional approaches and assessment techniques to be used, new learning activities to be delivered, and we must not neglect the professional development required to help educators master all of this.

    An educator is helping a young learner with a coding task.

    As AI and data science are incorporated into school curricula and teaching and learning materials worldwide, we ask: What’s the research basis for these curricula, pedagogy, and resource choices?

    In 2024, we showcased researchers who are investigating how AI can be leveraged to support the teaching and learning of programming. But in 2025, we look at what should be taught about AI, ML, and data science in schools and how we should teach this. 

    Our 2025 seminar speakers — so far!

    We are very excited that we have already secured several key researchers in the field. 

    On 21 January, Shuchi Grover will kick off the seminar series by giving an important overview of AI in the K–12 landscape, including developing both AI literacy and AI ethics. Shuchi will provide concrete examples and recently developed frameworks to give educators practical insights on the topic.

    Our second session will focus on a teacher professional development (PD) programme to support the introduction of AI in Upper Bavarian schools. Franz Jetzinger from the Technical University of Munich will summarise the PD programme and share how teachers implemented the topic in their classroom, including the difficulties they encountered.

    Also from Germany, Lukas Höper and Carsten Schulte from Paderborn University will describe important research on data awareness and introduce a framework that is likely to be key for learning about data-driven technology. The pair will talk about the Data Awareness Framework and how it has been used to help learners explore, evaluate, and feel empowered about the role of data in everyday applications.

    Our April seminar will see David Weintrop from the University of Maryland introduce, with his colleagues, a data science curriculum called API Can Code, aimed at high-school students. The group will highlight the strategies needed for integrating data science learning within students’ lived experiences and fostering authentic engagement.

    Later in the year, Jesús Moreno-Leon from the University of Seville will help us consider the thorny but essential question of how we measure AI literacy. Jesús will present an assessment instrument that has been successfully implemented in several research studies involving thousands of primary and secondary education students across Spain, discussing both its strengths and limitations.

    What to expect from the seminars

    Our seminars are designed to be accessible to anyone interested in the latest research about AI education — whether you’re a teacher, educator, researcher, or simply curious. Each session begins with a presentation from our guest speaker about their latest research findings. We then move into small groups for a short discussion and exchange of ideas before coming back together for a Q&A session with the presenter. 

    An educator is helping two young learners with a coding task.

    Attendees of our 2024 series told us that they valued that the talks “explore a relevant topic in an informative way”, the “enthusiasm and inspiration”, and particularly the small-group discussions because they “are always filled with interesting and varied ideas and help to spark my own thoughts”.

    The seminars usually take place on Zoom on the first Tuesday of each month at 17:00–18:30 GMT / 12:00–13:30 ET / 9:00–10:30 PT / 18:00–19:30 CET. 

    You can find out more about each seminar and the speakers on our upcoming seminar page. And if you are unable to attend one of our talks, you can watch them from our previous seminar page, where you will also find an archive of all of our previous seminars dating back to 2020.

    How to sign up

    To attend the seminars, please register here. You will receive an email with the link to join our next Zoom call. Once signed up, you will automatically be notified of upcoming seminars. You can unsubscribe from our seminar notifications at any time.

    We hope to see you at a seminar soon!

    Website: LINK

  • Introducing new artificial intelligence and machine learning projects for Code Clubs

    Reading Time: 4 minutes

    We’re pleased to share a new collection of Code Club projects designed to introduce creators to the fascinating world of artificial intelligence (AI) and machine learning (ML). These projects bring the latest technology to your Code Club in fun and inspiring ways, making AI and ML engaging and accessible for young people. We’d like to thank Amazon Future Engineer for supporting the development of this collection.

    A man on a blue background, with question marks over his head, surrounded by various objects and animals, such as apples, planets, mice, a dinosaur and a shark.

    The value of learning about AI and ML

    By engaging with AI and ML at a young age, creators gain a clearer understanding of the capabilities and limitations of these technologies, helping them to challenge misconceptions. This early exposure also builds foundational skills that are increasingly important in various fields, preparing creators for future educational and career opportunities. Additionally, as AI and ML become more integrated into educational standards, having a strong base in these concepts will make it easier for creators to grasp more advanced topics later on.

    What’s included in this collection

    We’re excited to offer a range of AI and ML projects that feature both video tutorials and step-by-step written guides. The video tutorials are designed to guide creators through each activity at their own pace and are captioned to improve accessibility. The step-by-step written guides support creators who prefer learning through reading. 

    The projects are crafted to be flexible and engaging. The main part of each project can be completed in just a few minutes, leaving lots of time for customisation and exploration. This setup allows for short, enjoyable sessions that can easily be incorporated into Code Club activities.

    The collection is organised into two distinct paths, each offering a unique approach to learning about AI and ML:

    Machine learning with Scratch introduces foundational concepts of ML through creative and interactive projects. Creators will train models to recognise patterns and make predictions, and explore how these models can be improved with additional data.

    The AI Toolkit introduces various AI applications and technologies through hands-on projects using different platforms and tools. Creators will work with voice recognition, facial recognition, and other AI technologies, gaining a broad understanding of how AI can be applied in different contexts.

    Inclusivity is a key aspect of this collection. The projects cater to various skill levels and are offered alongside an unplugged activity, ensuring that everyone can participate, regardless of available resources. Creators will also have the opportunity to stretch themselves — they can explore advanced technologies like Adobe Firefly and practical tools for managing Ollama and Stable Diffusion models on Raspberry Pi computers.

    Project examples

    A piece of cheese is displayed on a screen. There are multiple mice around the screen.

    One of the highlights of our new collection is Chomp the cheese, which uses Scratch Lab’s experimental face recognition technology to create a game students can play with their mouth! This project offers a playful introduction to facial recognition while keeping the experience interactive and fun. 

    A big orange fish on a dark blue background, with green leaves surrounding the fish.

    Fish food uses Machine Learning for Kids, with creators training a model to control a fish using voice commands.

    An illustration of a pink brain is displayed on a screen. There are two hands next to the screen playing the 'Rock paper scissors' game.

    In Teach a machine, creators train a computer to recognise different objects such as fingers or food items. This project introduces classification in a straightforward way using the Teachable Machine platform, making the concept easy to grasp. 

    Two men on a blue background, surrounded by question marks, a big green apple and a red tomato.

    Apple vs tomato also uses Teachable Machine, but this time creators are challenged to train a model to differentiate between apples and tomatoes. Initially, the model exhibits bias due to limited data, prompting discussions on the importance of data diversity and ethical AI practices. 

    Three people on a light blue background, surrounded by music notes and a microbit.

    Dance detector allows creators to use accelerometer data from a micro:bit to train a model to recognise dance moves like Floss or Disco. This project combines physical computing with AI, helping creators explore movement recognition technology they may have experienced in familiar contexts such as video games. 

    A green dinosaur in a forest is being observed by a person hiding in the bush holding the binoculars.

    Dinosaur decision tree is an unplugged activity where creators use a paper-based branching chart to classify different types of dinosaurs. This hands-on project introduces the concept of decision-making structures, where each branch of the chart represents a choice or question leading to a different outcome. By constructing their own decision tree, creators gain a tactile understanding of how these models are used in ML to analyse data and make predictions. 

    These AI projects are designed to support young people to get hands-on with AI technologies in Code Clubs and other non-formal learning environments. Creators can also enter one of their projects into Coolest Projects by taking a short video showing their project and any code used to make it. Their creation will then be showcased in the online gallery for people all over the world to see.

    Website: LINK

  • Making fire detection more accurate with ML sensor fusion

    Reading Time: 2 minutes

    The mere presence of a flame in a controlled environment, such as a candle, is perfectly acceptable, but when tasked with determining if there is cause for alarm solely using vision data, embedded AI models can struggle with false positives. Solomon Githu’s project aims to lower the rate of incorrect detections with a multi-input sensor fusion technique wherein image and temperature data points are used by a model to alert if there’s a potentially dangerous blaze.

    Gathering both kinds of data is the Arduino TinyML Kit’s Nano 33 BLE Sense. Using the kit, Githu could capture a wide variety of images with the OV7675 camera module and temperature information with the Nano 33 BLE Sense’s onboard HTS221 sensor. After exporting a large dataset of fire/fire-less samples alongside a range of ambient temperatures, he leveraged Google Colab to train the model before importing it into the Edge Impulse Studio. There, the model’s memory footprint was further reduced to fit onto the Nano 33 BLE Sense.

    The inferencing sketch polls the camera for a new frame, and once it has been resized, its frame data, along with a new sample from the temperature sensor, are merged and sent through the model which outputs either “fire” or “safe_environment”. As detailed in Githu’s project post, the system accurately classified several scenarios in which a flame combined with elevated temperatures resulted in a positive detection.
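
    The write-up above doesn’t include the full sketch, but the fusion loop might be organized roughly as follows. This is a minimal sketch, assuming an Edge Impulse Arduino library exported under the hypothetical name fire_detection_inferencing and a flattened feature layout of resized pixels followed by one temperature value; the resizing step is simplified for illustration.

    // Minimal sensor-fusion inference loop (illustrative, not Githu's actual code).
    #include <Arduino_OV767X.h>             // OV7675 camera driver
    #include <Arduino_HTS221.h>             // onboard temperature sensor
    #include <fire_detection_inferencing.h> // hypothetical Edge Impulse export

    static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];
    static uint8_t frame[176 * 144];        // QCIF grayscale frame buffer

    void setup() {
      Serial.begin(115200);
      Camera.begin(QCIF, GRAYSCALE, 1);     // small grayscale frames keep RAM usage low
      HTS.begin();
    }

    void loop() {
      Camera.readFrame(frame);

      // Copy pixels into the feature buffer (real resizing omitted here), then
      // append the temperature reading as the final fused feature.
      for (size_t i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE - 1; i++) {
        features[i] = frame[i];
      }
      features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE - 1] = HTS.readTemperature();

      signal_t signal;
      numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

      ei_impulse_result_t result;
      if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
        for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
          Serial.print(result.classification[i].label);   // "fire" or "safe_environment"
          Serial.print(": ");
          Serial.println(result.classification[i].value);
        }
      }
      delay(500);
    }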

    The post Making fire detection more accurate with ML sensor fusion appeared first on Arduino Blog.

    Website: LINK

  • This desk lamp automatically adjusts its brightness using AI on an Arduino UNO

    Reading Time: 2 minutes

    When you hear about all of the amazing things being accomplished with artificial intelligence today, you probably assume that they require a massive amount of processing power. And while that is often true, there are machine learning models that can run on the edge — including on low-power hardware like microcontrollers. To prove that, Shovan Mondal built this AI-enhanced desk lamp.

    Mondal’s goal with this project was to demonstrate that AI (specifically machine learning) can be easy to implement on affordable and efficient hardware, such as an Arduino UNO Rev3 board. Here, the ML model adjusts the brightness of the lamp’s LED proportionally to the ambient light in the area as detected by an LDR (light-dependent resistor). The lamp body is heavy cardstock paper. 

    It would be possible to program this behavior explicitly with set thresholds or a manually created formula. But a trained ML model can do the same job without explicit instructions. The training process is simply subjecting the lamp to different lighting conditions and manually adjusting the brightness to suit them. That produces a series of data pairs consisting of the LDR and LED brightness values. 

    In CSV format, that data can be used to train a linear regression model with scikit-learn. Training produces a formula and coefficient values that reproduce the relationship seen in the training set, and the output of that formula can then set the LED brightness.

    In this case, that formula is very simple, because it only has to account for two variables with a direct relationship. But much more complex relationships are possible, as are ML models that perform tasks more challenging than linear regression.
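
    As a rough sketch of that last step, the learned slope and intercept can simply be hard-coded into the UNO sketch; the coefficient values and pin choices below are placeholders rather than Mondal’s actual numbers.

    // Applying a trained linear regression on the UNO: brightness = m * ldr + b.
    // The coefficients are placeholders; they would come from the scikit-learn
    // model fitted on the logged (LDR reading, LED brightness) pairs.
    const int LDR_PIN = A0;
    const int LED_PIN = 9;            // PWM-capable pin

    const float M = -0.22f;           // placeholder slope from the regression
    const float B = 240.0f;           // placeholder intercept from the regression

    void setup() {
      pinMode(LED_PIN, OUTPUT);
    }

    void loop() {
      int ldr = analogRead(LDR_PIN);               // 0-1023 ambient light reading
      float brightness = M * ldr + B;              // the regression formula
      brightness = constrain(brightness, 0, 255);  // clamp to the valid PWM range
      analogWrite(LED_PIN, (int)brightness);
      delay(100);
    }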

    The post This desk lamp automatically adjusts its brightness using AI on an Arduino UNO appeared first on Arduino Blog.

    Website: LINK

  • Machine learning makes fabric buttons practical

    Reading Time: 2 minutes

    The entire tech industry is desperate for a practical wearable HMI (Human Machine Interface) right now. The most newsworthy devices at CES this year were the Rabbit R1 and the Humane AI Pin, both of which are attempts to streamline wearable interfaces with and for AI. Both have numerous drawbacks, as do most other approaches. What the world really needs is an affordable, practical, and unobtrusive solution, and North Carolina State University researchers may have found the answer in machine learning-optimized fabric buttons.

    It is, of course, possible to adhere a conventional button to fabric. But by making the button itself from fabric, these researchers have improved comfort, lowered costs, and introduced a lot more flexibility — both literally and metaphorically. These are triboelectric touch sensors, which detect the amount of force exerted on them by measuring the charge generated between two oppositely charged layers.

    But there is a problem with this approach: the measured values vary dramatically based on usage, environmental conditions, manufacturing tolerances, and physical wear. The fabric button on one shirt sleeve may present completely different readings than another. If this were a simple binary button, it wouldn’t be as challenging an issue. But the whole point of this sensor type is to provide a one-dimensional scale corresponding to the pressure exerted, so consistency is important.

    Because achieving physical consistency isn’t practical, the team turned to machine learning. A TensorFlow Lite for Microcontrollers machine learning model, running on an Arduino Nano ESP32 board, interprets the readings from the sensors. It is then able to differentiate between several interactions: single clicks, double clicks, triple clicks, single slides, double slides, and long presses.

    Even if the exact readings change between sensors (or the same sensor over time), the patterns are still recognizable to the machine learning model. This would make it practical to integrate fabric buttons into inexpensive garments and users could interact with their devices through those interfaces.
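
    The summary above does not include code, but the general shape of a TensorFlow Lite for Microcontrollers sketch on the Nano ESP32 is roughly the following; the model header, arena size, input window, and label list are assumptions made for illustration.

    // Rough shape of a TFLite Micro gesture classifier on the Nano ESP32.
    // "button_model.h" (the trained model as a C array), the arena size, and the
    // label list are illustrative assumptions, not the researchers' actual code.
    #include <TensorFlowLite.h>
    #include "tensorflow/lite/micro/all_ops_resolver.h"
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/schema/schema_generated.h"
    #include "button_model.h"   // hypothetical: const unsigned char g_button_model[]

    constexpr int kArenaSize = 16 * 1024;
    alignas(16) static uint8_t tensor_arena[kArenaSize];
    static tflite::AllOpsResolver resolver;
    static tflite::MicroInterpreter* interpreter = nullptr;

    const char* kLabels[] = {"single_click", "double_click", "triple_click",
                             "single_slide", "double_slide", "long_press"};

    void setup() {
      Serial.begin(115200);
      const tflite::Model* model = tflite::GetModel(g_button_model);
      static tflite::MicroInterpreter static_interpreter(model, resolver,
                                                         tensor_arena, kArenaSize);
      interpreter = &static_interpreter;
      interpreter->AllocateTensors();
    }

    void loop() {
      // Fill the input tensor with a window of raw triboelectric readings.
      TfLiteTensor* input = interpreter->input(0);
      for (int i = 0; i < input->dims->data[1]; i++) {
        input->data.f[i] = analogRead(A0) / 4095.0f;   // ESP32 ADC is 12-bit
        delay(5);
      }
      if (interpreter->Invoke() != kTfLiteOk) return;

      // Report the label with the highest score.
      TfLiteTensor* output = interpreter->output(0);
      int best = 0;
      for (int i = 1; i < output->dims->data[1]; i++) {
        if (output->data.f[i] > output->data.f[best]) best = i;
      }
      Serial.println(kLabels[best]);
    }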

    The researchers demonstrated the concept with mobile apps and even a game. More details can be found in their paper here.

    Image credit: Y. Chen et al.

    The post Machine learning makes fabric buttons practical appeared first on Arduino Blog.

    Website: LINK

  • Classify nearby annoyances with this sound monitoring device

    Reading Time: 2 minutes

    Soon after a police station opened near his house, Christopher Cooper noticed a substantial increase in the amount of emergency vehicle traffic and their associated noises even though local officials had promised that it would not be disruptive. But rather than write down every occurrence to track the volume of disturbances, he came up with a connected audio-classifying device that can automatically note the time and type of sound for later analysis.

    Categorizing each sound was done by leveraging Edge Impulse and an Arduino Nano 33 BLE Sense. After training a model and deploying it within a sketch, the Nano will continually listen for new noises through its onboard microphone, run an inference, and then output the label and confidence over UART serial. Reading this stream of data is an ESP32 Dev Kit, which displays every entry in a list on a useful GUI. The screen allows users to select rows, view more detailed information, and even modify the category if needed.
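
    The exact serial format isn’t spelled out above, but the receiving side can be kept very simple. A minimal sketch for the ESP32, assuming the Nano sends one “label,confidence” pair per line and that UART2 on pins 16/17 is used, might look like this:

    // ESP32 side: read "label,confidence" lines sent by the Nano over UART.
    // The line format and the Serial2/GPIO16-17 wiring are illustrative assumptions.
    #include <Arduino.h>

    void setup() {
      Serial.begin(115200);                       // USB debug output
      Serial2.begin(115200, SERIAL_8N1, 16, 17);  // RX=16, TX=17 to the Nano 33 BLE Sense
    }

    void loop() {
      if (Serial2.available()) {
        String line = Serial2.readStringUntil('\n');   // e.g. "siren,0.92"
        int comma = line.indexOf(',');
        if (comma > 0) {
          String label = line.substring(0, comma);
          float confidence = line.substring(comma + 1).toFloat();
          Serial.printf("Detected %s (%.2f) at %lu ms\n",
                        label.c_str(), confidence, millis());
          // In the real project this entry is added to the on-screen list,
          // logged to the SD card, and served by the built-in web server.
        }
      }
    }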

    Going beyond the hardware aspect, Cooper’s project also includes a web server running on the ESP32 that can show the logs within a browser, and users can even connect an SD card to have automated file entries created. For more information about this project, you can read Cooper’s write-up here on Hackster.io.

    [youtube https://www.youtube.com/watch?v=UYE-HdRBQnE?feature=oembed&w=500&h=281]

    The post Classify nearby annoyances with this sound monitoring device appeared first on Arduino Blog.

    Website: LINK

  • The Experience AI Challenge: Find out all you need to know

    Reading Time: 3 minutes

    We’re really excited to see that Experience AI Challenge mentors are starting to submit AI projects created by young people. There’s still time for you to get involved in the Challenge: the submission deadline is 24 May 2024. 

    The Experience AI Challenge banner.

    If you want to find out more about the Challenge, join our live webinar on Wednesday 3 April at 15:30 BST on our YouTube channel.

    [youtube https://www.youtube.com/watch?v=kH3BI70M0e0?feature=oembed&w=500&h=281]

    During the webinar, you’ll have the chance to:

    • Ask your questions live. Get any Challenge-related queries answered by us in real time. Whether you need clarification on any part of the Challenge or just want advice on your young people’s project(s), this is your chance to ask.
    • Get introduced to the submission process. Understand the steps of submitting projects to the Challenge. We’ll walk you through the requirements and offer tips for making your young people’s submission stand out.
    • Learn more about our project feedback. Find out how we will deliver our personalised feedback on submitted projects (UK only).
    • Find out how we will recognise your creators’ achievements. Learn more about our showcase event taking place in July, and the certificates and posters we’re creating for you and your young people to celebrate submitting your projects.

    Subscribe to our YouTube channel and press the ‘Notify me’ button to receive a notification when we go live. 

    Why take part? 

    The Experience AI Challenge, created by the Raspberry Pi Foundation in collaboration with Google DeepMind, guides young people under the age of 18, and their mentors, through the exciting process of creating their own unique artificial intelligence (AI) project. Participation is completely free.

    Central to the Challenge is the concept of project-based learning, a hands-on approach that gets learners working together, thinking critically, and engaging deeply with the materials. 

    A teacher and three students in a classroom. The teacher is pointing at a computer screen.

    In the Challenge, young people are encouraged to seek out real-world problems and create possible AI-based solutions. By taking part, they become problem solvers, thinkers, and innovators. 

    And to every young person based in the UK who creates a project for the Challenge, we will provide personalised feedback and a certificate of achievement, in recognition of their hard work and creativity. Any projects considered outstanding by our experts will be selected as favourites, and their creators will be invited to a showcase event in the summer.

    Resources ready for your classroom or club

    You don’t need to be an AI expert to bring this Challenge to life in your classroom or coding club. Whether you’re introducing AI for the first time or looking to deepen your young people’s knowledge, the Challenge’s step-by-step resource pack covers all you and your young people need, from the basics of AI, to training a machine learning model, to creating a project in Scratch.  

    In the resource pack, you will find:

    • The mentor guide contains all you need to set up and run the Challenge with your young people 
    • The creator guide supports young people throughout the Challenge and contains talking points to help with planning and designing projects 
    • The blueprint workbook helps creators keep track of their inspiration, ideas, and plans during the Challenge 

    The pack offers a safety net of scaffolding, support, and troubleshooting advice. 

    Find out more about the Experience AI Challenge

    By bringing the Experience AI Challenge to young people, you’re inspiring the next generation of innovators, thinkers, and creators. The Challenge encourages young people to look beyond the code, to the impact of their creations, and to the possibilities of the future.

    You can find out more about the Experience AI Challenge, and download the resource pack, from the Experience AI website.

    Website: LINK

  • Teaching about AI explainability

    Reading Time: 6 minutes

    In the rapidly evolving digital landscape, students are increasingly interacting with AI-powered applications when listening to music, writing assignments, and shopping online. As educators, it’s our responsibility to equip them with the skills to critically evaluate these technologies.

    A woman teacher helps a young person with a coding project.

    A key aspect of this is understanding ‘explainability’ in AI and machine learning (ML) systems. The explainability of a model is how easy it is to ‘explain’ how a particular output was generated. Imagine having a job application rejected by an AI model, or facial recognition technology failing to recognise you — you would want to know why.

    Two teenage girls do coding activities at their laptops in a classroom.

    Establishing standards for explainability is crucial. Otherwise we risk creating a world where decisions impacting our lives are made by opaque systems we don’t understand. Learning about explainability is key for students to develop digital literacy, enabling them to navigate the digital world with informed awareness and critical thinking.

    Why AI explainability is important

    AI models can have a significant impact on people’s lives in various ways. For instance, if a model determines a child’s exam results, parents and teachers would want to understand the reasoning behind it.

    Two learners sharing a laptop in a coding session.

    Artists might want to know if their creative works have been used to train a model and could be at risk of plagiarism. Likewise, coders will want to know if their code is being generated and used by others without their knowledge or consent. If you came across an AI-generated artwork that features a face resembling yours, it’s natural to want to understand how a photo of you was incorporated into the training data. 

    There will also be instances where a model seems to be working for some people but is inaccurate for a certain demographic of users. This happened with Twitter’s (now X’s) face detection model in photos; the model didn’t work as well for people with darker skin tones, who found that it could not detect their faces as effectively as their lighter-skinned friends and family. Explainability allows us not only to understand but also to challenge the outputs of a model if they are found to be unfair.

    In essence, explainability is about accountability, transparency, and fairness, which are vital lessons for children as they grow up in an increasingly digital world.

    Routes to AI explainability

    Some models, like decision trees, regression curves, and clustering, have an in-built level of explainability. There is a visual way to represent these models, so we can pretty accurately follow the logic implemented by the model to arrive at a particular output.

    A decision tree works like a flowchart, and you can follow the conditions used to arrive at a prediction. Regression curves can be shown on a graph to understand why a particular piece of data was treated the way it was, although this wouldn’t give us insight into exactly why the curve was placed at that point. Clustering is a way of collecting similar pieces of data together to create groups (or clusters) with which we can interrogate the model to determine which characteristics were used to create the groupings.

    A decision tree that classifies animals based on their characteristics; you can follow these models like a flowchart
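
    To make the flowchart analogy concrete, a small decision tree like the one pictured can be written directly as nested conditions, so the path to any prediction can be read straight from the code. The features and categories below are invented for illustration.

    // A hand-written decision tree: each nested condition is one branch of the
    // flowchart. The animal features and labels are made up for this example.
    #include <iostream>
    #include <string>

    std::string classifyAnimal(bool hasFeathers, bool livesInWater, int legs) {
      if (hasFeathers) {
        return "bird";
      }
      if (livesInWater) {
        return (legs == 0) ? "fish" : "amphibian";
      }
      return (legs == 4) ? "four-legged mammal" : "other mammal";
    }

    int main() {
      // Follow the branches: no feathers -> not in water -> four legs -> mammal.
      std::cout << classifyAnimal(false, false, 4) << "\n";   // four-legged mammal
      std::cout << classifyAnimal(true, false, 2) << "\n";    // bird
      return 0;
    }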

    However, the more powerful the model, the less explainable it tends to be. Neural networks, for instance, are notoriously hard to understand — even for their developers. The networks used to generate images or text can contain millions of nodes spread across thousands of layers. Trying to work out what any individual node or layer is doing to the data is extremely difficult.

    Learners in a computing classroom.

    Regardless of the complexity, it is still vital that developers find a way of providing essential information to anyone looking to use their models in an application or to a consumer who might be negatively impacted by the use of their model.

    Model cards for AI models

    One suggested strategy to add transparency to these models is using model cards. When you buy an item of food in a supermarket, you can look at the packaging and find all sorts of nutritional information, such as the ingredients, macronutrients, allergens they may contain, and recommended serving sizes. This information is there to help inform consumers about the choices they are making.

    Model cards attempt to do the same thing for ML models, providing essential information to developers and users of a model so they can make informed choices about whether or not they want to use it.

    Model cards include details such as the developer of the model, the training data used, the accuracy across diverse groups of people, and any limitations the developers uncovered in testing.

    A real-world example of a model card is Google’s Face Detection model card. This details the model’s purpose, architecture, performance across various demographics, and any known limitations of their model. This information helps developers who might want to use the model to assess whether it is fit for their purpose.

    Transparency and accountability in AI

    As the world settles into the new reality of having the amazing power of AI models at our disposal for almost any task, we must teach young people about the importance of transparency and responsibility. 

    An educator points to an image on a student's computer screen.

    As a society, we need to have hard discussions about where and when we are comfortable implementing models and the consequences they might have for different groups of people. By teaching students about explainability, we are not only educating them about the workings of these technologies, but also teaching them to expect transparency as they grow to be future consumers or even developers of AI technology.

    Most importantly, model cards should be accessible to as many people as possible — taking this information and presenting it in a clear and understandable way. Model cards are a great way for you to show your students what information is important for people to know about an AI model and why they might want to know it. Model cards can help students understand the importance of transparency and accountability in AI.  


    This article also appears in issue 22 of Hello World, which is all about teaching and AI. Download your free PDF copy now.

    If you’re an educator, you can use our free Experience AI Lessons to teach your learners the basics of how AI works, whatever your subject area.

    Website: LINK

  • Classifying audio on the GIGA R1 WiFi from purely synthetic data

    Reading Time: 2 minutes

    One of the main difficulties that people encounter when trying to build their edge ML models is gathering a large, yet simultaneously diverse, dataset. Audio models normally require setting up a microphone, capturing long sequences of sounds, and then manually removing bad data from the resulting files. Shakhizat Nurgaliyev’s project, however, eliminates the need for this arduous process by taking advantage of generative models to produce the dataset artificially.

    In order to go from three audio classes (speech, music, and background noise) to a complete dataset, Nurgaliyev wrote a simple prompt for ChatGPT that gave directions for creating a total of 300 detailed audio descriptions. After this, he grabbed an NVIDIA Jetson AGX Orin Developer Kit and loaded Meta’s generative AudioCraft model, which allowed him to pass in the previously made audio prompts and receive sound snippets in return.

    The final steps involved creating an Edge Impulse audio classification project, uploading the generated samples, and designing an Impulse that leveraged the MFE audio block and a Keras classifier model. Once an Arduino library had been built, Nurgaliyev loaded it, along with a simple sketch, onto an Arduino GIGA R1 WiFi board that continually listened for new audio data, performed classification, and displayed the label on the GIGA R1’s Display Shield screen.

    [youtube https://www.youtube.com/watch?v=SMixY8lOAN4?feature=oembed&w=500&h=281]

    To read more about this project, you can visit its write-up here on Hackster.io.

    The post Classifying audio on the GIGA R1 WiFi from purely synthetic data appeared first on Arduino Blog.

    Website: LINK

  • Improving comfort and energy efficiency in buildings with automated windows and blinds

    Reading Time: 2 minutes

    When dealing with indoor climate controls, there are several variables to consider, such as the outside weather, people’s tolerance to hot or cold temperatures, and the desired level of energy savings. Windows can make this extra challenging, as they let in large amounts of light/heat and can create poorly insulated regions, which is why Jallson Suryo developed a prototype that aims to balance these needs automatically through edge AI techniques.

    Suryo’s smart building ventilation system utilizes two separate boards, with an Arduino Nano 33 BLE Sense handling environmental sensor fusion and a Nicla Voice listening for certain ambient sounds. Rain and thunder noises were uploaded from an existing dataset, split and labeled accordingly, and then used to train a Syntiant audio classification model for the Nicla Voice’s NDP120 processor. Meanwhile, weather and ambient light data was gathered using the Nano’s onboard sensors and combined into time-series samples with labels for sunny/cloudy, humid, comfortable, and dry conditions.

    After deploying the boards’ respective classification models, Suryo added some additional code that writes new I2C data from the Nicla Voice to the Nano to indicate whether rain/thunderstorm sounds are present. If they are, the Nano can automatically close the window via servo motors, while other environmental factors set the position of the blinds. With this multi-sensor technique, a higher level of accuracy can be achieved for more precise control over a building’s windows, helping to lower HVAC costs.
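
    The write-up doesn’t reproduce the sketches, but the Nano’s side of that I2C link could be sketched roughly as below; the peripheral address, one-byte message format, and servo pin are assumptions for illustration.

    // Nano 33 BLE Sense side: receive a rain/thunder flag from the Nicla Voice
    // over I2C and close the window servo when bad weather is detected.
    // The address (0x08), message format, and servo pin are assumptions.
    #include <Wire.h>
    #include <Servo.h>

    const int WINDOW_SERVO_PIN = 9;
    Servo windowServo;
    volatile bool rainDetected = false;

    void onI2CReceive(int numBytes) {
      while (Wire.available()) {
        rainDetected = (Wire.read() == 1);   // 1 = rain/thunder heard, 0 = quiet
      }
    }

    void setup() {
      Wire.begin(0x08);                // join the I2C bus as a peripheral at 0x08
      Wire.onReceive(onI2CReceive);
      windowServo.attach(WINDOW_SERVO_PIN);
    }

    void loop() {
      // Close the window on rain; otherwise leave it open and let the environmental
      // model position the blinds (omitted here).
      windowServo.write(rainDetected ? 0 : 90);
      delay(200);
    }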

    [youtube https://www.youtube.com/watch?v=mqk1IRz76HM?feature=oembed&w=500&h=281]

    More information about Suryo’s project can be found here on its Edge Impulse docs page.

    The post Improving comfort and energy efficiency in buildings with automated windows and blinds appeared first on Arduino Blog.

    Website: LINK

  • Teaching an Arduino UNO R4-powered robot to navigate obstacles autonomously

    Reading Time: 2 minutes

    The rapid rise of edge AI capabilities on embedded targets has proven that relatively low-resource microcontrollers are capable of some incredible things. And following the recent release of the Arduino UNO R4 with its Renesas RA4M1 processor, the ceiling has gotten even higher as YouTuber Nikodem Bartnik has demonstrated with his lidar-equipped mobile robot.

    Bartnik’s project started with a simple question of whether it’s possible to teach a basic robot how to make its way around obstacles using only lidar instead of the more resource-intensive computer vision techniques employed by most other platforms. The chassis and hardware, including two DC motors, an UNO R4 Minima, a Bluetooth® module, and SD card, were constructed according to Open Robotic Platform (ORP) rules so that others can easily replicate and extend its functionality. After driving through a series of courses in order to collect a point cloud from the spinning lidar sensor, Bartnik imported the data and performed a few transformations to greatly minify the classification model.

    Once trained, the model was exported with help from the micromlgen Python package and loaded onto the UNO R4. The setup enables the incoming lidar data to be classified as the direction in which the robot should travel, and according to Bartnik’s experiments, this approach worked surprisingly well. Initially, there were a few issues when navigating corners and traveling through a figure-eight track, but additional training data solved them and allowed the vehicle to overcome a completely novel course at maximum speed.
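
    The header that micromlgen generates is plain C++ that can be dropped straight into a sketch. A simplified sketch of how it might be used is shown below; the exported class name, the number of lidar features, and the helper functions are assumptions, not Bartnik’s actual code.

    // Using a micromlgen-exported classifier on the UNO R4 Minima (simplified).
    // "model.h", the Eloquent::ML::Port::RandomForest class name, the feature
    // count, and the helper functions are illustrative assumptions.
    #include "model.h"

    Eloquent::ML::Port::RandomForest classifier;  // class name depends on the trained model

    const int NUM_FEATURES = 36;                  // e.g. lidar distances binned into sectors
    float features[NUM_FEATURES];

    // Placeholder: in the real robot this reads one revolution of the spinning
    // lidar and bins the distances into NUM_FEATURES sectors.
    void readLidarScan(float* buf, int n) {
      for (int i = 0; i < n; i++) buf[i] = 0.0f;
    }

    void driveInDirection(int direction) {
      // 0 = forward, 1 = left, 2 = right: set the two DC motor outputs accordingly.
    }

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      readLidarScan(features, NUM_FEATURES);
      int direction = classifier.predict(features);  // class index from the exported model
      Serial.println(direction);
      driveInDirection(direction);
    }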

    [youtube https://www.youtube.com/watch?v=PdSDhdciSpE?feature=oembed&w=500&h=281]

    The post Teaching an Arduino UNO R4-powered robot to navigate obstacles autonomously appeared first on Arduino Blog.

    Website: LINK

  • Nothin’ but (neural) net: Track your basketball score with a Nano 33 BLE Sense

    Reading Time: 2 minutes

    When playing a short game of basketball, few people enjoy having to consciously track their number of successful throws. Yet when it comes to automation, nearly all systems rely on infrared or visual proximity detection as a way to determine when a shot has gone through the basket versus missed. This is what inspired one team from the University of Ljubljana to create a small edge ML-powered device that can be suspended from the net with a pair of zip ties for real-time scorekeeping.

    After collecting a total of 137 accelerometer samples via an Arduino Nano 33 BLE Sense and labeling them as either a miss, a score, or nothing within the Edge Impulse Studio, the team trained a classification model and reached an accuracy of 84.6% on real-world test data. Getting the classification results from the device to somewhere readable is handled by the Nano’s onboard BLE server. It provides two services, with the first for reporting the current battery level and the second for sending score data.
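
    With the ArduinoBLE library, exposing those two services takes only a few lines. In this minimal sketch the standard Battery Service UUIDs are real, while the custom score service UUIDs and characteristic layout are placeholders rather than the team’s own.

    // Advertising a standard Battery Service plus a custom score service.
    // The 128-bit UUIDs for the score service are placeholders.
    #include <ArduinoBLE.h>

    BLEService batteryService("180F");                               // standard Battery Service
    BLEUnsignedCharCharacteristic batteryLevel("2A19", BLERead | BLENotify);

    BLEService scoreService("19B10000-E8F2-537E-4F6C-D104768A1214"); // placeholder UUID
    BLEUnsignedCharCharacteristic lastShot("19B10001-E8F2-537E-4F6C-D104768A1214",
                                           BLERead | BLENotify);     // 0 = miss, 1 = score

    void setup() {
      BLE.begin();
      BLE.setLocalName("BasketballScorer");
      BLE.setAdvertisedService(scoreService);

      batteryService.addCharacteristic(batteryLevel);
      scoreService.addCharacteristic(lastShot);
      BLE.addService(batteryService);
      BLE.addService(scoreService);

      batteryLevel.writeValue(100);
      BLE.advertise();
    }

    void loop() {
      BLE.poll();
      // After each inference the classifier's verdict would be written here so the
      // phone app is notified, e.g. lastShot.writeValue(1); for a made shot.
    }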

    Once the firmware had been deployed, the last step involved building a mobile application to view the relevant information. The app allows users to connect to the basketball scoring device, check if any new data has been received, and then parse/display the new values onscreen.

    [youtube https://www.youtube.com/watch?v=93X_wOuFTdY?feature=oembed&w=500&h=281]

    To read more about this project, you can head over to its write-up on Hackster.io.

    The post Nothin’ but (neural) net: Track your basketball score with a Nano 33 BLE Sense appeared first on Arduino Blog.

    Website: LINK

  • Helping robot dogs feel through their paws

    Reading Time: 2 minutes

    Your dog has nerve endings covering its entire body, giving it a sense of touch. It can feel the ground through its paws and use that information to gain better traction or detect harmful terrain. For robots to perform as well as their biological counterparts, they need a similar level of sensory input. In pursuit of that goal, the Autonomous Robots Lab designed TRACEPaw for legged robots.

    TRACEPaw (Terrain Recognition And Contact force Estimation Paw) is a sensorized foot for robot dogs that includes all of the hardware necessary to calculate force and classify terrain. Most systems like this use direct sensor readings, such as those from force sensors. But TRACEPaw is unique in that it uses indirect data to infer this information. The actual foot is a deformable silicone hemisphere. A camera looks at that and calculates the force based on the deformation it sees. In a similar way, a microphone listens to the sound of contact and uses that to judge the type of terrain, like gravel or dirt.

    To keep TRACEPaw self-contained, Autonomous Robots Lab chose to utilize an Arduino Nicla Vision board. That has an integrated camera, microphone, six-axis motion sensor, and enough processing power for onboard machine learning. Using OpenMV and TensorFlow Lite, TRACEPaw can estimate the force on the silicone pad based on how much it deforms during a step. It can also analyze the audio signal from the microphone to guess the terrain, as the silicone pad sounds different when touching asphalt than it does when touching loose soil.

    More details on the project are available on GitHub.

    The post Helping robot dogs feel through their paws appeared first on Arduino Blog.

    Website: LINK

  • This smart diaper knows when it is ready to be changed

    Reading Time: 2 minutes

    The traditional method for changing a diaper starts when someone smells or feels that the diaper has been soiled, and while it isn’t the greatest process, removing the soiled diaper as soon as possible is important for avoiding rashes and infections. Justin Lutz has created an intelligent solution to this situation by designing a small device that alerts people over Bluetooth® when the diaper is ready to be changed.

    Because a dirty diaper gives off volatile organic compounds (VOCs) and small particulates, Lutz realized he could use the Arduino Nicla Sense ME’s built-in BME688 sensor, which can measure VOCs, temperature/humidity, and air quality. After gathering 29 minutes of gas and air quality measurements in the Edge Impulse Studio for both clean and soiled diapers, he trained a classification model for 300 epochs, resulting in a model with 95% accuracy.

    Based on his prior experience with the Nicla Sense ME’s BLE capabilities and MIT App Inventor, Lutz used the two to devise a small gadget that wirelessly connects to a phone app so it can send notifications when it’s time for a new diaper.

    [youtube https://www.youtube.com/watch?v=Q1BknhEv9cQ?feature=oembed&w=500&h=281]

    To read more about this project, you can check out Lutz’s write-up here on the Edge Impulse docs page.

    The post This smart diaper knows when it is ready to be changed appeared first on Arduino Blog.

    Website: LINK

  • This Nicla Vision-based fire detector was trained entirely on synthetic data

    Reading Time: 2 minutes

    With the planet warming due to climate change and prolonged droughts greatly increasing the chances of wildfires, being able to quickly detect when a fire has broken out is vital for responding while it is still at a containable stage. But one major hurdle to collecting datasets for machine learning models on these types of events is that they can be quite sporadic. In his proof-of-concept system, engineer Shakhizat Nurgaliyev shows how he leveraged NVIDIA Omniverse Replicator to create an entirely generated dataset and then deploy a model trained on that data to an Arduino Nicla Vision board.

    The project started out as a simple fire animation inside of Omniverse, which was soon followed by a Python script that produces a pair of virtual cameras and randomizes the ground plane before capturing images. Once enough images had been created, Nurgaliyev utilized the zero-shot object detection application Grounding DINO to automatically draw bounding boxes around the virtual flames. Lastly, each image was brought into an Edge Impulse project and used to develop a FOMO-based object detection model.

    By taking this approach, the model achieved an F1 score of nearly 87% while needing a maximum of only 239KB of RAM and a mere 56KB of flash storage. Once deployed as an OpenMV library, Nurgaliyev shows in his video below how the MicroPython sketch running on a Nicla Vision within the OpenMV IDE detects and draws bounding boxes around flames. More information about this system can be found here on Hackster.io.

    [youtube https://www.youtube.com/watch?v=OFCwgWvivHo?feature=oembed&w=500&h=375]

    The post This Nicla Vision-based fire detector was trained entirely on synthetic data appeared first on Arduino Blog.

    Website: LINK

  • Predicting soccer games with ML on the UNO R4 Minima

    Reading Time: 2 minutes

    Based on the Renesas RA4M1 microcontroller, the new Arduino UNO R4 boasts 16x the RAM, 8x the flash, and a much faster CPU compared to the previous UNO R3. This means that unlike its predecessor, the R4 is capable of running machine learning at the edge to perform inferencing of incoming data. With this fact in mind, Roni Bandini wanted to leverage his UNO R4 Minima by training a model to predict the likelihood of a FIFA team winning their match.

    Bandini began his project by first downloading a dataset containing historical FIFA matches, including the country, team, opposing team, ranking, and neutral location. Next, the data was added to Edge Impulse as a time-series dataset which feeds into a Keras classifier ML block and produces “win” and “lose/draw” values. Once trained, the model achieved an accuracy of 69% with a loss value of 0.58.

    Inputting the desired country and rank to make a prediction is done by making selections on a DFRobot LCD shield, and these values are then used to populate the input tensor for the model before it gets invoked and returns its classification results. Bandini’s device demonstrates how much more powerful the Arduino UNO R4 is than the R3, and additional information on the project can be found here in his post.
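
    Concretely, that amounts to copying the selections into the model’s raw feature buffer and calling the Edge Impulse classifier. A stripped-down sketch is shown below, with the library name, feature order, and placeholder values being assumptions rather than Bandini’s actual code.

    // Filling the Edge Impulse feature buffer from the user's selections (simplified).
    // "fifa_prediction_inferencing" and the feature layout are assumptions.
    #include <fifa_prediction_inferencing.h>

    float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = {0};

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      // In the real device these values come from the DFRobot LCD shield buttons.
      features[0] = 10;   // selected team (placeholder encoding)
      features[1] = 32;   // opposing team
      features[2] = 5;    // world ranking
      features[3] = 0;    // neutral location flag

      signal_t signal;
      numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

      ei_impulse_result_t result;
      if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
        for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
          Serial.print(result.classification[i].label);   // "win" or "lose/draw"
          Serial.print(": ");
          Serial.println(result.classification[i].value);
        }
      }
      delay(5000);
    }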

    [youtube https://www.youtube.com/watch?v=dYTukgY9kEU?feature=oembed&w=500&h=281]

    The post Predicting soccer games with ML on the UNO R4 Minima appeared first on Arduino Blog.

    Website: LINK

  • Intelligently control an HVAC system using the Arduino Nicla Vision

    Reading Time: 2 minutes

    Shortly after setting the desired temperature of a room, a building’s HVAC system will engage and work to either raise or lower the ambient temperature to match. While this approach generally works well to control the local environment, the strategy also leads to a tremendous waste of energy since it is unable to easily adapt to changes in occupancy or activity. In contrast, Jallson Suryo’s smart HVAC project aims to tailor the amount of cooling to each zone individually by leveraging computer vision to track certain metrics.

    Suryo developed his proof of concept as a 1:50 scale model of a plausible office space, complete with four separate rooms and a plethora of human figurines. Employing Edge Impulse and a smartphone, Suryo captured 79 images and drew bounding boxes around each person for use in a FOMO-based object detection model. After training, he deployed the OpenMV firmware onto an Arduino Nicla Vision board and was able to view detections in real time.

    The last step involved building an Arduino library containing the model and integrating it into a sketch that communicates with an Arduino Nano peripheral board over I2C by relaying the number of people per quadrant. Based on this data, the Nano dynamically adjusts one of four 5V DC fans to adjust the temperature while displaying relevant information on an OLED screen. To see how this POC works in more detail, you can visit Suryo’s write-up on the Edge Impulse docs page.

    The post Intelligently control an HVAC system using the Arduino Nicla Vision appeared first on Arduino Blog.

    Website: LINK

  • Meet Arduino Pro at tinyML EMEA Innovation Forum 2023

    Reading Time: 3 minutes

    On June 26th-28th, the Arduino Pro team will be in Amsterdam for the tinyML EMEA Innovation Forum – one of the year’s major events for the community where AI models meet agile, low-power devices.

    This is an exciting time for companies like Arduino and anyone interested in accelerating the adoption of tiny machine learning: technologies, products, and ideas are converging into a worldwide phenomenon with incredible potential – and countless applications already.

    At the summit, our team will indeed present a selection of demos that leverage tinyML to create useful solutions in a variety of industries and contexts. For example, we will present:

    • A fan anomaly detection system based on the Nicla Sense ME. In this solution developed with SensiML, the Nicla module leverages its integrated accelerometer to constantly measure the vibrations generated by a computer fan. Thanks to a trained model, condition monitoring turns into anomaly detection – the system is able to determine whether the fan is on or off, notify users of any shocks, and even alert them if its super precise and efficient sensor detects sub-optimal airflow.
    • A vineyard pest monitoring system with the Nicla Vision and MKR WAN 1310. Machine vision works at the service of smart agriculture in this solution: even in the most remote field, a pheromone is used to attract insects inside a case lined with glue traps. The goal is not to capture all the insects, but to use a Nicla Vision module to take a snapshot of the captured bugs, recognize the ones that pose a real threat, and send updated data on how many specimens were found. New-generation farmers can thus schedule interventions against pests as soon as needed, before the insects get out of control and cause damage to the crops. Leveraging LoRa® connectivity, this application is both low-power and high-efficiency.
    • An energy monitoring-based anomaly detection solution for DC motors, with the Opta. This application developed with Edge Impulse leverages an Opta WiFi microPLC to easily implement industrial-level, real-time monitoring and fault detection – great to enable predictive maintenance, reducing downtime and overall costs. A Hall effect current sensor is attached in series with the supply line of the DC motor to acquire real-time data, which is then analyzed using ML algorithms to identify patterns and trends that might indicate faulty operation. The DC motor is expected to be in one of two statuses – ON or OFF – but different conditions can be simulated with the potentiometer. When unexpected electric consumption is shown, the Opta WiFi detects the anomaly and turns on a warning LED.

    The Arduino Pro team is looking forward to meeting customers and partners in Amsterdam – championing open source, accessibility, and flexibility in industrial-grade solutions at the tinyML EMEA Innovation Forum!

    The post Meet Arduino Pro at tinyML EMEA Innovation Forum 2023 appeared first on Arduino Blog.

    Website: LINK

  • How we’re learning to explain AI terms for young people and educators

    Reading Time: 6 minutes

    What do we talk about when we talk about artificial intelligence (AI)? It’s becoming a cliche to point out that, because the term “AI” is used to describe so many different things nowadays, it’s difficult to know straight away what anyone means when they say “AI”. However, it’s true that without a shared understanding of what AI and related terms mean, we can’t talk about them, or educate young people about the field.

    A group of young people demonstrate a project at Coolest Projects.

    So when we started designing materials for the Experience AI learning programme in partnership with leading AI unit Google DeepMind, we decided to create short explanations of key AI and machine learning (ML) terms. The explanations are doubly useful:

    1. They ensure that we give learners and teachers a consistent and clear understanding of the key terms across all our Experience AI resources. Within the Experience AI Lessons for Key Stage 3 (age 11–14), these key terms are also correlated to the target concepts and learning objectives presented in the learning graph. 
    2. They help us talk about AI and AI education in our team. Thanks to sharing an understanding of what terms such as “AI”, “ML”, “model”, or “training” actually mean and how to best talk about AI, our conversations are much more productive.

    As an example, here is our explanation of the term “artificial intelligence” for learners aged 11–14:

    Artificial intelligence (AI) is the design and study of systems that appear to mimic intelligent behaviour. Some AI applications are based on rules. More often now, AI applications are built using machine learning that is said to ‘learn’ from examples in the form of data. For example, some AI applications are built to answer questions or help diagnose illnesses. Other AI applications could be built for harmful purposes, such as spreading fake news. AI applications do not think. AI applications are built to carry out tasks in a way that appears to be intelligent.

    You can find 32 explanations in the glossary that is part of the Experience AI Lessons. Here’s an insight into how we arrived at the explanations.

    Reliable sources

    In order to ensure the explanations are as precise as possible, we first identified reliable sources. These included among many others:

    Explaining AI terms to Key Stage 3 learners: Some principles

    Vocabulary is an important part of teaching and learning. When we use vocabulary correctly, we can support learners to develop their understanding. If we use it inconsistently, this can lead to alternate conceptions (misconceptions) that can interfere with learners’ understanding. You can read more about this in our Pedagogy Quick Read on alternate conceptions.

    Some of our principles for writing explanations of AI terms were that the explanations need to: 

    • Be accurate
    • Be grounded in education research best practice
    • Be suitable for our target audience (Key Stage 3 learners, i.e. 11- to 14-year-olds)
    • Be free of terms that have alternative meanings in computer science, such as “algorithm”

    We engaged in an iterative process of writing explanations, gathering feedback from our team and our Experience AI project partners at Google DeepMind, and adapting the explanations. Then we went through the feedback and adaptation cycle until we all agreed that the explanations met our principles.

    A real banana and an image of a banana shown on the screen of a laptop are both labelled "Banana".
    Image: Max Gruber / Better Images of AI / Ceci n’est pas une banane / CC-BY 4.0

    An important part of what emerged as a result, aside from the explanations of AI terms themselves, was a blueprint for how not to talk about AI. One aspect of this is avoiding anthropomorphism, detailed by Ben Garside from our team here.

    As part of designing the Experience AI Lessons, creating the explanations helped us to:

    • Decide which technical details we needed to include when introducing AI concepts in the lessons
    • Figure out how to best present these technical details
    • Settle debates about where it would be appropriate, given our understanding and our learners’ age group, to abstract or leave out details

    Using education research to explain AI terms

    One of the ways education research informed the explanations was that we used semantic waves to structure each term’s explanation in three parts: 

    1. Top of the wave: The first one or two sentences are a high-level abstract explanation of the term, kept as short as possible, while introducing key words and concepts.
    2. Bottom of the wave: The middle part of the explanation unpacks the meaning of the term using a common example, in a context that’s familiar to a young audience. 
    3. Top of the wave: The final one or two sentences repack what was explained in the example in a more abstract way, reconnecting with the top of the wave at the beginning of the explanation. They may also add further information that leads on to another concept.

    Most explanations also contain ‘middle of the wave’ sentences, which add further abstract content, bridging the concrete ‘bottom of the wave’ example and the abstract ‘top of the wave’ content.

    Here’s the “artificial intelligence” explanation broken up into the parts of the semantic wave:

    • Artificial intelligence (AI) is the design and study of systems that appear to mimic intelligent behaviour. (top of the wave)
    • Some AI applications are based on rules. More often now, AI applications are built using machine learning that is said to ‘learn’ from examples in the form of data. (middle of the wave)
    • For example, some AI applications are built to answer questions or help diagnose illnesses. Other AI applications could be built for harmful purposes, such as spreading fake news. (bottom of the wave)
    • AI applications do not think. (middle of the wave)
    • AI applications are built to carry out tasks in a way that appears to be intelligent. (top of the wave)
    Our “artificial intelligence” explanation broken up into the parts of the semantic wave. Red = top of the wave; yellow = middle of the wave; green = bottom of the wave

    Was it worth our time?

    Some of the explanations went through 10 or more iterations before we agreed they were suitable for publication. After months of thinking about, writing, correcting, discussing, and justifying the explanations, it’s tempting to wonder whether I should have just prompted an AI chatbot to generate the explanations for me.

    A photograph of a tree alongside two simplified versions of the same image, generated by a decision tree model trained to predict pixel colour values from the original photograph.
    Image: Rens Dimmendaal & Johann Siemens / Better Images of AI / Decision Tree reversed / CC-BY 4.0

    I tested this idea by getting a chatbot to generate an explanation of “artificial intelligence” using the prompt “Explain what artificial intelligence is, using vocabulary suitable for KS3 students, avoiding anthropomorphism”. The result included quite a few inconsistencies with our principles, as well as a couple of technical inaccuracies. Perhaps I could have tweaked the prompt for the chatbot in order to get a better result. However, relying on a chatbot’s output would mean missing out on some of the value of doing the work of writing the explanations in collaboration with my team and our partners.

    The visible result of that work is the explanations themselves. The invisible result is the knowledge we all gained, and the coherence we reached as a team, both of which enabled us to create high-quality resources for Experience AI. We wouldn’t have known what resources we wanted to create without writing the explanations ourselves and improving them over and over. So yes, it was worth our time.

    What do you think about the explanations?

    The process of creating and iterating the AI explanations highlights how opaque the field of AI still is, and how little we yet know about how best to teach and learn about it. At the Raspberry Pi Foundation, we now know just a bit more about that and are excited to share the results with teachers and young people.

    You can access the Experience AI Lessons and the glossary with all our explanations at experience-ai.org. The glossary of AI explanations is just in its first published version: we will continue to improve it as we find out more about how to best support young people to learn about this field.

    Let us know what you think about the explanations and whether they’re useful in your teaching. Onwards with the exciting work of establishing how to successfully engage young people in learning about and creating with AI technologies.

    Website: LINK

  • This AI system helps visually impaired people locate dining utensils

    This AI system helps visually impaired people locate dining utensils

    Reading Time: 2 minutes

    People with visual impairments enjoy going out to a restaurant for a nice meal just like anyone else, which is why it is common for wait staff to place the salt and pepper shakers in a consistent arrangement: salt on the right and pepper on the left. That helps visually impaired diners quickly find the seasoning they’re looking for, and a similar arrangement works for utensils. But what about after a diner sets down a utensil in the middle of a meal? The ForkLocator is an AI system that can help them locate the utensil again.

    This is a wearable device meant for people with visual impairments. It uses object recognition and haptic cues to help the user locate their fork. The current prototype, built by Revoxdyna, only works with forks. But it would be possible to expand the system to work with the full range of utensils. Haptic cues come from four servo motors, which prod the user’s arm to indicate the direction in which they should move their hand to find the fork.

    The user’s smartphone performs the object recognition and should be worn or positioned in such a way that its camera faces the table. The smartphone app looks for the plate, the fork, and the user’s hand. It then calculates a vector from the hand to the fork and tells an Arduino board to actuate the servo motors corresponding to that direction. Those servos and the Arduino attach to a 3D-printed frame that straps to the user’s upper arm.
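    The post doesn’t share the firmware, but the Arduino side of a setup like this is straightforward to sketch. Below is a minimal, hypothetical example that assumes the phone app sends a single direction character (‘U’, ‘D’, ‘L’, or ‘R’) over a serial link and that the four servos are wired to the pins listed; the real ForkLocator may use a different connection (such as Bluetooth) and wiring.

    ```cpp
    #include <Servo.h>

    // Four servos prod the user's arm in different directions.
    // Pin numbers and the serial protocol are assumptions for this sketch.
    Servo servoUp, servoDown, servoLeft, servoRight;

    const int REST_ANGLE = 0;
    const int PROD_ANGLE = 60;  // assumed angle for a noticeable tap

    void prod(Servo &s) {
      s.write(PROD_ANGLE);   // push against the arm
      delay(300);
      s.write(REST_ANGLE);   // return to rest
    }

    void setup() {
      Serial.begin(9600);    // assumed link to the smartphone app
      servoUp.attach(3);
      servoDown.attach(5);
      servoLeft.attach(6);
      servoRight.attach(9);
    }

    void loop() {
      if (Serial.available()) {
        char dir = Serial.read();  // direction derived from the hand-to-fork vector
        switch (dir) {
          case 'U': prod(servoUp);    break;
          case 'D': prod(servoDown);  break;
          case 'L': prod(servoLeft);  break;
          case 'R': prod(servoRight); break;
        }
      }
    }
    ```

    Collapsing the computed hand-to-fork vector into four discrete directions like this keeps the haptic feedback simple for the user to interpret.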

    A lot more development is necessary before a system like the ForkLocator would be ready for the consumer market, but the accessibility benefits are something to applaud.

    [youtube https://www.youtube.com/watch?v=_TgC0KYyzwI]

    The post This AI system helps visually impaired people locate dining utensils appeared first on Arduino Blog.

    Website: LINK

  • Enabling automated pipeline maintenance with edge AI

    Enabling automated pipeline maintenance with edge AI

    Reading Time: 2 minutes

    Pipelines are integral to our modern way of life, as they enable the fast transportation of water and energy between central providers and the eventual consumers of those resources. However, cracks caused by mechanical or corrosive stress can lead to leaks, and thus wasted product or even potentially dangerous situations. Although inspection methods using thermal cameras or microphones exist, they’re hard to use interchangeably across different pipeline types, which is why Kutluhan Aktar instead went with a combination of mmWave radar and an ML model running on an Arduino Nicla Vision board to detect these issues before they become a real problem.

    The project was originally conceived as an arrangement of parts on a breadboard, including a Seeed Studio MR60BHA1 60GHz radar module, an ILI9341 TFT screen, an Arduino Nano for interfacing with the sensor and display, and a Nicla Vision board. From here, Kutluhan designed his own Dragonite-themed PCB, assembled the components, and began collecting training and testing data for a machine learning model by building a small PVC pipe model, introducing various defects, and recording the resulting differences in the mmWave sensor data. The system can detect defects because liquid moving through a pipe produces minute variations in vibration, and increased turbulence is often correlated with damage.
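    For the data-collection stage, the Nano mainly needs to pass the radar’s readings on to a computer so they can be labelled (cracked, clogged, or leakage) and uploaded to Edge Impulse. The sketch below is only an illustration under assumed wiring and baud rates: it forwards raw bytes from the MR60BHA1’s UART over USB, whereas the actual project’s firmware may parse the sensor output differently.

    ```cpp
    #include <SoftwareSerial.h>

    // Minimal data-logging sketch for an Arduino Nano (assumptions: radar UART on
    // pins 2/3 and a 115200 baud rate; the real build may differ).
    SoftwareSerial radarSerial(2, 3);  // RX, TX

    void setup() {
      Serial.begin(115200);       // USB link used to capture samples on a computer
      radarSerial.begin(115200);  // link to the MR60BHA1 radar module
    }

    void loop() {
      // Forward every raw byte from the radar so samples can be saved,
      // labelled, and used for training in Edge Impulse.
      while (radarSerial.available()) {
        Serial.write(radarSerial.read());
      }
    }
    ```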

    After configuring a time-series impulse in Edge Impulse, Kutluhan trained a classification model with three labels (cracked, clogged, and leakage) to determine whether the pipe had any damage. The model was then deployed to the Nicla Vision, where it achieved an accuracy of 90% on real-world data. With the aid of the screen, operators can see the result of the classification immediately, as well as send the data to a custom web application.
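    Edge Impulse’s Arduino export wraps the trained model in a generated library, and running it on the Nicla Vision follows the same pattern as the standard Edge Impulse examples. The snippet below is a simplified sketch that assumes a hypothetical library name (pipeline_defect_inferencing) and a features[] buffer already filled with one window of mmWave readings; the real firmware also drives the TFT screen and the web application.

    ```cpp
    #include <pipeline_defect_inferencing.h>  // hypothetical name of the Edge Impulse Arduino export

    // One window of mmWave readings, filled elsewhere before classification.
    static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

    // Callback that lets the classifier read slices of the feature buffer.
    static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
      memcpy(out_ptr, features + offset, length * sizeof(float));
      return 0;
    }

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      signal_t signal;
      signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
      signal.get_data = &get_feature_data;

      ei_impulse_result_t result = { 0 };
      if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
        // Print the confidence for each label: cracked, clogged, leakage.
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
          Serial.print(result.classification[ix].label);
          Serial.print(": ");
          Serial.println(result.classification[ix].value);
        }
      }
      delay(2000);
    }
    ```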

    [youtube https://www.youtube.com/watch?v=ghSaefzzEXY]

    More details on the project can be found on its Edge Impulse docs page.

    The post Enabling automated pipeline maintenance with edge AI appeared first on Arduino Blog.

    Website: LINK