Tag: TinyML

  • tinyML in Malawi: Empowering local communities through technology

    Reading Time: 3 minutes

    Dr. David Cuartielles, co-founder of Arduino, recently participated in a workshop titled “TinyML for Sustainable Development” in Zomba, organized by the International Centre for Theoretical Physics (ICTP), a Category 1 UNESCO institute, and the University of Malawi. Bringing together students, educators, and professionals from Malawi and neighboring countries, as well as international experts from Brazil, Slovenia, Italy, and Sweden, the event aimed to introduce participants to tiny machine learning (tinyML) and its applications in addressing global challenges, carrying cutting-edge technology to new frontiers.

    The workshop was supported by various global organizations and companies, including RAiDO, ICTP, NAiXus, UNESCO’s IRC-AI, the EDGE AI FOUNDATION, ITU’s AI-4-Good, CRAFS, and the Ministry of Education of Malawi. As part of our commitment to supporting educational initiatives that promote technological empowerment and sustainable development worldwide, Arduino contributed by donating equipment for the hands-on sessions, enabling participants to gain practical experience with embedded systems and machine learning.

    Cuartielles – who centered his session on an introduction to Nicla Vision – is a long-time supporter of the importance of providing access to advanced technologies in regions with limited resources. He believes that such communities can leapfrog traditional development stages by adopting innovative solutions tailored to their specific needs. During the workshop, participants engaged in projects focusing on agriculture, health, and environmental monitoring, demonstrating the potential of tinyML in improving local livelihoods.

    “You cannot imagine the pride of seeing things work, when students and teachers from different countries or regions join to learn about our technology, and about how they can apply it in their own education programs or everyday implementation cases,” Cuartielles says.

    For those interested in learning more about the workshop and its content, all presentation slides and resources are available online.

    The post tinyML in Malawi: Empowering local communities through technology appeared first on Arduino Blog.

  • Reimagining the chicken coop with predator detection, Wi-Fi control, and more

    Reading Time: 2 minutes

    The traditional backyard chicken coop is a very simple structure that typically consists of a nesting area, an egg-retrieval panel, and a way to provide food and water as needed. Realizing that some aspects of raising chickens are too labor-intensive, the Coders Cafe crew decided to automate most of the daily care process by bringing some IoT smarts to the traditional hen house.

    Controlled and actuated by an Arduino UNO R4 WiFi and a stepper motor, respectively, the front door of the coop relies on a rack-and-pinion mechanism to quickly open or close at the scheduled times. After the chickens have entered the coop to rest or lay eggs, they can be fed using a pair of fully automatic dispensers. Each one is a hopper with a screw at the bottom that, aided by gravity, draws in the food and gently distributes it onto the ground. And as with the door, feedings can be scheduled in advance through the team’s custom app and the UNO R4’s integrated Wi-Fi chipset.

    The last and most advanced feature is the AI predator detection system. Thanks to a DFRobot HuskyLens vision module and its built-in training process, images of predatory animals can be captured and used to teach the HuskyLens when to generate an alert. Once an animal has been detected, it notifies the UNO R4 over I2C, which, in turn, sends an SMS message via Twilio.
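
    For anyone curious how the detection side might look in code, here is a minimal sketch of the polling loop, assuming the DFRobot HUSKYLENS Arduino library; the Twilio call is stubbed out, and treating any learned object ID as a predator is a simplification of the actual build.

      #include <Wire.h>
      #include "HUSKYLENS.h"  // DFRobot HuskyLens Arduino library

      HUSKYLENS huskylens;

      void sendTwilioAlert() {
        // Placeholder: the real project issues an SMS through Twilio using
        // the UNO R4 WiFi's network connection.
        Serial.println("Predator detected -- sending SMS alert");
      }

      void setup() {
        Serial.begin(115200);
        Wire.begin();
        while (!huskylens.begin(Wire)) {
          Serial.println("HuskyLens not found, retrying...");
          delay(500);
        }
      }

      void loop() {
        if (huskylens.request() && huskylens.available()) {
          HUSKYLENSResult result = huskylens.read();
          if (result.ID >= 1) {  // any learned (trained) object counts here
            sendTwilioAlert();
            delay(60000);        // rate-limit alerts to one per minute
          }
        }
        delay(100);
      }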

    More details about the project can be found in Coders Cafe’s Instructables writeup.

    The post Reimagining the chicken coop with predator detection, Wi-Fi control, and more appeared first on Arduino Blog.

  • Making fire detection more accurate with ML sensor fusion

    Reading Time: 2 minutes

    The mere presence of a flame in a controlled environment, such as a candle, is perfectly acceptable, but when tasked with determining if there is cause for alarm solely using vision data, embedded AI models can struggle with false positives. Solomon Githu’s project aims to lower the rate of incorrect detections with a multi-input sensor fusion technique wherein image and temperature data points are used by a model to alert if there’s a potentially dangerous blaze.

    Gathering both kinds of data is the Arduino TinyML Kit’s Nano 33 BLE Sense. Using the kit, Githu could capture a wide variety of images thanks to the OV7675 camera module and temperature information with the Nano 33 BLE Sense’s onboard HTS221 sensor. After exporting a large dataset of fire/fire-less samples alongside a range of ambient temperatures, he leveraged Google Colab to train the model before importing it into the Edge Impulse Studio, where the model’s memory footprint was further reduced to fit onto the Nano 33 BLE Sense.

    The inferencing sketch polls the camera for a new frame, and once it has been resized, its frame data, along with a new sample from the temperature sensor, is merged and sent through the model, which outputs either “fire” or “safe_environment”. As detailed in Githu’s project post, the system accurately classified several scenarios in which a flame combined with elevated temperatures resulted in a positive detection.
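
    As a rough sketch of what that fusion step can look like with an Edge Impulse exported Arduino library (the header name below is hypothetical, and the feature layout must match however the Impulse was actually configured):

      #include <fire_detection_inferencing.h>  // hypothetical Edge Impulse library name

      // Resized frame data plus the latest temperature reading, packed to
      // match the input frame size the Impulse expects.
      static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

      static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
        memcpy(out_ptr, features + offset, length * sizeof(float));
        return 0;
      }

      void setup() {
        Serial.begin(115200);
      }

      void loop() {
        // In the real sketch, the camera frame and HTS221 sample are copied
        // into `features` here before classifying.
        signal_t signal;
        signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
        signal.get_data = &get_feature_data;

        ei_impulse_result_t result;
        if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
          for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
            // Labels in this project: "fire" and "safe_environment"
            ei_printf("%s: %.2f\r\n", result.classification[i].label,
                      result.classification[i].value);
          }
        }
        delay(2000);
      }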

    The post Making fire detection more accurate with ML sensor fusion appeared first on Arduino Blog.

  • Making a car more secure with the Arduino Nicla Vision

    Reading Time: 2 minutes

    Shortly after attending a recent tinyML workshop in São Paulo, Brazil, João Vitor Freitas da Costa was looking for a way to incorporate some of the technologies and techniques he learned into a useful project. Given that he lives in an area which experiences elevated levels of pickpocketing and automotive theft, he turned his attention to a smart car security system.

    His solution to a potential break-in or theft of keys revolves around the incorporation of an Arduino Nicla Vision board running a facial recognition model that only allows the vehicle to start if the owner is sitting in the driver’s seat. The beginning of the image detection/processing loop involves grabbing the next image from the board’s camera and sending it to a classification model where it receives one of three labels: none, unknown, or Joao, the driver. Once the driver has been detected for 10 consecutive seconds, the Nicla Vision activates a relay in order to complete the car’s 12V battery circuit, at which point the vehicle can be started normally with the ignition.
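
    The 10-second gate is easy to picture in code. Below is a minimal sketch of just that logic, with the relay pin number assumed and the actual Nicla Vision inference reduced to a stub:

      const int RELAY_PIN = 2;              // relay completing the 12V circuit (pin assumed)
      const unsigned long HOLD_MS = 10000;  // driver must be seen for 10 s straight

      unsigned long firstSeen = 0;
      bool engaged = false;

      // Stand-in for the real classifier; returns true when the current
      // camera frame is labeled as the known driver.
      bool driverRecognized() {
        return false;  // replace with actual model inference
      }

      void setup() {
        pinMode(RELAY_PIN, OUTPUT);
        digitalWrite(RELAY_PIN, LOW);
      }

      void loop() {
        if (driverRecognized()) {
          if (firstSeen == 0) firstSeen = millis();
          if (!engaged && millis() - firstSeen >= HOLD_MS) {
            digitalWrite(RELAY_PIN, HIGH);  // allow the car to start
            engaged = true;
          }
        } else {
          firstSeen = 0;                    // streak broken, start over
        }
        delay(100);
      }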

    Through this project, da Costa was able to explore a practical application of vision models at the edge to make his friend’s car safer to use. To see how it works in more detail, you can check out the video below and delve into the tinyML workshop he attended here.

    [youtube https://www.youtube.com/watch?v=LG1YhM2kelI]

    The post Making a car more secure with the Arduino Nicla Vision appeared first on Arduino Blog.

  • Classify nearby annoyances with this sound monitoring device

    Reading Time: 2 minutes

    Soon after a police station opened near his house, Christopher Cooper noticed a substantial increase in emergency vehicle traffic and its associated noise, even though local officials had promised that it would not be disruptive. But rather than write down every occurrence to track the volume of disturbances, he came up with a connected audio-classifying device that can automatically note the time and type of sound for later analysis.

    Categorizing each sound was done by leveraging Edge Impulse and an Arduino Nano 33 BLE Sense. After training a model and deploying it within a sketch, the Nano will continually listen for new noises through its onboard microphone, run an inference, and then output the label and confidence over UART serial. Reading this stream of data is an ESP32 Dev Kit, which displays every entry in a list on a useful GUI. The screen allows users to select rows, view more detailed information, and even modify the category if needed.
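
    The post doesn’t spell out the serial format, but a simple line-based protocol is enough for this kind of link. Here is a minimal sketch of the ESP32 receiving side, assuming the Nano prints one “label,confidence” pair per line and the usual Serial2 pins:

      // Runs on the ESP32 Dev Kit. Assumes the Nano 33 BLE Sense prints one
      // line per inference in the form "label,confidence\n".
      void setup() {
        Serial.begin(115200);
        Serial2.begin(115200, SERIAL_8N1, 16, 17);  // RX=16, TX=17 (pins assumed)
      }

      void loop() {
        if (Serial2.available()) {
          String line = Serial2.readStringUntil('\n');
          int comma = line.indexOf(',');
          if (comma > 0) {
            String label = line.substring(0, comma);
            float confidence = line.substring(comma + 1).toFloat();
            Serial.printf("heard %s (%.0f%%)\n", label.c_str(), confidence * 100);
            // ...append to the on-screen list / SD log here
          }
        }
      }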

    Going beyond the hardware aspect, Cooper’s project also includes a web server running on the ESP32 that can show the logs within a browser, and users can even connect an SD card to have automated file entries created. For more information about this project, you can read Cooper’s write-up here on Hackster.io.

    [youtube https://www.youtube.com/watch?v=UYE-HdRBQnE]

    The post Classify nearby annoyances with this sound monitoring device appeared first on Arduino Blog.

  • Arduino featured in the 2024 State of the Edge AI Report

    Reading Time: 3 minutes

    The 2024 State of Edge AI Report is out, and we’re proud to be in it — for the second year in a row!

    “Edge AI is a crucial technology in this world of finite resources. It allows us to monitor and optimize consumption in real time: so the use of electricity or water, for example, can be optimized not just for today, but for the future. Manufacturing, agriculture and logistics can minimize their impact, with huge potential for cost savings as well as lowering our carbon footprint,” explains Fabio Violante, CEO of Arduino.

    Edge AI has witnessed a remarkable surge in recent years, driven among other factors by the urgent need for efficient resource management and sustainability. Indeed, this technology leverages real-time data analytics and predictive modeling to enable proactive decision-making in a wide variety of sectors. 

    The 2024 “State of Edge AI” Report, curated by Wevolver, contains a plethora of examples and insights relevant to applications ranging from healthcare to automotive. 

    For example, edge AI solutions facilitate precision farming practices by analyzing soil moisture levels, weather patterns, and crop health data to optimize irrigation and fertilization, thereby maximizing yields while minimizing environmental impact.

    In logistics and transportation networks, deploying AI-powered edge devices in vehicles and infrastructure makes real-time monitoring of traffic conditions and route optimization feasible. This not only improves operational efficiency but also enhances safety by mitigating the risks of accidents and breakdowns. Edge AI also facilitates the development of smart cities by enabling intelligent management of utilities, transportation systems, and public services through seamless integration with IoT devices and sensors deployed across urban environments. This empowers municipalities to optimize resource allocation, reduce congestion, and enhance the overall quality of life for residents.

    In addition to optimizing resource and energy use to reduce financial and environmental impacts, edge AI-powered systems can lead to significant cost savings by foreseeing equipment failures. Predictive maintenance was indeed the focus of our contribution to this year’s report, showcasing products like Opta, Nicla Sense ME and Portenta Machine Control and success stories (like AROL’s and Engapplic’s) that bring the benefits of edge AI into the realm of present, tangible opportunities for enterprises in any industry and at any stage of their development. 

    Curious to find out more? Just download the 108-page report for free at this link.

    “Simplicity is the key to success. In the tech world, a solution is only as successful as it is widely accepted, adopted and applied — and not everyone can be an expert. You don’t have to know how electricity works to turn on the lights, how an engine is built to drive a car, or how large language models were developed to write a ChatGPT prompt: that plays a huge part in the popularity of these tools,” Violante adds. “That’s why, at Arduino, we make it our mission to democratize technologies like edge AI — providing simple interfaces, off-the-shelf hardware, readily available software libraries, free tools, shared knowledge, and everything else we can think of. We believe edge AI today can become an accessible, even easy-to-use option, and that more and more people across all industries, in companies of all sizes, will be able to leverage this innovation to solve problems, create value, and grow.”

    The post Arduino featured in the 2024 State of the Edge AI Report appeared first on Arduino Blog.

  • Detecting HVAC failures early with an Arduino Nicla Sense ME and edge ML

    Reading Time: 2 minutes

    Having constant, reliable access to a working HVAC system is vital for our way of living, as it provides a steady supply of fresh, conditioned air. In an effort to decrease downtime and maintenance costs from failures, Yunior González and Danelis Guillan have developed a prototype device that aims to leverage edge machine learning to predict issues before they occur.

    The duo went with a Nicla Sense ME due to its onboard accelerometer, and after collecting many readings from each of the three axes at a 10 Hz sampling rate, they imported the data into Edge Impulse to create the model. This time, rather than using a classifier, they utilized a K-means clustering algorithm, which is great at detecting anomalous readings, such as a motor spinning erratically, against a steady baseline.

    Once the Nicla Sense ME had detected an anomaly, it needed a way to send this data somewhere else and generate an alert. González and Guillan’s setup accomplishes the goal by connecting a Microchip AVR-IoT Cellular Mini board to the Sense ME along with a screen, and upon receiving a digital signal from the Sense ME, the AVR-IoT Cellular Mini logs a failure in an Azure Cosmos DB instance where it can be viewed later on a web app.
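
    A bare-bones sketch of that handshake on the Nicla Sense ME side might look as follows, with the alert pin and threshold value assumed, and the deployed K-means distance computation reduced to a stub:

      #include "Arduino_BHY2.h"

      SensorXYZ accel(SENSOR_ID_ACC);
      const int ALERT_PIN = 2;           // wired to the AVR-IoT Cellular Mini (pin assumed)
      const float DIST_THRESHOLD = 4.5;  // anomaly cutoff, placeholder value

      // Stand-in for the deployed K-means step: distance from the nearest
      // learned cluster center for the latest window of accelerometer data.
      float clusterDistance() {
        return 0.0;  // replace with the model's output
      }

      void setup() {
        BHY2.begin();
        accel.begin();
        pinMode(ALERT_PIN, OUTPUT);
      }

      void loop() {
        BHY2.update();
        // Raise the pin so the AVR-IoT logs the failure to Cosmos DB
        digitalWrite(ALERT_PIN, clusterDistance() > DIST_THRESHOLD ? HIGH : LOW);
        delay(100);
      }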

    To learn more about this predictive maintenance project, you can read the pair’s write-up here on Hackster.io.

    [youtube https://www.youtube.com/watch?v=R_FClm1Hk_s]

    The post Detecting HVAC failures early with an Arduino Nicla Sense ME and edge ML appeared first on Arduino Blog.

  • Classifying audio on the GIGA R1 WiFi from purely synthetic data

    Reading Time: 2 minutes

    One of the main difficulties that people encounter when trying to build their edge ML models is gathering a large, yet simultaneously diverse, dataset. Audio models normally require setting up a microphone, capturing long sequences of sounds, and then manually removing bad data from the resulting files. Shakhizat Nurgaliyev’s project, however, sidesteps this arduous process by taking advantage of generative models to produce the dataset artificially.

    To go from three audio classes (speech, music, and background noise) to a complete dataset, Nurgaliyev wrote a simple prompt for ChatGPT that gave directions for creating a total of 300 detailed audio descriptions. After this, he grabbed an NVIDIA Jetson AGX Orin Developer Kit and loaded Meta’s generative AudioCraft model, which allowed him to pass in the previously made audio prompts and receive sound snippets in return.

    The final steps involved creating an Edge Impulse audio classification project, uploading the generated samples, and designing an Impulse that leveraged the MFE audio block and a Keras classifier model. Once an Arduino library had been built, Nurgaliyev loaded it, along with a simple sketch, onto an Arduino GIGA R1 WiFi board that continually listened for new audio data, performed classification, and displayed the label on the GIGA R1’s Display Shield screen.

    [youtube https://www.youtube.com/watch?v=SMixY8lOAN4]

    To read more about this project, you can visit its write-up here on Hackster.io.

    The post Classifying audio on the GIGA R1 WiFi from purely synthetic data appeared first on Arduino Blog.

  • Controlling a power strip with a keyword spotting model and the Nicla Voice

    Reading Time: 2 minutes

    As Jallson Suryo discusses in his project, adding voice controls to our appliances typically involves an internet connection and a smart assistant device such as Amazon Alexa or Google Assistant. This means extra latency, security concerns, and increased expenses due to the additional hardware and bandwidth requirements. This is why he created a prototype based on an Arduino Nicla Voice that can provide power for up to four outlets using just a voice command.

    Suryo gathered a dataset by repeating the words “one,” “two,” “three,” “four,” “on,” and “off” into his phone and then uploaded the recordings to an Edge Impulse project. From here, he split the files into individual words before rebalancing his dataset to ensure each label was equally represented. The classifier model was trained for keyword spotting with Syntiant NDP120-optimized audio settings, yielding an accuracy of around 80%.

    Apart from the Nicla Voice, Suryo incorporated a Pro Micro board to handle switching the bank of relays on or off. When the Nicla Voice detects a relay number keyword such as “one” or “three,” it waits until the follow-up “on” or “off” keyword is detected. With both the number and state known, it sends an I2C transmission to the accompanying Pro Micro, which decodes the command and switches the correct relay.
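
    The receiving side of that I2C exchange is compact. Here is a minimal sketch of what the Pro Micro’s decoder could look like, assuming a two-byte message of {relay number, state} and the relay pin assignments shown (both are assumptions, not details from the post):

      // Runs on the Pro Micro as an I2C peripheral.
      #include <Wire.h>

      const int relayPins[4] = {4, 5, 6, 7};  // relay driver pins (assumed)

      void onCommand(int numBytes) {
        if (numBytes < 2) return;
        int relay = Wire.read();  // "one" ... "four" -> 1..4
        int state = Wire.read();  // "off" -> 0, "on" -> 1
        if (relay >= 1 && relay <= 4) {
          digitalWrite(relayPins[relay - 1], state ? HIGH : LOW);
        }
      }

      void setup() {
        for (int pin : relayPins) pinMode(pin, OUTPUT);
        Wire.begin(0x08);  // join the bus at address 0x08 (assumed)
        Wire.onReceive(onCommand);
      }

      void loop() {}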

    To see more about this voice-controlled power strip, be sure to check out Suryo’s Edge Impulse tutorial.

    [youtube https://www.youtube.com/watch?v=9PRjhA38jBE]

    The post Controlling a power strip with a keyword spotting model and the Nicla Voice appeared first on Arduino Blog.

  • Improve recycling with the Arduino Pro Portenta C33 and AI audio classification

    Reading Time: 2 minutes

    In July 2023, Samuel Alexander set out to reduce the amount of trash that gets thrown out due to poor sorting practices at the recycling bin. His original design relied on an Arduino Nano 33 BLE Sense to capture audio through its onboard microphone and then perform edge audio classification with an embedded ML model to automatically separate materials based on the sound they make when tossed inside. But in this latest iteration, Alexander added several large improvements to help the concept scale much further.

    In perhaps the most substantial modification, the bin now uses an Arduino Pro Portenta C33 in combination with an external Nicla Voice or Nano 33 BLE Sense to not only perform inferences to sort trash, but also send real-time data to a cloud endpoint. By utilizing the Arduino Cloud through the Portenta C33, each AI-enabled recycling bin can now report its current capacity for each type of waste and then send an alert when collection must occur.
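
    Reporting those fill levels through the Arduino Cloud typically comes down to a handful of synced properties. A minimal sketch under assumed variable names, omitting the Thing ID and credentials that the Cloud sketch builder normally generates:

      #include <ArduinoIoTCloud.h>
      #include <Arduino_ConnectionHandler.h>

      // Cloud-synced variables: fill levels (percent) per compartment plus an alert flag.
      float paperLevel, plasticLevel, metalLevel;
      bool needsCollection;

      WiFiConnectionHandler conn("SSID", "PASSWORD");  // placeholder credentials

      void setup() {
        ArduinoCloud.addProperty(paperLevel, READ, ON_CHANGE, NULL);
        ArduinoCloud.addProperty(plasticLevel, READ, ON_CHANGE, NULL);
        ArduinoCloud.addProperty(metalLevel, READ, ON_CHANGE, NULL);
        ArduinoCloud.addProperty(needsCollection, READ, ON_CHANGE, NULL);
        ArduinoCloud.begin(conn);
      }

      void loop() {
        ArduinoCloud.update();
        // Flag the bin for pickup once any compartment is nearly full (threshold assumed)
        needsCollection = paperLevel > 90 || plasticLevel > 90 || metalLevel > 90;
      }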

    While not as practical for household use, this integration could be incredibly effective for municipalities looking to create a network of bins that can be deployed in a city park environment or another public space.

    Thanks to these upgrades, Alexander was able to submit his prototype for consideration in the 2023 Hackaday Prize competition where he was awarded the Protolabs manufacturing grant. To see more about this innovative project, you can check out its write-up here and watch Alexander’s detailed explanation video below.

    [youtube https://www.youtube.com/watch?v=jqvCssm-7A4]

    The post Improve recycling with the Arduino Pro Portenta C33 and AI audio classification appeared first on Arduino Blog.

  • Building the OG smartwatch from Inspector Gadget

    Reading Time: 2 minutes

    We recently showed you Becky Stern’s recreation of the “computer book” carried by Penny in the Inspector Gadget cartoon, but Stern didn’t stop there. She also built a replica of Penny’s most iconic gadget: her watch. Penny was a trendsetter and rocked a smartwatch decades before the Apple Watch hit the market. Stern’s replica looks just like the cartoon version and even has some of the same features.

    The centerpiece of this project is an Arduino Nicla Voice board. The Arduino team designed that board specifically for speech recognition on the edge, which made it perfect for recognizing Penny’s signature “come in, Brain!” voice command. Stern used Edge Impulse to train an AI to recognize that phrase as a wake word. When the Nicla Voice board hears that, it changes the image on the smart watch screen to a new picture of Brain the dog.

    The Nicla Voice board and an Adafruit 1.69″ color IPS TFT screen fit inside a 3D-printed enclosure modeled on Penny’s watch from the cartoon. The enclosure even has a clever 3D-printed watch band with links connected by lengths of fresh filament. Power comes from a small lithium battery that also fits inside the enclosure.

    This watch and Stern’s computer book will both be part of an Inspector Gadget display put on by Digi-Key at Maker Faire Rome, so you can see it in person if you attend.

    [youtube https://www.youtube.com/watch?v=Yd74FYTvGX8]

    The post Building the OG smartwatch from Inspector Gadget appeared first on Arduino Blog.

  • Improving comfort and energy efficiency in buildings with automated windows and blinds

    Reading Time: 2 minutes

    When dealing with indoor climate controls, there are several variables to consider, such as the outside weather, people’s tolerance to hot or cold temperatures, and the desired level of energy savings. Windows can make this extra challenging, as they let in large amounts of light/heat and can create poorly insulated regions, which is why Jallson Suryo developed a prototype that aims to balance these needs automatically through edge AI techniques.

    Suryo’s smart building ventilation system utilizes two separate boards, with an Arduino Nano 33 BLE Sense handling environmental sensor fusion and a Nicla Voice listening for certain ambient sounds. Rain and thunder noises were uploaded from an existing dataset, split and labeled accordingly, and then used to train a Syntiant audio classification model for the Nicla Voice’s NDP120 processor. Meanwhile, weather and ambient light data was gathered using the Nano’s onboard sensors and combined into time-series samples with labels for sunny/cloudy, humid, comfortable, and dry conditions.

    After deploying the boards’ respective classification models, Suryo added some additional code that writes new I2C data from the Nicla Voice to the Nano to indicate whether rain or thunderstorm sounds are present. If they are, the Nano can automatically close the window via servo motors, while other environmental factors set the position of the blinds. With this multi-sensor technique, a higher level of accuracy can be achieved for more precise control over a building’s windows, thus helping to lower HVAC costs.
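
    A stripped-down version of the Nano’s receiving side could look like this, with the I2C address, servo pin, and servo angles all assumed for illustration (the Servo library is taken as available on the mbed-based Nano core):

      #include <Wire.h>
      #include <Servo.h>

      Servo windowServo;
      volatile bool rainDetected = false;

      // Called when the Nicla Voice writes a byte: 1 = rain/thunder heard, 0 = quiet.
      void onWeatherSound(int numBytes) {
        while (Wire.available()) rainDetected = (Wire.read() == 1);
      }

      void setup() {
        windowServo.attach(9);  // window actuator on pin 9 (assumed)
        Wire.begin(0x08);       // Nano as I2C peripheral at address 0x08 (assumed)
        Wire.onReceive(onWeatherSound);
      }

      void loop() {
        windowServo.write(rainDetected ? 0 : 90);  // 0 = closed, 90 = open (assumed travel)
        delay(200);
      }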

    [youtube https://www.youtube.com/watch?v=mqk1IRz76HM]

    More information about Suryo’s project can be found here on its Edge Impulse docs page.

    The post Improving comfort and energy efficiency in buildings with automated windows and blinds appeared first on Arduino Blog.

  • Nothin’ but (neural) net: Track your basketball score with a Nano 33 BLE Sense

    Reading Time: 2 minutes

    When playing a short game of basketball, few people enjoy having to consciously track their number of successful throws. Yet when it comes to automation, nearly all systems rely on infrared or visual proximity detection to determine whether a shot has gone through the basket or missed. This is what inspired one team from the University of Ljubljana to create a small edge ML-powered device that can be suspended from the net with a pair of zip ties for real-time scorekeeping.

    After collecting a total of 137 accelerometer samples via an Arduino Nano 33 BLE Sense and labeling them as either a miss, a score, or nothing within the Edge Impulse Studio, the team trained a classification model and reached an accuracy of 84.6% on real-world test data. Getting the classification results from the device to somewhere readable is handled by the Nano’s onboard BLE server. It provides two services, with the first for reporting the current battery level and the second for sending score data.
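
    With the ArduinoBLE library, exposing those two services takes only a few declarations. A minimal sketch, with the device name and the custom service/characteristic UUIDs chosen here purely as examples:

      #include <ArduinoBLE.h>

      BLEService batteryService("180F");  // standard Battery Service
      BLEUnsignedCharCharacteristic batteryLevel("2A19", BLERead | BLENotify);

      BLEService scoreService("19B10000-E8F2-537E-4F6C-D104768A1214");  // example UUID
      BLEUnsignedCharCharacteristic scoreChar("19B10001-E8F2-537E-4F6C-D104768A1214",
                                              BLERead | BLENotify);

      void setup() {
        BLE.begin();
        BLE.setLocalName("HoopTracker");  // advertised name (assumed)

        batteryService.addCharacteristic(batteryLevel);
        scoreService.addCharacteristic(scoreChar);
        BLE.addService(batteryService);
        BLE.addService(scoreService);

        batteryLevel.writeValue(100);
        scoreChar.writeValue(0);
        BLE.advertise();
      }

      void loop() {
        BLE.poll();
        // After each classified throw, push the running score to subscribers:
        // scoreChar.writeValue(score);
      }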

    Once the firmware had been deployed, the last step involved building a mobile application to view the relevant information. The app allows users to connect to the basketball scoring device, check if any new data has been received, and then parse/display the new values onscreen.

    [youtube https://www.youtube.com/watch?v=93X_wOuFTdY]

    To read more about this project, you can head over to its write-up on Hackster.io.

    The post Nothin’ but (neural) net: Track your basketball score with a Nano 33 BLE Sense appeared first on Arduino Blog.

  • Helping robot dogs feel through their paws

    Reading Time: 2 minutes

    Your dog has nerve endings covering its entire body, giving it a sense of touch. It can feel the ground through its paws and use that information to gain better traction or detect harmful terrain. For robots to perform as well as their biological counterparts, they need a similar level of sensory input. In pursuit of that goal, the Autonomous Robots Lab designed TRACEPaw for legged robots.

    TRACEPaw (Terrain Recognition And Contact force Estimation Paw) is a sensorized foot for robot dogs that includes all of the hardware necessary to calculate force and classify terrain. Most systems like this use direct sensor readings, such as those from force sensors. But TRACEPaw is unique in that it uses indirect data to infer this information. The actual foot is a deformable silicone hemisphere. A camera looks at that and calculates the force based on the deformation it sees. In a similar way, a microphone listens to the sound of contact and uses that to judge the type of terrain, like gravel or dirt.

    To keep TRACEPaw self-contained, Autonomous Robots Lab chose to utilize an Arduino Nicla Vision board. That has an integrated camera, microphone, six-axis motion sensor, and enough processing power for onboard machine learning. Using OpenMV and TensorFlow Lite, TRACEPaw can estimate the force on the silicone pad based on how much it deforms during a step. It can also analyze the audio signal from the microphone to guess the terrain, as the silicone pad sounds different when touching asphalt than it does when touching loose soil.

    More details on the project are available on GitHub.

    The post Helping robot dogs feel through their paws appeared first on Arduino Blog.

  • This smart diaper knows when it is ready to be changed

    Reading Time: 2 minutes

    The traditional method for changing a diaper starts when someone smells or feels that the diaper has been soiled, and while it isn’t the greatest process, removing the soiled diaper as soon as possible is important for avoiding rashes and infections. Justin Lutz has created an intelligent solution to this situation by designing a small device that alerts people over Bluetooth® when the diaper is ready to be changed.

    Because a dirty diaper gives off volatile organic compounds (VOCs) and small particulates, Lutz realized he could use the Arduino Nicla Sense ME’s built-in BME688 sensor, which can measure VOCs, temperature/humidity, and air quality. After gathering 29 minutes of gas and air quality measurements in the Edge Impulse Studio for both clean and soiled diapers, he trained a classification model for 300 epochs, resulting in a model with 95% accuracy.
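
    Reading those raw inputs on the Nicla Sense ME is done through the Arduino_BHY2 library. A minimal sketch of the sampling side, using sensor IDs from that library, with the one-second interval assumed:

      #include "Arduino_BHY2.h"

      Sensor gas(SENSOR_ID_GAS);  // BME688 gas resistance
      Sensor humidity(SENSOR_ID_HUM);
      Sensor temperature(SENSOR_ID_TEMP);

      void setup() {
        Serial.begin(115200);
        BHY2.begin();
        gas.begin();
        humidity.begin();
        temperature.begin();
      }

      void loop() {
        BHY2.update();
        // Values like these formed the training features in Lutz's dataset
        Serial.print("gas: ");  Serial.print(gas.value());
        Serial.print(" rh: ");  Serial.print(humidity.value());
        Serial.print(" t: ");   Serial.println(temperature.value());
        delay(1000);
      }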

    Based on his prior experience with the Nicla Sense ME’s BLE capabilities and MIT App Inventor, Lutz used the two to devise a small gadget that wirelessly connects to a phone app so it can send notifications when it’s time for a new diaper.

    [youtube https://www.youtube.com/watch?v=Q1BknhEv9cQ]

    To read more about this project, you can check out Lutz’s write-up here on the Edge Impulse docs page.

    The post This smart diaper knows when it is ready to be changed appeared first on Arduino Blog.

  • This Nicla Vision-based fire detector was trained entirely on synthetic data

    Reading Time: 2 minutes

    With the planet ever warming due to climate change and prolonged droughts greatly increasing the chances of wildfires, being able to quickly detect when a fire has broken out is vital for responding while it’s still in a containable stage. But one major hurdle to collecting machine learning datasets on these types of events is that they can be quite sporadic. In his proof-of-concept system, engineer Shakhizat Nurgaliyev shows how he leveraged NVIDIA Omniverse Replicator to create an entirely generated dataset and then deploy a model trained on that data to an Arduino Nicla Vision board.

    The project started out as a simple fire animation inside of Omniverse which was soon followed by a Python script that produces a pair of virtual cameras and randomizes the ground plane before capturing images. Once enough had been created, Nurgaliyev utilized the zero-shot object detection application Grounding DINO to automatically draw bounding boxes around the virtual flames. Lastly, each image was brought into an Edge Impulse project and used to develop a FOMO-based object detection model.

    By taking this approach, the model achieved an F1 score of nearly 87% while also only needing a max of 239KB of RAM and a mere 56KB of flash storage. Once deployed as an OpenMV library, Nurgaliyev shows in his video below how the MicroPython sketch running on a Nicla Vision within the OpenMV IDE detects and bounds flames. More information about this system can be found here on Hackster.io.

    [youtube https://www.youtube.com/watch?v=OFCwgWvivHo]

    The post This Nicla Vision-based fire detector was trained entirely on synthetic data appeared first on Arduino Blog.

  • Small-footprint keyword spotting for low-resource languages with the Nicla Voice

    Reading Time: 2 minutes

    Speech recognition is everywhere these days, yet some languages, such as Shakhizat Nurgaliyev and Askat Kuzdeuov’s native Kazakh, lack sufficiently large public datasets for training keyword spotting models. To make up for this disparity, the duo explored generating synthetic datasets using a neural text-to-speech system called Piper, and then extracting speech commands from the audio with the Vosk Speech Recognition Toolkit.

    Beyond simply building a model to recognize keywords from audio samples, Nurgaliyev and Kuzdeuov’s primary goal was to also deploy it onto an embedded target, such as a single-board computer or microcontroller. Ultimately, they went with the Arduino Nicla Voice development board since it contains not just an nRF52832 SoC, a microphone, and an IMU, but an NDP120 from Syntiant as well. This specialized Neural Decision Processor helps to greatly speed up inferencing times thanks to dedicated hardware accelerators while simultaneously reducing power consumption. 

    With the hardware selected, the team began to train their model with a total of 20.25 hours of generated speech data spanning 28 distinct output classes. After 100 learning epochs, it achieved an accuracy of 95.5% and only consumed about 540KB of memory on the NDP120, thus making it quite efficient.

    [youtube https://www.youtube.com/watch?v=1E0Ff0ds160]

    To read more about Nurgaliyev and Kuzdeuov’s project and how they deployed an embedded ML model that was trained solely on generated speech data, check out their write-up here on Hackster.io.

    The post Small-footprint keyword spotting for low-resource languages with the Nicla Voice appeared first on Arduino Blog.

  • This recycling bin sorts waste using audio classification

    Reading Time: 2 minutes

    Although a large percentage of our trash can be recycled, only a small percentage actually makes it to the proper facility due, in part, to being improperly sorted. So in an effort to help keep more of our trash out of landfills without the need for extra work, Samuel Alexander built a smart recycling bin that relies on machine learning to automatically classify the waste being thrown in and sort it into separate internal compartments.

    Because the bin must know what trash is being tossed in, Alexander began this project by constructing a minimal rig with an Arduino Nano 33 BLE Sense to capture sounds and send them to an Edge Impulse project. From here, the samples were split into 60 one-second samples for each rubbish type, including cans, paper, bottles, and random background noise. The model, once trained, was then deployed to the Nano as a custom Arduino library.

    With the board now able to determine what type of garbage has been thrown in, Alexander got to work on the remaining portions of the smart bin. The base received a stepper motor which spins the four compartments to line up with a servo-actuated trap door while a LiPo battery pack provides power to everything for fully wireless operation.

    [youtube https://www.youtube.com/watch?v=roWY29RNFU0]

    To read more about how this bin was created, you can visit Alexander’s write-up here on Hackaday.io.

    The post This recycling bin sorts waste using audio classification appeared first on Arduino Blog.

  • Predicting soccer games with ML on the UNO R4 Minima

    Reading Time: 2 minutes

    Based on the Renesas RA4M1 microcontroller, the new Arduino UNO R4 boasts 16x the RAM, 8x the flash, and a much faster CPU compared to the previous UNO R3. This means that unlike its predecessor, the R4 is capable of running machine learning at the edge to perform inferencing of incoming data. With this fact in mind, Roni Bandini wanted to leverage his UNO R4 Minima by training a model to predict the likelihood of a FIFA team winning their match.

    Bandini began his project by downloading a dataset containing historical FIFA matches, including the country, team, opposing team, ranking, and neutral location. Next, the data was added to Edge Impulse as a time-series dataset, which feeds into a Keras classifier ML block and produces “win” and “lose/draw” values. Once trained, the model achieved an accuracy of 69% with a loss value of 0.58.

    Inputting the desired country and rank to make a prediction is done by making selections on a DFRobot LCD shield, and these values are then used to populate the input tensor for the model before it gets invoked and returns its classification results. Bandini’s device demonstrates how much more powerful the Arduino UNO R4 is over the R3, and additional information on the project can be found here in his post.

    [youtube https://www.youtube.com/watch?v=dYTukgY9kEU]

    The post Predicting soccer games with ML on the UNO R4 Minima appeared first on Arduino Blog.

  • Meet Arduino Pro at tinyML EMEA Innovation Forum 2023

    Reading Time: 3 minutes

    On June 26th-28th, the Arduino Pro team will be in Amsterdam for the tinyML EMEA Innovation Forum – one of the year’s major events in the world where AI models meet agile, low-power devices.

    This is an exciting time for companies like Arduino and anyone interested in accelerating the adoption of tiny machine learning: technologies, products, and ideas are converging into a worldwide phenomenon with incredible potential – and countless applications already.

    At the summit, our team will indeed present a selection of demos that leverage tinyML to create useful solutions in a variety of industries and contexts. For example, we will present:

    • A fan anomaly detection system based on the Nicla Sense ME. In this solution developed with SensiML, the Nicla module leverages its integrated accelerometer to constantly measure the vibrations generated by a computer fan. Thanks to a trained model, condition monitoring turns into anomaly detection – the system is able to determine whether the fan is on or off, notify users of any shocks, and even alert them if its super precise and efficient sensor detects sub-optimal airflow.
    • A vineyard pest monitoring system with the Nicla Vision and MKR WAN 1310. Machine vision works at the service of smart agriculture in this solution: even in the most remote field, a pheromone is used to attract insects inside a case lined with glue traps. The goal is not to capture all the insects, but to use a Nicla Vision module to take a snapshot of the captured bugs, recognize the ones that pose a real threat, and send updated data on how many specimens were found. New-generation farmers can thus schedule interventions against pests as soon as needed, before the insects get out of control and cause damage to the crops. Leveraging LoRa® connectivity, this application is both low-power and high-efficiency.
    • An energy monitoring-based anomaly detection solution for DC motors, with the Opta. This application developed with Edge Impulse leverages an Opta WiFi microPLC to easily implement industrial-level, real-time monitoring and fault detection – great to enable predictive maintenance, reducing downtime and overall costs. A Hall effect current sensor is attached in series with the supply line of the DC motor to acquire real-time data, which is then analyzed using ML algorithms to identify patterns and trends that might indicate faulty operation. The DC motor is expected to be in one of two statuses – ON or OFF – but different conditions can be simulated with the potentiometer. When unexpected electric consumption is shown, the Opta WiFi detects the anomaly and turns on a warning LED (a simplified sketch of this signal path follows below).
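
    The demo itself uses an Edge Impulse model; purely to illustrate the same signal path on the Opta, the sketch below swaps the ML step for plain threshold bands on the current reading. The pin choices, scale factor, and current values are all assumptions.

      // Hall effect current sensor assumed on Opta analog input A0,
      // warning indicator on the first user LED (LED_D0).
      const float ON_AMPS = 1.2;    // expected draw when the motor runs (placeholder)
      const float TOLERANCE = 0.3;  // deviation treated as anomalous (placeholder)

      float readCurrentAmps() {
        // Convert the raw ADC reading to amps; the factor depends on the sensor.
        return analogRead(A0) * (3.3f / 4095.0f) * 2.0f;  // conversion assumed
      }

      void setup() {
        pinMode(LED_D0, OUTPUT);
        analogReadResolution(12);
      }

      void loop() {
        float amps = readCurrentAmps();
        bool looksOff = amps < TOLERANCE;                 // motor idle
        bool looksOn = fabs(amps - ON_AMPS) < TOLERANCE;  // normal running draw
        digitalWrite(LED_D0, (looksOff || looksOn) ? LOW : HIGH);  // warn on anomaly
        delay(250);
      }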

    The Arduino Pro team is looking forward to meeting customers and partners in Amsterdam – championing open source, accessibility, and flexibility in industrial-grade solutions at the tinyML EMEA Innovation Forum!

    The post Meet Arduino Pro at tinyML EMEA Innovation Forum 2023 appeared first on Arduino Blog.

  • Enabling automated pipeline maintenance with edge AI

    Reading Time: 2 minutes

    Pipelines are integral to our modern way of life, as they enable the fast transportation of water and energy between central providers and the eventual consumers of that resource. However, the presence of cracks from mechanical or corrosive stress can lead to leaks, and thus wasted product or even potentially dangerous situations. Although methods using thermal cameras or microphones exist, they’re hard to use interchangeably across different pipeline types, which is why Kutluhan Aktar instead went with a combination of mmWave radar and an ML model running on an Arduino Nicla Vision board to detect these issues before they become a real problem.

    The project was originally conceived as an arrangement of parts on a breadboard, including a Seeed Studio MR60BHA1 60GHz radar module, an ILI9341 TFT screen, an Arduino Nano for interfacing with the sensor and display, and a Nicla Vision board. From here, Kutluhan designed his own Dragonite-themed PCB, assembled the components, and began collecting training and testing data for a machine learning model by building a small PVC model, introducing various defects, and recording the differences in data from the mmWave sensor. The system is able to do this by measuring the minute variations in vibrations as liquids move around, with increased turbulence often being correlated with defects.

    After configuring a time-series impulse, a classification model was trained with the help of Edge Impulse that would use the three labels (cracked, clogged, and leakage) to see if the pipe had any damage. It was then deployed to the Nicla Vision where it achieved an accuracy of 90% on real-world data. With the aid of the screen, operators can tell the result of the classification immediately, as well as send the data to a custom web application. 

    [youtube https://www.youtube.com/watch?v=ghSaefzzEXY]

    More details on the project can be found here on its Edge Impulse docs page.

    The post Enabling automated pipeline maintenance with edge AI appeared first on Arduino Blog.

  • These projects from CMU incorporate the Arduino Nano 33 BLE Sense in clever ways

    Reading Time: 4 minutes

    With an array of onboard sensors, Bluetooth® Low Energy connectivity, and the ability to perform edge AI tasks thanks to its nRF52840 SoC, the Arduino Nano 33 BLE Sense is a great choice for a wide variety of embedded applications. Further demonstrating this point, a group of students from the Introduction to Embedded Deep Learning course at Carnegie Mellon University have published the culmination of their studies through 10 excellent projects that each use the Tiny Machine Learning Kit and Edge Impulse ML platform.

    Wrist-based human activity recognition

    Traditional human activity tracking has relied on the use of smartwatches and phones to recognize certain exercises based on IMU data. However, few have achieved both continuous and low-power operation, which is why Omkar Savkur, Nicholas Toldalagi, and Kevin Xie explored training an embedded model on combined accelerometer and microphone data to distinguish between handwashing, brushing one’s teeth, and idling. Their project continuously runs inferencing on incoming data and then displays the detected action on a screen and via two LEDs.

    Categorizing trash with sound

    In some circumstances, such as smart cities or home recycling, knowing what types of materials are being thrown away can provide a valuable datapoint for waste management systems. Students Jacky Wang and Gordonson Yan created their project, called SBTrashCat, to recognize trash types by the sounds they make when being thrown into a bin. Currently, the model can recognize three different kinds, along with background noise and human voices to eliminate false positives.

    Distributed edge machine learning

    The abundance of Internet of Things (IoT) devices has meant an explosion of computational power and the amount of data needing to be processed before it can become useful. Because a single low-cost edge device does not possess enough power on its own for some tasks, Jong-Ik Park, Chad Taylor, and Anudeep Bolimera have designed a system where each device runs its own “slice” of an embedded model in order to make better use of available resources. 

    Predictive maintenance for electric motors

    Motors within an industrial setting require constant smooth and efficient operation in order to ensure consistent uptime, and recognizing when one is failing often necessitates manual inspection before a problem can be discovered. By taking advantage of deep learning techniques and an IMU/camera combination, Abhishek Basrithaya and Yuyang Xu developed a project that could accurately identify motor failure at the edge. 

    Estimating inventory in real-time with computer vision

    Warehouses greatly rely on having up-to-date information about the locations of products, inventory counts, and incoming/outgoing items. From these constraints, Netra Trivedi, Rishi Pachipulusu, and Cathy Tungyun collaborated to gather a dataset of 221 images labeled with the percentage of space remaining on the shelf. This enables the Nano 33 BLE Sense to use an attached camera to calculate empty shelf space in real-time. 

    Dog movement tracking

    Fitness trackers such as the Fitbit and Apple Watch have revolutionized personal health tracking, but what about our pets? Ajith Potluri, Eion Tyacke, and Parker Crain addressed this hole in the market by building a dog collar that uses the Nano’s IMU to recognize daily activities and send the results to a smartphone via Bluetooth. This gives the dog’s owner an overview of their pet’s day-to-day activity levels across weeks or months.

    Intelligent bird feeding system

    Backyard owners everywhere encounter the same problem: “How do I keep squirrels away from the birdfeeder while still letting birds in?” Eric Wu, Harry Rosmann, and Blaine Huey worked together on a Nano 33 BLE Sense-powered system that employs a camera module to identify whether the animal at the feeder is a bird or a squirrel. If it is the latter, an alarm is played from a buzzer. Otherwise, the bird’s species is determined through another model and an image is saved to an SD card for future viewing.

    Improving one’s exercise form

    Exercise, while being essential to a healthy lifestyle, must also be done correctly in order to avoid accidental injuries or chronic pain later on, and maintaining proper form is an easy way to facilitate this. By using both computer vision on an NVIDIA Jetson Nano and anomaly detection via an IMU on a Nano 33 BLE Sense, Addesh Bhargava, Varun Jain, and Rohan Paranjape built a project that is more accurate than typical approaches to squatting form detection.

    The post These projects from CMU incorporate the Arduino Nano 33 BLE Sense in clever ways appeared first on Arduino Blog.
