Drone flight controllers do so much more than simply receive signals and tell the drone which way to move. They’re responsible for constantly tweaking the motor speeds in order to maintain stable flight, even with shifting winds and other unpredictable factors. For that reason, most flight controllers are purpose-built for the job. But element14’s Milos Rasic was building his own drone from scratch and found that the Arduino Nicla Vision board makes a great flight controller.
To perform that critical job of keeping the drone stable, the flight controller needs precise information about the drone’s orientation and movement in three-dimensional space. Luckily, the Nicla Vision has an integrated six-axis motion sensor that is perfect for the job. It also has a powerful STM32H7 microcontroller, a built-in camera for machine vision and machine learning tasks, onboard Wi-Fi and Bluetooth connectivity, and more. And because it is very small (22.86×22.86mm) and very light, it is a good choice for a drone.
Rasic designed and built the entire drone himself, using 8520 brushed DC motors and a 3D-printed frame. That is cool, but it isn’t uncommon. The Nicla Vision-based flight controller is what really stands out.
Rasic developed a custom PCB for the Nicla Vision that acts as a breakout board and adds a few other useful components, such as power regulation and boost circuitry. But it didn’t need much, as the Nicla Vision already provides most of the necessary hardware.
While he could have turned to existing flight controller firmware, Rasic chose to develop his own, and that is the most impressive part of this project. That necessitated the creation of three PID (proportional-integral-derivative) controller algorithms for balancing pitch, roll, and yaw. These work together with control inputs to let the drone hover and move stably. The control signals come from a PC over Wi-Fi, with the pilot providing input through a USB flight stick.
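To give a feel for what those three controllers do, here is a minimal single-axis PID sketch in Python. This is not Rasic’s actual firmware; the gains, output limit, and motor-mixing signs are illustrative assumptions, but the structure (one PID instance per axis, mixed into four motor commands) is the standard quadcopter approach the article describes.

```python
class PID:
    """Single-axis PID controller; one instance each for pitch, roll, and yaw."""

    def __init__(self, kp, ki, kd, output_limit=500.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.output_limit = output_limit  # clamp to keep motor commands in range
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        # Derivative of the error; on the first call there is no history yet
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.output_limit, min(self.output_limit, out))


def mix(throttle, pitch, roll, yaw):
    """Combine the three axis corrections into four motor commands
    (sign convention here is a hypothetical X-quad layout)."""
    return [
        throttle + pitch + roll - yaw,  # front-left
        throttle + pitch - roll + yaw,  # front-right
        throttle - pitch + roll + yaw,  # rear-left
        throttle - pitch - roll - yaw,  # rear-right
    ]
```

In a real loop, `update()` would be called each cycle with the setpoint from the flight stick and the measurement from the six-axis motion sensor.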
The drone isn’t yet flying well, as PID tuning is a challenge for even the most experienced drone builders. But the foundation is there for Rasic to build on.
We’re thrilled to announce the launch of Nicla Sense Env: the latest addition to our portfolio of system-on-modules and sensor nodes, empowering innovators with the tools to unlock new possibilities. This tiny yet powerful sensor node is designed to elevate your environmental sensing projects to new heights. Whether you’re a seasoned professional or just starting your journey with Arduino, Nicla Sense Env is here to help sense the world around you with precision and ease.
“With Nicla Sense Env, we’re taking a critical step toward addressing one of the most pressing challenges of our time: protecting the environment. This powerful module allows developers to monitor air quality and environmental conditions with precision, paving the way for smarter, more sustainable solutions. By equipping professionals, educators, and makers with the right tools, we’re helping to build a future where technology and environmental stewardship go hand in hand. The compact nature of the Nicla form factor broadens the number of possible applications, spanning from prototyping to testing and volume production for OEMs.” – Fabio Violante, CEO of Arduino
“Renesas is proud to be the technology supplier of choice for the Arduino Nicla Sense Env, the new modular board to measure real-time indoor air quality, temperature, and humidity at the edge of the IoT network. Renesas’ system architecture, based on the RA2E1 microcontroller and environmental industrial-grade sensors with onboard AI including the ZMOD4410, ZMOD4510 and HS4001, enables Nicla Sense Env to be deployed in a variety of smart building applications, HVAC and air purifier systems, gas leak detection systems, fumes and fire detection systems, and smart city air quality management, with little integration effort.” — Brad Rex, Senior Director of Global Systems and Solutions Team at Renesas
Nicla Sense Env might be small in size, but it’s packed with advanced features that make it a powerhouse for environmental monitoring.
Monitor indoor and outdoor environments with AI-ready Renesas sensors. Nicla Sense Env offers temperature and humidity monitoring through the HS4001 sensor and AI-enabled gas detection with the ZMOD4410AI1V and ZMOD4510AI1V sensors. These provide real-time data on air quality, including the detection of TVOCs, NO2, O3, and other gases, both indoors and outdoors.
22.86 x 22.86 mm = huge potential. With the tiny form factor the Nicla family is known for, Nicla Sense Env can easily fit into any project, allowing you to integrate environmental sensing without compromising on space or design.
Robust, reliable, and ready to stand the test of time. Built with industrial-grade sensors, Nicla Sense Env is engineered for durability and accuracy, ensuring reliable performance even in challenging conditions. What’s more, it was designed for 24/7 operation: ultra-low power consumption makes it ideal for long-term deployments in any situation.
Fits right in, with seamless integration and wide compatibility. Whether you’re working with Portenta SOMs or MKR products, Nicla Sense Env connects effortlessly via ESLOV (I2C) or header pins. It’s also compatible with Arduino IDE and MicroPython, so you can start programming right out of the box. And of course, it works great with a variety of libraries and tutorials available through the Arduino ecosystem.
Real-world applications? We sense endless possibilities!
Nicla Sense Env is a versatile and accessible tool for environmental monitoring: it’s your new ally whether you’re developing something new or enhancing an existing project, working on a prototype or full-fledged industrial-scale solution.
Nicla Sense Env fits perfectly into HVAC systems, helping you monitor air quality, humidity, and temperature to keep smart buildings comfortable and compliant with environmental regulations. In air purifiers, it provides real-time data that allows for energy-efficient operation and better air quality by detecting harmful gases and adjusting the system as needed. When it comes to safety, it can play a critical role in detecting fumes and smoke, triggering early warnings to prevent potential hazards both indoors and outdoors. In industrial settings, it can monitor air quality and detect toxic substances, ensuring that machinery runs safely and efficiently. And these are only the first examples of applications that come to mind!
Add a breath of fresh air to your projects
We look forward to seeing how you will leverage the capabilities of the Arduino Nicla Sense Env to create innovative solutions – whether you’re developing climate control systems, enhancing air quality monitoring, or ensuring safety in industrial environments.
So, head to the Arduino Store to check out full product details and specifications, and let’s continue to push the boundaries of innovation together – one “tiny” step at a time!
Shortly after attending a recent tinyML workshop in São Paulo, Brazil, Joao Vitor Freitas da Costa was looking for a way to incorporate some of the technologies and techniques he learned into a useful project. Given that he lives in an area which experiences elevated levels of pickpocketing and automotive theft, he turned his attention to a smart car security system.
His solution to a potential break-in or theft of keys revolves around the incorporation of an Arduino Nicla Vision board running a facial recognition model that only allows the vehicle to start if the owner is sitting in the driver’s seat. The beginning of the image detection/processing loop involves grabbing the next image from the board’s camera and sending it to a classification model where it receives one of three labels: none, unknown, or Joao, the driver. Once the driver has been detected for 10 consecutive seconds, the Nicla Vision activates a relay in order to complete the car’s 12V battery circuit, at which point the vehicle can be started normally with the ignition.
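The 10-second confirmation step can be sketched as a small piece of gating logic. The label name `"driver"` and the frame period are illustrative assumptions; the write-up’s labels are none, unknown, or the driver’s name.

```python
class DriverGate:
    """Tracks consecutive 'driver' classifications before enabling the relay."""

    def __init__(self, confirm_seconds=10.0):
        self.confirm_seconds = confirm_seconds
        self.streak = 0.0  # seconds of uninterrupted driver detections

    def step(self, label, dt):
        # Any frame labeled 'none' or 'unknown' resets the countdown
        if label == "driver":
            self.streak += dt
        else:
            self.streak = 0.0
        return self.streak >= self.confirm_seconds  # True -> close the relay
```

Each camera frame feeds one classification label into `step()` along with the elapsed time since the previous frame; only a full uninterrupted streak completes the 12V circuit.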
Through this project, da Costa was able to explore a practical application of vision models at the edge to make his friend’s car safer to use. To see how it works in more detail, you can check out the video below and delve into the tinyML workshop he attended here.
Having constant, reliable access to a working HVAC system is vital for our way of living, as it provides a steady supply of fresh, conditioned air. In an effort to decrease downtime and maintenance costs from failures, Yunior González and Danelis Guillan have developed a prototype device that aims to leverage edge machine learning to predict issues before they occur.
The duo went with a Nicla Sense ME due to its onboard accelerometer, and after collecting many readings from each of the three axes at a 10Hz sampling rate, they imported the data into Edge Impulse to create the model. This time, rather than using a classifier, they utilized a K-means clustering algorithm — which is great at detecting anomalous readings, such as a motor spinning erratically, compared to a steady baseline.
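The core idea behind K-means anomaly detection can be shown in a few lines: readings that sit far from every cluster center learned during normal operation get flagged. This is an illustrative sketch, not the exact Edge Impulse implementation; the centroids and threshold here are hypothetical.

```python
import math

def nearest_centroid_distance(sample, centroids):
    """Euclidean distance from a 3-axis accelerometer sample
    to its closest learned cluster center."""
    return min(
        math.sqrt(sum((s - c) ** 2 for s, c in zip(sample, centroid)))
        for centroid in centroids
    )

def is_anomaly(sample, centroids, threshold):
    # A reading far from every cluster of normal behavior is flagged,
    # e.g. a motor spinning erratically versus its steady baseline
    return nearest_centroid_distance(sample, centroids) > threshold
```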
Once the Nicla Sense ME had detected an anomaly, it needed a way to send this data somewhere else and generate an alert. González and Guillan’s setup accomplishes the goal by connecting a Microchip AVR-IoT Cellular Mini board to the Sense ME along with a screen, and upon receiving a digital signal from the Sense ME, the AVR-IoT Cellular Mini logs a failure in an Azure Cosmos DB instance where it can be viewed later on a web app.
Whether it’s an elf that stealthily watches from across the room or an all-knowing Santa Claus that keeps a list of one’s actions, spying during the holidays is nothing new. But when it comes time to receive presents, the more eager among us might want to know a few days in advance what awaits under the tree, which is what prompted element14 Presents host Milos Rasic to build a robotic ornament equipped with vision and a compact movement system.
On the hardware side, Rasic went with an Arduino Nicla Vision board as it contains a camera and the ability to livestream the video feed over the network. A pair of continuous servo motors allow the mobile robot platform to move along the ground while another set of servos open the ornament’s trapdoor to expose the wheels and carefully lower it from the tree through a clever system of bands and thread.
The livestreaming portion of the project was based on an existing MJPEG RTP example that exposes a web API endpoint for fetching the latest frame from the Nicla’s onboard camera and delivering it via Wi-Fi. To control the robot, including winching, driving, and toggling the lights, Rasic created a Node-RED interface that sent MQTT messages to the Nicla.
To see more about how this creative device was designed, watch Rasic’s video below or read his full write-up here.
As Jallson Suryo discusses in his project, adding voice controls to our appliances typically involves an internet connection and a smart assistant device such as Amazon Alexa or Google Assistant. This means extra latency, security concerns, and increased expenses due to the additional hardware and bandwidth requirements. This is why he created a prototype based on an Arduino Nicla Voice that can provide power for up to four outlets using just a voice command.
Suryo gathered a dataset by repeating the words “one,” “two,” “three,” “four,” “on,” and “off” into his phone and then uploaded the recordings to an Edge Impulse project. From here, he split the files into individual words before rebalancing his dataset to ensure each label was equally represented. The classifier model was trained for keyword spotting and used Syntiant NDP120-optimal settings for voice to yield an accuracy of around 80%.
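The rebalancing step amounts to downsampling every keyword class to the size of the rarest one, so no label dominates training. A minimal sketch (the label names and random seed are illustrative):

```python
import random

def rebalance(samples_by_label, seed=0):
    """Downsample every class to the size of the smallest one
    so each keyword is equally represented in the training set."""
    rng = random.Random(seed)
    n = min(len(files) for files in samples_by_label.values())
    return {label: rng.sample(files, n) for label, files in samples_by_label.items()}
```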
Apart from the Nicla Voice, Suryo incorporated a Pro Micro board to handle switching the bank of relays on or off. When the Nicla Voice detects the relay number, such as “one” or “three”, it then waits until the follow-up “on” or “off” keyword is detected. With both the number and state now known, it sends an I2C transmission to the accompanying Pro Micro which decodes the command and switches the correct relay.
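That two-stage keyword flow (a relay number followed by an on/off word) is a small state machine. Here is a hedged sketch of the decoding logic; the dictionaries and the returned tuple format stand in for whatever byte layout the actual I2C transmission to the Pro Micro uses.

```python
RELAY_WORDS = {"one": 0, "two": 1, "three": 2, "four": 3}
STATE_WORDS = {"on": True, "off": False}

class CommandDecoder:
    """Pairs a relay-number keyword with the following on/off keyword."""

    def __init__(self):
        self.pending_relay = None

    def feed(self, keyword):
        if keyword in RELAY_WORDS:
            self.pending_relay = RELAY_WORDS[keyword]
            return None  # wait for the follow-up "on"/"off" word
        if keyword in STATE_WORDS and self.pending_relay is not None:
            command = (self.pending_relay, STATE_WORDS[keyword])
            self.pending_relay = None
            return command  # e.g. (2, True) -> relay 3 on, sent over I2C
        return None
```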
We recently showed you Becky Stern’s recreation of the “computer book” carried by Penny in the Inspector Gadget cartoon, but Stern didn’t stop there. She also built a replica of Penny’s most iconic gadget: her watch. Penny was a trendsetter and rocked that decades before the Apple Watch hit the market. Stern’s replica looks just like the cartoon version and even has some of the same features.
The centerpiece of this project is an Arduino Nicla Voice board. The Arduino team designed that board specifically for speech recognition on the edge, which made it perfect for recognizing Penny’s signature “come in, Brain!” voice command. Stern used Edge Impulse to train an AI to recognize that phrase as a wake word. When the Nicla Voice board hears that, it changes the image on the smart watch screen to a new picture of Brain the dog.
The Nicla Voice board and an Adafruit 1.69″ color IPS TFT screen fit inside a 3D-printed enclosure modeled on Penny’s watch from the cartoon. The enclosure even has a clever 3D-printed watch band with links connected by lengths of fresh filament. Power comes from a small lithium battery that also fits inside.
This watch and Stern’s computer book will both be part of an Inspector Gadget display put on by Digi-Key at Maker Faire Rome, so you can see it in person if you attend.
When dealing with indoor climate controls, there are several variables to consider, such as the outside weather, people’s tolerance to hot or cold temperatures, and the desired level of energy savings. Windows can make this extra challenging, as they let in large amounts of light/heat and can create poorly insulated regions, which is why Jallson Suryo developed a prototype that aims to balance these needs automatically through edge AI techniques.
Suryo’s smart building ventilation system utilizes two separate boards, with an Arduino Nano 33 BLE Sense handling environmental sensor fusion and a Nicla Voice listening for certain ambient sounds. Rain and thunder noises were uploaded from an existing dataset, split and labeled accordingly, and then used to train a Syntiant audio classification model for the Nicla Voice’s NDP120 processor. Meanwhile, weather and ambient light data was gathered using the Nano’s onboard sensors and combined into time-series samples with labels for sunny/cloudy, humid, comfortable, and dry conditions.
After deploying the boards’ respective classification models, Suryo added some additional code that sends data from the Nicla Voice to the Nano over I2C to indicate whether rain or thunderstorm sounds are present. If they are, the Nano can automatically close the window via servo motors, while other environmental factors set the position of the blinds. With this multi-sensor technique, a higher level of accuracy can be achieved for more precise control over a building’s windows, helping to lower HVAC costs.
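The fusion step on the Nano boils down to a small decision table. This sketch is an assumption about how the two signals might be combined; the specific blind positions and label names (drawn from the sunny/cloudy/humid/comfortable/dry classes mentioned above) are illustrative.

```python
def window_and_blinds(rain_detected, light_label):
    """Map the Nicla Voice rain flag and the Nano's ambient-light class
    to servo targets (0 = fully closed, 100 = fully open)."""
    # Close the window whenever rain or thunder sounds are heard
    window = 0 if rain_detected else 100
    # Blinds track ambient light: dim them when it is bright and sunny
    blinds = {"sunny": 30, "cloudy": 70}.get(light_label, 100)
    return window, blinds
```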
Your dog has nerve endings covering its entire body, giving it a sense of touch. It can feel the ground through its paws and use that information to gain better traction or detect harmful terrain. For robots to perform as well as their biological counterparts, they need a similar level of sensory input. In pursuit of that goal, the Autonomous Robots Lab designed TRACEPaw for legged robots.
TRACEPaw (Terrain Recognition And Contact force Estimation Paw) is a sensorized foot for robot dogs that includes all of the hardware necessary to calculate force and classify terrain. Most systems like this use direct sensor readings, such as those from force sensors. But TRACEPaw is unique in that it uses indirect data to infer this information. The actual foot is a deformable silicone hemisphere. A camera looks at that and calculates the force based on the deformation it sees. In a similar way, a microphone listens to the sound of contact and uses that to judge the type of terrain, like gravel or dirt.
To keep TRACEPaw self-contained, Autonomous Robots Lab chose to utilize an Arduino Nicla Vision board. That has an integrated camera, microphone, six-axis motion sensor, and enough processing power for onboard machine learning. Using OpenMV and TensorFlow Lite, TRACEPaw can estimate the force on the silicone pad based on how much it deforms during a step. It can also analyze the audio signal from the microphone to guess the terrain, as the silicone pad sounds different when touching asphalt than it does when touching loose soil.
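TRACEPaw learns the deformation-to-force mapping with a neural network, but classical contact mechanics shows why the deformation encodes the force at all. Under the Hertz contact model, the normal force on a soft hemisphere grows with deformation as F = (4/3)·E_eff·√R·d^1.5. The sketch below is purely illustrative physics, not the project’s code, and the material constants are made up.

```python
import math

def hertz_contact_force(deformation_m, radius_m, youngs_modulus_pa, poisson_ratio=0.48):
    """Estimate the normal force on a soft hemisphere from measured deformation
    using the Hertzian contact model (rigid flat ground assumed)."""
    e_eff = youngs_modulus_pa / (1.0 - poisson_ratio ** 2)
    return (4.0 / 3.0) * e_eff * math.sqrt(radius_m) * deformation_m ** 1.5
```

The nonlinear d^1.5 relationship is one reason a learned model is attractive: real silicone deviates from the ideal, and a network fit to camera measurements absorbs those deviations.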
More details on the project are available on GitHub.
The traditional method for changing a diaper starts when someone smells or feels that the diaper has been soiled, and while it isn’t the greatest process, removing the soiled diaper as soon as possible is important for avoiding rashes and infections. Justin Lutz has created an intelligent solution to this situation by designing a small device that alerts people over Bluetooth® when the diaper is ready to be changed.
Because a dirty diaper gives off volatile organic compounds (VOCs) and small particulates, Lutz realized he could use the Arduino Nicla Sense ME’s built-in BME688 sensor, which can measure VOCs, temperature/humidity, and air quality. After gathering 29 minutes of gas and air quality measurements in the Edge Impulse Studio for both clean and soiled diapers, he trained a classification model for 300 epochs, resulting in a model with 95% accuracy.
Based on his prior experience with the Nicla Sense ME’s BLE capabilities and MIT App Inventor, Lutz used the two to devise a small gadget that wirelessly connects to a phone app so it can send notifications when it’s time for a new diaper.
With climate change warming the planet and prolonged droughts greatly increasing the chances of wildfires, being able to quickly detect when a fire has broken out is vital for responding while it’s still at a containable stage. But one major hurdle to collecting machine learning model datasets on these types of events is that they can be quite sporadic. In his proof-of-concept system, engineer Shakhizat Nurgaliyev shows how he leveraged NVIDIA Omniverse Replicator to create an entirely generated dataset and then deploy a model trained on that data to an Arduino Nicla Vision board.
The project started out as a simple fire animation inside of Omniverse which was soon followed by a Python script that produces a pair of virtual cameras and randomizes the ground plane before capturing images. Once enough had been created, Nurgaliyev utilized the zero-shot object detection application Grounding DINO to automatically draw bounding boxes around the virtual flames. Lastly, each image was brought into an Edge Impulse project and used to develop a FOMO-based object detection model.
By taking this approach, the model achieved an F1 score of nearly 87% while also only needing a max of 239KB of RAM and a mere 56KB of flash storage. Once deployed as an OpenMV library, Nurgaliyev shows in his video below how the MicroPython sketch running on a Nicla Vision within the OpenMV IDE detects and bounds flames. More information about this system can be found here on Hackster.io.
Despite snoring itself being a relatively harmless condition, those who do snore while asleep can also be suffering from sleep apnea — a potentially serious disorder which causes the airway to repeatedly close and block oxygen from getting to the lungs. In an effort to alert those who might be unaware they have sleep apnea, Naveen Kumar devised a small device using an Arduino Pro Nicla Voice to detect when a person is snoring and gently alert them via haptic feedback in their pillow.
Although many boards have microphones and can run sound recognition machine learning models, the Nicla Voice contains a Syntiant NDP120 Neural Decision Processor that is specifically designed to accelerate deep learning workloads while also decreasing the amount of power needed to do so. Apart from the board, Kumar added an Adafruit DRV2605L haptic motor driver and haptic motor as a way to wake up the user without disturbing others nearby.
The model was created by first downloading a snoring dataset that contains hundreds of short samples of either snoring or non-snoring. After adding them to the Edge Impulse Studio, Kumar constructed an impulse from the Syntiant Audio blocks and trained a model that achieved a 94.6% accuracy against the test dataset. The code integrating the model continuously collects new audio samples from the microphone, passes them to the NDP120 for classification, and triggers the haptic motor if snoring is sensed.
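To avoid buzzing the pillow on a single noisy classification, a detector like this would typically smooth the per-window results before triggering the haptic motor. The write-up doesn’t spell out this step, so the sliding-window majority vote below, including the window size and threshold, is an illustrative assumption.

```python
from collections import deque

class SnoreDetector:
    """Smooths per-window audio classifications: only trigger haptics when
    most of the last N windows were labeled 'snoring'."""

    def __init__(self, window=4, threshold=0.75):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def update(self, label):
        self.history.append(1 if label == "snoring" else 0)
        full = len(self.history) == self.history.maxlen
        # Require a full window of history before ever triggering
        return full and sum(self.history) / len(self.history) >= self.threshold
```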
Speech recognition is everywhere these days, yet some languages, such as Shakhizat Nurgaliyev and Askat Kuzdeuov’s native Kazakh, lack sufficiently large public datasets for training keyword spotting models. To make up for this disparity, the duo explored generating synthetic datasets using a neural text-to-speech system called Piper, and then extracting speech commands from the audio with the Vosk Speech Recognition Toolkit.
Beyond simply building a model to recognize keywords from audio samples, Nurgaliyev and Kuzdeuov’s primary goal was to also deploy it onto an embedded target, such as a single-board computer or microcontroller. Ultimately, they went with the Arduino Nicla Voice development board since it contains not just an nRF52832 SoC, a microphone, and an IMU, but an NDP120 from Syntiant as well. This specialized Neural Decision Processor helps to greatly speed up inferencing times thanks to dedicated hardware accelerators while simultaneously reducing power consumption.
With the hardware selected, the team began to train their model with a total of 20.25 hours of generated speech data spanning 28 distinct output classes. After 100 learning epochs, it achieved an accuracy of 95.5% and only consumed about 540KB of memory on the NDP120, thus making it quite efficient.
To read more about Nurgaliyev and Kuzdeuov’s project and how they deployed an embedded ML model that was trained solely on generated speech data, check out their write-up here on Hackster.io.
Massimo Banzi and the Arduino Pro team will be crossing the Channel soon for a short tour of Southern England, touching base with long-time partners and meeting many new Arduino fans!
On July 11th at 4PM BST, Massimo has been invited to give a Tech Talk at Arm’s headquarters in Cambridge, as part of the company’s ongoing series where “leading experts cover topics across the industry, including artificial intelligence, automotive, consumer technology, infrastructure, and IoT.” Register now to attend the talk remotely, anywhere in the world.
Fancy a pint and a fireside chat? Come and meet us in London at the Cittie of Yorke, July 12th at 6PM in Holborn. You can learn about Arduino’s latest products and future vision, straight from the co-founder himself. The event is free and no registration is required, but admission will be regulated depending on the venue’s capacity – get there early to save your seat!
Finally, on July 13th we are excited to announce Arduino Pro will debut with a booth at Hardware Pioneers Max. Come visit us at the Business Design Center in London, booth #48, to chat with our experts. Not sure where to begin? Our demos make great conversation starters! At the show, look for these:
An industrial-grade computer built with a Portenta X8 and Max Carrier. The X8’s hybrid combination of microprocessor and microcontroller yields unprecedented flexibility to simultaneously run Linux apps and perform real-time tasks. Pair that with the Max Carrier and an 8″ screen and you have a secure and powerful computer to deploy advanced AI algorithms and ML on the edge. The Portenta X8 can also act as a multi-protocol gateway: data from onsite sensors and controllers (e.g. temperature, operation time, warning codes) are collected and processed thanks to the module’s supported industrial protocols, then sent to the Cloud or ERP system via Wi-Fi, LoRa®, NB-IoT or LTE Cat.M1.
A vibration-based condition monitoring system to detect anomalies with Nicla Sense ME. Developed in collaboration with SensiML, this solution makes great use of Nicla’s self-learning AI smart sensor – with integrated accelerometer and gyroscope – to measure vibrations generated by a computer fan. With the intelligence of a trained ML model, the system monitors the fan’s conditions and can determine whether it is on or off, if there are any shocks, and even if the airflow is simply sub-optimal.
A solution to monitor vineyard pests, thanks to Nicla Vision and MKR WAN 1310. Smart farming leverages machine vision and valuable data on pest behavior, seasonality, and population size to optimize manual interventions against the dangerous Popillia japonica. Insects are attracted by pheromones inside the trap, where a low-power sensing solution leverages an ML model trained, tested and deployed with Edge Impulse to recognize and count insects, sending real-time data via LoRa® connectivity to the Cloud for remote monitoring.
And don’t miss Massimo’s talk, “Everything you think you know about Arduino is WRONG” at 4PM (see the event agenda). It’s your chance to find out how the brand that made tech accessible for the first generation of makers is now evolving to support a new generation of innovators.
Shortly after setting the desired temperature of a room, a building’s HVAC system will engage and work to either raise or lower the ambient temperature to match. While this approach generally works well to control the local environment, the strategy also leads to tremendous wastes of energy since it is unable to easily adapt to changes in occupancy or activity. In contrast, Jallson Suryo’s smart HVAC project aims to tailor the amount of cooling to each zone individually by leveraging computer vision to track certain metrics.
Suryo developed his proof of concept as a 1:50 scale model of a plausible office space, complete with four separate rooms and a plethora of human figurines. Employing Edge Impulse and a smartphone, 79 images were captured and had bounding boxes drawn around each person for use in a FOMO-based object detection model. After training, Suryo deployed the OpenMV firmware onto an Arduino Nicla Vision board and was able to view detections in real-time.
The last step involved building an Arduino library containing the model and integrating it into a sketch that communicates with an Arduino Nano peripheral board over I2C by relaying the number of people per quadrant. Based on this data, the Nano dynamically adjusts one of four 5V DC fans to adjust the temperature while displaying relevant information on an OLED screen. To see how this POC works in more detail, you can visit Suryo’s write-up on the Edge Impulse docs page.
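The occupancy-to-fan mapping on the Nano can be sketched as a simple proportional rule. The scaling constants below are illustrative assumptions, not values from Suryo’s sketch; the 0-255 range matches the usual 8-bit PWM duty cycle.

```python
def fan_speed_pwm(people_count, max_people=5):
    """Scale a zone's 5V fan PWM duty (0-255) with its occupant count."""
    count = min(people_count, max_people)
    return int(255 * count / max_people)

def update_zones(counts_per_quadrant):
    # One fan per room; empty rooms get no cooling at all
    return [fan_speed_pwm(c) for c in counts_per_quadrant]
```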
Pipelines are integral to our modern way of life, as they enable the fast transportation of water and energy between central providers and the eventual consumers of that resource. However, the presence of cracks from mechanical or corrosive stress can lead to leaks, and thus waste of product or even potentially dangerous situations. Although methods using thermal cameras or microphones exist, they’re hard to use interchangeably across different pipeline types, which is why Kutluhan Aktar instead went with a combination of mmWave radar and an ML model running on an Arduino Nicla Vision board to detect these issues before they become a real problem.
The project was originally conceived as an arrangement of parts on a breadboard, including a Seeed Studio MR60BHA1 60GHz radar module, an ILI9341 TFT screen, an Arduino Nano for interfacing with the sensor and display, and a Nicla Vision board. From here, Kutluhan designed his own Dragonite-themed PCB, assembled the components, and began collecting training and testing data for a machine learning model by building a small PVC model, introducing various defects, and recording the differences in data from the mmWave sensor. The system is able to do this by measuring the minute variations in vibrations as liquids move around, with increased turbulence often being correlated with defects.
After configuring a time-series impulse, a classification model was trained with the help of Edge Impulse that would use the three labels (cracked, clogged, and leakage) to see if the pipe had any damage. It was then deployed to the Nicla Vision where it achieved an accuracy of 90% on real-world data. With the aid of the screen, operators can tell the result of the classification immediately, as well as send the data to a custom web application.
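Time-series impulses like this one typically start from simple statistics of each sample window, since turbulence from a crack or clog shows up as higher vibration energy. As a hedged illustration of that idea (Edge Impulse’s actual processing blocks compute richer spectral features), a root-mean-square feature looks like this:

```python
def rms(window):
    """Root-mean-square of one window of mmWave vibration samples;
    higher turbulence in a damaged pipe tends to raise this value."""
    return (sum(x * x for x in window) / len(window)) ** 0.5
```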
Mark your calendars: May 23rd-25th we’ll be at SPS Italia, one of the country’s leading fairs for smart, digital, sustainable industry and a great place to find out what’s new in automation worldwide. We expect a lot of buzz around AI for IoT applications – and, of course, we’ll come prepared to give our own, open-source perspective on the AIoT trend.
At Arduino Pro’s booth C012, pavilion 7, our experts will be presenting some of the latest additions to our ever-growing ecosystem, which includes everything companies need to fully embrace digital transformation with professional performance paired with Arduino’s ease of use and open-source philosophy. You can explore our complete digital brochure here, but let us point out some recent highlights.
Meet the Arduino Pro ecosystem at SPS Italia 2023
Over the years, Arduino Pro has built quite the presence on the market with SOMs like the Portenta H7 and X8, recently joined by the Portenta C33: a cost-effective, high-performance option that makes automation accessible to more users than ever, based on the RA6M5, an Arm® Cortex®-M33 microcontroller from Renesas.
Our Nicla family of ultra-compact boards also expanded: after Nicla Sense ME and Nicla Vision, Nicla Voice packs all the sensors, intelligence and connectivity you need for speech recognition on the edge, leveraging AI and ML.
What’s more, the Arduino ecosystem also includes turnkey solutions like the Portenta Machine Control and the new Opta, our very first microPLC, designed in partnership with Finder to support the Arduino programming experience with the main PLC standard languages – and available in 3 variants with different connectivity features: Opta Lite, Opta RS485, and Opta WiFi. Both the Portenta Machine Control and Opta can be programmed via the new PLC IDE, designed to help you boost production and build automation with your own Industry 4.0 control system.
Finally, since SPS Italia’s last edition we have launched Arduino Cloud for Business: a dedicated Cloud plan for professional users requiring advanced features for secure device management including OTA updates, user-friendly fleet management, and RBAC to safely share dashboards among multiple users and organizations. Specific optional add-ons allow you to further customize your solution with Portenta X8 Manager, LoRaWAN Device Manager or Enterprise Machine Learning Tool – accelerating your IoT success, whatever the scale of your enterprise may be.
Images from SPS Italy 2022
Team Arduino Pro at SPS Italy 2022
If you are attending SPS Italia, don’t miss the conference by our own Head of Arduino Pro Customer Success Andrea Richetta, joined by Product Managers Marta Barbero and Francesca Gentile (in Italian): on May 24th at 2:30pm they will dive deep into the tools Arduino Pro makes available for all companies ready to take part in the IoT revolution, with a unique combination of performance and ease of use. This is your chance to discover how you too can integrate safe and professional Industry 4.0 solutions in new or existing applications, quickly growing from prototype to large-scale production with sensors, machine vision, embedded machine learning, edge computing, and more.
Curious? Register to access the fair if you are an industry professional, and reach out to book a meeting with a member of our team.
The task of gathering enough data to classify distinct sounds not captured in a larger, more robust dataset can be very time-consuming, at least until now. In his write-up, Shakhizat Nurgaliyev describes how he used an array of AI tools to automatically create a keyword spotting dataset without the need for speaking into a microphone.
The pipeline is split into three main parts. First, the Piper text-to-speech engine was downloaded and configured via a Python script to output 904 distinct samples of the TTS model saying Nurgaliyev’s last name in a variety of ways to decrease overfitting. Next, background noise prompts were generated with the help of ChatGPT and then fed into AudioLDM which produces the audio files based on the prompts. Finally, all of the WAV files, along with “unknown” sounds from the Google Speech Commands Dataset, were uploaded to an Arduino ML project.
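As a rough sketch of the first stage, one could generate keyword variations and build Piper CLI invocations like this. The model filename, output paths, and the tiny variant list are illustrative assumptions, not the author’s actual script (which produced 904 samples):

```python
import shlex

# Hypothetical prompt variations; the real pipeline varied voices and
# phrasing far more widely to reduce overfitting.
keyword = "Nurgaliyev"
variants = [keyword, f"{keyword}.", f"{keyword}!", f"hey {keyword}"]

def piper_cmd(text, index, model="en_US-lessac-medium.onnx"):
    """Build a shell command for the Piper CLI; the model path is an
    assumption. Piper reads the text to synthesize from stdin."""
    out = f"samples/keyword_{index:04d}.wav"
    return f"echo {shlex.quote(text)} | piper --model {model} --output_file {out}"

commands = [piper_cmd(t, i) for i, t in enumerate(variants)]
```

Each command can then be executed with `subprocess.run(cmd, shell=True)`, leaving a folder of WAV files ready for upload.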
Training the model for later deployment on a Nicla Voice board was accomplished by adding a Syntiant audio processing block and then generating features to train a classification model. The resulting model could accurately determine when the target word was spoken around 96% of the time — all without the need for manually gathering a dataset.
Personal safety is a growing concern in a variety of settings: from high-risk jobs where HSE managers must guarantee workers’ security to the increasingly common work and study choices that drive family and friends far apart, sometimes leading to more isolated lives. In all of these situations, having a system capable of sensing and automatically contacting help in case of emergency can not only give people peace of mind, but save lives.
A particularly interesting case – as the world population ages – concerns the growing number of elderly people who are still healthy enough to be independent, yet must accept that their bodies are becoming weaker and their bones more fragile. This group is more prone to falls, which can result in fractures, head injuries, and other serious accidents that can severely impact quality of life. Detecting falls early allows for prompt medical attention and can prevent serious consequences. Additionally, fall detection can help identify underlying health issues or environmental factors that contribute to accidents, so that appropriate interventions can be put in place to avoid future falls.
A variety of person-down systems and fall detection methods exist, ranging from threshold-based algorithms to traditional machine learning applications. The biggest challenge they all share is a high rate of false-positive triggers. In other words, they cause unnecessary alarm and distress to both seniors and their caregivers, resulting in unwarranted actions.
Our solution
A tiny but mighty deployment device: Nicla Sense ME
For its project, Aizip selected the Nicla Sense ME: a compact module integrating multiple cutting-edge sensors from Bosch Sensortec, enabling sensor fusion applications directly at the edge. Additionally, the module houses an Arm® Cortex®-M4 microcontroller (nRF52832) with Bluetooth® 4.2. Thanks to its compact footprint, Aizip’s neural network model fits right into the microcontroller’s remaining resources. The result? A small and lightweight device that can be clipped onto one’s belt and worn all day without hassle, able to monitor health parameters and immediately alert assistance in case of a fall, with near-zero latency and full respect for privacy.
A more accurate fall detection algorithm
Aizip’s fall detection solution integrates a neural network algorithm with sensor fusion to greatly enhance detection accuracy, while remaining lightweight enough to run in real time on a microcontroller. The neural network within the microcontroller continuously processes sensor readings from the accelerometer (BHI260AP) and the pressure sensor (BMP390). Upon detecting a fall, the device sends an alarm via Bluetooth and activates an on-board LED. To minimize frequent false alarms that could significantly affect user experience, the neural network is optimized to differentiate real falls from abrupt movements such as jumping, sprinting, and quickly sitting down. The neural network-based algorithm excels at capturing subtle features in the input, leading to a substantial reduction in false alarm rates compared to threshold-based approaches or traditional machine learning algorithms.
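To make the data flow concrete, here is a minimal, purely illustrative Python sketch of a windowed detection loop. The heuristic inside `fall_score` is a stand-in for Aizip’s neural network (which is proprietary), and the thresholds and 50 Hz sample rate are assumptions:

```python
from collections import deque

WINDOW = 50  # ~1 s of fused samples at an assumed 50 Hz rate

def fall_score(accel_window, pressure_window):
    """Illustrative stand-in for the neural network: flags the classic
    fall signature of a free-fall dip followed by an impact spike,
    plus a barometric pressure rise as altitude drops."""
    mags = [(ax**2 + ay**2 + az**2) ** 0.5 for ax, ay, az in accel_window]
    free_fall = min(mags) < 3.0       # well below 1 g (9.8 m/s^2)
    impact = max(mags) > 25.0         # hard landing
    pressure_rise = pressure_window[-1] - pressure_window[0] > 5.0  # Pa
    return free_fall and impact and pressure_rise

accel = deque(maxlen=WINDOW)
pressure = deque(maxlen=WINDOW)

def on_sample(a, p):
    """Called for every accelerometer/pressure reading pair."""
    accel.append(a)
    pressure.append(p)
    if len(accel) == WINDOW and fall_score(accel, pressure):
        return "ALARM"  # on the real device: BLE alert + on-board LED
    return "OK"
```

A real model consumes the raw window directly and learns far subtler features than these hand-picked thresholds, which is precisely why it produces fewer false positives.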
Typical neural networks offer superior performance but also pose additional challenges when deployed on resource-constrained microcontroller devices, due to the extensive computing and memory resources they require. The simultaneous need for Bluetooth connectivity and sensor fusion further compounds this issue. However, Aizip’s proprietary, efficient neural network architecture makes this solution stand out: it minimizes resource requirements while maintaining high accuracy. The neural network is quantized to 8 bits and deployed onto the microcontroller using Aizip’s automated design tool. The implemented model achieves 94% fall detection accuracy and a <0.1% false-positive rate, all while using less than 3KB of RAM. A perfect fit for the low-consumption Nicla Sense ME!
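Per-tensor symmetric int8 quantization – the basic idea behind the 8-bit deployment step – can be sketched in a few lines. This is a generic illustration of the technique, not Aizip’s automated tool:

```python
def quantize_int8(weights):
    """Map float weights to int8 via a single per-tensor scale.
    Real deployment tools also quantize activations and fold the
    scales into the inference kernels."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights; the round-trip error per
    weight is bounded by scale / 2."""
    return [v * scale for v in q]
```

Storing each weight in one byte instead of four is what lets a model of this kind fit alongside the Bluetooth stack and sensor-fusion code in a few kilobytes of RAM.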
Solving it with Arduino Pro
Now let’s explore how we could put all of this together, and what we would need for deployment in terms of both hardware and software stack. The Arduino Pro ecosystem is the latest generation of Arduino solutions, bringing users simplicity of integration along with scalable, secure, professionally supported services.
When personal safety is a concern, smart wearables that leverage AI can help. And processing the data required to monitor health conditions and prevent falls doesn’t have to come at the expense of comfort or privacy. Thanks to extremely efficient models like Aizip’s and compact yet high-performance modules like Arduino Pro’s Nicla Sense ME, you can create a discreet and reliable solution able to immediately call for help when needed (and only when needed).
Hazardous pollution in the form of excess CO2, nitrogen dioxide, microscopic particulates, and volatile organic compounds has become a growing concern, especially in developing countries where access to cleaner technologies might not be available or widely adopted. Krazye Karthik’s Environmental Sense Mask (ES-Mask) focuses on bringing attention to these harmful compounds by displaying ambient air quality measurements in real-time.
In order to get values for the air quality index (AQI), CO2, volatile organic compounds (VOCs), and temperature/humidity, Karthik selected the Nicla Sense ME due to its onboard Bosch BME688 sensor module. In addition to providing this data over Bluetooth® Low Energy, the Nicla Sense ME also sends it over I2C to a MKR WiFi 1010, which is responsible for parsing the data. From there, a comment is generated for the current AQI, ranging from “excellent” to “hazardous.” This reading is displayed on an attached OLED screen, and a ring of 24 NeoPixel LEDs is illuminated according to the level of dangerous pollutants.
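The AQI-to-comment mapping could look something like the following sketch. The band thresholds here are assumptions loosely based on the BSEC IAQ scale commonly used with the BME688; the article doesn’t give Karthik’s exact cutoffs:

```python
# Assumed index bands (BSEC-style IAQ scale); not the project's
# actual thresholds.
AQI_BANDS = [
    (50, "excellent"),
    (100, "good"),
    (150, "lightly polluted"),
    (200, "moderately polluted"),
    (300, "heavily polluted"),
]

def aqi_comment(iaq):
    """Return the first band label whose upper limit covers the
    reading; anything above the last band is 'hazardous'."""
    for limit, label in AQI_BANDS:
        if iaq <= limit:
            return label
    return "hazardous"
```

The same lookup can drive both the OLED text and the NeoPixel ring color.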
Beyond the microcontroller and sensor components, Karthik added a 5V fan to a mask along with a few air filters to help increase the cleanliness of the air he was breathing. Last of all, he built a mobile app that grabs the data via BLE and shows it in an organized format.
When a baby cries, it is almost always because something is wrong – hunger, thirst, stomach pain, or too much noise, among other things. In his project, Nurgaliyev Shakhizat demonstrated how he was able to leverage ML tools to build a cry-detection system without collecting real-world data himself.
The process is as follows: ChatGPT generates a series of text prompts that all involve a crying baby in some manner. These prompts are then passed to AudioLDM which creates sounds according to the prompts. Finally, Shakhizat used the Arduino Cloud’s Machine Learning Tools integration, powered by Edge Impulse, to train a tinyML model for deployment onto an Arduino Nicla Voice board. To create the sounds themselves, Shakhizat configured a virtual Python environment with the audioldm package installed. His script takes the list of prompts, executes them within an AudioLDM CLI command, and saves the generated sound data as a WAV file.
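The prompt-to-audio step might be sketched like this. The two prompts and the output layout are illustrative, and the `-t` text flag follows the audioldm package’s documented command-line usage:

```python
# Example prompts of the kind ChatGPT might generate; the actual
# list came from the ChatGPT session described above.
prompts = [
    "a baby crying loudly in a quiet room",
    "an infant sobbing with household noise in the background",
]

def audioldm_cmd(prompt, index):
    """Build an argument list for the audioldm CLI; the --save_path
    naming scheme is illustrative."""
    return ["audioldm", "-t", prompt, "--save_path", f"cries/{index:03d}/"]

cmds = [audioldm_cmd(p, i) for i, p in enumerate(prompts)]
# each entry can then be run with subprocess.run(cmd)
```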
Once this process was done, he configured a project in the Edge Impulse Studio which trains a classifier model. The result after training completed was a model that could accurately distinguish between background noise and a crying baby 90% of the time, and deploying it onto the Arduino Nicla Voice showed the effectiveness of how synthetic datasets and embedded models can be used in the real world.
Maintaining accurate records of both the quantities and locations of inventory is vital to running any business operation efficiently and at scale. By leveraging new technologies such as AI and computer vision, items in warehouses, on store shelves, and even in a customer’s hand can be better managed and used to forecast changes in demand. As demonstrated by the Zalmotek team, a tiny Arduino Nicla Vision board can be tasked with recognizing different types of containers and sending the resulting data to the cloud automatically.
The hardware itself was quite simple, as the Nicla Vision already contained the processor, camera, and connectivity required for the proof-of-concept. Once configured, Zalmotek used the OpenMV IDE to collect a large dataset featuring images of each type of item. Bounding boxes were then drawn using the Edge Impulse Studio, after which a FOMO-specific MobileNetV2 0.35 model was trained and could accurately determine the locations and quantities of objects in each test image.
Deploying the model was simple thanks to the OpenMV firmware export option, as it could be easily incorporated into the main Python script. In essence, the program continually gathers new images, passes them to the model, and gets the number of detected objects. Afterwards, these counts are published via the MQTT protocol to a cloud service for remote viewing.
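A minimal sketch of the payload-formatting step might look like this; the device ID, field names, and topic are illustrative, not Zalmotek’s actual scheme:

```python
import json
import time

def count_payload(counts, device_id="nicla-vision-01"):
    """Format detected-object counts as a JSON string suitable for
    publishing as an MQTT message body."""
    return json.dumps({
        "device": device_id,
        "ts": int(time.time()),        # Unix timestamp of the reading
        "counts": counts,              # e.g. {"bottle": 3, "box": 1}
    })

# Publishing could then be done with any MQTT client, e.g.:
# client.publish("warehouse/inventory", count_payload({"bottle": 3}))
```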