Tag: TinyML

  • Controlling a bionic hand with tinyML keyword spotting

    Reading Time: 2 minutes

    Arduino Team, August 31st, 2022

    Traditional methods of sending movement commands to prosthetic devices often include electromyography (reading electrical signals from muscles) or simple Bluetooth modules. But in this project, Ex Machina has developed an alternative strategy that lets users perform various gestures through voice commands.

    The hand itself was made from five SG90 servo motors, with each one moving an individual finger of the larger 3D-printed hand assembly. They are all controlled by a single Arduino Nano 33 BLE Sense, which collects voice data, interprets the gesture, and sends signals to both the servo motors and an RGB LED for communicating the current action.

    In order to recognize certain keywords, Ex Machina collected 3.5 hours of audio data split amongst six total labels that covered the words “one,” “two,” “OK,” “rock,” “thumbs up,” and “nothing” — all in Portuguese. From here, the samples were added to a project in the Edge Impulse Studio and sent through an MFCC (Mel-frequency cepstral coefficients) processing block to extract voice features. Finally, a Keras model was trained on the resulting features and yielded an accuracy of 95%.

    Once deployed to the Arduino, the model is continuously fed new audio data from the built-in microphone so that it can infer the correct label. Finally, a switch statement sets each servo to the correct angle for the gesture. For more details on the voice-controlled bionic hand, you can read Ex Machina’s Hackster.io write-up here.
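    The switch statement described above amounts to a lookup from the recognized keyword label to five servo angles. The sketch below illustrates that idea in Python rather than the project's actual Arduino firmware; the label names and angle values are illustrative assumptions.

```python
# Hypothetical mapping from a recognized keyword label to the five
# finger-servo angles in degrees (thumb, index, middle, ring, pinky).
# These values are assumptions, not Ex Machina's firmware constants.
GESTURES = {
    "one":       [180, 0, 180, 180, 180],
    "two":       [180, 0, 0, 180, 180],
    "ok":        [90, 90, 0, 0, 0],
    "rock":      [180, 0, 180, 180, 0],
    "thumbs_up": [0, 180, 180, 180, 180],
    "nothing":   [0, 0, 0, 0, 0],
}

def angles_for(label):
    """Return the servo angles for a label, defaulting to a relaxed hand."""
    return GESTURES.get(label, GESTURES["nothing"])
```

    Any label the model has never seen falls back to the relaxed "nothing" pose, mirroring a switch statement's default case.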

    [youtube https://www.youtube.com/watch?v=0mc9VOxiwgo?feature=oembed&w=500&h=281]

    Website: LINK

  • Reading analog gauges with the Nicla Vision

    Reading Time: 2 minutes

    Arduino Team, August 13th, 2022

    Analog instruments are everywhere and used to measure pressure, temperature, power levels, and much more. With the advent of digital sensors, many of these quickly became obsolete, and the ones that remain require either conversion to a digital format or frequent human monitoring. However, the Zalmotek team has come up with a solution that incorporates embedded machine learning and computer vision in order to autonomously read these values.

    Mounted inside of a custom enclosure, their project relies on an Arduino Pro Nicla Vision board, which takes periodic images for further processing and inference. They began by generating a series of synthetic gauge pictures that have the dial at various positions, and labeled them either low, normal, or high. This collection was then imported into the Edge Impulse Studio and used to train a machine learning model on samples downscaled to 96x96px to fit the board’s limited memory. Once created, the neural network could successfully determine the gauge’s state about 92% of the time.

    The final step of this project involved deploying the firmware to the Nicla Vision and setting the image size to the aforementioned 96x96px. With this computer vision technique, frequent readings can be taken while also minimizing cost and power consumption.
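    Since the training pictures are synthetic, the low/normal/high labels can be derived directly from the dial angle used to generate each image. A minimal sketch of that labeling step, with threshold angles that are assumptions rather than Zalmotek's actual values:

```python
def gauge_label(angle_deg, low_max=60.0, high_min=120.0):
    """Label a synthetic gauge image by the dial angle (0-180 degrees)
    it was rendered with. The thresholds are illustrative assumptions."""
    if angle_deg < low_max:
        return "low"
    if angle_deg < high_min:
        return "normal"
    return "high"
```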

    More details on Zalmotek’s system can be found here on its Edge Impulse docs page.

    Website: LINK

  • This system detects leaks by listening to water flowing through pipes

    Reading Time: 2 minutes

    Arduino Team, July 27th, 2022

    Damaged, leaking pipes are not only a nuisance to clean up after, but they can also create major inefficiencies within water delivery systems, leading to a loss of both the water itself and the electricity required to disinfect and pump it. Over the past decade, water pipeline detection systems have been upgraded to include state-of-the-art sensors, which can precisely locate where a leak is. Because such systems carry a high price, Manivannan Sivan designed his own leak detection system that can be produced for far less.

    Sivan’s project involves placing two microphones next to a pipe and reading the acoustic signatures they pick up. For this task, he chose a single Arduino Portenta H7 and an accompanying Vision Shield, thanks to the shield’s pair of onboard mics and the board’s fast processor. He then collected samples for no water flow, water flow without leaks, and water flow with leaks. The resulting machine learning model achieved an accuracy of 99.1% and a mere 0.02 loss.

    After deploying the model to his board and placing it near a pipe, the Portenta now had the ability to identify when the pipe started to leak — and potentially notify someone thanks to its wireless connectivity, if Sivan decided to add that feature.
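    A deployment like this typically avoids false alarms by requiring several consecutive "leak" classifications before alerting. The write-up doesn't describe Sivan's exact logic, so the following is a generic sketch; the class name and window size are assumptions.

```python
from collections import deque

class LeakMonitor:
    """Raise an alert only after a full window of consecutive 'leak'
    classifications, suppressing one-off misclassifications."""

    def __init__(self, window=5):
        self.recent = deque(maxlen=window)

    def update(self, label):
        """Record one inference result; return True when an alert fires."""
        self.recent.append(label)
        return (len(self.recent) == self.recent.maxlen
                and all(l == "leak" for l in self.recent))
```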

    [youtube https://www.youtube.com/watch?v=EzDfbBDurm4?feature=oembed&w=500&h=281]

    For more details on this project, read its write-up here on the Edge Impulse blog.

    Website: LINK

  • Industrial IoT anomaly detection on microcontrollers

    Reading Time: 2 minutes

    Arduino Team, July 22nd, 2022

    Consumer IoT (Internet of Things) devices provide convenience, and the consequences of a failure are usually minimal. But industrial IoT (IIoT) devices monitor complex and expensive machinery. When that machinery fails, it can cost serious money. For that reason, it is important that technicians get alerts as soon as an abnormality in operation occurs. That’s why Tomasz Szydlo at AGH University of Science and Technology in Poland researched IIoT anomaly detection techniques for low-cost microcontrollers.

    When you only have a single sensor value to monitor, it is easy to detect an anomaly. For example, it is easy for your car to identify when engine temperature exceeds an acceptable range and then turn on a warning light. But this becomes a serious challenge when a complex machine has many sensors with values that vary depending on conditions and jobs — like a car engine becoming hot because of hard acceleration or high ambient temperatures, as opposed to a cooling problem. 

    In complex scenarios, it is difficult to hard code acceptable ranges to account for every situation. Fortunately, that is exactly the kind of problem that machine learning excels at solving. Machine learning models don’t understand the values they see, but they are very good at recognizing patterns and when values deviate from those patterns. Such a deviation indicates an anomaly that should raise a flag so a technician can look for an issue. 
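    The pattern-deviation idea can be made concrete with a minimal detector that learns the mean and spread of each sensor channel from normal operating data, then flags samples far outside that pattern. FogML's actual algorithms are more sophisticated; this sketch only illustrates the principle described above.

```python
import math

class ZScoreAnomalyDetector:
    """Flag a sample as anomalous if any channel deviates from the
    learned mean by more than `threshold` standard deviations."""

    def fit(self, samples):
        n, dims = len(samples), len(samples[0])
        self.mean = [sum(s[d] for s in samples) / n for d in range(dims)]
        # Guard against zero spread with a fallback of 1.0.
        self.std = [
            math.sqrt(sum((s[d] - self.mean[d]) ** 2 for s in samples) / n) or 1.0
            for d in range(dims)
        ]
        return self

    def is_anomaly(self, sample, threshold=3.0):
        return any(abs(x - m) / s > threshold
                   for x, m, s in zip(sample, self.mean, self.std))
```

    Note how the detector never needs to "understand" what a vibration reading means; it only measures how far a new sample sits from the pattern it was fitted on.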

    Szydlo’s research focuses on running machine learning models on IIoT hardware for this kind of anomaly detection. In his tests, he used an Arduino Nano 33 BLE board as an IIoT accelerometer monitor for a simple USB fan. He employed FogML to create a machine learning model efficient enough to run on the relatively limited hardware of the Nano’s nRF52840 microcontroller.

    The full results are available in Szydlo’s paper, but his experiments were a success. This affordable hardware was able to detect anomalies with the fan speed. This is a simple application, but as Szydlo notes, it is possible to expand the concept to handle more complex machinery.

    Image: arXiv:2206.14265 [cs.LG]

    Website: LINK

  • This device detects different household sounds through tinyML

    Reading Time: 2 minutes

    Arduino Team, July 14th, 2022

    For people who suffer from hearing loss or other auditory issues, maintaining situational awareness can be vital to staying safe and autonomous. This problem is what inspired the team of Lucia Camacho Tiemblo, Spiros Kotsikos, and Maria Alifieri to create a small device that can alert users to certain household sounds on their phone.

    The team decided to incorporate embedded machine learning in order to recognize ambient sounds, so they opted for an Arduino Nano 33 BLE Sense. After recording many samples of various events, such as a conversation, knocking on the door, the TV, a doorbell, and silence, they trained a tinyML model with the help of the Edge Impulse Studio. The resulting model was able to successfully differentiate between events around 90% of the time.

    Beyond merely outputting the recognized audio to a serial monitor, the team’s firmware also allows for the results to be sent over Bluetooth® Low Energy where a connected smartphone can read the data and display it. The mobile app contains three simple buttons for accessing a list of sounds, certain settings, and a submenu for managing the connection with the Arduino.

    You can read more about this accessibility project here on Hackster.io.

    Website: LINK

  • Detecting harmful gases with a single sensor and tinyML

    Reading Time: 2 minutes

    Arduino Team, July 11th, 2022

    Experiencing a chemical and/or gas leak can be potentially life-threatening to both people and the surrounding environment, which is why detecting them as quickly as possible is vital. But instead of relying on simple thresholds, Roni Bandini was able to come up with a system that can spot custom leaks by recognizing subtle changes in gas level values through machine learning.

    To accomplish this, Bandini took a single MiCS-4514 and connected it to an Arduino Nano 33 BLE Sense, along with an OLED screen, fan, and buzzer for sending out alerts. The MiCS-4514 is a multi-gas sensor that is able to detect methane, ethanol, hydrogen, ammonia, carbon monoxide, and nitrogen dioxide. This capability means that explosive and/or poisonous gas can be identified well before it builds up to a critical level indoors.

    Once several samples had been collected that ranged from typical to dangerous levels, Bandini fed the dataset into the Edge Impulse Studio in order to train a neural network classifier on the time-series samples. Whenever the device starts up, the sensor is calibrated for a preset amount of time and can then distinguish harmful air quality within 1.5 seconds. The display shows any high sensor readings and whether a leak has been detected.
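    Classifying time-series gas data usually starts by flattening a window of recent multi-channel readings into a feature vector for the model. A simplified sketch of that preprocessing idea (Edge Impulse's actual processing blocks differ, and the window size here is an assumption):

```python
def window_features(readings, window=10):
    """Summarize the most recent `window` multi-gas readings into
    simple per-channel features: mean, min, and max.

    `readings` is a list of samples, each a list with one value per
    gas channel (e.g. [methane_ppm, co_ppm, ...])."""
    recent = readings[-window:]
    channels = list(zip(*recent))  # regroup samples into per-channel series
    feats = []
    for ch in channels:
        feats.extend([sum(ch) / len(ch), min(ch), max(ch)])
    return feats
```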

    To see more about this project, you can read Bandini’s tutorial or watch this demonstration video below.

    [youtube https://www.youtube.com/watch?v=3ViArIxUhDY?feature=oembed&w=500&h=281]

    Website: LINK

  • Meet Nikola, a camera-enabled smart companion robot

    Reading Time: 2 minutes

    Arduino Team, June 6th, 2022

    For this year’s Embedded Vision Summit, Hackster.io’s Alex Glow created a companion robot successor to her previous Archimedes bot called Nikola. This time, the goal was to embed a privacy-focused camera and microphone system as well as several other components that would increase its adorability.

    The vision system uses a Nicla Vision board to read a QR code within the current frame thanks to the OpenMV IDE and the code Glow wrote. After it detects a code containing the correct URL, it activates Nikola’s red LED to signify that it’s taking a photo and storing it automatically.

    Apart from the vision portion, Glow also included a pair of ears that move with the help of two micro servos controlled by a Gemma M0 board from Adafruit, which give it some extra character. And lastly, Nikola features an internal mount that holds a microphone for doing interviews, thus letting the bot itself get held near the interviewee. 

    Nikola is a great compact and fuzzy companion robot that can be used not just for events, but also for interviews and simply meeting people. You can see how Glow made the robot in more detail here on Hackster.io or by watching her video below!

    [youtube https://www.youtube.com/watch?v=E4bL_V7fydc?feature=oembed&w=500&h=281]

    Website: LINK

  • These intelligent slippers sense regular activities and falls using machine learning

    Reading Time: 2 minutes

    Arduino Team, June 1st, 2022

    Activity monitors such as smartwatches, rings, and pendants are often considered cumbersome or too difficult to keep track of, especially for the elderly with memory or dexterity problems. This is why the team of Jure Špeh, Jan Adamic, Luka Mali, and Blaz Ardaljon Mataln Smehov decided to create the SmartSlippers project, which is a far more integrated method for detecting steps and falls.

    The hardware portion of the SmartSlippers prototype is just a Nano 33 BLE Sense board, thanks to its onboard inertial measurement unit (IMU) and Bluetooth® Low Energy capability. At first, the team collected 14 minutes of data within the Edge Impulse Studio covering five different types of movement: walking, running, stairs, falling, and idle. From here, they trained a neural network on these samples, which resulted in an accuracy of around 84%.

    With the Nano now able to detect motion, the next step was to get the board to talk with a phone so that emergency services could be called in the event of a fall. Their firmware sets up a BLE device and adds a characteristic that sends data to the connected phone when an event occurs. And finally, their custom Android mobile app displays the current status of the SmartSlippers and can even call someone if a fall is detected.
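    The firmware's event logic described above can be sketched as a small dispatch function: any non-idle classification is forwarded over the BLE characteristic, with falls marked as emergencies for the app to act on. This is an illustrative Python sketch, not the team's actual code; `notify` stands in for the BLE write.

```python
def handle_motion(label, notify):
    """Forward a non-idle movement classification to the phone via the
    `notify` callback (a stand-in for the BLE characteristic write).
    Falls are flagged as emergencies; idle events are dropped."""
    if label == "idle":
        return None
    event = {"activity": label, "emergency": label == "falling"}
    notify(event)
    return event
```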

    To read more about the project, you can visit its write-up here on Hackster.io.

    [youtube https://www.youtube.com/watch?v=jR502K2P8eA?feature=oembed&w=500&h=281]

    Website: LINK

  • SafeDrill uses tinyML to encourage proper drilling technique

    Reading Time: 2 minutes

    Arduino Team, May 31st, 2022

    For those new to DIY projects that involve the use of power tools, knowing when a tool is being used in an unsafe manner is of utmost importance. For many, this can include employing the wrong drill bit for a given material, such as a concrete bit in a soft wood plank. This is why a team from the University of Ljubljana created the SafeDrill, which aims to quickly determine when misuse is occurring and notify the user.

    The team’s prototype consists of a small 3D-printed enclosure that contains a Nano 33 BLE Sense while allowing a USB cable to attach for power at the front. Once the enclosure was attached to a cordless drill with a pair of zip ties, the team captured 100 seconds of data for each of nine classes: three drill bits combined with three types of materials. From here, they trained a model in the Edge Impulse Studio in order to recognize the material/bit combination.

    The last part of the SafeDrill project was the mobile app. Built with the help of MIT App Inventor, the application receives data over Bluetooth® Low Energy from the Nano 33 BLE Sense and displays it to the user. For safe combinations, the text appears green whereas unsafe combinations show up in red.
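    Deciding which of the nine recognized bit/material combinations is safe reduces to a lookup table. The sketch below is an assumption for illustration; the team's class names and safety judgments may differ.

```python
# Hypothetical set of safe bit/material pairings; the remaining
# combinations of the nine classes are treated as unsafe.
SAFE_COMBOS = {
    ("wood_bit", "wood"),
    ("metal_bit", "metal"),
    ("masonry_bit", "concrete"),
}

def classify_combo(bit, material):
    """Map a recognized bit/material combination to the app's color cue:
    'safe' (green text) or 'unsafe' (red text)."""
    return "safe" if (bit, material) in SAFE_COMBOS else "unsafe"
```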

    To read more about SafeDrill, check out the team’s tutorial on Hackster.io.

    [youtube https://www.youtube.com/watch?v=_f-P35rEFbA?feature=oembed&w=500&h=281]

    Website: LINK

  • This Arduino device can anticipate power outages with tinyML

    Reading Time: 2 minutes

    Arduino Team, May 24th, 2022

    Our reliance on electronic devices and appliances has never been higher, so when the power goes out, it can quickly become an unpleasant and inconvenient situation, especially for those who are unable to prepare in time. To help combat this problem, Roni Bandini has devised a device he calls “EdenOff,” which is placed inside an electrical outlet and utilizes machine learning at the edge to intelligently predict when an outage might occur.

    Bandini began by using Edge Impulse to create a realistic dataset consisting of three columns that pertain to different aspects of an outlet: its voltage, the ambient temperature, and how long the service has been working correctly. After training a model on one dataset for regular service and another for a failure, his model achieved an excellent F-1 score of 0.96, indicating that it can forecast when an outage might take place with a high degree of accuracy.
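    For context, an F-1 score is the harmonic mean of precision and recall, so a score near 0.96 implies both few missed outages and few false alarms. A small worked example; the counts are illustrative, not Bandini's actual confusion matrix.

```python
def f1_score(tp, fp, fn):
    """Compute the F-1 score from true positives, false positives,
    and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 48 outages caught, 2 false alarms, 2 missed outages
# gives precision = recall = 0.96, hence F-1 = 0.96.
```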

    Bandini then deployed this model to a DIY setup by first connecting a Nano 33 BLE Sense with its onboard temperature sensor to an external ZMPT101B voltage sensor. Users can view the device in operation with its seven-segment display and hear the buzzer if a failure is detected. Lastly, the entire package is portable thanks to its LiPo battery and micro-USB charging circuitry.

    For more details on this project, you can watch its demonstration video below and view its public project within the Edge Impulse Studio.

    [youtube https://www.youtube.com/watch?v=Fs-zV0d9BIU?feature=oembed&w=500&h=281]

    Website: LINK

  • Introvention is a wearable device that can help diagnose movement disorders early

    Reading Time: 2 minutes

    Arduino Team, May 17th, 2022

    Conditions such as Parkinson’s disease and essential tremor often present themselves as uncontrollable movements or spasms, especially near the hands. Recognizing these troubling symptoms when they first appear allows treatment to begin earlier, which improves the patient’s prognosis compared to later detection. Nick Bild had the idea to create a small wearable band called “Introvention” that could sense when smaller tremors occur in hopes of catching them sooner.

    An Arduino Nano 33 IoT was used to both capture the data and send it to a web server, since it contains an onboard accelerometer and has Wi-Fi support. At first, Bild collected many samples of typical activities using the Edge Impulse Studio and fed them into a K-means clustering algorithm, which detects when a movement is outside of the “normal” range. Once deployed to the Arduino, the edge machine learning model can run entirely on the board without the need for an external service.

    If anomalous movements are detected by the model, a web request gets sent to a custom web API running on the Flask framework where it’s then stored in a database. A dashboard shows a chart that plots the number of events over time for easily seeing trends.
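    The K-means-based anomaly check boils down to measuring how far a movement's feature vector lies from every learned cluster of "normal" activity. A minimal sketch of that scoring step; the centroids and threshold below are illustrative, not Bild's trained values.

```python
import math

def nearest_centroid_distance(sample, centroids):
    """Distance from a feature vector to the closest K-means centroid."""
    return min(math.dist(sample, c) for c in centroids)

def is_anomalous(sample, centroids, threshold):
    """A movement far from every cluster of normal activity is flagged
    as anomalous and would trigger the web request to the Flask API."""
    return nearest_centroid_distance(sample, centroids) > threshold
```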

    To read more about Bild’s project, check out its write-up here on Hackster.io.

    Website: LINK

  • Estimating indoor CO2 levels using tinyML and computer vision

    Reading Time: 2 minutes

    Arduino Team, May 6th, 2022

    The ongoing COVID-19 pandemic has drawn attention to how clean our indoor environments are, and by measuring the CO2 levels within a room, infection risks can be approximated since more CO2 is correlated with poor ventilation. Software engineer Swapnil Verma had the idea to use computer vision to count the number of occupants within a space and attempt to gauge the concentration of the gas accordingly.

    The hardware powering this project is an Arduino Portenta H7 combined with a Vision Shield add-on that allows the board to capture images. From here, Verma used a subset of the PIROPO dataset, which contains recordings of indoor rooms, and ran the YOLOv5-based auto-labeling utility within Edge Impulse to draw bounding boxes around people. Once labeled, a FOMO model was trained, reaching a respectable F1 score of 91.6%.

    Testing the system was done by observing how well the Portenta H7, running the object detection model from Edge Impulse, did at tracking a person moving throughout a room. Even though the model only takes an input of 240x240px image data, it still performed admirably in this task. For the last step of estimating CO2 levels, Verma’s code simply takes the number of people detected in the frame and multiplies it by a constant. For more details, you can read his post here.
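    The estimation step above is deliberately simple: head count times a constant. The sketch below adds an ambient baseline for realism; both numbers are assumptions for illustration, not Verma's calibrated values.

```python
BASELINE_PPM = 400       # rough ambient outdoor CO2 level (assumption)
CO2_PER_PERSON_PPM = 150  # illustrative per-occupant contribution

def estimate_co2(people_detected):
    """Scale the FOMO head count by a constant to approximate the
    room's CO2 concentration in ppm."""
    return BASELINE_PPM + people_detected * CO2_PER_PERSON_PPM
```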

    Website: LINK

  • Train yourself to avoid using filler words with the tinyML-powered Mind the Uuh device

    Reading Time: 2 minutes

    Arduino Team, April 21st, 2022

    Listening to a speaker who interjects words such as “um,” “uuh,” and “so” can be extremely distracting and take away from the message being conveyed, which is why Benedikt Groß, Maik Groß, and Thibault Durand set out to build a small device that encourages speakers to make their language more concise. Their experimental solution, called Mind the “Uuh,” constantly listens to the words being spoken and generates an audible alert whenever the word “uuh” is detected.

    The team began by collecting around 1,500 samples of audio that ranged in length from 300ms to 1s and contained either noise, random words, or the word “uuh.” Then, after running the data through a filter and training a Keras neural network using Edge Impulse, they deployed the model onto a Nano 33 BLE Sense. The board was connected to a seven-segment display via two shift registers that show the current “uuh” count, as well as a servo motor that dings a bell to generate the alert.

    Once assembled and placed inside a 3D-printed case, the Mind the ‘Uuh’ gadget was able to successfully detect whenever the dreaded “uuh” filler word was spoken. As a minor extension, the team also created a small website that hosts the same machine learning model but instead uses a microphone from a web browser.
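    The device's counting behavior can be sketched as a counter that only fires when the classifier's "uuh" confidence crosses a threshold. This is an illustrative Python sketch, not the team's firmware; the threshold value is an assumption.

```python
class UuhCounter:
    """Count filler-word detections and signal when the bell should ring.
    Counts only when the 'uuh' confidence meets the threshold."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.count = 0  # value shown on the seven-segment display

    def update(self, scores):
        """`scores` maps class labels to confidences for one inference.
        Returns True when the bell should be rung."""
        if scores.get("uuh", 0.0) >= self.threshold:
            self.count += 1
            return True
        return False
```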

    Website: LINK

  • Arduino device uses tinyML to help wearers recover from shoulder injury

    Reading Time: 2 minutes

    Arduino Team, March 15th, 2022

    Shoulder injuries can be quite complex and require months of careful physical therapy to overcome, which is what led Roni Bandini to build a tinyML-powered wearable that monitors a patient’s rotator cuff movements to aid in the recovery process. His system is designed around a Nano 33 BLE Sense and its onboard accelerometer that measures both the types and frequencies of certain shoulder motions.

    After 3D printing a small case to house the Arduino along with a battery pack and an OLED display, Bandini created a new project using the Edge Impulse Studio. The impulse takes in time-series three-axis accelerometer data, runs it through a spectral analysis block, and then infers the current movement being performed by the wearer. 

    Once switched on, the system initializes a set of three movement counts to zero: right, left, and up, while the last type, idle, is not counted. Then throughout the day, the patient is encouraged to perform various exercises in order to fill up the bars completely. The eventual goal is to make steady progress towards having a recovered rotator cuff joint with a larger range of motion than immediately after the injury.

    Bandini’s video explaining this shoulder recovery system can be viewed below, and the project’s design files/code can be found here on Hackster.io.

    Website: LINK

  • From embedded sensors to advanced intelligence: Driving Industry 4.0 innovation with TinyML

    Reading Time: 5 minutes

    Wevolver’s previous article about the Arduino Pro ecosystem outlined how embedded sensors play a key role in transforming machines and automation devices into Cyber Physical Production Systems (CPPS). Using CPPS, manufacturers and automation solution providers capture data from the shop floor and use it for optimizations in areas like production schedules, process control, and quality management. These optimizations leverage advanced Internet of Things (IoT) analytics over manufacturing datasets, which is why data is often called the new oil.

    Deployment Options for IoT Analytics: From Cloud Analytics to TinyML

    IoT analytics entail statistical data processing and employ Machine Learning (ML) functions, including Deep Learning (DL) techniques i.e., ML based on deep neural networks. Many manufacturing enterprises deploy IoT analytics in the cloud. Cloud IoT analytics use the vast amounts of cloud data to train accurate DL models. Accuracy is important for many industrial use cases like Remaining Useful Life calculation in predictive maintenance. Nevertheless, it is also possible to execute analytics at the edge of the network. Edge analytics are deployed within embedded devices or edge computing clusters at the factory’s Local Area Network (LAN). They are appropriate for real-time use cases that demand low latency such as real-time detection of defects. Edge analytics are more power-efficient than cloud analytics. Moreover, they offer increased data protection as data stays within the LAN.

    Over the last couple of years, industrial organizations have begun using TinyML to execute ML models within CPU- and memory-constrained devices. TinyML is faster, real-time, more power-efficient, and more privacy-friendly than any other form of edge analytics. Therefore, it provides benefits for many Industry 4.0 use cases.

    TinyML is the fastest, most power-efficient, and most privacy-friendly form of edge analytics. Image credit: Carbon Robotics.

    Building TinyML Applications

    The process of developing and deploying TinyML applications entails:

    1. Getting or Producing a Dataset, which is used for training the TinyML model. Data from sensors or production logs can be used for this purpose.
    2. Training an ML or DL Model, using standard tools and libraries like Jupyter Notebooks and Python packages like TensorFlow and NumPy. The work entails Exploratory Data Analysis steps towards understanding the data, identifying proper ML models, and preparing the data for training them.
    3. Evaluating the Model’s Performance, using the trained model’s predictions and calculating various error metrics. Depending on the achieved performance, the TinyML engineer may have to improve the model and avoid overfitting on the data. Different models must be tested to find the best one.
    4. Making the Model Appropriate to Run on an Embedded Device, using tools like TensorFlow Lite, which provides a “converter” library that turns a model into a space-efficient format. TensorFlow Lite also provides an “interpreter” library that runs the converted model using the most efficient operations for a given device. In this step, a C/C++ sketch is produced to enable on-device deployment.
    5. On-device Inference and Binary Development, which involves the C/C++ and embedded systems development part and produces a binary application for on-device inference.
    6. Deploying the Binary to a Microcontroller, which makes the microcontroller able to analyze data and derive real-time insights.
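    The "space-efficient format" in step 4 largely comes down to quantization: replacing 32-bit float weights with 8-bit integers plus a scale factor. The sketch below illustrates the core idea in plain Python; TensorFlow Lite's real scheme also handles zero points, per-axis scales, and calibration data.

```python
def quantize_int8(weights):
    """Map float weights into int8 range [-127, 127] with a single
    symmetric scale factor (a 4x size reduction versus float32)."""
    scale = (max(abs(w) for w in weights) / 127) or 1.0  # avoid scale of 0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights for inference-time math."""
    return [q * scale for q in quantized]
```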
    Building a Google Assistant using tinyML. Image credit: Arduino.

    Leveraging AutoML for Faster Development with Arduino Pro

    Nowadays, Automatic Machine Learning (AutoML) tools are used to develop TinyML on various boards, including Arduino boards. Emerging platforms such as Edge Impulse, Qeexo and SensiML, among others, provide AutoML tools and developers’ resources for embedded ML development. Arduino is collaborating with such platforms as part of their strategy to make complex technologies open and simple to use by anyone.

    Within these platforms, users collect real-world sensor data, train ML models on the cloud, and ultimately deploy the model back to an Arduino device. It is also possible to integrate ML models with Arduino sketches based on simple function calls. AutoML pipelines ease the tasks of (re)developing and (re)deploying models to meet complex requirements.

    The collaboration between Arduino and ML platforms enables thousands of developers to build applications that embed intelligence in smart devices such as applications that recognize spoken keywords, gestures, and animals. Implementing applications that control IoT devices via natural language or gestures is relatively straightforward for developers who are familiar with Arduino boards.

    Arduino has recently introduced its new Arduino Pro ecosystem of industrial-grade products and services, which support the full development, production and operation lifecycle from Hardware and Firmware to Low Code, Clouds, and Mobile Apps. The Pro ecosystem empowers thousands of developers to jump into Industry 4.0 development and to employ advanced edge analytics.

    Big opportunity at every scale

    The Arduino ecosystem provides excellent support for TinyML, including boards that ease TinyML development, as well as relevant tools and documentation. For instance, the Arduino Nano 33 BLE Sense board is one of the most popular boards for TinyML. It comes with a well-known form factor and various embedded sensors, including a 9-axis inertial sensor that makes the board ideal for wearable devices, as well as humidity and temperature sensors. As another example, Arduino’s Portenta H7 board includes two asymmetric cores, which enables it to simultaneously run high-level code such as protocol stacks, machine learning, or even interpreted languages (e.g., MicroPython or JavaScript). Furthermore, the Arduino IDE (Integrated Development Environment) provides the means for customizing embedded ML pipelines and deploying them on Arduino boards.

    In a Nutshell

    ML and AI models need not always run on powerful clouds and related High Performance Computing services. It is also possible to execute neural networks on tiny, memory-limited devices like microcontrollers, which opens unprecedented opportunities for pervasive intelligence. The Arduino ecosystem offers developers the resources they need to ride the wave of Industry 4.0 and TinyML. Arduino boards and the IDE lower the barriers for thousands of developers to engage with IoT analytics for industrial intelligence.

    Read the full article on Wevolver.

    Categories: H7

    Website: LINK

  • This contactless system combines embedded ML and sensors to improve elevator safety

    Reading Time: 2 minutes

    Arduino Team, January 29th, 2022

    As an entry into the 5th IEEE National Level Project Competition, Anway Pimpalkar and his team wanted to design a system that could improve safety and usability within elevators by detecting whether a person is present, recognizing the floor they wish to travel to, and automatically returning to the ground floor in the event of a fire.

    To determine when a person is standing inside the elevator’s cabin, Pimpalkar used a Nano 33 BLE Sense and an OV7675 camera module, which take advantage of embedded machine learning for facial detection. From there, the Nano notifies the user via a blinking LED that it is ready to accept a verbal command for the floor number, and moves the cabin once the command has been processed. Perhaps most importantly, an MQ-2 smoke sensor and an LM-35 temperature sensor were added to the custom PCB. These two pieces of hardware are responsible for detecting a nearby fire, activating an alarm, and then moving the cabin to the ground floor if needed.
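
    The control logic described above (fire override first, then the voice-selected floor, and only when a person is detected) can be sketched as a simple decision function. The function name, floor numbering, and argument names are illustrative assumptions, not Pimpalkar's actual firmware:

```python
def next_floor(current_floor, requested_floor, fire_detected, person_present):
    """Decide the elevator's target floor.

    Fire detection overrides everything: the cabin returns to the
    ground floor (0). Otherwise a spoken floor request is honored
    only when a person has been detected in the cabin.
    """
    if fire_detected:
        return 0
    if person_present and requested_floor is not None:
        return requested_floor
    return current_floor  # hold position
```

On the real device this decision would run on every sensor/inference update, with the result driving the motor controller.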

    Altogether, this project is a great showcase of how powerful tinyML can be when it comes to both safety and accessibility. To read more about the system, you can check out Pimpalkar’s GitHub repository here.

    Website: LINK

  • Instead of sensing the presence of metal, this tinyML device detects rock (music)

    Instead of sensing the presence of metal, this tinyML device detects rock (music)

    Reading Time: 2 minutes

    Arduino TeamJanuary 29th, 2022

    After learning about the basics of embedded ML, industrial designer and educator Phil Caridi had the idea to build a metal detector, but rather than using a coil of wire to sense eddy currents, his device would use a microphone to determine if metal music is playing nearby. 

    Caridi started out by collecting around two hours of music and then dividing the samples into two labels: “metal” and “non_metal” using Edge Impulse. After that, he began the process of training a neural network after passing each sample through an MFE filter. The end result was a model capable of classifying a given piece of music as metal or non-metal with 88.2% accuracy. This model was then deployed onto a Nano 33 BLE Sense, which tells the program what kind of music is playing, but Caridi wasn’t done yet. He also 3D-printed a mount and gauge that turns a needle further to the right via a servo motor as the confidence of “metal music” increases.
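
    The needle behavior amounts to a linear mapping from the model's “metal” confidence to a servo angle. A minimal sketch, where the function name and the 0–180° range are assumptions rather than Caridi's code:

```python
def needle_angle(metal_confidence, min_angle=0, max_angle=180):
    """Map a classifier confidence in [0, 1] to a servo angle in degrees."""
    c = min(max(metal_confidence, 0.0), 1.0)  # clamp out-of-range scores
    return round(min_angle + c * (max_angle - min_angle))
```

On the device itself, the result would be passed to a servo write call each time the model produces a new inference.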

    As seen in his video, the device successfully shows the difference between the band Death’s “Story to Tell” track and the much tamer and non-metal song “Oops!… I Did It Again” by Britney Spears. For more details about this project, you can read Caridi’s blog post.

    Website: LINK

  • AIfES releases exciting new version of TinyML library for Arduino

    AIfES releases exciting new version of TinyML library for Arduino

    Reading Time: 2 minutes

    Arduino TeamJanuary 17th, 2022

    Last July, AIfES (Artificial Intelligence for Embedded Systems) from the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS) was launched. This open source solution makes it possible to run, and even train, artificial neural networks (ANN) on almost any hardware, including the Arduino UNO.

    The team hasn’t stopped work on this exciting machine learning platform, and an update just landed that you’ll definitely want to check out.

    The new AIfES-Express API

    AIfES-Express is an alternative, simplified API that’s integrated directly into the library. The new features allow you to run and train a feed-forward neural network (FNN) with only a few lines of code.

    Q7 weight quantization

    This update enables simple Q7 (8-bit) quantization of the weights of a trained FNN, which significantly reduces the memory required. And depending on where it’s deployed, it can bring a significant increase in speed along with it.

    This is especially true for controllers without FPU (Floating Point Unit). The quantization can be handled directly in AIfES® (and AIfES-Express) on the controller, PC, or wherever you’re using it. There are even example Python scripts to perform the quantization directly in Keras or PyTorch. The quantized weights can then be used in AIfES®.
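
    Q7 quantization stores each weight as a signed 8-bit integer plus a shared scale factor, cutting weight memory to a quarter of 32-bit floats. The snippet below is a generic illustration of symmetric 8-bit quantization, not AIfES's internal implementation:

```python
def quantize_q7(weights):
    """Symmetric Q7 quantization: map floats to int8 via a single scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_q7(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]
```

Inference then operates on the int8 values, dequantizing (or accumulating in a wider integer type) only where needed.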

    Advanced Arm CMSIS integration

    AIfES® now provides the option to use the Arm CMSIS (DSP and NN) library for a faster runtime.

    New examples to help you get building

    A simple gesture recognition application can be trained on-device for a number of different Arduino boards.

    You can play tic-tac-toe against a microcontroller, with a pre-trained net that’s practically impossible to defeat. There are F32 and quantized Q7 versions to try. The Q7 version even runs on the Arduino UNO. The AIfES® team do issue a warning that it can be demoralizing to repeatedly lose against an 8-bit controller!

    This Portenta H7 example is particularly impressive. It shows you how to train in the background on one core, while using the other to run a completely different task. In the example, the M7 core of the Portenta H7 can even hand the M4 core the task of training an FNN. The optimized weights can then be used by the M7 to run the FNN immediately, with no delay caused by the training.
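
    The same hand-off pattern can be sketched in ordinary Python, with a background thread standing in for the M4 core. This is a conceptual analogue only: the actual Portenta example runs native code on two physical cores, and the queue names and placeholder "training" below are invented for illustration:

```python
import threading
import queue

def trainer(tasks, results):
    """Stand-in for the M4 core: consume training jobs, emit weights."""
    while True:
        job = tasks.get()
        if job is None:  # shutdown signal
            break
        results.put(f"weights_for_{job}")  # placeholder for real FNN training

tasks, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=trainer, args=(tasks, results))
worker.start()

tasks.put("gesture_batch_1")   # the "M7" hands over a training job...
weights = results.get()        # ...and later collects the optimized weights
tasks.put(None)
worker.join()
```

The point of the pattern is that the main task never blocks on training; it only picks up fresh weights when they become available.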

    Here’s a link to the GitHub repository so you can give this a go yourself.

    Website: LINK

  • This Arduino device knows how a bike is being ridden using tinyML

    This Arduino device knows how a bike is being ridden using tinyML

    Reading Time: 2 minutes

    Arduino TeamDecember 28th, 2021

    Fabio Antonini loves to ride his bike, and while nearly all bike computers offer information such as cadence, distance, speed, and elevation, they lack the ability to tell if the cyclist is sitting or standing at any given time. So, after doing some research, he came across an example project that utilized Edge Impulse and an Arduino Nano 33 BLE Sense’s onboard accelerometer to distinguish between various kinds of movements. Based on this previous work, he opted to create his own ML device using the same general framework.

    Over the course of around 20 minutes, Fabio collected data for both standing and sitting by strapping a Nano 33 BLE Sense to his arm and connecting it to a laptop. Once the data had been processed and fed through a training algorithm, his freshly minted model was then deployed back to the board for real-time processing. 

    The program Antonini made classifies incoming data from the IMU into one of four states: seated on the flat, seated on a climb, standing on the pedals during a climb, or pushing a sprint on the flat. From there, the built-in RGB LED changes its color to notify the user of what was inferred.
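
    The state-to-LED step is a simple lookup from the inferred label to an RGB value. The labels and colors below are illustrative placeholders, not Antonini's exact choices:

```python
# Hypothetical state labels and RGB values for the board's built-in LED
STATE_COLORS = {
    "seated_flat":    (0, 255, 0),    # green
    "seated_climb":   (0, 0, 255),    # blue
    "standing_climb": (255, 0, 0),    # red
    "sprint_flat":    (255, 255, 0),  # yellow
}

def led_color(state):
    """Return the (R, G, B) value for an inferred riding state; off if unknown."""
    return STATE_COLORS.get(state, (0, 0, 0))
```

Falling back to "off" for unrecognized labels keeps the display honest when the classifier emits something unexpected.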

    You can read more about the creation process and usage of this project here in Antonini’s Medium blog post.

    Website: LINK

  • This Arduino device can detect which language is being spoken using tinyML

    This Arduino device can detect which language is being spoken using tinyML

    Reading Time: 2 minutes

    Arduino TeamDecember 8th, 2021

    Although smartphone users have had the ability to quickly translate spoken words into nearly any modern language for years now, this feat has been quite tough to accomplish on small, memory-constrained microcontrollers. In response to this challenge, Hackster.io user Enzo decided to create a proof-of-concept project that demonstrated how an embedded device can determine the language currently being spoken without the need for an Internet connection. 

    This so-called “language detector” is based on an Arduino Nano 33 BLE Sense, which is connected to a common PCA9685 motor driver that is, in turn, attached to a set of three micro servo motors — all powered by a single 9V battery. Enzo created a dataset by recording three words: “oui” (French), “si” (Italian), and “yes” (English) for around 10 minutes each for a total of 30 minutes of sound files. He also added three minutes of random background noise to help distinguish between the target keywords and non-important words. 

    Once a model had been trained using Edge Impulse, Enzo exported it back onto his Nano 33 BLE Sense and wrote a small bit of code that reads audio from the microphone, classifies it, and determines which word is being spoken. Based on the result, the corresponding nation’s flag is raised to indicate the language.
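
    Selecting which flag to raise reduces to an argmax over the model's per-label scores, with the background-noise class and low-confidence results mapped to "no flag." A sketch under assumed label names and an assumed confidence threshold:

```python
def flag_for(scores, threshold=0.6):
    """Pick the language flag to raise from classifier scores.

    scores: dict mapping labels (e.g. "oui", "si", "yes", "noise")
    to confidences. Returns the winning language label, or None when
    background noise wins or nothing clears the threshold.
    """
    label = max(scores, key=scores.get)
    if label == "noise" or scores[label] < threshold:
        return None
    return label
```

On the device, the returned label would select which of the three servos raises its flag.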

    You can see the project in action below and read more about it here on Hackster.io.

    Website: LINK

  • This system classifies different types of clouds using tinyML

    This system classifies different types of clouds using tinyML

    Reading Time: 2 minutes

    Arduino TeamDecember 6th, 2021

    At the basis of each weather forecast is data — and a lot of it. And although the vast majority of atmospheric data collection is fully automated, determining cloud volumes and types is still done manually. This problem is what inspired Swapnil Verma to create a project that utilizes machine learning to categorize six different classes of clouds.

    The hardware for this system consists of an Arduino Portenta H7 due to its powerful processor and array of connectivity features, along with a Portenta Vision Shield for the camera. Both of these boards were mounted to a custom base on top of a tripod and powered by a battery bank over USB-C. 

    The MicroPython software installed on the Portenta H7 relies on the OpenMV library for capturing images from the Vision Shield and performing a small amount of processing on them. From there, Verma trained an image classification model on nearly 2,100 images of various labeled cloud types — clear sky, patterned cloud, thin white cloud, thick white cloud, thick dark cloud, and veil cloud — using Edge Impulse and deployed it back to the board. As the Portenta runs, it collects an image, classifies it locally, and then sends the result via MQTT to client devices, which lets them read the incoming data remotely. Verma even included a mode that takes images at a slow rate and sleeps in between to save battery power. 
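
    A natural shape for those MQTT messages is a small JSON payload carrying the label, the confidence, and a timestamp. The schema below is an assumption for illustration, not Verma's actual message format:

```python
import json

def make_payload(label, confidence, timestamp):
    """Serialize one classification result for publishing over MQTT."""
    return json.dumps({
        "cloud_type": label,
        "confidence": confidence,
        "ts": timestamp,
    })
```

A client subscribed to the topic can then `json.loads` each message to recover the fields for logging or display.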

    To read more about Verma’s cloud classifier project, you can visit its write-up here on Hackster.io and watch the demo below.

    Website: LINK