Tag: Nicla Voice

  • Empowering the transportation of the future, with the Ohio State Buckeye Solar Racing Team

    Reading Time: 3 minutes

    Arduino is ready to graduate its educational efforts in support of university-level STEM and R&D programs across the United States: this is where students come together to explore the solutions that will soon define their future, both in terms of their personal careers and, more importantly, their impact on the world.

    Case in point: the groundbreaking partnership with the Ohio State University Buckeye Solar Racing Team, a student organization at the forefront of solar vehicle technology, committed to promoting sustainable transportation by designing, building, and racing solar-powered vehicles in national and international competitions. This collaboration will see the integration of advanced Arduino hardware into the team’s cutting-edge solar vehicles, enhancing driver displays, data transmission, and cockpit metric monitoring.

    In particular, the team identified the Arduino Pro Portenta C33 as the best option for their car: “extremely low-powered, high-quality and reliable, it also has a CAN interface – which is how we will be getting data from our sensors,” team lead Vasilios Konstantacos shared.

    We have also provided Arduino Student Kits for prototyping and, most importantly, accelerating the learning curve for new members. “Our goal is to rapidly equip our newcomers with vital skills, enabling them to contribute meaningfully to our team’s progress. Arduino’s hardware is a game-changer in this regard,” Vasilios stated.
    In addition, the team received Nicla Vision, Nicla Sense ME, and Nicla Voice modules to integrate essential sensors in the car, and more Portenta components to make their R&D process run faster (pun intended!): Portenta Breakout to speed up development on the Portenta C33, Portenta H7 to experiment with AI models for vehicle driving and testing, and Portenta Cat. M1/NB IoT GNSS Shield to connect the H7 to the car wirelessly, replacing walkie-talkie communication, and track the vehicle’s location.

    Combining our beginner-friendly approach with the advanced features of the Arduino Pro range is the key to empowering students like the members of the Buckeye Solar Racing Team to learn and develop truly innovative solutions, with the support of a qualified industrial partner and high-performance technological products. In particular, the Arduino ecosystem offers a dual advantage in this case: the extreme ruggedness of its components, essential for race vehicle operations, paired with the familiarity and ease of use of the Arduino IDE.

    The partnership will empower Ohio State University students to experiment with microcontrollers and sensors in a high-performance setting, fostering a seamless, hands-on learning experience and supporting the institution’s dedication to providing unparalleled opportunities for real-world application of engineering and technology studies. Arduino’s renowned reliability and intuitive interface make it an ideal platform for students to develop solutions that are not only effective in the demanding environment of solar racing but also transferable to their future professional pursuits.

    “We are thrilled to collaborate with the Ohio State University Buckeye Solar Racing Team,” commented Jason Strickland, Arduino’s Higher Education Sales Manager. “Our mission has always been to make technology accessible and foster innovation. Seeing our hardware contribute to advancing solar racing technology and education is a proud moment for Arduino.”

    The post Empowering the transportation of the future, with the Ohio State Buckeye Solar Racing Team appeared first on Arduino Blog.

  • Controlling a power strip with a keyword spotting model and the Nicla Voice

    Reading Time: 2 minutes

    As Jallson Suryo discusses in his project, adding voice controls to our appliances typically involves an internet connection and a smart assistant device such as Amazon Alexa or Google Assistant. This means extra latency, security concerns, and increased expenses due to the additional hardware and bandwidth requirements. This is why he created a prototype based on an Arduino Nicla Voice that can control power to up to four outlets using just a voice command.

    Suryo gathered a dataset by repeating the words “one,” “two,” “three,” “four,” “on,” and “off” into his phone and then uploaded the recordings to an Edge Impulse project. From here, he split the files into individual words before rebalancing his dataset to ensure each label was equally represented. The classifier model was then trained for keyword spotting using the audio settings optimized for the Syntiant NDP120, yielding an accuracy of around 80%.
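
    The rebalancing step can be sketched in a few lines of Python. The file names, labels, and undersampling strategy below are illustrative stand-ins, not Suryo's actual Edge Impulse workflow:

```python
import random
from collections import defaultdict

def rebalance(samples, seed=0):
    """Undersample each label down to the size of the rarest class so
    every keyword is equally represented in the training set."""
    by_label = defaultdict(list)
    for path, label in samples:
        by_label[label].append(path)
    target = min(len(paths) for paths in by_label.values())
    rng = random.Random(seed)
    balanced = []
    for label, paths in by_label.items():
        balanced += [(p, label) for p in rng.sample(paths, target)]
    return balanced

# Hypothetical file list: "one" is over-represented relative to "off".
dataset = [(f"one_{i}.wav", "one") for i in range(30)] + \
          [(f"off_{i}.wav", "off") for i in range(20)]
balanced = rebalance(dataset)
print(len(balanced))  # 40 -> 20 samples per label
```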

    Apart from the Nicla Voice, Suryo incorporated a Pro Micro board to handle switching the bank of relays on or off. When the Nicla Voice detects the relay number, such as “one” or “three”, it then waits until the follow-up “on” or “off” keyword is detected. With both the number and state now known, it sends an I2C transmission to the accompanying Pro Micro which decodes the command and switches the correct relay.
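
    The two-stage keyword flow can be modeled off-device. The single-byte packing below (relay number in the high nibble, state in the low nibble) is an illustrative choice for the I2C payload, not necessarily Suryo's actual wire format:

```python
NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4}
STATE_WORDS = {"on": 1, "off": 0}

class CommandDecoder:
    """Model of the two-stage flow: a relay number must be heard
    before an 'on'/'off' keyword completes the command."""
    def __init__(self):
        self.pending = None

    def feed(self, keyword):
        if keyword in NUMBER_WORDS:
            self.pending = NUMBER_WORDS[keyword]
            return None
        if keyword in STATE_WORDS and self.pending is not None:
            # Pack relay number and state into the one byte sent
            # over I2C to the Pro Micro.
            byte = (self.pending << 4) | STATE_WORDS[keyword]
            self.pending = None
            return byte
        return None

dec = CommandDecoder()
dec.feed("three")           # number heard, waits for a state word
print(hex(dec.feed("on")))  # 0x31 -> relay 3, on
```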

    To see more about this voice-controlled power strip, be sure to check out Suryo’s Edge Impulse tutorial.

    Video: https://www.youtube.com/watch?v=9PRjhA38jBE

  • Improve recycling with the Arduino Pro Portenta C33 and AI audio classification

    Reading Time: 2 minutes

    In July 2023, Samuel Alexander set out to reduce the amount of trash that gets thrown out due to poor sorting practices at the recycling bin. His original design relied on an Arduino Nano 33 BLE Sense to capture audio through its onboard microphone and then perform edge audio classification with an embedded ML model to automatically separate materials based on the sound they make when tossed inside. But in this latest iteration, Alexander added several large improvements to help the concept scale much further.

    In perhaps the most substantial modification, the bin now uses an Arduino Pro Portenta C33 in combination with an external Nicla Voice or Nano 33 BLE Sense to not only perform inferences to sort trash, but also send real-time data to a cloud endpoint. By utilizing the Arduino Cloud through the Portenta C33, each AI-enabled recycling bin can now report its current capacity for each type of waste and send an alert when collection must occur.
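
    The capacity-reporting logic can be modeled in a short sketch. The material names, fill threshold, and `RecyclingBin` class are hypothetical; the real bin reports through Arduino Cloud properties rather than this toy interface:

```python
CAPACITY_LIMIT = 0.8  # fraction full that triggers a collection alert

class RecyclingBin:
    """Toy model of one cloud-reporting bin: tracks the fill level per
    material and flags any compartment that needs collection."""
    def __init__(self, materials):
        self.levels = {m: 0.0 for m in materials}

    def add_item(self, material, volume):
        self.levels[material] = min(1.0, self.levels[material] + volume)

    def alerts(self):
        return [m for m, lvl in self.levels.items() if lvl >= CAPACITY_LIMIT]

bin_ = RecyclingBin(["glass", "plastic", "metal"])
for _ in range(9):
    bin_.add_item("plastic", 0.1)
print(bin_.alerts())  # ['plastic']
```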

    While not as practical for household use, this integration could be incredibly effective for municipalities looking to create a network of bins that can be deployed in a city park environment or another public space.

    Thanks to these upgrades, Alexander was able to submit his prototype for consideration in the 2023 Hackaday Prize competition where he was awarded the Protolabs manufacturing grant. To see more about this innovative project, you can check out its write-up here and watch Alexander’s detailed explanation video below.

    Video: https://www.youtube.com/watch?v=jqvCssm-7A4

  • Building the OG smartwatch from Inspector Gadget

    Reading Time: 2 minutes

    We recently showed you Becky Stern’s recreation of the “computer book” carried by Penny in the Inspector Gadget cartoon, but Stern didn’t stop there. She also built a replica of Penny’s most iconic gadget: her watch. Penny was a trendsetter, rocking a smartwatch decades before the Apple Watch hit the market. Stern’s replica looks just like the cartoon version and even has some of the same features.

    The centerpiece of this project is an Arduino Nicla Voice board. The Arduino team designed that board specifically for speech recognition on the edge, which made it perfect for recognizing Penny’s signature “come in, Brain!” voice command. Stern used Edge Impulse to train an AI to recognize that phrase as a wake word. When the Nicla Voice board hears that, it changes the image on the smart watch screen to a new picture of Brain the dog.

    The Nicla Voice board and an Adafruit 1.69″ color IPS TFT screen fit inside a 3D-printed enclosure modeled on Penny’s watch from the cartoon. The enclosure even has a clever 3D-printed watch band with links connected by lengths of fresh filament. Power comes from a small lithium battery that also fits inside the enclosure.

    This watch and Stern’s computer book will both be part of an Inspector Gadget display put on by Digi-Key at Maker Faire Rome, so you can see it in person if you attend.

    Video: https://www.youtube.com/watch?v=Yd74FYTvGX8

  • Improving comfort and energy efficiency in buildings with automated windows and blinds

    Reading Time: 2 minutes

    When dealing with indoor climate controls, there are several variables to consider, such as the outside weather, people’s tolerance to hot or cold temperatures, and the desired level of energy savings. Windows can make this extra challenging, as they let in large amounts of light/heat and can create poorly insulated regions, which is why Jallson Suryo developed a prototype that aims to balance these needs automatically through edge AI techniques.

    Suryo’s smart building ventilation system utilizes two separate boards, with an Arduino Nano 33 BLE Sense handling environmental sensor fusion and a Nicla Voice listening for certain ambient sounds. Rain and thunder noises were uploaded from an existing dataset, split and labeled accordingly, and then used to train a Syntiant audio classification model for the Nicla Voice’s NDP120 processor. Meanwhile, weather and ambient light data was gathered using the Nano’s onboard sensors and combined into time-series samples with labels for sunny/cloudy, humid, comfortable, and dry conditions.

    After deploying the boards’ respective classification models, Suryo added some code that writes new I2C data from the Nicla Voice to the Nano to indicate whether rain or thunderstorm sounds are present. If they are, the Nano can automatically close the window via servo motors, while other environmental factors set the position of the blinds. With this multi-sensor technique, a higher level of accuracy can be achieved for more precise control over a building’s windows, helping to lower HVAC costs.
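
    The fusion of the two boards' outputs can be sketched as a simple decision function. The blind percentages and condition labels below are illustrative assumptions, not Suryo's actual control values:

```python
# Blind opening (%) per classified condition; values are illustrative.
BLIND_POSITIONS = {"sunny": 20, "cloudy": 80, "humid": 50,
                   "comfortable": 60, "dry": 40}

def decide(rain_flag, condition):
    """Fuse the Nicla Voice rain flag (received over I2C) with the
    Nano's environmental classification to set the actuators."""
    window = "closed" if rain_flag else "open"
    blinds = BLIND_POSITIONS.get(condition, 50)
    return window, blinds

print(decide(True, "sunny"))    # ('closed', 20)
print(decide(False, "cloudy"))  # ('open', 80)
```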

    Video: https://www.youtube.com/watch?v=mqk1IRz76HM

    More information about Suryo’s project can be found here on its Edge Impulse docs page.

  • A snore-no-more device designed to help those with sleep apnea

    Reading Time: 2 minutes

    Despite snoring itself being a relatively harmless condition, those who snore while asleep may also be suffering from sleep apnea — a potentially serious disorder that causes the airway to repeatedly close and block oxygen from getting to the lungs. In an effort to alert those who might be unaware they have sleep apnea, Naveen Kumar devised a small device using an Arduino Pro Nicla Voice to detect when a person is snoring and gently alert them via haptic feedback in their pillow.

    Although many boards have microphones and can run sound recognition machine learning models, the Nicla Voice contains a Syntiant NDP120 Neural Decision Processor that is specifically designed to accelerate deep learning workloads while also decreasing the amount of power needed to do so. Apart from the board, Kumar added an Adafruit DRV2605L haptic motor driver and haptic motor as a way to wake up the user without disturbing others nearby.

    The model was created by first downloading a snoring dataset that contains hundreds of short samples of either snoring or non-snoring. After adding them to the Edge Impulse Studio, Kumar constructed an impulse from the Syntiant Audio blocks and trained a model that achieved a 94.6% accuracy against the test dataset. The code integrating the model continuously collects new audio samples from the microphone, passes them to the NDP120 for classification, and triggers the haptic motor if snoring is sensed.
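
    The trigger logic can be modeled with a majority vote over recent inferences, so one noisy classification does not buzz the pillow. The window size, threshold, and labels below are assumptions for illustration, not Kumar's actual firmware parameters:

```python
from collections import deque

WINDOW = 5     # last N classification results considered
THRESHOLD = 3  # "snoring" votes needed to trigger the haptic motor

class SnoreMonitor:
    """Majority vote over a rolling window of NDP120 inference
    results; returns True when the haptic motor should fire."""
    def __init__(self):
        self.history = deque(maxlen=WINDOW)

    def update(self, label):
        self.history.append(label == "snoring")
        return sum(self.history) >= THRESHOLD

mon = SnoreMonitor()
results = ["noise", "snoring", "snoring", "noise", "snoring"]
print([mon.update(r) for r in results])
```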

    Video: https://www.youtube.com/watch?v=9jKJgnxQAnQ

    To read more about this project, you can check out Kumar’s write-up here.

  • Small-footprint keyword spotting for low-resource languages with the Nicla Voice

    Reading Time: 2 minutes

    Speech recognition is everywhere these days, yet some languages, such as Shakhizat Nurgaliyev and Askat Kuzdeuov’s native Kazakh, lack sufficiently large public datasets for training keyword spotting models. To make up for this disparity, the duo explored generating synthetic datasets using a neural text-to-speech system called Piper, and then extracting speech commands from the audio with the Vosk Speech Recognition Toolkit.

    Beyond simply building a model to recognize keywords from audio samples, Nurgaliyev and Kuzdeuov’s primary goal was to also deploy it onto an embedded target, such as a single-board computer or microcontroller. Ultimately, they went with the Arduino Nicla Voice development board since it contains not just an nRF52832 SoC, a microphone, and an IMU, but an NDP120 from Syntiant as well. This specialized Neural Decision Processor helps to greatly speed up inferencing times thanks to dedicated hardware accelerators while simultaneously reducing power consumption. 

    With the hardware selected, the team began to train their model with a total of 20.25 hours of generated speech data spanning 28 distinct output classes. After 100 learning epochs, it achieved an accuracy of 95.5% and only consumed about 540KB of memory on the NDP120, thus making it quite efficient.

    Video: https://www.youtube.com/watch?v=1E0Ff0ds160

    To read more about Nurgaliyev and Kuzdeuov’s project and how they deployed an embedded ML model that was trained solely on generated speech data, check out their write-up here on Hackster.io.

  • Training embedded audio classifiers for the Nicla Voice on synthetic datasets

    Reading Time: 2 minutes

    The task of gathering enough data to classify distinct sounds not captured in a larger, more robust dataset has, until now, been very time-consuming. In his write-up, Shakhizat Nurgaliyev describes how he used an array of AI tools to automatically create a keyword spotting dataset without the need for speaking into a microphone.

    The pipeline is split into three main parts. First, the Piper text-to-speech engine was downloaded and configured via a Python script to output 904 distinct samples of the TTS model saying Nurgaliyev’s last name in a variety of ways, to decrease overfitting. Next, background noise prompts were generated with the help of ChatGPT and then fed into AudioLDM, which produced audio files based on the prompts. Finally, all of the WAV files, along with “unknown” sounds from the Google Speech Commands Dataset, were uploaded to an Arduino ML project.
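
    The variation step can be sketched as a small text generator that feeds the TTS engine different phrasings of the same keyword, so the synthesized audio is not acoustically identical. The templates below are hypothetical, not Nurgaliyev's actual script:

```python
import itertools

def variations(name, n):
    """Yield n text variants (punctuation and context changes) of the
    target keyword to diversify the synthesized TTS samples."""
    templates = ["{0}", "{0}.", "{0}!", "{0}?", "say {0}", "hey {0}",
                 "{0}, {0}", "it is {0}", "this is {0}", "call {0}"]
    for t in itertools.islice(itertools.cycle(templates), n):
        yield t.format(name)

# Each line would be piped to the Piper CLI to render one WAV sample.
texts = list(variations("Nurgaliyev", 12))
print(len(texts), texts[0])
```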

    Training the model for later deployment on a Nicla Voice board was accomplished by adding a Syntiant audio processing block and then generating features to train a classification model. The resulting model could accurately determine when the target word was spoken around 96% of the time — all without the need for manually gathering a dataset.

    Video: https://www.youtube.com/watch?v=4ike-duV0G8

    To read more about this project, you can check out Nurgaliyev’s detailed write-up on Hackster.io.

  • Detect a crying baby with tinyML and synthetic data

    Reading Time: 2 minutes

    When a baby cries, it is almost always due to something that is wrong, which could include, among other things, hunger, thirst, stomach pain, or too much noise. In his project, Nurgaliyev Shakhizat demonstrated how he was able to leverage ML tools to build a cry-detection system without the need for collecting real-world data himself.

    The process is as follows: ChatGPT generates a series of text prompts that all involve a crying baby in some manner. These prompts are then passed to AudioLDM which creates sounds according to the prompts. Finally, Shakhizat used the Arduino Cloud’s Machine Learning Tools integration, powered by Edge Impulse, to train a tinyML model for deployment onto an Arduino Nicla Voice board. To create the sounds themselves, Shakhizat configured a virtual Python environment with the audioldm package installed. His script takes the list of prompts, executes them within an AudioLDM CLI command, and saves the generated sound data as a WAV file.
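
    The prompt-to-WAV loop can be sketched as building one CLI invocation per ChatGPT-generated prompt. The `audioldm` command name, its flags, and the output paths are assumptions; check them against the version of the package you install:

```python
import shlex

# A few prompts of the kind ChatGPT might generate for this task.
PROMPTS = [
    "a baby crying loudly in a quiet room",
    "a newborn whimpering and sobbing",
    "household background noise with a fan running",
]

def build_commands(prompts, out_dir="wav"):
    """Build one shell command per prompt; the flags shown are
    assumed, not verified against a specific audioldm release."""
    cmds = []
    for i, prompt in enumerate(prompts):
        out = f"{out_dir}/sample_{i:03d}.wav"
        cmds.append(f"audioldm -t {shlex.quote(prompt)} --save_path {out}")
    return cmds

for cmd in build_commands(PROMPTS):
    print(cmd)
```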

    Once this process was done, he configured a project in the Edge Impulse Studio to train a classifier model. The resulting model could accurately distinguish between background noise and a crying baby 90% of the time, and deploying it onto the Arduino Nicla Voice demonstrated how synthetic datasets and embedded models can be used in the real world.

    To read more, you can check out Shakhizat’s write-up here on Hackster.io.

    Video: https://www.youtube.com/watch?v=6Qe1PPLstW8

  • Have you heard? Nicla Voice is out at CES 2023!

    Reading Time: 2 minutes

    As announced at CES 2023 in Las Vegas, our tiny form factor family keeps growing: the 22.86 x 22.86 mm Nicla range now includes Nicla Voice, allowing for easy implementation of always-on speech recognition on the edge.

    How? Let’s break it down.

    1. The impressive sensor package. Nicla Voice comes with a full set of sensors: a microphone, a smart 6-axis motion sensor, and a magnetometer – so it can not only listen to you, your machines, and the environment around it, but also recognize gestures, vibrations, and other movements.

    2. The high-performance AI brains. Nicla Voice runs audio inputs through the powerful Syntiant NDP120 Neural Decision Processor, which mimics human neural pathways to run multiple AI algorithms and automate complex tasks. In other words, it hears different events and keywords simultaneously, and is capable of understanding and learning what sounds mean.

    3. The easy connectivity features. It connects to existing devices thanks to onboard Bluetooth® Low Energy connectivity. 

    4. The effortless integration with custom boards. Thanks to its headers and castellated pins, Nicla Voice is ready to go from prototype to industrial-scale production, fitting right into any custom carrier board you develop.

    5. The Edge Impulse compatibility. In line with our mission to make complex technologies accessible to all, Nicla Voice is compatible with Edge Impulse, the leading development platform for machine learning on edge devices.

    6. The minimal power needs. And last but absolutely not least, it is so ultra-low power it can be the brain of always-on – and even battery-operated – solutions. No need to run dedicated power lines, no switches or interfaces to activate the system. It’s ready to listen, 24/7, anywhere you want to install it.

    Speechless? We’re sure you’ll find your voice soon. With Nicla Voice’s ready-to-use combination of sensors and processing power, you can prototype and develop new solutions that leverage voice detection and voice recognition, or interpret any other audio input – from machines that need maintenance to water dripping, and from glass breaking to alarms that must get through headphones’ noise-canceling features. We can’t wait to hear what you’ll create with it!

    Need to hear a pin drop? Nicla Voice is all ears. 

    To find out more, access our free online documentation or check out the technical details from the Arduino Store page.
