You would probably recognize a thermal printer as the thing that spits out receipts at a cash register. They offer two key advantages: they do not require ink cartridges and they are compact. But because they print by applying heat to special paper that darkens when hot, their resolution and fidelity are low. If that's a price you're willing to pay for your next project, then Vaclav Krejci (AKA Upir on YouTube) has a great video tutorial that will show you how to control a thermal printer with your Arduino.
This model, a QR204, and most others like it, receive print content through an RS232 serial communications port. It has an internal microcontroller that lets it interpret what it receives over serial. If that is simple text, then it will print the text and move to the next line. But it also accepts commands in the form of special characters to modify the output, such as increasing the text size. It can also print low-resolution images sent in the form of bitmap arrays. Krejci explains how to do all of that in the video.
To follow along, you can use an Arduino Uno like Krejci did, or any other Arduino board. You only need to connect five jumper wires from the printer to the Arduino: ground, RX, TX, DTR, and NC. From there, all you need is a simple sketch that sends serial output at 9600 baud through the pins you define. To print a line of text, use the standard Serial.println("your text") function. When you want to do something more complex, like print an image, Krejci has instructions on how to do so.
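As a rough illustration, here is a minimal sketch along those lines. The pin assignments, the SoftwareSerial approach, and the ESC-style text-size command are assumptions for the sake of the example; check your printer's datasheet and Krejci's video for the exact wiring and command set your unit expects.

```cpp
// Minimal sketch: print a line of text on a serial thermal printer.
// Pin numbers and the size command are assumptions -- match them to your
// wiring and your printer's documentation.
#include <SoftwareSerial.h>

const uint8_t PRINTER_RX = 2;  // Arduino pin wired to the printer's TX
const uint8_t PRINTER_TX = 3;  // Arduino pin wired to the printer's RX

SoftwareSerial printer(PRINTER_RX, PRINTER_TX);

void setup() {
  printer.begin(9600);              // QR204-style modules commonly default to 9600 baud
  delay(500);                       // give the printer a moment to initialize
  printer.println("Hello from Arduino!");   // plain text prints as-is

  // ESC/POS-style print mode command (ESC ! n); check your printer's manual.
  printer.write(0x1B);
  printer.write('!');
  printer.write(0x10);              // bit 4 = double height on many printers
  printer.println("BIG TEXT");
}

void loop() {
}
```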
Many people (especially those with autism spectrum disorder) have difficulty communicating with others around them. That is always a challenge, but it becomes particularly noticeable when someone cannot convey their emotions through body language. If someone can't show that they're not in the mood to talk, that may lead to confusing interactions. To help people express their emotions, University of Stuttgart students Clara Blum and Mohammad Jafari came up with this wearable device that makes those emotions obvious.
The aptly named Emotion Aid sits on the user's shoulders like a small backpack. The prototype was designed to attach to a bra, but it could be tweaked to be worn by those who don't use bras. It has two functions: detecting the user's emotions and communicating those emotions. It uses an array of different sensors to detect biometric indicators, such as temperature, pulse, and sweat, to try to determine the user's emotional state. It then conveys that emotional state to the surrounding world with an actuated fan-like apparatus.
An Arduino Uno Rev3 handles these functions. Input comes from a capacitive moisture sensor, a temperature sensor, and a pulse sensor. The Arduino actuates the fan mechanism using a small hobby servo motor. Power comes from a 9V battery. The assembly process is highly dependent on the way the device is to be worn, but the write-up illustrates how to attach the various sensors to a bra. There are many possible variations, so the creators of the Emotion Aid encourage people to experiment with the idea.
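To give a sense of how those pieces fit together, here is a hedged sketch of the read-sensors-then-actuate loop. The pins, thresholds, and the simple "stressed" heuristic are placeholders rather than the creators' actual logic, which is described in their write-up.

```cpp
// Rough sketch of the Emotion Aid's input/output loop, assuming analog
// moisture, temperature, and pulse sensors on A0-A2 and a hobby servo on
// pin 9 driving the fan mechanism.
#include <Servo.h>

Servo fan;                           // servo that opens/closes the fan-like apparatus

void setup() {
  Serial.begin(9600);
  fan.attach(9);
}

void loop() {
  int moisture    = analogRead(A0);  // capacitive moisture (sweat) sensor
  int temperature = analogRead(A1);  // analog temperature sensor
  int pulse       = analogRead(A2);  // pulse sensor signal

  // Placeholder heuristic: treat high sweat plus an elevated pulse reading as
  // "stressed" and open the fan wider; otherwise keep it mostly closed.
  bool stressed = (moisture > 600) && (pulse > 550);
  fan.write(stressed ? 150 : 20);

  Serial.print(moisture);    Serial.print('\t');
  Serial.print(temperature); Serial.print('\t');
  Serial.println(pulse);
  delay(200);
}
```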
You can read more about the Emotion Aid, which was developed by Blum and Jafari as part of the University of Stuttgart’s ITECH master’s program, here on Instructables.
Spectrum analysis is a technique that allows someone to observe the amplitude of various frequency ranges within a signal. The most common use case is in the world of audio engineering, as it is useful for tuning audio output. It can, for example, show you that a particular audio signal has little amplitude in the low bands, which tells you that you should turn up the bass. If you want to try this for yourself, Sam Dartel designed a DIY spectrum analyzer that is easy for beginners to build.
For a spectrum analyzer to work, it needs to be able to break an electrical signal down into a series of frequency ranges. In an audio signal, frequency is pitch. That means that higher frequency ranges correspond to higher notes in the audio. This spectrum analyzer utilizes an MSGEQ7 IC, which is an equalizer filter, to pull seven frequency ranges from an audio signal. It outputs the peak of each band, giving a real-time reading of each band’s amplitude.
There are two versions of this spectrum analyzer: one powered by a battery and one powered via USB. Both are shields for an Arduino Nano board, which takes the output from the MSGEQ7 and uses the FastLED library to set the number of LEDs lit on seven WS2812B individually addressable RGB LED strips. Together, the strips form a 2D display: the number of lit LEDs in each strip shows that band's amplitude, while the color and brightness of the LEDs introduce two other possible dimensions. This spectrum analyzer uses those for different effects patterns.
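For readers who want to see what that looks like in code, here is a simplified version of the core read-and-display loop. The pin assignments, the number of LEDs per strip, and the color mapping are assumptions; the actual shield design and firmware may differ.

```cpp
// Read seven frequency bands from an MSGEQ7 and light a proportional number
// of LEDs in each column of a chained WS2812B display using FastLED.
#include <FastLED.h>

#define DATA_PIN 6
#define BANDS 7
#define LEDS_PER_BAND 10
#define NUM_LEDS (BANDS * LEDS_PER_BAND)

const uint8_t STROBE    = 4;
const uint8_t RESET_PIN = 5;
const uint8_t AUDIO_IN  = A0;

CRGB leds[NUM_LEDS];
int levels[BANDS];

void setup() {
  pinMode(STROBE, OUTPUT);
  pinMode(RESET_PIN, OUTPUT);
  FastLED.addLeds<WS2812B, DATA_PIN, GRB>(leds, NUM_LEDS);
  FastLED.setBrightness(64);
}

void loop() {
  // Pulse RESET to start a new sweep of the seven bands.
  digitalWrite(RESET_PIN, HIGH);
  delayMicroseconds(1);
  digitalWrite(RESET_PIN, LOW);

  for (int band = 0; band < BANDS; band++) {
    digitalWrite(STROBE, LOW);
    delayMicroseconds(36);                 // let the band's output settle
    levels[band] = analogRead(AUDIO_IN);   // 0-1023 peak for this band
    digitalWrite(STROBE, HIGH);
    delayMicroseconds(36);
  }

  // Light LEDs in each column proportional to that band's level.
  FastLED.clear();
  for (int band = 0; band < BANDS; band++) {
    int lit = map(levels[band], 0, 1023, 0, LEDS_PER_BAND);
    for (int i = 0; i < lit; i++) {
      leds[band * LEDS_PER_BAND + i] = CHSV(band * 36, 255, 255);
    }
  }
  FastLED.show();
}
```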
To build this spectrum analyzer, you’ll need to have one of the two shield PCB designs fabricated. All of the components are through-hole to make assembly easy.
Movement and LED-based projects to try for yourself
Raspberry Pi Sense HAT Pong
Paying homage to the original computer game, this walkthrough on how to combine a Sense HAT and Raspberry Pi, plus some Python code, to create classic Pong is bound to appeal to retro gamers. There's even a link to a video of the original game being played on an oscilloscope.
Orientation sensors on the Sense HAT detect which way the virtual marble (in the guise of coloured LEDs) is travelling in this tricky maze puzzle. Have fun following the setup guide, before setting a timer and challenging your friends to escape the maze in the fastest time.
Lorna Jane's tutorial shows you how the LED matrix can be used as a quirky at-a-glance bedside time check, without having to turn on the light or your phone display.
Now we’ve whetted your appetite about things a Sense HAT can do, head to this useful guide to set up. As this guide explains, the Sense HAT contains a number of on-board sensors to measure temperature, humidity, colour, movement, and orientation, and also has an LED grid to display the results of your investigations. With its origins as a sensing device destined to go into space with the European Space Agency, and as a teaching aid as part of the Astro Pi project, there’s little the Sense HAT can’t do.
This tutorial walks you through displaying text, images, and measuring the orientation of the Sense HAT device, before detecting movement using the joystick for input, and putting it all together to create your own projects that sense and react to their surroundings.
The official documentation will walk you through some of the more advanced aspects of the Sense HAT and how to use this Raspberry Pi accessory with different computer languages, including C++ as well as Scratch and Python. Each of the sensors (gyroscope, magnetometer, and barometric pressure sensor among them) is covered, along with calibration for the magnetometer and accelerometer. The official documentation also covers reading and writing the EEPROM data. Also take a look at the corresponding Sense HAT Python documentation, linked to from the official documentation. Here you will find Python API examples and the Sense HAT API reference guide, ideal for all your coding needs.
Senior care homes may get a bad rap in popular culture, but they serve a legitimate and important purpose. As people age, their requirements for care can exceed the capability of their families. When that becomes the case, professional care may be prudent. To help those professionals care for residents, Hayden designed this system that can monitor up to 40 doors in a nursing home.
The senior care facility where Hayden works already had a system for monitoring each resident room with a PIR (passive infrared) sensor. But that system was no longer functional and wasn’t serving any purpose. Instead of buying a whole new system, Hayden chose to tap into the existing sensors. To do that, they used five Arduino Mega 2560 boards to create hub units. Those hubs were spread around the building and each one monitors the PIR sensors from a handful of rooms.
Those hub units contain nRF24L01 radio transceiver modules so they can communicate sensor status to a central control unit at the nurses' station, also powered by an Arduino Mega. That control unit features a 240×128 LCD to show the status of each room, along with a real-time clock, an SD card module for logging data to CSV, and a piezo buzzer and LED for alerts.
But attendants aren’t always near their desk, so Hayden built a handful of handheld devices as well. Those are more rudimentary and only include nRF24 modules, small screens, and batteries. Employees can carry those smaller devices with them at all times and they will display any room with activity that triggers the PIR sensor, ensuring that a resident receives immediate attention if they leave their room when they shouldn’t.
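A stripped-down sketch of what one hub unit might do is shown below: poll a few PIR inputs and radio any activity to the control unit using the common RF24 library. The pin choices, payload layout, and pipe address are illustrative assumptions, not Hayden's actual code.

```cpp
// Simplified hub-unit sketch: watch a handful of PIR sensors and transmit a
// small report over an nRF24L01 whenever motion is detected.
#include <SPI.h>
#include <RF24.h>

RF24 radio(7, 8);                          // CE, CSN pins
const byte stationAddress[6] = "NURSE";    // pipe address of the control unit

const uint8_t PIR_PINS[4] = {2, 3, 4, 5};  // PIR sensors for this hub's rooms
const uint8_t HUB_ID = 1;

struct Report {
  uint8_t hub;
  uint8_t room;       // index of the room within this hub
  uint8_t triggered;  // 1 = motion detected
};

void setup() {
  for (uint8_t i = 0; i < 4; i++) pinMode(PIR_PINS[i], INPUT);
  radio.begin();
  radio.openWritingPipe(stationAddress);
  radio.stopListening();                   // this node only transmits
}

void loop() {
  for (uint8_t i = 0; i < 4; i++) {
    if (digitalRead(PIR_PINS[i]) == HIGH) {
      Report r = {HUB_ID, i, 1};
      radio.write(&r, sizeof(r));          // send the alert to the nurses' station
    }
  }
  delay(100);
}
```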
Every day, most of us both consume and create data. For example, we interpret data from weather forecasts to predict our chances of good weather for a special occasion, and we create data as our carbon footprint leaves a trail of energy consumption information behind us. Data is important in our lives, and countries around the world are expanding their school curricula to teach the knowledge and skills required to work with data, including at primary (K–5) level.
Kate Farrell and Prof. Judy Robertson
In our most recent research seminar, attendees heard about a research-based initiative called Data Education in Schools. The speakers, Kate Farrell and Professor Judy Robertson from the University of Edinburgh, Scotland, shared how this project aims to empower learners to develop data literacy skills and succeed in a data-driven world.
“Data literacy is the ability to ask questions, collect, analyse, interpret and communicate stories about data.”
– Kate Farrell & Prof. Judy Robertson
Being a data citizen
Scotland’s national curriculum does not explicitly mention data literacy, but the topic is embedded in many subjects such as Maths, English, Technologies, and Social Studies. Teachers in Scotland, particularly in primary schools, have the flexibility to deliver learning in an interdisciplinary way through project-based learning. Therefore, the team behind Data Education in Schools developed a set of cross-curricular data literacy projects. Educators and education policy makers in other countries who are looking to integrate computing topics with other subjects may also be interested in this approach.
Data citizens have skills they need to thrive in a world shaped by digital technology.
The Data Education in Schools projects are aimed not just at giving learners skills they may need for future jobs, but also at equipping them as data citizens in today’s world. A data citizen can think critically, interpret data, and share insights with others to effect change.
Kate and Judy shared an example of data citizenship from a project they had worked on with a primary school. The learners gathered data about how much plastic waste was being generated in their canteen. They created a data visualisation in the form of a giant graph of types of rubbish on the canteen floor and presented this to their local council.
Sorting food waste from lunch by type of material
As a result, the council made changes that reduced the amount of plastic used in the canteen. This shows how data citizens are able to communicate insights from data to influence decisions.
A cycle for data literacy projects
Across its projects, the Data Education in Schools initiative uses a problem-solving cycle called the PPDAC cycle. This cycle is a useful tool for creating educational resources and for teaching, as you can use it to structure resources, and to concentrate on areas to develop learner skills.
The PPDAC data problem-solving cycle
The five stages of the cycle are:
Problem: Identifying the problem or question to be answered
Plan: Deciding what data to collect or use to answer the question
Data: Collecting, cleaning, and organising the data
Analysis: Preparing, modelling, and visualising the data, e.g. in a graph or pictogram
Conclusion: Reviewing what has been learned about the problem and communicating this with others
Smaller data literacy projects may focus on one or two stages within the cycle so learners can develop specific skills or build on previous learning. A large project usually includes all five stages, and sometimes involves moving backwards — for example, to refine the problem — as well as forwards.
Data literacy for primary school learners
At primary school, the aim of data literacy projects is to give learners an intuitive grasp of what data looks like and how to make sense of graphs and tables. Our speakers gave some great examples of playful approaches to data. This can be helpful because younger learners may benefit from working with tangible objects, e.g. LEGO bricks, which can be sorted by their characteristics. Kate and Judy told us about one learner who collected data about their clothes and drew the results in the form of clothes on a washing line — a great example of how tangible objects also inspire young people’s creativity.
As learners get older, they can begin to work with digital data, including data they collect themselves using physical computing devices such as BBC micro:bit microcontrollers or Raspberry Pi computers.
Coming soon: the recording of Kate’s and Judy’s seminar for you to watch. You can access their slides here.
Free resources for primary (and secondary) schools
For many attendees, one of the highlights of the seminar was seeing the range of high-quality teaching resources for learners aged 3–18 that are part of the Data Education in Schools project. These include:
Data 101 videos: A set of 11 videos to help primary and secondary teachers understand data literacy better.
Lesson resources: Lots of projects to develop learners’ data literacy skills. These are mapped to the Scottish primary and secondary curriculum, but can be adapted for use in other countries too.
More resources are due to be published later in 2023, including a set of prompt cards to guide learners through the PPDAC cycle, a handbook for teachers to support the teaching of data literacy, and a set of virtual data-themed escape rooms.
You may also be interested in the units of work on data literacy skills that are part of The Computing Curriculum, our complete set of classroom resources to teach computing to 5- to 16-year-olds.
Join our next seminar on primary computing education
At our next seminar we welcome Aim Unahalekhaka from Tufts University, USA, who will share research about a rubric to evaluate young learners' ScratchJr projects. If you have a tablet with ScratchJr installed, make sure to have it available to try out some activities. The seminar will take place online on Tuesday 6 June at 17.00 UK time. Sign up now so you don't miss out.
To find out more about connecting research to practice for primary computing education, you can see a list of our upcoming monthly seminars on primary (K–5) teaching and learning and watch the recordings of previous seminars in this series.
One of the most popular machines in any arcade is the Cyclone game, where a ring of LEDs illuminate in sequence and the player must push a button at the exact moment that a specific LED lights up. Get it just right and you can win a whole pile of tickets to spend in the arcade shop! But the machines in arcades tend to be rigged, with the timing altered slightly to make the game more difficult. This mini Cyclone game saves you a trip to the arcade and doesn’t employ cheats (unless you want it to).
This is the second Cyclone game built by Mirko Pavleski. The first was much larger, with a ring of 60 LEDs. The new version is smaller and simpler, with a ring of only 12 LEDs. The original increased the speed with each round, but this version sets a random speed (within a predefined range) each time. It tracks the number of rounds completed by a player before they fail and saves that high score in EEPROM storage, so it persists every time someone turns on the game.
The hardware is affordable and easy to find. It includes an Arduino Nano board, a WS2812B LED ring, a 16×2 character LCD with I2C interface, two buttons, a power switch, and a buzzer. Those components all mount to a basic stand made of PVC board and plywood wrapped in self-adhesive wallpaper. If you’re a fan of Cyclone games, this would make a great weekend project.
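The persistent high score is a nice touch that is easy to replicate. Here is a minimal sketch of the idea, assuming the score lives at EEPROM address 0 and the per-game speed is picked with random(); the surrounding game logic is omitted.

```cpp
// Minimal illustration of the persistent high-score idea: the best round count
// survives power cycles because it lives in the Nano's EEPROM. The address and
// the random speed range are assumptions; the real game logic is omitted.
#include <EEPROM.h>

const int HIGH_SCORE_ADDR = 0;

int loadHighScore() {
  int score;
  EEPROM.get(HIGH_SCORE_ADDR, score);
  if (score < 0 || score > 1000) score = 0;  // fresh EEPROM reads as 0xFF bytes
  return score;
}

void saveHighScore(int roundsCompleted) {
  // Called at game over; EEPROM.put only rewrites bytes that actually changed.
  if (roundsCompleted > loadHighScore()) {
    EEPROM.put(HIGH_SCORE_ADDR, roundsCompleted);
  }
}

void setup() {
  Serial.begin(9600);
  randomSeed(analogRead(A0));        // unconnected pin gives a rough random seed
  int stepDelay = random(40, 120);   // pick a new speed (ms per LED step) each game
  Serial.print("High score: ");
  Serial.println(loadHighScore());
  Serial.print("This game's step delay (ms): ");
  Serial.println(stepDelay);
}

void loop() {
  // ... run the chase-the-LED game here, then call saveHighScore() ...
}
```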
While it is easier now than ever before, getting into robotics is still daunting. In the past, aspiring roboticists were limited by budget and inaccessible technology. But today the challenge is an overwhelming abundance of different options. It is hard to know where to start, which is why Saul designed a set of easy-to-build and affordable robots called Bolt Bots.
There are currently five different Bolt Bot versions to suit different applications, and you can assemble all of them with the same set of hardware. Once you finish one, you can repurpose the components to make another. The current designs include a large four-legged walker (V1), a tiny four-legged walker (V2), a robot arm (V3), a car (V4), and a hanging plotter that can draw (V5). They all share a design language and utilize 3D-printed mechanical parts with off-the-shelf fasteners.
Every robot has an Arduino Micro board paired with an nRF24L01 radio transceiver module for control. Users can take advantage of existing RC transmitters or build a remote also designed by Saul. The other components include servo motors, an 18650 lithium battery, and miscellaneous parts like wires and screws. Some of the Bolt Bots require different servo motors, like continuous-rotation and mini 1.8g models, but most of them are standard 9g hobby servo motors.
Because there are five Bolt Bot variations that use the same components, this is an awesome ecosystem for getting started in robotics on a budget — especially for kids and teens.
One of the newer trends with exercise bikes these days is to have your cycling power a video of a wonderful environment that you could be cycling through if you had the ability to do so. The technical side of it is fairly simple – with some kind of encoder or sensor, you can track how much you're pedalling and translate that to video playback or, in this case, graphic generation.
This project by paddywwoof uses a Hall effect sensor to keep track of the speed of the bike, feeding into a Python program that renders karst, fjords, and alpine environments as you pedal. It’s easy enough to edit your own maps if you have the time too.
Some of the instructions are for a slightly older version of Raspberry Pi OS and the Python that comes with it. However, it should still work just fine.
During 2020, when a lot of people were working from home, James Wong decided to improve his workout routine. He also wanted to combine it with his research into machine learning on Raspberry Pi. Hence this project, which combines HIIT (high-intensity interval training) with Raspberry Pi to track his workouts and give him useful data on them.
How is your data used? To score yourself against others, naturally, taking advantage of competitive streaks to get you improving the efficiency of your workout. A Coral Edge TPU is used to aid in the machine learning part, improving the sampling rate to 30 frames per second.
The code is open-source and is readily available on GitHub. James believes it should be easily adaptable to other sports or workout types – yoga springs to mind when it comes to tracking body metrics!
Gamifying your workout has its upsides and downsides. However, having a target to beat, both in terms of a high score and pop culture characters to focus on, can make a workout much more fun. This build makes use of the kind of technology used by Ivan Drago in Rocky IV, or even the punch machines at arcades.
A Raspberry Pi reads an accelerometer to measure the power of your punch and calculate how many hit points (HP) are taken off the character you're fighting – in some cases this means Luke Skywalker or Darth Vader. A simple LED screen is used to let you know how much more fightin' you'll need to do.
Like all good games, the different characters increase in power, meaning you’ll be worn out by the time you take down the Dark Lord of the Sith. Or, maybe you’ll need to face him twice – depends how strong you are.
An encouraging weight tracker for those who need it
Wii Fit was a revolution for fitness video games, spawning loads of motion-controlled workouts. If you don’t really like to use your Balance Board much anymore, you can hack it to be a pair of smart scales using a Raspberry Pi and a pencil.
The Wii Balance Board connects via Bluetooth much like Wiimotes, although you’ll need a bit of custom code to keep it paired permanently – included with the rest of the code for this project. It does require you to press a button underneath the Balance Board though, achieved by the pencil we mentioned before – a simple solution that won’t need much maintenance.
Once you step on it, 250 measurements are taken to create an average (due to electronic noise) and it then sends the info to an IoT service called Initial State which does the tracking. It also sends you texts to let you know your progress, including encouraging messages as well.
Optimizing manufacturing processes is a requirement in any industry today, with electricity consumption in particular representing a major concern due to increased costs and instability. Analyzing energy use has therefore become a widespread need – and one that can also lead to early identification of anomalies and predictive maintenance: two important activities to put in place in order to minimize unexpected downtime and repair costs.
In particular, this approach can be applied to DC motors. Used in a wide range of applications, from small household appliances to heavy industrial equipment, these motors are critical components that require regular maintenance to ensure optimal performance and longevity. Unfortunately, traditional maintenance practices based on fixed schedules or reactive repairs can be time-consuming, expensive, and unreliable. This is where energy monitoring-based anomaly detection comes in: it can provide a crucial solution for the early detection of potential issues and malfunctions before they can cause significant damage to the motor.
This more proactive approach to maintenance continuously monitors the energy consumption of the motor and analyzes the data to identify any deviations from normal operating conditions. By tracking energy usage patterns over time, the system can detect early warning signs of potential problems, such as excessive wear and tear, imbalances or misalignments, and alert maintenance personnel to take corrective actions before the issue escalates.
Our solution
This Arduino-powered solution implements an energy monitoring-based anomaly detection system using a current sensor and machine learning models running on edge devices. By capturing the electricity flowing in and out of a machine, it can collect large amounts of data on energy usage patterns over time. This data is then used to train a machine learning model capable of identifying anomalies in energy consumption behaviors and alerting operators to potential issues. The solution offers a cost-effective and scalable method for maintaining equipment health and maximizing energy efficiency, while also reducing downtime and maintenance costs.
Motor Current Signature Analysis (MCSA)
In this application, a technique called Motor Current Signature Analysis is used. MCSA involves monitoring the electrical signature of the motor's current over time to detect any anomalies that may indicate potential issues or faults. To acquire real-time data, a Hall effect current sensor is attached in series with the supply line of the DC motor. The data are then analyzed using machine learning algorithms to identify patterns and trends that might indicate faulty motor operation. MCSA can be used to detect a number of issues like bearing wear, bent rotor bars, or even inter-turn short circuits.
Depending on the dimensions of the motor, using a non-invasive clamp-style current sensor – also known as a Split-Core Current Transformer – is recommended if a larger current draw is expected.
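As a sketch of the data-acquisition step, the snippet below samples an analog current sensor and reports the RMS current over a short window, which is the kind of stream an MCSA pipeline would analyze. The pin, reference voltage, zero offset, and sensitivity values are placeholders to be calibrated against the actual sensor and the Opta's analog front end.

```cpp
// Sample a Hall effect current sensor and compute RMS current over a window.
// All constants below are placeholders for illustration only.
const int   CURRENT_PIN   = A0;
const float ADC_REF       = 3.0;    // volts (placeholder; check the board's analog front end)
const float ADC_MAX       = 4095.0; // 12-bit reading (placeholder)
const float ZERO_OFFSET_V = 1.5;    // sensor output at 0 A (placeholder)
const float VOLTS_PER_AMP = 0.1;    // sensor sensitivity (placeholder)

void setup() {
  Serial.begin(115200);
  analogReadResolution(12);
}

void loop() {
  // Sample for ~200 ms and compute the RMS current over that window.
  const unsigned long windowMs = 200;
  unsigned long start = millis();
  double sumSquares = 0;
  unsigned long n = 0;

  while (millis() - start < windowMs) {
    float volts = analogRead(CURRENT_PIN) * ADC_REF / ADC_MAX;
    float amps  = (volts - ZERO_OFFSET_V) / VOLTS_PER_AMP;
    sumSquares += (double)amps * amps;
    n++;
  }

  float rms = sqrt(sumSquares / n);
  Serial.println(rms, 3);   // this windowed stream is what feeds the ML pipeline
  delay(50);
}
```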
Edge ML
To monitor the current fluctuation and run the anomaly-detecting ML model, the solution uses an Arduino Opta WiFi: a micro PLC suitable for Industrial IoT, which is excellent for this project because of its real-time data classification capabilities, based on a powerful STM32H747XI dual-core Cortex®-M7 + M4 MCU. The Arduino Opta WiFi works with both analog and digital inputs and outputs, allowing it to interact with a multitude of sensors and actuators. The Arduino Opta WiFi also features an Ethernet port, an RS485 half-duplex connectivity interface, and WiFi/Bluetooth® Low Energy connectivity, which makes it ideal for industrial retrofitting applications. You can find the full datasheet here.
To train the anomaly detection model, the project leverages the Edge Impulse platform, which is integrated within the Arduino ecosystem and makes it easy to develop, train, and deploy machine learning models on Arduino devices.
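Once a model is exported from Edge Impulse as an Arduino library, the on-device inference step typically follows the structure below. The header name is a hypothetical placeholder for the exported library, and the feature buffer would be filled with a fresh window of current samples like the one computed above.

```cpp
// Rough shape of the on-device inference step, following the structure of the
// static-buffer example that Edge Impulse generates with an exported library.
#include <motor_anomaly_inferencing.h>   // hypothetical name of the exported library

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback that hands slices of the feature buffer to the classifier.
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  // ... fill `features` with a fresh window of current-sensor samples here ...

  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result = { 0 };
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    Serial.print("anomaly score: ");
    Serial.println(result.anomaly, 3);   // higher scores flag abnormal motor behavior
  }
  delay(1000);
}
```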
Connectivity
Once the machine learning model was successfully deployed on the Arduino Opta, the anomaly detection results were forwarded via WiFi to the Arduino IoT Cloud. This enables easy monitoring and analysis of the data from multiple sensor nodes in real time.
Solving it with Arduino Pro
Let’s take a look at how we can put all of this together and what hardware and software solutions we would need for deployment. The Arduino Pro ecosystem is the most recent version of Arduino solutions, offering users the benefits of easy integration along with a range of scalable, secure, and professionally supported services.
The Arduino IDE 2.0 was used to program the Arduino Opta WiFi using C/C++. To train the Edge Impulse model, data was gathered from the current sensor for two classes: Normal Operation and Machine Off. The Motor Current Signature Analysis (MCSA) technique was implemented by extracting the frequency and power characteristics of the signal through a Spectral Analysis block. Additionally, an anomaly detection block was incorporated to identify any abnormal patterns in the data.
Here is a screenshot from a dashboard created directly in the Arduino Cloud, showcasing data received from the sensor nodes:
Here is an overview of the software stack and how a minimum deployment with one of each hardware module communicates to fulfil the proposed solution:
Conclusion
Through the implementation of a predictive maintenance system on an Arduino Opta WiFi PLC, using Edge Impulse ML models and the Arduino Cloud, this solution demonstrates the powerful potential of IoT technologies in industrial applications. With the use of current sensors and AI-driven anomaly detection models, the system enables real-time monitoring and fault detection of DC motors, providing valuable insights for predictive maintenance. The flexibility and scalability of the Arduino Opta WiFi platform make it a robust and cost-effective solution for implementing predictive maintenance systems in various industrial processes. Overall, the project highlights the significant advantages that MCSA and machine learning can offer in promoting efficiency, productivity, and cost savings for industrial processes.
For those who have to put up with a snoring partner or roommate, the scourge of listening to those droning sounds can be maddening and lead to a decrease in the quality of one’s sleep. So, as an attempt to remedy this situation, Flin van Asperen devised an “anti-snoring machine” out of readily accessible components that aims to teach people to stop snoring so much.
The bulk of the device was made by cutting foam boards to size and painting them gray before gluing each piece together to form a box. That enclosure was then given a pair of ears and covered in fake moss (to ‘blend in’ with other houseplants). Next, the circuit was created by connecting an Arduino Uno to a sound sensor element that detects if someone’s snoring has gotten too loud. Once the level has been reached, an MP3 player module is instructed to play a short audio clip via a speaker while a micro servo knocks over a cup onto the snorer’s head below.
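Here is a hedged sketch of that trigger logic, assuming an analog sound sensor, a DFPlayer Mini-style MP3 module, and a micro servo; the write-up does not specify the exact modules or threshold, so treat the parts and values as placeholders.

```cpp
// Sketch of the anti-snoring trigger: if the sound level crosses a threshold,
// play an audio clip and tip the cup with a servo. Pins, threshold, and the
// choice of MP3 module are assumptions -- adapt them to your hardware.
#include <Servo.h>
#include <SoftwareSerial.h>
#include <DFRobotDFPlayerMini.h>

SoftwareSerial mp3Serial(10, 11);    // RX, TX to the MP3 module
DFRobotDFPlayerMini mp3;
Servo cupServo;

const int SOUND_PIN = A0;
const int SNORE_THRESHOLD = 600;     // tune to your sensor and sleeper

void setup() {
  mp3Serial.begin(9600);
  mp3.begin(mp3Serial);
  mp3.volume(25);
  cupServo.attach(9);
  cupServo.write(0);                 // cup upright
}

void loop() {
  if (analogRead(SOUND_PIN) > SNORE_THRESHOLD) {
    mp3.play(1);                     // play the first clip on the SD card
    cupServo.write(90);              // knock the cup over
    delay(5000);                     // let the lesson sink in
    cupServo.write(0);
  }
  delay(50);
}
```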
Through this combination of sound and physical correction, van Asperen hopes that his anti-snoring contraption will be successful in stopping a person’s snoring. You can read more about the project here in its [translated] Instructables post.
The simplest MIDI (Musical Instrument Digital Interface) input devices use good ol’ fashioned buttons: push a button and the device sends a MIDI message to trigger a specific note. But that control scheme doesn’t replicate the flexibility of a real instrument very well, because a standard button is a binary mechanism. To introduce more range, Xavier Dumont developed this breath-controlled MIDI device.
This looks like a cross between a flute, an ocarina, and an old cell phone. The front face has 35 buttons to trigger specific notes. But there are two ways for the player to gain almost analog control over the output: a mouthpiece with a breath sensor and a linear touch sensor. The breath sensor lets the player control the intensity of a note by blowing into the mouthpiece like a wind instrument. The linear touch sensor, mounted on the bottom of the device, lets the user bend the pitch of the notes with their thumb.
Inside the 3D-printed enclosure is a custom PCB. Almost every component mounts directly onto that board. The exception is the touch sensor, which connects to the PCB through a jumper cable. An Arduino Micro monitors the keypad matrix, the touch sensor, and the breath sensor. It outputs MIDI messages to a computer connected via USB. There is a TFT screen for the control interface, which lets the user change modes, switch octaves, and tweak settings.
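A minimal sketch of the breath-to-MIDI idea, using the MIDIUSB library that runs on the Micro's native USB, might look like the following. The pins, the fixed note, and the scaling are assumptions; Dumont's firmware also handles the full keypad matrix and display, which are omitted here.

```cpp
// Breath sensor sets note velocity; the linear touch sensor drives pitch bend.
// Pins, note number, and scaling are illustrative assumptions.
#include <MIDIUSB.h>

const int BREATH_PIN = A0;
const int TOUCH_PIN  = A1;

void noteOn(byte channel, byte pitch, byte velocity) {
  midiEventPacket_t packet = {0x09, (byte)(0x90 | channel), pitch, velocity};
  MidiUSB.sendMIDI(packet);
}

void noteOff(byte channel, byte pitch) {
  midiEventPacket_t packet = {0x08, (byte)(0x80 | channel), pitch, 0};
  MidiUSB.sendMIDI(packet);
}

void pitchBend(byte channel, int value) {   // value: 0..16383, 8192 = center
  midiEventPacket_t packet = {0x0E, (byte)(0xE0 | channel),
                              (byte)(value & 0x7F), (byte)((value >> 7) & 0x7F)};
  MidiUSB.sendMIDI(packet);
}

void setup() {}

void loop() {
  static bool playing = false;
  int breath = analogRead(BREATH_PIN);      // 0..1023 from the breath sensor
  int touch  = analogRead(TOUCH_PIN);       // 0..1023 from the linear touch strip

  if (breath > 50 && !playing) {
    noteOn(0, 60, map(breath, 50, 1023, 1, 127));   // middle C as a stand-in note
    playing = true;
  } else if (breath <= 50 && playing) {
    noteOff(0, 60);
    playing = false;
  }

  pitchBend(0, map(touch, 0, 1023, 0, 16383));
  MidiUSB.flush();
  delay(10);
}
```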
Experienced servers are masters of balance and coordination, being able to carry several full glasses on a tray without spilling a drop. But many of us lack that skill and can’t carry even a single glass across a room without splashing half of it on our feet. To help the clumsy among us, YouTuber The Fedmog Challenge created this robotic beer tray that automatically balances glasses to avoid spills.
This robotic beer tray relies on the same kind of control algorithm used by self-balancing robots and drones: PID (proportional-integral-derivative). That acronym isn't very informative unless you're a math major, but it means that the robot uses fancy calculations to compensate for movement in real time through a closed feedback loop. In this case, the beer tray is constantly checking to see if it is level. If it isn't, then it uses motors to bring itself back to level as fast as it can without overcompensating.
The Fedmog Challenge made this machine using 3D-printed parts. The user holds the base, which connects to the tray on top via four servo-actuated linkages. An MPU6050 gyro/accelerometer module mounts to the tray to detect its position. An Arduino Nano board monitors the MPU6050 and adjusts the servo motor angles as necessary to keep the tray level.
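To illustrate the control loop, here is a simplified single-axis version: read a tilt angle from the MPU6050, run a PID calculation on the error from level, and drive one servo. The real tray coordinates four linkages, and the gains shown are untuned placeholders.

```cpp
// Single-axis PID leveling sketch: MPU6050 at I2C address 0x68, one servo on
// pin 9. Gains and scaling are placeholders that would need tuning.
#include <Wire.h>
#include <Servo.h>

Servo levelServo;
float integral = 0, lastError = 0;
const float Kp = 2.0, Ki = 0.05, Kd = 0.5;   // placeholder PID gains

float readPitchDegrees() {
  // Read raw accelerometer data from the MPU6050 and derive a tilt angle.
  Wire.beginTransmission(0x68);
  Wire.write(0x3B);                  // ACCEL_XOUT_H register
  Wire.endTransmission(false);
  Wire.requestFrom(0x68, 6);
  int16_t ax = Wire.read() << 8;
  ax |= Wire.read();
  Wire.read(); Wire.read();          // skip the Y axis
  int16_t az = Wire.read() << 8;
  az |= Wire.read();
  return atan2((float)ax, (float)az) * 180.0 / PI;
}

void setup() {
  Wire.begin();
  Wire.beginTransmission(0x68);
  Wire.write(0x6B); Wire.write(0);   // wake the MPU6050 out of sleep
  Wire.endTransmission();
  levelServo.attach(9);
}

void loop() {
  float error = 0 - readPitchDegrees();        // setpoint: perfectly level
  integral += error;
  float derivative = error - lastError;
  lastError = error;

  float correction = Kp * error + Ki * integral + Kd * derivative;
  levelServo.write((int)constrain(90 + correction, 0, 180));
  delay(10);
}
```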
There are a couple of problems with this design that keep it from being practical, though. First, the servos aren’t strong enough to handle much weight. Second, keeping the tray level isn’t enough to avoid spills. To do that, it would need to tilt to compensate for horizontal inertia. But we still like the idea and the build is fun to watch.
Carl comes from a background of making and simulators, working in space operations in Munich, training astronauts and flight controllers on the Columbus module of the International Space Station, and also being the technical director of the company that made the first commercial Crystal Mazes.
Renewable interest
Why did Carl choose Raspberry Pi, though?
“I wanted a small but powerful single-board computer with Wi-Fi and sound, a wide range of off-the-shelf third party hardware and software, and a suitable well-supported language and IDE to code it with,” Carl says. “Raspberry Pi and Python turned out to be the perfect combination.”
Building it took a while, and was apparently very hard, as everything else was designed and built from scratch using the Red Robotics RedBoard+, an add-on robotics controller for Raspberry Pi. It does quite a lot, though.
“There are software models of renewable electricity sources (wind and solar) driven by a weather model, non-renewables (fossil fuels and nuclear) which are controllable, storage devices (batteries and pumped hydro), and consumer demand,” Carl explains. “There is a working model of a wind farm (with three turbines) and a sunlamp. The idea is to control the fossil fuel and nuclear power to meet demand without blackouts or surplus and to keep the storage devices close to half-full.
“There is an operator control panel to monitor and control the grid, and a visitor control panel to set up a game. There are three ‘characters’ who help visitors understand what is going on with spoken messages: a robotic system voice and two human guides, as well as sound effects and music to add ambience.”
As it’s a game, players get a result which is uploaded to a dedicated website, including a summary of what you did and a certificate related to your final ‘score’. “In a museum installation, there will also be a themed landscape with physical mock-ups of the various elements,” Carl adds.
Futuristic energy
The project is still ongoing, although is already quite impressive. Carl has ideas for what he’d like to do with it, though.
“I’m hoping to interest museums of science and technology, energy supply companies and possibly the National Grid itself, as well as schools and universities,” Carl says. “I’m also keen to develop both the hardware and software to make it more modular, generic, smarter, and connected. It would be nice to use live data streamed from the National Grid and some AI in the control system.”
Almost all haptic feedback today comes in the form of vibration. But vibratory haptic feedback is clearly lacking, as it cannot convey information with any kind of precision or granularity. The user notices the vibration, and very coarse patterns may be recognizable, but that is a rudimentary approach that requires a lot of user focus. To help people navigate as they walk through cities, a team from the Max Planck Institute for Intelligent Systems developed a shape-changing interface called S-BAN.
The researchers designed S-BAN (Shape-Based Assistance for Navigation) to work with existing GPS navigation systems, such as Google Maps on smartphones, but to provide a better user experience. The S-BAN device looks like a small remote and the fore end actuates in two dimensions. It can move forward and backward, and pivot left and right to guide the user. If, for instance, the user needs to make an immediate left turn, it will pivot left. This lets people with visual impairments navigate through touch and helps everyone else walk with their eyes up instead of focused on their phones.
The prototype S-BAN unit contains an Arduino Nano board, a Bluetooth module for communication with the user’s smartphone, an IMU to monitor the current orientation of the device, a LiPo battery, and two miniature linear actuators. The complete package, in a 3D-printed enclosure, is very compact and could even double as a smartphone case to make its use more convenient.
So much of the research and development in the area of haptic feedback focuses on universal devices that can create a wide range of tactile sensations. But that has proven to be a massive challenge, as it is very difficult to package the number of actuators necessary for that flexibility in a device that is practical for the consumer market. That’s why TactorBots — devised by researchers from University of Colorado’s ATLAS Institute and Parsons School of Design — sidesteps the issue with a complete toolkit of robotic touch modules.
TactorBots includes both software and hardware, with the hardware coming in several different modules. Each module is wearable on the user’s wrist and has a unique way of touching their arm. One Tactor module strokes the user’s arm, while another taps them. There are other Tactor modules for rubbing, shaking, squeezing, patting, and pushing. Because each module only needs to perform a single tactile motion, they can do their jobs very well. It is also possible to chain several modules together so the user can feel the different sensations across their arm.
Custom web-based software running on a PC controls the Tactor modules, activating them to match virtual on-screen content, through a host module built around an Arduino Nano board. That host module is also wearable on the arm. Each Tactor module has a servo motor that connects directly to the host module through standard JST wires. The module enclosures, along with the sensation-specific mechanisms, were all 3D-printed. The mechanisms differ based on the sensation they were designed to create, but they’re also simple and only require a single servo to operate.
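As a rough idea of how the host module could bridge the PC software and a Tactor's servo, the sketch below reads single-character commands over USB serial and plays the corresponding motion. The command letters, pin, and angles are illustrative assumptions rather than the researchers' actual protocol.

```cpp
// Host-module sketch: map single-character serial commands from the PC to
// simple servo motions on one attached Tactor module.
#include <Servo.h>

Servo tactor;

void setup() {
  Serial.begin(115200);
  tactor.attach(9);
  tactor.write(90);                 // rest position
}

void loop() {
  if (!Serial.available()) return;

  char cmd = Serial.read();
  if (cmd == 'T') {                 // tap: quick out-and-back motion
    tactor.write(130);
    delay(150);
    tactor.write(90);
  } else if (cmd == 'S') {          // stroke: slow sweep across the arm
    for (int a = 60; a <= 120; a++) {
      tactor.write(a);
      delay(15);
    }
    tactor.write(90);
  }
}
```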
Non-formal learning initiatives are a popular way to engage children in computing from a young age and introduce them to the fun, creative world of coding and digital making. As part of our commitment to an evidence-based approach, we are partnering with Durham University on an exciting evaluation project to study the impact non-formal activities like Code Club have on young people in UK schools. Your school is invited to take part in the project.
We’re inviting UK schools to take part
The project will explore students’ attitudes to learning coding, and to learning generally. We hope to understand more about how extracurricular activities affect students’ confidence and skills. If you’re a teacher at a UK school, we would love for you to register your interest in taking part — your school doesn’t need to have a Code Club to participate. Taking part is easy: simply have some of your students fill in a few short surveys.
As a token of our appreciation for your school’s involvement, you will receive some cool swag and an exclusive invitation to an online, educator-focused workshop where you will explore digital making with us. We’ll even provide you with all the kit you need to make something great, including a Raspberry Pi Pico. Your involvement will contribute to better computing education for UK students.
Computing in UK classrooms and in Code Clubs
In the UK, computing is taught at school, providing children with the opportunity to learn the importance of the subject and its many applications from a young age. In addition, non-formal education can play a pivotal role in fostering a positive learning experience, particularly in computing. Research on computing education indicates that non-formal settings are linked to improvement in students’ self-efficacy and interest in computing. Through participation in non-formal computing education, learners can gain valuable hands-on experience and develop problem-solving, collaboration, and presentation skills.
That’s the thinking behind Code Clubs, which offer students a relaxed environment that encourages creativity, teamwork, and self-paced learning. By providing students with project-based learning opportunities and access to resources and mentors, Code Clubs help foster a passion for computing while also strengthening their understanding of key concepts.
A previous evaluation showed that students who participated in Code Clubs reported improvement in their coding skills and a positive perception about their coding abilities. Code Clubs have already made a significant impact on learners worldwide, with over 3500 Code Clubs around the world currently reaching tens of thousands of young people and inspiring a new generation of digital makers.
Help us with this project
Your school’s participation in this project will help increase our understanding of what works in computing education. Together we can ensure that young people are equipped with the skills and confidence to realise their full potential through the power of computing and digital technologies.
To register your interest in joining the project, simply fill out our short form and we’ll be in touch soon.
Mark your calendars: May 23rd-25th we’ll be at SPS Italia, one of the country’s leading fairs for smart, digital, sustainable industry and a great place to find out what’s new in automation worldwide. We expect a lot of buzz around AI for IoT applications – and, of course, we’ll come prepared to give our own, open-source perspective on the AIoT trend.
At Arduino Pro’s booth C012, pavilion 7, our experts will be presenting some of the latest additions to our ever-growing ecosystem, which includes everything companies need to fully embrace digital transformation with professional performance paired with Arduino’s ease of use and open-source philosophy. You can explore our complete digital brochure here, but let us point out some recent highlights.
Meet the Arduino Pro ecosystem at SPS Italia 2023
Over the years, Arduino Pro has built quite the presence on the market with SOMs like the Portenta H7 and X8, recently joined by the Portenta C33: a cost-effective, high-performance option that makes automation accessible to more users than ever, based on the RA6M5, an Arm® Cortex®-M33 microcontroller from Renesas.
Our Nicla family of ultra-compact boards also expanded: after Nicla Sense ME and Nicla Vision, Nicla Voice packs all the sensors, intelligence and connectivity you need for speech recognition on the edge, leveraging AI and ML.
What’s more, the Arduino ecosystem also includes turnkey solutions like the Portenta Machine Control and the new Opta, our very first microPLC, designed in partnership with Finder to support the Arduino programming experience with the main PLC standard languages – and available in 3 variants with different connectivity features: Opta Lite, Opta RS485, and Opta WiFi. Both the Portenta Machine Control and Opta can be programmed via the new PLC IDE, designed to help you boost production and build automation with your own Industry 4.0 control system.
Finally, since SPS Italy’s last edition we have launched Arduino Cloud for Business: a dedicated Cloud plan for professional users requiring advanced features for secure device management including OTA updates, user-friendly fleet management, and RBAC to safely share dashboards among multiple users and organizations. Specific optional add-ons allow you to further customize your solution with Portenta X8 Manager, LoRaWAN Device Manager or Enterprise Machine Learning Tool – accelerating your IoT success, whatever the scale of your enterprise may be.
Images from SPS Italy 2022
Team Arduino Pro at SPS Italy 2022
If you are attending SPS Italia, don’t miss the conference by our own Head of Arduino Pro Customer Success Andrea Richetta, joined by Product Managers Marta Barbero and Francesca Gentile (in Italian): on May 24th at 2:30pm they will dive deep on the tools Arduino Pro makes available for all companies ready to take part in the IoT revolution, with a unique combination of performance and ease of use. This is your chance to discover how you too can integrate safe and professional Industry 4.0 solutions in new or existing applications, quickly growing from prototype to large-scale production with sensors, machine vision, embedded machine learning, edge computing, and more.
Curious? Register to access the fair if you are an industry professional, and reach out to book a meeting with a member of our team.
On a basic level, making sure to take breaks while making can be a safety precaution. Nobody should be soldering while they can barely stay awake, after all. More importantly, sometimes you need to put some space between yourself and a project to allow your brain to rest. Constantly hitting it against a problem you’re having is hardly a good way to solve it.
Sleep on it
I once read a (possibly apocryphal) story about how a scientist fell asleep while trying to figure out how they got some specific results – while asleep they dreamt about a solution and, after waking up, found out it was correct.
I’ve never quite had a eureka moment in my dreams like that myself, but there have been plenty of times when a bit of engineering and/or code have stumped me until I looked at it with fresh eyes the next morning.
Even writing stuff for the magazine can benefit from a break. Sometimes an angle or a subject isn't quite making sense and that little bit of time apart helps focus my thoughts. And in terms of focus, the Pomodoro method of 25 minutes of work with a five-minute break also really helps me. We had a project about making your own Pomodoro timer in issue 103 which I should make. However, I've just been using my phone and its Focus feature.
At the other end of the spectrum, all-nighters really are overrated I feel, although as I near my forties, they're a little harder to do anyway. It means I'm doing them much less though.
On hiatus
As well as short breaks, sometimes you need to just take time off a hobby. Burnout is very real, whether it’s with work or with something you’re doing for fun, and you don’t want to ruin your relationship with your favourite hobby because you forced yourself to keep doing something. When taking breaks from one hobby in the past, I’ve focussed on another hobby instead. Flexing a different part of your mind and/or skill set is always good for growth – and can even aid you in other hobbies. Although, sometimes, you have that con crunch and need to get your prop working by any means necessary. Just make sure not to do any soldering in the hotel room; I speak from experience. It’s not a very suitable space.
A great number of activities require the precise application of force with the fingertips. When playing a guitar, for example, you must exert the proper amount of force to push a string against the fret board. Training is difficult, because new guitarists don’t know how much force to apply. This wearable system controls fingertip force to help users learn how to perform new activities.
Developed by NTT Corporation researchers, the system needs two parts to enable fingertip force control: stimulation and feedback. EMS (electrical muscle stimulation) handles the former by pulsing a small amount of electric current through the user's muscles, forcing them to contract. That is commonplace technology today, with uses ranging from legitimate medical therapy to more homeopathic remedies. For feedback, the system utilizes bioacoustic technology (a transducer and piezoelectric sensor) to determine the amount of force applied by a user's finger.
An Arduino Uno Rev3 board paired with a function generator gives the system precise control over the EMS unit, allowing it to adjust muscle stimulation as necessary. It does so in real-time in response to fingertip force estimated by a machine-learning regression model. An expert in the activity could use the system to train it on the proper amount of force for an action, then the system could provide the amount of stimulation necessary for a new student to replicate the expert’s force. With practice, the student would gain a feel for the force and then could perform the activity on their own without the aid of the system.
Modern consumer devices are fantastic at providing visual and auditory stimulation, but they fail to excite any of the other senses. At most, we get some tactile sensation in the form of haptic feedback. But those coarse vibrations do little more than provide an indication that something is happening, which is why researchers look for alternatives. Developed by a team of City University of Hong Kong researchers, Emoband provides a new kind of tactile feedback in the form of stroking and squeezing of the user's wrist.
Emoband looks a bit like an over-sized smartwatch with three bands. Two of those bands are just normal straps that secure the device to the user's wrist. The third band, in the middle, can be made of several different materials. It attaches to two spools on the device, which can reel the material in or out. If both reel in the band, then it will squeeze the user's wrist. If one reels in while the other reels out, then the band strokes the user's wrist. Depending on the material, those sensations may elicit different emotional responses from the user.
The prototype Emoband unit uses an Arduino Mega 2560 board to control two servo motors that turn the spools for the material band. A laptop communicates with the Arduino through serial, telling it how to move the band to mirror the onscreen content. Two load cells provide feedback on the amount of squeezing pressure. The prototype device’s frame and spools were 3D-printed.
In the future, it could be possible to integrate this functionality into the smartwatches that people already wear—if the general public decided that they want this kind of tactile feedback. Initial testing showed the users certainly noticed the feedback, but it isn’t clear if they thought it was worthwhile or practical. More details on the project can be found in the researchers’ paper here.
The task of gathering enough data to classify distinct sounds not captured in a larger, more robust dataset can be very time-consuming, at least until now. In his write-up, Shakhizat Nurgaliyev describes how he used an array of AI tools to automatically create a keyword spotting dataset without the need for speaking into a microphone.
The pipeline is split into three main parts. First, the Piper text-to-speech engine was downloaded and configured via a Python script to output 904 distinct samples of the TTS model saying Nurgaliyev’s last name in a variety of ways to decrease overfitting. Next, background noise prompts were generated with the help of ChatGPT and then fed into AudioLDM which produces the audio files based on the prompts. Finally, all of the WAV files, along with “unknown” sounds from the Google Speech Commands Dataset, were uploaded to an Arduino ML project.
Training the model for later deployment on a Nicla Voice board was accomplished by adding a Syntiant audio processing block and then generating features to train a classification model. The resulting model could accurately determine when the target word was spoken around 96% of the time — all without the need for manually gathering a dataset.