Category: Linux

  • The Lightwaves is a participatory audio-visual installation

    Reading Time: 2 minutes

    Arduino Team, March 5th, 2020

    Music and synchronized lighting can be a beautiful combination, as is evident in panGenerator’s recent installation, commissioned by the Męskie Granie concert tour in Poland.

    The interactive sculpture consisted of 15 drums that trigger waves of light traveling toward a huge helium-filled sphere floating above the area, appearing to charge it with sound and light energy as the instruments are played.

    “The audience was invited to drum collectively and together create an audio-visual spectacle – intensity of which depended on the speed and intensity of the drumming. That fulfilled the main goal of creating interactive art experience in which the audience can actively participate in the event rather than just passively enjoy the music, gathering and playing together.”

    The project incorporated 200 meters of addressable RGB LEDs and covered roughly 300 square meters, likely making it the biggest installation of its kind in the country. According to the designers, each of the drums featured a custom PCB equipped with an Arduino Nano and microphone, and used an MCP2515-based CAN setup for communication.

    All of this was assembled and taken down seven times over two months in cities around the country. Be sure to check out this dazzling display in action in the video below! 

    Website: LINK

  • How to deal with API clients, the lazy way — from code generation to release management

    Reading Time: 5 minutes

    This post is from Massimiliano Pippi, Senior Software Engineer at Arduino.

    The Arduino IoT Cloud platform aims to make it very simple for anyone to develop and manage IoT applications, and its REST API plays a key role in this search for simplicity. The IoT Cloud API at its core consists of a set of endpoints exposed by a backend service, but this alone is not enough to provide a full-fledged product to your users. What you need on top of your API service are:

    • Good documentation explaining how to use the service.
    • A number of plug-and-play API clients that can be used to abstract the API from different programming languages.

    Both those features are difficult to maintain because they get outdated pretty easily as your API evolves, but clients are particularly challenging: they’re written in different programming languages, and for each of those you should provide idiomatic code that works and is distributed according to the best practices defined by each language’s ecosystem.

    Depending on how many languages you want to support, your engineering team might not have the resources needed to cover them all, and borrowing engineers from other teams just to release a specific client doesn’t scale much. 

    Being in this exact situation, the IoT Cloud team at Arduino had no choice but to streamline the entire process and automate as much as we could. This article describes how we provide documentation and clients for the IoT Cloud API.

    Clients generation workflow

    When the API changes, a number of steps must be taken in order to ship an updated version of the clients, as summarized in the following drawing.

    As you can see, what happens after an engineer releases an updated version of the API essentially boils down to the following macro steps:

    1. Fresh code is generated for each supported client.
    2. A new version of the client is released to the public.

    The generation process

    Part 1: API definition

    Every endpoint provided by the IoT Cloud API is listed within a Yaml file in OpenAPI v3 format, something like this (the full API spec is here):

    /v2/things/{id}/sketch:
        delete:
          operationId: things_v2#deleteSketch
          parameters:
          - description: The id of the thing
            in: path
            name: id
            required: true
            schema:
              type: string
          responses:
            "200":
              content:
                application/json:
                  schema:
                    $ref: '#/components/schemas/ArduinoThing'
              description: OK
            "401":
              description: Unauthorized
            "404":
              description: Not Found
    

    The format is designed to be human-readable, which is great because we start from a version automatically generated by our backend software and manually fine-tune it to get better results from the generation process. At this stage, you might need some help from the language experts in your team in order to perform some trial and error and determine how good the generated code is. Once you’ve found a configuration that works, operating the generator doesn’t require any specific skill, which is why we were able to automate it.

    Part 2: Code generation

    To generate the API clients in the different programming languages we support, along with the API documentation, we use a CLI tool called openapi-generator. The generator parses the OpenAPI definition file and produces a number of source code modules in a folder of your choice on the filesystem. If you have more than one client to generate, you will very soon notice how cumbersome the process can get: you might need to invoke openapi-generator multiple times, with different parameters, targeting different places in the filesystem, maybe different git repositories; when the generation step is done, you have to go through all the generated code, add it to version control, maybe tag it, and push it to a remote… You get the gist.

    To streamline the process described above we use another CLI tool, called Apigentools, which wraps the execution of openapi-generator according to a configuration you can keep under version control. Once Apigentools is configured, it takes zero knowledge of the toolchain to generate the clients – literally anybody can do it, including an automated pipeline on a CI system.

    Part 3: Automation

    Whenever the API changes, the OpenAPI definition file hosted in a GitHub repository is updated accordingly, usually by one of the backend engineers of the team. A Pull Request is opened, reviewed, and finally merged on the master branch. When the team is ready to generate a new version of the clients, we push a special git tag in semver format, and a GitHub workflow immediately starts running Apigentools, using a configuration stored in the same repository. If you look at the main configuration file, you might notice that for each language we want to generate clients for, there’s a parameter called ‘github_repo_name’: this is a killer feature of Apigentools that lets us push the automation process beyond the original plan. Apigentools can output the generated code to a local git repository, adding the changes in a new branch that’s automatically created and pushed to a remote on GitHub.

    The release process

    To ease the release process and to better organize the code, each API client has its own repo: you’ll find Python code in https://github.com/arduino/iot-client-py, Go code in https://github.com/arduino/iot-client-go and so on and so forth. Once Apigentools finishes its run, you end up with new branches containing the latest updates pushed to each one of the clients’ repositories on GitHub. As the branch is pushed, another GitHub workflow starts (see the one from the Python client as an example) and opens a Pull Request, asking to merge the changes on the master branch. The maintainers of each client receive a Slack notification and are asked to review those Pull Requests – from now on, the process is mostly manual.
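
    For a sense of what users get at the end of this pipeline, here is a minimal sketch of how a client generated this way is typically consumed. It assumes the standard openapi-generator Python layout; the module, class, and method names below are illustrative guesses rather than the actual identifiers from iot-client-py.

    import iot_api_client                      # assumed package name
    from iot_api_client.rest import ApiException

    # Configure the client with an OAuth2 access token obtained out of band.
    config = iot_api_client.Configuration()
    config.access_token = "YOUR_ACCESS_TOKEN"

    client = iot_api_client.ApiClient(config)
    things = iot_api_client.ThingsV2Api(client)   # the generator emits one class per API tag

    try:
        # Corresponds to a GET /v2/things endpoint defined in the OpenAPI spec.
        for thing in things.things_v2_list():
            print(thing.id, thing.name)
    except ApiException as err:
        print("API call failed:", err.status, err.reason)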

    It doesn’t make much sense to automate further, mainly for two reasons:

    1. Each client has its own release mechanism: Python has to be packaged as a wheel and pushed to PyPI, JavaScript has to be pushed to npm, for Go a tag is enough, and docs have to be made publicly accessible.
    2. We want to be sure a human validates the code before it’s generally available through an official release.

    Conclusions

    We’ve been generating API clients for the IoT Cloud API like this for a few months, performing multiple releases for each supported programming language, and we now have a good idea of the pros and cons of this approach.

    On the bright side: 

    • The process is straightforward, easy to read, easy to understand.
    • The system requires very little knowledge to be operated.
    • The time between a change in the OpenAPI spec and a client release is within minutes.
    • It took one engineer about two weeks to set up the system, and our feeling is that the investment has already paid for itself, or very nearly has.

    On the not-so-bright side: 

    • While operating the system is trivial, debugging the pipeline when something goes awry requires a deep dive into the tools described in this article, and therefore a high level of skill.
    • If you stumble upon a weird bug in openapi-generator and the bug doesn’t get attention, contributing patches upstream might be extremely difficult because the codebase is complex.

    Overall we’re happy with the results and we’ll keep building up features on top of the workflow described here. A big shoutout to the folks behind openapi-generator and Apigentools!

    Website: LINK

  • Introducing Raspberry Pi Imager, our new imaging utility

    Reading Time: 3 minutes

    We’ve made a simpler way to image your microSD card with Raspbian, the official Raspberry Pi operating system, and other operating systems. Introducing our new imaging utility, Raspberry Pi Imager.

    Raspberry Pi Imager

    Simplifying the Raspberry Pi experience

    For me, one of the most important aspects of the Raspberry Pi experience is trying to make it as easy as possible to get started.  To this end, since launching the first Raspberry Pi, we’ve added a GUI to our operating system, a wizard to help you set up your Raspberry Pi the first time you boot it, and lots of books and magazines to get people up and running.  We’ve even developed the Raspberry Pi Desktop Kit to put all the things you need (yes, Alex, I know – except for a monitor) into a single box to make it as easy as possible!

    SD cards can be a bit tricky

    Despite all these moves towards more simplicity, when it comes to microSD cards, programming them with your favourite Raspberry Pi operating system has always been a little bit tricky.

    The main problem comes from the differences between the operating systems that people’s main computers are likely to use: Windows, macOS, and Linux all use different methods of accessing the SD card, which doesn’t help matters. And, for some new Raspberry Pi users, understanding where to find the latest up-to-date image and how to get it onto the microSD card can be a bit confusing, unless you’ve had prior experience with image-flashing tools such as Etcher.

    For that reason, we’ve always suggested that you should buy a pre-loaded NOOBS SD card from your Raspberry Pi Approved Reseller.

    But what if you want to re-image an existing card?

    Introducing the new Raspberry Pi Imager

    Image Utility

    No Description

    From today, Raspberry Pi users will be able to download and use the new Raspberry Pi Imager, available for Windows, macOS and Ubuntu.

    The utility is simple to use and super speedy, thanks to some shortcuts we’ve introduced into the mechanics.

    Firstly, Raspberry Pi Imager downloads a .JSON file from our website with a list of all current download options, ensuring you are always installing the most up-to-date version.

    Once you’ve selected an operating system from the available options, the utility reads the relevant file directly from our website and writes it straight to the SD card. This speeds up the process quite considerably compared to the standard process of reading it from the website, writing it to a file on your hard drive, and then, as a separate step, reading it back from the hard drive and writing it to the SD card.

    During this process, Raspberry Pi Imager also caches the downloaded operating system image – that is to say, it saves a local copy on your computer, so you can program additional SD cards without having to download the file again.
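
    The Imager itself is a native application, but the download-straight-to-card idea is easy to picture with a short Python sketch. Everything here (URL, device path, chunk size) is a placeholder used purely for illustration; it is not how the Imager is actually implemented.

    import requests

    IMAGE_URL = "https://example.com/os-image.img"   # placeholder download URL
    SD_DEVICE = "/dev/sdX"                           # placeholder - triple-check before writing!
    CACHE_FILE = "/tmp/os-image.cache"

    with requests.get(IMAGE_URL, stream=True) as resp, \
         open(SD_DEVICE, "wb") as card, \
         open(CACHE_FILE, "wb") as cache:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=4 * 1024 * 1024):
            card.write(chunk)    # write straight to the card, no intermediate image file
            cache.write(chunk)   # local copy means the next card needs no download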

    Open source and ready to go!

    Download the Raspberry Pi Imager from our downloads page today.

    Raspberry Pi Imager is fully open source and was originally written as a modification of the PiBakery tool, later modified and finished by Floris Bos (the original writer of the NOOBS tool and the PiServer tool). You can see Floris’ other software, for data centres, here.

    Website: LINK

  • Control the volume of programs running on your Windows PC like a DJ

    Reading Time: < 1 minute

    Arduino Team, March 4th, 2020

    If you have multiple applications open in Windows, you may want one to be louder than the others, but what if you want to adjust levels with physical sliders like an actual DJ? If that sounds interesting, check out this controller by “Aithorn”.

    The device uses an Arduino Nano to read signals from each slider and pass this info over to the computer. A Python script, along with a VBScript helper, runs on the PC to control the master and program-specific volumes. 
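
    The project’s actual code is linked below; as a rough illustration of the approach, the sketch that follows assumes the Nano prints one line of pipe-separated slider readings per update and uses the pycaw library on the PC side. The COM port, baud rate, and application names are placeholders.

    import serial
    from pycaw.pycaw import AudioUtilities

    PORT, BAUD = "COM3", 9600                      # placeholders - match your Nano sketch
    APPS = ["chrome.exe", "spotify.exe"]           # slider 0 -> Chrome, slider 1 -> Spotify

    ser = serial.Serial(PORT, BAUD)
    while True:
        line = ser.readline().decode(errors="ignore").strip()   # e.g. "512|1023"
        if not line:
            continue
        levels = [int(v) / 1023.0 for v in line.split("|")]      # 10-bit ADC -> 0.0..1.0
        for session in AudioUtilities.GetAllSessions():
            proc = session.Process
            if proc and proc.name().lower() in APPS:
                idx = APPS.index(proc.name().lower())
                if idx < len(levels):
                    session.SimpleAudioVolume.SetMasterVolume(levels[idx], None)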

    Code for the project, which was actually written by Omri Harel, is available on GitHub. You can see the original version of it in the video below, working its magic on a shoebox stand. Print files for Aithorn’s new enclosure can be found here.

    Website: LINK

  • NeoPixel LED Mirror

    Reading Time: 3 minutes

    “The project uses a Raspberry Pi 3B+, a Raspberry Pi Camera [Module], Python, 3D printing, and 576 NeoPixel LEDs to create an interactive art piece that shows you your reflection in ‘low resolution’ by lighting up a grid of LEDs,” says Alex.

    In essence, it’s taking your picture using a Raspberry Pi Camera Module, converting it to a low‑resolution picture, and then setting the LEDs to the same colour as the individual pixels in the resulting image. Magic? Yes. Practical? No. Fun? Absolutely.

    A Raspberry Pi handles it all, although another computer is on hand for project info

    Make art with LED and Raspberry Pi

    Where did such an idea come from, though? “I was inspired by the various ‘analogue mirrors’ made by Daniel Rozin,” Alex reveals.

    “The Children’s Museum of Pittsburgh, where I built an exhibit for a Systems Engineer class that I took during graduate school, had one of Mr Rozin’s mirrors on display. The mirror at the Children’s Museum used blocks of wood and servo motors to display images of people who were standing in front of it in low resolution. Ever since then, I’ve been following Daniel’s work, and wanted to build one of his mirrors myself. I thought that such a project would be perfect for my YouTube channel, because it would allow me to put my own twist on the concept while simultaneously teaching people about programming, 3D printing, laser cutting, and more!”

    The build itself has an impressive list of components. Alex created a custom prototype PCB, 3D-printed and laser-cut several parts, and connected 24 strips of 24 LEDs to make the magic 576 number. A Raspberry Pi was used to power it due to its size, ability to run Python and address all the LEDs, along with the Raspberry Pi Camera Module which makes it all possible.

    Alex eventually got a laser cutter to help speed up production – like for these mounting grids

    On display

    With such an unconventional project, you might expect some issues when it was finally unveiled. However, it went down very well.

    “The project made its debut at the 2019 Cleveland Maker Faire, where it ran for over eight hours during the event without a single hiccup,” says Alex. “An advantage of being able to run everything via Python code is that I could adjust camera settings on the fly based on lighting conditions in the location where I was at, making sure that the mirror clearly displayed the reflections of visitors throughout the day.

    Maker Faire attendees interacted with the mirror and stopped by the Super Make Something booth to learn more about the YouTube channel, Raspberry Pi, and Python programming. “One of my favourite observations I made during this event is that the mirror captured the interest of an audience with a broad age range – people between the ages of 5 and 65 were fascinated by the mirror and enjoyed moving their limbs and making faces in front of it, excited to see what would happen.”

    If you’ve not managed to see the mirror in person, all is not lost. Alex has been in discussions to add the mirror to the Great Lakes Science Center, very hopefully with upgrades. Look out for more info on his YouTube channel.

    Diffuser plates are required on all the LEDs – a job for a glue gun

    How to make a mirror from LEDs

    1. Camera settings are locked as the code starts, before capturing the image and selecting a small region of 24×24 pixels.
    2. This image is then converted to greyscale, after which the code extracts one of its colour planes as an array. This array contains the brightness information for each pixel of the extracted 24×24 region. The square array is then reshaped into a 1×576 vector, and brightness values are assigned to LEDs.
    3. Brightness values are used to light up each pixel, after which the image is cleared and the capture/display process is repeated. To display images as quickly as possible, the Python code is optimised to operate on vectors and to minimise the number of for loops.
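
    As a rough Python sketch of that loop (not Alex’s actual code), something like the following would work; the library choices (picamera, Pillow, Adafruit’s neopixel), the data pin, and the assumption that the strips are wired in plain row order are all mine.

    import time
    from io import BytesIO

    import board
    import neopixel
    from PIL import Image
    from picamera import PiCamera

    SIZE = 24                                             # 24 x 24 grid = 576 LEDs
    pixels = neopixel.NeoPixel(board.D18, SIZE * SIZE, auto_write=False)

    camera = PiCamera(resolution=(640, 480))
    camera.exposure_mode = "off"                          # lock camera settings first
    camera.awb_mode = "off"

    while True:
        buf = BytesIO()
        camera.capture(buf, format="jpeg", use_video_port=True)
        buf.seek(0)
        img = Image.open(buf).convert("L").resize((SIZE, SIZE))   # greyscale, 24 x 24
        for i, value in enumerate(img.getdata()):                 # 576 brightness values
            pixels[i] = (value, value, value)                     # grey pixel per LED
        pixels.show()
        time.sleep(0.05)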

  • SD Card Speed Test

    Reading Time: 6 minutes

    Since we first launched Raspberry Pi, an SD card (or microSD card) has always been a vital component. Without an SD card to store the operating system, Raspberry Pi is pretty useless*! Over the ensuing eight years, SD cards have become the default removable storage technology, used in cameras, smartphones, games consoles and all sorts of other devices. Prices have plummeted to the point where smaller size cards are practically given away for free, and at the same time storage capacity has increased to the point where you can store a terabyte on your thumbnail.

    SD card speed ratings, and why they matter

    However, the fact that SD cards are now so commonplace sometimes conceals the fact that not all SD cards are created equal. SD cards have a speed rating – how fast you can read or write data to the card – and as card sizes have increased, so have speed ratings. If you want to store 4K video from your digital camera, it is important not just that the card is big enough to hold it, but also that you can write it to the card fast enough to keep up with the huge amount of data coming out of the camera.

    The speed of an SD card will also directly affect how fast your Raspberry Pi runs, in just the same way as the speed of a hard drive affects how fast a conventional desktop computer runs. The faster you can read data from the card, the faster your Raspberry Pi will boot, and the faster programs will load. Equally, write speed will also affect how well any programs which save large quantities of data run – so it’s important to use a good-quality card.

    What speed can I expect from my SD card?

    The speed rating of an SD card should be printed either on the card itself or on the packaging.

    The 32GB card shown below is Class 4, denoted by the 4 inside the letter C – this indicates that it can write at 4MB/s.

    The 64GB card shown below is Class 10, and so can write at 10MB/s. It also shows the logo of UHS (“ultra high speed”) Class 1, the 1 inside the letter U, which corresponds to the same speed.

    More recently, speeds have started to be quoted in terms of the intended use of the card, with Class V10 denoting a card intended for video at 10MB/s, for example. But the most recent speed categorisation – and the one most relevant to use in a Raspberry Pi – is the new A (for “application”) speed class. We recommend the use of Class A1 cards (as the one above – see the A1 logo to the right of the Class 10 symbol) in Raspberry Pi – in addition to a write speed of 10MB/s, these support at least 1500 read operations and 500 write operations per second. All the official Raspberry Pi microSD cards we sell meet this specification.

    A new tool for testing your SD card speed

    We’ve all heard the stories of people who have bought a large capacity SD card at a too-good-to-be-true price from a dodgy eBay seller, and found that their card labelled as 64GB can only actually hold 2GB of data. But that is at least fairly easy to spot – it’s much harder to work out whether your supposedly fast SD card is actually meeting its specified speed, and unscrupulous manufacturers and sellers often mislabel low quality cards as having unachievable speeds.

    Today, as the first part of a new suite of tests which will enable you to perform various diagnostics on your Raspberry Pi hardware, we are releasing a tool which allows you to test your SD card to check that it performs as it should.

    To install the new tool, from a terminal do

    sudo apt update
    sudo apt install agnostics

    (“agnostics”? In this case it’s nothing to do with religion! I’ll leave you to work out the pun…)

    Once installed, you will find the new application “Raspberry Pi Diagnostics” in the main menu under “Accessories”, and if you launch it, you’ll see a screen like this:

    In future, this screen will show a list of the diagnostic tests, and you will be able to select which you want to run using the checkboxes in the right-hand column. But for now, the only test available is SD Card Speed Test; just press “Run Tests” to start it.

    Understanding your speed test results

    One thing to note is that the write performance of SD cards declines over time. A new card is blank and data can be written to what is effectively “empty” memory, which is fast; but as a card fills up, memory needs to be erased before it can be overwritten, and so writes will become slower the more a card is used. The pass / fail criteria in this test assume a new (or at least freshly formatted) card; don’t be alarmed if the write speed test fails when run on the SD card you’ve been using for six months! If you do notice your Raspberry Pi slowing down over time, it may be worth backing up your SD card using the SD Card Copier tool and reformatting it.

    The test takes a minute or so to run on a Raspberry Pi 4 (it’ll take longer on older models), and at the end you’ll see a results screen with either (hopefully) PASS or (if you are less fortunate) FAIL. To see the detailed results of the speed test, press “Show Log”, which will open the test log file in a text editor. (The log file is also written to your home directory as rpdiags.txt.)

    We are testing against the A1 specification, which requires a sequential write speed of 10MB/s, 500 random write operations per second, and 1500 random read operations per second; we run the test up to three times. (Tests of this nature are liable to errors due to other background operations accessing the SD card while the test is running, which can affect the result – by running the test multiple times we try to reduce the likelihood of a single bad run resulting in a fail.)

    If the test result was a pass, great! Your SD card is good enough to provide optimum performance in your Raspberry Pi. If it failed, have a look in the log file – you’ll see something like:

    Raspberry Pi Diagnostics - version 0.1
    Mon Feb 24 09:44:16 2020
    Test : SD Card Speed Test
    Run 1
    prepare-file;0;0;12161;23
    seq-write;0;0;4151;8
    rand-4k-write;0;0;3046;761
    rand-4k-read;9242;2310;0;0
    Sequential write speed 4151 kb/s (target 10000) - FAIL
    Note that sequential write speed declines over time as a card is used - your card may require reformatting
    Random write speed 761 IOPS (target 500) - PASS
    Random read speed 2310 IOPS (target 1500) - PASS
    Run 2
    prepare-file;0;0;8526;16
    ...

    You can see just how your card compares to the stated targets; if it is pretty close to them, then your card is only just below specification and is probably fine to use. But if you are seeing significantly lower scores than the targets, you might want to consider getting another card.
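
    If you want to compare your numbers against the targets without reading the log by eye, a few lines of Python can pull them out of rpdiags.txt; the regular expression below is based on the sample log shown above.

    import re
    from pathlib import Path

    TARGETS = {"Sequential write": 10000, "Random write": 500, "Random read": 1500}

    log = (Path.home() / "rpdiags.txt").read_text()
    for line in log.splitlines():
        match = re.match(r"(Sequential write|Random write|Random read) speed (\d+)", line)
        if match:
            name, value = match.group(1), int(match.group(2))
            print(f"{name}: {value} ({value / TARGETS[name]:.0%} of target)")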

    [*] unless you’re using PXE network or USB mass storage boot modes of course.

    Website: LINK

  • An Arduino Tetris console inside of an NES controller

    Reading Time: 2 minutes

    Arduino Team, March 3rd, 2020

    Tetris was a perfect complement to Nintendo’s original Game Boy when it came out in 1989, and now “Copper Dragon” has been able to fit an entire system for it — sans monitor or speakers — inside of a faux NES controller.

    Impressively, this feat was accomplished with an Arduino Nano and a few passive components, producing not only very believable grayscale blocks, but also playing the familiar tune to accompany the video.

    Video: https://www.youtube.com/watch?v=HPVpsAUs4aY

    Two signal pins are used for the gray levels, plus a pin for sync, and video generation is programmed in AVR assembler code. Audio is not just PWM, but a simple DAC circuit created by charging and discharging a capacitor at the video line frequency.

    I wanted to build a game console into the case of a small USB game pad (a NES controller look-alike). To make the work a challenge, I wanted to only use an Arduino Nano clocked at 16 MHz and some passive components (diodes are OK) and create the best possible video and audio signal that is imaginable with such restrictions.

    As it turned out, a monochrome 288p video signal with 4 gray scales is possible when programming the controller at machine level. 4-channel music is also possible.

    My game of choice is Tetris in a version that comes pretty close to the original GameBoy version with a very similar audio track.

    Website: LINK

  • The MagPi 91: #MonthOfMaking is back for 2020!

    Reading Time: 4 minutes

    If you read The MagPi, it’s safe to say you like making in some way. The hobby has exploded in popularity over the last few years, thanks in no small part to a burgeoning online community and the introduction of low-cost computing with Raspberry Pi.

    Last year we decided to celebrate making with a month-long online event called #MonthOfMaking. The idea was simply to get people to share what they’re making online, whatever it was. Whether you’re turning on your first LED with code or sending rockets to the moon, we want to create a space where you can share your proud achievements. So, let’s get making.

    What is #MonthOfMaking?

    #MonthOfMaking is simply an excuse to get people inspired to make something. And by make, we mean electronics, engineering, art, and craft projects. Get your creative powers buzzing and make something that you can show to the world.

    There’s no skill-level threshold to participating either. If you’ve been wanting to start learning, this can be your jumping-on point. By sharing your builds with the community, you can learn and grow. Here are some simple rules to sum it all up:

    1. Find a new project, continue with one you’re working on, or finally crack on with something you’ve been putting off.
    2. Take pictures of your build progress and share it online with the hashtag #MonthOfMaking.
    3. If you can help someone with a problem, give them a hand.
    4. Have fun!

    Getting ideas and inspiration

    We’ve all been there. Sat down at a work bench or desk, staring at some components and thinking… what can I make with this? What would I like to make? Like any other creative pursuit, you’ll need some inspiration. If the projects in the magazine haven’t inspired you, then here are some website suggestions…

    Instructables

    Instructables is one of the oldest sites out there for finding amazing project guides and ideas, and we’ve been fans of it for years. The best part is you can search by specific project types as well, including Raspberry Pi if you’d like to keep it on‑brand. They’ve recently added more arts and crafts stuff if you fancy trying your hand at knitting.

    Hackaday and Hackster

    For more serious hacks aimed at more advanced makers, Hackaday and Hackster have some great projects that really take a deep dive into a subject. If you’re curious as to the limits of electronics and programming, these may be the places to look. Equally, if you want to do something huge with a lot of computing power, they should be your first stop.

    Raspberry Pi projects

    There are so many amazing things on the Raspberry Pi projects site that can help you with your first steps in just about any field of making. It’s also home to loads of great and simple home-grown projects that are perfect for young makers and older makers alike.

    Planning your build

    Step 01 Read and understand

    Basing your build on a tutorial you’ve seen? Seen a few things you’d like to combine into something else? Always make sure to read the instructions you’ve found properly so that you know if it’s within your skill level.

    Step 02 Order supplies

    Write a list of what you need. Always double‑check you have the component you think you have. Sometimes you may need to buy from separate places, so just make sure the delivery times work for you.

    Step 03 Follow along and be safe

    Need adult supervision for a project? Absolutely get some. Even adults need to be wary, so always take safety precautions and wear protective clothing when needed. Make sure to follow any tutorials you’ve found as closely as you can.

    Read The MagPi for free!

    The rest of our #MonthOfMaking guide, along with loads more amazing projects and tutorials, can be found in The MagPi #91, out today, including our starter electronics guide! You can get The MagPi #91 online at our store, or in print from the Raspberry Pi Store in Cambridge and all good newsagents and supermarkets. You can also access The MagPi magazine via our Android and iOS apps.

    We have a new US subscription offer!

    Don’t forget our amazing subscription offers, which include a free gift of a Raspberry Pi Zero W when you subscribe for twelve months. Until the end of March, you can get a twelve-month subscription in the US for only $60! Head to magpi.cc/usa to find out more.

    And, as with all our Raspberry Pi Press publications, you can download the free PDF from our website.

    Website: LINK

  • Build a seismograph with Raspberry Shake

    Reading Time: 6 minutes

    Raspberry Shake is a great project for budding geologists and citizen scientists because it’s relatively simple to assemble (although you do need to be careful to handle and level the parts correctly).

    Once built, it’s low maintenance, sitting in a quiet part of your home or office, waiting for the earth to move. And all that time, Raspberry Shake is gathering data, which you can investigate using the new web interface – or you can dive in and play around with the data directly.

    We interviewed Branden Christensen, CEO of Raspberry Shake and seismologist, back in 2018. You can also find a tutorial in The MagPi issue 60.

    In the last two years, Raspberry Shake has come along leaps-and-bounds and it now has a powerful web interface, app interface, and a thriving international community. We think it’s time to revisit Raspberry Shake.

    This month we’re looking at assembling Raspberry Shake and sharing your data with the wider Shake community.

    You can buy all the parts for Raspberry Shake separately (see the ‘You’ll Need’ info) or pick up a turnkey system with all the parts included. You can even buy a fully assembled system, but we think that takes all the fun out of things.

    You’ll Need

    Step 01: Wire up the geophone

    Start by wiring up the RGI-4.5Hz geophone. Ours has two wires: grey and blue. Make sure the wires are twisted and connect the grey cable from the positive ‘+’ connection on the geophone to the ‘+’ pin on the RS1D Raspberry Shake board. Next, connect the blue wire to the ‘-’ connection. Take care not to over-tighten the screws, otherwise you may damage the wires.

    Step 02: Put Raspberry Pi in the enclosure

    Take the bottom of the enclosure and attach the four shorter stand-offs. Tighten them by hand. Place your Raspberry Pi board on top of the four stand-offs using the holes in the Raspberry Pi.

    On top of three of the holes, you need to place a washer and the longer stand-offs (the hole in the middle has just a washer and screw). Take a look at the assembled Raspberry Shake (Figure 1) to see which one doesn’t need the large stand-off. Now insert your microSD card. If you bought it from Raspberry Shake, it will be pre-installed with Raspberry Shake software. Otherwise, flash a card with the image file.

    Step 03: Attach the geophone

    Place the geophone in the hole on the bottom of the enclosure with the wires on the top. Separate the two wires so there is a gap between them. Now place the plastic strap on top of the geophone to hold it in place. Use two washers and two screws to fix the clear plastic strap to the bottom of the enclosure.

    Step 04: Attach Raspberry Shake

    Connect the RS1D Raspberry Shake board to Raspberry Pi’s GPIO pins. The board has only a 26-pin header (like the original Raspberry Pi Model A and B); most Raspberry Pi boards have 40-pin GPIO, so you’ll need to make sure you are connecting the RS1D Raspberry Shake to the correct pins. The board connects to the end of the GPIO pins where the microSD card is (leaving those pins towards the USB sockets free).

    Make sure the Raspberry Shake board orientation is correct (the wires to the geophone should be near to the USB ports). If in doubt, take a close look at Figure 1 below.

    Figure 1: the fully constructed kit

    Now clip in the sides of the enclosure. Look carefully at the holes in each side: the small hole is for the microSD card, the medium hole is for the HDMI port and power, and the large hole is for the Ethernet and USB sockets.

    The lid of the enclosure has three holes in it, which will line up with the long stand-offs (from Step 02). Use three screws to hold the lid in place. It’s recommended to ensure the Raspberry Pi Shake is fully enclosed to prevent any wandering of the results.

    Step 05: Levelling Raspberry Shake

    The Raspberry Shake enclosure comes with three holes protruding from the sides. These are used to level the device with the levelling feet. If you bought an official enclosure, it will come with a small spirit level on the base. Use a screwdriver with the levelling feet to ensure that the bubble in the spirit level is inside the black circle.

    Step 06: Position the Raspberry Shake

    The device is designed to be left running 24 hours a day, monitoring for earth tremors. So you’ll want to find somewhere out of the way. With the Raspberry Pi 3B included in the kit, you should use an Ethernet connection to the router, to avoid possible wireless LAN interference to the geophone (this is not an issue if using a Raspberry Pi Zero or 3B+).

    You’ll need to run an Ethernet wire directly from Raspberry Shake to your router. We used Devolo DLAN Powerline adapters to extend our Ethernet connection across the electrical wiring. We positioned our Raspberry Shake in the conservatory to the rear of our home.

    According to the makers of Shake: “For best results, install your Raspberry Shake on a bare floor (no carpet) and not on top of your desk. A good location for the Shake would likely be on the concrete slab of the lowest floor, near a foundation wall and away from furnaces, washing machines, air conditioners, and such.”

    Step 07: Power up

    With Raspberry Shake in position and connected to your router, use the power adapter to turn on Raspberry Pi. A blue light will appear on top of the Raspberry Shake board. You don’t access Raspberry Pi directly with a keyboard and screen – instead, it is set up for remote connection over your network. Open a web browser from another computer on the network and go to http://rs.local (don’t forget the ‘http://’ part; note that ‘rs.local’ replaces the former ‘raspberryshake.local’). You will see the Raspberry Shake web interface.

    Once the Raspberry Shake device is set up, you can access its settings via a web interface on the local network

    Step 08: Change SSH and setup

    The default SSH username and password are ‘myshake’ and ‘shakeme’. Default SSH passwords are a security risk, so we’re going to change it. Click the Actions icon near the top of the interface, then click on the Actions tab. Now click Change SSH Password. Enter the current password ‘shakeme’ and then your new password. Press ENTER to save the new password.

    See ‘Ready, Set, Get Hacked!’ on the Raspberry Shake website for more information on security.

    Step 09: Join the team

    With your Raspberry Shake password changed, you can turn on data sharing and join the Raspberry Shake community. This enables you to share your data with the citizen science project.

    Click Home and Settings. Fill out your details in the General section and click Set Location. The location data is randomised by a couple of hundred yards to preserve privacy.

    Finally, enter the floor that the device is on – this is zero-indexed, so 0 is the ground floor – and how many floors you have in the house.

    Now click the Data tab and tick the box marked ‘Forward Data’. Read the licensing information and click Save and Restart. Raspberry Shake will restart (you may be prompted to enter your new SSH password from Step 08).

    Raspberry Shake Earthquake View is used to track seismic activity detected by other users around the world

    Step 10: Earthquake and Station View

    Now that you’re part of the wider Raspberry Shake community, it’s time to take a look at earthquake activity around the world. Click Raspberry Shake Earthquake View to see a global map. The circles indicate earthquake activity. The colour of the circle corresponds to its depth, with red circles showing it’s closer to the Earth’s surface. The size of the circle indicates its severity. Click on any circle to see more information.

    If you want to see all the Raspberry Shake devices (including your own), make a note of your station number and click on the Station View icon. Here you will see all the devices running in the world. Click any device to view its data.

  • Build a Nano-based binary Nixie clock with 18 IN-2 tubes

    Reading Time: 2 minutes

    Arduino Team, March 2nd, 2020

    Nixie tubes are, of course, an elegant display method from a more civilized age, but actually powering and controlling them can be a challenge. This can mean a great project and learning opportunity, but if you’d rather just skip ahead to programming these amazing lights, then Marcin Saj’s IN-2 binary Nixie clock is definitely worth a look.

    This retro-style unit features a 6 x 3 array of small IN-2 tubes, which are turned to “1” or “0” depending on the time. Reading the results takes a bit of binary math, but it would be good practice for those that would like to improve their skills. 

    The clock is available for purchase, and can be driven by a classic Nano, a Nano Every, or a Nano 33 IoT — the last of which enables you to connect to an NTP server or the cloud over WiFi.

    Website: LINK

  • A wireless monitoring solution for solar power systems in remote locations

    Reading Time: 2 minutes

    Arduino Team, March 2nd, 2020

    Researchers in Thailand have developed a ZigBee-based wireless monitoring solution for off-grid PV installations capable of tracking the sun across the sky, tilting the panel hourly. The elevation for the setup is adjusted manually once per month for optimum energy collection. The prototype is controlled by a local Arduino Uno board, along with an H-bridge motor driver to actuate the motor and a 12V battery that’s charged entirely by solar power.

    The system features a half-dozen sensors for measuring battery terminal voltage, solar voltage, solar current, current to the DC-DC converter, the temperature of the DC-DC converter’s power transistor, and the tilt angle of the solar panels according to the voltage across a potentiometer.

    Data is transmitted wirelessly via an XBee ZNet 2.5 module to a remote Uno with an XBee shield. The real-time information is then passed on to and analyzed by a computer, which is also used to set the system’s time.

    More details on the project can be found in the team’s paper.

    Wireless sensing is an excellent approach for remotely operated solar power system. Not only being able to get the sensor data, such as voltage, current, and temperature, the system can also have a proper control for tracking the Sun and sensing real-time data from a controller. In order to absorb the maximum energy by solar cells, it needs to track the Sun with proper angles. Arduino, H-bridge motor driver circuit, and Direct Current (DC) motor are used to alter the tilt angle of the solar Photovoltaic (PV) panel following the Sun while the azimuth and the elevation angles are fixed at noon. Unlike the traditional way, the tilt rotation is proposed to be stepped hourly. The solar PV panel is tilted  in advance of current time to the west to produce more output voltage during an hour. As a result, the system is simple while providing good solar-tracking results and efficient power outputs.
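
    The paper describes the receiving Uno handing its data to a computer over serial; a hedged sketch of what that PC-side logger could look like is below. The port name, baud rate, and field order are assumptions rather than details from the paper.

    import csv
    import time

    import serial

    FIELDS = ["battery_v", "solar_v", "solar_i", "converter_i", "transistor_temp_c", "tilt_deg"]

    with serial.Serial("/dev/ttyACM0", 9600, timeout=2) as ser, \
         open("solar_log.csv", "a", newline="") as log:
        writer = csv.writer(log)
        writer.writerow(["timestamp"] + FIELDS)
        while True:
            line = ser.readline().decode(errors="ignore").strip()   # e.g. "12.4,18.1,0.8,..."
            values = line.split(",")
            if len(values) == len(FIELDS):
                writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S")] + values)
                log.flush()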

    Website: LINK

  • Code a Zaxxon-style axonometric level | Wireframe #33

    Reading Time: 4 minutes

    Fly through the space fortress in this 3D retro forced-scrolling arcade sample. Mark Vanstone has the details.

    A shot from Sega's arcade hit, Zaxxon

    Zaxxon was the first arcade game to use an axonometric viewpoint, which made it look very different from its 2D rivals.

    Zaxxon

    When Zaxxon was first released by Sega in 1982, it was hailed as a breakthrough thanks to its pseudo-3D graphics. This axonometric projection ensured that Zaxxon looked unlike any other shooter around in arcades.

    Graphics aside, Zaxxon offered a subtly different twist on other shooting games of the time, like Defender and Scramble; the player flew over either open space or a huge fortress, where they had to avoid obstacles of varying heights. Players could tell how high they were flying with the aid of an altimeter, and also the shadow beneath their ship (shadows were another of Zaxxon’s innovations). The aim of the game was to get to the end of each level without running out of fuel or getting shot down; if the player did this, they’d encounter an area boss called Zaxxon. Points were awarded for destroying gun turrets and fuel silos, and extra lives could be gained as the player progressed through the levels.

    A shot of our Pygame version of Zaxxon

    Our Zaxxon homage running in Pygame Zero: fly the spaceship through the fortress walls and obstacles with your cursor keys.

    Making our level

    For this code sample, we can borrow some of the techniques used in a previous Source Code article about Ant Attack (see Wireframe issue 15) since it also used an isometric display. Although the way the map display is built up is very similar, we’ll use a JSON file to store the map data. If you’ve not come across JSON before, it’s well worth learning about, as a number of web and mobile apps use it, and it can be read by Python very easily. All we need to do is load the JSON file, and Python automatically puts the data into a Python dictionary object for us to use.

    In the sample, there’s a short run of map data 40 squares long with blocks for the floor, some low walls, higher walls, and a handful of fuel silos. To add more block types, just add data to the blocktypes area of the JSON file. The codes used in the map data are the index numbers of the blocktypes, so the first blocktypes is index 0, the next index 1, and so on. Our drawMap() function takes care of rendering the data into visual form and blits blocks from the top right to the bottom left of the screen. When the draw loop gets to where the ship is, it draws first the shadow and then the ship a little higher up the screen, depending on the altitude of the ship. The equation to translate the ship’s screen coordinates to a block position on the map is a bit simplistic, but in this case, it does the job well enough.

    Cursor keys guide the movement of the spaceship, which is limited by the width of the map and a height of 85 pixels. There’s some extra code to display the ship if it isn’t on the map – for example, at the start, before it reaches the map area. To make the code snippet into a true Zaxxon clone, you’ll have to add some laser fire and explosions, a fuel gauge, and a scoring system, but this code sample should provide the basis you’ll need to get started.
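
    To make the JSON-plus-projection idea concrete, here is an illustrative fragment only; the full Pygame Zero sample is linked below, and the key names, block dimensions, and screen offsets used here are assumptions rather than the article’s exact values.

    import json

    with open("map.json") as f:          # assumed file name
        level = json.load(f)             # JSON becomes a plain Python dict

    BLOCK_W, BLOCK_H = 32, 16            # assumed sprite dimensions

    def block_screen_pos(col, row):
        """Axonometric projection: map grid (col, row) -> screen (x, y)."""
        x = 400 + (col - row) * BLOCK_W // 2
        y = 100 + (col + row) * BLOCK_H // 2
        return x, y

    def draw_map(screen):
        # Blit from the top right to the bottom left so nearer blocks overlap farther ones.
        for row, line in enumerate(level["map"]):
            for col in range(len(line) - 1, -1, -1):
                block = level["blocktypes"][line[col]]           # codes index into blocktypes
                screen.blit(block["image"], block_screen_pos(col, row))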

    Code for our Zaxxon homage

    Here’s Mark’s code snippet, which creates a Zaxxon-style forced-scrolling shooter sample in Python. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code, go here.

    Get your copy of Wireframe issue 33

    You can read more features like this one in Wireframe issue 33, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

    Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 33 for free in PDF format.

    Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

    Website: LINK

  • Mini Bellagio Water Show

    Reading Time: 3 minutes

    Warning! Electricity & water: Take extra care when combining electricity and water in a project: the two should be kept well apart!

    Pump it up

    A pump pushes water from a reservoir (children’s paddling pool) through PVC piping attached to water solenoids connected to sprinkler tubing pointed up in the air. A Raspberry Pi controls the solenoids, creating the effect of water jetting out in sync with the music being played.

    “A total of eight solenoids were connected back to a mechanical relay, which in turn was controlled by Raspberry Pi,” says Nick. Seven out of the eight solenoids were connected to brass reducers to fit into garden sprinkler tubing. The eighth solenoid was a pressure control (relief) valve, which was used to control back pressure in the system.

    “When I wanted to ‘fire’ one of the seven solenoids to shoot water, Raspberry Pi would close the pressure solenoid,” explains Nick. This built up pressure in the PVC pipe, at which time Raspberry Pi would trigger a relay to open the desired solenoid so a jet of water would shoot out. “This was required to get any distance with very little water. I also didn’t want to burn out the pump, so the relief valve was open when no other solenoid was open.”

    Water is pumped from a children’s pool through PVC piping attached to water solenoids to turn the individual jets on and off

    Water music

    The music is synchronised to the solenoid firing by using FFT (fast Fourier transform) analysis performed on the audio in real-time. “I wrote a sequencer in Python to perform the analysis and determine which solenoids to turn on and off, based on a config file which maps high fidelity signals (bass, mid-range, etc.) to particular solenoids or solenoid groups,” says Nick. “In summary, you just put WAV files in a songs directory and start the Python code, which did all the heavy lifting in real-time.”

    One technical challenge was solving the timing discrepancy between the solenoid firing water and the musical note being heard by the audience. “The water had to be shot out of the jets approximately 600 ms ahead of the audio for the water to appear to be in sync with the music.”
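
    As a simplified sketch of the idea (not Nick’s sequencer), the snippet below analyses a WAV file in short chunks with an FFT and maps band energy to GPIO pins driving a relay board, allowing for the roughly 600 ms lead described above. Pin numbers, thresholds, and the band split are assumptions.

    import wave
    import numpy as np
    from gpiozero import OutputDevice

    CHUNK = 2048
    LEAD_S = 0.6                                  # fire water ~600 ms ahead of the audible note
    BASS_PIN, MID_PIN = OutputDevice(17), OutputDevice(27)

    with wave.open("song.wav", "rb") as wav:
        rate = wav.getframerate()
        frames = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
        if wav.getnchannels() == 2:
            frames = frames[::2]                  # keep one channel

    schedule = []                                 # (time_in_song, fire_bass, fire_mid)
    for i in range(0, len(frames) - CHUNK, CHUNK):
        spectrum = np.abs(np.fft.rfft(frames[i:i + CHUNK]))
        freqs = np.fft.rfftfreq(CHUNK, 1 / rate)
        bass = spectrum[freqs < 200].sum()
        mid = spectrum[(freqs >= 200) & (freqs < 2000)].sum()
        schedule.append((i / rate - LEAD_S, bass > 5e6, mid > 5e6))
    # A real sequencer would now start audio playback and walk through `schedule`,
    # switching BASS_PIN/MID_PIN at each scheduled time so the jets match the music.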

    Another issue was safety, as mixing water and electricity can be hazardous. “The power for the system was a 12 V automotive battery,” reveals Nick, “so I used fuses to protect things, just as you would find in a family vehicle. I also tried to keep the dangerous gear out of reach of the general public.”

    Everything went well on the day, albeit with a few bugs: “There were certain sequences of musical notes where the FFT analysis would produce changes too rapidly for the back pressure and corresponding solenoid firing to produce much of a water jetting effect.” The result was a variance in water height from song to song.

    “I rode on the float during the parade, so the public reaction was the most rewarding part of the project for me,” he adds. “After people figured out what they were looking at, the responses ranged from laughter to astonishment. The public response made my day and all the efforts of the team worthwhile!”

    A waterproof fuse box is used to safely distribute power from a 12 V car battery

  • Win signed Raspberry Pi 4 computers, books, and accessories

    Reading Time: < 1 minute

  • Build a Magic Mirror

    Reading Time: 5 minutes

    Magic mirrors have to be one of the most popular projects out there. Initially created by combining old laptops and semi-reflective observation glass, they look like normal mirrors but with text and images that appear to float in mid-air. The information displayed is typically what you need as you’re preparing to leave the house: weather, news headlines, and transit information. Although they come across as advanced builds, the community behind the projects has made significant advances in making magic mirrors accessible to all. Let’s take a look at one of these makers and then have a go at building our own mirror.

    Creating a good magic mirror requires experience in many disciplines including carpentry, electronics, programming, and graphic design. Fortunately, the team at MagicMirror2, headed by Michael Teeuw (see The MagPi issue 54), has not only compiled tutorials and fostered a great community, but also built its own open-source application. This modular system takes away all the programming and design pain. Best of all, you can expand the capabilities of your mirror through the hundreds of community plug-ins available and, if you wish, you can write your own. It’s no wonder it won the number one slot in our best projects feature for The MagPi issue 50.

    Assembling a simple magic mirror

    Would you like a magic mirror, but don’t fancy all that carpentry? Here’s a first project to ease you in without having to reach for the band-saw

    To build a magic mirror, you’ll need:

    Tip! Not for Raspberry Pi Zero

    A Raspberry Pi Zero would seem ideal for this project, but MagicMirror² is incompatible with that model and the original Raspberry Pi 1.

    There have been some impressive magic mirror projects as makers around the globe challenge each other to improve on previous designs. Although the results are undoubtedly impressive, it can make the hobby look a little daunting to the beginner, especially if you don’t have access to the necessary equipment to build a custom frame. In this tutorial, we’ll assemble a simple magic mirror using off-the-shelf parts. This can be built in an afternoon and is a great way to find out whether you want to take it to the next step and get working on something a bit bigger.

    Prepare the frame

    To create our magic mirror, we will create a ‘sandwich’ of the frame, a piece of observation mirror acrylic, and the screen. It’s vital that all of these are kept as clean as possible during assembly, as any dust will get trapped and leave an irritating mark on your lovely mirror. Unpack the frame, remove the mount, and then remove the clear plastic sheet. You’ll need to carefully peel back its two protective layers and then replace the clear sheet in the frame. This sheet is statically charged and will start to attract dust, so lots of cleaning is required. Return the mount to the frame.

    The Ikea Ribba range is perfect for a starter project like this, thanks to its unusually deep frames and wide variety of sizes

    Mount the mirror

    The big ‘trick’ of a magic mirror is the use of two-way material, also known as ‘observation glass’. This is the same material used in police interview rooms and for privacy screening. It’s only semi-reflective, so the output from your screen can be seen ‘through’ the glass, but it’s still effective as a mirror (if a little darker than a regular one). This material is cheapest when bought by the roll, which makes it ideal for custom-build or larger mirror projects; ours is a £5 A5 acrylic sheet. Remove the protective sheeting and place the sheet in the frame, making sure it covers the open area. Secure with sticky tape.

    Add the screen

    We’re using the official 7-inch touchscreen for this project to keep the power requirements simple: we only need one cable to drive both the Raspberry Pi and the display. It also happens to be a perfect size for this project. The touchscreen needs to be carefully placed so that it sits parallel with the frame and is centred. Secure with sticky tape.

    Secure in place

    The combined weight of a Raspberry Pi computer and the touchscreen doesn’t come to much, so rather than getting into complicated mounting solutions, we will apply generous amounts of gaffer (or duct) tape to hold everything in place. This is of course a very lo-fi solution – if you want to go for something more refined, you can consider making use of the mounting points on the screen that can be used with horizontal or vertical bars to attach to the inner edge of the frame. Check for any trapped dust or marks in our ‘sandwich’ before proceeding.

    The simplest way to mount the screen is to use gaffer or duct tape

    Just add Raspberry Pi

    Normally, you would mount a Raspberry Pi computer on top of the screen’s PCB on the provided standoffs. If you want to mount your completed mirror on the wall, this poses a problem, as the computer now sits quite a way proud of the frame. Your options are: 1) don’t care (not advisable), 2) buy a second frame and fix it to the original to double its depth, or 3) mount the Raspberry Pi computer on the side. We’ve gone with option three and it just fits, even with the supplied display cable. Make sure you line the back of the screen with insulation tape to avoid any electrical shorts and secure in place with a Velcro pad to allow for future access to the microSD card.

    Check and test

    With Raspbian installed on a microSD card, fit the Raspberry Pi 4 into place. Check that the display ribbon cable hasn’t been stretched too much and that the four jumper cables connecting the display to the GPIO are in the correct positions. You should now be able to boot and see the Raspbian boot sequence on the display. It will probably look disappointingly dull; don’t worry, we’ll address that in the next tutorial. If everything is free of dust and secured, and the display is working, shut everything down (you may need to connect a keyboard and mouse to do this).

    We’ve chosen gaffer tape, as the official screen is very light. This would be a terrible idea for a ‘full-size’ monitor.

    Professional magic mirror builds

    We’ve created a simple project for you here that requires no cutting or mains electricity. However, it would be remiss of us not to admire the work of those who have dedicated hours and hours to making the ultimate magic mirror. One of those is MagicMirror² creator Michael Teeuw, who has created several mirrors completely from scratch, building his own frames and carefully mounting large monitors – all powered by Raspberry Pi computers, of course! The great thing about magic mirrors is that you can start small and work up to masterpieces like this, learning as you go.

  • Preview the Debugger feature for the Arduino Pro IDE

    Preview the Debugger feature for the Arduino Pro IDE

    Reading Time: < 1 minute

    Preview the Debugger feature for the Arduino Pro IDE

    Arduino TeamFebruary 28th, 2020

    We’ve released the first prototype of one of the most requested Arduino Pro IDE features: the Arduino Debugger!

    Video: https://www.youtube.com/watch?v=CoporLqnLOI

    Key features include the ability to:

    • Execute your Arduino sketch step-by-step while it’s running on your Arduino board!
    • Pause your sketch execution by placing breakpoints.
    • Inspect variable values during execution.

    Initially supporting SAM D21 boards, the Arduino Pro IDE Debugger is available for Windows, Mac OS X and Linux64.

    You can try the Arduino Debugger as part of the latest Alpha preview version for the Arduino Pro IDE. More details will follow soon!

    Website: LINK

  • Raspberry Jams around the world celebrate Raspberry Pi’s 8th birthday

    Raspberry Jams around the world celebrate Raspberry Pi’s 8th birthday

    Reading Time: 4 minutes

    Happy birthday to us: tomorrow marks the eighth birthday of the Raspberry Pi computer!

    On 29 February 2012 we launched our very first $35 credit card-sized computer, Raspberry Pi 1 Model B. Since then, we’ve sold over 30 million Raspberry Pi computers worldwide. People all over the world (and beyond!) use them to learn, teach, and make cool stuff; industrial customers embed Raspberry Pi devices in their own products or use them to monitor and control factory processes. As an early birthday present, yesterday we cut the price of the 2GB RAM Raspberry Pi 4 Model B from $45 to $35: now you can buy a no-compromises desktop PC for the same price as Raspberry Pi 1 in 2012.

    A Raspberry Pi stuck into a piece of birthday cake

    Don’t try this at home: you may damage your Raspberry Pi or teeth.

    A global community of Raspberry Jams

    Throughout the last eight years, a passionate community of enthusiasts has championed the use of Raspberry Pi, and our library of free resources, by hosting Raspberry Jams: events where people of all ages come together to learn about digital making in a fun, friendly, and inclusive environment.

    Raspberry Jam logo and illustrations

    To celebrate Raspberry Pi’s birthday in style, Raspberry Jam community members around the world are hosting special birthday-themed events during the whole month from 15 February to 15 March.

    Our special thanks to The Pi Hut for shipping our special birthday packs to these Jams all over the world!

    Raspberry Jam branded goodies

    The contents of the packs we sent to Raspberry Jams that registered events during our birthday month. Thanks for the photo go to Andy Melder, who runs Southend and Chelmsford Raspberry Jams.

    20 Birthday Jams have already taken place in Australia, Belgium, Bulgaria, Canada, Greece, India, the UK, and the US. In total, there are at least 118 Birthday Jam events across 35 countries on 6 continents this year! (We’re determined to reach Antarctica one day soon.)

    Jams can take many forms, from talks and workshops based around the Raspberry Pi computer, to project showcases and hackathons. Community members have run a wide range of birthday events over the last fortnight.

    Shoutout to Tokyo Raspberry Jam

    We’d like to give a special mention to Masafumi Ohta and our friends at Tokyo Raspberry Jam, who have had to postpone their Birthday Jam due to coronavirus-related safety restrictions currently in place across Japan.

    Someone blowing out the candles of a birthday cake

    The Birthday Jam in Tokyo in 2018

    The whole team at the Foundation sends their best wishes to everyone who is affected by the virus!

    You can still join in the celebrations

    Jam makers are running birthday events up to and including 15 March, so check out the Raspberry Jam world map to find your nearest Birthday Jam!

    Chelmsford Raspberry Jam, celebrating Raspberry Pi’s eighth birthday with multiple generations

    If you’d like to host your own Jam, we also have free resources to help you get started and free starter projects made especially for Jam events.

    It’s really simple to register your Birthday Jam: just fill in the Raspberry Jam submission form, including a valid event information URL linking to a webpage with more information about your event. (This is an excellent example of a Jam event listing.)

    As always, if you have any questions, please don’t hesitate to ask us via [email protected].

    Website: LINK

  • An Arduino-enabled observatory dome door opener

    An Arduino-enabled observatory dome door opener

    Reading Time: 2 minutes

    An Arduino-enabled observatory dome door opener

    Arduino TeamFebruary 27th, 2020

    The South Florida Science Center recently commissioned a beautiful new 10” aperture refractor telescope. Its dome, however, was opened by hand; so in an effort to modernize this part of the setup, Andres Paris and his brother “patanwilson” added a windshield wiper-style DC motor to automate the process.

    The “window to the heavens” is now operated by an Arduino Uno via a high-current H-bridge capable of passing up to 20 amps. The user interface is provided by an IR remote control, and reed switches stop the door’s motion at the appropriate points.

    A pair of 12V batteries enables the system to move with the dome, and voltage displays (which can be turned off remotely) show how much power is left.

    Video: https://www.youtube.com/watch?v=AJAlnIDWrJQ

    More details on the project can be found on Reddit.

    Website: LINK

  • Arm Pelion Device Management comes to the Arduino IoT Cloud

    Arm Pelion Device Management comes to the Arduino IoT Cloud

    Reading Time: 2 minutes

    Arm Pelion Device Management comes to the Arduino IoT Cloud

    Arduino TeamFebruary 27th, 2020

    As part of Arduino’s expanding relationship with Arm and continuing commitment to professionals, Arm Pelion Device Management users can now seamlessly use Arduino IoT Cloud to quickly create IoT applications.

    Combining the speed of application development of the low-code Arduino IoT Cloud with the secure, scalable lifecycle management features of Arm Pelion Device Management brings the best of both worlds.

    The integration enables Pelion Device Management users to import all of their resources via the Pelion API and translate them into Arduino IoT Cloud properties. They can see and manage everything in the cloud, with the Arduino IoT interface (web or mobile client) providing the simplicity designers need to focus their efforts on the IoT application, creating control panels and summary dashboards. Scalability is fundamental to the Pelion Device Management service, and new devices will automatically appear in the Arduino IoT Cloud as soon as they are registered in Pelion.

    If you are an existing client of Pelion Device Management and would like to know more about the integration with Arduino IoT Cloud and the professional services available from the Arduino Pro team, please contact us here.

    Website: LINK