Category: Mobile

  • The new Auxivo EduExo Pro helps students with exoskeleton research

    Reading Time: 2 minutes

    Emerging technologies initially develop at a slow pace and that is due in large part to the lack of resources available to students. Complex technology is built on existing knowledge and higher education students need the tools to gain hands-on experience. To help educate the next generation of exoskeleton engineers, Auxivo has just introduced the newly updated EduExo Pro exoskeleton kit.

    The Auxivo EduExo Pro is an educational exoskeleton platform designed to help students learn fundamentals via a project-based learning approach, with enough flexibility for those students to experiment with their own designs. It originally launched on Kickstarter in 2021 and now Auxivo has released an updated version.

    The hardware in the kit consists of structural parts, mechanical components, motorized actuators, sensors, and control electronics. The kit includes everything necessary — except 3D-printed parts — to build a full-arm exoskeleton that has a 2DOF (degrees of freedom) shoulder joint and a 1DOF elbow joint.

    For maximum compatibility and versatility, the Auxivo EduExo Pro operates under the control of an Arduino Nano 33 IoT board. Students can take advantage of the powerful Arduino IDE to program sophisticated behaviors and integrate that with other software, such as Unity 3D. 
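
    One way to picture the programming side: a typical exoskeleton control loop reads a joint-angle sensor and commands a motor to provide assistance. The sketch below is a minimal illustration of that pattern, not Auxivo's firmware; the pins, angle range, and gain are all assumptions for the sake of the example.

      // Illustrative assistive control loop for an elbow joint (not Auxivo's code).
      // Assumptions: analog angle sensor on A0, motor driver PWM input on pin 9.
      const int ELBOW_POT_PIN = A0;   // assumed analog joint-angle sensor
      const int MOTOR_PWM_PIN = 9;    // assumed PWM pin to a motor driver
      const float KP = 0.8;           // proportional gain; tune on real hardware

      void setup() {
        pinMode(MOTOR_PWM_PIN, OUTPUT);
        Serial.begin(115200);         // angle data could be streamed to Unity 3D
      }

      void loop() {
        int raw = analogRead(ELBOW_POT_PIN);       // 0..1023
        float angle = map(raw, 0, 1023, 0, 150);   // degrees, assumed sensor range
        float target = 40.0;                       // example hold position
        float effort = constrain(KP * (target - angle), 0, 255);
        analogWrite(MOTOR_PWM_PIN, (int)effort);   // assist toward the target
        Serial.println(angle);                     // one value per line, easy to parse
        delay(10);
      }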

    The provided handbook will walk students through assembling and programming the arm exoskeleton, but educators can also create their own curriculums or let students devise new designs. That makes the Auxivo EduExo Pro perfect for high school and university-level engineering courses. 

    The Auxivo EduExo Pro is available on the Auxivo shop right now for CHF1,790.00 (about €1,890 / $2,000 USD). 

    The post The new Auxivo EduExo Pro helps students with exoskeleton research appeared first on Arduino Blog.

    Website: LINK

  • Ride safer with these DIY bicycle lights

    Reading Time: 2 minutes

    Many people around the world live in cities designed for cars, with bicycle use being a distant afterthought. That makes cycling dangerous and lights can do a lot to make riding safer. That’s why Giovanni Aggiustatutto designed this DIY system that includes headlights, a taillight, turn signals, and even an integrated odometer/speedometer. 

    Aggiustatutto wanted this system to work with most bicycles, so he designed the front lights and controls to clamp onto the handlebars. The rear light pod attaches to a cargo rack and should be compatible with a wide range of models. There are two bright white LED headlight arrays on the front with integrated yellow turn signal LEDs. Also on the front is an OLED display that shows the speed, time, and odometer, as well as three buttons. The back lights consist of red taillight LEDs and yellow turn signal LEDs in a single 3D-printed enclosure.

    An Arduino Nano board controls everything, directing power to the LEDs from an 18650 lithium battery through IRFZ44N MOSFETs. A DS3231 RTC module helps the Arduino track time accurately and that gives it the ability to monitor speed — and therefore total distance — with the help of a Hall effect sensor. That sensor detects the passing of a magnet attached to a spoke, so the Arduino can count each rotation. The Arduino then displays the results on a 0.96” 128×64 monochrome OLED screen. 
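
    To make the speed math concrete, here is a minimal sketch of the general approach (this is not Aggiustatutto's published code; the pin and wheel circumference are assumptions). Each magnet pass triggers an interrupt, and speed follows from circumference divided by the time between passes.

      // Illustrative speed/odometer logic: one interrupt per wheel revolution.
      const byte HALL_PIN = 2;           // assumed interrupt-capable pin on a Nano
      const float WHEEL_CIRC_M = 2.1;    // assumed 700c wheel circumference, metres

      volatile unsigned long lastPulse = 0;   // time of previous magnet pass (ms)
      volatile unsigned long revPeriod = 0;   // time between passes (ms)
      volatile unsigned long revCount = 0;    // total revolutions

      void onMagnet() {
        unsigned long now = millis();
        revPeriod = now - lastPulse;
        lastPulse = now;
        revCount++;
      }

      void setup() {
        pinMode(HALL_PIN, INPUT_PULLUP);
        attachInterrupt(digitalPinToInterrupt(HALL_PIN), onMagnet, FALLING);
        Serial.begin(9600);
      }

      void loop() {
        noInterrupts();                       // copy volatiles atomically
        unsigned long period = revPeriod;
        unsigned long last = lastPulse;
        unsigned long revs = revCount;
        interrupts();

        float speedKmh = 0;
        if (period > 0 && millis() - last < 3000) {       // treat 3 s without a pulse as stopped
          speedKmh = (WHEEL_CIRC_M / (period / 1000.0)) * 3.6;
        }
        float distanceKm = revs * WHEEL_CIRC_M / 1000.0;  // odometer
        Serial.print(speedKmh); Serial.print(" km/h  ");
        Serial.print(distanceKm); Serial.println(" km");  // the real build draws to the OLED
        delay(500);
      }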

    Finally, Aggiustatutto tucked the Arduino and battery into an enclosure disguised as a water bottle to prevent theft. 

    [youtube https://www.youtube.com/watch?v=c7lJhk-3xcM?start=2&feature=oembed&w=500&h=281]

    The post Ride safer with these DIY bicycle lights appeared first on Arduino Blog.

    Website: LINK

  • Meet Real Robot One V2: A mini DIY industrial robot arm

    Reading Time: 2 minutes

    Started in 2022 as an exploration of what’s possible in the field of DIY robotics, Pavel Surynek’s Real Robot One (RR1) project is a fully-featured 6+1-axis robot arm based on 3D-printed parts and widely available electronics. The initial release was constructed with PETG filament, custom gearboxes for transferring the motor torque to the actuators, and a plethora of stepper motors/shaft-mounted encoders to provide closed-loop control.

    The lessons learned from V1 were instrumental in helping Surynek design the next iteration of the RR1 project, with improvements to motion, rigidity, and the control scheme. Replacing the more flexible PETG filament is a far stronger polycarbonate composite, which helped reduce backlash in the gearing. Beyond the plastic housing, Surynek also swapped the planetary gearboxes for a series of belt-driven mechanisms and moved the encoders to the perimeter of each joint for better positional tracking. The last major change involved printing the gripper in TPU and securing it to the wrist assembly with more points of contact.

    Controlling all seven stepper motors is an Arduino Due, which talks to the host machine over its serial USB connection and a custom GUI. Through this interface, each joint can be configured, set, and continuously monitored, providing a comprehensive way to operate the arm.
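
    As a sketch of how such a serial protocol could work (the command format, pins, and steps-per-degree figure below are assumptions, not Surynek's actual firmware), the Due might accept lines like "J2 45.0" to send joint 2 to 45 degrees:

      // Illustrative joint-command parser for an Arduino Due (not the RR1 firmware).
      const int NUM_JOINTS = 7;
      const int STEP_PINS[NUM_JOINTS] = {2, 4, 6, 8, 10, 12, 14};  // assumed wiring
      const int DIR_PINS[NUM_JOINTS]  = {3, 5, 7, 9, 11, 13, 15};
      const float STEPS_PER_DEG = 8.9;    // placeholder; depends on gearing

      long currentSteps[NUM_JOINTS] = {0};

      void setup() {
        SerialUSB.begin(115200);          // the Due's native USB serial port
        for (int i = 0; i < NUM_JOINTS; i++) {
          pinMode(STEP_PINS[i], OUTPUT);
          pinMode(DIR_PINS[i], OUTPUT);
        }
      }

      void moveJoint(int j, float deg) {
        long target = (long)(deg * STEPS_PER_DEG);
        long delta = target - currentSteps[j];
        digitalWrite(DIR_PINS[j], delta >= 0 ? HIGH : LOW);
        for (long s = 0; s < abs(delta); s++) {    // crude constant speed, no ramping
          digitalWrite(STEP_PINS[j], HIGH);
          delayMicroseconds(300);
          digitalWrite(STEP_PINS[j], LOW);
          delayMicroseconds(300);
        }
        currentSteps[j] = target;
      }

      void loop() {
        if (SerialUSB.available()) {
          String cmd = SerialUSB.readStringUntil('\n');   // e.g. "J2 45.0"
          if (cmd.startsWith("J")) {
            int joint = cmd.substring(1, 2).toInt();
            float angle = cmd.substring(3).toFloat();
            if (joint >= 0 && joint < NUM_JOINTS) moveJoint(joint, angle);
          }
        }
      }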

    For more information about revision 2 of the Real Robot One project, watch Surynek’s video below!

    [youtube https://www.youtube.com/watch?v=d3VVTlRikv4?feature=oembed&w=500&h=281]

    The post Meet Real Robot One V2: A mini DIY industrial robot arm appeared first on Arduino Blog.

    Website: LINK

  • Empowering undergraduate computer science students to shape generative AI research

    Reading Time: 6 minutes

    As use of generative artificial intelligence (or generative AI) tools such as ChatGPT, GitHub Copilot, or Gemini becomes more widespread, educators are thinking carefully about the place of these tools in their classrooms. For undergraduate education, there are concerns about the role of generative AI tools in supporting teaching and assessment practices. For undergraduate computer science (CS) students, generative AI also has implications for their future career trajectories, as it is likely to be relevant across many fields.

    Dr Stephen MacNeil, Andrew Tran, and Irene Hou (Temple University)

    In a recent seminar in our current series on teaching programming (with or without AI), we were delighted to be joined by Dr Stephen MacNeil, Andrew Tran, and Irene Hou from Temple University. Their talk showcased several research projects involving generative AI in undergraduate education, and explored how undergraduate research projects can create agency for students in navigating the implications of generative AI in their professional lives.

    Differing perceptions of generative AI

    Stephen began by discussing the media coverage around generative AI. He highlighted the binary nature of that coverage: some representations frame generative AI as signalling the end of higher education, including programming in CS courses, while others emphasise the problems generative AI could solve for educators, such as improving access to high-quality help (specifically, virtual assistance) or personalised learning experiences.

    Students sitting in a lecture at a university.

    As part of a recent ITiCSE working group, Stephen and colleagues conducted a survey of undergraduate CS students and educators and found conflicting views about the perceived benefits and drawbacks of generative AI in computing education. Despite this divide, most CS educators reported that they were planning to incorporate generative AI tools into their courses. Conflicting views were also noted between students and educators on what is allowed in terms of generative AI tools and whether their universities had clear policies around their use.

    The role of generative AI tools in students’ help-seeking

    There is growing interest in how undergraduate CS students are using generative AI tools. Irene presented a study in which her team explored the effect of generative AI on undergraduate CS students’ help-seeking preferences. Help-seeking can be understood as any actions or strategies undertaken by students to receive assistance when encountering problems. Help-seeking is an important part of the learning process, as it requires metacognitive awareness to understand that a problem exists that requires external help. Previous research has indicated that instructors, teaching assistants, student peers, and online resources (such as YouTube and Stack Overflow) can assist CS students. However, as generative AI tools are now widely available to assist in some tasks (such as debugging code), Irene and her team wanted to understand which resources students valued most, and which factors influenced their preferences. Their study consisted of a survey of 47 students, and follow-up interviews with 8 additional students. 

    Undergraduate CS student use of help-seeking resources

    Responding to the survey, students stated that they used online searches or support from friends/peers more frequently than two generative AI tools, ChatGPT and GitHub Copilot; however, Irene indicated that as data collection took place at the beginning of summer 2023, it is possible that students were not familiar with these tools or had not used them yet. In terms of students’ experiences in seeking help, students found online searches and ChatGPT were faster and more convenient, though they felt these resources led to less trustworthy or lower-quality support than seeking help from instructors or teaching assistants.

    Two undergraduate students are seated at a desk, collaborating on a computing task.

    Some students felt more comfortable seeking help from ChatGPT than peers as there were fewer social pressures. Comparing generative AI tools and online searches, one student highlighted that unlike Stack Overflow, solutions generated using ChatGPT and GitHub Copilot could not be verified by experts or other users. Students who received the most value from using ChatGPT in seeking help either (i) prompted the model effectively when requesting help or (ii) viewed ChatGPT as a search engine or comprehensive resource that could point them in the right direction. Irene cautioned that some students struggled to use generative AI tools effectively as they had limited understanding of how to write effective prompts.

    Using generative AI tools to produce code explanations

    Andrew presented a study where the usefulness of different types of code explanations generated by a large language model was evaluated by students in a web software development course. Based on Likert scale data, they found that line-by-line explanations were less useful for students than high-level summary or concept explanations, but that line-by-line explanations were most popular. They also found that explanations were less useful when students already knew what the code did. Andrew and his team then qualitatively analysed code explanations that had been given a low rating and found they were overly detailed (i.e. focusing on superfluous elements of the code), the explanation given was the wrong type, or the explanation mixed code with explanatory text. Despite the flaws of some explanations, they concluded that students found explanations relevant and useful to their learning.

    Perceived usefulness of code explanation types

    Using generative AI tools to create multiple choice questions

    In a separate study, Andrew and his team investigated the use of ChatGPT to generate novel multiple choice questions for computing courses. The researchers prompted two models, GPT-3 and GPT-4, with example question stems to generate correct answers and distractors (incorrect but plausible choices). Across two data sets of example questions, GPT-4 significantly outperformed GPT-3 in generating the correct answer (75.3% and 90% vs 30.8% and 36.7% of all cases). GPT-3 performed less well at providing the correct answer when faced with negatively worded questions. Both models generated correct answers as distractors across both sets of example questions (GPT-3: 11.1% and 10% of cases; GPT-4: 9.9% and 17.8%). They concluded that educators would still need to verify whether answers were correct and distractors were appropriate.

    An undergraduate student is raising his hand up during a lecture at a university.

    Undergraduate students shaping the direction of generative AI research

    With student concerns about generative AI and its implications for the world of work, the seminar ended with a hopeful message highlighting undergraduate students being proactive in conducting their own research and shaping the direction of generative AI research in computer science education. Stephen concluded the seminar by celebrating the undergraduate students who are undertaking these research projects.

    You can watch the seminar here:

    [youtube https://www.youtube.com/watch?v=Pq-d6wipGRQ?feature=oembed&w=500&h=281]

    If you are interested to learn more about Stephen’s work on generative AI, you can read about how undergraduate students used generative AI tools to create analogies for recursion. If you would like to experiment with using generative AI tools to assist with debugging, you could try using Gemini, ChatGPT, or Copilot.

    Join our next seminar

    Our current seminar series is on teaching programming with or without AI. 

    In our next seminar, on 16 July from 17:00 to 18:30 BST, we welcome Laurie Gale (Raspberry Pi Computing Education Research Centre, University of Cambridge), who will discuss how to teach debugging to secondary school students. To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

    The schedule of our upcoming seminars is available online. You can catch up on past seminars on our blog and on the previous seminars and recordings page.

    Website: LINK

  • What if robots could communicate with humans by emitting scents?

    Reading Time: 2 minutes

    Almost all human-robot interaction (HRI) approaches today rely on three senses: hearing, sight, and touch. Your robot vacuum might beep at you, or play recorded or synthesized speech. An LED on its enclosure might blink red to signify a problem. And cutting-edge humanoid robots may even shake your hand. But what about the other senses? Taste seems like a step too far, so researchers at KAIST experimented with “Olfactory Puppetry” to test smell’s suitability for HRI communication.

    This concept seems pretty obvious, but there is very little formal research on the topic. What if a robot could communicate with humans by emitting scents?

    Imagine if a factory worker suddenly began smelling burning rubber. That could effectively communicate the idea that a nearby robot is malfunctioning, without relying on auditory or visual cues. Or a personal assistant robot could give off the smell of sizzling bacon to tell its owner that it is time to wake up.

    The researchers wanted to test these ideas and chose to do so using puppets instead of actual robots. By using puppets — paper cutouts on popsicle sticks — test subjects could act out scenarios. They could then incorporate scent and observe the results.

    For that to work, they needed a way to produce specific smells on-demand. They achieved that with a device built using an Arduino Nano R3 board that controls four atomizers. Those emit rose, citrus, vanilla, and musk scents, respectively. Another device performs a similar function, but with solid fragrances melted by heating elements.
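
    The control side of such a device can be very simple. The sketch below is a hypothetical version of the idea with assumed pin wiring: it pulses one of four atomizers for a short burst when a channel number arrives over serial.

      // Illustrative four-channel scent controller (pins and timing assumed).
      const int ATOMIZER_PINS[4] = {3, 5, 6, 9};   // assumed driver/MOSFET pins
      const unsigned long BURST_MS = 400;          // short puff; subtle scents preferred

      void setup() {
        Serial.begin(9600);
        for (int i = 0; i < 4; i++) pinMode(ATOMIZER_PINS[i], OUTPUT);
      }

      void loop() {
        if (Serial.available()) {
          char c = Serial.read();                  // '1'..'4' selects rose..musk
          if (c >= '1' && c <= '4') {
            int idx = c - '1';
            digitalWrite(ATOMIZER_PINS[idx], HIGH);
            delay(BURST_MS);
            digitalWrite(ATOMIZER_PINS[idx], LOW);
          }
        }
      }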

    This research was very open-ended, but the team was able to determine that people prefer subtle scents, don’t want those to happen too frequently, and want them to mesh well with what their other senses are telling them. That knowledge could be helpful for scent-based HRI experiments in the future.

    The post What if robots could communicate with humans by emitting scents? appeared first on Arduino Blog.

    Website: LINK

  • This ‘smocking display’ adds data physicalization to clothing

    Reading Time: 2 minutes

    Elastic use in the textile industry is relatively recent. So, what did garment makers do before elastic came along? They relied on smocking, a technique for bunching up fabric so that it can stretch to better fit the form of a body. Now a team of computer science researchers from Canada’s University of Victoria is turning to smocking to create interesting new “data physicalization” displays for clothing.

    These “smocking displays,” part of the researchers’ VISMOCK approach, can convey information through changes in form and changes in color. The practical implementation of this idea would be up to the garment maker, but there are many intriguing possibilities. Imagine, for instance, that your shirt sleeve could get tighter to indicate that it is time for an appointment on your daily calendar. Or if your pants could show the current time.

    Both of those concepts — and much more — are entirely feasible. The team achieved this by combining two techniques. The first is impregnating the fabric with thermochromic pigments that change color when heated. Heating elements embedded in the fabric, controlled by an Arduino Mega 2560 board through MOSFETs, drive that change. Resolution is low, because heat spreads, but it is enough to show quite a bit of information.
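
    In outline, each thermochromic "pixel" is just a heater switched by PWM. The sketch below is a minimal illustration under assumed pins and timing, not the researchers' code:

      // Illustrative single-pixel thermochromic driver for a Mega 2560.
      const int HEATER_PIN = 9;     // assumed PWM pin to a MOSFET gate

      void setHeat(float duty) {    // duty in 0.0 .. 1.0
        analogWrite(HEATER_PIN, (int)(constrain(duty, 0.0, 1.0) * 255));
      }

      void setup() {
        pinMode(HEATER_PIN, OUTPUT);
      }

      void loop() {
        setHeat(0.6);               // heat until the pigment changes colour
        delay(8000);
        setHeat(0.0);               // cool; the colour slowly returns
        delay(12000);               // a real build would close the loop with a thermistor
      }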

    The second technique is smocking, but with special SMA (Shape Memory Alloy) wires and springs. Those can be deformed, but will then return to their original shape when current (and heat) runs through them. By integrating SMA into the smocking pattern, the fabric can change shape on-demand. As with the thermochromic heating elements, this occurs under the control of an Arduino.

    Image credit: B. Bakhtiari et al.

    The post This ‘smocking display’ adds data physicalization to clothing appeared first on Arduino Blog.

    Website: LINK

  • Retro gaming magic mirror

    Reading Time: 2 minutes

    With that in mind, he went big. And we mean really big. He took a 65-inch touchscreen and connected it to a Raspberry Pi 5 computer, figuring games would look amazing across such a large display. It also enabled David to create a full-length mirror, despite it posing issues of its own.

    “Working with such a large display was a challenge due to the physical weight of moving and manipulating it,” he explains. “I think it weighed 48kg so I really shouldn’t have been lifting it on my own. I was afraid I would break it by letting it flex, cracking the screen.”

    Looking good

    Initially, David tested the concept using a Raspberry Pi 3 computer and an official Raspberry Pi seven-inch touchscreen. He played around with PINN, a version of the NOOBS operating system installer, and sought to get everything working with RetroPie before ordering the larger equipment.

    “Unfortunately, the curse of the early adopter struck, with RetroPie and PINN not having official support for Raspberry Pi 5 at the time,” David says. “It took some time to get PINN working at all and, even then, I think Raspberry Pi 5 support was questionable.” David switched to Recalbox which was installed on one partition. Another partition was used for the magic mirror functions.

    “I wanted the mirror to play as many gaming platforms as possible,” David says. “To achieve this I figured I needed the most processing power, and the Raspberry Pi 5 seemed the best way to go. So far it has proved more than capable of emulating games on many platforms without much trouble.”

    On reflection

    David also added motion sensing using a PIR sensor. When someone walks in front of the sensor, the screen turns on. When the person moves away, it turns off. The display also turns off at night and comes back on in the morning, using the Magic Mirror app on Raspberry Pi OS to show the weather forecast, a calendar and more. The build also includes an RS232 converter so that the Raspberry Pi’s Universal Asynchronous Receiver/Transmitter (UART) can be used for serial communications.
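
    For a flavour of the motion-sensing logic (this is not David's code), a small C++ program using libgpiod could poll the PIR and toggle the display. The GPIO number is an assumption, the chip name varies by Raspberry Pi model, and the vcgencmd display-power call only works on older display stacks, so treat this strictly as a sketch.

      // Illustrative PIR-to-screen-power loop using libgpiod 1.x (assumptions above).
      #include <gpiod.h>
      #include <cstdlib>
      #include <unistd.h>

      int main() {
        gpiod_chip *chip = gpiod_chip_open_by_name("gpiochip0");  // e.g. gpiochip4 on Pi 5
        if (!chip) return 1;
        gpiod_line *pir = gpiod_chip_get_line(chip, 17);          // assumed PIR on GPIO17
        if (!pir || gpiod_line_request_input(pir, "mirror-pir") < 0) return 1;

        bool screenOn = true;
        while (true) {
          int motion = gpiod_line_get_value(pir);
          if (motion == 1 && !screenOn) {
            std::system("vcgencmd display_power 1");   // wake the screen
            screenOn = true;
          } else if (motion == 0 && screenOn) {
            std::system("vcgencmd display_power 0");   // blank it; a real build would add a timeout
            screenOn = false;
          }
          sleep(1);
        }
      }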

    When you want to play, controllers can be connected via Bluetooth Low Energy or USB, and the games look a treat on such a large screen. There is still room for improvement, however. “I still need to get around to reapplying the mirror film again,” he says. “I’d also like to spend more time with the plugins to the magic mirror platform, maybe even develop a couple of my own to make the best use of the screen real estate available. Maybe in the future there is scope for a camera, facial recognition and a multi-user experience.”

  • 10 years ago, Android expanded to 3 new platforms

    Reading Time: 2 minutes

    Ten years ago, Android got a lot bigger. Not just because it crossed over a billion active users, but because in 2014, we extended Android (then primarily available on phones and tablets) to three new platforms: Android Auto for your car, Android TV for your smart TV and Android Wear for your smartwatch. The new platforms introduced Android to hundreds of millions of new devices and brought Google services to more places.

    “Our mission since the launch of the first Android device in 2008 has been to work with the industry and build for everyone, everywhere — to provide people with choice,” says Android Ecosystem President Sameer Samat. “And in 2014, we decided to take that to the next level.”

    Cars, smartwatches and TVs may sound like three extremely different categories of devices, with such vastly different hardware and form factors that it’s hard to imagine they’d share any software DNA, let alone the same base operating system. But Android’s open platform, developer-friendly design and flexibility made it a great fit for new places.

    Android hits the road

    In 2010, Google Maps added turn-by-turn navigation, and practically overnight, driving became wildly more efficient.

    “Smartphones were not designed to be used in a vehicle by a driver. So we looked at this as an opportunity to create a safer and more seamless connected experience,” says Patrick Brady, Android Auto VP. “We realized we could leverage the computers in everyone’s pockets that had an ecosystem of millions of apps used by over a billion people. So we adapted Android — and its apps — to work in cars.”

    Android Auto’s approach was simple: Connect your phone to your car display and you’ll see your Android apps on screen, ready to give you directions, make a hands-free call or play music from your favorite playlists.

    Website: LINK

  • Kickstart your tech journey, with the new Arduino Plug and Make Kit!

    Reading Time: 4 minutes

    Hey, creating an IoT device shouldn’t be rocket science. We believe technology is for everyone. That’s why we’ve developed the all-new, beginner-friendly Plug and Make Kit – the easiest way to get started with Arduino!

    [youtube https://www.youtube.com/watch?v=Qzzqn2AgSlo?feature=oembed&w=500&h=281]

    Inside the box, you’ll find everything you need to create your first smart solution for everyday life. For example, you can build a fully functional timer, a weather forecast station, or even a game controller – all in a single session.

    There are seven projects complete with step-by-step instructions ready to try (plus dedicated tutorials on how to use the individual components included in the kit): start wherever you like, follow your interests, and have fun with it!

    • Weather Report: Never get caught in the rain again, with a visual reminder to take an umbrella when needed.
    • Hourglass: Who needs an egg timer? Customize your own digital hourglass.
    • Eco Watch: Make sure your plants thrive in the perfect temperature and humidity.
    • Game Controller: Level up with your very own HID (human interface device) gamepad.
    • Sonic Synth: Get one step closer to being a rockstar, DJ or sound engineer!
    • Smart Lights: Set the mood with your very own smart lamp.
    • Touchless Lamp: Control lights with a simple gesture.

    Our hope is that the skills you learn and satisfaction you gain will fuel your tech journey in making for years to come, wherever your passions may take you. 

    This is just the beginning

    The components in the Plug and Make Kit can be used to come up with endless new applications, and they integrate swiftly with our full ecosystem of hardware and software tools.

    We can’t wait to see the original ideas you share, and the new projects the community can try!

    Build it in a snap, control it via the app!

    For the Plug and Make Kit, we’ve developed a whole new hardware approach: components just connect together – no breadboard, jumper wires or soldering needed. 

    Once you’ve built your device, you’ll find all the resources and support you may need to get going via the Arduino Cloud:

    • Make progress, troubleshoot, or expand your project with step-by-step online guides.
    • Save precious time and focus on bringing your next idea to life, by simply importing templates (pre-configured projects for quick device setup), freely available to turn your ideas into fully operational devices within minutes.
    • Visualize data any way you wish, with unlimited dashboards, also on your smartphone.

    Based on makers’ favorite, the UNO R4 WiFi

    The Arduino UNO R4 WiFi features a powerful microcontroller with Wi-Fi®/Bluetooth® Low Energy connectivity, a Qwiic connector, a large LED matrix, and more. If you don’t fully understand what that all means for now, don’t worry: the UNO is the definition of ease of use, and its latest version is perfect for beginners and beyond.

    Plug & play with Modulino® 

    The Plug and Make Kit offers a collection of seven Modulino® sensors and actuators, all included in the box:

    • Knob: for super-fine value setting
    • Pixels: eight LEDs to shine bright or dim down – you choose!
    • Distance: a time-of-flight proximity sensor to measure distances
    • Movement: to perfectly capture movements like pitch, roll or tilt
    • Buzzer: to compose your own alarm sounds or simple tunes
    • Thermo: a sensor for both temperature and humidity
    • Buttons: three buttons for quick user selection

    Each Modulino simply connects via the UNO R4 WiFi’s onboard Qwiic connector: no breadboard, no soldering – and no wondering which side goes where, because the connector is polarized.
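
    Reading a node takes only a few lines. This example follows the Modulino library API as documented by Arduino at the time of writing (treat the exact calls as subject to change), and prints readings from the Thermo node:

      // Read the Modulino Thermo over the UNO R4 WiFi's Qwiic connector.
      #include <Modulino.h>

      ModulinoThermo thermo;          // the temperature/humidity node

      void setup() {
        Serial.begin(9600);
        Modulino.begin();             // initialise the Qwiic/I2C bus
        thermo.begin();
      }

      void loop() {
        Serial.print("Temperature: ");
        Serial.print(thermo.getTemperature());
        Serial.print(" C, Humidity: ");
        Serial.print(thermo.getHumidity());
        Serial.println(" %");
        delay(1000);
      }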

    If you like the sense of accomplishment you get when things just click, you’ll love this: once you have a few nodes, you can keep your project looking neat by arranging everything on the Modulino Base structural frame. 

    Connect to your passions

    Whether you are new to making or want to share your passion with someone taking their first steps in this world, the Plug and Make Kit offers the easiest, most fun introduction to a world of possibilities where technology is open to all. 

    Ready to put your hands on technology? The Plug and Make Kit can be purchased worldwide from the Arduino Store, as well as from the official network of Arduino partners listed below:

    Global

    Asia

    Europe

    North America

    Latin & South America

    The post Kickstart your tech journey, with the new Arduino Plug and Make Kit! appeared first on Arduino Blog.

    Website: LINK

  • Celebrating the community: Yang

    Reading Time: 4 minutes

    We love hearing from members of the community and sharing the stories of amazing young people, volunteers, and educators who are using their passion for technology to create positive change in the world around them.

    A woman is pictured sitting in the office. There's a window behind her with a view of the London skyline.

    In our latest story, we’re heading to London to meet Yang, a Manager in Technology Consulting at EY specialising in Microsoft Business Applications, whose commitment to CoderDojo is truly inspiring. Yang’s passion for volunteering has grown since she first volunteered at a CoderDojo club at a local museum. In recent years, she has actively searched for ways to bring the CoderDojo movement to more children, and encouraged her colleagues to come along on the journey too.

    Introducing Yang

    [youtube https://www.youtube.com/watch?v=HOPK4I-zBn8?feature=oembed&w=500&h=281]

    When Yang was growing up, both of her parents worked in STEM, but her own journey into a career in technology took a varied route. After initially studying journalism in China, her path shifted when she pursued a Master’s in Digital Humanities at UCL, London, broadening her digital skills and paving the way for her current role.

    On a weekend visit to a museum, Yang found the opportunity to volunteer at their CoderDojo. This experience sparked an enthusiasm to create more opportunities for young people to explore the world of computing, and this soon evolved into a plan to implement clubs at the EY offices. 

    Building a community of mentors

    With support from the EY Corporate Responsibility team, and fellow colleagues, Yang started to deliver Dojo sessions at the EY office in London. From the very first session, Yang was blown away by the level of enthusiasm among her colleagues, and their willingness to volunteer their time to support the club. She soon realised it was possible to roll this initiative out to other offices around the country, expanding the volunteer network and increasing their impact.

    Yang mentors two young learners during a CoderDojo session.

    Clubs have now been run in four EY offices across the UK, and the team has even seen the first international club take place, at the EY office in Baku, Azerbaijan. In total, EY clubs have seen around 350 young people attend and give coding a go.

    Championing diversity in tech

    As a woman in tech, Yang is all too aware of the gender imbalance in the industry, and this is something she wanted the clubs at the EY offices to address. 

    “If there are some female role models, I think for a little girl grow up that means so much. Because if they can see somebody thrive in this industry, they will see themselves there one day. And that’s the inspiration.” – Yang

    Yang actively encourages female participation in Dojo sessions, for example through holding sessions with a focus on engaging girls to mark International Women’s Day and Ada Lovelace Day. Through her leadership, she creates an inclusive environment where girls can envision themselves as future leaders. 

    Yang mentors a young person during a CoderDojo session.

    Yang’s motivation doesn’t only inspire the young people attending her clubs, but also resonates with those who work with her on a daily basis, including colleagues like Iman and Elizabeth, who shared how much they admire Yang’s dedication and energy.

    “I would love to have had a role model like [Yang] when I was younger. She’s just so inspiring. She’s so full of energy. I mean, from my personal experience, when I was younger, we didn’t have anything to do with coding.

    There were situations where I was vaguely interested [in computing] but was told that it wasn’t for girls. And now with Yang running these events, seeing the girls come here and being so interested and wanting to learn, it really opens up so many more doors for them that they don’t even realise.” – Elizabeth, colleague and CoderDojo volunteer

    Seeing the impact of her mentorship and the enthusiasm of young participants has fueled Yang’s passion even further. 

    “This has been a great opportunity to set up CoderDojo sessions for young people. I’ve had a lot of support from colleagues and other volunteers who have helped to run the sessions […] I feel super proud of what we’ve achieved so far.” – Yang

    For Yang, mentorship isn’t just about teaching technical skills; it’s about helping young people develop confidence and resilience, and letting everyone know there is a place for them in computing should they want one.

    Two mentors deliver a presentation during a CoderDojo session.

    Continuing to make a difference in her community and beyond, Yang recently participated in the 68th annual UN Women’s Commission on the Status of Women, which is the UN’s largest annual gathering on gender equality and women’s empowerment. 

    We’re delighted to be part of Yang’s journey, and can’t wait to see what she contributes to the world of tech next.

    Help us celebrate Yang and her inspiring journey by sharing her story on X, LinkedIn, and Facebook.

    Website: LINK

  • KiPneu makes soft robotic biomimetics accessible to STEAM students

    Reading Time: 2 minutes

    Biomimicry, which is a method for developing new technology inspired by nature, has been one of humanity’s greatest assets. But systems reliant on soft tissue, such as an octopus’s tentacles, have been notoriously difficult to reproduce in the robotics world. To give STEAM students an advantage in the soft robotics arena, a team of Chinese researchers developed a pneumatic biomimicry platform called KiPneu.

    Pneumatics are ideal for biomimetic soft robots because they’re subject to fewer of the constraints typical of electric motors and rigid mechanical linkages. KiPneu is a hardware and software ecosystem designed to speed up the assembly of pneumatically actuated soft robots. It consists of inflatable pneumatic actuators and custom LEGO-compatible bricks. Students can combine those bricks and actuators to construct the physical forms of their robots.

    After construction, students can make their robot move by pumping in air and controlling the flow of that air using valves. The initial prototype relied on an Arduino UNO Rev3 board to control power going to the pump, as well as the positions of the valves. The Arduino could, of course, perform those functions in sequence or in response to input commands, giving the robots the ability to move in complex ways.
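
    A minimal sketch of that sequencing idea (pins and timings are assumptions, not from the KiPneu paper) might alternate two actuators to produce a simple crawling gait:

      // Illustrative pump-and-valve sequencer for a pneumatic soft robot.
      const int PUMP_PIN = 3;      // assumed MOSFET gate for the air pump
      const int VALVE_A  = 5;      // assumed solenoid valve, actuator A
      const int VALVE_B  = 6;      // assumed solenoid valve, actuator B

      void setup() {
        pinMode(PUMP_PIN, OUTPUT);
        pinMode(VALVE_A, OUTPUT);
        pinMode(VALVE_B, OUTPUT);
      }

      void inflate(int valvePin, unsigned long ms) {
        digitalWrite(valvePin, HIGH);   // route air to this actuator
        digitalWrite(PUMP_PIN, HIGH);
        delay(ms);
        digitalWrite(PUMP_PIN, LOW);
        digitalWrite(valvePin, LOW);    // seal the actuator
      }

      void loop() {
        inflate(VALVE_A, 1500);         // curl one side
        inflate(VALVE_B, 1500);         // then the other
      }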

    But the team also created an electronics-free version, which relies on a hand pump and “tangible valves.” Together, those allow for similar functionality, but the user must pump air and change valve positions manually.

    Both KiPneu systems have potential, with the manual system better suited to younger students and the more versatile Arduino-controlled system to older ones.

    Image credit: Guanyun Wang et al.

    The post KiPneu makes soft robotic biomimetics accessible to STEAM students appeared first on Arduino Blog.

    Website: LINK

  • Circuit Canvas can help you quickly create illustrated wiring diagrams

    Reading Time: 2 minutes

    Good documentation is extremely useful when conceiving, building, or sharing electronic circuit designs, but traditional schematics and technical drawings are difficult for non-professionals to interpret and create. Makers can benefit from intuitive illustrations that look good enough to share. Circuit Canvas, developed by Oyvind Nydal Dahl, makes it easy to quickly create beautiful and useful illustrated diagrams.

    Circuit Canvas is quite similar to Fritzing, but developed with the goals of being easy to use and fast. A user can create a schematic or an illustrated diagram for a basic circuit in less than a minute — if the components already exist in the library. But as with Fritzing, users may end up in a situation where they need to add custom parts. Circuit Canvas promises to make that process as painless as possible and even supports Fritzing parts, so it can take advantage of that ecosystem’s huge library.

    At this time, Circuit Canvas already has a substantial library of parts. That includes Arduino UNO and Arduino Nano development boards, as well as other boards that are compatible with the Arduino IDE, such as the Seeed Studio XIAO ESP32C3 and the Raspberry Pi Pico. And, of course, there are many discrete components, ICs, and modules in the library to work with.

    Users can either build schematics using standard symbols, or more friendly illustrated diagrams. In the future, the two document types will link together. Creating a diagram is as simple as placing components and drawing wires between them. After making the connections, users can move components around and the wires will automatically follow.

    If you’ve been looking for a way to improve the documentation for your Arduino projects, then Circuit Canvas is worth checking out. It is free to try and you can run it right in your browser now.

    The post Circuit Canvas can help you quickly create illustrated wiring diagrams appeared first on Arduino Blog.

    Website: LINK

  • 4 Google updates coming to Samsung devices

    Reading Time: < 1 minute

    At I/O, we shared how Wear OS 5 brings improved performance and battery life. Samsung’s new Galaxy Watch lineup, including the Watch Ultra and Watch7, will be the first smartwatches powered by Wear OS 5. And they’re the perfect companion for when you’re on the go: These smartwatches offer advanced health monitoring capabilities, including heart rate tracking and sleep monitoring, and a personalized health experience, as well as access to a wide range of apps in Google Play.

    4. Watch YouTube TV in multiview

    On the Galaxy Z Fold6, YouTube TV subscribers will be able to watch in multiview, enjoying up to four different streams at the same time. You can choose from pre-selected combinations of football, news, weather and simultaneous sporting events.

    We’re constantly working with Samsung to bring the latest Google updates to Galaxy products, from smartphones and wearables to even future technologies, like the upcoming XR platform. Check out everything that was announced at Galaxy Unpacked today.

    Website: LINK

  • Puttr indoor putting practice green

    Reading Time: 4 minutes

    Like many great ideas, Puttr came about because of some enforced downtime during lockdown. Entrepreneur and founder of several successful start-ups Matthew Allard had been on the golf team at university, and lockdown had him contemplating an at-home putting game that he and his son could both enjoy. Matthew had a personal interest in how software and computers can interact with the real world, and having taken post-graduate courses in embedded systems was keen to make use of what he’d learned.

    One thing Matthew knew already was that “putting practice is boring and lonely” (don’t they have crazy golf courses in the US?), yet it accounts for 42% of the time golfers put in. Creating a means to connect fellow golfers and ‘gamify’ putting could transform this rote activity and allow members of the golfing community to challenge each other with online tournaments.

    Putting mat and chute roll up and are stored in the self-contained Puttr box

    Hits and misses

    Matthew originally aimed to track made and missed putts via an app using sensors in the hole of an at-home putting mat hooked up to GPIO pins. However, he soon discovered this approach was limited: “I could detect when a ball went in the hole, [but] I couldn’t detect missed putts.” Next, Matthew tried break-beam IR sensors to get more precision and measure missed putts, as well as ‘makes’, but “quickly realised that any sun exposure would cause false positives in the break-beam”.

    A friend tipped him off about Raspberry Pi, and Matthew soon saw he could use computer vision and a wide-angle lens to detect the location of the physical hole, then track any golf ball that passed its field of view. Once a ball has entered or exited, it sends the full ball path data over Bluetooth to a connected app on an iOS or Android device, he explains. Details of each putt are logged, with the user able to access stats on their performance and optionally share it with other Puttr users.

    Creating a putt-tracker involves mounting a Raspberry Pi 4, an infrared lens, and a wide-angle camera lens in a case

    Raspberry Pi quickly proved a great choice, since it offered an operating system with all the tools he needed for the software along with good value hardware that worked well together. “Many suppliers tried to talk me into creating my own board [but] there were many reasons to use Raspberry Pi.” The camera connection, Bluetooth, Wi-Fi, and processor were all included. Matthew was also encouraged by the strong community keen to help with any troubleshooting he might need, given this was his first ever Raspberry Pi project.

    Embrace the light

    At first, Matthew stuck with his infrared break-beam idea, testing it in his garage in the evenings after long days at his day job. There were “a ton of tweaks” to get the computer vision to work well under different lighting conditions. Eventually, it seemed as though the beams were working just as he expected. “I would get a break when the ball enters the ramp, and another one when and if it entered the hole. Perfect!”

    Replicating those results when demonstrating the embryonic Puttr game to his son was less successful. In fact, it didn’t work at all in daylight. Matthew eventually realised that sunlight hitting the beam’s receiver was preventing the circuit from being broken even when a ball passed through, because sunlight emits infrared rays of its own: “Apparently I missed that in school!” Connecting Raspberry Pi 4 to a GATT server (for Apple devices) as a headless Bluetooth peripheral meant code pairing was not an option. Instead, Matthew created a Bluetooth Write Characteristic that can receive a Wi-Fi SSID and password specifically for the task. He then wrote all the computer vision code and app software to make Puttr work.

    The Puttr app automatically connects the mat to the phone or tablet via Bluetooth and records statistics for each player’s putting average.

    Prototyping involved laser-cutting Baltic birchwood, as well as Matthew’s first foray into 3D design and printing, using CraftCloud to create the ramp, the ball-return chute, and the box that serves as both ball tracker and holdall. The clever design is portable, with the mat rolling up inside.

    Matthew praises the “stable, tested OS, camera interface, Bluetooth and Wi-Fi”, and says choosing Raspberry Pi meant R&D took at least a year less than it would have with a different setup, and at much lower cost. New versions and applications are already planned. Since launching 18 months ago (after a successful Indiegogo crowdfunder), the Puttr app has logged more than a million putts. The clever take on pitch and putt now has worldwide league tables, games and challenges, with a subscription model for golfers keen to pit their skills against others.

  • This rolling ball game brings Skee-Ball-style fun from the arcade to your home

    Reading Time: 2 minutes

    Ask your friends about their favorite games at the arcade and the most common answer will likely be Skee-Ball. But while many other popular arcade games have viable at-home alternatives, Skee-Ball doesn’t — at least not unless you’re willing to spend a serious amount of money. Luckily, you can get your Skee-Ball fix with a similar carnival-style rolling ball game by Gary Nelis.

    This isn’t exactly the same as Skee-Ball; it seems to be a unique creation inspired by several different ball-rolling games that you might come across at carnivals and arcades. The player rolls balls across the table and into an array of holes. If the ball falls through a hole, the player gets the number of points associated with that specific hole. To make this even more fun, Nelis added electronic scorekeeping and fun sound effects.

    The hardest part of this project is constructing the table, which will require some woodworking experience. Next, you’ll need to add the electronics, including the Arduino UNO Rev3 board that detects balls and keeps score. It detects balls falling through the holes using infrared break beam sensors. Nelis grouped those by point value, wiring the sensors in parallel so that they only use a total of three Arduino pins. 
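
    The scoring logic that results is pleasantly simple, as the sketch below illustrates (pins and point values are assumptions, not Nelis's code): each group of parallel break-beam sensors pulls one input LOW when any of its beams is broken.

      // Illustrative three-group break-beam scorekeeper.
      const int GROUP_PINS[3]   = {2, 3, 4};     // assumed inputs, one per point group
      const int GROUP_POINTS[3] = {10, 20, 50};  // assumed point values

      long score = 0;
      bool wasBroken[3] = {false, false, false};

      void setup() {
        Serial.begin(9600);
        for (int i = 0; i < 3; i++) pinMode(GROUP_PINS[i], INPUT_PULLUP);
      }

      void loop() {
        for (int i = 0; i < 3; i++) {
          bool broken = digitalRead(GROUP_PINS[i]) == LOW;
          if (broken && !wasBroken[i]) {         // edge: a ball just crossed a beam
            score += GROUP_POINTS[i];
            Serial.println(score);               // the real build drives LED digits instead
          }
          wasBroken[i] = broken;
        }
        delay(5);                                // crude debounce
      }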

    The Arduino shows the score and remaining time on a pair of three-digit, seven-segment displays made using strips of WS2812B individually addressable RGB LEDs. Those can be set to any color and they even support animated effects. Finally, the Arduino plays sound effects through an Adafruit Audio FX Sound Board module.

    If you always head straight to the Skee-Ball tables when you visit an arcade, then this is the project for you.

    [youtube https://www.youtube.com/watch?v=cnWRkR652qU?feature=oembed&w=500&h=281]

    The post This rolling ball game brings Skee-Ball-style fun from the arcade to your home appeared first on Arduino Blog.

    Website: LINK

  • A stroopwafel doneness detection device

    Reading Time: 2 minutes

    If you’re lucky enough to visit the Netherlands and you order a hot drink, you’ll likely be given a sweet treat as well. That is a stroopwafel, a crispy little waffle-syrup sandwich that the Dutch like to rest on top of their drink so that the rising heat will soften the pastry. But Eamon Magd is just a visitor to the country and didn’t know how long to leave it, so he built this stroopwafel doneness detection device.

    Magd inferred that there are three factors that, together, might help him determine when a stroopwafel becomes ready for consumption: heat, time, and movement. That last one might seem strange, but stroopwafels tend to curl up after they reach a certain point — probably a result of the sandwich-style construction and a differential in temperature/moisture. So, by looking for movement, Magd thought he could detect the beginning of that process.

    A computer vision application, running on Magd’s laptop, detects that movement by looking for blurry pixels. Assuming the image is otherwise sharp, blurry pixels indicate movement. Magd also used an Arduino UNO Rev3 board to detect the temperature on the surface of the stroopwafel with a simple temperature sensor. The Arduino displays the current time since start on a small LCD and sounds an alarm through a buzzer when it determines that the stroopwafel has softened to Magd’s liking.
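
    Magd's own code isn't published with the article, but a common way to score "blurriness" is the variance of the Laplacian, shown below in OpenCV C++ as an assumed stand-in for his approach: a falling score on an otherwise sharp scene suggests motion.

      // Sharpness score via variance of the Laplacian (OpenCV).
      #include <opencv2/opencv.hpp>
      #include <iostream>

      double sharpness(const cv::Mat &frame) {
        cv::Mat gray, lap;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::Laplacian(gray, lap, CV_64F);
        cv::Scalar mean, stddev;
        cv::meanStdDev(lap, mean, stddev);
        return stddev[0] * stddev[0];       // low variance = blurry = likely moving
      }

      int main() {
        cv::VideoCapture cap(0);            // laptop webcam
        if (!cap.isOpened()) return 1;
        cv::Mat frame;
        while (cap.read(frame)) {
          std::cout << "sharpness: " << sharpness(frame) << std::endl;
          cv::imshow("stroopwafel", frame);
          if (cv::waitKey(100) == 27) break;   // Esc to quit
        }
        return 0;
      }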

    The system attempts to guess the right moment using a linear regression model trained on input data Magd collected. He tried to account for beverage types, as some might soften the stroopwafel faster than others, but the model is really just working on averages anyway. It doesn’t, for instance, differentiate between stroopwafel makers. Regardless, this is an amusing project.

    [youtube https://www.youtube.com/watch?v=FuxpXlFtaQE?feature=oembed&w=500&h=281]

    The post A stroopwafel doneness detection device appeared first on Arduino Blog.

    Website: LINK

  • Coolest controllers ever? Icy gamepads melt in users’ hands

    Reading Time: 2 minutes

    Nintendo’s Joy-Con controller system is very innovative and generally well-regarded, with one major exception: stick drift. That’s a reliability issue that eventually affects a large percentage of Joy-Cons, to the frustration of gamers. But what if that was intentional and gamepads were designed to deteriorate in short order? That’s the idea behind ICY Interfaces.

    Yoonji Lee and Chang Hee Lee at KAIST (Korea Advanced Institute of Science & Technology) created three devices under the ICY Interfaces umbrella: MeltPress, FrostPad, and IceSquish. Each incorporates ice — literal frozen water — in a manner meant to make use of the material’s ephemeral nature. Imagine, for instance, a gamepad with buttons that melt at an increasing rate as you touch them. Or another gamepad with buttons that don’t become accessible until a protective sheet of ice melts away. The ICY Interfaces are experiments in this kind of dynamic design.

    Each device contains an Arduino Mega 2560 board to read button presses and control additional hardware, like Peltier coolers. Those are thermoelectric solid-state heat pumps capable of refreezing the ice after it melts. 
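
    The refreeze cycle is the interesting part. A hypothetical version of that loop (pins and durations assumed, not the researchers' code) could look like this:

      // Illustrative ice-button reader with Peltier refreeze for a Mega 2560.
      const int BUTTON_PIN  = 22;    // assumed input from an ice-capped button
      const int PELTIER_PIN = 8;     // assumed MOSFET gate for the Peltier cooler

      void setup() {
        pinMode(BUTTON_PIN, INPUT_PULLUP);
        pinMode(PELTIER_PIN, OUTPUT);
        Serial.begin(9600);
      }

      void loop() {
        if (digitalRead(BUTTON_PIN) == LOW) {
          Serial.println("press");         // forwarded to the host software
          delay(150);                      // debounce
        }
        if (Serial.available() && Serial.read() == 'F') {   // host requests a refreeze
          digitalWrite(PELTIER_PIN, HIGH);
          delay(30000);                    // cool for 30 s; duration illustrative
          digitalWrite(PELTIER_PIN, LOW);
        }
      }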

    The researchers developed a simple game, called Iceland: Frozen Journeys, to work with ICY Interfaces. They built that game in Processing in order to take advantage of its strong compatibility with Arduino boards and the Arduino IDE. The game challenges players to build snowmen, capitalizing on the ice theme.

    MeltPress has an array of buttons with key caps made of ice. FrostPad has a surface with several capacitive touch pads covered in a layer of ice. IceSquish has buttons made of ice-filled silicone balls, which don’t become flexible enough to press until they’ve melted a bit. All of them make use of ice in an interesting way to explore new gameplay ideas. 

    Image credit: Y. Lee et al.

    The post Coolest controllers ever? Icy gamepads melt in users’ hands appeared first on Arduino Blog.

    Website: LINK

  • CrowPi Compact Raspberry Pi Educational Kit review

    Reading Time: 2 minutes

    Box of delights

    Elsewhere on the board you’ll find a USB-C power input, speakers, an LED display, GPIO pins, an RFID chip, plenty of sensors and switches and LEDs, and more besides. In the box there’s also a startling array of extra components, including a pair of SNES-like gamepads, a US-style power plug (with a three-pin adapter for UK sockets), servo and stepper motors, an IR remote, LEDs, a small stylus, headphones (3.5mm, so there’s nowhere to plug them in on a Raspberry Pi 5 board) and more. A GPIO ribbon cable is meant to bridge the gap between the Raspberry Pi’s pins and those on the carrier, but one wasn’t included in the package sent to us for review. Something that will fit is pretty cheap and easy to get online, but it would have been nice to have had it included.

    It takes a bit of force to successfully mate your Raspberry Pi 5 board with the CrowPi carrier, as the cables put up some resistance to getting it in exactly the right place, and once it’s screwed down the microSD slot is inaccessible. You might also need to rely on Wi-Fi for networking, as the USB cable goes across the Ethernet port, though you may be able to negotiate a fit with a slim cable. Having a power connection enter vertically at the top right of the motherboard feels clunky too – it would have been so much tidier to have it pierce the casing at the rear.

    A screw loose

    A version of the Raspberry Pi OS with appropriate drivers is available from the CrowPi website – a 3.9GB download – and while the board booted first time, it threw an error when we tried to use the Recommended Software tool and the Terminal (the Terminal text is tricky to read on such a small screen, but that’s not Elecrow’s fault) to install new programs. There was also a loose screw in the case, which fell out when we tried giving it an experimental shake.

    These problems are ones that can be fixed via software patches or by updating the package contents for future orders, and don’t affect the fact this is a convenient and well-made electronics board with prolific features. What they do mean is that, in its current state, it’s slightly difficult to recommend the CrowPi Compact Raspberry Pi Educational Kit, which is a shame, as it could be brilliant.

  • Exclusive experience at Pokémon GO Fest 2024 for Google Play Points members

    Reading Time: 2 minutes

    In May, we announced that we were leveling up Google Play Points with exciting new perks and rewards. That included exclusive early access to new games, Diamond Valley, and VIP experiences at the hottest events in gaming and entertainment.

    We’re kicking off those VIP experiences today at Pokémon GO Fest 2024. We’ve teamed up with Niantic and 100 Thieves to offer Play Points members, both at home and on the ground in New York City, exclusive perks and rewards. Here’s what’s available:

    Perks on the Play Store

    These exclusive perks are available to all members from now until the end of Pokémon GO Fest 2024: Global on July 14:

    • Redeem your points for exclusive Partner Research: Use your points for exclusive Partner Research that includes an encounter with the Fire Child Pokémon, Charcadet and the chance to earn XP, Stardust and an Incubator. Available for members in the United States, United Kingdom, Germany, Japan, Brazil and South Korea.
    • Watch 100 Thieves livestreams: Tune in as Valkyrae, Fuslie and more take viewers through Pokémon GO Fest 2024: New York City. Diamond, Platinum and Gold members: Be on the lookout for surprise merch drops during the streams.
    • Redeem your points for Pokémon GO Fest Merchandise: Use your points for hats, tote bags and pins from the official Pokémon GO Fest collection, while supplies last.
    • Claim a points boost: From July 5-7, claim a points boost to get 5X points on anything you buy in Pokémon GO.

    To redeem your points for Partner Research or Pokémon GO Fest merchandise and claim your points booster, visit the Use tab and Earn tab of Play Points home.

    On-the-ground at Pokémon GO Fest 2024: New York City

    Members with a ticket to Pokémon GO Fest 2024: New York City can visit the Google Play Space on Randall’s Island to get rewarded. If you’re a Gold+ member, be sure to claim a Wildcard: your VIP pass to exclusive merchandise from the Pokémon x 100 Thieves collection, meet-and-greets with 100 Thieves Creators, and more.

    Gold+ members in NYC can also stop by the Google Store in Chelsea from 11 a.m. – 2 p.m. ET on July 5-7 to snag apparel from the Pokémon x 100 Thieves collection, first come, first served, while supplies last.

    To learn more about the Pokémon GO Fest experience, check out the blog post from the Pokémon GO team or visit the Play Store.

  • Can remote co-presence keep distant human connections alive?

    Can remote co-presence keep distant human connections alive?

    Reading Time: 2 minutes

    The pandemic made a lot of things obvious, not the least of which is that humans need social interaction to maintain good mental health. Sadly, many of us spend our lives physically separated from our loved ones by great distances or inopportune circumstances. That’s why a team of researchers decided to explore remote co-presence design within the category of smart home technology.

    The goal of this design research, conducted by an interdisciplinary team from McMaster University and Simon Fraser University, was to experiment with technology that fosters human connection over long distances. But in contrast to typical communication, like email and video chats, this creates a sense of shared physical proximity. 

    The team developed two devices to demonstrate the concept. The first is a paired chair system called There Chair, with one chair visually indicating when someone occupies the other. If one chair is in a loved one’s home and the other in your own, then you would see when they sit down — and vice versa. The visual indicator is a “display” made up of a spiral wire covered in special fabric that changes color when current flow causes that wire to heat up. There are also heating pads in the seat to mimic the warmth of a person’s body. Those operate under the control of an Arduino UNO Rev3 board.

    The other device, called The Fragrance Frame, is also intended to pair with a remote equivalent. It, too, contains an UNO Rev3. The device looks like a picture frame, but with an ultrasonic sensor and a fragrance sprayer. When one unit detects someone nearby, it tells the paired unit to spray its scent. Ideally, a specific scent will trigger a memory associated with that individual. 
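
    As a rough sketch of how each frame might behave, the loop below pairs an HC-SR04-style ultrasonic reading with a sprayer output. Again, the pins, the 80 cm presence range, and the serial stand-in for the frame-to-frame link are assumptions for illustration, not details from the project.

    ```cpp
    // Hypothetical firmware for one Fragrance Frame.

    const int TRIG_PIN = 7;      // ultrasonic trigger
    const int ECHO_PIN = 6;      // ultrasonic echo
    const int SPRAY_PIN = 5;     // MOSFET or relay driving the sprayer
    const long PRESENCE_CM = 80; // anything closer counts as "someone is here"

    long readDistanceCm() {
      digitalWrite(TRIG_PIN, LOW);
      delayMicroseconds(2);
      digitalWrite(TRIG_PIN, HIGH); // 10 us pulse starts a measurement
      delayMicroseconds(10);
      digitalWrite(TRIG_PIN, LOW);
      long duration = pulseIn(ECHO_PIN, HIGH, 30000); // echo time in us, 0 on timeout
      return duration / 58;         // convert round-trip time to centimetres
    }

    void setup() {
      pinMode(TRIG_PIN, OUTPUT);
      pinMode(ECHO_PIN, INPUT);
      pinMode(SPRAY_PIN, OUTPUT);
      Serial.begin(9600); // stand-in for the link to the paired frame
    }

    void loop() {
      // Tell the remote frame when someone is nearby...
      long distance = readDistanceCm();
      if (distance > 0 && distance < PRESENCE_CM) {
        Serial.write('P');
      }

      // ...and release a short burst of scent when the remote frame
      // reports a presence on its side.
      if (Serial.available() > 0 && Serial.read() == 'P') {
        digitalWrite(SPRAY_PIN, HIGH);
        delay(300);
        digitalWrite(SPRAY_PIN, LOW);
      }

      delay(250);
    }
    ```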

    Both devices are attempts at using technology to create a feeling of closeness. These specific designs may never reach the consumer market, but the ideas behind them could well shape future smart home products.

    Image credit: H. Shakeri et al.

    The post Can remote co-presence keep distant human connections alive? appeared first on Arduino Blog.

    Website: LINK

  • Celebrating the AI innovators of tomorrow

    Celebrating the AI innovators of tomorrow

    Reading Time: 4 minutes

    Now that the Experience AI Challenge has closed for submissions, we would like to thank all the talented young people who participated and submitted their projects this year.

    The Challenge, created by us in collaboration with Google DeepMind, guides young people under the age of 18, and their mentors, through the process of creating their own unique AI project. It encourages young people to seek out real-world problems and create possible AI-based solutions. From January to May, participants in the UK were also able to submit their projects for feedback from AI experts.

    In response to the submissions, Richard Hayler, our Director of Youth Programmes, commented:

    “In running the Challenge, we have seen an incredible display of creativity, ingenuity, and curiosity about AI among young people. The dedication and innovation they demonstrated in their submitted projects has been truly inspiring. The Challenge has not only showcased the immense potential of addressing problems using AI tools, but most of all the remarkable talent and dedication of the next generation of innovators.

    We would also like to thank all the mentors who guided and encouraged participants throughout the Challenge for their invaluable support. Their expertise and mentorship were instrumental in the young people’s success.”

    Some Challenge highlights

    These are some examples of the innovative projects young people created: 

    AI creation: River Water Quality Prediction App

    Creator: Shreyas, age 13

    What does it do:

    “The model predicts how good the water quality of a river is based on several factors such as the levels of ammonium, nitrates, and dissolved oxygen.”

    Who is it for:

    “It can be used to tell if river water is safe to drink, or safe for life. This can also be used by authorities to decide where to deploy limited resources to purify water depending on its toxicity.”

    An image of a river with buildings in the background.

    AI creation: Coeliac Disease

    Creator: Zainev, age 14–18

    What does it do:

    “The model aims to identify foods that contain the allergen gluten.”

    Who is it for:

    “It is for people with gluten allergy and/or people trying to arrange food for those with a gluten allergy, as it will easily help them identify foods that contain gluten and are not safe to eat.”

    An AI tool classifying gluten and gluten free products.

    AI creation: Spacepuppy’s colour adventure

    Creator: Charlotte, age 12

    What does it do:

    “Teaches children about colours.”

    Who is it for:

    “Teachers at primary schools/nurseries.”

    A blue rocket on a white background.

    AI creation: Nutrify

    Creator: Ishaan, age 14–18

    What does it do:

    “The model identifies the students’ food items through a webcam image, giving its specific nutritional information including calories, carbs, sugars and proteins.”

    Who is it for:

    “This model can be easily used by students to be aware of the nutritional information of their meals.”

    An AI tool classifying different types of food, such as burgers, juice, and pizza.

    AI creation: Flossie

    Creator: Florence, age 11

    What does it do:

    “Identifies dressing gowns, slippers and pyjamas.”

    Who is it for:

    “For young children to learn different clothing.”

    An AI tool classifying different clothing.

    AI creation: Dermalyst

    Creator: Vedant, age 14–18

    What does it do:

    “Dermalyst is an AI-based dermatologist that analyses images of your skin to check if you have any skin infection or disease and also suggests solutions.”

    Who is it for:

    “This app is targeted at young people but anyone could use it. It saves them from having to wait for a GP appointment.”

    A doctor's hands holding a mobile phone.

    AI creation: Bird identifier

    Creator: William, age 13

    What does it do:

    “It is designed to identify common garden birds native to the United Kingdom. It can identify robins, blue tits, great tits and blackbirds by their photograph.”

    Who is it for:

    “Bird watchers may use the app to identify the birds that they see but don’t know what they are.”

    An image of a robin on a tree branch.

    Save the date for the celebratory webinar

    We would like to invite you to an online webinar on Wednesday 10 July at 4pm BST to celebrate all Experience AI Challenge participants. Click ‘notify me’ on YouTube to be notified when the webinar starts.

    During the webinar, Mark Calleja from the Raspberry Pi Foundation and Matko Bošnjak, Research Scientist at Google DeepMind, will highlight some young people’s AI creations and discuss all things AI. You can submit your questions for Mark and Matko by filling in this form today.

    Download the Experience AI Challenge resources

    Once again, thank you to everyone who participated in the Experience AI Challenge and submitted their projects.

    If you’re interested in the Challenge, you can still download the resources and use them to create your own AI projects.

    Website: LINK

  • Raspberry Pi goes public

    Raspberry Pi goes public

    Reading Time: 2 minutes

    Evolution

    “This is a watershed moment for Raspberry Pi,” Eben posted on Raspberry Pi dot com that morning. “And the start of a new phase in our evolution: access to the public market will enable us to build more of the products you love, faster. And the money raised by the Raspberry Pi Foundation in the IPO will support its ambitions for global impact in its second decade.”

    Philip Colligan, CEO of the Raspberry Pi Foundation wrote in a post a couple of weeks ago just how that would work: “To date, Raspberry Pi Ltd has donated nearly $50m from its profits to the Foundation, which we have used to advance our educational mission combined with over $60m in funding from philanthropy, sponsorship, and contracts for educational services,” he wrote. “From the Foundation’s perspective, an IPO provides us with the ability to sell some of our shares to raise money to finance a sustainable expansion of our educational activities. Put simply, instead of receiving a share of the company’s profits each year, we will convert some of our shareholding into an endowment that we will use to fund our educational programmes.”

    What’s next

    There’s been a whole lot of work going on behind the scenes for some time now – I’ve only caught glimpses on my monthly visits to Raspberry Pi Towers – so hopefully some of that pressure has now been alleviated. I’ll find out on my next visit.

    Anyway, I thought I’d talk about it here as, for various reasons, we’ve not had a chance to mention it elsewhere in the magazine [lots of exciting new opportunities to end up in front of a judge – Ed]. Also, my car got returned the following day, and now I sort of regret not having got up early for it. Ah well – onwards.