Creating a computer program involves many different skills — knowing how to code is just one part. When we teach programming to young people, we want to guide them to learn these skills in a structured way. The ‘levels of abstraction’ framework is a great tool for doing that. This blog describes how using the framework will benefit you and your learners in the computing classroom.
Find practical tips for using the ‘levels of abstraction’ framework with your learners
Read a summary of the research behind the framework
Learning to program: Everything at once?
Creating a program from the ground up can be daunting, especially for new learners. Without support, they’ll likely get stuck sooner or later; programs rarely work the first time round. And the more complex the problem that a program is addressing, the more likely it is that the first version of the program won’t work.
One reason that learning to program can be challenging is that it involves understanding a lot of specific concepts and applying many varied skills. From early on in their learning journey, young people need to have a firm grasp of concepts such as repetition, selection, variables, and functions. Also fundamental to learning to program well is the skill of abstraction: understanding a task and identifying which details are relevant and which can be ignored.
To get to grips with all these different concepts and skills, young people need structure — otherwise they’ll try to hold everything in their head at once, and likely feel overwhelmed by the cognitive load. This sort of experience may cause them to disengage instead of persisting. They may even decide that programming is not for them.
In light of these challenges, the ‘levels of abstraction’ framework is a great tool for teaching.
The benefits of the ‘levels of abstraction’ framework
The framework breaks programming down into four levels, each focusing on a different aspect of creating a program:
Problem: Analysing the problem or task the program should address, to understand and record the requirements.
Design: Turning the analysis into an algorithm — a set of steps for the computer to follow to create the desired output. This can involve flowcharts or storyboards, but importantly no code.
Code: Developing the code based on the design (and building the physical components if any are involved).
Running the code: Testing the code, checking outputs, and debugging where necessary.
Throughout the process of developing a program, learners (and professional programmers) move between these levels as they implement their designs and debug them, sometimes even returning to the problem level if more analysis or clarification is needed.
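As a tiny worked example (our own, not taken from the framework literature), here is how the levels might look for a simple task in Python:

# Problem: print a staircase of stars whose height the user chooses
# Design (no code): ask for a number; for each row from 1 up to that
# number, print a row containing that many stars
rows = int(input("How many rows? "))
for row in range(1, rows + 1):
    print("*" * row)
# Running the code: test with inputs such as 1 and 5, and debug if the
# output doesn't match the design

If the staircase comes out wrong, a learner can step back through the levels and ask: is this a coding slip, a flaw in the algorithm, or a misunderstanding of the problem?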
Potential benefits of the ‘levels of abstraction’ framework for teachers:
It helps you break down the activity of programming into discrete parts.
It helps you engage your learners, as you can show them that programming involves more than knowing how to code.
If your learners get stuck with their programming, the framework can help you guide them to a solution.
Potential benefits for learners:
The framework will help them think through all the steps needed to create a program that works, and practise their problem-solving skills and analytical thinking.
They will more readily see how programming connects to their world — at the problem level — and find aspects of programming where they have strengths and can use their creativity.
They will gain a stronger idea of how software is built in the tech sector.
Our new Quick Read shares tips on how to best use the framework in your teaching.
Things to aim for when using the framework with your learners:
Be aware of what level they are working at and when it’s time to switch to a different one.
Understand that, when they encounter an issue with their program, they can step back and use the framework to figure out where the issue comes from. The issue might be a bug in the code, an algorithm that doesn’t work as intended, or a problem description that overlooks something important.
We hope you find the framework useful. If you have ideas for how to use it in your teaching, why not share them in the comments?
Teaching programming: The wider context
When following the ‘levels of abstraction’ approach, learners need to explain how programs work and debug them. That means program comprehension is a key skill here. You may have already helped your learners to develop and practise this skill, for example with the PRIMM approach. The Block Model is another useful tool for helping your learners talk about various aspects of a program. And if you use the pair programming approach in programming activities, your learners can improve their program comprehension by talking about their code with each other. On our website, you’ll find more guidance on the best ways to teach programming and computing.
And what about generative artificial intelligence (AI) tools for programmers? In the age of AI, we think young people still need to learn to code because it empowers them to navigate and think critically about all digital technologies, including AI. And while generative AI tools can help a skilled programmer create quality code more quickly, more research is needed to show whether such tools help school-age young people build their understanding as they learn to code. You can see some of the great work being done in this area if you catch up with our 2024 research seminar series.
The ‘levels of abstraction’ framework is useful in your teaching no matter what tools young people use to create programs. Even with an AI tool, they will still need to work at all four levels of abstraction to program effectively.
Generative AI (GenAI) tools like GitHub Copilot and ChatGPT are rapidly changing how programming is taught and learnt. These tools can solve assignments with remarkable accuracy. GPT-4, for example, scored an impressive 99.5% on an undergraduate computer science exam, compared to Codex’s 78% just two years earlier. With such capabilities, researchers are shifting from asking, “Should we teach with AI?” to “How do we teach with AI?”
Leo Porter from UC San Diego
Daniel Zingaro from the University of Toronto
Leo Porter and Daniel Zingaro have spearheaded this transformation through their groundbreaking undergraduate programming course. Their innovative curriculum integrates GenAI tools to help students tackle complex programming tasks while developing critical thinking and problem-solving skills.
Leo and Daniel presented their work at the Raspberry Pi Foundation research seminar in December 2024. During the seminar, it became clear that much could be learnt from their work, and their insights are particularly relevant for teachers in secondary education who are thinking about using GenAI in their programming classes.
Practical applications in the classroom
In 2023, Leo and Daniel introduced GitHub Copilot in CS1-LLM, their introductory programming course at UC San Diego with 550 students. The course included creative, open-ended projects that allowed students to explore their interests while applying the skills they’d learnt. The projects covered the following areas:
Data science: Students used Kaggle datasets to explore questions related to their fields of study — for example, neuroscience majors analysed stroke data. The projects encouraged interdisciplinary thinking and practical applications of programming.
Image manipulation: Students worked with the Python Imaging Library (PIL) to create collages and apply filters to images, showcasing their creativity and technical skills.
Game development: A project focused on designing text-based games encouraged students to break down problems into manageable components while using AI tools to generate and debug code.
Students consistently reported that these projects were not only enjoyable but also deepened their understanding of programming concepts. A majority (74%) found the projects helpful or extremely helpful for their learning. One student noted:
“Programming projects were fun and the amount of freedom that was given added to that. The projects also helped me understand how to put everything that we have learned so far into a project that I could be proud of.”
Core skills for programming with Generative AI
Leo and Daniel emphasised that teaching programming with GenAI involves fostering a mix of traditional and AI-specific skills.
Writing software with GenAI applications, such as Copilot, needs to be approached differently to traditional programming tasks
Their approach centres on six core competencies:
Prompting and function design: Students learn to articulate precise prompts for AI tools, honing their ability to describe a function’s purpose, inputs, and outputs, for instance. This clarity improves the output from the AI tool and reinforces students’ understanding of task requirements.
Code reading and selection: AI tools can produce any number of candidate solutions, each different, so students must evaluate the options critically. Students are taught to identify which solution is most likely to solve their problem effectively.
Code testing and debugging: Students practise open- and closed-box testing, learning to identify edge cases and debug code using tools like doctest and the VS Code debugger (see the sketch after this list).
Problem decomposition: Breaking down large projects into smaller functions is essential. For instance, when designing a text-based game, students might separate tasks into input handling, game state updates, and rendering functions.
Leveraging modules: Students explore new programming domains and identify useful libraries through interactions with Copilot. This prepares them to solve problems efficiently and creatively.
Ethical and metacognitive skills: Students engage in discussions about responsible AI use and reflect on the decisions they make when collaborating with AI tools.
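To make the first and third of these skills concrete, here is a minimal sketch (our own illustration, not taken from the course materials) of a precisely described Python function with doctest-style tests embedded in its docstring:

def total_cost(prices, tax_rate):
    """Return the total of the numbers in `prices` with `tax_rate` applied.

    >>> total_cost([10.0, 20.0], 0.5)
    45.0
    >>> total_cost([], 0.5)
    0.0
    """
    return sum(prices) * (1 + tax_rate)

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # runs the examples embedded in the docstring

A docstring this precise doubles as a prompt for a GenAI tool, and the doctests give students a quick way to check whichever candidate solution they accept.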
Adapting assessments for the AI era
The rise of GenAI has prompted educators to rethink how they assess programming skills. In the CS1-LLM course, traditional take-home assignments were de-emphasised in favour of assessments that focused on process and understanding.
Leo and Daniel chose several types of assessments — some involved having to complete programming tasks with the help of GenAI tools, while others had to be completed without.
Quizzes and exams: Students were evaluated on their ability to read, test, and debug code — skills critical for working effectively with AI tools. Final exams included both tasks that required independent coding and tasks that required use of Copilot.
Creative projects: Students submitted projects alongside a video explanation of their process, emphasising problem decomposition and testing. This approach highlighted the importance of critical thinking over rote memorisation.
Challenges and lessons learnt
While Leo and Daniel reported that the integration of AI tools into their course has been largely successful, it has also introduced challenges. Surveys revealed that some students felt overly dependent on AI tools, expressing concerns about their ability to code independently. Addressing this will require striking a balance between leveraging AI tools and reinforcing foundational skills.
Additionally, ethical concerns around AI use, such as plagiarism and intellectual property, must be addressed. Leo and Daniel incorporated discussions about these issues into their curriculum to ensure students understand the broader implications of working with AI technologies.
A future-oriented approach
Leo and Daniel’s work demonstrates that GenAI can transform programming education, making it more inclusive, engaging, and relevant. Their course attracted a diverse cohort, including students traditionally underrepresented in computer science — 52% of the students were female and 66% were not majoring in computer science — highlighting the potential of AI-powered learning to broaden participation in the subject.
By embracing this shift, educators can prepare students not just to write code, but also to think critically, solve real-world problems, and effectively harness the AI innovations shaping the future of technology.
If you’re an educator interested in using GenAI in your teaching, we recommend checking out Leo and Daniel’s book, Learn AI-Assisted Python Programming, as well as their course resources on GitHub. You may also be interested in our own Experience AI resources, which are designed to help educators navigate the fast-moving world of AI and machine learning technologies.
Join us at our next online seminar on 11 March
Our 2025 seminar series is exploring how we can teach young people about AI technologies and data science. At our next seminar on Tuesday, 11 March at 17:00–18:00 GMT, we’ll hear from Lukas Höper and Carsten Schulte from Paderborn University. They’ll be discussing how to teach school students about data-driven technologies and how to increase students’ awareness of how data is used in their daily lives.
To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.
We are so excited to share another story from the community! Our series of community stories takes you across the world to hear from young people and educators who are engaging with creating digital technologies in their own personal ways.
Selin and her robot guide dog IC4U.
In this story we introduce you to Selin, a digital maker from Istanbul, Turkey, who is passionate about robotics and AI. Watch the video to hear how Selin’s childhood pet inspired her to build tech projects that aim to help others live well.
Meet Selin
Celebrate Selin and inspire other young people by sharing her story on Twitter, LinkedIn, and Facebook.
Selin (16) started her digital making journey because she wanted to solve a problem: after her family’s beloved dog Korsan passed away, she wanted to bring him back to life. Selin thought a robotic dog could be the answer, and so she started to design her project on paper. When she found out that learning to code would mean she could actually make a robotic dog, Selin began to teach herself about coding and digital making.
Thanks to her local CoderDojo, which is part of the worldwide CoderDojo network of free, community-based, volunteer-led programming clubs where young people explore digital technology, Selin’s interest in creating tech projects grew and grew. Selin has since built seven robots, and her enthusiasm for building things with digital technology shows no sign of stopping.
Selin and her robot guide dog IC4U.
One of Selin’s big motivations to explore digital making was having an event to work towards. At her Dojo, Selin found out about Coolest Projects, the global technology showcase for young people. She then set herself the task of making a robot to present at the Coolest Projects event in 2018.
When thinking about ideas for what to make for Coolest Projects, Selin remembered how it felt to lose her dog. She wondered what it must be like when a blind person’s guide dog passes away, as that person loses their friend as well as their support. So Selin decided to make a robotic guide dog called IC4U. She contacted several guide dog organisations to find out how guide dogs are trained and what they need to be able to do so she could replicate their behaviour in her robot. The robot is voice-controlled so that people with impaired sight can interact with it easily.
Selin at Coolest Projects International in 2018.
Selin and her parents travelled to Coolest Projects International in Dublin, thanks to support from the CoderDojo Foundation. Accompanying them was Selin’s project IC4U, which became a judges’ favourite in the Hardware category. Selin enjoyed participating in Coolest Projects so much that she started designing her project for next year’s event straight away:
“When I returned back I immediately started working for next year’s Coolest Projects.”
Selin
Many of Selin’s tech projects share a theme: to help make the world a better place. For example, another robot made by Selin is the BB4All — a school assistant robot to tackle bullying. And last year, while she attended the Stanford AI4ALL summer camp, Selin worked with a group of young people to design a tech project to increase the speed and accuracy of lung cancer diagnoses.
Through her digital making projects, Selin wants to show how people can use robotics and AI technology to support people and their well-being. In 2021, Selin’s commitment to making these projects was recognised when she was awarded the Aspiring Teen Award by Women in Tech.
Listening to Selin, it is inspiring to hear how a person can use technology to express themselves as well as create projects that have the potential to do so much good. Selin acknowledges that sometimes the first steps can be the hardest, especially for girls interested in tech: “I know it’s hard to start at first, but interests are gender-free.”
“Be curious and courageous, and never let setbacks stop you so you can actually accomplish your dream.”
Selin
We have loved seeing all the wonderful projects that Selin has made in the years since she first designed a robot dog on paper. And it’s especially cool to see that Selin has also continued to work on her robot IC4U, the original project that led her to coding, Coolest Projects, and more. Selin’s robot has developed with its maker, and we can’t wait to see what they both go on to do next.
Help us celebrate Selin and inspire other young people to discover coding and digital making as a passion, by sharing her story on Twitter, LinkedIn, and Facebook.
Launched in 2013, Hour of Code is an initiative to introduce young people to computer science using fun one-hour tutorials. To date, over 100 million young people have completed an hour of code with it.
Although the Hour of Code website is accessible all year round, every December for Computer Science Education Week people worldwide run their own Hour of Code events. Each year we love seeing many Code Clubs, CoderDojos, and young people at home across the community complete their Hour of Code. You can register your 2022 Hour of Code event now to run between 5 and 11 December.
To support your event, we have pulled together a bumper set of our free coding projects, which can each be completed in just one hour. You will find these activities on the Hour of Code website.
There’s something for all ages and levels of experience, so put an hour aside and help young people make something fabulous with code:
Ages 7–11
Beginner
For younger creators new to coding, a Scratch project is a great place to start.
With our Space talk project, they can create a space scene with characters that ‘emote’ to share their thoughts or feelings using sounds, colours, and actions. Creators program the character emotes using Scratch blocks to control graphic effects, costume animation, and sound effects.
Alternatively, our Stress ball project lets them code an onscreen stress ball that reacts to user clicks. Creators use the Paint and Sound editors in Scratch to personalise a clickable stress ball, and they add Scratch blocks to control graphic effects, costume animation, and sound effects.
We love this fun stress ball example sent to us recently by young creator April from the United States:
Another great option is to use Code Club World, which is a free tool to help children who are new to coding.
Creators can develop a character avatar, design a T-shirt, make some music, and more.
Comfortable
For 7- to 11-year-olds who are more comfortable with block-based coding, our project Broadcasting spells is ideal to choose. With the project, they connect Scratch blocks to code a wand that casts spells turning sprites into toads, and growing and shrinking them. Creators use broadcast blocks to transform multiple sprites at once, and they create sound effects with the Sound editor in Scratch.
Ages 11–14
Beginner
We have three exciting projects for trying text-based coding during Hour of Code in this category. The first, Anime expressions, is one of our brand-new ‘Introduction to web development’ projects. With this project, young people create a responsive webpage with text and images for an anime drawing tutorial. They write HTML to structure the webpage and CSS styles to apply layout, colour palettes, and fonts.
For a great introduction to coding with Python, we have the project Hello world from our ‘Introduction to Python’ path. With this project, creators write Python text-based code to create an interactive program that shows text and emojis based on user input. They learn about variables as they use them to store text and numbers, and they learn about writing functions to organise code and do calculations, retrieve the current date and time, and make a customisable dice.
LED firefly is a fantastic physical making project in which young people use a Raspberry Pi Pico microcontroller and basic electronic components to create a blinking LED firefly. They program the LED’s light patterns with MicroPython code and activate it via a switch they make themselves using jumper wires.
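To give a flavour of the MicroPython involved, a blinking firefly might be coded something like this sketch (the pin numbers are illustrative rather than the project’s exact wiring):

from machine import Pin
import time

led = Pin(15, Pin.OUT)                   # LED on GPIO 15
switch = Pin(14, Pin.IN, Pin.PULL_DOWN)  # home-made jumper-wire switch

while True:
    if switch.value():   # switch closed: the firefly blinks
        led.toggle()
        time.sleep(0.5)
    else:
        led.off()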
Comfortable
For 11- to 14-year-olds who are already comfortable with HTML, the Flip treat webcards project is a fun option. With this, they create a webpage showing a set of cards that flip when a visitor’s mouse pointer hovers over them. Creators use CSS styling and animations to add interactivity, then they customise the cards with fancy fonts and colour gradients.
Young people who have already done some Python coding can try out our project Target practice. With this project they create a game, using the p5 graphics library to draw a colourful target, and writing code so that the player scores points by hitting the target’s rings with arrows. While they create the project, they learn about RGB colours, shape positioning with x and y coordinates, and decisions using if, else-if, and else code statements.
Ages 14+
Beginner
Our project Charting champions is a great introduction to data visualisation and analysis for coders aged 15 and older. With the project, they will discover the power of the Python programming language as they store Olympic medal data in lists and use the pygal library to create an interactive chart.
Comfortable
Teenage coders who feel comfortable with Python programming can use our project Solar system simulator to code an animated, interactive solar system model using the Python p5 graphics library. Their model will be interactive, as they’ll use dictionaries to store planet facts that display when a user clicks on an orbiting planet.
Coding for Hour of Code and beyond
Now is the time to register your Hour of Code event, then decide which project you’d like to support young people to create. You can download certificates for each of the creators from the Hour of Code certificates page.
And make sure to check out our project paths so you know what projects you can help the young people you support to code beyond this one hour of code.
We don’t just create activities so that other people can experience coding and digital making — we also get involved ourselves!
Recently, our teams who support the Code Club and CoderDojo networks got together to make LED fireflies. We are excited to get coding again as part of Hour of Code and Computer Science Education Week.
“In my vision, the child programs the computer and, in doing so, both acquires a sense of mastery over a piece of the most modern and powerful technology and establishes an intimate contact with some of the deepest ideas from science, from mathematics, and from the art of intellectual model building.” – Seymour Papert, Mindstorms: Children, Computers, And Powerful Ideas, 1980
We owe much of what we have learned about children learning to program to Seymour Papert (1928–2016), who was not only a great mathematician and computer scientist, but also an inspirational educationalist. He developed the theoretical approach to learning we now know as constructionism, which posits that learning takes place through building artefacts that have meaning and can be shared with others. Papert, together with others, developed the Logo programming language in 1967 to help children develop concepts in both mathematics and programming. He believed that programming could give children tangible and concrete experiences to support their acquisition of mathematical concepts. Educational programming languages such as Logo were widely used in both primary and secondary education settings during the 1980s and 90s. The links between mathematics and programming have thus been evident for many years, and we were very fortunate to be able to explore this topic with our research seminar guest speaker, Professor Dame Celia Hoyles of University College London.
Professor Dame Celia Hoyles
Dame Celia Hoyles is a huge celebrity in the world of mathematical education and programming. As well as authoring literally hundreds of academic papers on mathematics education, including on Logo programming, she has received a number of prestigious awards and honours, and has served as the Chief Advisor to the UK government on mathematics in school. For all these reasons, we were delighted to hear her present at a Raspberry Pi Foundation computing education research seminar.
Mathematics is a subject we all need to understand the basics of — it underpins much of our other learning and empowers us in daily life. Yet some mathematical concepts can seem abstract, and teachers have struggled over the years to help children understand them. Since programming includes the design, building, and debugging of artefacts, it is a great approach for making such abstract concepts come to life. It also enables the development of both computational and mathematical thinking, as Celia described in her talk.
Learning mathematics through Scratch programming
Celia and a team* at University College London developed a curriculum initiative called ScratchMaths to teach carefully selected mathematical concepts through programming (funded by the Education Endowment Foundation in 2014–2018). ScratchMaths is for use in upper primary school (age 9–11) over a two-year period.
In the first year, pupils take three computational thinking modules, and in the second year, they move to three more mathematical thinking modules. All the ScratchMaths materials were designed around a pedagogical framework called the 5Es: explore, envisage, explain, exchange, and bridge. This enables teachers to understand the structure and sequencing of the materials as they use them in the classroom:
Explore: Investigate, try things out yourself, debug in reaction to feedback
Envisage: Have a goal in mind, predict outcome of program before trying
Explain: Explain what you have done, articulate reasons behind your approach to others
Exchange: Collaborate & share, try to see a problem from another’s perspective as well as defend your own approach and compare with others
bridgE: Make explicit links to the mathematics curriculum
Teachers in the ScratchMaths project participated in professional development (two days per module) to enable them to understand the materials and the pedagogical approach.
At the end of the project, external evaluators measured the children’s learning and found a statistically significant increase in computational thinking skills after the first year, but no difference between an intervention group and a control group in the mathematical thinking outcomes in the second year (as measured by the national mathematics tests at that age).
Celia discussed a number of reasons for these findings. She also drew out the positive perspective that children in the trial learned two subjects at the same time without any detriment to their learning of mathematics. Covering two subjects and drawing the links between them without detriment to the core learning is potentially a benefit to schools who need to fit many subjects into their teaching day.
As at all our research seminars, participants had many questions for our speaker. Although the project was designed for primary education, where it’s more common to learn subjects together across the curriculum, several questions revolved around the project’s suitability for secondary school. It’s interesting to reflect on how a programme like ScratchMaths might work at secondary level.
Should computing be taught in conjunction or separately?
Teaching programming through mathematics, or vice versa, is established practice in some countries. One example comes from Sweden, where computing and programming are taught across different subject areas, including mathematics: “through teaching pupils should be given opportunities to develop knowledge in using digital tools and programming to explore problems and mathematical concepts, make calculations and to present and interpret data”. In England, conversely, we have a discrete computing curriculum, and an educational system that separates subjects out so that it is often difficult for children to see overlap and contiguity. However, having the focus on computing as a discrete subject gives enormous benefits too, as Celia outlined at the beginning of her talk, and it opens up the potential to give children an in-depth understanding of the whole subject area over their school careers. In an ideal world, perhaps we would teach programming in conjunction with a range of subjects, thus providing the concrete realisation of abstract concepts, while also having discrete computing and computer science in the curriculum.
In our current context of a global pandemic, we are continually seeing the importance of computing applications, for example computer modelling and simulation used in the analysis of data. This talk highlighted the importance of learning computing per se, as well as the mathematics one can learn through integrating these two subjects.
Celia is a member of the National Centre of Computing Education (NCCE) Academic Board, made up of academics and experts who support the teaching and learning elements of the NCCE, and we enjoy our continued work with her in this capacity. Through the NCCE, the Raspberry Pi Foundation is reaching thousands of children and educators with free computing resources, online courses, and advanced-level computer science materials. Our networks of Code Clubs and CoderDojos also give children the space and freedom to experiment and play with programming and digital making in a way that is concordant with a constructionist approach.
Next up in our seminar series
If you missed the seminar, you can find Celia’s presentation slides and a recording of her talk on our research seminars page.
In our next seminar on Tuesday 16 June at 17:00–18:00 BST / 12:00–13:00 EDT / 9:00–10:00 PDT / 18:00–19:00 CEST, we’ll welcome Jane Waite, Teaching Fellow at Queen Mary University of London. Jane will be sharing insights about Semantic Waves and unplugged computing. To join the seminar, simply sign up with your name and email address and we’ll email you the link and instructions. If you attended Celia’s seminar, the link remains the same.
*The ScratchMaths team are:
Professor Dame Celia Hoyles (Mathematics) & Professor Richard Noss (Mathematics), UCL Knowledge Lab
Professor Ivan Kalas (Computing), Comenius University, Bratislava, Slovakia
Dr Laura Benton (Computing) & Piers Saunders (Mathematics), UCL Knowledge Lab
Professor Dave Pratt (Mathematics), UCL Institute of Education
Programming an Arduino to do simple things like turn on an LED or read a sensor is easy enough via the official IDE. However, think back to your first experiences with this type of hardware. While rewarding, getting everything set up correctly was certainly more of a challenge, requiring research that you now likely take for granted.
To assist with these first steps of a beginner’s hardware journey, researchers at KAIST in South Korea have come up with HeyTeddy, a “conversational test-driven development [tool] for physical computing.”
As seen in the video below, HeyTeddy’s voice input is handled by an Amazon Echo Dot, which passes these commands through the cloud to a Raspberry Pi. The system then interacts with the physical hardware on a breadboard using an Arduino Uno running Firmata firmware, along with a 7” 1024 x 600 LCD touchscreen for the GUI. Once programmed, code can be exported and used on the board by itself.
HeyTeddy is a conversational agent that allows users to program and execute code in real time on an Arduino device without writing actual code, instead operating it through dialogue. This conversation can be based either on voice or on text (through a web chat). Commands spoken to HeyTeddy are parsed, interpreted, and executed in real time, resulting in physical changes to the hardware. For example, the “write high” command configures an I/O pin to behave as a digital output with its internal state set to high (e.g., a 5V logic level), making it possible to drive an LED. Hence, the user does not need to write any code, compile it, deal with errors, or manually upload it to the hardware.
Furthermore, HeyTeddy supervises the user’s choices, preventing incorrect logic (e.g., writing an analog value to a digital pin), guiding the user through each step needed to assemble the circuit, and providing an opportunity to test individual components through separate unit tests without interrupting the workflow (i.e., TDD functionalities). Finally, the user has the option of exporting the issued commands as a written code for Arduino (i.e., an Arduino sketch in C++, ready for upload).
The Ifs: Coding for kids, reading skills not required
Arduino Team — September 11th, 2019
Understanding how computers work and knowing how to code will be important skills for future generations, and if you’d like to get your kids started on this journey (potentially before they can even read), the Ifs present an exciting new option.
The Ifs are a series of four character blocks, each with its own abilities, such as making sounds, moving, or sensing light and darkness.
Children can program the blocks to accomplish tasks based on instructions that snap onto the top of each using magnets, and the whole “family” can communicate and work together to accomplish more advanced actions as a team.
As outlined in more detail on this project page, the devices were developed using Arduino technology, and you can sign up here to be notified when they’re ready for crowdfunding.
The Ifs are full of sensors and actuators but they need some instructions in order to function.
Programming is as simple as placing physical blocks in their heads with the help of magnets. No screens are involved. Each block has a different image serving as an intuitive symbol to represent an instruction. This makes the game suitable for children from the age of three, even before learning to read or write.
All you need are the different color pieces that are placed on their heads. These pieces are instructions that combine like code, enabling everything from lighting the blocks up when it’s dark to making them communicate with each other. This lets kids play with loops, statements, and algorithms while also inventing their own stories. Their imagination is the only limit.
Twitch Extensions create new ways to bring Twitch streamers and viewers together, to create amazing content, and to connect games to a live broadcast. But like any new technology, it can feel overwhelming to start using it.
I’m Breci, a member of the Twitch Developer community. I currently work as a full-stack developer, and I specialize in interactive experiences.
In this article, I will share what I’ve learned when making Twitch Extensions, how they are made, and how you can use Twitch tools to reduce hosting costs, improve scaling, and engage streamers and viewers in your work.
If this is your first time working with Twitch Extensions, you should check out the Getting Started guide first and install the Developer Rig.
Extensions are webpages
If you look at the Getting Started page for Twitch Extensions, you will see this information: “Extensions are programmable, interactive overlays and panels, which help broadcasters interact with viewers.”
To put it differently: “Extensions are static webpages displayed on top of, or entirely below, the video.”
With that in mind, you can easily create your first Extension using HTML/CSS/JS, like any other website. And, of course, you can use frameworks like React, Vue, Angular, etc. to build your Extensions.
Please note that the pages need to be static webpages; you can’t use server-side rendering.
You can find a minimal Twitch Extension here:
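As a rough sketch (ours, not the linked example), a minimal panel page needs little more than the helper script and some HTML:

<!DOCTYPE html>
<html>
  <body>
    <h1>Hello, Twitch!</h1>
    <!-- the Twitch Extension helper, required on every Extension page -->
    <script src="https://extension-files.twitch.tv/helper/v1/twitch-ext.min.js"></script>
    <script>
      window.Twitch.ext.onAuthorized(auth => {
        document.querySelector('h1').textContent =
          'Hello, channel ' + auth.channelId + '!';
      });
    </script>
  </body>
</html>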
Use the right Extension types
There are three types of Extensions: panel, video component, and video overlay.
Each Extension type has certain advantages and limitations, so consider these as you think about what type of Extension you want to build.
Panel
The panel Extension lives under the stream and can be popped out if the viewer wants to.
To see them, viewers have to scroll down.
They should mainly be used for informative content that requires little interaction or content totally separated from the stream.
Three can be active at the same time.
Video component
The video component Extension lives on top of the stream and inside of an Extensions Sidebar on the channel page.
They can take up all the interactive space on the video player, or less if you want, and they have a transparent black background.
These Extensions will be minimized when a viewer joins the stream.
They should be used for specific use cases that bring complementary or interactive content.
Two can be active at the same time.
Video overlay
The video overlay Extension lives on top of the stream; it covers the whole video player.
These Extensions will be directly visible when the viewer joins the stream.
They should be used for content that will go hand in hand with the broadcast like game integrations.
One can be active at a time.
Choosing the right type impacts engagement. Moving my Extension Live Request from a panel to a video component doubled the engagement, because it was meant to be interactive and complementary to the stream.
The JavaScript helper is your new best friend
When building Twitch Extensions, you will need to use the JavaScript helper.
It is a small library, provided by Twitch, that must be included in your Extension.
With it, you can access all the useful information about the current user and your Extension. You can also trigger functionality like asking the user to share their identity, follow a channel, or start a Bits transaction. You can even receive messages in the Extension or react to changes in the stream, allowing you to adapt your Extension’s content appropriately.
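For instance, a hypothetical snippet using the helper might look like this:

const twitch = window.Twitch.ext;

twitch.onAuthorized(auth => {
  // auth carries the channel ID, the viewer's opaque ID, and a JWT token
  console.log('Running on channel', auth.channelId);
});

twitch.onContext(context => {
  // react to changes in the stream or player, e.g. the viewer's theme
  console.log('Player theme is', context.theme);
});

// typically wired to a button: ask the viewer to share their identity
twitch.actions.requestIdShare();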
Store data with the Configuration Service
Let’s talk about saving the data. For example, you want to display some messages that can be customized by the streamer.
You could use a database for this, but you will have to set up an API and the database as well as manage scaling and the costs associated with it.
Using the Twitch Configuration Service in my Extension Quests allowed me to remove all costs involved in the project and focus on the concept without having to worry about costs.
The Configuration Service is a free service that allows developers to store data hosted by Twitch and access it directly from their Extension without the need for a backend.
This service offers three segments where you can store data:
broadcaster: data is shared only on the channel of the broadcaster and can be set from the JavaScript helper by the broadcaster or from an API endpoint. Each channel has its own segment.
developer: data is shared only on the channel of the broadcaster and can only be set from an API endpoint. Each channel has its own segment.
global: data is shared with all the channels using the Extension and can only be set from an API endpoint. There is only one for all the channels of your Extension.
You can store up to 5KB of data per segment.
For global data, like maintenance status, or certain configurations, you should use the global segment.
If you want to handle more data, you will need to have your own data storage with a backend in front of it to protect it.
A broadcaster with a bit of programming experience could easily change the content of their broadcaster segment. To set up things like exclusive features, you should use the developer segment instead.
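A hypothetical round trip through the configuration API might look like this:

const twitch = window.Twitch.ext;

// Viewer side: read the broadcaster segment once Twitch has fetched it.
twitch.configuration.onChanged(() => {
  const saved = twitch.configuration.broadcaster;
  const settings = saved ? JSON.parse(saved.content) : {};
  console.log('Streamer message:', settings.message);
});

// Broadcaster side (e.g. the config page): save segment, version, and content.
twitch.configuration.set('broadcaster', '1', JSON.stringify({ message: 'Hello!' }));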
PubSub messages help with scaling
You may be wondering: How can I start a poll when the broadcaster hits a button? Do I have to add a WebSocket connection for every user?
You could… Or you can just use the Twitch Extension PubSub. It allows you to send messages to the users of your Extension without having to manage all the scaling of WebSocket or do massive polling from each client. Twitch already manages a PubSub system for you.
With this system, you can send PubSub messages to two targets:
broadcast: to all the users of a channel.
global: to all the users of your Extension. These messages can only be sent from an Extension Backend Service (EBS), which is simply the backend of your application; you create and host it yourself if you need one.
As an example, with this system a broadcaster could press a button that triggers a PubSub message to all viewers and makes text appear on the stream.
Note: You can only send up to 5KB of data and one PubSub per second, per channel.
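Sketching the poll example from above with the helper (the message shape here is our own invention):

const twitch = window.Twitch.ext;

// Broadcaster side: pressing a button starts a poll for everyone watching.
// Only the broadcaster's context may send to the 'broadcast' target.
twitch.send('broadcast', 'application/json',
  JSON.stringify({ type: 'POLL_START', question: 'Which map next?' }));

// Viewer side: every viewer on the channel receives the message.
twitch.listen('broadcast', (target, contentType, message) => {
  const data = JSON.parse(message);
  if (data.type === 'POLL_START') {
    // render the poll UI
  }
});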
Engage viewers with Bits
Bits are a virtual good that viewers can use to celebrate and support streamers on Twitch. As a developer, you can enable your Extension so that viewers can use Bits for everything from getting on leaderboards to playing sounds or messages directly on stream, and even influencing gameplay.
Each time a viewer uses Bits in your Extension, there is an opportunity to engage the stream by displaying visual feedback and showing viewers more ways they can interact with the stream.
You can listen to Bits transactions in two ways:
Inside the Extension
twitch.ext.bits.onTransactionComplete(transaction => {
  if (transaction.initiator === 'CURRENT_USER') {
    // do personalized feedback
  } else {
    // do generic feedback
  }
});
From your EBS, using Webhooks
You can then send a PubSub message to your Extension or to a video source overlay.
Note: Bits transaction broadcast does not count in the Twitch Extension PubSub rate.
Let Streamers choose the Bits value
There is no “premade” way to allow streamers to set their own Bits value for a feature in the Extension.
When using Bits in your Extension, you have to create Bits products, which define the possible Bits transactions in your Extension. But you can create several of them to allow multiple values.
For example, you can create:
value100: 100 Bits
value250: 250 Bits
value500: 500 Bits
And the streamer can simply choose one of them. You could have 50 possible values or two; it is up to you.
Sometimes you might need more information than what the JavaScript helper provides. Since your Extension is a webpage, you can call Twitch’s API directly from it.
Do you want to deactivate your panel Extension when the stream is offline? Check it directly from the API on your Extension using the Streams endpoint with the channelId of the current channel.
window.Twitch.ext.onAuthorized(auth => {
  const url = `https://api.twitch.tv/helix/streams?user_id=${auth.channelId}`;
  const fetchData = {
    method: 'GET',
    headers: { 'Client-ID': 'YourExtensionClientIdHere' }
  };
  fetch(url, fetchData)
    .then(resp => resp.json()) // transform the response into JSON
    .then(data => {
      if (data.data && data.data.length) {
        // the stream is live: display the Extension
      } else {
        // show an offline message
      }
    });
});
If you have an EBS, make sure to always validate the data sent by your users on the server side.
Help viewers notice your Extension
Twitch Extensions are fairly new, so some viewers don’t yet know how to interact with them.
For video overlay Extensions, you might want to engage viewers by showing them they can interact with something on the stream. The best time to do this is when viewers move the mouse on top of the video player, because it means the viewer might not be 100 percent focused on the content in the stream. This is a great opportunity to create a nice call to action and/or animations to engage and educate the viewer about the Extension.
One way to do this is to use the arePlayerControlsVisible property from the onContext callback:
twitch.ext.onContext(context => {
  if (context.arePlayerControlsVisible) {
    // display information
  } else {
    // hide information
  }
});
Titatitutu playing Trackmania, showing an example of helping viewers notice an Extension
Your Extension can — and should — be lazy
Your Extension will not be the primary content of the stream; it will complement it. Viewers will first come for the streamer, then use your work.
With that in mind, you don’t have to display your Extension as fast as possible like a regular website. There will be a bit of buffering before the live feed starts, and viewers will focus first on the content of the stream, then on your Extension. This gives you a lot of time to gather all the necessary data and display it nicely for the viewers.
I recommend you check out this talk by Karl Patrick from TwitchCon 2018 Developer Day to learn more about how to set up design patterns for Twitch scale.
Conclusion
I hope my experiences and learnings building Twitch Extensions help you get started. Extensions are a new paradigm in live interactive content, and I hope to see many of you joining this fun journey!
Flowboard provides visual learning environment for coding
Arduino Team — June 10th, 2019
Embedded programming using the Arduino IDE has become an important part of STEM education, and while more accessible than ever before, getting started still requires some coding and basic electronics skills. To explore a different paradigm for starting out on this journey, researchers have developed Flowboard to facilitate visual flow-based programming.
This device consists of an iPad Pro and a set of breadboards on either side. Users can arrange electrical components on these breadboards, changing the flow-based program on the screen as needed to perform the desired actions. Custom ‘switchboard’ hardware, along with an Arduino Uno running a modified version of Firmata, communicates with the iPad editor via Bluetooth.
With maker-friendly environments like the Arduino IDE, embedded programming has become an important part of STEM education. But learning embedded programming is still hard, requiring both coding and basic electronics skills. To understand if a different programming paradigm can help, we developed Flowboard, which uses Flow-Based Programming (FBP) rather than the usual imperative programming paradigm. Instead of command sequences, learners assemble processing nodes into a graph through which signals and data flow. Flowboard consists of a visual flow-based editor on an iPad, a hardware frame integrating the iPad, an Arduino board and two breadboards next to the iPad, letting learners connect their visual graphs seamlessly to the input and output electronics. Graph edits take effect immediately, making Flowboard a live coding environment.
The latest book from Raspberry Pi Press, An Introduction to C & GUI Programming, is now available. Author Simon Long explains how it came to be written…
Learning C
I remember my first day in a ‘proper’ job very well. I’d just left university, and was delighted to have been taken on by a world-renowned consultancy firm as a software engineer. I was told that most of my work would be in C, which I had never used, so the first order of business was to learn it.
My manager handed me a copy of Kernighan & Ritchie’s The C Programming Language, pointed to a terminal in the corner, said ‘That’s got a compiler. Off you go!’, and left me to it. So, I started reading the book, which is affectionately known to most software engineers as ‘K&R‘.
I didn’t get very far. K&R is basically the specification of the C language. Dennis Ritchie, the eponymous ‘R’, invented C, and while the book he helped write is an excellent reference guide, it is not a great introduction for a beginner. Like most people who know their subject inside out, the authors tend to assume that you know more than you do, so reading the book when you don’t know anything about the language at all is a little frustrating. I do know people who have learned C from K&R, and they have my undying respect!
I ended up learning C on the job as I went along; I looked at other people’s code, hacked stuff together, worked out why things didn’t work, asked for help from my colleagues, made a lot of mistakes, and gradually got the hang of it. I found only one book that was helpful for a beginner: it was called C For Yourself, and was actually one of the manuals for the long-extinct Microsoft QuickC compiler. That book is now impossible to find, so I’ve always had to tell people that the best book for learning C as a beginner is ‘C For Yourself, but you won’t be able to find a copy!’
Writing An Introduction to C & GUI Programming
When I embarked on this project, the editor of The MagPi and I were discussing possible series for the magazine, and we thought about creating a guide to writing GUI applications in C — that’s what I do in my day job at Raspberry Pi, so it seemed a logical place to start. We realised that the reader would need to know C to benefit from the series, and they wouldn’t be able to find a copy of C For Yourself. We decided that I ought to solve that problem first, so I wrote the original beginners’ guide to C series for The MagPi.
(At this point, I should stress that the series is aimed at absolute beginners. I freely admit that I have simplified parts of the language so that the reader does not have to absorb as much in one go. So yes, I do know about returning a success/fail code from a program, but beginners really don’t need to learn about that in the first chapter — especially when many will never need to write a program which does it. That’s why it isn’t explained until Chapter 9.)
So, the beginners’ guide to C came first, and I have now got round to writing the second part, which was what I’d planned to write all along. The section on GUIs describes how to write applications using the GTK toolkit, which is used for most of the Raspberry Pi Desktop and its associated applications. GTK is very powerful, and allows you to write rich graphical user interfaces with relatively few lines of code, but it’s not the most intuitive for beginners. (Much like C itself!) The book walks you through the basics of creating a window, putting widgets on it, and making the widgets do useful things, and gets you to the point where you know enough to be able to write an application like the ones I have written for the Raspberry Pi Desktop.
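To give a flavour of what that looks like (this is our own minimal sketch, not an excerpt from the book), a bare-bones GTK application in C can be this short:

#include <gtk/gtk.h>

int main (int argc, char *argv[])
{
    gtk_init (&argc, &argv);

    // create a top-level window with a label inside it
    GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title (GTK_WINDOW (window), "Hello");
    g_signal_connect (window, "destroy", G_CALLBACK (gtk_main_quit), NULL);

    GtkWidget *label = gtk_label_new ("Hello, world!");
    gtk_container_add (GTK_CONTAINER (window), label);

    gtk_widget_show_all (window);
    gtk_main ();   // hand control to the GTK main loop
    return 0;
}

Compile it with: gcc hello.c -o hello `pkg-config --cflags --libs gtk+-3.0`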
It then seemed logical to bring the two parts together in a single volume, so that someone with no experience of C has enough information to go from a standing start to writing useful desktop applications.
I hope that I’ve achieved that — and if nothing else, I hope that I’ve written a book which is a bit more approachable for beginners than K&R!
Alex interjects to state the obvious: Basically, what we’re saying here is that there’s no reason for you not to read Simon’s book. Oh, and it feels really nice too.
I’m a big fan of small code changes that can have a large impact. This may seem like an obvious thing to state, but let me explain:
These types of changes often involve diving into and understanding things one is not familiar with.
Even with the most well-factored code, there is a maintenance cost to each optimization you add, and it’s usually (although not always) pretty linear with the number of lines of code you end up adding or changing.
We recently rolled out a small change that reduced the CPU utilization of our API frontend servers at Twitch by ~30% and reduced overall 99th percentile API latency during peak load by ~45%.
This blog post is about the change, the process of finding it and explaining how it works.
Setting the stage
We have a service at Twitch called Visage, that functions as our API frontend. Visage is the central gateway for all externally originating API traffic. It is responsible for a bunch of things, from authorization to request routing, to (more recently) server-side GraphQL. As such, it has to scale to handle user traffic patterns that are somewhat out of our control.
As an example, a common traffic pattern we see is a “refresh storm.” This occurs when a popular broadcaster’s stream drops due to a blip in their internet connectivity. In response, the broadcaster restarts the stream. This usually causes the viewers to repeatedly refresh their pages, and suddenly we have a lot more API traffic to deal with.
Visage is a Go application (built with Go 1.11 at the time of this change) that runs on EC2 behind a load balancer. Being on EC2 it scales well horizontally, for the most part.
However, even with the magic of EC2 and Auto Scaling groups, we still have the problem of dealing with very large traffic spikes. During refresh storms, we frequently have surges of millions of requests over a few seconds, on the order of 20x our normal load. On top of this, we would see API latency degrade significantly when our frontend servers were under heavy load.
One approach to handle this is to keep your fleet permanently over-scaled, but this is wasteful and expensive. To reduce this ever-increasing cost, we decided to spend some time searching for some low hanging fruit that would improve per-host throughput, as well as provide more reliable per-request handling, when hosts were under load.
Scouting the deck
Luckily, we run pprof in our production environments, so getting at real production traffic profiles becomes trivial. If you are not running pprof, I would highly encourage you to do so. The profiler, for the most part, has very minimal CPU overhead. The execution tracer can have a small overhead, but it is still small enough that we happily run it in production for a few seconds each hour.
So after taking a look at our Go application’s profiles, we made the following observations:
At steady state, our application was triggering ~8–10 garbage collection (GC) cycles a second (400–600 per minute).
>30% of CPU cycles were being spent in function calls related to GC
During traffic spikes the number of GC cycles would increase
Our heap size on average was fairly small (<450 MiB)
If you haven’t guessed it already, the improvements we made relate to the performance of garbage collection in our application. Before I get into the improvements, below is a quick primer/recap on what GCs are and what they do. Feel free to skip ahead if you’re well versed in the concepts.
What is a garbage collector (GC)?
In modern applications, there are generally two ways to allocate memory: the stack and the heap. Most programmers are familiar with the stack from the first time writing a recursive program that caused the stack to overflow. The heap, on the other hand, is a pool of memory that can be used for dynamic allocation.
Stack allocations are great in that they only live for the lifespan of the function they are part of. Heap allocations, however, will not automatically be deallocated when they go out of scope. To prevent the heap from growing unbound, we must either explicitly deallocate, or in the case of programming languages with memory management (like Go), rely on the garbage collector to find and delete objects that are no longer referenced.
Generally speaking in languages with a GC, the more you can store on the stack the better, since these allocations are never even seen by the GC. Compilers use a technique called escape analysis to determine if something can be allocated on the stack or must be placed on the heap.
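As a small illustration (ours, not from the original post), the Go compiler keeps the first of these values on the stack but moves the second to the heap:

package main

// s never outlives the call, so it can live on the stack.
func sum(a, b int) int {
	s := a + b
	return s
}

// c's address is returned, so c outlives the call and escapes to the heap.
func newCounter() *int {
	c := 0
	return &c
}

func main() {
	_ = sum(1, 2)
	_ = newCounter()
}

Building with go build -gcflags='-m' prints the compiler’s escape-analysis decisions, including lines like “moved to heap: c”.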
In practice, writing programs that force the compiler to only allocate on the stack can be very limiting, and so in Go, we leverage its wonderful GC to do the work of keeping our heap clean.
Go’s GC
GCs are complex pieces of software so I’ll do my best to keep this relevant.
Since v1.5, Go has incorporated a concurrent mark-and-sweep GC. This type of GC, as the name implies, has two phases: mark and sweep. The “concurrent” just means that it does not stop-the-world (STW) for the entire GC cycle, but rather runs mostly concurrently with our application code. During the mark phase, the runtime will traverse all the objects that the application has references to on the heap and mark them as still in use. This set of objects is known as live memory. After this phase, everything else on the heap that is not marked is considered garbage, and during the sweep phase, will be deallocated by the sweeper.
To summarize, these are the two terms used below:
Heap size — all allocations made on the heap; some useful, some garbage.
Live memory — all allocations currently referenced by the running application; not garbage.
It turns out that on modern operating systems, sweeping (freeing memory) is a very fast operation, so the GC time of Go’s mark-and-sweep GC is largely dominated by the mark phase, not sweeping.
Marking involves traversing all the objects the application is currently pointing to, so the time is proportional to the amount of live memory in the system, regardless of the total size of the heap. In other words, having extra garbage on the heap will not increase mark time, and therefore will not significantly increase the compute time of a GC cycle.
Based on all of the above, it should seem reasonable that less frequent GC’ing means less marking, which means less CPU spent over time. But what is the trade-off? Memory. The longer the runtime waits before GC’ing, the more garbage accumulates in the system’s memory.
As we noted earlier, though, the Visage application, which runs on its own VM with 64 GiB of physical memory, was GC’ing very frequently while using only ~400 MiB of physical memory. To understand why, we need to dig into how Go addresses the GC frequency/memory trade-off, and discuss the pacer.
Pacer
The Go GC uses a pacer to determine when to trigger the next GC cycle. Pacing is modeled as a control problem: the runtime tries to find the right time to trigger a GC cycle so that it hits its target heap size goal. Go’s default pacer will try to trigger a GC cycle every time the heap size doubles. It does this by setting the next heap trigger size during the mark termination phase of the current GC cycle. So after marking all the live memory, it can decide to trigger the next GC when the total heap size is 2x the current live set. The 2x value comes from GOGC, a variable the runtime uses to set the trigger ratio.
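As a concrete illustration of the pacer’s arithmetic (a sketch with hypothetical numbers; debug.SetGCPercent is the runtime’s programmatic knob for GOGC):

```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// SetGCPercent returns the previous GOGC value; we immediately
	// restore it so the runtime is left untouched.
	gogc := debug.SetGCPercent(100)
	debug.SetGCPercent(gogc)

	// With GOGC=100 (the default), the next GC is triggered when the
	// heap reaches roughly live * (1 + GOGC/100), i.e. 2x the live set.
	liveMiB := 450.0 // hypothetical live set after marking
	trigger := liveMiB * (1 + float64(gogc)/100)
	fmt.Printf("GOGC=%d: next GC at ~%.0f MiB of heap\n", gogc, trigger)
}
```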
The pacer in our case was doing a superb job of keeping garbage on our heap to a minimum, but it was coming at the cost of unnecessary work, since we were only using ~0.6% of our system’s memory.
Enter the ballast
Ballast — Nautical. any heavy material carried temporarily or permanently in a vessel to provide desired draft and stability. — source: dictionary.com
The ballast in our application is a large allocation of memory that provides stability to the heap.
We achieve this by allocating a very large byte array as our application starts up:
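The original snippet isn’t reproduced here, but a minimal sketch looks something like this (runtime.KeepAlive is one way to make sure the reference isn’t optimized away):

```go
package main

import "runtime"

func main() {
	// Create a large heap allocation of 10 GiB. We never read from or
	// write to it; simply holding the reference keeps it live.
	ballast := make([]byte, 10<<30)

	// ... start servers and run the application as normal ...

	// Keep the ballast reachable for the lifetime of main.
	runtime.KeepAlive(ballast)
}
```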
Reading the above code, you may have two immediate questions:
Why on earth would you do that?
Won’t this use up 10 GiB of my precious RAM?
Let’s start with 1. Why on earth would you do that? As noted earlier, the GC will trigger every time the heap size doubles. The heap size is the total size of allocations on the heap. Therefore, if a ballast of 10 GiB is allocated, the next GC will only trigger when the heap size grows to 20 GiB. At that point, there will be roughly 10 GiB of ballast + 10 GiB of other allocations.
When the GC runs, the ballast will not be swept as garbage, since we still hold a reference to it in our main function and it is therefore considered part of the live memory. Since most of the allocations in our application only exist for the short lifetime of an API request, most of the 10 GiB of other allocations will get swept, reducing the heap back down to just over ~10 GiB (i.e., the 10 GiB of ballast plus the allocations of whatever in-flight requests are considered live memory). Now, the next GC cycle will occur when the heap size (currently just over 10 GiB) doubles again.
So in summary, the ballast increases the base size of the heap so that our GC triggers are delayed and the number of GC cycles over time is reduced.
If you are wondering why we use a byte array for the ballast, this is to ensure that we only add one additional object to the mark phase. Since a byte array doesn’t have any pointers (other than the object itself), the GC can mark the entire object in O(1) time.
Rolling out this change worked as expected — we saw ~99% reduction in GC cycles:
Log base 2 scale graph showing GC cycles per minute
So this looks good, but what about CPU utilization?
Visage application CPU utilization
The green sinusoidal CPU utilization metric is due to the daily oscillations of our traffic. One can see the step down after the change.
A ~30% reduction in CPU per box means that, without looking any further, we could scale down our fleet by 30%. However, we also care about API latency; more on that later.
As mentioned above, the Go runtime does provide an environment variable GOGC that allows a very coarse tuning of the GC pacer. This value controls the ratio of growth the heap can experience before the GC is triggered. We opted against using this, as it has some obvious pitfalls:
The ratio itself is not important to us; the amount of memory we use is.
We would have to set the value very high to get the same effect as the ballast, making the setting susceptible to small changes in live heap size (see the sketch after this list).
Reasoning about the live memory and its rate of change is not easy; thinking about total memory used is simple.
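To make the second pitfall concrete, here is a back-of-the-envelope sketch (all numbers hypothetical):

```go
package main

import "fmt"

func main() {
	const (
		MiB = float64(1 << 20)
		GiB = float64(1 << 30)
	)

	// To emulate a 10 GiB trigger with GOGC alone, we would need
	// live * (1 + GOGC/100) ≈ 10 GiB. With ~450 MiB live:
	live := 450 * MiB
	gogc := (10*GiB/live - 1) * 100
	fmt.Printf("required GOGC ≈ %.0f\n", gogc) // prints ≈ 2176

	// If the live set merely doubles to 900 MiB, that same GOGC now
	// delays the trigger to ~20 GiB, which is why the setting is so
	// sensitive to small changes in live heap size.
	fmt.Printf("trigger at 900 MiB live ≈ %.1f GiB\n",
		900*MiB*(1+gogc/100)/GiB)
}
```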
For those interested, there is a proposal to add a target heap size flag to the GC which will hopefully make its way into the Go runtime soon.
Now on to 2: won’t this use up 10 GiB of my precious RAM? I’ll put your mind at ease: no, it won’t, unless you deliberately touch the memory. Memory in ’nix (and even Windows) systems is virtually addressed and mapped through page tables by the OS. When the above code runs, the array the ballast slice points to is allocated in the program’s virtual address space. Only if we attempt to read from or write to the slice will page faults occur, causing the physical RAM backing the virtual addresses to be allocated.
We can easily confirm this with the following trivial program:
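A sketch of such a program (the original isn’t reproduced here; the allocation size matches the numbers discussed below):

```go
package main

import (
	"math"
	"time"
)

// Hold the allocation in a package-level variable so the compiler
// cannot optimize it away and the GC always considers it live.
var ballast = make([]byte, 100<<20) // 100 MiB, never read or written

func main() {
	// Sleep (practically) forever so we can inspect the process.
	<-time.After(time.Duration(math.MaxInt64))
}
```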
We’ll run the program and then inspect with ps:
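Something along these lines, assuming the binary is called ballast (the output below is illustrative, consistent with the numbers discussed next):

```
$ ps -o pid,vsz,rss,min_flt,comm -p "$(pgrep ballast)"
  PID    VSZ   RSS MINFL COMMAND
 4102 108552  5128  1242 ballast
```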
This shows just over 100 MiB of memory has been allocated virtually to the process (the Virtual SiZe, VSZ), while only ~5 MiB resides in the resident set (the Resident Set Size, RSS), i.e., physical memory.
Now let’s modify the program to write to half of the underlying byte array backing the slice:
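Again as a sketch, touching the first half of the array so those pages fault in:

```go
package main

import (
	"math"
	"time"
)

var ballast = make([]byte, 100<<20) // 100 MiB

func init() {
	// Write to the first half of the array; every page we touch
	// faults in physical memory to back it.
	for i := 0; i < len(ballast)/2; i++ {
		ballast[i] = byte(i)
	}
}

func main() {
	<-time.After(time.Duration(math.MaxInt64))
}
```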
Again inspecting with ps:
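Illustrative output, consistent with the result described below:

```
$ ps -o pid,vsz,rss,min_flt,comm -p "$(pgrep ballast)"
  PID    VSZ   RSS MINFL COMMAND
 4131 108552 56400 14021 ballast
```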
As expected, half of the byte array is now counted in the RSS, occupying physical memory. The VSZ is unchanged, since the same size of virtual allocation exists in both programs.
For those interested, the MINFL column shows the number of minor page faults, i.e., page faults the process incurred that could be satisfied without loading pages from disk. If the OS manages to allocate our physical memory nicely and contiguously, each page fault can map in more than one page of RAM, reducing the total number of page faults that occur.
So as long as we don’t read or write to the ballast, we can be assured that it will remain on the heap as a virtual allocation only.
What about the API latency?
As mentioned above, we saw an API latency improvement (especially during high load) as a result of the GC running less frequently. Initially, we thought this might be due to a decrease in GC pause time, which is the amount of time the GC actually stops the world during a GC cycle. However, the GC pause times before and after the change were not significantly different. Furthermore, our pause times were on the order of single-digit milliseconds, not the hundreds of milliseconds of improvement we saw at peak load.
To understand where this latency improvement came from, we need to talk a bit about a feature of the Go GC called assists.
GC assists
GC assists put the burden of memory allocation during a GC cycle on the goroutine responsible for the allocation. Without this mechanism, it would be impossible for the runtime to prevent the heap from growing unbounded during a GC cycle.
Since Go already has a background GC worker, the term assist refers to our goroutines assisting the background worker, specifically with the mark work.
To understand this a bit more, let’s take an example:
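For instance, a sketch of an everyday allocation (buildResponse is a hypothetical function; the slice escapes because it is returned):

```go
// buildResponse allocates a payload on the heap; the make call below
// is compiled down to runtime.makeslice and, ultimately,
// runtime.mallocgc.
func buildResponse(n int) []byte {
	payload := make([]byte, n)
	// ... fill payload ...
	return payload
}
```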
When this code is executed, through a series of symbol conversions and type checking, the goroutine makes a call to runtime.makeslice, which finally ends up with a call to runtime.mallocgc to allocate some memory for our slice.
Looking inside the runtime.mallocgc function shows us the interesting code path.
Note that I’ve removed most of the function and am showing only the relevant parts:
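An abridged excerpt, paraphrased from the Go runtime’s malloc.go (details vary between Go versions):

```go
func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
	// ...
	var assistG *g
	if gcBlackenEnabled != 0 {
		// Charge the current user G for this allocation.
		assistG = getg()
		if assistG.m.curg != nil {
			assistG = assistG.m.curg
		}
		// Charge the allocation against the G.
		assistG.gcAssistBytes -= int64(size)

		if assistG.gcAssistBytes < 0 {
			// This G is in debt. Assist the GC to correct
			// this before allocating.
			gcAssistAlloc(assistG)
		}
	}
	// ... the actual allocation happens only after the debt is paid ...
}
```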
In the code above, the line if assistG.gcAssistBytes < 0 checks whether our goroutine is in allocation debt. Allocation debt is a fancy way of saying that, during this GC cycle, the goroutine has allocated more than it has contributed in GC work.
You can think of this like a tax that your goroutine must pay for allocating during a GC cycle, except that this tax must be paid upfront before the allocation can actually happen. Additionally, the tax is proportional to the amount the goroutine is attempting to allocate. This provides a degree of fairness such that goroutines that allocate a lot will pay the price for those allocations.
So, assuming this is the first time our goroutine is allocating during the current GC cycle, it will be forced to do GC assist work. The interesting line here is the call to gcAssistAlloc.
This function is responsible for some housekeeping and eventually calls into gcAssistAlloc1 to perform the actual GC assist work. I won’t go into the details of the gcAssistAlloc functions, but essentially they do the following:
Check that the goroutine is not doing something non-preemptible (e.g., a system goroutine)
Perform GC mark work
Check if the goroutine still has an allocation debt; if not, return
Go to step 2
It should now be clear that any goroutine doing work that involves allocating will incur the GC assist penalty during a GC cycle. Since the work has to be done before the allocation, this surfaces as latency or slowness on the useful work the goroutine actually intends to do.
In our API frontend, this meant that API responses would see increased latency during GC cycles. As mentioned earlier, as load on each server increased, the memory allocation rate would increase, which would in turn increase the rate of GC cycles (often into the tens of cycles per second). More GC cycles, we now know, means more GC assist work for the goroutines serving the API and, therefore, more API latency.
You can see this quite clearly from an execution trace of our application. Below are two slices from the same execution trace of Visage; one while the GC cycle was running and one while it was not.
Introducing the Twitch Configuration Service
Building Extensions for Twitch keeps getting faster and easier. We recently announced the new and improved Developer Rig that helps developers build Extensions more quickly and intuitively. Today we’re announcing the Twitch Configuration Service.
Configuration Service removes the burden of writing a back-end to store persistent channel- and Extension-specific data. It then provides this data on Extension load, eliminating the need for your back-end to handle traffic from end users for this scenario. This means that developers only need to focus on creating amazing experiences, not on building complex back-ends. In short, with Configuration Service, we’re unlocking a developer’s ability to build better Extensions faster.
Let’s take a look at two common use cases that will benefit from Configuration Service immediately. We’ll use our Bot Commander and Animal Facts example Extensions for reference.
Build a simple Extension without building a back-end.
Developers can build simple front-end-only Extensions that let streamers configure the Extension to provide unique experiences for their viewers. For instance, a streamer may want to configure a list of Chat commands for their channel that viewers can see. In this scenario, the Extension’s front-end validates the input and calls a method to store it in the Configuration Service. On the viewer side, on startup, the Extension is provided with an object containing the stored information, seen below.
Configuration object for the list of Chat commands
Using a callback function from the Extension Helper Library, the Extension is notified when data has been delivered.
Using the Extension Helper library to get the broadcaster configuration data
The front-end can then load the object stored in the Configuration Service, which means the viewer can see and react to the Chat commands. All of this happens without needing to build a back-end.
Simple front-end Extension working without a back-end
Reduce development and operational costs as part of your Extension back-end service.
For Extensions that do require a back-end, the Configuration Service can support scenarios that need to persist channel-specific data. For example, with data-driven Extensions, the Configuration Service lets the streamer store the configuration needed to call the appropriate APIs. The Extension back-end can then query the Configuration Service to get the needed data. When the viewer loads the Extension, they will receive content relevant to the streamer’s channel. See the data below, which is from our Animal Facts code example.
Configuration object for the Animal Facts example
If streamer-specific data is needed when the Extension loads, the Configuration Service can also provide that data without exposing the rest of the back-end to viewer traffic, reducing developers’ scaling needs.
Sample Extensions using Configuration Service and Extension back-end Service
Configuration Service opens the door for more developers to build on Twitch by empowering them to spend more time on the user experience for their idea, rather than on building the back-end. Most Extension scenarios require a developer to persist channel-specific data and retrieve it on Extension load to hydrate the experience. Until now, developers had to support this in their back-end, even for simple scenarios, taking time away from building the best possible experience for streamers and their viewers.
Developers who have already built Extensions can start using Configuration Service immediately, either to add new functionality to their experience or to replace the way they persist data in their back-end. New devs can either build their back-end around Configuration Service or just use it to support their Extension scenario. All this is provided, with love, by Twitch at no additional cost to developers.
We believe that streamers and viewers will benefit greatly from the increased functionality and stability that Configuration Service provides (and will use Extensions more). For developers, we hope this will be motivation to build more Extensions, knowing that they can do so faster and more easily than before, without having to worry about investing in and building this aspect of their back-end.
There are many ways to learn and get started with Configuration Service.
You can get started with Twitch Configuration Service samples on GitHub:
A rebuilt and redesigned Developer Rig
The Developer Rig concept is a cornerstone of Twitch’s developer outreach. Our vision for building Extensions on Twitch requires that we provide the tools developers need to build and test Extensions quickly and intuitively. Since we launched the Developer Rig, we have made consistent improvements to start times, online and local capabilities, and product management features. Lowering the barrier to entry for developers who want to build for Twitch is our number one priority.
As part of our product planning cycle, we conducted research to gauge how well we’re hitting our own goals. What we’ve heard from our community is that the Developer Rig needs some love. The major obstacles you’ve told us about are that it’s challenging to get started, some of the user experience is unclear, and the documentation is hard to grok.
Thanks to your feedback, today we’re releasing a rebuilt and redesigned Developer Rig, and it’s available now.
To improve the Developer Rig experience, we spent a lot of time thinking about how to get developers started as quickly, intuitively, and efficiently as possible in a stable and reliable environment.
Create Extensions Project from scratch or use provided samples to get started
Reduced start time
When you download the Rig, you just need to invoke a simple script to install dependencies and configure your dev machine. After that, type `yarn start` to launch the Rig and you’re in. We now drop users directly into an Extension-building experience without requiring them to manually enter commands. In the first-run experience, you can create your own project or use a Twitch-provided sample. If you already have Extension Projects in the Rig, it pre-populates them for you. Finally, to start up front-end and back-end services, you simply click two buttons.
Intuitive workflow
We are also introducing the concept of an Extension Project: a combination of the Extension manifest you create on our dev site and your code. You can use the Rig to create an Extension Project, using either your own code or samples created by Twitch. We’re also releasing a brand-new React-based boilerplate sample that you can easily add to your Extension Projects. An Extension Project makes running your Extension in the Rig a lot simpler, with pre-populated commands for running the Rig and contextually relevant tutorials and documentation for when you need them.
Improved documentation
Due to popular demand, we’ve reworked the Rig documentation. We streamlined the README file (the getting-started info) and created a new document focused on the Rig. In addition, a new Rig UI leverages the Rig itself to provide contextual information where you need it. Developers can now rely on the technical documentation for getting started or when they are stuck.
Let us know what you think by connecting with us @twitchdev on Twitter or in the developer forums. Have fun building with the new Rig. We can’t wait to see your Extension on Twitch!