Tag: ethics

  • Data ethics for computing education through ballet and biometrics

    Reading Time: 6 minutes

    For our seminar series on cross-disciplinary computing, it was a delight to host Genevieve Smith-Nunes this September. Her research work involving ballet and augmented reality was a perfect fit for our theme.

    Genevieve Smith-Nunes

    Genevieve has a background in classical ballet and was a computing teacher for several years before starting Ready Salted Code, an educational initiative around data-driven dance. She is now coming to the end of her doctoral studies at the University of Cambridge, which focus on raising awareness of data ethics using ballet and brainwave data as narrative tools, in collaboration with student computing teachers.

    Why dance and computing?

    You may be surprised that there are links between dance, particularly ballet, and computing. Genevieve explained that classical ballet follows a strict, repetitive routine, with rule-based, algorithmic choreography. Her work on data-driven dance began around the time the new Computing curriculum was announced in England, when she noticed the lack of gender balance in her computing classroom. As an expert in both ballet and computing, she was driven by a desire to share the more creative elements of computing with her learners.

    Two of Genevieve’s data-driven ballet dances: [arra]stre and [PAIN]byte

    Genevieve has been working with a technologist and a choreographer for several years to develop ballets that generate biometric data and include visualisation of that data — hence her term ‘data-driven dance’. This has led her to develop a second focus in her PhD work: how Computing students can discuss questions of ethics based on the kind of biometric and brainwave data she is collecting. Students need to learn about the ethical issues surrounding data as part of their Computing studies, and Genevieve has been working with student teachers to explore ways in which her research can provide examples of data ethics issues for the Computing curriculum.

    Collecting data during dances

    Throughout her talk, Genevieve described several examples of dances she had created. One example was [arra]stre, a project that involved a live performance of a dance, plus a series of workshops breaking down the computer science theory behind the performance, including data visualisation, wearable technology, and images triggered by the dancers’ data.

    A presentation slide describing technologies necessary for motion capture of ballet.

    Much of Genevieve’s seminar was focused on the technologies used to capture movement data from the dancers and the challenges this involves. For example, some existing biometric tools don’t capture foot movement — which is crucial in dance — and also can’t capture movements when dancers are in the air. For some of Genevieve’s projects, dancers also wear headsets that allow collection of brainwave data.

    A presentation slide describing technologies necessary for turning motion capture data into 3D models.

    Due to interruptions caused by the COVID-19 pandemic, much of Genevieve’s PhD research had to take place online via video calls, and new tools had to be created to capture dance performances in a digital setting. Her research uses webcams and mobile phones to record the biometric data of dancers at 60 frames per second. A series of processes then creates a digital representation of the dance: isolating the dancer in the raw video; tracking the skeleton data; applying pose-estimation machine learning algorithms; and using additional software to map the joints to the correct position and rotation.
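    Genevieve didn’t tie this pipeline to specific libraries in her talk, but as a rough illustration, the skeleton-tracking and pose-estimation stages can be sketched with an off-the-shelf tool such as MediaPipe Pose (an assumed choice here; the video filename is hypothetical):

        # Sketch: extract per-frame skeleton landmarks from a dance video.
        # MediaPipe Pose stands in for the pose-estimation stage described
        # in the talk; it is not necessarily the tool Genevieve used.
        import cv2
        import mediapipe as mp

        mp_pose = mp.solutions.pose

        cap = cv2.VideoCapture("dance_recording.mp4")  # hypothetical file
        with mp_pose.Pose() as pose:
            while cap.isOpened():
                ok, frame = cap.read()
                if not ok:
                    break
                # MediaPipe expects RGB input; OpenCV reads frames as BGR.
                results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                if results.pose_landmarks:
                    # 33 landmarks with normalised x, y, z coordinates,
                    # ready to be mapped onto a 3D model's joints.
                    nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
                    print(f"nose: x={nose.x:.2f} y={nose.y:.2f} z={nose.z:.2f}")
        cap.release()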

    A presentation slide describing technologies necessary for turning a 3D computer model into an augmented reality object.

    Are your brainwaves personal data?

    It’s clear that Genevieve collects a lot of data from her research participants, particularly the dancers. Her projects involve collecting both biometric data and brainwave data. Ethical issues tied to brainwave data fall within the field of neuroethics, which comprises the ethical questions raised by our increasing understanding of the biology of the human brain.

    A graph of brainwaves placed next to ethical questions related to brainwave data.

    Teaching learners to be mindful about how to work with personal data is at the core of the work that Genevieve is doing now. She mentioned that there are a number of ethics frameworks that can be used in this area, and highlighted the UK government’s Data Ethics Framework as being particularly straightforward with its three guiding principles of transparency, accountability, and fairness. Frameworks such as this can help to guide a classroom discussion around the security of the data, and whether the data can be used in discriminatory ways.

    Brainwave data visualisation using the Emotiv software.

    Data ethics provides lots of material for discussion in Computing classrooms. To exemplify this, Genevieve recorded her own brainwaves during dance, research, and rest activities, and then shared the data during workshops with student computing teachers. In our seminar, she showed two visualisations of her own brainwave data (see the images above) and discussed how the student computing teachers in her workshops had felt that one was more ‘personal’ than the other. The same brainwave data can be presented as a spreadsheet, a moving graph, or an image. The student teachers found that the graph felt more medical, and more like permanent personal data, than the abstract visualisation, but that the raw spreadsheet data felt the most personal and intrusive of all.
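    To make that contrast concrete, the sketch below renders a single synthetic signal (randomly generated stand-in values, not real EEG data) in the three forms the student teachers compared: raw numbers, a conventional trace, and an abstract image.

        # Sketch: one synthetic signal, presented three ways.
        import numpy as np
        import pandas as pd
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(1)
        t = np.linspace(0, 10, 600)  # 60 samples per second for 10 seconds
        signal = np.sin(2 * np.pi * 8 * t) + 0.3 * rng.normal(size=t.size)

        # 1. "Spreadsheet": the raw numbers.
        print(pd.DataFrame({"time_s": t, "amplitude": signal}).head())

        # 2. "Medical-looking graph": a conventional time-series trace.
        plt.subplot(1, 2, 1)
        plt.plot(t, signal, linewidth=0.7)
        plt.title("Trace")

        # 3. "Abstract image": the same numbers as a colour field.
        plt.subplot(1, 2, 2)
        plt.imshow(signal.reshape(20, 30), cmap="magma")
        plt.title("Visualisation")
        plt.show()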

    Watch the recording of Genevieve’s seminar to see her full talk:

    You can also access her slides and the links she shared in her talk.

    More to explore

    There are a variety of online tools you can use to explore augmented reality: for example, try out PoseNet with your device’s camera.

    Genevieve’s seminar used the title ME++, which refers to the data self and the human self: both are important and of equal value. Her use of this term is inspired by William J. Mitchell’s book Me++: The Cyborg Self and the Networked City. Within his framing, the ‘I’ of the digital world is more than the ‘I’ of the physical world, highlighting the posthuman blurring of the boundary between human and non-human.

    Genevieve’s work is also inspired by the philosophical work of Luciano Floridi, whose book The Ethics of Information might be something you want to investigate further. You can also read ME++ Data Ethics of Biometrics Through Ballet and AR, a paper by Genevieve about her doctoral work.

    Join our next seminar

    In our final two seminars for this year we are exploring further aspects of cross-disciplinary computing. Just this week, Conrad Wolfram of Wolfram Technologies joined us to present his ideas on maths and a core computational curriculum. We will share a summary and recording of his talk soon.

    On 2 November, Tracy Gardner and Rebecca Franks from our team will close out this series by presenting work we have been doing on computing education in non-formal settings. Sign up now to join us for this session:

    We will shortly be announcing the theme of a brand-new series of seminars starting in January 2023.  


  • Can algorithms be unethical?

    Reading Time: 5 minutes

    At Raspberry Pi, we’re interested in all things to do with technology, from building new tools and helping people teach computing, to researching how young people learn to create with technology and thinking about the role tech plays in our lives and society. One of the aspects of technology I myself have been thinking about recently is algorithms.

    Technology impacts our lives at the level of privacy, culture, law, environment, and ethics.

    All kinds of algorithms — defined series of repeatable steps that computers follow to perform a task — are running in the background of our lives. Some we recognise and interact with every day, such as online search engines or navigation systems; others operate unseen and are rarely experienced directly. We let algorithms make decisions that impact our lives in both large and small ways. As such, I think we need to consider the ethics behind them.
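    For readers who want that definition made concrete, here is a minimal, self-contained example of an algorithm in exactly this sense: a fixed series of repeatable steps that turns an input into an output the same way every time.

        # Binary search: a classic algorithm. The same fixed steps are
        # repeated until the target is found or ruled out.
        def binary_search(items, target):
            """Return the index of target in a sorted list, or -1 if absent."""
            low, high = 0, len(items) - 1
            while low <= high:
                mid = (low + high) // 2
                if items[mid] == target:
                    return mid
                if items[mid] < target:
                    low = mid + 1
                else:
                    high = mid - 1
            return -1

        print(binary_search([2, 5, 8, 12, 16, 23], 12))  # prints 3

    The algorithms that shape our lives, such as search ranking or route planning, follow the same pattern of fixed, repeatable steps, just at a much larger scale.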

    We need to talk about ethics

    Ethics are rules of conduct that are recognised as acceptable or good by society. It’s easier to discuss the ethics of a specific algorithm than the ethics of algorithms as a whole. Nevertheless, it is important that we have these conversations, especially because people often see computers as ‘magic boxes’: you push a button, and something magically comes out of the box, without any possibility of human influence over what that output is. This view puts power solely in the hands of the creators of the computing technology you’re using, and there is no guarantee that these people have your best interests at heart or are motivated to behave ethically when designing the technology.

    Who creates the algorithms you use, and what are their motivations?

    You should be critical of the output algorithms deliver to you, and if you have questions about possible flaws in an algorithm, you should not discount these as mere worries. Such questions could include:

    • Algorithms that make decisions have to use data to inform their choices. Are the data sets they use to make these decisions ethical and reliable?
    • Running an algorithm time and time again means applying the same approach time and time again. When dealing with societal problems, is there a single approach that will work successfully every time?

    Below, I give two concrete examples to show where ethics come into the creation and use of algorithms. If you know of other examples, or counter-examples (feel free to disagree with me), please share them in the comments.

    Algorithms can be biased

    Part of the ‘magic box’ mental model is the idea that computers are cold instruction followers that cannot think for themselves — so how can they be biased?

    Humans aren’t born biased: we learn biases alongside everything else, as we watch the way our family and other people close to us interact with the world. Algorithms acquire biases in the same way: the developers who create them might inadvertently add their own biases.

    Humans can be biased, and therefore the algorithms they create can be biased too.

    An example of this is a gang violence data analysis tool that the Met Police in London launched in 2012. Called the gang matrix, the tool held the personal information of over 300 individuals. 72% of the individuals on the matrix were non-white, and some had never committed a violent crime. In response to this, Amnesty International filed a complaint stating that the makeup of the gang matrix was influenced by police officers disproportionately labelling crimes committed by non-white individuals as gang-related.
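    A toy sketch can show this mechanism at work. Here the training labels are skewed against one (entirely invented) group regardless of behaviour, and a model trained on them reproduces that skew; scikit-learn is used purely as an illustrative choice.

        # Sketch: bias in historical labels becomes bias in a trained model.
        # All data here is invented for illustration.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        group = rng.integers(0, 2, size=1000)   # two invented groups
        behaviour = rng.normal(size=1000)       # identical across both groups
        # Historical labels over-flag group 1 regardless of behaviour:
        flagged = (behaviour > 1.0) | ((group == 1) & (rng.random(1000) < 0.3))

        X = np.column_stack([group, behaviour])
        model = LogisticRegression().fit(X, flagged)

        # The trained model now flags group 1 far more often.
        for g in (0, 1):
            test = np.column_stack([np.full(500, g), rng.normal(size=500)])
            print(f"group {g}: predicted flag rate {model.predict(test).mean():.2f}")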

    Who curates the content we consume?

    We live in a content-rich society: there is much, much more online content than one person could possibly take in. Almost every piece of content we consume is selected by algorithms: the music you listen to, the videos you watch, the articles you read, and even the products you buy.

    An illustration of a phone screen showing an invented tweet asking where people get their news from

    Some of you may have experienced a week in January 2012 in which you saw a lot of either cute kittens or sad images on Facebook. If so, you may have been part of a global social experiment that Facebook engineers performed on nearly 700,000 of the platform’s users without their consent. Some of these users were shown overwhelmingly positive content, and others overwhelmingly negative content, and the engineers monitored the users’ actions to gauge how they responded. Was this experiment ethical?

    In order to select content that is attractive to you, content algorithms observe the choices you make and the content you consume. The most effective algorithms give you more of the same content, with slight variation. How does this impact our beliefs and views? How do we broaden our horizons?
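    A toy recommender makes the “more of the same” effect visible. Here items are described by invented topic weights, a user profile is the average of items they already engaged with, and candidates are ranked by similarity to that profile:

        # Sketch: rank items by similarity to what the user already consumed.
        # Topic weights and the "liked" history are invented for illustration.
        import numpy as np

        items = np.array([
            [0.9, 0.1, 0.0],   # item 0: mostly music
            [0.8, 0.2, 0.0],   # item 1: mostly music
            [0.0, 0.9, 0.1],   # item 2: mostly sport
            [0.1, 0.0, 0.9],   # item 3: mostly politics
        ])

        liked = [0, 1]                       # the user engaged with two music items
        profile = items[liked].mean(axis=0)  # their profile: average of those items

        # Cosine similarity: higher means "more like what you already consume".
        scores = items @ profile / (
            np.linalg.norm(items, axis=1) * np.linalg.norm(profile)
        )
        print(np.argsort(scores)[::-1])  # music items ranked first, again

    Ranked this way, the user keeps seeing music while the sport and politics items slip further down the feed, which is exactly the horizon-narrowing effect worth discussing.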

    Why trust algorithms at all then?

    People generally don’t like making decisions; almost everyone knows the discomfort of indecision. In addition, emotions have a huge effect on the decisions humans make moment to moment. Algorithms on the other hand aren’t impacted by emotions, and they can’t be indecisive.

    While algorithms are not immune to bias, in general they are far less susceptible to it than humans. And if a bias is identified in an algorithm, an engineer can remove it by editing the algorithm or changing the dataset the algorithm uses. The same cannot be said for human biases, which are often deeply ingrained and widespread in society.

    As is true for all technology, algorithms can create new problems as well as solve existing problems.

    That’s why there are more and less appropriate areas for algorithms to operate in. For example, using algorithms in policing is almost always a bad idea, as the data involved is recorded by humans and is very subjective. In objective, data-driven fields, on the other hand, algorithms have been employed very successfully, such as diagnostic algorithms in medicine.

    Algorithms in your life

    I would love to hear what you think: this conversation requires as many views as possible to be productive. Share your thoughts on the topic in the comments! Here are some more questions to get you thinking:

    • What algorithms do you interact with every day?
    • How large are the decisions you allow algorithms to make?
    • Are there algorithms you absolutely do not trust?
    • What do you think would happen if we let algorithms decide everything?

    Feel free to respond to other people’s comments and discuss the points they raise.

    The ethics of algorithms is one of the topics for which we offer you a discussion forum on our free online course Impact of Technology. The course also covers how to facilitate classroom discussions about technology — if you’re an educator teaching computing or computer science, it is a great resource for you!

    The Impact of Technology online course is one of many courses developed by us with support from Google.


  • Hello World Issue 6: Ethical Computing

    Reading Time: 3 minutes

    Join us for an in-depth exploration of ethical computing in the newest issue of Hello World, our magazine for computing and digital making educators. It’s out today!

    We need to talk about ethics

    Whatever area of computing you hail from, the question of how to take an ethical approach to the projects we build with code is an important one. As educators, we also need to think about the attitudes we pass on to our students as we guide them along their computing journey.

    Ensuring that future generations use technology for good and consider the ethical implications of their creations is vital, particularly as self-learning AI systems are becoming prevalent. Let’s be honest: none of us want to live in a future resembling The Terminator’s nightmarish vision, however unlikely that is to come true.

    With that in mind, we’ve brought together a wide range of experts to share their ideas on the moral questions that teaching computing raises, and on the social implications of computing in the wider context of society.

    More in this issue

    We’ve also got the latest news about exciting online courses from Raspberry Pi and articles on Minecraft, Scratch, and the micro:bit. As usual, we also answer your latest questions and bring you an excellent collection of helpful features, guides, and lesson plans!

    Highlights of issue 6 include:

    • Doing the right thing: can computing help create ‘good citizens’?
    • Ethics in the curriculum: how to introduce them to students
    • Microblocks: live programming for microcontrollers
    • The 100-word challenge: a free resource to unlock creative writing

    You can download your PDF of Hello World #6 from our website right now! It’s freely available under a Creative Commons licence.

    Subscribe to Hello World

    We offer free print copies of the magazine to all computing educators in the UK. This includes teachers, Code Club and CoderDojo volunteers, teaching assistants, teacher trainers, and others who help children and young people learn about computing and digital making.

    Subscribe to have your free print magazine posted directly to your home, or subscribe digitally — 24,000 educators have already signed up to receive theirs!

    If you live outside the UK and are interested in computer science and digital making education (and since you’ve read this far, I think you are!), subscribe to always get the latest issue as a PDF file straight to your inbox.

    Get in touch!

    You could write for us about your experiences as an educator to share your advice with the community. Wherever you are in the world, get in touch by emailing our editorial team about your article idea — we would love to hear from you!

    Hello World magazine is a collaboration between the Raspberry Pi Foundation and Computing At School, which is part of the British Computer Society.
