Schlagwort: ai

  • New AI and accessibility updates across Android, Chrome and more (Director, Product Management)


    Reading Time: 2 minutes

    Improving speech recognition around the world

    In 2019, we launched Project Euphonia to find ways to make speech recognition more accessible for people with non-standard speech. Now we’re supporting developers and organizations around the world as they bring that work to even more languages and cultural contexts.

    New developer resources

    To improve the global ecosystem of tools, we’re providing developers with our open-source repositories via Project Euphonia’s GitHub page, where they can develop personalized audio tools for research or train their own models on diverse speech patterns.

    Support for new projects in Africa

    Earlier this year, we partnered with Google.org to support University College London in creating the Centre for Digital Language Inclusion (CDLI). The CDLI is working to improve speech recognition technology for non-English speakers in Africa by creating open-source datasets in 10 African languages, building new speech recognition models and continuing to support the ecosystem of organizations and developers in this space.

    Expanding accessibility options for students

    Accessibility tools can be particularly helpful for students with disabilities, from using facial gestures to navigate their Chromebooks with Face Control to customizing their reading experience with Reading Mode.

    And now, when you use your Chromebook with College Board’s Bluebook testing app (where students can take the SAT and most Advanced Placement exams), you’ll have access to all of Google’s built-in accessibility features, including the ChromeVox screen reader and Dictation, alongside College Board’s own digital testing tools.

    Making Chrome more accessible

    With more than 2 billion people using Chrome each day, we’re always striving to make our browser easier to use and more accessible for everyone with features like Live Caption and image descriptions for screen reader users.

    Access PDFs more easily on Chrome

    Previously, if you opened a scanned PDF in your desktop Chrome browser, you wouldn’t be able to use your screen reader to interact with it. Now with Optical Character Recognition (OCR), Chrome automatically recognizes these types of PDFs, so you can highlight, copy and search for text like any other page and use your screen reader to read them.

    Read with ease with Page Zoom

    Page Zoom now lets you increase the size of the text you see in Chrome on Android without affecting the webpage layout or your browsing experience — just like how it works on Chrome desktop. You can customize how much you want to zoom in and easily apply the preference to all the pages you visit or just specific ones.
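    The behavior described here, a global zoom level with optional per-page exceptions, can be modeled in a few lines. This is a purely illustrative sketch with made-up names, not Chrome’s actual implementation:

```python
# Illustrative model: one default zoom level plus per-site overrides.
class ZoomPrefs:
    def __init__(self, default_zoom=100):
        self.default_zoom = default_zoom  # percentage applied to all pages
        self.site_overrides = {}          # host -> percentage exceptions

    def set_global(self, percent):
        self.default_zoom = percent

    def set_for_site(self, host, percent):
        self.site_overrides[host] = percent

    def zoom_for(self, host):
        # A site-specific preference wins over the global default.
        return self.site_overrides.get(host, self.default_zoom)

prefs = ZoomPrefs()
prefs.set_global(125)                   # apply 125% everywhere...
prefs.set_for_site("example.com", 150)  # ...except on this one site
print(prefs.zoom_for("example.com"))    # 150
print(prefs.zoom_for("other.org"))      # 125
```

    The design point is simply that a per-site preference shadows the global one, which matches the choice the feature offers between "all the pages you visit" and "just specific ones".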

    Website: LINK

  • How well do you know our I/O 2025 announcements? (Contributor)


    Reading Time: < 1 minute

    No, but you can still be an I/O pro. Originally, the name I/O was based on the first two digits in a googol (a one, followed by 100 zeroes), the number that lends our company its name. According to lore, I/O has evolved to also nod to “input / output,” referencing the computational concept of interfacing between a computer system and the outside world, and “innovation in the open.” Pretty fitting, don’t you think?
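    The arithmetic behind the name is easy to verify; any language with arbitrary-precision integers will do, and Python has them built in:

```python
# A googol is 10**100: a one followed by 100 zeroes.
googol = 10 ** 100
digits = str(googol)

print(digits[:2])   # "10": the first two digits that gave I/O its name
print(len(digits))  # 101: a leading one plus 100 zeroes
```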

    Website: LINK

  • The latest AI news we announced in May


    Reading Time: < 1 minute

    General summary

    Google is investing in AI to improve everyday life. Recent updates include AI features in Search like AI Mode and Deep Search. Also, check out the new Android XR glasses and the redesigned Android and Wear OS.

    Summaries were generated by Google AI. Generative AI is experimental.

    Shakespeare-ish

    Attend, and hear of Google’s AI might,
    Where models bloom, and futures take their flight.
    New tools for search, with answers deep and vast,
    And shopping smarts, that hold your budget fast.

    On glasses now, AI will lend its grace,
    While Gemini finds its helpful place.
    For filmmakers too, a tool is made with care,
    And marketers find AI beyond compare.

    A subscription new, for those who crave the best,
    AI Ultra grants access, putting skills to the test.
    So use these tools, and let your visions soar,
    For Google’s AI unlocks a boundless door.

    Summaries were generated by Google AI. Generative AI is experimental.

    Website: LINK

  • How Google and NVIDIA are teaming up to solve real-world problems with AI


    Reading Time: 3 minutes

    From Google DeepMind’s AI advancements, to Google Cloud’s AI-first infrastructure, to our Other Bets’ use of AI to tackle complex challenges in areas like healthcare, energy and robotics — every corner of our company is focused on pushing the frontier of AI and bringing its benefits to everyone.

    NVIDIA is a critical partner in this mission. As a world leader in AI and accelerated computing, NVIDIA has collaborated with Google and Alphabet for years on a wide variety of projects, including Android, advanced AI research, next-generation hardware and software optimizations, and beyond — all with the goal of making AI more accessible for developers.

    This week at the NVIDIA GTC global AI conference, we’re doubling down on our partnership with NVIDIA with announcements across Google and Alphabet:

    Google Cloud’s AI infrastructure will help transform businesses with NVIDIA’s latest GPUs

    Training large AI models requires serious computing power, and NVIDIA continues to raise the bar on GPU performance. We’re excited to announce that our A4 VM, based on the NVIDIA HGX B200 GPU, is now generally available, and that our A4X VMs, based on NVIDIA GB200 NVL72, will be available soon too. We’re committed to supporting the latest Blackwell GPUs, including the just-announced NVIDIA RTX PRO 6000 Blackwell and NVIDIA GB300. This means faster training times, smoother deployments and the ability to tackle even more complex AI challenges.

    Google DeepMind and NVIDIA will make the power of Gemini available to more people

    Google DeepMind’s Gemini is the result of decades of large language model research and is our most capable AI model. This week we’ll share how rapid advances in Gemini are designed to support more developers and end users — and how our collaboration with NVIDIA is helping to make that possible.

    The Google DeepMind team also released Gemma 3 last week, the latest in our collection of lightweight, state-of-the-art open models. NVIDIA played a key role in the development of these open models, working closely with our researchers to optimize Gemma to run on its vast accelerated computing ecosystem. This means developers can easily access the same technology that powers Gemini and implement it on any NVIDIA GPU available to them.

    Separately, NVIDIA also announced today that it has employed Google DeepMind’s SynthID watermarking tool on its Cosmos video generation platform. This marks our first external deployment of the technology, and we’re proud to help instill user trust in AI-generated content.

    Alphabet and NVIDIA will use AI to tackle the world’s most complex issues

    Our collaboration with NVIDIA isn’t just about building AI tools to help in daily life; it’s about fundamentally improving the quality of life for everyone. That means working together to apply AI to address some of the world’s most complex challenges. For starters, we’re teaming up to apply the most advanced frontier models to areas from energy to drug discovery.

    • Smarter energy grids. Tapestry, X’s moonshot for the electric grid, and NVIDIA are researching methods to increase the speed and accuracy of electric grid simulations.
    • Improved drug discovery. Isomorphic Labs and NVIDIA are advancing the development of new medicines using AI.
    • More capable robots. Intrinsic and NVIDIA are making robots more intelligent and capable, with the integration of NVIDIA’s Isaac foundation models for more adaptive grasp capabilities.
    • Advanced robotics simulation. Google DeepMind and NVIDIA are launching MuJoCo-Warp, a new open-source physics simulator that will accelerate robotics research.

    We’ve always believed that the value of technology lives in its capacity to benefit people everywhere. That’s been foundational to our mission and it’s what our growing collaboration with NVIDIA is built on. By combining Google’s expertise in AI research and infrastructure with NVIDIA’s leadership in accelerated computing, we’re committed to strengthening the foundation of AI, making AI more accessible and helpful to developers and users, and collaborating in ways that will drive innovation for years to come.

    If you’re attending GTC this week, swing by Google Cloud’s booth #914 and check out one of the many Google sessions to learn more about our work together.

    Website: LINK

  • Experience AI: The story so far


    Reading Time: 5 minutes

    In April 2023, we launched our first Experience AI resources, developed in partnership with Google DeepMind to support educators to engage their students in learning about the topic of AI. Since then, the Experience AI programme has grown rapidly, reaching thousands of educators all over the world. Read on to find out more about the impact of our resources, and what we are learning.

    The Experience AI resources

    The Experience AI resources are designed to help educators introduce AI and AI safety to 11- to 14-year-olds. They consist of:

    • Foundations of AI: a comprehensive unit of six lessons including lesson plans, slide decks, activities, videos, and more to support educators to introduce AI and machine learning to young people
    • Two standalone lessons:
      • Large language models (LLMs): a lesson designed to help young people discover how large language models work, their benefits, and why their outputs are not always reliable
      • Ecosystems and AI — Biology: a lesson providing an opportunity for young people to explore how AI applications are supporting animal conservation
    • AI safety: a set of resources with a flexible design to support educators in a range of settings to equip young people with the knowledge and skills to responsibly and safely navigate the challenges associated with AI

    We also offer a free online course, Understanding AI for educators, to help educators prepare to teach about AI.

    International expansion

    The launch of Experience AI came at an important time: AI technologies are playing an ever-growing role in our everyday lives, so it is crucial for young people to gain the understanding and skills they need to critically engage with these technologies. While the resources were initially designed for use by educators in the UK, they immediately attracted interest from educators across the world, as well as individuals wanting to learn about AI. The resources have now been downloaded over 325,000 times by people from over 160 countries. This includes downloads from over 7,000 educators worldwide, who will collectively reach an estimated 1.2 million young people.

    Photo of an educator teaching an Experience AI lesson.

    Thanks to funding from Google DeepMind and Google.org, we have also been working with partners from across the globe to localise and translate the resources for learners and educators in their countries, and provide training to support local educators to deliver the lessons. The educational resources are now available in up to 15 languages, and to date, we have trained over 100 representatives from 20 international partner organisations, who will go on to train local educators. Five of these organisations have begun onward training already, collectively training over 1,500 local educators so far.

    The impact of Experience AI

    The Experience AI resources have been well received by students and educators. Based on responses to our follow-up surveys in countries where we have partners:

    • 95% of educators agreed that the Experience AI sessions have increased their students’ knowledge of AI concepts 
    • 90% of young people (including young people in formal and non-formal education settings and learning independently) indicated that they better understand what AI and machine learning are
    Photo of a young person learning about AI on a laptop.

    This is backed up by qualitative feedback from surveys and interviews.

    “Students’ perception and understanding of AI has improved and corrected. They realised they can contribute and be a part of the [development], instead of only users.” – Noorlaila, educator, SMK Cyberjaya, Malaysia

    “[Students] found it interesting in the sense that it’s relevant information and they didn’t know what information was used for training models.” – Teacher, Liceul Tehnologic “Crisan” Criscior, Romania

    “Based on my knowledge and learning about AI, I now appreciate the definition of AI as well as its implementation.” – Student, Changamwe JSS, Kenya

    Photo of a group of educators participating in an Experience AI teacher training event in Kenya.

    The training and resources also support educators to feel more confident to teach about AI:

    • 93% of international partner representatives who participated in our training agreed that the training increased their knowledge of AI concepts
    • 88% of educators receiving onward training by our international partners agreed that the training increased their confidence to teach AI concepts
    • 87% of educator respondents from our ‘Understanding AI for educators’ online course agreed that the course was useful for supporting young people

    “It was a wonderful experience for me to join this workshop. Truly I was able to learn a lot about AI and I feel more confident now to teach the kids back at school about this new knowledge.” – Nur, educator, SMK Bandar Tasek Mutiara, trained by our partner Penang Science Cluster, Malaysia

    “This was one of the best information sessions I’ve been to! So, so helpful!” – Meagan, educator, University of Alberta, trained by our partner Digital Moment, Canada

    “The layout of the course in terms of content structuring is amazing. I love the discussion forum and the insightful yet empathetic responses by the course moderators on the discussion board. Honestly, I am really glad I started my AI in education journey with you.” – Priyanka, head teacher (primary level), United Arab Emirates, online course participant

    What are we learning?

    We are committed to continually improving our resources based on feedback from users. A recent review of feedback from educators highlighted key aspects of the resources that educators value most, as well as some challenges educators are facing and possible areas for improvement. For example, educators particularly like the interactive aspects, the clear structure and explanations, and the videos featuring professionals from the AI industry. We are continuing to look for ways we can better support educators to adapt the content and language to better support students in their context, fit Experience AI into their school timetables, and overcome technical barriers. 

    We value feedback on our resources and will continue to highlight the importance of AI education in schools and work with partners across the globe to adapt our resources for different contexts.

    Get involved

    If you would like to try out our Experience AI resources, head to experience-ai.org, where you can find our free resources and online course, as well as information about local partners in your area.

    Website: LINK

  • February Xbox update: send invite links, cloud gaming updates and more


    Reading Time: 4 minutes

    This Xbox update once again rolls out fresh features. Starting today, as a Game Pass subscriber you can conveniently invite your friends to your Xbox Cloud Gaming (Beta) sessions via a link. In addition, even more Xbox titles are now playable via the cloud. In case you missed it: Xbox is actively researching the use of AI in gaming, and in January we introduced a network quality indicator. There is also a new controller update to improve gameplay. You’ll find more details below:

    Xbox Cloud Gaming (Beta)

    Send invite links to your friends to play together in a cloud gaming session.

    Starting today, as an Xbox Game Pass Ultimate subscriber you can generate links to invite other Game Pass Ultimate subscribers to your play sessions while using the Xbox Cloud Gaming (Beta) service in your web browser or on a supported TV.

    While playing, you can create a session link as an Xbox Game Pass Ultimate member by opening the game invite menu in the guide or in the game and looking for the “Anyone” section. Copy this link and send it to your friends so they can join the play session.

    Invite links give you flexibility in choosing who to invite to a play session. You can send a direct message, invite a group to chat, or share the link on a social network. Invited players can join your cloud gaming session instantly via their web browser or a supported TV.

    To join a play session via a web browser or mobile device:

    1. Open the invite link in a supported browser.
    2. Sign in with your Xbox profile.
    3. Click “Play with Ultimate”.

    To join a play session on a TV:

    1. Open the invite link on a PC or mobile device.
    2. Click the “Join on another device” button to get your code.
    3. Open the Xbox guide on your TV and select the option “Have a game session code?”.
    4. Enter the code from the web browser and play.

    There are a few things to keep in mind when using this new feature:

    • All players must have an Xbox account, and some games require Xbox Game Pass Ultimate.
    • All players must have an entitlement to the game in order to play.
    • The number of players who can join via the invite link depends on the maximum number of players the game allows.
    • After creating a link, session hosts can revoke it at any time to prevent new players from joining.
    • The ability to remove existing players from the session depends on the game.
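    The rules above (a revocable link and a per-game player cap) can be summarized in a short sketch. All names here are hypothetical; this is a toy model for illustration, not an actual Xbox API:

```python
# Toy model of an invite-link session: a host creates a revocable link,
# and joins are capped by the game's maximum player count.
class CloudSession:
    def __init__(self, host, max_players):
        self.players = [host]
        self.max_players = max_players
        self.link_active = False

    def create_invite_link(self):
        self.link_active = True
        return "https://example.invalid/session-link"  # placeholder URL

    def revoke_link(self):
        # Hosts can revoke the link at any time; existing players stay.
        self.link_active = False

    def join_via_link(self, player):
        if not self.link_active:
            return False  # link revoked or never created
        if len(self.players) >= self.max_players:
            return False  # game's player cap reached
        self.players.append(player)
        return True

session = CloudSession("host", max_players=2)
session.create_invite_link()
print(session.join_via_link("friend1"))  # True
print(session.join_via_link("friend2"))  # False: cap reached
session.revoke_link()                    # no new players can join now
```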

    Stream your own game: new cloud titles available

    This month we’re adding even more games to the “stream your own game” collection. As a Game Pass Ultimate subscriber, you can stream over 50 cloud games on supported devices, provided you own them.

    Recently added:

    • Blasphemous 2
    • Kingdom Come: Deliverance II
    • Slime Rancher 2
    • Subnautica
    • Subnautica: Below Zero
    • The Talos Principle 2
    • Tomb Raider IV-VI Remastered

    Coming soon:

    • Atomic Heart
    • Cult of the Lamb
    • Hotline Miami
    • Killer Frequency
    • Neva
    • Overcooked! All You Can Eat
    • Phantom Breaker: Battle Grounds Ultimate
    • Serious Sam Collection
    • Trepang 2
    • Worms Armageddon: Anniversary Edition
    • And more…

    The full list of cloud games that can be streamed on supported devices is available here. You can find more information here.

    In case you missed it

    Xbox is currently researching the use of AI in gaming. In addition, the January Xbox update included a new version of the network quality indicator for cloud gaming, a controller update, and PC gaming updates that improve stability, title discoverability and ease of use.

    Breakthroughs in generative AI

    Following a study published in Nature, we recently announced Muse, a new generative AI model for gameplay. We are exploring how this model could one day benefit both players and game developers: from reviving nostalgic games to faster ideation and iteration for developers, there are numerous potential applications.

    You can find more information about this breakthrough in this article.

    Xbox Cloud Gaming (Beta): network quality indicator update

    The new network quality indicator is now available to everyone and helps diagnose potential network problems during streamed play sessions. Most audio and video issues are caused by problems with the network connection. This new feature helps you track the quality of your network connection while playing via Xbox Cloud Gaming (Beta) on supported devices.

    To turn the network warning icons on or off, go to your profile picture > Settings > Streaming > Network quality indicator. You can find more information about this feature, along with troubleshooting tips, here.

    Xbox accessories: firmware update for the Xbox Wireless Controller

    We have released a firmware update for the Xbox Wireless Controller that includes improvements to thumbstick auto-centering, trigger adjustment and mouse-to-thumbstick inputs. Install this update on Xbox Series X|S, Xbox One or a Windows PC.

    Help shape the future of Xbox

    Stay up to date on future updates via Xbox Wire. If you need support with Xbox updates, visit the official Xbox support website.

    We love hearing from the community, whether you have a suggestion for a new feature you’d like to see added or feedback on existing features that could be improved. We’re always looking for ways to improve the Xbox experience for players around the world. If you want to help shape the future of Xbox and get early access to new features, join the Xbox Insider Program today by downloading the Xbox Insider Hub for Xbox Series X|S and Xbox One or Windows PC.

    Happy gaming!

    Website: LINK

  • Teaching about AI – Teacher symposium


    Reading Time: 5 minutes

    AI has become a pervasive term, heard with trepidation, excitement, and often a furrowed brow in school staffrooms. For educators, there is pressure to use AI applications for productivity: to save time, to help create lesson plans, to write reports, to answer emails, and so on. There is also a lot of interest in using AI tools in the classroom, for example to personalise or augment teaching and learning. However, without an understanding of AI technology, neither productivity nor personalisation is likely to be successful, as teachers and students alike must be critical consumers of these new ways of working in order to use them productively.

    Fifty teachers and researchers posing for a photo at the AI Symposium, held at the Raspberry Pi Foundation office.
    Fifty teachers and researchers share knowledge about teaching about AI.

    Both in England and globally, few new AI-based curricula are being introduced, and the drive for teachers and students to learn about AI in schools is lagging, with limited initiatives supporting teachers in what to teach and how to teach it. At the Raspberry Pi Foundation and Raspberry Pi Computing Education Research Centre, we decided it was time to investigate this missing link of teaching about AI, and specifically to discover what the teachers who are leading the way on this topic are doing in their classrooms.

    A day of sharing and activities in Cambridge

    We organised a day-long, face-to-face symposium with educators who have already started to think deeply about teaching about AI, have started to create teaching resources, and are starting to teach about AI in their classrooms. The event was held in Cambridge, England, on 1 February 2025, at the head office of the Raspberry Pi Foundation. 

    Photo of educators and researchers collaborating at the AI symposium.
    Teachers collaborated and shared their knowledge about teaching about AI.

    Over 150 educators and researchers applied to take part in the symposium. With only 50 places available, we followed a detailed selection protocol, prioritising those with the most experience of teaching about AI in schools. We also made sure that educators and researchers from different teaching contexts were selected, so that phases from primary to further education were well represented. Educators and researchers from England, Scotland, and the Republic of Ireland gathered to share their experiences. One of our main aims was to build a community of early adopters who have started along the road of classroom-based AI curriculum design and delivery.

    Inspiration, examples, and expertise

    To inspire the attendees with an international perspective on the topics being discussed, Professor Matti Tedre, a visiting academic from Finland, gave a brief overview of the approach to teaching about AI and the resources that his research team has developed. In Finland, there is no compulsory, distinct computing subject, so AI is taught within other subjects, such as history. Matti showcased tools and approaches developed in the Generation AI research programme in Finland. You can read about the Finnish research programme and Matti’s two-month visit to the Raspberry Pi Computing Education Research Centre in our blog.

    Photo of a researcher presenting at the AI Symposium.
    A Finnish perspective to teaching about AI.

    Attendees were asked to talk about, share, and analyse their teaching materials. To model how to analyse resources, Ben Garside from the Raspberry Pi Foundation demonstrated how to complete the activities using the Experience AI resources as an example. The Experience AI materials, co-created with Google DeepMind, are a suite of free classroom resources, teacher professional development, and hands-on activities designed to help teachers confidently deliver AI lessons. Aimed at learners aged 11 to 14, the materials are informed by the AI education framework developed at the Raspberry Pi Computing Education Research Centre and are grounded in real-world contexts. We’ve recently released new lessons on AI safety, and we’ve localised the resources for use in many countries across Africa, Asia, Europe, and North America.

    In the morning session, Ben exemplified how to talk about and share learning objectives, concepts, and research underpinning materials using the Experience AI resources and in the afternoon he discussed how he had mapped the Experience AI materials to the UNESCO AI competency framework for students.

    Photo of an adult presenting at the AI Symposium.
    UNESCO provide important expertise.

    Kelly Shiohira, from UNESCO, kindly attended our session, and gave an invaluable insight into the UNESCO AI competency framework for students. Kelly is one of the framework’s authors and her presentation helped teachers understand how the materials had been developed. The attendees then used the framework to analyse their resources, to identify gaps and to explore what progression might look like in the teaching of AI.

    Photo of a whiteboard featuring different coloured post-it notes displayed featuring teachers' and researchers' ideas.
    Teachers shared their knowledge about teaching about AI.

    Throughout the day, the teachers worked together to share their experience of teaching about AI. They considered the concepts and learning objectives taught, what progression might look like, what the challenges and opportunities were of teaching about AI, what research informed the resources and what research needs to be done to help improve the teaching and learning of AI.

    What next?

    We are now analysing the vast amount of data that we gathered from the day and we will share this with the symposium participants before we share it with a wider audience. What is clear from our symposium is that teachers have crucial insights into what should be taught to students about AI, and how, and we are greatly looking forward to continuing this journey with them.

    As well as running the symposium, we are conducting academic research in this area; you can read more about it in our Annual Report and on our research webpages. We will also be consulting with teachers and AI experts. If you’d like to be sent links to these blog posts, sign up to our newsletter. If you’d like to take part in our research and potentially be interviewed about your perspectives on AI curricula, contact us at rpcerc-enquiries@cst.cam.ac.uk.

    We are also sharing the research being done by ourselves and other researchers in the field at our research seminars. This year, our seminar series is on teaching about AI and data science in schools. Please do sign up and come along, or watch some of the presentations already delivered by the amazing research teams who are working to discover what we should teach about AI in schools, and how.

    Website: LINK

  • Muse, a generative AI model for gameplay, opens up new possibilities in gaming and game development


    Reading Time: 3 minutes

    What Muse means for the gaming of tomorrow

    Although we are still at an early stage, the model research is already pushing the boundaries of what we thought possible. We are already using Muse to develop a real-time playable AI model trained on other first-party games, and we can already see the potential that could one day benefit both players and game developers. The possibilities range from reviving nostalgic games to faster, more creative ideation.

    Today, countless classic games that are tied to outdated hardware are no longer playable for most people. Building on our breakthrough, we are exploring Muse’s potential to revive older games from our portfolio by optimizing them for any device. We believe this could make it much easier in the long term to preserve classic games for the future.

    The idea that beloved games rendered obsolete by advances in hardware could one day be played again on any screen via Xbox is both a vision and a motivation for us. Beyond that, we are exploring how Muse can help game teams design new gameplay experiences and introduce new content. To do this, we draw on classic games and give our developers the opportunity to create new experiences and take part in the development process.

    We will create various opportunities to take part in this exploration, starting with short interactive AI game experiences that can soon be tested via Copilot Labs. There is so much more to discover, and we can’t wait to share more details.

    Xbox’s approach to generative AI

    Xbox has been working with AI for decades, including in collaboration with Microsoft Research. We are constantly looking for ways to delight players and to use new capabilities that bring our creative visions to life. While AI is not new to gaming, we are currently seeing a rapid acceleration in generative AI research, both at Microsoft and among the other major players in the technology industry. We believe it is essential to consider how we can responsibly support the game developer community with our AI breakthroughs.

    For Xbox, game developers will always be at the center of our overall AI efforts. We believe there is room for both traditional game development and future generative AI technologies, which can serve as an extension of creative work and enable new experiences. With that in mind, we have empowered the creative minds here at Xbox to decide how to use generative AI. There will not be a single solution for every game or project; rather, the approach will depend on the creative vision and goals of each team.

    Was auf uns zukommt: KI-Erlebnisse von Xbox

    Beim Blick in die Zukunft konzentrieren wir uns darauf, wie KI die Hürden und Herausforderungen für Spieler*innen und Entwickler*innen beseitigen kann. Das bedeutet, dass wir unsere KI-Innovationen früher vorstellen und Spielern und Entwicklern die Möglichkeit geben werden, mit neuen KI-Funktionen und -Fähigkeiten zu experimentieren und diese gemeinsam mit uns zu entwickeln. So können wir sicherstellen, dass unsere KI-Tools echte Probleme lösen und einen neuen Mehrwert für die Entwicklung und das Spielen mit Xbox bieten. 

    Um Tools zu entwickeln, die für alle von Nutzen sind, werden unsere Innovationen weiterhin auf unserer Verpflichtung zu verantwortungsvoller KI und unserem Engagement für die Entwicklung von KI-Lösungen beruhen, die sich an sechs Prinzipien orientieren: Fairness, Zuverlässigkeit und Sicherheit, Datenschutz und Sicherheit, Inklusion, Transparenz und Verantwortlichkeit.

    Wir werden in diesem Jahr weiterhin darüber berichten, wie wir KI einsetzen, um Spieleentwickler*innen zu unterstützen und das bestmögliche Spielerlebnis auf allen Geräten zu bieten. Vor der GDC 2025 werden wir auf Xbox Wire und in unserem AI Resource Hub über weitere Updates berichten.


    Du möchtest mehr erfahren? Sieh dir das Gespräch von Phil Spencer, dem CEO von Microsoft Gaming, Dom Matthews, dem Studioleiter von Ninja Theory, und Katja Hofmann, der Senior Principal Researcherin und Leiterin des Game Intelligence Teams, über die heutigen News an:

    [youtube https://www.youtube.com/watch?v=c15vxDHJ2lU?feature=oembed&w=500&h=281]

    Website: LINK

  • Empowering Creators and Players With Muse, a Generative AI Model for Gameplay

    Empowering Creators and Players With Muse, a Generative AI Model for Gameplay

    Reading Time: 3 minutes

    What Muse Means for Gaming

    Although it’s still early, this model research is pushing the boundaries of what we thought was possible. We are already using Muse to develop a real-time playable AI model trained on other first-party games, and we see potential for this work to one day benefit both players and game creators: from allowing us to revive nostalgic games to faster creative ideation.

    Today, countless classic games tied to aging hardware are no longer playable by most people. Thanks to this breakthrough, we are exploring the potential for Muse to take older back catalog games from our studios and optimize them for any device. We believe this could radically change how we preserve and experience classic games in the future and make them accessible to more players.

    To imagine that beloved games lost to time and hardware advancement could one day be played on any screen with Xbox is an exciting possibility for us. Another opportunity we are exploring is how Muse can help game teams prototype new gameplay experiences during the creative process and introduce new content—taking games players already love and enabling our developers to inject new experiences for them to enjoy, or even enable you to participate in the creation process.

    We’ll create opportunities for people to participate in this exploration, starting with short interactive AI game experiences for you to try on Copilot Labs very soon. There’s still so much for us to discover, and we can’t wait to share more.

    Xbox’s Approach to Generative AI

    Xbox has been innovating with AI and machine learning for decades, including in partnership with Microsoft Research—constantly working to find ways to delight players, and harness new computing capabilities to bring creative visions to life. While AI isn’t new to gaming, we’re now seeing an acceleration of generative AI research, both within Microsoft and across the broader technology community. We believe it’s important to shape how these new generative AI breakthroughs can support our industry and game creation community in a collaborative and responsible way.

    For Xbox, game creators will always be the center of our overall AI efforts. We believe there is space for traditional game development and future generative AI technologies that serve as an extension of creative work and offer novel experiences. As part of this, we have empowered creative leaders here at Xbox to decide on the use of generative AI. There isn’t going to be a single solution for every game or project, and the approach will be based on the creative vision and goals of each team.

    What’s Ahead: AI Experiences from Xbox

    As we look ahead, we’re focused on how AI can address the barriers and frictions to playing and developing games. This means that we’ll share our AI product innovation earlier on, providing opportunities for players and creators to experiment with and co-build new AI features and capabilities with us. This allows us to make sure that our AI innovations address real problems and add new value to creating or playing with Xbox. 

    To develop tools that are used in ways that benefit everyone, our innovation will continue to be built on our commitment to Responsible AI and dedication to developing AI solutions guided by six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

    You can expect more exciting announcements this year about how we are using AI to empower game creators and provide the best possible player experience across all devices. Stay tuned for more updates ahead of GDC 2025 on Xbox Wire and our AI Resource Hub as we continue to shape the future of gaming supported by AI.


    Want to hear more? Check out a conversation with Phil Spencer, CEO of Microsoft Gaming, Dom Matthews, Studio Head of Ninja Theory, and Katja Hofmann, Senior Principal Researcher and Lead of the Game Intelligence Team, about today’s news:

    [youtube https://www.youtube.com/watch?v=c15vxDHJ2lU?feature=oembed&w=500&h=281]

    Website: LINK

  • The latest AI news we announced in January

    The latest AI news we announced in January

    Reading Time: < 1 minute

    We announced that Google Cloud’s Automotive AI Agent is arriving for Mercedes-Benz. Google Cloud unveiled Automotive AI Agent, a new way for automakers to create helpful agentic experiences for drivers. Mercedes-Benz is among the first automakers planning to implement the product, which goes beyond current vehicle voice control to allow people to have natural conversations and ask queries while driving, like “Is there an Italian restaurant nearby?”

    We shared five ways NotebookLM Plus can help your business. NotebookLM is a tool for understanding anything — including synthesizing complex ideas buried in deep research. This month we made our premium NotebookLM Plus available in more Google Workspace plans to help businesses and their employees with everything from sharing team notebooks and centralizing projects, to streamlining onboarding and making learning more engaging with Audio Overviews.

    We announced new AI tools to help retailers build gen AI search and agents. The National Retail Federation kicked off the year with their annual NRF conference, where Google Cloud showed how AI agents and AI-powered search are already helping retailers operate more efficiently, create personalized shopping experiences and use AI to get the latest products and experiences to their customers.

    Website: LINK

  • New Circle to Search updates make it even easier to find information and get things done. Product Manager

    New Circle to Search updates make it even easier to find information and get things done. Product Manager

    Reading Time: < 1 minute

    Last year, we introduced Circle to Search to help you easily circle, scribble or tap anything you see on your Android screen, and find information from the web without switching apps. Now we’re introducing two improvements that make Circle to Search even more helpful.

    First, we’re expanding AI Overviews to more kinds of visual search results for places, trending images, unique objects and more. Inspired by a piece of art? Circle it and see a gen AI snapshot of helpful information with links to dig deeper and learn more from the web.

    Second, we’re making it easier for you to get things done on your phone. Circle to Search will now quickly recognize numbers, email addresses and URLs you see on your screen so you can take action with a single tap.

    Website: LINK

  • 60 of our biggest AI announcements in 2024

    60 of our biggest AI announcements in 2024

    Reading Time: 6 minutes

    It’s been a big year for Google AI. It may seem as though features like Circle to Search and NotebookLM’s Audio Overviews have been around for as long as you can remember, but they only launched in 2024. Joining them were a slew of other product releases and updates meant to make your day-to-day life even a little bit easier. So, as we say goodbye to 2024 (and prepare for the exciting AI news that’s sure to come in 2025), take a look at some of the top Google AI news stories that resonated with readers this year.

    January

    2024 began, quite fittingly, with fresh updates across a host of products and tools, including Gemini, Chrome, Pixel and Search. The announcement of our Circle to Search feature made a particular splash with readers. Here were some of the top Google AI news stories of the month:

    1. The power of Google AI comes to the new Samsung Galaxy S24 series
    2. New ways to search in 2024
    3. Circle (or highlight or scribble) to Search
    4. Chrome is getting 3 new generative AI features
    5. New Pixel features for a minty fresh start to the year

    February

    February brought a new chapter of our Gemini era, including the debut of Gemini 1.5; the news that Bard was becoming Gemini; the launch of Gemini Advanced; and more. We also announced new generative AI tools in Labs and tech to help developers and researchers build AI responsibly. Here were some of the top Google AI news stories of the month:

    1. Our next-generation model: Gemini 1.5
    2. Bard becomes Gemini: Try Ultra 1.0 and a new mobile app today
    3. The next chapter of our Gemini era
    4. Gemma: Introducing new state-of-the-art open models
    5. Try ImageFX and MusicFX, our newest generative AI tools in Labs

    March

    Health took center stage in March, with our annual Google Health Check Up event to show how AI is helping us connect people to health information and insights that matter to them. Stories about how we’re using AI for good also made the top-news cut, along with AI-based travel tools coverage as readers looked toward summer. Here were some of the top Google AI news stories of the month:

    1. Our progress on generative AI in health
    2. How we’re using AI to connect people to health information
    3. 6 ways to travel smarter this summer using Google tools
    4. How we are using AI for reliable flood forecasting at a global scale
    5. 21 nonprofits join our first generative AI accelerator

    April

    Spring showers bring…generative AI? Many of April’s top stories focused on how helpful generative AI can be to different groups of people, including developers, business owners, advertisers and Google Photos users. It was also a big month for AI skills-building, thanks to our AI Opportunity Fund and AI Essentials course. Here were some of the top Google AI news stories of the month:

    1. AI editing tools are coming to all Google Photos users
    2. Cloud Next 2024: More momentum with generative AI
    3. Grow with Google launches new AI Essentials course to help everyone learn to use AI
    4. Enhance visual storytelling in Demand Gen with generative AI
    5. Our newest investments in infrastructure and AI skills

    May

    May is synonymous with Google I/O around these parts, so it’s no wonder that much of the month’s top news was from our annual developer conference. At this year’s event, we shared how we’re building more helpful products and features with AI. But even amid all the I/O chatter, Googlers were working on other launches, like that of our AlphaFold 3 model, which holds big promise for science and medicine. Here were some of the top Google AI news stories of the month:

    1. Google I/O 2024: An I/O for a new generation
    2. Generative AI in Search: Let Google do the searching for you
    3. 100 things we announced at I/O 2024
    4. Ask Photos: A new way to search your photos with Gemini
    5. AlphaFold 3 predicts the structure and interactions of all of life’s molecules

    June

    In June, much of our AI news emphasized how this technology can help people in ways big and small. Stories covered both land (how Google Translate is helping people connect with one another all around the world, even if they don’t speak the same language) and sea (how a first-of-its-kind global map of ocean infrastructure is creating a better understanding of things like biodiversity). Here were some of the top Google AI news stories of the month:

    1. 110 new languages are coming to Google Translate
    2. Gemma 2 is now available to researchers and developers
    3. NotebookLM goes global with Slides support and better ways to fact-check
    4. New AI tools for Google Workspace for Education
    5. Mapping human activity at sea with AI

    July

    July was one of those months that makes clear how many things Googlers are working on at once with major announcements for Gemini, Google AI features on Samsung devices, our focus on secure AI and our Olympics partnership with Team USA and NBCUniversal. Here were some of the top Google AI news stories of the month:

    1. 4 Google updates coming to Samsung devices
    2. Gemini’s big upgrade: Faster responses with 1.5 Flash, expanded access and more
    3. 4 ways Google will show up in NBCUniversal’s Olympic Games Paris 2024 coverage
    4. Introducing the Coalition for Secure AI (CoSAI) and founding member organizations
    5. 3 things parents and students told us about how generative AI can support learning

    August

    August was a key moment for Google hardware, thanks to our Made by Google event, along with our Nest Learning Thermostat and Google TV Streamer releases. But software was in the mix, too — we’re looking at you, Chrome, Android and Gemini. Here were some of the top Google AI news stories of the month:

    1. The new Pixel 9 phones bring you the best of Google AI
    2. Gemini makes your mobile device a powerful AI assistant
    3. Your smart home is getting smarter, with help from Gemini
    4. 3 new Chrome AI features for even more helpful browsing
    5. Android is reimagining your phone with Gemini

    September

    Then came another month that underscored our mission to make AI helpful for everyone. Highlights included the launch of Audio Overviews in NotebookLM; the news of a new satellite constellation designed to detect wildfires more quickly; and tips on using Gemini features in Gmail. But that wasn’t all! Here were some of the top Google AI news stories of the month:

    1. NotebookLM now lets you listen to a conversation about your sources
    2. A breakthrough in wildfire detection: How a new constellation of satellites can detect smaller wildfires earlier
    3. Customers are putting Gemini to work
    4. How to use Gemini in Gmail to manage your inbox like a pro
    5. 5 new Android features to help you explore, search for music and more

    October

    October saw a slate of additional AI updates across products including Pixel, NotebookLM, Search and Shopping. Plus, we announced updates to the Search ads experiences at Google Marketing Live — helping advertisers use AI to reach their customers. Here were some of the top Google AI news stories of the month:

    1. October Pixel Drop: Helpful enhancements for your devices
    2. New in NotebookLM: Customizing your Audio Overviews and introducing NotebookLM Business
    3. Ask questions in new ways with AI in Search
    4. Google Shopping’s getting a big transformation
    5. New ways for marketers to reach customers with AI Overviews and Lens

    November

    This month was a time for both work and play, with news including how developers are using Gemini API and how chess-lovers can use AI to reimagine their sets. Plus, holiday prep was afoot with new updates to Google Lens and Shopping. Here were some of the top Google AI news stories of the month:

    1. 5 ways to explore chess during the 2024 World Chess Championship
    2. The Gemini app is now available on iPhone
    3. New ways to holiday shop with Google Lens, Maps and more
    4. How developers are using Gemini API
    5. Our Machine Learning Crash Course goes in depth on generative AI

    December

    We celebrated the one-year anniversary of our Gemini era by introducing our next, agentic era in AI — brought to life by our newest, most capable model, Gemini 2.0. We also shared landmark quantum chip news, and a whole raft of new generative AI offerings in Android, Pixel, Gemini, and our developer platforms AI Studio and Vertex AI. It’s certainly been a December to remember. Here were some of the top Google AI news stories of the month:

    1. Introducing Gemini 2.0: our new AI model for the agentic era
    2. Meet Willow, our state-of-the-art quantum chip
    3. Android XR: The Gemini era comes to headsets and glasses
    4. Try Deep Research and our new experimental model in Gemini, your AI assistant
    5. December Pixel Drop: New features for your Pixel phone, Tablet and more

    There you have it! Twelve months of top Google AI news in a flash. And the best part: Teams at Google are hard at work to keep the momentum going in 2025.

    Website: LINK

  • The latest AI news we announced in December

    The latest AI news we announced in December

    Reading Time: < 1 minute

    For more than 20 years, we’ve invested in machine learning and AI research, tools and infrastructure to build products that make everyday life better for more people. 2024 has been another big year for AI, and we ended on a high note with updates across AI models, consumer products, and research and science.

    Here’s a look back at just some of our AI announcements from December.

    Website: LINK

  • New AI features and more for Android and Pixel

    New AI features and more for Android and Pixel

    Reading Time: < 1 minute

    Our latest updates for Android and Pixel are packed with tons of AI-powered features — including Expressive Captions, an all-new feature that brings more feeling to captions, Gemini’s saved info, which remembers important information for you, and updates for Call Screen so it’s even easier to respond. Check out all the updates below.

    Website: LINK

  • Does AI-assisted coding boost novice programmers’ skills or is it just a shortcut?

    Does AI-assisted coding boost novice programmers’ skills or is it just a shortcut?

    Reading Time: 6 minutes

    Artificial intelligence (AI) is transforming industries, and education is no exception. AI-driven development environments (AIDEs), like GitHub Copilot, are opening up new possibilities, and educators and researchers are keen to understand how these tools impact students learning to code. 

    In our 50th research seminar, Nicholas Gardella, a PhD candidate at the University of Virginia, shared insights from his research on the effects of AIDEs on beginner programmers’ skills.

    Nicholas Gardella focuses his research on understanding human interactions with artificial intelligence-based code generators to inform responsible adoption in computer science education.

    Measuring AI’s impact on students

    AI tools are becoming a big part of software development, but what does that mean for students learning to code? As tools like GitHub Copilot become more common, it’s crucial to ask: Do these tools help students to learn better and work more effectively, especially when time is tight?

    This is precisely what Nicholas’s research aims to identify by examining the impact of AIDEs on four key areas:

    • Performance (how well students completed the tasks)
    • Workload (the effort required)
    • Emotion (their emotional state during the task)
    • Self-efficacy (their belief in their own abilities to succeed)

    Nicholas conducted his study with 17 undergraduate students from an introductory computer science course; most were first-time programmers, of different genders and backgrounds.


    The students completed programming tasks both with and without the assistance of GitHub Copilot. Nicholas selected the tasks from OpenAI’s human evaluation data set, ensuring they represented a range of difficulty levels. He also used a repeated measures design for the study, meaning that each student had the opportunity to program both independently and with AI assistance multiple times. This design helped him to compare individual progress and attitudes towards using AI in programming.

    Less workload, more performance and self-efficacy in learning

    The results were promising for those advocating AI’s role in education. Nicholas’s research found that participants who used GitHub Copilot performed better overall, completing tasks with less mental workload and effort compared to solo programming.

    Nicholas used several measures to find out whether AIDEs affected students’ emotional states.

    However, the immediate impact on students’ emotional state and self-confidence was less pronounced. Initially, participants did not report feeling more confident while coding with AI. Over time, though, as they became more familiar with the tool, their confidence in their abilities improved slightly. This indicates that students need time and practice to fully integrate AI into their learning process. Students increasingly attributed their progress not to the AI doing the work for them, but to their own growing proficiency in using the tool effectively. This suggests that with sustained practice, students can gain confidence in their abilities to work with AI, rather than becoming overly reliant on it.

    Students who used AI tools seemed to improve more quickly than students who worked on the exercises themselves.

    A particularly important takeaway from the talk was the reduction in workload when using AI tools. Novice programmers, who often find programming challenging, reported that AI assistance lightened the workload. This reduced effort could create a more relaxed learning environment, where students feel less overwhelmed and more capable of tackling challenging tasks.

    However, while workload decreased, use of the AI tool did not significantly boost emotional satisfaction or happiness during the coding process. Nicholas explained that although students worked more efficiently, using the AI tool did not necessarily make coding a more enjoyable experience. This highlights a key challenge for educators: finding ways to make learning both effective and engaging, even when using advanced tools like AI.

    AI as a tool for collaboration, not replacement

    Nicholas’s findings raise interesting questions about how AI should be introduced in computer science education. While tools like GitHub Copilot can enhance performance, they should not be seen as shortcuts for learning. Students still need guidance in how to use these tools responsibly. Importantly, the study showed that students did not take credit for the AI tool’s work — instead, they felt responsible for their own progress, especially as they improved their interactions with the tool over time.

    Rick Payne and team / Better Images of AI / Ai is… Banner / CC-BY 4.0

    Students might become better programmers when they learn how to work alongside AI systems, using them to enhance their problem-solving skills rather than relying on them for answers. This suggests that educators should focus on teaching students how to collaborate with AI, rather than fearing that these tools will undermine the learning process.

    Bridging research and classroom realities

    Moreover, the study touched on an important point about the limits of its findings. Since the experiment was conducted in a controlled environment with only 17 participants, further studies are needed to explore how AI tools perform in real-world classroom settings, where factors such as internet access play a fundamental role. It will also be relevant to understand how class size, varying prior experience, and the age of students affect their ability to integrate AI into their learning.

    In the follow-up discussion, Nicholas also demonstrated how AI tools are becoming more accessible within browsers and how teachers can integrate AI-driven development environments more easily into their courses. By making AI technology more readily available, these tools are democratising access to advanced programming aids, enabling students to build applications directly in their web browsers with minimal setup.

    The path ahead

    Nicholas’s talk provided an insightful look into the evolving relationship between AI tools and novice programmers. While AI can improve performance and reduce workload, it is not a magic solution to all the challenges of learning to code.

    Based on the discussion after the talk, educators should support students in developing the skills to use these tools effectively, shaping an environment where they can feel confident working with AI systems. The researchers and educators agreed that more research is needed to expand on these findings, particularly in more diverse and larger-scale educational settings. 

    As AI continues to shape the future of programming education, the role of educators will remain crucial in guiding students towards responsible and effective use of these technologies, as we are only at the beginning.

    Join our next seminar

    In our current seminar series, we are exploring how to teach programming with and without AI technology. Join us at our next seminar on Tuesday, 10 December at 17:00–18:30 GMT to hear Leo Porter (UC San Diego) and Daniel Zingaro (University of Toronto) discuss how they are working to create an introductory programming course for majors and non-majors that fully incorporates generative AI into the learning goals of the course. 

    To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.

    The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

    Website: LINK

  • Discover #Virgil: history comes to life with Arduino

    Discover #Virgil: history comes to life with Arduino

    Reading Time: 2 minutes

    We’re excited to introduce #Virgil, an innovative project that combines the power of Arduino technology with a passion for history, creating a groundbreaking interactive experience for museums.

    Using Arduino’s versatile and scalable ecosystem, #Virgil operates completely offline, allowing visitors to interact with 3D avatars in a seamless and immersive way. The project brings the past to life, offering dialogue-driven encounters with key historical figures thanks to voice recognition and edge AI – with the option to choose among many different languages.

    “#Virgil is meant to celebrate the past and, more importantly, open new avenues for education and inspiration. We want to prove how technology, when guided by ethical values, can amplify and perpetuate our cultural heritage in ways that used to be unimaginable,” comments Enrico Benevenuta, coordinator of the Territori Svelati project and AI expert.

    [youtube https://www.youtube.com/watch?v=hQBPIePZDMs?feature=oembed&w=500&h=281]

    Matteo Olivetti, great-grandson of Olivetti’s founder Camillo, drew inspiration from the iconic Divisumma to design a dedicated hardware setup, Olivox. 

    Powered by the Portenta X8 and Max Carrier, the device connects via HDMI to any screen, engaging visitors in a rich, interactive experience without the need for smartphones or a stable internet connection. This approach allows the project to adapt easily to different exhibitions and contexts, while offering full control over the visitor experience.

    Internationally renowned 3D artist Elvis Morelli was entrusted with creating the first avatar of the project – and it’s no coincidence that Camillo Olivetti was chosen. 

    The story of Olivetti resonates deeply with Arduino’s own mission of pushing the boundaries of technology, and #Virgil represents a continuation of that legacy by bridging the gap between the past and future through cutting-edge tools.

    To find out more about the project and perhaps have a chat with your favorite pioneer of technology and innovation, visit #Virgil’s booth at the upcoming 2024 Maker Faire Rome, booth E.09. Don’t forget to stop by Arduino’s booth N.07 to find out more about our products, and let us know what you asked Camillo!

    The post Discover #Virgil: history comes to life with Arduino appeared first on Arduino Blog.

    Website: LINK

  • Reimagining the chicken coop with predator detection, Wi-Fi control, and more

    Reading Time: 2 minutes

    The traditional backyard chicken coop is a very simple structure that typically consists of a nesting area, an egg-retrieval panel, and a way to provide food and water as needed. Realizing that some aspects of raising chickens are too labor-intensive, the Coders Cafe crew decided to automate most of the daily care process by bringing some IoT smarts to the traditional hen house.

    Controlled by an Arduino UNO R4 WiFi and actuated by a stepper motor, the front door of the coop relies on a rack-and-pinion mechanism to quickly open or close at the scheduled times. After the chickens have entered the coop to rest or lay eggs, they can be fed using a pair of fully automatic dispensers. Each one is a hopper with a screw at the bottom that, helped by gravity, pulls in the food and gently distributes it onto the ground. And as with the door, feeding can be scheduled in advance through the team’s custom app and the UNO R4’s integrated Wi-Fi chipset.

    The last and most advanced feature is the AI predator detection system. Thanks to a DFRobot HuskyLens vision module and its built-in training process, images of predatory animals can be captured and used to train the HuskyLens to recognize when to generate an alert. Once an animal has been detected, it notifies the UNO R4 over I2C, which, in turn, sends an SMS message via Twilio.

    More details about the project can be found in Coders Cafe’s Instructables writeup.

    The post Reimagining the chicken coop with predator detection, Wi-Fi control, and more appeared first on Arduino Blog.

    Website: LINK

  • 4 Google updates coming to Samsung devices (Senior Director, Global Android Product Marketing)

    Reading Time: < 1 minute

    At I/O, we shared how Wear OS 5 brings improved performance and battery life. Samsung’s new Galaxy Watch lineup, including the Watch Ultra and Watch7, will be the first smartwatches powered by Wear OS 5. And they’re the perfect companion for when you’re on the go: These smartwatches offer advanced health monitoring capabilities, including heart rate tracking and sleep monitoring, and a personalized health experience, as well as access to a wide range of apps in Google Play.

    4. Watch YouTube TV in multiview

    On the Galaxy Z Fold6, YouTube TV subscribers will be able to watch in multiview, enjoying up to four different streams at the same time. You can choose from pre-selected combinations of football, news, weather and simultaneous sporting events.

    We’re constantly working with Samsung to bring the latest Google updates to Galaxy products, from smartphones and wearables to even future technologies, like the upcoming XR platform. Check out everything that was announced at Galaxy Unpacked today.

    Website: LINK

  • 3 new ways to use Google AI on Android at work (Product Management Lead)

    Reading Time: < 1 minute

    The modern workplace can be a whirlwind of employees juggling tasks and business leaders trying to simplify operations or improve security. Everyone approaches work differently, but we’re all aiming for more efficiency and productivity. This is where AI-powered tools come into the picture: around three-quarters of employees say that AI has enhanced their productivity and quality of work. Our mission to make AI helpful for everyone extends to the workplace, too. Here are three ways new Google AI on Android features can make work easier for your employees and developers — and, by extension, for you.

    Website: LINK

  • 8 new accessibility updates across Lookout, Google Maps and more (Senior Director, Products for All)

    Reading Time: < 1 minute

    New designs for Project Relate and Sound Notifications

    We’re committed to an ongoing partnership with the disability community to improve our accessibility features, including updates based on user feedback.

    • Customize how you teach Project Relate. In 2022, we launched Project Relate, an Android app for people with non-standard speech, that allows you to create a personalized speech recognition model to communicate and be better understood. Custom Cards allow you to customize the phrases you teach the model so it understands words that are important to you. Now, there’s a new way for you to select text and import phrases from other apps as Custom Cards, like a note in a Google Doc.
    • New design for Sound Notifications with feedback from you. Sound Notifications alerts you when household sounds happen — like a doorbell ringing or a smoke alarm going off — with push notifications, flashes from your camera light, or vibrations on your phone. We’ve redesigned Sound Notifications based on user feedback, improving the onboarding process and sound event browsing, and making it easier to save custom sounds for appliances.

    Website: LINK

  • Experience Google AI in even more ways on Android (President, Android Ecosystem)

    Reading Time: < 1 minute

    Circle to Search can now help students with homework

    With Circle to Search built directly into the user experience, you can search anything you see on your phone using a simple gesture — without having to stop what you’re doing or switch to a different app. Since launching at Samsung Unpacked, we’ve added new capabilities to Circle to Search, like full-screen translation, and we’ve expanded availability to more Pixel and Samsung devices.

    Starting today, Circle to Search can now help students with homework, giving them a deeper understanding, not just an answer — directly from their phones and tablets. When students circle a prompt they’re stuck on, they’ll get step-by-step instructions to solve a range of physics and math word problems without leaving their digital info sheet or syllabus. Later this year, Circle to Search will be able to help solve even more complex problems involving symbolic formulas, diagrams, graphs and more. This is all possible due to our LearnLM effort to enhance our models and products for learning.

    Circle to Search is already available on more than 100 million devices today. With plans to bring the experience to more devices, we’re on track to double that by the end of the year.

    Website: LINK

  • 4 updates from the 2024 Google for Games Developer Summit (General Manager)

    Reading Time: < 1 minute

    Gaming can bring people together, which is why we’re committed to making the gaming experience fun and engaging for everyone. You can earn rewards for playing your favorite games on Play, connect with a passionate community of gamers on YouTube, discover new titles you’ll love through Ads, and enjoy secure, seamless gameplay powered by Cloud.

    This week at the Google for Games Developer Summit, we unveiled a suite of new tools and product features for developers and gamers. With these updates, developers can take their games to the next level so players like you can experience even more immersive worlds and have new ways to interact with your favorite titles. Here’s a look at what’s new.

    1. Play Pass gets even better

    Starting today, Google Play Pass subscribers in select markets will receive in-game items and discounts on popular games like EA SPORTS FC™ Mobile, Mobile Legends: Bang Bang, MONOPOLY GO! and Roblox. This is offered in addition to our current catalog of over 1,000 ad-free games and apps, so you get even more value at the same monthly price.