DeepMind Blog

Putting the power of AlphaFold into the world’s hands

When we announced AlphaFold 2 last December, it was hailed as a solution to the 50-year-old protein folding problem. Last week, we published the scientific paper and source code explaining how we created this highly innovative system, and today we’re s…

Melting Pot: an evaluation suite for multi-agent reinforcement learning

Here we introduce Melting Pot, a scalable evaluation suite for multi-agent reinforcement learning. Melting Pot assesses generalisation to novel social situations involving both familiar and unfamiliar individuals, and has been designed to test a broad …

An update on our racial justice efforts

In June 2020, after George Floyd was killed in Minneapolis (USA) and millions spoke out in solidarity at Black Lives Matter protests around the world, I – like many others – reflected on the situation and how our organisation could co…

Advancing sports analytics through AI research

Creating testing environments to help progress AI research out of the lab and into the real world is immensely challenging. Given AI’s long association with games, it is perhaps no surprise that sports presents an exciting opportunity, offering researc…

Game theory as an engine for large-scale data analysis

Modern AI systems approach tasks like recognising objects in images and predicting the 3D structure of proteins as a diligent student would prepare for an exam. By training on many example problems, they minimise their mistakes over time until they ach…

Alchemy: A structured task distribution for meta-reinforcement learning

There has been rapidly growing interest in developing methods for meta-learning within deep RL. Although there has been substantive progress toward such ‘meta-reinforcement learning,’ research in this area has been held back by a shortage of benchmark …

Data, Architecture, or Losses: What Contributes Most to Multimodal Transformer Success?

In this work, we examine which aspects of multimodal transformers – attention, losses, and pretraining data – are important to their success at multimodal pretraining. We find that multimodal attention, where both language and image transformers attend …

MuZero: Mastering Go, chess, shogi and Atari without rules

In 2016, we introduced AlphaGo, the first artificial intelligence (AI) program to defeat humans at the ancient game of Go. Two years later, its successor – AlphaZero – learned from scratch to master Go, chess and shogi. Now, in a paper in the journal N…

Imitating Interactive Intelligence

We first create a simulated environment, the Playroom, in which virtual robots can engage in a variety of interesting interactions by moving around, manipulating objects, and speaking to each other. The Playroom’s dimensions can be randomised as can it…

Using JAX to accelerate our research

DeepMind engineers accelerate our research by building tools, scaling up algorithms, and creating challenging virtual and physical worlds for training and testing artificial intelligence (AI) systems. As part of this work, we constantly evaluate new ma…
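As a flavour of why JAX appeals for this kind of work, here is a minimal, illustrative sketch (not DeepMind's actual code) showing JAX's core idea of composable function transformations: `jax.grad` derives a gradient function automatically, and `jax.jit` compiles it with XLA for speed. The linear model and loss below are hypothetical examples chosen for simplicity.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Squared-error loss for a simple linear model: pred = x @ w
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

# Compose transformations: grad builds d(loss)/d(w), jit compiles it.
grad_loss = jax.jit(jax.grad(loss))

w = jnp.array([1.0, 2.0])
x = jnp.array([[1.0, 0.0],
               [0.0, 1.0]])
y = jnp.array([0.0, 0.0])

g = grad_loss(w, x, y)  # gradient of the loss with respect to w
```

Because transformations compose, the same `loss` function can also be vectorised over a batch with `jax.vmap` or parallelised across devices with `jax.pmap`, without rewriting the model code.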