DeepMind

Latest articles

Women at DeepMind | Applying for Technical Roles

It’s no secret that the gender gap still exists within STEM. Despite a slight increase in recent years, studies show that women only make up about a quarter of the overall STEM workforce in the UK, for example. While the reasons vary, many women report feeling held back by a lack of representation, clear opportunities and information on what working...

DeepMind x UCL | Deep Learning Lectures | 10/12 | Unsupervised Representation Learning

Unsupervised learning is one of the three major branches of machine learning (along with supervised learning and reinforcement learning). It is also arguably the least developed branch. Its goal is to find a parsimonious description of the input data by uncovering and exploiting its hidden structures. This is presumed to be more reminiscent of how the...

DeepMind x UCL | Deep Learning Lectures | 9/12 | Generative Adversarial Networks

Generative adversarial networks (GANs), first proposed by Ian Goodfellow et al. in 2014, have emerged as one of the most promising approaches to generative modeling, particularly for image synthesis. In their most basic form, they consist of two "competing" networks: a generator which tries to produce data resembling a given data distribution (e.g.,...

DeepMind x UCL | Deep Learning Lectures | 8/12 | Attention and Memory in Deep Learning

Attention and memory have emerged as two vital new components of deep learning over the last few years. This lecture by DeepMind Research Scientist Alex Graves covers a broad range of contemporary attention mechanisms, including the implicit attention present in any deep network, as well as both discrete and differentiable variants of explicit attention....
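A differentiable explicit-attention mechanism of the kind surveyed in the lecture can be sketched as scaled dot-product attention. This is an illustrative numpy sketch, not the lecture's code; the query/key/value shapes are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row sums to 1: a soft, differentiable lookup
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries of dimension 4
K = rng.normal(size=(6, 4))   # 6 keys of dimension 4
V = rng.normal(size=(6, 8))   # 6 values of dimension 8

out, w = attention(Q, K, V)
print(out.shape)  # (2, 8): one weighted combination of values per query
```

Because the weights come from a softmax rather than a hard argmax, the whole lookup is differentiable — the property that distinguishes this from the discrete attention variants the lecture also covers.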

DeepMind x UCL | Deep Learning Lectures | 7/12 | Deep Learning for Natural Language Processing

This lecture, by DeepMind Research Scientist Felix Hill, is split into three parts. First, he discusses the motivation for modelling language with ANNs: language is highly contextual, typically non-compositional and relies on reconciling many competing sources of information. This section also covers Elman's Finding Structure in Time and simple recurrent...

DeepMind x UCL | Deep Learning Lectures | 6/12 | Sequences and Recurrent Networks

In this lecture, DeepMind Research Scientist Marta Garnelo focuses on sequential data and how machine learning methods have been adapted to process this particular type of structure. Marta starts by introducing some fundamentals of sequence modeling including common architectures designed for this task such as RNNs and LSTMs. She then moves on to sequence-to-sequence...
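The core idea behind the recurrent architectures mentioned above — one set of weights reused at every time step, with a hidden state carrying context forward — can be sketched as a vanilla RNN cell. This is an illustrative numpy sketch with made-up dimensions, not code from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size = 3, 5
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))  # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size)) # hidden -> hidden
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # Same weights at every step; h_prev injects information from the past.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Process a sequence of 7 input vectors with the shared weights.
sequence = rng.normal(size=(7, input_size))
h = np.zeros(hidden_size)
for x_t in sequence:
    h = rnn_step(x_t, h)

print(h.shape)  # (5,) — a fixed-size summary of the whole sequence
```

LSTMs refine this cell with gating to control what enters and leaves the state, addressing the vanishing-gradient problems of the plain recurrence.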

DeepMind x UCL | Deep Learning Lectures | 3/12 | Convolutional Neural Networks for Image Recognition

In the past decade, convolutional neural networks have revolutionised computer vision. In this lecture, DeepMind Research Scientist Sander Dieleman takes a closer look at convolutional network architectures through several case studies, ranging from the early '90s to the current state of the art. He also reviews some of the building blocks that are...
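The basic building block behind these architectures — a small filter slid across the image so the same weights detect a pattern wherever it appears — can be sketched directly. An illustrative numpy sketch (computing the cross-correlation, as deep learning libraries do), not code from the lecture:

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over every valid position (no padding, stride 1).
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A 5x5 image: left half bright, right half dark.
image = np.zeros((5, 5))
image[:, :3] = 1.0

# A vertical-edge detector: responds where brightness drops left-to-right.
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])

out = conv2d(image, kernel)
print(out.shape)  # (3, 3); the response peaks at the edge columns
```

Sharing the filter weights across positions is what gives convolutional networks their translation equivariance and their small parameter count relative to fully connected layers.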

DeepMind x UCL | Deep Learning Lectures | 4/12 | Advanced Models for Computer Vision

Following on from the previous lecture, DeepMind Research Scientist Viorica Patraucean introduces classic computer vision tasks beyond image classification (object detection, semantic segmentation, optical flow estimation) and describes state-of-the-art models for each, together with standard benchmarks. She discusses similar models for video processing...

DeepMind x UCL | Deep Learning Lectures | 1/12 | Intro to Machine Learning & AI

In this lecture DeepMind Research Scientist and UCL Professor Thore Graepel explains DeepMind's machine-learning-based approach towards AI. He gives examples of how deep learning and reinforcement learning can be combined to build intelligent systems, including AlphaGo, Capture The Flag, and AlphaStar. This is followed by a short introduction to the different...

DeepMind x UCL | Deep Learning Lectures | 5/12 | Optimization for Machine Learning

Optimization methods are the engines underlying neural networks that enable them to learn from data. In this lecture, DeepMind Research Scientist James Martens covers the fundamentals of gradient-based optimization methods, and their application to training neural networks. Major topics include gradient descent, momentum methods, 2nd-order methods,...
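Two of the topics listed — gradient descent and momentum — can be sketched together on a simple quadratic. This is an illustrative numpy sketch, not the lecture's material; the loss, step size, and momentum coefficient are all made up for the example.

```python
import numpy as np

# Quadratic loss f(x) = 0.5 * x^T A x, whose gradient is A x.
A = np.array([[10.0, 0.0],
              [0.0, 1.0]])   # ill-conditioned: curvature differs per axis

def grad(x):
    return A @ x

x = np.array([1.0, 1.0])
v = np.zeros(2)
lr, beta = 0.02, 0.9         # step size and momentum coefficient

for _ in range(300):
    v = beta * v - lr * grad(x)   # velocity accumulates a decaying sum of past gradients
    x = x + v                      # classical (heavy-ball) momentum update

print(x)  # close to the minimum at the origin
```

Setting `beta = 0` recovers plain gradient descent; with momentum, the accumulated velocity damps oscillation along the high-curvature axis while speeding progress along the low-curvature one — the motivation for momentum methods on ill-conditioned problems.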
