Q&A

Are universities lagging behind in AI research? Oriol Vinyals

"More breakthroughs will be required for us to get closer to our own learning capabilities."

Oriol Vinyals is a Principal Research Scientist at Google DeepMind and one of the most prominent figures worldwide in deep learning (a sub-branch of machine learning that focuses on neural networks). Together with his co-workers he developed a technology now used in Gmail's Smart Reply and has made decisive contributions to commercial translation systems.

What are the most important recent breakthroughs in AI?

The breakthroughs in AI (or, more precisely, machine learning) can be roughly categorized into a few fronts or areas:

- Hardware. GPUs (typically used for video games) became critical for accelerating the computations needed by the kinds of models involved in deep learning. Nowadays, deep-learning-specific hardware exists (e.g. TPUs), and many startups are being founded around hardware due to its importance.
- Software. Many institutions and companies started developing tools for deep learning, but more importantly, they also open sourced them. Notable examples include Google’s TensorFlow and Facebook’s PyTorch. This enables anyone with some programming expertise to jump in and contribute to the field of machine learning, and many interesting works have come out thanks to this.
- Data. All our algorithms are trained to “imitate” a dataset of labelled examples, and many more data sources have emerged alongside the growth in compute. Notable examples include ImageNet (labelled images of e.g. dogs, cats and cars) or the many machine translation datasets (pairs of sentences in two different languages with the same meaning).
- Algorithms. Even though most of the algorithms have been around for decades (neural networks, gradient descent), the field has grown and many more components have been added to the toolbox, which together make up the plethora of models and applications that machine learning enables today (a minimal sketch of this toolbox in action follows this list).
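
To make these ingredients concrete, here is a minimal sketch of the kind of workflow these open-source frameworks enable: a tiny neural network trained with gradient descent to imitate labelled examples, written here with PyTorch. The data, model size, and hyperparameters are placeholders chosen purely for illustration, not anything from an actual system.

```python
# A tiny supervised-learning loop: a neural network imitating labelled examples
# via gradient descent, using PyTorch. The "dataset" is random stand-in data.
import torch
import torch.nn as nn

# Stand-in labelled dataset: 256 examples, 20 features each, binary labels.
inputs = torch.randn(256, 20)
labels = torch.randint(0, 2, (256,))

# A small neural network: the kind of model these frameworks make easy to define.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # plain gradient descent

# Training loop: repeatedly descend the gradient of the loss on the labelled data.
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```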

How close are we to super artificial intelligence?

In my opinion, we are quite far from it. Even what is meant by AGI is not clear amongst the experts. I think many more breakthroughs will be required for us to get closer to our own amazing learning capabilities.

According to Max Tegmark, one of the founders of the Future of Life Institute, “we must make sure that machines learn the collective goals of humanity, that they adopt these goals for themselves and retain them as they get smarter, starting now”. What's your take on this? Do you also believe that otherwise there is a risk of a dictatorship powered by super artificial intelligence?

There is a lot of discussion about these dangers in our community, and I welcome the addition of workshops on the topic at the main conferences in our field.

Perhaps this wasn’t discussed very much in 2015, but nowadays the topic is carefully studied and researched by many, and there are more and more institutes (such as the Future of Life Institute) in which many of the big players take part, which is reassuring.

As with any powerful technology, the societal implications and regulations should keep up with the pace of innovation, and on this front I am positive that we are on the right track and working well to minimize these risks.

There are other topics that are even more important than these and are also being researched by many of my colleagues. One that piqued my interest recently is bias in the datasets we use in our field. It would be hard to describe everything that’s happening in this answer, but I recommend starting with Emily Denton’s tutorials on the matter.

What is the role of computer games in the development of Artificial Intelligence? Can you give us some examples of the social applications of gaming?

Computer games have played an important role in the development of deep reinforcement learning, a paradigm in which, instead of using a dataset of human-labelled examples that our models learn to imitate, you have an environment in which an agent tries to accomplish a goal.
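
To illustrate the difference from supervised imitation, here is a minimal sketch of that agent-environment loop. The toy guessing-game environment, the random policy, and the reward scheme are all invented placeholders; a real deep reinforcement learning agent would improve its policy from the reward signal rather than acting randomly.

```python
# A toy agent-environment loop: no labelled dataset, just actions and rewards.
import random

class GuessingGameEnv:
    """Toy environment: the agent must guess a hidden number between 0 and 9."""
    def reset(self):
        self.target = random.randint(0, 9)
        return 0  # initial observation (uninformative here)

    def step(self, action):
        reward = 1.0 if action == self.target else 0.0
        done = action == self.target
        return 0, reward, done  # observation, reward, episode finished?

def random_policy(observation):
    """Placeholder agent: a learning agent would improve this from reward."""
    return random.randint(0, 9)

env = GuessingGameEnv()
for episode in range(3):
    obs, done, total_reward, steps = env.reset(), False, 0.0, 0
    while not done:
        obs, reward, done = env.step(random_policy(obs))
        total_reward += reward
        steps += 1
    print(f"episode {episode}: solved in {steps} steps, reward {total_reward}")
```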

For several reasons (they are easy to scale up and run entirely in simulation, among others), games have been at the forefront of this. At DeepMind, we worked on a series of breakthroughs that first enabled our algorithms to master Atari games and culminated in AlphaStar, which achieved Grandmaster level at the real-time strategy game StarCraft II, a project that I led and which was tons of fun! The other way around, i.e. what AI can do for games (rather than what games have done for AI), is far less explored by us, but video game companies are investing more and more in this (which is the reason Blizzard partnered with us on AlphaStar), and I’m looking forward to seeing what comes out of that!

A recent article in The New York Times argued that A.I. research is “becoming increasingly expensive, requiring complex calculations done by giant data centers, leaving fewer people with easy access to the computing firepower necessary to develop the technology behind futuristic products”. This could lead to a world of haves (big tech companies) and have-nots. What's your opinion on this, and how can this be democratised?

Firstly, not all breakthroughs require massive amounts of computation. In fact, one of the key advances of recent years, called “attention”, came out of MILA, a university lab that couldn’t compete in terms of resources with the method we proposed at Google, called seq2seq, and instead invented attention, which is now everywhere.
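
For context, the sketch below shows the core of the attention operation in its now-standard scaled dot-product form: each query softly chooses which source positions to read from via a softmax over similarity scores. The shapes and random inputs are placeholders for illustration, and the original formulation from MILA (additive attention) differs in its details.

```python
# Scaled dot-product attention: a weighted sum of values, where the weights are
# a softmax over query-key similarities. Inputs here are random placeholders.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    # Similarity between each query and each key, scaled by the key dimension.
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = softmax(scores, axis=-1)   # how much each query attends to each source position
    return weights @ values, weights     # weighted sum of values per query

rng = np.random.default_rng(0)
q = rng.normal(size=(3, 8))   # 3 target positions (e.g. words being generated)
k = rng.normal(size=(5, 8))   # 5 source positions (e.g. words being translated)
v = rng.normal(size=(5, 8))
output, weights = attention(q, k, v)
print(weights.round(2))       # each row sums to 1: a soft alignment over the source
```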

As such, thinking outside the box really helps develop new applications and breakthroughs, which also relates to the answer I gave about open sourcing and democratizing access to machine learning tools for everyone. I have also seen great examples coming from researchers in less developed parts of the world, who have applied machine learning to help harvest more food, for instance, which is quite incredible and surprising to me. That being said, there are kinds of research that can only be conducted using the largest supercomputers. This is no different from, e.g., CERN, which is a unique piece of hardware that enables physicists to answer questions about particles that would otherwise be impossible. Maybe there is something we can learn from that model of research, but there are already good steps being taken to share the resources that private companies have with universities, and I’m quite happy to see Google and others at the forefront of this.

A report by the Allen Institute for Artificial Intelligence says that the volume of calculations needed “to be a leader in A.I. tasks like language understanding, game playing and common-sense reasoning has soared an estimated 300,000 times in the last six years”. In a technological future that requires staggering amounts of computing, can the world afford such levels of energy consumption?

I am not familiar with that estimate, but luckily performance per watt is also increasing exponentially, so let’s hope hardware development keeps up. In fact, I believe AI can help with hardware design, and there is already some research showing this!

In your opinion, what can the main contributions of AI be to the well-being of society in the short term?

Given the current situation the world is in, I believe even more strongly that health is an area in which AI has the potential to enable better care for everyone (especially those who cannot afford basic healthcare).

This, combined with the possibility of scientific breakthroughs accelerated by AI (such as the current work on protein folding carried out at DeepMind in the AlphaFold project, but also elsewhere), makes me quite excited about positive outcomes from AI in the next few years.