This fact sheet on deep learning summarizes some facts deemed significant by our Fellows as part of the Presans Platform activity.

1. What is deep learning?

Deep learning is a subset of machine learning techniques based on artificial neural networks, which are composed of formal neurons. By analogy with biological neurons, formal neurons have mathematical and computational counterparts of dendrites, axons, and synapses. In deep learning, the network architecture makes it possible to hierarchically decompose the content of complex data, such as a voice recording or a face image, and then categorize it.
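To make the idea concrete, here is a minimal sketch of a formal neuron and of a layer of such neurons. All names and numbers are illustrative; a real deep network stacks many such layers, each transforming the previous layer's outputs, which is what enables the hierarchical decomposition described above.

```python
import math

def formal_neuron(inputs, weights, bias):
    """A formal neuron: a weighted sum of inputs (the 'synapses')
    passed through a nonlinear activation (here a sigmoid)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

def layer(inputs, weight_matrix, biases):
    """One layer: every neuron reads the same inputs with its own weights.
    Stacking layers feeds each layer's outputs into the next one."""
    return [formal_neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]
```

With zero weights and bias, a neuron outputs 0.5 (the sigmoid of 0); training consists of adjusting the weights so that the network's final layer produces the desired categories.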

Applications of deep learning have been at the heart of the rise of artificial intelligence over the past fifteen years. They essentially consist of sorting and automating tasks ranging from facial recognition to the analysis of medical images, by way of robotic fruit sorting.

The machine's advantage is not just a matter of automation and speed. Deep learning can also enable sorting where human perception is at a disadvantage, for example recognizing faces in the dark by detecting infrared radiation.

2. The challenge of explainability

Deep learning has the essential characteristic of not relying on rules set down before learning. It is a black box that gives access to neither rules nor categorization factors, and the implicit rules that result from learning are therefore difficult to explain. This limits the reliability of deep learning.
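The contrast can be sketched in a few lines. This is a hypothetical illustration, not an implementation of any particular system: a hand-written rule is directly readable by a human, while even a tiny learned model exposes only numeric weights that correspond to no nameable criterion.

```python
# An explicit rule: the decision criterion is human-readable.
def rule_based_is_spam(message: str) -> bool:
    return "free money" in message.lower()

# A learned model: the "rule" is an opaque vector of weights.
# These values are invented for illustration; a real deep network
# has millions of them, none mapping to an explainable criterion.
learned_weights = [0.73, -1.42, 0.05, 2.10]

def learned_score(features):
    # The score is a weighted sum; why each weight has its value
    # is exactly what explainability research tries to uncover.
    return sum(w * f for w, f in zip(learned_weights, features))
```

Inspecting `rule_based_is_spam` tells you exactly what it does; inspecting `learned_weights` tells you almost nothing, which is the black-box problem in miniature.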

The challenge of explainability points to the next step in the development of artificial intelligence: the ability to function robustly in an uncertain world, characterized by incomplete information, exploiting a multitude of dimensions to adapt to unpredictable contexts.

3. Technological bricks

On the theoretical side, the statistical techniques behind deep learning are not fundamentally new. It is the increase in the performance/cost ratio of computers that changed the game and put these techniques back on the agenda. This ratio also benefits from the emergence of specialized hardware solutions, designed by graphics card manufacturers.

4. Actors

The expression “deep learning” dates back to the second half of the 1980s and was relaunched by Yann LeCun, current director of Facebook’s AI laboratory in France. All the Internet giants use deep learning.

The geopolitical dimension of deep learning applications is also becoming clear in the current upsurge of defense innovation among the major powers (1, 2, 3). DARPA in particular has run various technological challenges related to deep learning, such as the autonomous-vehicle challenges of 2005–2007 and, more recently, its work on explainable AI. Current calls for projects address questions of explainability as well as energy and security aspects.

Laurence Devillers is one of the specialists on the question of explainability.

5. Ethical consequences

Surveillance and facial recognition systems in public spaces in China rely on deep learning.

In the West, one of the major discussions concerns biases contained in the data, which lead to deep learning applications that reproduce social discrimination deemed undesirable. There is also a fear of artificial intelligence systems, particularly military ones, that do not take certain ethical distinctions into account.

The alarming possibility of using deep learning to create “deep fakes” is another unavoidable subject. Deep fakes are video animations whose synthetic character is difficult to detect, and the techniques for making them are available as open source. In the face of deep learning, we must relearn the virtues of skepticism.