Journal of Statistical Mechanics: Theory and Experiment
J. Stat. Mech. (2019) 124014
Entropy and mutual information
in models of deep neural networks*
Marylou Gabrié1, Andre Manoel2, Clément Luneau3,
Jean Barbier4, Nicolas Macris3, Florent Krzakala1
and Lenka Zdeborová5
1 Laboratoire de Physique de l’École Normale Supérieure, ENS,
Université PSL, CNRS, Sorbonne Université, Université de Paris, France
2 OWKIN, Inc., New York, NY, United States of America
3 Laboratoire de Théorie des Communications, École Polytechnique
Fédérale de Lausanne, Switzerland
4 International Center for Theoretical Physics, Trieste, Italy
5 Institut de Physique Théorique, CEA, CNRS, Université Paris-Saclay,
France
E-mail: marylou.gabrie@ens.fr
Received 30 May 2019
Accepted for publication 25 June 2019
Published 20 December 2019
Online at stacks.iop.org/JSTAT/2019/124014
https://doi.org/10.1088/1742-5468/ab3430
Abstract. We examine a class of stochastic deep learning models with a
tractable method to compute information-theoretic quantities. Our contributions
are three-fold: (i) we show how entropies and mutual informations can be derived
from heuristic statistical physics methods, under the assumption that weight
matrices are independent and orthogonally-invariant. (ii) We extend particular
cases in which this result is known to be rigorously exact by providing a proof
for two-layer networks with Gaussian random weights, using the recently introduced adaptive interpolation method. (iii) We propose an experimental framework with generative models of synthetic datasets, on which we train
deep neural networks with a weight constraint designed so that the assumption
in (i) is verified during learning. We study the behavior of entropies and mutual informations throughout learning and conclude that, in the proposed setting, the relationship between compression and generalization remains elusive.

© 2019 The Author(s). Published by IOP Publishing Ltd on behalf of SISSA Medialab srl. Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

* This article is an updated version of: Gabrié M, Manoel A, Luneau C, Barbier J, Macris N, Krzakala F and Zdeborová L 2018 Entropy and mutual information in models of deep neural networks Advances in Neural Information Processing Systems 31 (Red Hook, NY: Curran Associates, Inc.) pp 1821–1831
Keywords: machine learning
Contents
1. Multi-layer model and main theoretical results
   1.1. A stochastic multi-layer model
   1.2. Replica formula
   1.3. Rigorous statement
2. Tractable models for deep learning
   2.1. Other related works
3. Numerical experiments
   3.1. Estimators and activation comparisons
   3.2. Learning experiments with linear networks
   3.3. Learning experiments with deep non-linear networks
4. Conclusion and perspectives
Acknowledgments
References
The successes of deep learning methods have spurred efforts towards quantitative
modeling of the performance of deep neural networks. In particular, an information-
theoretic approach linking generalization capabilities to compression has been receiving
increasing interest. The intuition behind the study of mutual informations in latent
variable models dates back to the information bottleneck (IB) theory of [1]. Although
recently reformulated in the context of deep learning [2], verifying its relevance in prac-
tice requires the computation of mutual informations for high-dimensional variables,
a notoriously hard problem. Thus, pioneering works in this direction focused either on
small network models with discrete (continuous, possibly binned) activations [3], or
on linear networks [4, 5].
In the present paper we follow a different direction, and build on recent results
from statistical physics [6, 7] and information theory [8, 9] to propose, in section 1, a
formula to compute information-theoretic quantities for a class of deep neural network
models. The models we approach, described in section 2, are non-linear feed-forward
neural networks trained on synthetic datasets with constrained weights. Such networks
capture some of the key properties of the deep learning setting that are usually difficult
to include in tractable frameworks: non-linearities, arbitrary large width and depth,
and correlations in the input data. We demonstrate the proposed method in a series
of numerical experiments in section 3. First observations suggest a rather complex
picture, where the role of compression in the generalization ability of deep neural net-
works is yet to be elucidated.
1. Multi-layer model and main theoretical results
1.1. A stochastic multi-layer model
We consider a model of multi-layer stochastic feed-forward neural network where each element $x_i$ of the input layer $x \in \mathbb{R}^{n_0}$ is distributed independently as $P_0(x_i)$, while hidden units $t_{\ell,i}$ at each successive layer $t_\ell \in \mathbb{R}^{n_\ell}$ (vectors are column vectors) come from $P_\ell(t_{\ell,i} \,|\, W_{\ell,i}\, t_{\ell-1})$, with $t_0 \equiv x$ and $W_{\ell,i}$ denoting the $i$th row of the matrix of weights $W_\ell \in \mathbb{R}^{n_\ell \times n_{\ell-1}}$. In other words

$$
t_{0,i} \equiv x_i \sim P_0(\cdot), \quad t_{1,i} \sim P_1(\cdot \,|\, W_{1,i}\, x), \;\ldots,\; t_{L,i} \sim P_L(\cdot \,|\, W_{L,i}\, t_{L-1}), \tag{1}
$$

given a set of weight matrices $\{W_\ell\}_{\ell=1}^{L}$ and distributions $\{P_\ell\}_{\ell=1}^{L}$ which encode possible non-linearities and stochastic noise applied to the hidden layer variables, and $P_0$ that generates the visible variables. In particular, for a non-linearity $t_{\ell,i} = \varphi_\ell(h, \xi_{\ell,i})$, where $\xi_{\ell,i} \sim P_\xi(\cdot)$ is the stochastic noise (independent for each $i$), we have $P_\ell(t_{\ell,i} \,|\, W_{\ell,i}\, t_{\ell-1}) = \int \mathrm{d}P_\xi(\xi_{\ell,i})\, \delta\big(t_{\ell,i} - \varphi_\ell(W_{\ell,i}\, t_{\ell-1}, \xi_{\ell,i})\big)$. Model (1) thus describes a Markov chain which we denote by $X \to T_1 \to T_2 \to \cdots \to T_L$, with $T_\ell = \varphi_\ell(W_\ell T_{\ell-1}, \xi_\ell)$, $\xi_\ell = \{\xi_{\ell,i}\}_{i=1}^{n_\ell}$, and the activation function $\varphi_\ell$ applied componentwise.
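To make the model concrete, here is a minimal sampling sketch of the Markov chain (1) in numpy. It is our own illustration (not taken from the paper's released code), using the additive-noise convention $t_\ell = \varphi_\ell(W_\ell t_{\ell-1} + \xi_\ell)$ adopted in the experiments of section 3; the layer sizes, the ReLU activation, the Gaussian prior and the noise level are arbitrary example choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example sizes and noise level (not taken from the paper's experiments).
layer_sizes = [1000, 500, 250]          # n_0, n_1, n_2
sigma_noise = 1e-3

def relu(h):
    return np.maximum(h, 0.0)

# Independent Gaussian weight matrices W_l with i.i.d. N(0, 1/n_{l-1}) entries.
weights = [rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def sample_chain(n_samples):
    """Sample (x, t_1, ..., t_L) from the stochastic feed-forward model (1)."""
    x = rng.normal(size=(layer_sizes[0], n_samples))        # separable prior P_0 = N(0, 1)
    layers = [x]
    for W in weights:
        pre_act = W @ layers[-1]
        xi = sigma_noise * rng.normal(size=pre_act.shape)   # stochastic noise xi_l
        layers.append(relu(pre_act + xi))                   # t_l = phi_l(W_l t_{l-1} + xi_l)
    return layers

x, t1, t2 = sample_chain(n_samples=10)
print(t1.shape, t2.shape)   # (500, 10) (250, 10)
```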
1.2. Replica formula
We shall work in the asymptotic high-dimensional statistics regime where all $\tilde{\alpha}_\ell \equiv n_\ell / n_0$ are of order one while $n_0 \to \infty$, and make the important assumption that all matrices $W_\ell$ are orthogonally-invariant random matrices independent from each other; in other words, each matrix $W_\ell \in \mathbb{R}^{n_\ell \times n_{\ell-1}}$ can be decomposed as a product of three matrices, $W_\ell = U_\ell S_\ell V_\ell$, where $U_\ell \in O(n_\ell)$ and $V_\ell \in O(n_{\ell-1})$ are independently sampled from the Haar measure, and $S_\ell$ is a diagonal matrix of singular values.

The main technical tool we use is a formula for the entropies of the hidden variables, $H(T_\ell) = -\mathbb{E}_{T_\ell} \ln P_{T_\ell}(t_\ell)$, and the mutual information between adjacent layers $I(T_\ell; T_{\ell-1}) = H(T_\ell) + \mathbb{E}_{T_\ell, T_{\ell-1}} \ln P_{T_\ell | T_{\ell-1}}(t_\ell \,|\, t_{\ell-1})$, based on the heuristic replica method [6, 7, 10, 11]:
Claim 1 (Replica formula). Assume model (1) with L layers in the high-dimensional limit, with componentwise activation functions and weight matrices generated from the ensemble described above, and denote by $\lambda_{W_k}$ the eigenvalues of $W_k^{\top} W_k$. Then for any $\ell \in \{1, \ldots, L\}$ the normalized entropy of $T_\ell$ is given by the minimum among all stationary points of the replica potential:

$$
\lim_{n_0 \to \infty} \frac{1}{n_0} H(T_\ell) = \min\, \underset{A, V, \tilde{A}, \tilde{V}}{\mathrm{extr}}\; \phi(A, V, \tilde{A}, \tilde{V}), \tag{2}
$$

which depends on $\ell$-dimensional vectors $A, V, \tilde{A}, \tilde{V}$, and is written in terms of mutual information I and conditional entropies H of scalar variables as

$$
\phi(A, V, \tilde{A}, \tilde{V}) = I\Big(t_0;\, t_0 + \xi_0 / \sqrt{\tilde{A}_1}\Big) - \frac{1}{2} \sum_{k=1}^{\ell} \Big[ \tilde{\alpha}_{k-1} \tilde{A}_k V_k + \alpha_k A_k \tilde{V}_k - F_{W_k}(A_k V_k) \Big] + \sum_{k=1}^{\ell-1} \tilde{\alpha}_k \Big[ H(t_k \,|\, \xi_k; \tilde{A}_{k+1}, \tilde{V}_k, \tilde{\rho}_k) - \tfrac{1}{2} \log\big(2\pi e\, \tilde{A}_{k+1}^{-1}\big) \Big] + \tilde{\alpha}_\ell\, H(t_\ell \,|\, \xi_\ell; \tilde{V}_\ell, \tilde{\rho}_\ell), \tag{3}
$$

where $\alpha_k = n_k / n_{k-1}$, $\tilde{\alpha}_k = n_k / n_0$, $\rho_k = \int \mathrm{d}P_{k-1}(t)\, t^2$, $\tilde{\rho}_k = (\mathbb{E}_{\lambda_{W_k}} \lambda_{W_k})\, \rho_k / \alpha_k$, and $\xi_k \sim \mathcal{N}(0, 1)$ for $k = 0, \ldots, \ell$. In the computation of the conditional entropies in (3), the scalar $t_k$-variables are generated from $P(t_0) = P_0(t_0)$ and

$$
P(t_k \,|\, \xi_k; A, V, \rho) = \mathbb{E}_{\tilde{\xi}, \tilde{z}}\; P_k\Big(t_k + \tilde{\xi}/\sqrt{A} \;\Big|\; \sqrt{\rho - V}\, \xi_k + \sqrt{V}\, \tilde{z}\Big), \quad k = 1, \ldots, \ell - 1, \tag{4}
$$

$$
P(t_\ell \,|\, \xi_\ell; V, \rho) = \mathbb{E}_{\tilde{z}}\; P_\ell\Big(t_\ell \;\Big|\; \sqrt{\rho - V}\, \xi_\ell + \sqrt{V}\, \tilde{z}\Big), \tag{5}
$$

where $\tilde{\xi}$ and $\tilde{z}$ are independent $\mathcal{N}(0, 1)$ random variables. Finally, the function $F_{W_k}(x)$ depends on the distribution of the eigenvalues $\lambda_{W_k}$ following

$$
F_{W_k}(x) = \min_{\theta \in \mathbb{R}} \Big\{ 2 \alpha_k \theta + (\alpha_k - 1) \ln(1 - \theta) + \mathbb{E}_{\lambda_{W_k}} \ln\big[ x \lambda_{W_k} + (1 - \theta)(1 - \alpha_k \theta) \big] \Big\}. \tag{6}
$$
The computation of the entropy in the large dimensional limit, a computationally difficult task, has thus been reduced to an extremization of a function of $4\ell$ variables, which requires evaluating single or bidimensional integrals. This extremization can be done efficiently by means of a fixed-point iteration starting from different initial conditions, as detailed in the supplementary material (stacks.iop.org/JSTAT/19/124014/mmedia). Moreover, a user-friendly Python package is provided [12], which performs the computation for different choices of prior $P_0$, activations $\varphi_\ell$ and spectra $\lambda_{W_\ell}$. Finally, the mutual information between successive layers $I(T_\ell; T_{\ell-1})$ can be obtained from the entropy following the evaluation of an additional bidimensional integral, see section 1.6.1 of the supplementary material.
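As an illustration of how the spectral term enters in practice, the following sketch evaluates $F_{W_k}(x)$ as written in (6) by a one-dimensional minimization over $\theta$, using eigenvalue samples of $W_k^{\top} W_k$ for a Gaussian matrix. This is our own hedged example, not the implementation of the released package [12]; the restriction of $\theta$ to the range where both logarithms are defined, the lower bound of the search interval and the finite eigenvalue sample are practical choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def F_W(x, eigvals, alpha):
    """Numerical evaluation of the spectral term (6) from samples of the eigenvalues of W^T W."""
    def objective(theta):
        return (2.0 * alpha * theta
                + (alpha - 1.0) * np.log(1.0 - theta)
                + np.mean(np.log(x * eigvals + (1.0 - theta) * (1.0 - alpha * theta))))
    # Keep 1 - theta > 0 and 1 - alpha * theta > 0 so that both logarithms stay finite;
    # the lower bound is an arbitrary practical cut-off.
    theta_max = min(1.0, 1.0 / alpha) - 1e-9
    return minimize_scalar(objective, bounds=(-20.0, theta_max), method="bounded").fun

# Example: W with i.i.d. N(0, 1/n_in) entries and aspect ratio alpha = n_out / n_in.
n_in, n_out = 1000, 500
alpha = n_out / n_in
W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in))
eigvals = np.clip(np.linalg.eigvalsh(W.T @ W), 0.0, None)   # spectrum of W^T W (includes zeros)

print(F_W(1.0, eigvals, alpha))
```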
Our approach in the derivation of (3) builds on recent progress in statistical estimation and information theory for generalized linear models, following the application of methods from the statistical physics of disordered systems [10, 11] in communications [13], statistics [14] and machine learning problems [15, 16]. In particular, we use advanced mean-field theory [17] and the heuristic replica method [6, 10], along with its recent extension to multi-layer estimation [7, 8], in order to derive the above formula (3). This derivation is lengthy and thus given in the supplementary material. In a related contribution, Reeves [9] proposed a formula for the mutual information in the multi-layer setting, using heuristic information-theoretic arguments. Like ours, it exhibits layer-wise additivity, and the two formulas are conjectured to be equivalent.
1.3. Rigorous statement
We recall the assumptions under which the replica formula of claim 1 is conjectured to be exact: (i) weight matrices are drawn from an ensemble of random orthogonally-invariant matrices, (ii) matrices at different layers are statistically independent and (iii) layers have a large dimension and respective sizes of adjacent layers are such that weight matrices have aspect ratios $\{\alpha_k, \tilde{\alpha}_k\}$ of order one. While we could not prove the replica prediction in full generality, we stress that it comes with multiple credentials: (i) for Gaussian prior $P_0$ and Gaussian distributions $P_\ell$, it corresponds to the exact analytical solution when weight matrices are independent of each other (see section 1.6.2 of the supplementary material). (ii) In the single-layer case with a Gaussian weight matrix, it reduces to formula (6) in the supplementary material, which has been recently rigorously proven for (almost) all activation functions $\varphi$ [18]. (iii) In the case of Gaussian distributions $P_\ell$, it has also been proven for a large ensemble of random matrices [19] and (iv) it is consistent with all the results of the AMP [20–22] and VAMP [23] algorithms, and their multi-layer versions [7, 8], known to perform well for these estimation problems.
In order to go beyond results for the single-layer problem and heuristic arguments,
we prove claim 1 for the more involved multi-layer case, assuming Gaussian i.i.d.
matrices and two non-linear layers:
Theorem 1 (Two-layer Gaussian replica formula). Suppose (H1) the input units distribution $P_0$ is separable and has bounded support; (H2) the activations $\varphi_1$ and $\varphi_2$, corresponding to $P_1(t_{1,i} \,|\, W_{1,i}\, x)$ and $P_2(t_{2,i} \,|\, W_{2,i}\, t_1)$, are bounded $C^2$ with bounded first and second derivatives w.r.t. their first argument; and (H3) the weight matrices $W_1$, $W_2$ have Gaussian i.i.d. entries. Then for model (1) with two layers (L = 2) the high-dimensional limit of the entropy verifies claim 1.
The theorem, which settles the conjecture presented in [7], is proven using the adaptive interpolation method of [18, 24, 25] in a multi-layer setting, as first developed in [26]. The lengthy proof, presented in detail in section 2 of the supplementary material, is of independent interest and adds further credentials to the replica formula, as well as offering a clear direction for further developments. Note that, following the same approximation arguments as in [18], where the proof is given for the single-layer case, the hypothesis (H1) can be relaxed to the existence of the second moment of the prior, (H2) can be dropped and (H3) extended to matrices with i.i.d. entries of zero mean, $O(1/n_0)$ variance and finite third moment.
2. Tractable models for deep learning
The multi-layer model presented above can be leveraged to simulate two prototypical settings of deep supervised learning on synthetic datasets amenable to the tractable replica computation of entropies and mutual informations.

The first scenario is the so-called teacher-student setting (see figure 1, left). Here, we assume that the input $x$ is distributed according to a separable prior distribution $P_X(x) = \prod_i P_0(x_i)$, factorized over the components of $x$, and the corresponding label $y$ is given by applying a mapping $x \to y$, called the teacher. After generating a training and a test set in this manner, we perform the training of a deep neural network, the student, on the synthetic dataset. In this case, the data themselves have a simple structure given by $P_0$.
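For illustration, the sketch below generates such a teacher-student dataset with the noisy linear teacher later used in section 3.2; the sizes are scaled down here, and the code is our own illustration, not the released package [50].

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher-student data generation; sizes scaled down for illustration
# (section 3.2 uses n up to 1500 and training sets of a few hundred times n).
n, n_y = 100, 4
n_train = n_test = 200 * n

W_teach = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n_y, n))   # teacher weights ~ N(0, 1/n)

def teacher_dataset(n_samples):
    """Inputs from a separable prior P_0 = N(0, 1); labels from a noisy linear teacher."""
    X = rng.normal(size=(n_samples, n))
    noise = np.sqrt(0.01) * rng.normal(size=(n_samples, n_y))
    Y = X @ W_teach.T + noise
    return X, Y

X_train, Y_train = teacher_dataset(n_train)
X_test, Y_test = teacher_dataset(n_test)
print(X_train.shape, Y_train.shape)   # (20000, 100) (20000, 4)
```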
In contrast, the second scenario allows generative models (see figure 1, right) that create more structure, and that are reminiscent of the generative-recognition pair of models of a variational autoencoder (VAE). A code vector $y$ is sampled from a separable prior distribution $P_Y(y) = \prod_i P_0(y_i)$ and a corresponding data point $x$ is generated by a possibly stochastic neural network, the generative model. This setting makes it possible to create input data $x$ featuring correlations, unlike the teacher-student scenario. The studied supervised learning task then consists in training a deep neural net, the recognition model, to recover the code $y$ from $x$.
In both cases, the chain going from $X$ to any later layer is a Markov chain in the form of (1). In the first scenario, model (1) directly maps to the student network. In the second scenario, however, model (1) actually maps to the feed-forward combination of the generative model followed by the recognition model. This shift is necessary to verify the assumption that the starting point (now given by $Y$) has a separable distribution. In particular, it generates correlated input data $X$ while still allowing for the computation of the entropy of any $T_\ell$.
Figure 1. Two models of synthetic data.

At the start of a neural network training, weight matrices initialized as i.i.d. Gaussian random matrices satisfy the necessary assumptions of the formula of claim 1. In their singular value decomposition

$$
W_\ell = U_\ell S_\ell V_\ell, \tag{7}
$$

the matrices $U_\ell \in O(n_\ell)$ and $V_\ell \in O(n_{\ell-1})$ are typical independent samples from the Haar measure across all layers. To make sure weight matrices remain close enough to independent during learning, we define a custom weight constraint which consists in keeping $U_\ell$ and $V_\ell$ fixed while only the matrix $S_\ell$, constrained to be diagonal, is updated. The number of parameters is thus reduced from $n_\ell \times n_{\ell-1}$ to $\min(n_\ell, n_{\ell-1})$. We refer to layers following this weight constraint as USV-layers. For the replica formula of claim 1 to be correct, the matrices $S_\ell$ from different layers should furthermore remain uncorrelated during the learning. In section 3, we consider the training of linear networks for which information-theoretic quantities can be computed analytically, and confirm numerically that with USV-layers the replica predicted entropy is correct at all times. In the following, we assume that this is also the case for non-linear networks.
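Below is a minimal sketch of a USV-layer written as a custom tf.keras layer, given purely for illustration (the learning experiments of section 3 use the released package [50], whose implementation may differ). The fixed factors U and V are drawn once via a QR decomposition of Gaussian matrices, a standard way to obtain approximately Haar-distributed orthogonal matrices, and only the vector of singular values s is trainable; the "ones" initializer for s and the example sizes are arbitrary choices.

```python
import numpy as np
import tensorflow as tf

class USVLayer(tf.keras.layers.Layer):
    """Dense layer W = U diag(s) V with U, V fixed random orthogonal and only s trainable."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        n_in = int(input_shape[-1])
        k = min(n_in, self.units)
        # Fixed orthogonal factors (QR of Gaussian matrices, approximately Haar-distributed).
        u, _ = np.linalg.qr(np.random.randn(self.units, self.units))
        v, _ = np.linalg.qr(np.random.randn(n_in, n_in))
        self.u = tf.constant(u[:, :k], dtype="float32")      # (units, k)
        self.v = tf.constant(v[:k, :], dtype="float32")      # (k, n_in)
        # Only the k singular values are learned: O(n) parameters instead of O(n^2).
        self.s = self.add_weight(name="s", shape=(k,),
                                 initializer="ones", trainable=True)

    def call(self, inputs):
        w = tf.matmul(self.u * self.s, self.v)                # W = U diag(s) V, shape (units, n_in)
        return tf.matmul(inputs, w, transpose_b=True)

# Usage sketch: a linear USV student in the spirit of section 3.2 (sizes are illustrative).
model = tf.keras.Sequential([tf.keras.Input(shape=(100,)),
                             USVLayer(100), USVLayer(100), USVLayer(100),
                             tf.keras.layers.Dense(4)])
model.compile(optimizer=tf.keras.optimizers.SGD(0.01), loss="mse")
```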
In section 3.2 of the supplementary material, we train a neural network with USV-
layers on a simple real-world dataset (MNIST), showing that these layers can learn
to represent complex functions despite their restriction. We further note that such
a product decomposition is reminiscent of a series of works on adaptive structured efficient linear layers (SELLs and ACDC) [27, 28], motivated this time by speed gains, where only diagonal matrices are learned (in these works the matrices U and V are chosen instead as permutations of Fourier or Hadamard matrices, so that the matrix multiplication can be replaced by fast transforms). In section 3, we discuss learning experiments with USV-layers on synthetic datasets.
While we have defined model (1) as a stochastic model, traditional feed-forward neural networks are deterministic. In the numerical experiments of section 3, we train and test networks without injecting noise, and only assume a noise model in the computation of information-theoretic quantities. Indeed, for continuous variables the presence of noise is necessary for mutual informations to remain finite (see the discussion of appendix C in [5]). We assume at layer $\ell$ an additive white Gaussian noise of small amplitude just before passing through its activation function to obtain $H(T_\ell)$ and $I(T_\ell; T_{\ell-1})$, while keeping the mapping $X \to T_{\ell-1}$ deterministic. This choice attempts to stay as close as possible to the deterministic neural network, but remains inevitably somewhat arbitrary (see again the discussion of appendix C in [5]).
2.1. Other related works
The strategy of studying neural network models, with random weight matrices and/or random data, using methods originating in statistical physics heuristics, such as the replica and cavity methods [10], has a long history. Before the deep learning era,
this approach led to pioneering results in learning for the Hopfield model [29] and for
the random perceptron [15, 16, 30, 31].
Recently, the successes of deep learning along with the disqualifying complexity of
studying real world problems have sparked a revived interest in the direction of random
weight matrices. Recent results, without aiming at exhaustivity, were obtained on the spectrum
of the Gram matrix at each layer using random matrix theory [32, 33], on expressivity
of deep neural networks [34], on the dynamics of propagation and learning [35–38], on
the high-dimensional non-convex landscape where the learning takes place [39], or on
the universal random Gaussian neural nets of [40].
The information bottleneck theory [1] applied to neural networks consists in com-
puting the mutual information between the data and the learned hidden representa-
tions on the one hand, and between the labels and, again, the learned hidden representations
on the other hand [2, 3]. A successful training should maximize the information with
respect to the labels and simultaneously minimize the information with respect to
the input data, preventing overfitting and leading to a good generalization. While
this intuition suggests new learning algorithms and regularizers [41–47], we can also
hypothesize that this mechanism is already at play in a priori unrelated commonly used
optimization methods, such as the simple stochastic gradient descent (SGD). It was
first tested in practice by [3] on very small neural networks, to allow the entropy to be estimated by binning the hidden neurons' activities. Afterwards, the authors of [5] reproduced the results of [3] on small networks using the continuous entropy estimator of [45], but found that the overall behavior of mutual information during learning is greatly affected when changing the nature of the non-linearities. Additionally, they investigated the training of larger linear networks on i.i.d. normally distributed inputs where
entropies at each hidden layer can be computed analytically for an additive Gaussian
noise. The strategy proposed in the present paper allows us to evaluate entropies and
mutual informations in non-linear networks larger than in [3, 5].
3. Numerical experiments
We present a series of experiments aiming both at further validating the replica estimator and at leveraging its power in noteworthy applications. A first application, presented in section 3.1, consists in using the replica formula, in settings where it is proven to be rigorously exact, as a basis of comparison for other entropy estimators. The same experiment also contributes to the discussion of the information bottleneck theory for neural networks by showing how, without any learning, information-theoretic quantities have different behaviors for different non-linearities. In section 3.2, we validate the accuracy of the replica formula in a learning experiment with USV-layers, where it is not proven to be exact, by considering the case of linear networks for which information-theoretic quantities can be otherwise computed in closed form. We finally consider, in section 3.3, a second application testing the information bottleneck theory for large non-linear networks. To this aim, we use the replica estimator to study compression effects during learning.
3.1. Estimators and activation comparisons
Two non-parametric estimators have already been considered by [5] to compute entro-
pies and/or mutual informations during learning. The kernel-density approach of
Kolchinsky et al [45] consists in fitting a mixture of Gaussians (MoG) to samples of the
variable of interest and subsequently computing an upper bound on the entropy of the MoG [48]. The method of Kraskov et al [49] uses nearest neighbor distances between samples to directly build an estimate of the entropy. Both methods require the computation of the matrix of distances between samples. Recently, [46] proposed a new non-parametric estimator for mutual informations which involves the optimization of a neural network to tighten a bound. It is unfortunately computationally hard to test how these estimators behave in high dimension since, even for a known distribution, the computation of the entropy is intractable in most cases. However, the replica method
proposed here is a valuable point of comparison for cases where it is rigorously exact.
In the first numerical experiment we place ourselves in the setting of theorem 1: a
2-layer network with i.i.d. weight matrices, where the formula of claim 1 is thus rigor-
ously exact in the limit of large networks, and we compare the replica results with
the non-parametric estimators of [45] and [49]. Note that the requirement for smooth
activations (H2) of theorem 1 can be relaxed (see discussion below the theorem). Additionally, non-smooth functions can be approximated arbitrarily closely by smooth functions with equal information-theoretic quantities, up to numerical precision.
We consider a neural network with layers of equal size n = 1000 that we denote: $X \to T_1 \to T_2$. The input variable components are i.i.d. Gaussian with mean 0 and variance 1. The weight matrix entries are also i.i.d. Gaussian with mean 0. Their standard deviation is rescaled by a factor $1/\sqrt{n}$ and then multiplied by a coefficient $\sigma$ varying between 0.1 and 10, i.e. around the recommended value for training initialization. To compute entropies, we consider noisy versions of the latent variables where an additive white Gaussian noise of very small variance ($\sigma^2_{\rm noise} = 10^{-5}$) is added right before the activation function, $T_1 = f(W_1 X + \epsilon_1)$ and $T_2 = f(W_2 f(W_1 X) + \epsilon_2)$ with $\epsilon_1, \epsilon_2 \sim \mathcal{N}(0, \sigma^2_{\rm noise} I_n)$, which is also done in the remaining experiments to guarantee that the mutual informations remain finite. The non-parametric estimators [45, 49] were evaluated using 1000 samples, as the cost of computing pairwise distances is significant in such high dimension, and we checked that the entropy estimate is stable over independent draws of a sample of such a size (error bars smaller than marker size). On figure 2, we compare the different estimates of $H(T_1)$ and $H(T_2)$ for different activation functions: linear, hardtanh or ReLU. The hardtanh activation is a piecewise linear approximation of the tanh, ${\rm hardtanh}(x) = -1$ for $x < -1$, $x$ for $-1 \leq x \leq 1$, and 1 for $x > 1$, for which the integrals in the replica formula can be evaluated faster than for the tanh.
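For reference, the sampling of the noisy latent variables fed to the estimators in this experiment can be written in a few lines of numpy; the sketch below is our own illustration of the setup just described, here with the hardtanh activation and a single value of $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)

def hardtanh(x):
    """Piecewise linear approximation of tanh: -1 for x < -1, x for |x| <= 1, 1 for x > 1."""
    return np.clip(x, -1.0, 1.0)

n, n_samples = 1000, 1000
sigma = 1.0                       # weight scaling coefficient, varied between 0.1 and 10
sigma_noise = np.sqrt(1e-5)       # std of the additive noise of variance 10^-5

X = rng.normal(size=(n, n_samples))                         # inputs ~ N(0, I_n)
W1 = sigma * rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
W2 = sigma * rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))

eps1 = sigma_noise * rng.normal(size=(n, n_samples))
eps2 = sigma_noise * rng.normal(size=(n, n_samples))
T1 = hardtanh(W1 @ X + eps1)                                # T1 = f(W1 X + eps1)
T2 = hardtanh(W2 @ hardtanh(W1 @ X) + eps2)                 # T2 = f(W2 f(W1 X) + eps2)
```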
In the linear and hardtanh cases, the non-parametric methods follow the tendency of the replica estimate when $\sigma$ is varied, but appear to systematically overestimate the entropy. For linear networks with Gaussian inputs and additive Gaussian noise, every layer is also a multivariate Gaussian and therefore entropies can be directly computed in closed form (exact in the plot legend). When using the Kolchinsky estimate in the linear case we also check the consistency of two strategies, either fitting the MoG to the noisy sample, or fitting the MoG to the deterministic part of the $T_\ell$ and augmenting the resulting variance with $\sigma^2_{\rm noise}$, as done in [45] (Kolchinsky et al parametric in the plot legend). In the network with hardtanh non-linearities, we check that for small weight values the entropies are the same as in a linear network with the same weights (linear approx in the plot legend, computed using the exact analytical result for linear networks and therefore plotted in a similar color to exact). Lastly, in the case of the ReLU–ReLU network, we note that the non-parametric methods predict an entropy increasing like that of a linear network with identical weights, whereas the replica computation reflects its knowledge of the cut-off and accurately features a slope equal to half of the linear network entropy (1/2 linear approx in the plot legend). While non-parametric estimators are invaluable tools able to approximate entropies from the mere knowledge of samples, they inevitably introduce estimation errors. The replica method takes the opposite view. While being restricted to a class of models, it can leverage its knowledge of the neural network structure to provide a reliable estimate. To our knowledge, there is no other entropy estimator able to incorporate such information about the underlying multi-layer model.
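The exact baseline for the linear case is simply the entropy of a multivariate Gaussian: with $f$ the identity, $T_1 = W_1 X + \epsilon_1$ has covariance $W_1 W_1^{\top} + \sigma^2_{\rm noise} I_n$, so $H(T_1) = \frac{1}{2} \log\det\big(2\pi e\, (W_1 W_1^{\top} + \sigma^2_{\rm noise} I_n)\big)$. A minimal sketch of this check (our own illustration, normalized by the layer size as in (2)):

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of N(0, cov) in nats: 0.5 * logdet(2*pi*e*cov)."""
    dim = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    assert sign > 0
    return 0.5 * (dim * np.log(2.0 * np.pi * np.e) + logdet)

rng = np.random.default_rng(0)
n, sigma, var_noise = 1000, 1.0, 1e-5
W1 = sigma * rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))

cov_T1 = W1 @ W1.T + var_noise * np.eye(n)   # T1 = W1 X + eps is Gaussian for X ~ N(0, I_n)
print(gaussian_entropy(cov_T1) / n)          # normalized entropy, comparable to the replica estimate
```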
Beyond informing about estimator accuracy, this experiment also unveils a simple but possibly important distinction between activation functions. For the hardtanh activation, as the magnitude of the random weights increases, the entropies decrease after reaching a maximum, whereas they only increase for the unbounded activation functions we consider, even for the single-sided saturating ReLU. This loss of information for bounded activations was also observed by [5], where entropies were computed by discretizing the output of each neuron into bins of equal size. In this setting, as the tanh activation starts to saturate for large inputs, the extreme bins (at −1 and 1) concentrate more and more probability mass, which explains the information loss. Here we confirm that the phenomenon is also observed when computing the entropy of the hardtanh (without binning and with small noise injected before the non-linearity). We check via the replica formula that the same phenomenology arises for the mutual informations $I(X; T_\ell)$ (see section 3.1 of the supplementary material).
3.2. Learning experiments with linear networks
In the following, and in section 3.3 of the supplementary material, we discuss training experiments on different instances of the deep learning models defined in section 2. We seek to study the simplest possible training strategies achieving good generalization. Hence for all experiments we use plain stochastic gradient descent (SGD) with constant learning rates, without momentum and without any explicit form of regularization. The sizes of the training and testing sets are taken equal and scale typically as a few hundred times the size of the input layer. Unless otherwise stated, plots correspond to single runs, yet we checked over a few repetitions that outcomes of independent runs lead to identical qualitative behaviors. The values of the mutual informations $I(X; T_\ell)$ are computed by considering noisy versions of the latent variables where an additive white Gaussian noise of very small variance ($\sigma^2_{\rm noise} = 10^{-5}$) is added right before the activation function, as in the previous experiment. This noise is neither present at training time, where it could act as a regularizer, nor at testing time. Given that the noise is only assumed at the last layer, the second-to-last layer is a deterministic mapping of the input variable;
hence the replica formula, yielding mutual informations between adjacent layers, gives us directly $I(T_\ell; T_{\ell-1}) = H(T_\ell) - H(T_\ell \,|\, T_{\ell-1}) = H(T_\ell) - H(T_\ell \,|\, X) = I(T_\ell; X)$.

Figure 2. Entropy of latent variables in stochastic networks $X \to T_1 \to T_2$, with equally sized layers n = 1000, inputs drawn from $\mathcal{N}(0, I_n)$, weights from $\mathcal{N}(0, \sigma^2 I_{n^2}/n)$, as a function of the weight scaling parameter $\sigma$. An additive white Gaussian noise $\mathcal{N}(0, 10^{-5} I_n)$ is added inside the non-linearity. Left column: linear network. Center column: hardtanh–hardtanh network. Right column: ReLU–ReLU network.

We
provide a second Python package [50] to implement in Keras learning experiments on
synthetic datasets, using USV-layers and interfacing the first Python package [12] for
replica computations.
To start with, we consider the training of a linear network in the teacher-student scenario. The teacher has also to be linear to be learnable: we consider a simple single-layer network with additive white Gaussian noise, $Y = \tilde{W}_{\rm teach} X + \epsilon$, with input $x \sim \mathcal{N}(0, I_n)$ of size n, teacher matrix $\tilde{W}_{\rm teach}$ with entries i.i.d. normally distributed as $\mathcal{N}(0, 1/n)$, noise $\epsilon \sim \mathcal{N}(0, 0.01 I_{n_Y})$, and output of size $n_Y = 4$. We train a student network of three USV-layers, plus one fully connected unconstrained layer, $X \to T_1 \to T_2 \to T_3 \to \hat{Y}$, on the regression task, using plain SGD for the MSE loss $(\hat{Y} - Y)^2$. We recall that in the USV-layers (7) only the diagonal matrix is updated during learning. On the left panel of figure 3, we report the learning curve and the mutual informations between the hidden layers and the input in the case where all layers but the output have size n = 1500. Again, this linear setting is analytically tractable and does not require the replica formula; a similar situation was studied in [5]. In agreement with their observations, we find that the mutual informations $I(X; T_\ell)$ keep on increasing throughout the learning, without compromising the generalization ability of the student. Now, we also use this linear setting to demonstrate (i) that the replica formula remains correct throughout the learning of the USV-layers and (ii) that the replica method gets closer and closer to the exact result in the limit of large networks, as theoretically predicted by (2). To this aim, we repeat the experiment for n varying between 100 and 1500, and report the maximum and the mean value of the squared error on the estimation of the $I(X; T_\ell)$ over all epochs of five independent training runs. We find that even if errors tend to increase with the number of layers, they remain objectively very small and decrease drastically as the size of the layers increases.
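The analytic reference for this linear setting is elementary: with the noise assumed only at layer $\ell$, $T_\ell = M_\ell X + \epsilon$ where $M_\ell$ is the product of the learned weight matrices up to layer $\ell$, so that $I(X; T_\ell) = \frac{1}{2} \log\det\big(I + M_\ell M_\ell^{\top} / \sigma^2_{\rm noise}\big)$ for $X \sim \mathcal{N}(0, I_n)$. A hedged sketch of this check (ours, with a random stand-in for the learned matrices and illustrative sizes):

```python
import numpy as np

def linear_mutual_information(M, var_noise):
    """I(X; M X + eps) in nats for X ~ N(0, I) and eps ~ N(0, var_noise * I)."""
    gram = M @ M.T / var_noise
    sign, logdet = np.linalg.slogdet(np.eye(M.shape[0]) + gram)
    return 0.5 * logdet

rng = np.random.default_rng(0)
n, var_noise = 200, 1e-5
# M_l: product of the (USV-constrained) weight matrices up to layer l; random stand-in here.
W1 = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
W2 = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
M2 = W2 @ W1

print(linear_mutual_information(M2, var_noise) / n)   # normalized I(X; T_2), to compare with replica
```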
3.3. Learning experiments with deep non-linear networks
Finally, we apply the replica formula to estimate mutual informations during the train-
ing of non-linear networks on correlated input data.
We consider a simple single-layer generative model $X = \tilde{W}_{\rm gen} Y + \epsilon$ with normally distributed code $Y \sim \mathcal{N}(0, I_{n_Y})$ of size $n_Y = 100$, data of size $n_X = 500$ generated with a matrix $\tilde{W}_{\rm gen}$ with entries i.i.d. normally distributed as $\mathcal{N}(0, 1/n_Y)$, and noise $\epsilon \sim \mathcal{N}(0, 0.01 I_{n_X})$. We then train a recognition model to solve the binary classification problem of recovering the label $y = {\rm sign}(Y_1)$, the sign of the first neuron in $Y$, using plain SGD, but this time to minimize the cross-entropy loss. Note that the rest of the initial code $(Y_2, \ldots, Y_{n_Y})$ acts as noise/nuisance with respect to the learning task. We compare two 5-layer recognition models with 4 USV-layers plus one unconstrained layer, of sizes 500-1000-500-250-100-2, and activations either linear-ReLU-linear-ReLU-softmax (top row of figure 4) or linear-hardtanh-linear-hardtanh-softmax (bottom row). Because USV-layers only feature $O(n)$ parameters instead of $O(n^2)$, we observe that they generally require more iterations to train. In the case of the ReLU network, adding interleaved linear layers was key to successful training with two non-linearities, which explains the somewhat unusual architecture proposed. For the recognition model using hardtanh, this was actually not an issue (see the supplementary material for an experiment using only hardtanh activations); however, we consider a similar architecture for a fair comparison. We further discuss the learning ability of USV-layers in the supplementary material.
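For completeness, the correlated-input classification dataset of this experiment can be generated as in the sketch below (our own illustration, with the sizes quoted above; the label keeps only the sign of the first code component, the remaining components acting as nuisance):

```python
import numpy as np

rng = np.random.default_rng(0)

n_y, n_x = 100, 500
W_gen = rng.normal(0.0, 1.0 / np.sqrt(n_y), size=(n_x, n_y))   # generative weights ~ N(0, 1/n_Y)

def generative_dataset(n_samples):
    """Correlated inputs X = W_gen Y + eps and binary labels y = sign(Y_1)."""
    Y = rng.normal(size=(n_samples, n_y))                       # code ~ N(0, I_{n_Y})
    eps = np.sqrt(0.01) * rng.normal(size=(n_samples, n_x))
    X = Y @ W_gen.T + eps
    labels = (Y[:, 0] > 0).astype(int)                          # y = sign(Y_1), encoded as {0, 1}
    return X, labels

X_train, y_train = generative_dataset(100 * n_x)
X_test, y_test = generative_dataset(100 * n_x)
print(X_train.shape, y_train.mean())   # (50000, 500), label balance ~ 0.5
```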
This experiment is reminiscent of the setting of [3], yet now tractable for networks of larger sizes. For both types of non-linearities, we observe that the mutual information between the input and all hidden layers decreases during the learning, except at the very beginning of training, where we can sometimes observe a short phase of increase (see zoom in insets). For the hardtanh layers this phase is longer and the initial increase is of noticeable amplitude.

Figure 3. Training of a 4-layer linear student of varying size on a regression task generated by a linear teacher of output size $n_Y = 4$. Upper-left: MSE loss on the training and testing sets during training by plain SGD for layers of size n = 1500. Best training loss is 0.004735, best testing loss is 0.004789. Lower-left: corresponding mutual information evolution between hidden layers and input. Center-left, center-right, right: maximum and squared error of the replica estimation of the mutual information as a function of layer size n, over the course of five independent trainings for each value of n, for the first, second and third hidden layers.

Figure 4. Training of two recognition models on a binary classification task with correlated input data and either ReLU (top) or hardtanh (bottom) non-linearities. Left: training and generalization cross-entropy loss (left axis) and accuracies (right axis) during learning. Best training-testing accuracies are 0.995–0.991 for the ReLU version (top row) and 0.998–0.996 for the hardtanh version (bottom row). Remaining columns: mutual information between the input and successive hidden layers. Insets zoom on the first epochs.
In this particular experiment, the claim of [3] that compression can occur during
training even with a non-double-saturating activation seems corroborated (a phenomenon
that was not observed by [5]). Yet we do not observe that the compression is more
pronounced in deeper layers and its link to generalization remains elusive. For instance,
we do not see a delay in the generalization w.r.t. training accuracy/loss in the recogni-
tion model with hardtanh despite an initial phase without compression in two layers.
Furthermore, we find that changing the weight initialization can drastically change the behavior of mutual informations during training while resulting in identical training and testing final performances. In an additional experiment, we consider a setting closely related to the classification on correlated data presented above. On figure 5 we compare three identical 5-layer recognition models with sizes 500-1000-500-250-100-2, and activations hardtanh-hardtanh-hardtanh-hardtanh-softmax, for the same generative model and binary classification rule as in the previous experiment. For the model presented in the top row, initial weights were sampled according to $W_{\ell, ij} \sim \mathcal{N}(0, 4/n_{\ell-1})$, for the model of the middle row $\mathcal{N}(0, 1/n_{\ell-1})$ was used instead, and finally $\mathcal{N}(0, 1/(4 n_{\ell-1}))$ for the bottom row. The first column shows that training is delayed for the weights initialized at smaller values, but eventually catches up and reaches accuracies superior to 0.97 both in training and testing. Meanwhile, mutual informations have different initial values for the different weight initializations and follow very different paths. They either decrease during the entire learning, or on the contrary only increase, or actually feature a hybrid path. We further note that it is to some extent surprising that the mutual information would increase at all in the first row, if we expect the hardtanh saturation to instead induce compression. Figure 4 of the supplementary material presents a second run of the same experiment with a different random seed. Findings are identical.

Figure 5. Learning and hidden-layer mutual information curves for a classification problem with correlated input data, using 4 USV hardtanh layers and 1 unconstrained softmax layer, from three different initializations. Top: initial weights at layer $\ell$ of variance $4/n_{\ell-1}$, best training accuracy 0.999, best test accuracy 0.994. Middle: initial weights at layer $\ell$ of variance $1/n_{\ell-1}$, best train accuracy 0.994, best test accuracy 0.9937. Bottom: initial weights at layer $\ell$ of variance $0.25/n_{\ell-1}$, best train accuracy 0.975, best test accuracy 0.974. The overall direction of evolution of the mutual information can be flipped by a change in weight initialization without drastically changing the final performance in the classification task.
Further learning experiments, including a second run of the last two experiments,
are presented in the supplementary material.
4. Conclusion and perspectives
We have presented a class of deep learning models together with a tractable method
to compute entropy and mutual information between layers. This, we believe, offers
a promising framework for further investigations, and to this aim we provide Python
packages that facilitate both the computation of mutual informations and the train-
ing, for an arbitrary implementation of the model. In the future, allowing for biases
by extending the proposed formula would improve the fitting power of the considered
neural network models.
We observe in our high-dimensional experiments that compression can happen dur-
ing learning, even when using ReLU activations. While we did not observe a clear link
between generalization and compression in our setting, there are many directions to be
further explored within the models presented in section 2. Studying the entropic effect of regularizers is a natural step towards formulating an entropic interpretation of generalization. Furthermore, while our experiments focused on supervised learning, the replica
formula derived for multi-layer models is general and can be applied in unsupervised
contexts, for instance in the theory of VAEs. On the rigorous side, the greater perspec-
tive remains proving the replica formula in the general case of multi-layer models, and
further confirm that the replica formula stays true after the learning of the USV-layers.
Another question worth of future investigation is whether the replica method can be
used to describe not only entropies and mutual informations for learned USV-layers,
but also the optimal learning of the weights itself.
Acknowledgments
The authors would like to thank Léon Bottou, Antoine Maillard, Marc Mézard, Léo
Miolane, and Galen Reeves for insightful discussions. This work has been supported
by the ERC under the European Union’s FP7 Grant Agreement 307087-SPARCS
and the European Union’s Horizon 2020 Research and Innovation Program 714608-
SMiLe, as well as by the French Agence Nationale de la Recherche under grant ANR-
17-CE23-0023-01 PAIL. Additional funding is acknowledged by MG from ‘Chaire de
recherche sur les modèles et sciences des données’, Fondation CFM pour la Recherche-
ENS; by AM from Labex DigiCosme; and by CL from the Swiss National Science
Foundation under Grant 200021E-175541. We gratefully acknowledge the support of
NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
References
[1] Tishby N, Pereira F C and Bialek W 1999 The information bottleneck method 37th Annual Allerton Conf.
on Communication, Control, and Computing
[2] Tishby N and Zaslavsky N 2015 Deep learning and the information bottleneck principle IEEE Information
Theory Workshop pp 1
[3] Shwartz-Ziv R and Tishby N 2017 Opening the black box of deep neural networks via information
(arXiv:1703.00810)
[4] Chechik G, Globerson A, Tishby N and Weiss Y 2005 Information bottleneck for Gaussian variables
J. Mach. Learn. Res. 6 165–88
[5] Saxe A M, Bansal Y, Dapello J, Advani M, Kolchinsky A, Tracey B D and Cox D D 2018 On the informa-
tion bottleneck theory of deep learning Int. Conf. on Learning Representations
[6] Kabashima Y 2008 Inference from correlated patterns: a unified theory for perceptron learning and linear
vector channels J. Phys.: Conf. Ser. 95 012001
[7] Manoel A, Krzakala F, Mézard M and Zdeborová L 2017 Multi-layer generalized linear estimation IEEE Int.
Symp. on Information Theory pp 2098–102
[8] Fletcher A K, Rangan S and Schniter P 2018 Inference in deep networks in high dimensions IEEE Int.
Symp. on Information Theory vol 1 pp 1884–8
[9] Reeves G 2017 Additivity of information in multilayer networks via additive Gaussian noise transforms 55th
Annual Allerton Conf. on Communication, Control, and Computing
[10] Mézard M, Parisi G and Virasoro M 1987 Spin Glass Theory and Beyond (Singapore: World Scientific)
[11] Mézard M and Montanari A 2009 Information, Physics, and Computation (Oxford: Oxford University
Press)
[12] 2018 Dnner: deep neural networks entropy with replicas, Python library (https://github.com/sphinxteam/dnner)
[13] Tulino A M, Caire G, Verdú S and Shamai S 2013 Support recovery with sparsely sampled free
random matrices IEEE Trans. Inf. Theory 59 4243–71
[14] Donoho D and Montanari A 2016 High dimensional robust M-estimation: asymptotic variance via
approximate message passing Probab. Theory Relat. Fields 166 935–69
[15] Seung H S, Sompolinsky H and Tishby N 1992 Statistical mechanics of learning from examples Phys. Rev. A
45 6056
[16] Engel A and Van den Broeck C 2001 Statistical Mechanics of Learning (Cambridge: Cambridge University
Press)
[17] Opper M and Saad D 2001 Advanced mean field methods: Theory and practice (Cambridge, MA: MIT
Press)
[18] Barbier J, Krzakala F, Macris N, Miolane L and Zdeborová L 2019 Optimal errors and phase transitions in high-dimensional generalized linear models Proc. Natl Acad. Sci. 116 5451–60
[19] Barbier J, Macris N, Maillard A and Krzakala F 2018 The mutual information in random linear estimation
beyond i.i.d. matrices IEEE Int. Symp. on Information Theory pp 625–32
[20] Donoho D, Maleki A and Montanari A 2009 Message-passing algorithms for compressed sensing Proc. Natl
Acad. Sci. 106 18914–9
[21] Zdeborová L and Krzakala F 2016 Statistical physics of inference: thresholds and algorithms Adv. Phys.
65 453–552
[22] Rangan S 2011 Generalized approximate message passing for estimation with random linear mixing IEEE
Int. Symp. on Information Theory pp 2168–72
[23] Rangan S, Schniter P and Fletcher A K 2017 Vector approximate message passing IEEE Int. Symp. on
Information Theory pp 1588–92
[24] Barbier J and Macris N 2019 The adaptive interpolation method for proving replica formulas. Applications
to the Curie–Weiss and Wigner spike models J. Phys. A 52 294002
[25] Barbier J and Macris N 2019 The adaptive interpolation method: a simple scheme to prove replica formulas
in Bayesian inference Probab Theory Relat. Fields 174 1133–85
[26] Barbier J, Macris N and Miolane L 2017 The layered structure of tensor estimation and its mutual informa-
tion 55th Annual Allerton Conf. on Communication, Control, and Computing pp 1056–63
[27] Moczulski M, Denil M, Appleyard J and de Freitas N 2016 ACDC: a structured efficient linear layer Int.
Conf. on Learning Representations
[28] Yang Z, Moczulski M, Denil M, de Freitas N, Smola A, Song L and Wang Z 2015 Deep fried convnets IEEE
Int. Conf. on Computer Vision pp 1476–83
[29] Amit D J, Gutfreund H and Sompolinsky H 1985 Storing infinite numbers of patterns in a spin-glass model
of neural networks Phys. Rev. Lett. 55 1530
[30] Gardner E and Derrida B 1989 Three unfinished works on the optimal storage capacity of networks J. Phys.
A 22 1983
[31] Mézard M 1989 The space of interactions in neural networks: Gardner’s computation with the cavity method
J. Phys. A 22 2181
[32] Louart C and Couillet R 2017 Harnessing neural networks: a random matrix approach IEEE Int. Conf. on
Acoustics, Speech and Signal Processing pp 2282–6
[33] Pennington J and Worah P 2017 Nonlinear random matrix theory for deep learning Advances in Neural
Information Processing Systems
[34] Raghu M, Poole B, Kleinberg J, Ganguli S and Sohl-Dickstein J 2017 On the expressive power of deep neural
networks Int. Conf. on Machine Learning
[35] Saxe A, McClelland J and Ganguli S 2014 Exact solutions to the nonlinear dynamics of learning in deep
linear neural networks Int. Conf. on Learning Representations
[36] Schoenholz S S, Gilmer J, Ganguli S and Sohl-Dickstein J 2017 Deep information propagation Int. Conf. on
Learning Representations
[37] Advani M and Saxe A 2017 High-dimensional dynamics of generalization error in neural networks
(arXiv:1710.03667)
[38] Baldassi C, Braunstein A, Brunel N and Zecchina R 2007 Efficient supervised learning in networks with
binary synapses Proc. Natl Acad. Sci. 104 11079–84
[39] Dauphin Y, Pascanu R, Gulcehre C, Cho K, Ganguli S and Bengio Y 2014 Identifying and attacking the sad-
dle point problem in high-dimensional non-convex optimization Advances in Neural Information Process-
ing Systems
[40] Giryes R, Sapiro G and Bronstein A M 2016 Deep neural networks with random Gaussian weights: a univer-
sal classification strategy? IEEE Trans. Signal Process. 64 3444–57
[41] Chalk M, Marre O and Tkacik G 2016 Relevant sparse codes with variational information bottleneck
Advances in Neural Information Processing Systems
[42] Achille A and Soatto S 2018 Information dropout: learning optimal representations through noisy computa-
tion IEEE Trans. Pattern Anal. Mach. Intell. pp 2897–905
[43] Alemi A, Fischer I, Dillon J and Murphy K 2017 Deep variational information bottleneck Int. Conf. on
Learning Representations
[44] Achille A and Soatto S 2017 Emergence of invariance and disentangling in deep representations ICML 2017
Workshop on Principled Approaches to Deep Learning
[45] Kolchinsky A, Tracey B D and Wolpert D H 2017 Nonlinear information bottleneck (arXiv:1705.02436)
[46] Belghazi M I, Baratin A, Rajeswar S, Ozair S, Bengio Y, Courville A and Hjelm R D 2018 MINE: mutual
information neural estimation Int. Conf. on Machine Learning
[47] Zhao S, Song J and Ermon S 2017 InfoVAE: information maximizing variational autoencoders
(arXiv:1706.02262)
[48] Kolchinsky A and Tracey B D 2017 Estimating mixture entropy with pairwise distances Entropy 19 361
[49] Kraskov A, Stögbauer H and Grassberger P 2004 Estimating mutual information Phys. Rev. E 69 066138
[50] 2018 lsd: Learning with Synthetic Data, Python library (https://github.com/marylou-gabrie/learning-synthetic-data)