*theory and numerical simulation of heat, mass, and charge transport in condensed matter*

*last updated on May 19, 2019* — a printable version of these notes is available here

## tentative syllabus for a five-lecture introductory course

**Lecture 1: Introduction**

- What diffusion is all about;
- Extensive variables, intensive variables, thermal equilibrium;
- Conserved densities and currents: continuity equation;
- Osmotic forces, Fick's law, and Einstein's relation between friction and diffusion coefficients;
- A mathematical intermezzo: stochastic processes and related concepts;
- Brownian motion.

**Lecture 2-3: Theoretical foundations**

- Hydrodynamic variables and Onsager's transport equations;
- The operator form of classical mechanics: Hamilton's *vs.* Liouville's equations;
- Linear-response theory and the Green-Kubo and Einstein-Helfand expressions for transport coefficients.

**Lecture 3-4: Heat transport**

- Thermal *vs.* mechanical perturbations: heat transport;
- The classical formula for the heat current;
- Gauge invariance of transport coefficients;
- Density-functional theory of adiabatic heat transport.

**Lectures 4-5: Computer simulations**

- Computer simulations: classical and *ab initio* molecular dynamics;
- Data analysis for equilibrium properties: the correlation time;
- Computer simulation of transport phenomena;
- Spectral theory of time series: the Wiener-Khintchine theorem;
- Cepstral analysis for uni- and multi-variate time series.

## Lecture 1 - Introduction

### What diffusion is all about

Diffusion is the process by which a conserved quantity (such as, *e.g.*, mass, charge, energy, or the number of molecules of a given species in a non-reactive system) spreads evenly in space. The movie below (pilfered somewhere on the internet: sorry for forgetting the source and being thus unable to acknowledge the author) displays the diffusion of a red dye in two vessels of water at different temperatures: can you tell which one is warmer? Why? Much of this short course is devoted to presenting the beautiful theory that explains this process and related transport phenomena. This first lecture aims at providing the basic background for understanding the founding paper of this field, written by Albert Einstein in his *annus mirabilis* (1905).

### Extensive variables, intensive variables, thermodynamic equilibrium

Extensive quantities, such as, *e.g.*, energy, are characterized by the property that the value they assume for a system, \(\Omega\), is equal to the sum of the values they assume for any of its partitions into nonoverlapping subsystems, say \(\Omega_1\) and \(\Omega_2\):

\begin{equation} E(\Omega) = E(\Omega_1) + E(\Omega_2). \label{eq:extensivity} \end{equation}

The principle of maximum entropy implies that the intensive variables conjugate to conserved extensive ones are uniform across a system at equilibrium. Entropy is a function of extensive conserved quantities, such as the energy, volume, and numbers of molecules appearing in Eq. \eqref{eq:extensivity}:
\begin{equation}
S = S(E,V,N_1, N_2,\cdots). \label{eq:entropydef}
\end{equation}
The corresponding conjugate, intensive, variables are temperature, pressure, and the various chemical potentials:

\(\begin{align} \frac{\partial S}{\partial E} &= \phantom{-}\frac{1}{T}, \nonumber \\ \frac{\partial S}{\partial V} &= \phantom{-}\frac{p}{T}, \label{eq:eqstate} \\ \frac{\partial S}{\partial N_i} &= -\frac{\mu_i}{T}. \nonumber \end{align}\)

Let \(S(\Omega_1,A_1)\) and \(S(\Omega_2,A_2)\) be the entropies of subsystems 1 and 2, as functions of one of their respective extensive variables, \(A\), and let \(\alpha=\frac{\partial S}{\partial A}\) be the corresponding intensive variable. The total entropy must be stationary with respect to the variation of \(A\) for one of its subsystems, say \(A_1\), with the constraint that \(A_1+A_2=A\), namely:

\(\begin{align} \frac{\partial S}{\partial A_1} &= \frac{\partial}{\partial A_1} \bigl ( S(\Omega_1,A_1) + S(\Omega_2,A-A_1) \bigr ) \nonumber \\ &= \alpha_1-\alpha_2 \\ &= 0. \nonumber \end{align}\)

We conclude that \(\alpha_1=\alpha_2\) and that the intensive variables conjugate to the arguments of the entropy, such as temperature and chemical potentials, are constant across the system.

When \(\alpha(\mathbf{r})\) is not constant, \(\nabla \alpha\ne0\), \(A\) will flow so as to increase the total entropy, *i.e.* from the regions where \(\alpha\) is lower to those where it is higher (for energy, \(\alpha=1/T\), so heat flows from hot to cold). The rate of change of \(A\) within one of the subsystems is:

\(\begin{align} \begin{aligned} \dot A(\Omega_1,t) &= \int_{\Omega_1} \dot a(\mathbf{r},t) d\mathbf{r} \nonumber \\ &= -\int_{\partial\Omega_1} \mathbf{j}_a(\mathbf{r},t) \cdot d\boldsymbol{\sigma}, \end{aligned}\label{eq:rateq} \end{align}\)
where \(\mathbf j_a\) is the associated current density. Applying the divergence theorem to the surface integral in Eq. \eqref{eq:rateq}, and noting that the resulting identity holds for any subsystem, we obtain the *continuity equation*:
\begin{equation}
\dot a(\mathbf{r},t) + \nabla\cdot \mathbf{j}_a(\mathbf{r},t)=0, \label{eq:continuity}
\end{equation}
which is the defining equation of conserved extensive quantities. The density of a conserved quantity and its associated density current are usually dubbed a *conserved density* and a *conserved current*. Assuming local thermodynamic equilibrium (see below), one can define an intensive variable, \(\alpha(\mathbf r)\), as a function of position. In the absence of external fields, the current density \(\mathbf{j}_a(\mathbf r,t)\) is evidently a functional of \(\alpha(\mathbf r,t)\), which vanishes for \(\nabla \alpha = 0\). To leading order in \(\nabla \alpha\), and assuming that the response is local in space and time (which is the case in the long-wavelength, low-frequency regime) and that the system is isotropic, one has:
\begin{equation}
\mathbf{j}_a=\lambda \nabla \alpha. \label{eq:jvsy}
\end{equation}
The sign of \(\lambda\) is fixed by the condition that the total entropy of a system can only increase. Let a system \(\Omega\) be partitioned into \(n\) subsystems, \(\{\Omega_1,\Omega_2,\cdots\Omega_n\}\). The rate of change of the total entropy is:

\(\begin{align} \begin{aligned} \frac{dS(\Omega,t)}{dt} &= \sum_i \frac{\partial S(\Omega_i,t)}{\partial A_i} \frac{d A_i}{dt} \\ &= \sum_i \alpha_i \dot A_i. \end{aligned} \end{align}\)

Passing to the continuum, one has:

\(\begin{align} \begin{aligned} \frac{dS(\Omega,t)}{dt} &= \int_\Omega \alpha(\mathbf r,t) \dot a(\mathbf r,t) d\mathbf r \\ &= -\int_\Omega \alpha(\mathbf r,t) \nabla\cdot \mathbf j(\mathbf r,t) d\mathbf r \\ &= \int_\Omega \nabla \alpha(\mathbf r,t) \cdot \mathbf j(\mathbf r,t) d\mathbf r. \end{aligned} \label{eq:SSource} \end{align}\) By combining Eq. \eqref{eq:SSource} with Eq. \eqref{eq:jvsy} and requiring that the rate of change of the entropy is positive, one obtains the condition that \(\lambda>0\).

#### Local thermodynamical equilibrium

Systems at equilibrium are *characterized* by the relation amongst extensive variables expressed by Eq. \eqref{eq:entropydef}. Any of the relations between intensive and extensive variables obtained by differentiating the entropy in Eq. \eqref{eq:entropydef} with respect to one of its arguments is an *equation of state* that characterizes thermodynamical equilibrium in the system. The very possibility of defining an intensive variable is a consequence of equilibrium, which in turn implies the constancy of intensive variables across the system. Strictly speaking, therefore, the occurrence of an inhomogeneous temperature, pressure, or chemical potential is contradictory. Nevertheless, we are all familiar with temperature or pressure gradients that are maintained by external perturbations and gradually level off when the perturbation is removed. What is assumed here is that the inhomogeneities of the intensive variables occur on such long wavelengths that, at any given time, local intensive variables can be defined that satisfy the equation of state with the local densities of the extensive variables. This means in turn that the time it takes to establish global thermodynamical equilibrium is much longer than the time it takes to satisfy the equation of state locally. Much of this course is devoted to the theory of the approach to equilibrium of intensive variables and conserved densities and of their response to external perturbations.

### Fick's law, diffusion equation, and Einstein's relation between friction and diffusion coefficients

Let us consider the specific case of the number-particle distribution of a solute (the dye) in a solvent (water). The relevant extensive variable is the number of solute molecules, \(A=N\), whose density we indicate with \(n(\mathbf{r})\), and the corresponding intensive variable is the chemical potential divided by temperature, \(\alpha=-\frac{\mu}{T}\). Assuming thermal equilibrium (\(\nabla T=0\)), Eq. \eqref{eq:jvsy} reads: \begin{equation} \mathbf{j}=-\frac{\lambda}{T} \nabla \mu. \label{eq:jvsmu} \end{equation}

When thermodynamic equilibrium holds locally, the chemical potential in Eq. \eqref{eq:jvsmu} is the sum of the equilibrium chemical potentials at the local \((nTp)\) conditions, \(\mu^\circ (nTp)\), plus the external potential, \(v(\mathbf r)\): \(\mu(\mathbf{r}) = \mu^\circ(\mathbf{r}) + v(\mathbf{r})\). When the concentration of the solute is small (*i.e.* for *weak solutions*), the gradient of the local equilibrium chemical potential reads (see here for a derivation):
\begin{equation}
\nabla\mu^\circ=\frac{k_BT}{n} \nabla n.
\end{equation}

One has therefore:

\(\begin{align} \mathbf{j} &= \mathbf{j}_{dr} + \mathbf{j}_{df}, \label{eq:jtot} \\ \mathbf j_{dr} &= \frac{\lambda}{T}\mathbf f, \label{eq:jdrift} \\ \mathbf j_{df} &= -\frac{\lambda k_B}{n} \nabla n, \label{eq:jdiff} \end{align}\)
where \(\mathbf j_{dr}\) and \(\mathbf j_{df}\) are *drift* and *diffusion* currents, respectively, and \(\mathbf f = -\nabla v\) is the external force acting on the solute.

Let us examine three interesting cases, in each of which one of the three fields, \(\nabla n\), \(\mathbf j\), or \(\mathbf f\), vanishes while the other two are related by Eqs. (\ref{eq:jtot}-\ref{eq:jdiff}); the fourth case, where all the fields vanish, is trivial.

- \(\mathbf f\ne 0\) at global thermodynamic equilibrium. In this case, \(\mathbf j=0\) and \begin{equation} \nabla n = \frac{n}{k_BT} \mathbf f. \end{equation} This is the barometric-like distribution verified in Perrin's celebrated sedimentation experiments on colloidal suspensions.
- \(\mathbf f\ne 0\) and \(\nabla n = 0\). In this case, one has: \begin{equation} \mathbf j = \frac{\lambda}{T} \mathbf f. \end{equation} In a viscous fluid, an external force imparts to a particle a steady-state velocity proportional to its strength: \(\mathbf v= \mathcal{M}\,\mathbf f\), \(\mathcal{M}\) being the particle's *mobility*. This is a consequence of the external force being compensated by a friction force, \(\mathbf f_f = -\mathcal{M}^{-1}\mathbf v\). The external force therefore imparts to the solute a steady-state current: \begin{equation} \mathbf j = n\mathcal{M}\,\mathbf f, \end{equation} so that: \begin{equation} \lambda = n\mathcal{M}T. \label{eq:lambda-M} \end{equation}
- \(\mathbf f=0\) and \(\nabla n\ne 0\). In this case, by casting Eq. \eqref{eq:lambda-M} into Eq. \eqref{eq:jdiff}, one gets: \begin{equation} \mathbf j = -D \nabla n, \label{eq:fick} \end{equation} which is Fick's law, where \begin{equation} D=k_B T \mathcal{M} \label{eq:D-Fick} \end{equation} is the diffusion constant. Using Stokes' law for the mobility, \begin{equation} \mathcal{M} = \frac{1}{6\pi \eta R}, \label{eq:stokes} \end{equation} where \(\eta\) is the viscosity of the solvent and \(R\) the radius of the solute particles, supposed to be spherical, one finally arrives at Einstein's expression for the diffusion constant: \begin{equation} D = \frac{k_BT}{6\pi\eta R}. \end{equation}
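As a quick numerical illustration of Einstein's expression, here is a minimal Python sketch; the input values (a 1 µm sphere in room-temperature water) are illustrative assumptions, not numbers from the text:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_D(T, eta, R):
    """Diffusion constant D = k_B*T/(6*pi*eta*R) for a spherical solute."""
    mobility = 1.0 / (6.0 * math.pi * eta * R)  # Stokes' law for the mobility
    return k_B * T * mobility                   # Einstein's relation

# Illustrative values: R = 1 micron sphere in water (eta ~ 1 mPa*s), T = 293 K.
D = stokes_einstein_D(T=293.0, eta=1.0e-3, R=1.0e-6)
# D comes out of order 2e-13 m^2/s, i.e. a spread sqrt(2*D*t) of roughly
# half a micron in one second: exactly the scale Perrin could observe.
```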

By combining Fick's law with the continuity equation \eqref{eq:continuity}, we finally arrive at the diffusion equation, which is a paradigmatic case of parabolic partial differential equation: \begin{equation} \frac{\partial n(\mathbf r,t)}{\partial t} = D\Delta n(\mathbf r,t). \label{eq:diffusion} \end{equation}

The function \begin{equation} g(\mathbf r,t) = \frac{1}{(4\pi Dt)^\frac{3}{2}} \mathrm{e}^{-\frac{r^2}{4Dt}} \label{eq:green-diffusion} \end{equation} is the (unique) solution of Eq. \eqref{eq:diffusion} corresponding to the initial condition: \begin{equation} n(\mathbf r,0)=\delta(\mathbf r) \end{equation} and vanishing at infinity, and is therefore the (free-space) Green's function of Eq. \eqref{eq:diffusion}. The general solution corresponding to an arbitrary initial condition is: \begin{equation} n(\mathbf r,t) = \int g(\mathbf r-\mathbf r',t) n(\mathbf r',0) d\mathbf r'. \label{eq:diffusion-propagation} \end{equation}

The space Fourier transform of the Green's function in Eq. \eqref{eq:green-diffusion} is: \begin{equation} {\tilde{g}} (\mathbf k,t)=\mathrm{e}^{-Dtk^2}. \end{equation}

Let's suppose that the initial (\(t=0\)) particle distribution is Gaussian with variance \(\sigma_0^2\), \begin{equation} n(\mathbf r,0) = \frac{1}{(2\pi\sigma_0^2)^\frac{3}{2}} \mathrm{e}^{-\frac{r^2}{2\sigma_0^2}}. \end{equation}

According to Eq. \eqref{eq:diffusion-propagation}, at a later time the distribution is still a Gaussian, whose variance increases linearly with time: \begin{equation} \sigma^2(t) = \sigma_0^2 + 2Dt. \end{equation}

This linear dependence of the spatial spread of a physical quantity on time is a distinctive feature of diffusion phenomena, and we will see that it is deeply rooted in the physics and mathematics of Brownian motion.
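The linear growth of the variance is easy to verify numerically. The sketch below (arbitrary illustrative parameters and seed) starts a cloud of independent random walkers from a Gaussian of variance \(\sigma_0^2\) and gives each an independent Gaussian increment of variance \(2D\,dt\) per step:

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, n_steps, sigma0 = 0.5, 0.1, 100, 1.0   # illustrative parameters
n_particles = 200_000

# Initial Gaussian cloud of variance sigma0^2 ...
x = rng.normal(0.0, sigma0, n_particles)
# ... then independent Gaussian increments of variance 2*D*dt at each step.
for _ in range(n_steps):
    x += rng.normal(0.0, np.sqrt(2 * D * dt), n_particles)

t = n_steps * dt
sigma2_expected = sigma0**2 + 2 * D * t   # = 11.0 with these parameters
sigma2_measured = x.var()                 # agrees within sampling noise
```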

### A mathematical intermezzo

#### Multivariate normal distributions

A collection of \(n\) zero-mean stochastic variables, \(X=\{x_{l}\}\), is said to be a multi-dimensional normal variate, and will be indicated as \(X\sim\mathcal{N}(0,C)\), if their joint probability density is \(\mathsf{P}(x_{1},x_{2},\cdots x_{n})=\sqrt{\frac{\det K}{(2\pi)^{n}}}\mathrm{e}^{-\frac{1}{2}\sum_{ij}K_{ij}x_{i}x_{j}}\), with \(K=C^{-1}\), \(C\) being the covariance matrix:

\(\begin{align} \begin{aligned} C_{ij} & =\langle x_{i}x_{j}\rangle \\ & =\int\mathsf{P}(x_{1},x_{2},\cdots x_{n})x_{i}x_{j}dX \\ & =-2\sqrt{\frac{\det K}{(2\pi)^{n}}}\frac{\partial}{\partial K_{ij}}\int\mathrm{e}^{-\frac{1}{2}\sum_{lm}K_{lm}x_{l}x_{m}}dX \\ & =-2\sqrt{\frac{\det K}{(2\pi)^{n}}}\frac{\partial}{\partial K_{ij}}\sqrt{\frac{(2\pi)^{n}}{\det K}} \\ & =\frac{1}{\det K}\frac{\partial\det K}{\partial K_{ij}} \\ & =\left(K^{-1}\right)_{ij}. \end{aligned} \end{align}\)

Let us consider a multi-dimensional normal variate, as above. What is the probability density of the sum of all its elements?

\(\begin{align} S &=\sum_{l=1}^{n}x_{l} \\ \mathsf{P}(S) &=\sqrt{\frac{\det K}{(2\pi)^{n}}}\int dX\mathrm{e}^{-\frac{1}{2}\sum_{ij}K_{ij}x_{i}x_{j}}\delta\left(S-\sum_{l=1}^{n}x_{l}\right) \nonumber \\ &=\sqrt{\frac{\det K}{(2\pi)^{n}}}\int\frac{dq}{2\pi}\mathrm{e}^{iqS} \int dX \mathrm{e}^{-\frac{1}{2}\sum_{ij}K_{ij}x_{i}x_{j}} \mathrm{e}^{-iq\sum_l x_{l}} \nonumber \\ &=\int\frac{dq}{2\pi} \mathrm{e}^{iqS} \mathrm{e}^{-\frac{1}{2}q^{2}\sum_{ij}C_{ij}} \nonumber \\ &=\frac{1}{\sqrt{2\pi\sigma^{2}}}\mathrm{e}^{-\frac{S^{2}}{2\sigma^{2}}} \\ \sigma^{2} &=\sum_{ij}C_{ij}, \end{align}\)
where one has used the fact that the \(n\)-dimensional Fourier transform of a multi-variate Gaussian is a multi-variate Gaussian:

\(\begin{align} \int dX \mathrm{e}^{-\frac{1}{2}\sum_{lm} K_{lm}x_l x_m} \mathrm{e}^{-i\sum_l x_l q_l} = \sqrt{\frac{(2\pi)^n}{\det K}} \mathrm{e}^{-\frac{1}{2}\sum_{lm} K^{-1}_{lm}q_l q_m}. \end{align}\)

In particular, the sum of several independent normal variates is a normal variate, whose variance is the sum of the variances of the individual variables.
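This additivity of (co)variances can be checked numerically. The sketch below (illustrative dimension, sample size, and seed) draws samples of a correlated multivariate normal and compares the variance of \(S=\sum_l x_l\) with \(\sum_{ij}C_{ij}\):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)     # a symmetric, positive-definite covariance

# Draw many samples of X ~ N(0, C) and sum the components of each draw.
samples = rng.multivariate_normal(np.zeros(n), C, size=400_000)
S = samples.sum(axis=1)

var_expected = C.sum()          # sum of all covariance-matrix elements
var_measured = S.var()          # matches within sampling noise
```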

#### Central-limit theorem(s)

Let \(\{x_{i}\}\) be a set of \(n\) zero-mean independent identically distributed (iid) variables, with finite variance: \(\langle x^{2}\rangle=\sigma^{2}\). In the limit \(n\to\infty\), and in some sense to be properly defined, the following central-limit theorem (CLT) holds:

\(\begin{align} \mu_{n} & =\frac{1}{n}\sum_{\ell}x_{\ell} \nonumber \\ & \sim\mathcal{N}\left(0,\frac{\sigma^{2}}{n}\right). \end{align}\)

The following generalization due to Lyapunov also holds. If the \(\{x_{\ell}\}\) are still independent, but not necessarily identically distributed, then:

\(\begin{align} \mu_{n} \sim\mathcal{N}\left ( 0,\frac{1}{n^2}\sum_{\ell}\sigma_{\ell}^{2} \right ). \end{align}\)
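The Lyapunov form of the theorem can be illustrated with independent but non-identically distributed variables, here zero-mean uniforms of different half-widths (all sizes and seeds below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_draws = 200, 50_000
half_widths = rng.uniform(0.5, 2.0, n)     # x_l uniform on [-a_l, a_l]
sigma2 = half_widths**2 / 3.0              # variance of a zero-mean uniform

# Each row is one realization of the n independent variables.
x = rng.uniform(-half_widths, half_widths, size=(n_draws, n))
mu = x.mean(axis=1)                        # one mu_n per realization

var_expected = sigma2.sum() / n**2         # Lyapunov CLT prediction
var_measured = mu.var()
# For a Gaussian the excess kurtosis vanishes:
excess_kurtosis = (mu**4).mean() / mu.var()**2 - 3.0
```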

#### Stochastic processes

A collection of stochastic variables labeled by an index \(t\), \(x(t)\), is usually referred to as a stochastic process and the index is often assumed to indicate time. The process may be continuous or discrete according to whether the index is continuous or discrete. For the sake of simplicity we assume here that the expectations of the process vanish at all times: \(\langle x(t)\rangle=0\). A discrete stochastic process can be seen as the discretization of a continuous one, \(x_{\ell}=x(\ell\epsilon)\), where \(\epsilon\) is some discretization (time) step. An “observed” sample, or realization of a (discrete) stochastic process is often referred to as a *time series*.

The covariance of the process at two different times, \(\langle x(t_{1})x(t_{2})\rangle\), is also called its time correlation function. The autocorrelation time of the process is defined as:

\(\begin{align} \tau_{x}=\frac{1}{\langle x^{2}\rangle}\int_{0}^{\infty}\langle x(t)x(0)\rangle dt, \end{align}\)

and is a measure of how long it takes the process to lose memory of its past.
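In practice, \(\tau_x\) can be estimated from a time series by integrating the sample correlation function up to a cutoff. A sketch on a synthetic AR(1) series (a discretized exponentially correlated process; parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
dt, tau_true = 0.1, 2.0
a = np.exp(-dt / tau_true)          # correlations decay as a^k = e^{-k*dt/tau}

n = 500_000
noise = rng.normal(0.0, np.sqrt(1.0 - a**2), n)  # keeps unit variance
x = np.empty(n)
x[0] = rng.normal()
for k in range(1, n):
    x[k] = a * x[k - 1] + noise[k]

# Estimate C(k) up to a cutoff and integrate it (trapezoidal rule).
kmax = 200                          # cutoff ~ 10 tau_true
x0 = x - x.mean()
C = np.array([(x0[: n - k] * x0[k:]).mean() for k in range(kmax)])
tau_est = dt * (C.sum() - 0.5 * C[0]) / C[0]     # close to tau_true = 2.0
```

The cutoff matters: integrating the noisy tail of the sample correlation function far beyond a few \(\tau_x\) only adds statistical error, a point that returns later with cepstral analysis.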

Given a multivariate probability density, \(\mathsf P_{XY}(x,y)\), a marginal density is defined as the probability density of (one group of) its arguments: \(\mathsf P_X(x)=\int \mathsf P_{XY}(x,y)dy\). A process is said to be *strongly stationary* if all of its marginal distributions are invariant under time translation: \(\mathsf{P}\bigl(x(t_{1}),\cdots x(t_{n})\bigr)=\) \(\mathsf{P}\bigl(x(t_{1}+t),\cdots x(t_{n}+t)\bigr)\). Strong stationarity implies that: \(\langle x(t_{1})x(t_{2})\rangle=C(t_{1}-t_{2})\), which is often referred to as *weak stationarity*. In general, weak stationarity does not imply strong stationarity.

A stochastic process is said to be Gaussian if the marginal distribution of any \(n\)-tuple of its values, \(\{x_{1}=x(t_{1}),x_{2}=x(t_{2}),\cdots x_{n}=x(t_{n})\}\), is a multivariate normal distribution. In Gaussian processes, weak stationarity implies strong stationarity, because the covariance uniquely determines the marginal distributions at any order.

### Brownian Motion

The above video shows a bunch of 100- and 200-nm polystyrene nano-particles dispersed in water, performing Brownian motion.

Let us consider a particle in 1D (for simplicity) in contact with a thermal bath whose interaction with the particle will be schematized by a friction force, \(f_{\gamma}=-\gamma p\), proportional to the momentum \(p\) of the particle, and a random force characterized by some stochastic process, \(\phi(t)\), resulting from the combined effect of a large number of collisions between the particle and the molecules which form the thermal bath. The equation of motion of the particle reads: \begin{equation} \dot{p}=\phi - \gamma p. \label{eq:Ornstein-Uhlenbeck} \end{equation} The solution of Eq. \eqref{eq:Ornstein-Uhlenbeck} is: \begin{equation} p(t)=\mathrm{e}^{-\gamma t}\left(p(0)+\int_{0}^{t}\phi(t')\mathrm{e}^{\gamma t'}dt'\right). \label{eq:OU-solution} \end{equation}

The time correlation function of the \(p\) process is:

\(\begin{align} \begin{aligned} \langle p(t)p(0)\rangle &= \left\langle p(0)^{2}\right\rangle \mathrm{e}^{-\gamma t} \\ &= mk_{B}T\mathrm{e}^{-\gamma t}, \end{aligned} \end{align}\)
where we have assumed that the force process is uncorrelated from momentum: \(\langle p(0)\phi(t)\rangle=0\) and use has been made of the equilibrium condition:
\begin{equation}
\langle p^2\rangle = mk_BT. \label{eq:thermal-equilibrium}
\end{equation}

The autocorrelation time of the \(p\) process is:

\(\begin{align} \begin{aligned} \tau_p &= \frac{1}{\langle p^2 \rangle} \int_0^\infty \langle p(t)p(0) \rangle dt \\ &= \gamma^{-1}. \end{aligned} \end{align}\)

Let us now consider the distance traveled by the Brownian particle in a time t:

\(\begin{align} \begin{aligned} \Delta x(t) &= x_t-x_0 \\ &= \int_0^t v(t')dt'. \end{aligned} \label{eq:distance-traveled} \end{align}\)

For \(t\gg \tau_p=\gamma^{-1}\), \(\Delta x(t)\) is the sum of many iid stochastic variables and is therefore a normal variate. The probability density of \(\Delta x(t)\) is the conditional probability that the Brownian particle is at position \(x_t\) at time \(t\), given that it was at position \(x_0\) at time \(t=0\):

\(\begin{align} \mathsf{P}(x_{t},t|x_{0},0) &=\frac{1}{\sqrt{2\pi \sigma_t^2}}\mathrm{e}^{-\frac{(x_{t}-x_{0})^{2}}{2\sigma_t^2}} \label{eq:normal-Brown} \end{align}\)
Its variance is:

\(\begin{align} \begin{aligned} \sigma^2_t = \left \langle \left (\Delta x(t) \right )^2 \right \rangle &= \int_0^t dt' \int_0^t dt'' \langle v(t') v(t'') \rangle \\ &= \int_A C_v(|t'-t''|)dt' dt'' \\ &= 2 \int_{A^>} C_v(|t'-t''|)dt' dt'' \\ &= 2 \int_0^t C_v(\tau) (t-\tau)d\tau \\ &= 2 t \int_0^t C_v(\tau) d\tau -2 \int_0^t C_v(\tau) \tau d\tau, \end{aligned} \label{eq:pre-EH} \end{align}\)
where the symbols and steps in Eq. \eqref{eq:pre-EH} are explained in a hopefully comprehensible way in the picture below. When both the integrals in the last line of Eq. \eqref{eq:pre-EH} converge in the large-\(t\) limit, in this limit the variance \(\sigma^2_t\) is proportional to \(t\), and a comparison between Eqs. \eqref{eq:normal-Brown} and \eqref{eq:green-diffusion} allows us to establish a relation between the diffusion constant and the integral of the velocity auto-correlation function:
\begin{equation}
D=\int_0^\infty \langle v(t)v(0) \rangle dt. \label{eq:D-vv}
\end{equation}

Let us now compute the variance of the momentum resulting from Eq. \eqref{eq:OU-solution}. For \(\gamma t\gg 1\), one has: \begin{equation} \bigl \langle p(t)^2 \bigr \rangle = \mathrm{e}^{-2\gamma t} \int_0^t dt' \int_0^t dt'' \mathrm{e}^{\gamma (t'+t'')} \bigl \langle \phi(t') \phi(t'') \bigr \rangle . \label{eq:p2-1} \end{equation}

Time-translation and inversion invariance requires that: \(\langle \phi(t') \phi(t'') \rangle = C_\phi(|t'-t''|)\). Let us introduce the *power spectral density* (a.k.a. simply the *power spectrum*) of the \(\phi\) process:

\(\begin{align} S_{\phi} (\omega) = \int_{-\infty}^\infty \mathrm{e}^{i\omega t}C_{\phi} (t)dt. \end{align}\)

Eq. \eqref{eq:p2-1} reads then:

\(\begin{align} \begin{aligned} \bigl \langle p(t)^2 \bigr \rangle &= \mathrm{e}^{-2\gamma t} \int_{-\infty}^\infty \frac{d\omega}{2\pi} S_\phi(\omega) \left | \int_0^t \mathrm{e}^{(\gamma +i\omega )t'} dt' \right |^2 \nonumber \\ &= \mathrm{e}^{-2\gamma t} \int_{-\infty}^\infty \frac{d\omega}{2\pi} S_\phi(\omega) \frac{|\mathrm{e}^{(\gamma +i\omega )t}-1|^2}{\gamma^2+\omega^2} \nonumber \\ &\approx \int_{-\infty}^\infty \frac{d\omega}{2\pi} S_\phi(\omega) \frac{1}{\gamma^2+\omega^2} & (\gamma t \gg 1) \nonumber \\ &\approx \frac{1}{2} S_\phi(0) \gamma^{-1}, & (\gamma\tau_\phi \ll 1) \end{aligned} \label{eq:p(t)^2} \end{align}\)
where \(\tau_\phi\) is the autocorrelation time of the \(\phi\) process and we have used the Lorentzian representation of the \(\delta\) function, \(\frac{1}{\pi}\lim_{\gamma\to 0} \frac{\gamma}{\gamma^2+\omega^2}=\delta(\omega)\): when \(\gamma\tau_\phi\ll 1\), the noise spectrum is flat on the scale of \(\gamma\) and can be replaced by \(S_\phi(0)\). By combining Eq. \eqref{eq:thermal-equilibrium} with Eq. \eqref{eq:p(t)^2}, we finally obtain:
\begin{equation}
S_\phi(0) = 2\gamma m k_B T, \label{eq:SphiFD}
\end{equation}
which is a first example of *fluctuation-dissipation theorem*, *i.e.* a relation between the strength of a fluctuating force, \(S_\phi(0)\), and that of a dissipative one, \(\gamma\), imposed by thermal equilibrium.
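The fluctuation-dissipation relation can be checked directly: drive the Langevin equation with white noise of zero-frequency spectral density \(S_\phi(0)=2\gamma m k_BT\) (discretized, an impulse of variance \(S_\phi(0)\,dt\) per step) and verify that the momentum thermalizes to \(\langle p^2\rangle=mk_BT\) whatever the initial condition. A sketch with illustrative units:

```python
import numpy as np

rng = np.random.default_rng(5)
m, kT, gamma, dt = 2.0, 0.5, 1.5, 0.005
S_phi0 = 2.0 * gamma * m * kT             # fluctuation-dissipation, Eq. (SphiFD)

n_traj, n_steps = 20_000, 3000            # total time t = 15 >> 1/gamma
p = np.zeros(n_traj)                      # start far from equilibrium, p = 0
for _ in range(n_steps):
    # friction drains energy, the white noise pumps it back in
    p += -gamma * p * dt + np.sqrt(S_phi0 * dt) * rng.normal(size=n_traj)

p2 = (p * p).mean()                       # should relax to m*kT = 1.0
```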

In order to establish a relation between the power spectra of the force acting on the Brownian particle and its momentum, let us enunciate the *Wiener-Khintchine theorem*.

#### The Wiener-Khintchine theorem

Let
\begin{equation}
{\tilde X}_t(\omega)=\int_0^t X(t') \mathrm{e}^{i\omega t'}dt'
\end{equation}
be the (truncated) Fourier transform of a stationary process. Evidently, \(\tilde X_t(\omega)\) is a process itself. Then, under appropriate general conditions, the following relation holds in the large-\(t\) limit:
\begin{equation}
\left \langle |{\tilde X}_t(\omega)|^2 \right\rangle = tS_X(\omega) + \mathcal{O}(1),
\end{equation}
where \(S_X(\omega)\) is the *power spectrum* (*i.e.* the Fourier transform of the time-correlation function) of the \(X\) process:
\(\begin{align} S_X(\omega)=\int_{-\infty}^{\infty}\langle X(t)X(0) \rangle \mathrm{e}^{i\omega t}dt. \end{align}\)
The demonstration is a slight generalization of the arguments that lead us to Eq. \eqref{eq:D-vv}.
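The theorem is easy to test at \(\omega=0\), where the truncated Fourier transform is just the time integral of the process. For an exponentially correlated process, \(C(t)=\sigma^2\mathrm{e}^{-|t|/\tau}\), one has \(S(0)=2\sigma^2\tau\); the sketch below (illustrative AR(1) discretization) checks that \(\langle|\tilde X_t(0)|^2\rangle\approx tS(0)\):

```python
import numpy as np

rng = np.random.default_rng(11)
dt, tau, sigma2 = 0.05, 1.0, 1.0
a = np.exp(-dt / tau)                    # AR(1): C(k*dt) = sigma2 * a^k

n_real, n_steps = 2000, 5000             # t = 250 >> tau
x = np.empty((n_real, n_steps))
x[:, 0] = rng.normal(0.0, np.sqrt(sigma2), n_real)
for k in range(1, n_steps):
    x[:, k] = a * x[:, k - 1] + rng.normal(0.0, np.sqrt(sigma2 * (1 - a**2)), n_real)

t = n_steps * dt
X0 = x.sum(axis=1) * dt                  # truncated Fourier transform at omega=0
S0 = 2.0 * sigma2 * tau                  # analytic zero-frequency power spectrum
ratio = (X0**2).mean() / (t * S0)        # Wiener-Khintchine: ratio -> 1
```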

By computing the Fourier transform of Eq. \eqref{eq:OU-solution}, one gets: \begin{equation} \tilde p(\omega) = \left ( p(0) + {\tilde \phi}(\omega) \right ) \frac{1}{\gamma-i\omega}. \end{equation} By computing the expectation of the squared modulus of this relation, and considering that force and momentum are uncorrelated (and also that \(|\tilde\phi|^2\sim t\), so that the \(p(0)\) term becomes negligible at large times), one obtains:

\(\begin{align} S_p(\omega)=\frac{1}{\gamma^2+\omega^2} S_\phi(\omega). \end{align}\)

We conclude that: \(S_\phi(0)=\gamma^2 S_p(0) =m^2 \gamma^2 S_v(0)\). Noticing that, according to Eq. \eqref{eq:D-vv}, \(D=\frac{1}{2}S_v(0)\), and plugging these relations into Eq. \eqref{eq:SphiFD}, we finally arrive at: \begin{equation} D=\frac{k_BT}{m\gamma}, \end{equation} which is Eq. \eqref{eq:D-Fick}, now obtained "from the bottom up" (\(\mathcal M=\frac{1}{m\gamma}\)).

## Lecture 2 - Theoretical Foundations

### Hydrodynamic variables

For the time being, please refer to Sec. 1.1 of *Heat transport in insulators from ab-initio Green-Kubo theory* by S. Baroni *et al.*

### Hamilton's and Liouville's equations

Let us consider a classical Hamiltonian system with canonical coordinates \(\Gamma\doteq(q,p)\) and Hamiltonian \(H(q,p)\). The canonical equations of motion read:

\(\begin{align} \begin{aligned} \dot{q} &= \phantom{{-}}\frac{\partial H}{\partial p} \\ \dot{p} &= -\frac{\partial H}{\partial q}. \end{aligned} \label{eq:Hamilton} \end{align}\)

Let \(\rho(\Gamma,t)\) be the phase-space number density and \(J(\Gamma,t)\) the corresponding current, which satisfy the continuity equation:

\begin{equation} \frac{\partial\rho(\Gamma,t)}{\partial t} = -\frac{\partial J(\Gamma,t)}{\partial\Gamma}. \label{eq:GammaContinuity} \end{equation}

The current density is equal to the number density times the velocity field, \(v(\Gamma)=\left ( \frac{\partial H}{\partial p},-\frac{\partial H}{\partial q}\right )\): \(J(\Gamma,t)=\rho(\Gamma,t)\,v(\Gamma)\). Using this expression, the continuity equation, Eq. \eqref{eq:GammaContinuity}, can be cast into the form:

\(\begin{align} \begin{aligned} \frac{\partial\rho}{\partial t} &= -\frac{\partial}{\partial q}\left(\frac{\partial H}{\partial p}\rho\right)-\frac{\partial}{\partial p}\left(-\frac{\partial H}{\partial q}\rho\right) \\ &= \frac{\partial H}{\partial q}\frac{\partial\rho}{\partial p}-\frac{\partial H}{\partial p}\frac{\partial\rho}{\partial q} \\ &\doteq \{H,\rho\} \\ &\doteq -i\hat{\mathcal{L}}\cdot\rho, \end{aligned} \label{eq:LiouvilleEquation} \end{align}\)
\(\{\bullet,\bullet\}\) being the Poisson brackets,

\(\begin{align} \{ A,B \} \doteq\frac{\partial A}{\partial q}\frac{\partial B}{\partial p}-\frac{\partial A}{\partial p}\frac{\partial B}{\partial q}, \end{align}\)
and the Liouvillian operator \(\hat{\mathcal{L}}\) being defined as:

\(\begin{align} \hat{\mathcal{L}}\cdot\bullet\doteq i \{ H,\bullet \} . \label{eq:Liouvillian} \end{align}\)

The sign convention adopted here for the Poisson brackets is the same as Goldstein's (Ref. [1], Eq. 8-42). The sign convention adopted here for the Liouvillian is opposite to Kubo's (Ref. [2], Eq. 2.9.3, p. 98).

The Liouvillian operator defined in Eq. \eqref{eq:Liouvillian} is Hermitian. To see this, let us consider the scalar product:

\(\begin{align} (f,\hat{\mathcal{L}}\cdot g) &\doteq i\int f^* \left(\frac{\partial H}{\partial q}\frac{\partial g}{\partial p}-\frac{\partial H}{\partial p}\frac{\partial g}{\partial q}\right)d\Gamma. \label{eq:liouville-hermitian} \end{align}\)

Integrating by parts and neglecting surface terms at infinity, leads to:

\(\begin{align} \begin{aligned} (f,\hat{\mathcal{L}}\cdot g) &= i\int g\left[-\frac{\partial}{\partial p} \left(f^* \frac{\partial H}{\partial q}\right)+\frac{\partial}{\partial q}\left(f^* \frac{\partial H}{\partial p}\right)\right]d\Gamma \\ &= i\int g\left[-\frac{\partial H}{\partial q}\frac{\partial f^* }{\partial p}+\frac{\partial H}{\partial p}\frac{\partial f^* }{\partial q}\right]d\Gamma \\ &= (g,\hat{\mathcal{L}}\cdot f)^* . \end{aligned} \end{align}\)

A little more algebra would show that \(\hat{\mathcal{L}}\) is Hermitian also with respect to the scalar product defined as:

\(\begin{align} \begin{aligned} \langle f,g\rangle &\doteq \langle f^* g\rangle_{eq} \\ &= (\rho_{eq},f^* g) \\ &= \frac{1}{Z}\int\mathrm{e}^{-\beta H(q,p)}f^* (q,p)g(q,p)d\Gamma. \end{aligned} \end{align}\)

Let \(f(\Gamma,t)=f(\Gamma_{t})\) be the value that a phase-space function, \(f(\Gamma)\), assumes along a trajectory \(\Gamma_{t}\) whose initial conditions at \(t=0\) are \(\Gamma_{0}=\Gamma\): \(f(\Gamma,0)=f(\Gamma)\). The function \(f(\Gamma,t)\) is ruled by an equation similar to Eq. \eqref{eq:LiouvilleEquation}, with a different sign, though. Using the chain rule and Eqs. \eqref{eq:Hamilton}, differentiation of \(f(\Gamma_{t})\) with respect to time gives:

\(\begin{align} \begin{aligned} \frac{d}{dt}f(\Gamma_{t}) &= \frac{\partial f}{\partial q_{t}}\dot{q}_{t}+\frac{\partial f}{\partial p_{t}}\dot{p}_{t} \\ &= \frac{\partial f}{\partial q_{t}}\frac{\partial H}{\partial p_{t}}-\frac{\partial f}{\partial p_{t}}\frac{\partial H}{\partial q_{t}} \\ &= -\{H,f\}(\Gamma_{t}). \end{aligned} \label{eq:ft-Liouville} \end{align}\)

The Poisson brackets in Eq. \eqref{eq:ft-Liouville} are to be evaluated at time \(t\). The time evolution of \(\Gamma_t\) in phase space is a canonical transformation, and Poisson brackets are canonical invariants, so that the time at which they are evaluated is irrelevant: they could therefore be evaluated at \(t=0\) as well. By doing so, Eq. \eqref{eq:ft-Liouville} can be seen as a partial differential equation for \(f(\Gamma_0,t)\), first-order in \(t\) and \(\Gamma_0\), which is formally analogous to Eq. \eqref{eq:LiouvilleEquation}:

\(\begin{align} \begin{aligned} \frac{\partial}{\partial t}f(\Gamma,t) &= -\{H,f\}(\Gamma,t) \\ &= i\hat{\mathcal{L}}\cdot f, \end{aligned} \label{eq:ft0-Liouville} \end{align}\)
where \(\Gamma=\Gamma_0\) for short. Note that the right-hand sides of Eqs. \eqref{eq:LiouvilleEquation} and \eqref{eq:ft0-Liouville} have opposite signs.
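The sign bookkeeping of Eq. \eqref{eq:ft-Liouville} is easy to check numerically on a system with an exact flow. A sketch for the unit harmonic oscillator, \(H=(p^2+q^2)/2\), comparing a finite-difference time derivative of \(f(\Gamma_t)\) with the Poisson bracket along the trajectory:

```python
import numpy as np

def flow(q0, p0, t):
    """Exact flow of the unit harmonic oscillator: qdot = p, pdot = -q."""
    return (q0 * np.cos(t) + p0 * np.sin(t),
            -q0 * np.sin(t) + p0 * np.cos(t))

def f(q, p):                 # an arbitrary phase-space function
    return q**2 * p

q0, p0, t, h = 0.7, -0.3, 1.3, 1e-6
qm, pm = flow(q0, p0, t - h)
qp, pp = flow(q0, p0, t + h)
dfdt = (f(qp, pp) - f(qm, pm)) / (2.0 * h)   # d/dt f(Gamma_t), numerically

# -{H,f} = {f,H} = f_q*H_p - f_p*H_q = 2*q*p^2 - q^3, evaluated at time t
qt, pt = flow(q0, p0, t)
bracket = 2.0 * qt * pt**2 - qt**3
# dfdt and bracket agree to finite-difference accuracy
```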

### Linear-response theory

#### A quantum mechanical digression

Let us assume that the Hamiltonian of a quantum system can be split into the sum of an unperturbed, time-independent, term, \(\hat{H}^{\circ}\), and a perturbation whose strength depends on time: \begin{equation} \hat{H}(t)=\hat{H}^{\circ}+\lambda(t)\hat{V}, \label{eq:H(t)} \end{equation} and let us ask ourselves how the expectation value of a generic operator, \begin{equation} \bar{A}(t)=\langle\Phi(t)|\hat{A}|\Phi(t)\rangle, \label{eq:A(t)} \end{equation} depends on time. In particular, let us calculate the generalized susceptibility defined as the functional derivative of \(\bar{A}(t)\) with respect to the strength of the perturbation, \begin{equation} \chi_{AV}(t-t')\doteq\frac{\delta\bar{A}(t)}{\delta\lambda(t')}. \label{eq:chiAV} \end{equation}

In Eq. \eqref{eq:chiAV} the dependence on the difference \((t-t')\) is a consequence of the time-translation invariance of the unperturbed system, whereas causality requires that \(\chi_{AV}(t)=0\) for \(t<0\). In order to calculate \(\chi_{AV}\), let us apply the rule for the derivative of the product of functions:

\begin{equation} \frac{\delta\bar{A}(t)}{\delta\lambda(t')}=\Bigl\langle\frac{\delta\Phi(t)}{\delta\lambda(t')}\Bigr|\hat{A}\Bigl|\Phi(t)\Bigr\rangle+\Bigl\langle\Phi(t)\Bigr|\hat{A}\Bigl|\frac{\delta\Phi(t)}{\delta\lambda(t')}\Bigr\rangle. \label{eq:product-derivative} \end{equation}

The wavefunction \(\Phi(t)\) is the solution of the time-dependent Schrödinger equation:
\begin{equation}
i\frac{\partial\Phi(t)}{\partial t}=\hat{H}(t)\Phi(t),
\label{eq:TD-schroedinger}
\end{equation}
whose formal solution is:
\begin{equation}
\Phi(t)=\hat{U}(t,0)\Phi(0),
\end{equation}
where \(\hat{U}(t,0)\) is the quantum propagator. In order to calculate the functional derivative of the wavefunction with respect to the strength of the external perturbation, \(\lambda(t)\), let us first approximate the propagator as a function of the values \(\lambda_{i}=\lambda(t_{i})\) evaluated at discrete times, \(t_{i}=i\epsilon\) where \(\epsilon=t/P\). The discretized propagator reads:

\(\begin{align} \begin{aligned} \hat{U}(t,0) = \hat{U}(t_{P},t_{P-1}) \hat{U}(t_{P-1},t_{P-2}) & \cdots \hat{U}(t_{1},t_{0}) = \left(1-i\epsilon\hat{H}(t_{P-1})\right) \times \\ & \left(1-i\epsilon\hat{H}(t_{P-2})\right)\cdots\left(1-i\epsilon\hat{H}(t_{0})\right)+\mathcal{O}(\epsilon t). \end{aligned} \label{eq:discretized-propagator} \end{align}\)
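This factorization is easy to check numerically. The sketch below uses a random Hermitian matrix as a stand-in for the Hamiltonian (an illustrative choice, not from these notes); for simplicity \(\hat{H}\) is taken time-independent, so that the exact propagator is just \(\mathrm{e}^{-i\hat{H}t}\), and one can verify that the error of the first-order product scales as \(\mathcal{O}(\epsilon t)\), i.e. it halves when \(P\) is doubled:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
H = (M + M.conj().T)/2          # random Hermitian "Hamiltonian"

t = 1.0
# exact propagator exp(-i H t) via eigendecomposition of H
w, P = np.linalg.eigh(H)
U_exact = (P * np.exp(-1j*w*t)) @ P.conj().T

def U_discretized(n_steps):
    """Product of first-order short-time propagators (1 - i*eps*H)."""
    eps = t / n_steps
    U = np.eye(4, dtype=complex)
    for _ in range(n_steps):
        U = (np.eye(4) - 1j*eps*H) @ U
    return U

errs = [np.linalg.norm(U_discretized(n) - U_exact) for n in (200, 400, 800)]
print(errs)  # halving eps roughly halves the error, consistent with O(eps*t)
```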

The estimate of the error in Eq. \eqref{eq:discretized-propagator} results from multiplying the error made in replacing each short-time propagator with its first-order approximation, \(\mathcal{O}(\epsilon^{2})\), by the number of such short-time factors, \(P\), and using \(\epsilon P=t\). Given the form of the Hamiltonian, Eq. \eqref{eq:H(t)}, the partial derivative of the propagator with respect to the strength of the perturbation at one of the discrete times reads:

\(\begin{align} \left . \frac{\partial \hat{U}(t,0)}{\partial\lambda_i} \right |_{\lambda=0}= - i\epsilon \hat{U}^{\circ}(t-t_i) \hat{V} \hat{U}^\circ (t_i) . \end{align}\)

Using the standard rule to obtain functional derivatives from their discrete counterparts, \(\left(\frac{\delta}{\delta f(x_{i})}\approx\frac{1}{\epsilon}\frac{\partial}{\partial f_{i}}\right)\), the functional derivative of the propagator is finally cast as:

\(\begin{align} \left.\frac{\delta\hat{U}(t,0)}{\delta\lambda(t')}\right|_{\lambda=0} = -i\hat{U}^{\circ}(t-t')\hat{V}\hat{U}^{\circ}(t'), \end{align}\)
where \(\hat{U}^{\circ}(t)=\mathrm{e}^{-i\hat{H}^{\circ}t}\) is the unperturbed propagator. The functional derivative of the wavefunction then reads:

\(\begin{align} \begin{aligned} \left.\frac{\delta\Phi(t)}{\delta\lambda(t')}\right|_{\lambda=0} &= -i\hat{U}^{\circ}(t-t')\hat{V}\hat{U}^{\circ}(t')\Phi(0) \\ &= -i\mathrm{e}^{-i\hat{H}^{\circ}(t-t')}\hat{V}\mathrm{e}^{-i\hat{H}^{\circ}t'}\Phi(0) . \end{aligned} \label{eq:wavef-derivative-1} \end{align}\)

We now insert this expression for the derivative of the wavefunction into Eq. \eqref{eq:product-derivative} and assume that \(\Phi(0)=\Psi^{\circ}\), \(\Psi^{\circ}\) being the unperturbed ground state, to obtain:

\(\begin{align} \begin{aligned} \chi_{AV}(t-t') &= i\langle\Psi^{\circ}|\mathrm{e}^{i\hat{H}^{\circ}t'} \hat{V} \mathrm{e}^{i\hat{H}^{\circ}(t-t')} \hat{A} \mathrm{e}^{-i\hat{H}^{\circ}t} | \Psi^{\circ}\rangle -i\langle\Psi^{\circ}|\mathrm{e}^{i\hat{H}^{\circ}t}\hat{A}\mathrm{e}^{-i\hat{H}^{\circ}(t-t')}\hat{V}\mathrm{e}^{-i\hat{H}^{\circ}t'}|\Psi^{\circ}\rangle \\ & = i\langle\Psi^{\circ}|\hat{V}(t')\hat{A}(t)|\Psi^{\circ}\rangle-i\langle\Psi^{\circ}|\hat{A}(t)\hat{V}(t')|\Psi^{\circ}\rangle \\ & = -i\left\langle \left[\hat{A}(t),\hat{V}(t')\right]\right\rangle ^{\circ}, \end{aligned} \end{align}\)
where \(\hat{X}(t)=\mathrm{e}^{i\hat{H}^{\circ}t}\hat{X}\mathrm{e}^{-i\hat{H}^{\circ}t}\) is the Heisenberg representation of the operator \(\hat{X}\), \([\bullet,\bullet]\) is the quantum commutator, and \(\langle\bullet\rangle^{\circ}\) indicates a ground-state expectation value.
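This Kubo-type formula can be verified numerically on a small matrix model: a weak \(\delta\)-like kick, \(\lambda(s)=\lambda\,\delta(s-t')\), amounts to multiplying the state by \(\mathrm{e}^{-i\lambda\hat{V}}\) at time \(t'\), so a central finite difference in \(\lambda\) of the directly simulated \(\bar{A}(t)\) can be compared with the commutator expression. All operators below are random Hermitian matrices, an illustrative choice rather than anything prescribed by the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    M = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
    return (M + M.conj().T)/2

def U_of(H, s):
    """exp(-i H s) for Hermitian H, via eigendecomposition."""
    w, P = np.linalg.eigh(H)
    return (P * np.exp(-1j*w*s)) @ P.conj().T

n = 3
H0, V, A = rand_herm(n), rand_herm(n), rand_herm(n)
w0, P0 = np.linalg.eigh(H0)
psi0 = P0[:, 0]                    # unperturbed ground state

t, tp = 1.3, 0.4                   # response at time t to a kick at t' < t

def heis(X, s):
    """Heisenberg representation X(s) = exp(i H0 s) X exp(-i H0 s)."""
    U = U_of(H0, s)
    return U.conj().T @ X @ U

comm = heis(A, t) @ heis(V, tp) - heis(V, tp) @ heis(A, t)
chi_kubo = (-1j * (psi0.conj() @ comm @ psi0)).real   # -i <[A(t), V(t')]>

def A_of_t(lam):
    # evolve to t', apply the kick exp(-i lam V), evolve to t, measure A
    psi = U_of(H0, t - tp) @ U_of(V, lam) @ U_of(H0, tp) @ psi0
    return (psi.conj() @ A @ psi).real

lam = 1e-4
chi_direct = (A_of_t(lam) - A_of_t(-lam)) / (2*lam)
print(chi_kubo, chi_direct)        # the two agree to O(lam^2)
```

Note that for Hermitian \(\hat{A}\) and \(\hat{V}\) the expectation value of the commutator is purely imaginary, so `chi_kubo` is real, as a response function must be.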

#### Classical linear response

Let \begin{equation} H(\Gamma,t)=H^{\circ}(\Gamma)+\lambda(t)V(\Gamma) \end{equation} be a time-dependent Hamiltonian and \(\rho(\Gamma,t)\) the phase-space density of a system initially at equilibrium with respect to the unperturbed Hamiltonian: \begin{equation} \rho(\Gamma,0)=\rho^{\circ}(\Gamma), \end{equation} where \(\{H^{\circ},\rho^{\circ}\}=0\). The linear response of a dynamical variable, \(A(\Gamma)\), to the time-dependent perturbation is expressed by a susceptibility analogous to Eq. \eqref{eq:chiAV}, where \(\bar{A}(t)\) is defined as \begin{equation} \bar{A}(t)=\int A(\Gamma)\rho(\Gamma,t)d\Gamma. \label{eq:<A(t)>} \end{equation}

In order to calculate \(\chi_{AV}\), one has to calculate the functional derivative of the phase-space density with respect to the strength of the perturbation: \(\frac{\delta\rho(t)}{\delta\lambda(t')}\). As the Liouville equation for \(\rho\), Eq. \eqref{eq:LiouvilleEquation}, is formally identical to the Schrödinger equation for the wavefunction, Eq. \eqref{eq:TD-schroedinger}, with \(\Phi\) and \(\hat{H}\) replaced by \(\rho\) and \(\hat{\mathcal{L}}\), respectively, the results of the quantum-mechanical digression above apply immediately. The analog of Eq. \eqref{eq:wavef-derivative-1} reads:

\(\begin{align} \begin{aligned} \frac{\delta\rho(t)}{\delta\lambda(t')} &= -i\hat{\mathcal{U}}^{\circ}(t-t')\hat{\mathcal{L}}'\hat{\mathcal{U}}^{\circ}(t')\rho^{\circ} \\ & = -i\hat{\mathcal{U}}^{\circ}(t-t')\hat{\mathcal{L}}'\rho^{\circ}, \end{aligned} \end{align}\)

where \(\hat{\mathcal{U}}^{\circ}(t)=\mathrm{e}^{-i\hat{\mathcal{L}}^{\circ}t}\) is the unperturbed phase-space propagator, \(\hat{\mathcal{L}}^{\circ}\) is the unperturbed Liouvillian corresponding to \(H^{\circ}\), and \(\hat{\mathcal{L}}'\cdot\bullet=i\{V,\bullet\}\) is the perturbing Liouvillian; the second line follows from \(\hat{\mathcal{U}}^{\circ}(t')\rho^{\circ}=\rho^{\circ}\), a consequence of \(\{H^{\circ},\rho^{\circ}\}=0\). The first-order perturbation to the phase-space density then reads:

\(\begin{align} \begin{aligned} \rho'(t) &= \int_{0}^{t}\frac{\delta\rho(t)}{\delta\lambda(t')}\lambda(t')dt' \\ &= \int_{0}^{t}\mathrm{e}^{-i\hat{\mathcal{L}}^{\circ}(t-t')}\{V,\rho^{\circ}\}\lambda(t')dt' \\ &= \int_{0}^{t}\{V(t'-t),\rho^{\circ}\}\lambda(t')dt'. \end{aligned} \label{eq:rho'(t)} \end{align}\)

Notice that in the above equation one has used \(\mathrm{e}^{-i\hat{\mathcal{L}}^{\circ}t}V=V(-t)\), rather than \(V(t)\), because phase-space functions evolve in time with a sign opposite to that of phase-space densities (cf. Eqs. [eq:liouville-equation-2] and [eq:F(q,p,t)-time-evolution-2]). Inserting Eq. \eqref{eq:rho'(t)} into Eq. \eqref{eq:<A(t)>}, one obtains:
\(\begin{align} \bar{A}(t)=\bar{A}^{\circ}+\int_{0}^{t}\lambda(t') \int \{ V(t'-t),\rho^{\circ} \} (\Gamma)A(\Gamma)\,d\Gamma\,dt'+\mathcal{O}(\lambda^{2}). \label{eq:A'(t)} \end{align}\)
The above equation can be cast into the form:

\(\begin{align} \bar{A}(t) = \bar{A}^\circ + \int_0^t \chi_{AV}(t-t') \lambda(t') dt', \end{align}\)
where

\(\begin{align} \begin{aligned} \chi_{AV}(t) &= \theta(t)\int\{V(-t),\rho^{\circ}\}(\Gamma)A(\Gamma)d\Gamma \\ &= \theta(t)\int\{V,\rho^{\circ}\}(\Gamma)A(\Gamma,t)d\Gamma \\ &= \theta(t)\int\left[\frac{\partial V}{\partial q}\frac{\partial\rho^{\circ}}{\partial p}-\frac{\partial V}{\partial p}\frac{\partial\rho^{\circ}}{\partial q}\right]A(t)d\Gamma \\ &= \theta(t)\int\left[-\frac{\partial V}{\partial q}\frac{\partial A(t)}{\partial p}+\frac{\partial V}{\partial p}\frac{\partial A(t)}{\partial q}\right]\rho^{\circ}d\Gamma \\ &= \theta(t)\langle\{A(t),V\}\rangle, \end{aligned} \label{eq:chi-AV-1} \end{align}\)
and \(\langle\bullet\rangle\) indicates a phase-space average performed with respect to the unperturbed density, \(\rho^{\circ}\). In the above equation we pass from the first to the second line by shifting the time dependence from \(V\) to \(A\), and from the third to the fourth by integrating by parts.

Let us now suppose the unperturbed density to be canonical: \(\rho^{\circ}(\Gamma)\propto\mathrm{e}^{-\beta H^{\circ}(\Gamma)}\). The Poisson bracket in the second line of Eq. \eqref{eq:chi-AV-1} then reads:

\(\begin{align} \begin{aligned} \{V,\rho^{\circ}\} &= \frac{\partial V}{\partial q}\frac{\partial\rho^{\circ}}{\partial p}-\frac{\partial V}{\partial p}\frac{\partial\rho^{\circ}}{\partial q} \\ &= -\beta\rho^{\circ}\left(\frac{\partial V}{\partial q}\frac{\partial H^{\circ}}{\partial p}-\frac{\partial V}{\partial p}\frac{\partial H^{\circ}}{\partial q}\right) \\ &= \beta\rho^{\circ}\{H^{\circ},V\} \\ &= -\beta\rho^{\circ}\dot{V}. \end{aligned} \label{eq:AVdot} \end{align}\)

Inserting Eq. \eqref{eq:AVdot} into (the second line of) Eq. \eqref{eq:chi-AV-1}, one finally obtains:

\(\begin{align} \begin{aligned} \chi_{AV}(t) &=-\beta\theta(t)\langle A(t)\dot{V}\rangle \\ &=-\beta\theta(t)\langle A\dot{V}(-t)\rangle \\ &=-\beta\theta(t)\dot{S}_{VA}(-t) \\ &=\phantom{-}\beta\theta(t)\dot{S}_{AV}(t), \end{aligned} \label{eq:chi-S} \end{align}\)

where \(S_{AV}(t)\doteq\langle A(t)V\rangle\) is the equilibrium time-correlation function of \(A\) and \(V\), and the last step follows from the properties:

\(\begin{align} \begin{aligned} S_{AV}(t) &= S_{VA}(-t) \\ \dot{S}_{AV}(t) &=-\dot{S}_{VA}(-t), \end{aligned} \end{align}\)

the second of which follows from differentiating the first.
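As a sanity check of \(\chi_{AV}(t)=\beta\theta(t)\dot{S}_{AV}(t)\), consider a harmonic oscillator with \(m=\omega=\beta=1\) and \(A=V=x\): sampling initial conditions from the canonical distribution and evolving them analytically, the Monte Carlo estimate of \(\beta\dot{S}_{AV}(t)=\beta\langle\dot{x}(t)x(0)\rangle\) can be compared with the exact impulse response of the kicked oscillator. All parameter choices below are illustrative, not taken from these notes:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000
# canonical sampling for m = omega = beta = 1: x, p ~ N(0, 1)
x0 = rng.normal(size=N)
p0 = rng.normal(size=N)

t = 0.7
# exact trajectories: x(t) = x0 cos t + p0 sin t, and xdot(t) = p(t)
pt = -x0*np.sin(t) + p0*np.cos(t)

# Monte Carlo estimate of beta * Sdot_AV(t) = beta * <xdot(t) x(0)>
chi_mc = np.mean(pt * x0)

# exact response: a delta kick lambda(s) = delta(s) in H' = lambda*x exerts
# the impulsive force -dH'/dx = -1, so p jumps by -1 and delta x(t) = -sin t
chi_exact = -np.sin(t)
print(chi_mc, chi_exact)  # agree to O(1/sqrt(N))
```

The two numbers agree within statistical noise, confirming both the sign and the magnitude of the classical response formula in this convention (\(H'=+\lambda V\)).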

Let us now compute the Laplace-Fourier transform of this last expression for \(\chi_{AV}\), integrating by parts:

\(\begin{align} \begin{aligned} \tilde{\chi}_{AV}(z) &= \beta\int_{0}^{\infty}\dot{S}_{AV}(t)\mathrm{e}^{izt}dt \\ &= \beta\left[-S_{AV}(0)-iz\int_{0}^{\infty}S_{AV}(t)\mathrm{e}^{izt}dt\right], \end{aligned} \end{align}\)

and let \(\chi'(\omega)\) and \(\chi''(\omega)\) be the real and imaginary parts of \(\tilde{\chi}\left(\omega+i0^{+}\right)\). If the time-reversal signatures of \(A\) and \(V\) are the same, one has \(S_{AV}(t)=S_{AV}(-t)\) and therefore:

\(\begin{align} \tilde{S}_{AV}(\omega)=\int_{-\infty}^{\infty}S_{AV}(t)\mathrm{e}^{i\omega t}dt=2\mathrm{Re}\int_{0}^{\infty}S_{AV}(t)\mathrm{e}^{i\omega t}dt\in\mathbb{R}. \end{align}\)

We conclude that:

\(\begin{align} \mathrm{Im}\tilde{\chi}_{AV}(\omega)=-\frac{\beta\omega}{2}\tilde{S}_{AV}(\omega).\quad\quad\quad(\tau_{A}\tau_{V}=1) \end{align}\)

This is the celebrated fluctuation-dissipation theorem, which relates the absorptive part of a response function to the equilibrium correlations of the corresponding fluctuations.

### The Green-Kubo and Einstein-Helfand expressions for transport coefficients

For the time being, please refer to Sec. 1.2 of *Heat transport in insulators from ab-initio Green-Kubo theory* by S. Baroni *et al.*

## Lecture 3 - Heat transport

For the time being, please refer to Sec. 1.3 of *Heat transport in insulators from ab-initio Green-Kubo theory* by S. Baroni *et al.*

### Thermal *vs.* mechanical perturbations: heat transport

For the time being, please refer to Sec. 1.3 of *Heat transport in insulators from ab-initio Green-Kubo theory* by S. Baroni *et al.*

### The classical formula for the heat current

See Sec. 1.3.1 of the above paper.

### Gauge invariance of transport coefficients

See Sec. 2 of the above paper.

### Density-functional theory of adiabatic heat transport

See Sec. 3 of the above paper.