- Prandtl’s Ansatz is false for shear profiles that are unstable to the Rayleigh equation;
- Prandtl’s asymptotic expansion is invalid for shear profiles that are monotone and stable to the Rayleigh equation.

Precisely, we consider the incompressible Navier-Stokes equations on the half plane, with a forcing:

As the viscosity goes to zero, we expect the solutions to converge to a solution of the Euler equations. The justification of this convergence is however very delicate, since the boundary conditions change dramatically. As a consequence, a boundary layer is expected near the boundary in order to describe the transition between the Navier-Stokes boundary conditions and the Euler boundary conditions. To take this transition into account, Prandtl, back in 1904, introduced the following Ansatz

where the corrector describes the behavior of the solution in a thin boundary layer, called Prandtl’s boundary layer, and the remainder tends to zero in the inviscid limit.
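For reference, the Ansatz is classically written as follows; this is the standard textbook form (with the usual square-root-of-viscosity boundary layer scale), stated in generic notation rather than quoting the precise formulas of the original papers:

```latex
u^{\nu}(t,x,y) \;=\; u^{E}(t,x,y) \;+\; u^{P}\!\left(t,x,\frac{y}{\sqrt{\nu}}\right) \;+\; o(1)_{\nu\to 0},
```

where $u^{\nu}$ solves the Navier-Stokes equations with viscosity $\nu$, $u^{E}$ solves the Euler equations, and the corrector $u^{P}$ is localized near the boundary $\{y=0\}$ at scale $\sqrt{\nu}$.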

The existence of the Prandtl corrector was proven for monotonic initial data by Oleinik in the 1960s. For analytic initial data, the existence of the corrector, together with the validity of Prandtl’s Ansatz, was established by Caflisch and Sammartino in 1998. This latter result in particular proves that if a boundary layer Ansatz exists to describe the limiting behavior, then it must be of the above Prandtl form. However, considering analytic initial data is too restrictive, since it precludes small but high-frequency perturbations, which are more physically relevant. Up to now, there was no result which proved, or disproved, Prandtl’s Ansatz for Sobolev initial data, despite several efforts over the years, and our work is the first in this direction: namely, we prove that there exist particular regular initial data for which Prandtl’s Ansatz is *wrong*.

Precisely, we consider a time-independent shear layer profile, which is a solution of the form

* *

where the profile is a sufficiently smooth function which converges to a constant at infinity. To get a time-independent shear layer profile, we add a stationary forcing term which compensates for the viscosity; precisely, we take

* *

These shear profiles are smooth solutions of the Navier-Stokes, Prandtl, and heat equations, with the same forcing . As will be seen, our instability results occur at a vanishing time, and thus some of them can also be extended to time-dependent shear profiles.

Our two main theorems read as follows:

Theorem 1 (Grenier-Toan 2018) Let be a time-independent, analytic, and Rayleigh unstable shear profile. Then, for any and arbitrarily large, there exists , and a sequence of solutions of the Navier-Stokes equations with forcing , on some interval , such that

but

and

Theorem 2 (Grenier-Toan 2017) Let be a sufficiently smooth, time-dependent, monotone, and Rayleigh stable shear profile. Then, for any and arbitrarily large, there exists , and a sequence of solutions of the Navier-Stokes equations with forcing , on some interval , such that

but

and

The Rayleigh stable or unstable shear profiles are those that are stable or unstable for the corresponding Rayleigh equation (the linearized Euler equations). Theorem 1 disproves Prandtl’s Ansatz. Physically, a Rayleigh unstable profile may correspond to a reverse flow, which thus rules out the exponential profile . The Prandtl equation is well posed for and for neighboring analytic profiles. However, we do not know whether the Prandtl equation is well posed for nearby profiles with only Sobolev regularity. When has no inflection point, or is stable for the Rayleigh equation as in Theorem 2, we do not know how to prove an order-one instability result as in Theorem 1. Nevertheless, Theorem 2 proves that there exists no asymptotic expansion of Prandtl’s type. The question of the instability of monotonic shear profiles remains open.


Precisely, we consider the Vlasov-Poisson system:

on , with denoting the charge density. The starting point in constructing Bardos-Degond’s solutions is the dispersive estimate for the free transport

posed on the whole space. Indeed, the solution reads $f(t,x,v) = f_0(x - vt, v)$, and hence

This implies that as long as is integrable, the density satisfies . Here and in what follows, we consider the time , for otherwise the density is already bounded by . In addition, a direct computation also yields , for , under some extra regularity assumption on initial data.
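The dispersive mechanism can be illustrated numerically. The sketch below is done in one space dimension for simplicity (where the decay rate is $1/t$; in higher dimensions it is faster), and the Gaussian datum and the grids are my own illustrative choices:

```python
import numpy as np

# Free transport f_t + v f_x = 0 has the exact solution
# f(t, x, v) = f0(x - v t, v), so in 1D the density is
#   rho(t, x) = ∫ f0(x - v t, v) dv = (1/t) ∫ f0(y, (x - y)/t) dy
# after the change of variables y = x - v t, giving the decay
#   sup_x rho(t, x) <= (1/t) * || sup_v f0(., v) ||_{L^1_x}.

def f0(x, v):
    # a smooth, localized initial datum (chosen only for illustration)
    return np.exp(-x**2 - v**2)

v = np.linspace(-8.0, 8.0, 3201)
dv = v[1] - v[0]

def rho_sup(t):
    # sup_x of rho(t, x) = ∫ f0(x - v t, v) dv, on a large spatial grid
    xs = np.linspace(-40.0, 40.0, 801)
    dens = np.array([np.sum(f0(x - v * t, v)) * dv for x in xs])
    return dens.max()

# t * sup_x rho(t, x) stays bounded: the density decays like 1/t in 1D
for t in [2.0, 4.0, 8.0]:
    print(t, t * rho_sup(t))
```

For this particular Gaussian datum the density is explicit, rho(t, x) = sqrt(pi/(1+t^2)) exp(-x^2/(1+t^2)), so the printed products approach sqrt(pi).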

Going back to the Vlasov-Poisson system (1), once the density is shown to decay at order , the standard elliptic estimates yield decay for the electric field; namely,

On the other hand, the standard elliptic estimates yield that decays essentially as fast as , possibly up to some logarithmic growth in time: precisely,

The decay will be sufficient to apply the standard nonlinear iteration, yielding the global solution to the Vlasov-Poisson system.

Theorem 1 (Bardos-Degond ’85) There exists a positive constant so that for any initial data satisfying

for , smooth solutions to the Vlasov-Poisson system (1) exist globally in time. In addition, there is some universal constant so that

for all .

In what follows, I give a proof of the Bardos-Degond theorem.

**1.1. Estimates on characteristics**

Recall the particle trajectories solving

with . Assuming a priori that is sufficiently small and decays as in (3), we show that the particle trajectories are not far from those from the free transport. To keep track of the iteration, let us introduce

Lemma 2 There is a universal constant so that for any , as long as remains sufficiently small, there hold

*Proof:* Fix . For any , we compute

with and . Integrating with respect to time yields

Let be the right hand side. Using (5), we check that

for some universal constant . That is, maps to itself and is contractive for sufficiently small . This proves the claimed estimate for . The estimates for the other quantities follow similarly. The lower bound (6) follows from the estimate on .
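The contraction argument can be illustrated on a toy characteristic ODE. In the sketch below, the right-hand side g and the short time interval are my own illustrative choices; the point is only that the Picard map contracts in the sup norm on a short interval:

```python
import numpy as np

# Solve the toy characteristic ODE x'(t) = g(x(t)), x(0) = x0, by Picard
# iteration T[x](t) = x0 + ∫_0^t g(x(s)) ds; on a short interval T is a
# contraction, so successive iterates approach each other geometrically.
g = lambda x: np.sin(x)          # a stand-in for the (small) field term
t = np.linspace(0.0, 0.5, 501)   # short interval: Lip(g) * 0.5 < 1
dt = t[1] - t[0]
x0 = 1.0

def T(x):
    # trapezoid-rule cumulative integral of g(x(s)) from 0 to t
    integrand = g(x)
    integral = np.concatenate(
        ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * dt)))
    return x0 + integral

x = np.full_like(t, x0)
gaps = []
for _ in range(6):
    x_new = T(x)
    gaps.append(np.max(np.abs(x_new - x)))  # sup-norm gap between iterates
    x = x_new
print(gaps)  # the gaps shrink (super-)geometrically
```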

**1.2. Density estimates**

We obtain the following a priori estimates on the density .

Lemma 3 Let be defined as in (5). Assume that and its derivatives satisfy

for and for some finite . Then, as long as remains sufficiently small, there holds

for .

*Proof:* Recall that solutions to the Vlasov-Poisson system satisfy

Thus, similar to (2), we compute

Using (6), we obtain

which proves the estimate for , when . For , we have . As for derivatives, we compute

Since both and are uniformly bounded from Lemma 2, the estimate for follows similarly.

**1.3. Nonlinear iteration**

Let us now introduce the nonlinear iteration. Set

Our goal is to prove that for sufficiently large and sufficiently small , there holds

which would end the proof of Theorem 1, upon taking sufficiently small. Note that the local existence theory shows that exists for some short time and is continuous. The estimate (7) yields a global solution, upon using the standard continuous induction.

We need to check each term in the definition of . First, using Lemma 3, we have

Note that by definition (5), . As for the fields, the elliptic estimates yield

Next, using the elliptic estimate (4), we estimate

Thus, taking large so that and small so that , we obtain (7) at once. Bardos-Degond’s global solutions follow.


To fix ideas, consider the following system of reaction-diffusion equations

with being sufficiently smooth.

**Wave trains.** Typically, the system exhibits one-parameter families of spatially-periodic travelling wave solutions or wave trains that are parametrized by their spatial wavenumber : namely,

for in an open nonempty interval, where the profile is -periodic in and denotes the temporal frequency, often referred to as the nonlinear dispersion relation associated to the wave trains. Wave trains therefore propagate with the phase velocity , as one can directly check that wave trains solve the transport equation . To visualize, one may think of the simplest possible example: the traveling sinusoidal wave , for some nonzero .
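For the sinusoidal example, one can verify the phase-velocity statement numerically (the wavenumber and frequency below are arbitrary choices of mine):

```python
import numpy as np

# A sinusoidal wave train u(x, t) = sin(k x - w t) propagates with the
# phase velocity c = w/k, i.e. it satisfies u_t + c u_x = 0.
k, w = 2.0, 3.0
c = w / k

def u(x, t):
    return np.sin(k * x - w * t)

x = np.linspace(0.0, 6.0, 25)
h = 1e-6
# central finite differences for u_t and u_x
u_t = (u(x, h) - u(x, -h)) / (2 * h)
u_x = (u(x + h, 0.0) - u(x - h, 0.0)) / (2 * h)
print(np.max(np.abs(u_t + c * u_x)))  # ~ 0, up to finite-difference error
```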

**Group velocity.** An interesting quantity associated with a wave train is its group velocity , which is defined by the derivative of the nonlinear dispersion relation:

The group velocity turns out to be the speed of propagation of small localized wave-packet perturbations along the wave train as a function of time . More precisely, for near , we may write

That is, the “envelope” of the wave packet formally travels with the speed , the group velocity. This can also be made rigorous; for instance, see Doelman-Sandstede-Scheel-Schneider.
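The group-velocity heuristic can also be checked by hand on a two-mode packet. In the sketch below, the dispersion relation w(k) = k^2 is an assumed illustrative choice, not the one coming from the reaction-diffusion system:

```python
import numpy as np

# Superposing two nearby wavenumbers k1, k2 produces a carrier moving at
# the phase speed and an envelope cos(dk * (x - vg t)) moving at
# vg = (w2 - w1)/(k2 - k1) ~ w'(k), the group velocity.
def w(k):
    return k**2  # assumed dispersion relation (illustrative)

k1, k2 = 1.9, 2.1
w1, w2 = w(k1), w(k2)
kb, wb = (k1 + k2) / 2, (w1 + w2) / 2   # carrier wavenumber / frequency
dk = (k2 - k1) / 2
vg = (w2 - w1) / (k2 - k1)              # = k1 + k2, i.e. ~ w'(2) = 4

x = np.linspace(-50.0, 50.0, 2001)
t = 3.0
packet = np.cos(k1 * x - w1 * t) + np.cos(k2 * x - w2 * t)
factored = 2 * np.cos(kb * x - wb * t) * np.cos(dk * (x - vg * t))
print(np.max(np.abs(packet - factored)))  # exact trig identity: ~ 0
print(vg)                                 # the envelope speed w'(k)
```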

**Defects.** Defect solutions in fact arise in many biological, chemical, and physical processes: examples are planar spiral waves, flip-flops in chemical reactions, and surface waves in hydrothermal fluid flows. They are characterized as interfaces between stable wave trains with possibly different wavenumbers on each side. Precisely, defects are an exact solution to (1) of the form

where the defect profile is assumed to be periodic in and, for appropriate left and right wavenumbers , we have that

uniformly in so that the defect converges to possibly different wave trains in the far field, that is, as . We remark that periodicity of the defect profile in time implies that , with being the time period of the defect, and hence the defect velocity is determined by the Rankine–Hugoniot condition

Depending on the relative sign of group velocities of the asymptotic wave trains on each side of the defect, Sandstede and Scheel have systematically classified them into four distinct types:

| defect type | existence | group velocities |
| --- | --- | --- |
| sinks | exist for arbitrary wavenumbers | when $c_{g,-} > c_d > c_{g,+}$ |
| transmission defects | exist for arbitrary wavenumbers | when $c_{g,\pm} > c_d$ or $c_{g,\pm} < c_d$ |
| contact defects | exist for arbitrary wavenumbers | when $c_{g,-} = c_d = c_{g,+}$ |
| sources | exist for a unique pair of wavenumbers | when $c_{g,-} < c_d < c_{g,+}$ |

Here $c_{g,\pm}$ denote the group velocities of the asymptotic wave trains and $c_d$ the defect speed.

This way, sinks can be thought of as passive interfaces that accommodate two colliding wave trains. Similarly, transmission and contact defects accommodate phase-shift dislocations within wave-train solutions. On the other hand, source defects occur only for discrete wavenumbers and therefore select the wavenumbers of the wave trains that emerge from the defect core into the surrounding medium: hence, they may be thought of as organizing the surrounding global dynamics, rather than the reverse. Yet, among the main non-characteristic varieties (sinks, transmission defects, and sources), source defects are the only type whose stability properties had not been understood mathematically.

Only recently, the team of M. Beck, B. Sandstede, K. Zumbrun, and myself was able to settle this longstanding open problem: the nonlinear stability of spectrally stable source defects, which I shall now describe. Working in the co-moving frame, we study the perturbed source solution of the form

leading to the following system for perturbation :

The problem is to prove that the perturbation remains small, if initially so (which at first sight appears hopeless, since solutions to the heat equation with quadratic nonlinearity can blow up in finite time), or rather to understand the large time dynamics of the perturbation. Assuming that source defects are spectrally stable (for the linearized problem), there are several difficulties in proving nonlinear stability. First, sources are time-periodic solutions, and hence their spectrum has to be understood in the sense of Floquet, via the time-periodic map of the corresponding linearization. In addition, there are two eigenvalues at the origin, with corresponding eigenfunctions (corresponding to the spacetime invariances), while the continuous spectrum touches the origin, leaving no spectral gap between the continuous spectrum and the imaginary axis. These difficulties are in fact not new, and have been treated elsewhere in similar contexts; for instance, Beck-Sandstede-Zumbrun treat time-periodic viscous Lax shocks with asymptotically constant end states. The present situation is much more delicate, as the asymptotic “states” are not constant, but rather spatially periodic wave trains.

What greatly troubles us is that localized perturbations can lead to a non-localized response, precisely due to the fact that the group velocities on each side of the defect have opposite signs and point outward, away from the defect core, thus forming a plateau-like structure in the phase dynamics. In particular, the $L^p$ norms of the perturbation can in fact grow in time for finite $p$ (only the $L^\infty$ norm remains bounded). I will now detail this point.

First of all, due to the spacetime translation invariance, the corresponding derivatives of the defect profile are in the kernel of . As a consequence, we need to take care of projections of the perturbation onto the kernel of ; namely, we write the perturbation , and aim to bound the unknowns . As a matter of fact, it turns out to be more convenient to look for solutions of the form, in place of (3),

for unknowns . A direct computation then yields, in place of (4),

We stress that the nonlinearity appears precisely in derivatives of the spacetime shift functions , which is crucial in the nonlinear iteration. Recall now that is the linearization around a time-periodic defect solution , which at the far fields are the asymptotic wave trains whose group velocities have opposite sign and point outward away from the defect core (i.e., ).

Loosely speaking, let us consider the following toy model

which is the simplest possible model that incorporates both the dynamical effect of the core and the correct far-field dynamics of the spacetime translation functions. Notice that the group velocities in the far field have opposite signs and point away from each other: . The problem is reduced to studying the large-time stability of the zero solution of (7). Here, we wish to obtain the precise large-time dynamics of the phase solution . The Green function of the linearized problem (7) (around zero) is in fact explicit:

where denotes the error function, and the Gaussian term behaves like . That is, the two error functions precisely produce a plateau of constant height and of an expanding width of order that spreads outward; this term arises because of the zero eigenvalue of the corresponding linearized problem. In particular, the norm of this plateau grows in time of order . In the nonlinear analysis, we write , where denotes the leading plateau due to the error functions in the Green function, with an unknown function carefully introduced to collect the height of the plateau, formally leaving the remainder to solve

The nonlinear iteration follows similarly to the case of heat equations with nonlinearity .
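To see the plateau mechanism concretely, here is a small numerical sketch of the error-function plateau; the outgoing speed and the diffusion scale below are arbitrary illustrative constants, not the ones from the actual linearization:

```python
import numpy as np
from math import erf

# The difference of two error functions centered at -ct and +ct builds a
# plateau of height ~ 1 and width ~ 2ct with diffusive edges of width
# sqrt(t): the sup norm stays bounded while L^p norms (p finite) grow.
c = 1.0

def plateau(x, t):
    Phi = lambda z: 0.5 * (1.0 + erf(z))   # errfn-type profile
    return Phi((x + c * t) / np.sqrt(4.0 * t)) \
         - Phi((x - c * t) / np.sqrt(4.0 * t))

x = np.linspace(-400.0, 400.0, 8001)
dx = x[1] - x[0]

results = {}
for t in [25.0, 100.0]:
    e = np.array([plateau(xi, t) for xi in x])
    sup = e.max()                           # bounded: ~ 1 for large t
    l2 = np.sqrt(np.sum(e**2) * dx)         # grows like sqrt(2 c t)
    results[t] = (sup, l2)
    print(t, sup, l2)
```

Quadrupling the time roughly doubles the L^2 norm while leaving the sup norm at height one, which is exactly the growth/boundedness dichotomy described above.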

Going back to the case of source defects, it’s crucial in our analysis to get the “right” Green function behavior for the linearized problem about a source defect; namely,

with collecting the leading behavior (error functions, plus a Gaussian) and being the remainder, which behaves like a derivative of a Gaussian. Recall that are the two eigenfunctions associated with the two eigenvalues at the origin. Here, it is crucial that the remainder is of the order of a derivative of a Gaussian, since otherwise (i.e., behaving like a Gaussian) the solution could blow up, as in the case of heat equations with quadratic nonlinearity; see (6). The leading Green kernels correspond to the spacetime translations.

In view of (6), we write Duhamel’s principle

in which I.C. denotes the contribution from initial data. Thanks to the Green function decomposition (8), the unknown spacetime shifts are constructed via

which are corresponding projections of solutions onto the kernel of , leaving the remainder satisfying

Now, the standard nonlinear iteration follows. To conclude, our main result is roughly as follows:


Theorem 1 (Beck, Toan, Sandstede, Zumbrun, preprint 2018) Assume that source defects are spectrally stable. Then, for initial data that are sufficiently close to the defect, the solution of (1) exists globally in time, and there are functions and such that

That is, perturbed solutions converge to a shifted defect solution. In addition, the spacetime shifts are approximately an expanding plateau: namely, there are smooth functions , which are exponentially close to constants, such that

for all . Here, is the expanding plateau:

with group velocities . This gives a rather complete picture of the dynamics near stable source defects.

on , with the relativistic velocity .

The fields are defined through the Maxwell system, which can be written in terms of the electric and magnetic potentials as and , with , solving

with being the Leray projector onto divergence-free vector fields.

Here, is the inverse of the speed of light. We are interested in the so-called non-relativistic limit: (i.e., the speed of light is sufficiently large as compared to initial data and other parameters in the model). Formally, as we obtain the Vlasov-Poisson system

which is widely used in plasma physics to describe the dynamics of electrons. On any fixed time interval, the convergence from Vlasov-Maxwell to Vlasov-Poisson can be made rigorous. Formally speaking, one observes from the wave equation for the magnetic potential that the standard energy estimate yields

which asserts that the nonlinear coupling is formally of order on any finite time interval. The convergence then follows from Gronwall’s lemma, for instance, for smooth solutions in high Sobolev spaces. For the convergence on any finite time interval, see the classical papers of Asano-Ukai, Degond, and Schaeffer ’86.

In this work, we are interested in justifying the non-relativistic limit on a large time interval. It is worth noting immediately that the existence of global in time smooth solutions to the Vlasov-Maxwell system is a challenging open problem, except for data in lower dimensions (i.e., 1D or 2D in ) or for data that are small on the whole space; Glassey-Schaeffer ’88 and ’97-’98.

We precisely study regular solutions near homogeneous equilibria; namely, solutions of the form

for small perturbation data of size . For simplicity, take . The Vlasov-Maxwell system then reads

with . As might be of order one as seen from the energy estimate of the wave equation, it is convenient to introduce

which essentially does not change the charge and current density. The Vlasov-Maxwell becomes

Together with , the Vlasov-Maxwell system is formally a perturbation of the linearized Vlasov-Poisson system near the homogeneous equilibrium .

Two scenarios:

- is Penrose unstable to Vlasov-Poisson (e.g., double bump equilibria), in which case there are exact solutions to Vlasov-Maxwell so that
This is proved in Han-Kwan & Toan ’16.

- is Penrose stable to Vlasov-Poisson (e.g., Gaussian or in fact any radial and positive equilibria), in which case the linearized Vlasov-Poisson is invertible (e.g., Mouhot-Villani on Landau damping or Han-Kwan & Rousset on quasineutral limit), which I shall detail further, below.

Consider the linearized Vlasov-Poisson system with source , written in Laplace-Fourier variables,

Integrating in variable yields

The Penrose condition (e.g., Mouhot-Villani) reads

which implies that

Going back to Vlasov-Maxwell system:

we can perform the Penrose inversion, plus bootstrap, yielding

yielding uniform bounds for large time of order . Several comments are in order. First, high Sobolev norms are needed to treat the nonlinearity. Second, and more seriously, the usual energy estimates for high derivatives in fact do not work, due to an apparent loss of derivatives when working with the density , instead of . That is, we lose the divergence structure in deriving estimates for . We overcome the issue by straightening the characteristics, an idea developed by Han-Kwan & Rousset in their work on the quasineutral limit: namely, we introduce the change of variables , in which is chosen so that the characteristics in the new variables are constant in .
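As an aside, the Penrose dichotomy between the two scenarios above can be checked numerically. The sketch below is a 1D reduction of my own, for illustration only; it uses the integration-by-parts form of the Penrose integral to compare a Gaussian with a double-bump equilibrium:

```python
import numpy as np

# At a critical point v0 of the equilibrium mu, integration by parts gives
#   PV ∫ mu'(v)/(v - v0) dv = ∫ (mu(v) - mu(v0))/(v - v0)^2 dv,
# an absolutely convergent integral; a positive value at a local
# *minimum* of mu signals a band of Penrose-unstable wavenumbers.
def penrose_integral(mu, v0, L=20.0, n=400000):
    dv = 2.0 * L / n
    v = v0 - L + (np.arange(n) + 0.5) * dv  # midpoint grid avoids v = v0
    return np.sum((mu(v) - mu(v0)) / (v - v0) ** 2) * dv

g = lambda v: np.exp(-v**2 / 2.0) / np.sqrt(2.0 * np.pi)

gauss = lambda v: g(v)                            # Gaussian: v0 = 0 is a maximum
bump2 = lambda v: 0.5 * (g(v - 2.0) + g(v + 2.0))  # double bump: v0 = 0 is a minimum

print(penrose_integral(gauss, 0.0))  # negative (~ -1): Penrose stable
print(penrose_integral(bump2, 0.0))  # positive: Penrose unstable
```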

Next, we can in fact go beyond the time of order , precisely to any order of , at the expense of a restriction to well-prepared initial data. Let me detail this point. Recall the Vlasov-Maxwell system

The idea is then to rely on the so-called (high-order) Darwin approximation: namely formally, we write

That is, one may express in terms of and , which can be computed in terms of higher moments.

To conclude, we end up with (high-moment) Penrose inversions of the form

for some bounded operators in , in which denotes the moments of , yielding the following theorem.

Theorem 1 (Han-Kwan, Rousset, Toan, arXiv preprint 2017) Let be a radial, positive, smooth, and fast decaying equilibrium. For any and for any initial data that are well-prepared up to order , there is a unique smooth solution of the relativistic Vlasov-Maxwell system, satisfying

for and for arbitrarily large .

This is, to the best of our knowledge, the first instance of long time estimates for a singular limit in which the Vlasov-Poisson system is the target equation. As a matter of fact, polynomial times were also reached in the context of the mean field limit, but for the case of smoother interaction potentials (Caglioti-Rousset ’08). The linear estimates given above can be interpreted as a type of linearized Landau damping for the Vlasov-Maxwell system on the torus , for large time and for well-prepared initial data.


**1. Basic quantum mechanics**

The state of a classical particle is described by its position and momentum , while the state of a quantum particle is described by a wavefunction at each instant of time , with giving the probability of finding the particle at position in .

In 1924, Louis de Broglie proposed that all particles can be represented by waves of the form , having their frequency and wavenumber determined through the energy relation and the momentum relation , with Planck’s constant . Viewing and as differential operators acting on the wave, one gets

For instance, consider a particle in a potential . Its energy is computed by

Using de Broglie’s relations, we get a PDE

posed on the physical space . This is the well-known *Schrödinger equation*, describing the dynamics of a wave representing one particle in a potential. The equation is the foundation of quantum mechanics, where it plays as fundamental a role as Hamilton’s equations do in classical mechanics.

For a real-valued potential , the Hamiltonian is self-adjoint in the space , equipped with the usual inner product . This implies that is unitary on , for , and in particular, remains a probability measure if it initially is. In what follows, we normalize so that .
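This unitarity is easy to observe numerically. The following sketch evolves a wave packet exactly in Fourier space and checks that the total probability is conserved; the free equation, the units hbar = m = 1, and the periodic grid are simplifications assumed for illustration:

```python
import numpy as np

# Free Schrödinger equation i psi_t = -(1/2) psi_xx on a periodic grid,
# solved exactly in Fourier space; since the propagator e^{-iHt} is
# unitary, the L^2 norm (total probability) is conserved.
n, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = L / n
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)        # angular wavenumbers

psi0 = np.exp(-x**2) * np.exp(2j * x)          # a Gaussian wave packet
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)  # normalize: ||psi0||_2 = 1

def evolve(psi, t):
    # exact free evolution: multiply each Fourier mode by e^{-i k^2 t / 2}
    return np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi))

for t in [0.5, 2.0]:
    norm = np.sum(np.abs(evolve(psi0, t))**2) * dx
    print(t, norm)   # stays equal to 1 (unitarity)
```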

**Heisenberg’s picture.** As the physical state is a wave function, observables are understood as operators acting on the state . For instance, the expected value of is For states satisfying (1), we compute

with the commutator notation . When the state is understood, we simply write

This is often referred to as the Heisenberg equation, or the Heisenberg picture of quantum mechanics, as opposed to (1), the Schrödinger picture. The Heisenberg equation plays a fundamental role in quantum mechanics, as does the Liouville equation in the classical case.

**Density operator.** As in classical mechanics, one is interested in the dynamics of some type of density distribution , which is now a self-adjoint nonnegative operator in . Precisely, we introduce , the density operator defined by

or explicitly, , for all in . Since , is a bounded and positive definite operator on . In particular, . We note that is a bounded integral operator on , whose kernel is

Lemma 1 Let , with the Hamiltonian as in (1). Then, the density operator satisfies the Heisenberg equation

*Proof:* It follows from a direct computation, starting from (3). In terms of the integral kernel, it reads

which gives (4), upon integrating in .
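A finite-dimensional sketch of Lemma 1, with a random Hermitian matrix standing in for the Hamiltonian (purely an illustrative assumption), verifies the Heisenberg equation numerically:

```python
import numpy as np

# For a Hermitian H and rho(t) = e^{-iHt} rho0 e^{iHt} (hbar = 1),
# one has i rho'(t) = [H, rho(t)].
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2                    # Hermitian "Hamiltonian"

psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)
rho0 = np.outer(psi, psi.conj())            # density operator |psi><psi|

lam, Q = np.linalg.eigh(H)
U = lambda t: Q @ np.diag(np.exp(-1j * lam * t)) @ Q.conj().T  # e^{-iHt}

def rho(t):
    return U(t) @ rho0 @ U(t).conj().T

t, h = 0.7, 1e-6
drho = (rho(t + h) - rho(t - h)) / (2 * h)  # numerical d/dt of rho
comm = H @ rho(t) - rho(t) @ H              # the commutator [H, rho(t)]
print(np.max(np.abs(1j * drho - comm)))     # ~ 0
```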

**1.1. -particle quantum system**

We now consider identical particles of mass with a symmetric interaction potential . Let and be the -particle wavefunction, with giving the probability of finding particle at position at each instant of time . The -particle Schrödinger equation reads

posed on , with the Hamiltonian defined by

This is of course the classical Hamiltonian as in the previous chapter, with . As in the case of one-particle system, we introduce the density operator

whose integral kernel is computed by

for and . It follows that solves the -particle Heisenberg’s equation , or explicitly,

As in the classical case, we are interested in the density of the first particles. To this end, we introduce the density kernels

and inductively, for each , writing ,

We can then define their corresponding density operators. For instance, the density operator of the first one and two particles is defined by

which are operators on functions of one and two variables, respectively.

**1.2. BBGKY for quantum particles**

Starting from the Heisenberg equation for the -particle density kernel (7), we derive the BBGKY hierarchy for the density kernels of the first particles. The analysis is similar to what was done in the classical case. Indeed, we compute

By definition, this yields

For the sake of presentation, we introduce the one-particle Hamiltonian . The above yields

Inductively, for , we introduce the -particle Hamiltonian

The density kernel satisfies the following Heisenberg’s equation

in which and . To avoid confusion, we note that the commutator with the -particle Hamiltonian explicitly reads

The Heisenberg equations (9) are known as the BBGKY hierarchy for quantum particles. From the evolution of the density kernels, we can obtain the BBGKY hierarchy for the density operators ; we skip the details.

As an example, we assume that the -particle wave function can be factorized as

for the one-particle wave function at position . This implies that , and the interaction terms in the above hierarchy can be simplified. For instance, we compute

recalling that , and hence, with , we have

which is understood as an equation for operators acting on functions. In terms of the wave function, the above Heisenberg equation for yields the Schrödinger equation for

We shall come back to this equation in the next section, as a mean field limit equation of -particle Schrödinger equations.

**2. Mean field Hartree equations**

For mean field equations, we consider the Hamiltonian for -particle systems of the form

(noting we replace by ). Let be the solution to the -particle Schrödinger equation . The starting point for deriving the mean field equations is the hierarchy for the -particle density kernels

now indicating the -dependence for each -particle kernel. Here, the interaction operator

in which and .

Then, take initial wave configuration of a factorized form

for some one-particle wave function . Formally, one expects that in the limit of , the density kernels remain in some factorized state of a one-particle wave function. Precisely, as , we formally have

for some one-particle wave function , with . The goal is to find the dynamics of .

For each , letting , we have

and, using the factorization of ,

This yields the following infinite hierarchy equations for , for all ,

In particular, for , , and hence the Heisenberg’s equation (14) yields

posed on the physical space . Here, denotes the usual convolution in . This is known as the Hartree equation for the mean field limit of -particle quantum systems (an analogue of Vlasov equations in the classical case).

Example 1 (Schrödinger-Poisson) In the case when is the Coulomb potential, that is, in the three-dimensional case, the equation (15) is often referred to as the Schrödinger-Poisson system:

Example 2 (Cubic NLS or Gross-Pitaevskii) In the case of short-range potentials with and , in the limit we get , with , as . Hence, in the above computation, , and the Hartree equation (15) reduces to the cubic nonlinear Schrödinger equation, or the Gross-Pitaevskii equation:

with being a constant. The sign of corresponds to the focusing () or defocusing () case.

Remark 1 (Connection to the classical case) There is in fact an interesting connection between the Schrödinger equations with an interaction potential and the classical Vlasov models introduced in the previous chapter. This is made via the Wigner transform in the classical limit , a subject that is well documented (e.g., Lions-Paul or Gérard et al.). For instance, via the classical limit, one recovers the Vlasov-Poisson system from the Schrödinger-Poisson system, and the so-called Vlasov-Dirac-Benney system from the above Gross-Pitaevskii or cubic NLS equations.

For a rigorous justification of some of the above formal derivations and references, we refer the readers to the lecture notes by Golse.

**3. Quantum Boltzmann equations**

Arguing formally as in the classical case, the interacting quantum particles can also be described by the Boltzmann equations:

for (here, for simplicity, ; in general, the velocity is for an energy function ). Here, under the molecular chaos assumption, the integral collision operator becomes

in which

in which are computed via the conservation laws as in the classical case: and . The integral collision operator has new terms of the form , with the sign for **bosons** and the sign for **fermions**. These additional terms indicate that for fermions the occupation number cannot exceed one, while on the contrary, bosons have a tendency to cluster.

For the quantum Boltzmann models, Boltzmann’s H-theorem applies, yielding equilibrium states of the form

respectively for the two models. These states are often referred to in the literature as the **Bose-Einstein** and **Fermi-Dirac** distributions, for bosons and fermions respectively.
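The detailed-balance mechanism behind these equilibria can be checked directly. The sketch below, with arbitrary temperature and chemical potential of my own choosing, verifies that for Bose-Einstein and Fermi-Dirac states the gain and loss terms of the collision integrand balance whenever energy is conserved:

```python
import numpy as np

# For the Bose-Einstein state f(E) = 1/(e^{(E - mu)/T} - 1) one has
# f/(1 + f) = e^{-(E - mu)/T}, so whenever E + E1 = E' + E1' (energy
# conservation in a collision) the detailed-balance identity
#   f' f1' (1 + f)(1 + f1) = f f1 (1 + f')(1 + f1')
# holds, making the collision integrand vanish; for Fermi-Dirac the
# same works with the minus sign.
T, mu = 1.3, -0.5   # arbitrary temperature and (negative) chemical potential

def be(E):   # Bose-Einstein occupation number
    return 1.0 / (np.exp((E - mu) / T) - 1.0)

def fdist(E):   # Fermi-Dirac occupation number
    return 1.0 / (np.exp((E - mu) / T) + 1.0)

E, E1, Ep = 0.7, 1.9, 1.1
E1p = E + E1 - Ep            # energy conservation fixes the 4th energy

f, f1, fp, f1p = be(E), be(E1), be(Ep), be(E1p)
gain = fp * f1p * (1 + f) * (1 + f1)
loss = f * f1 * (1 + fp) * (1 + f1p)
print(abs(gain - loss))      # ~ 0 for bosons

f, f1, fp, f1p = fdist(E), fdist(E1), fdist(Ep), fdist(E1p)
print(abs(fp * f1p * (1 - f) * (1 - f1) - f * f1 * (1 - fp) * (1 - f1p)))  # ~ 0 for fermions
```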

The idea of using Boltzmann equations to describe quantum particles was developed much earlier, back in the 1920s, by physicists such as Nordheim, Uehling, and Uhlenbeck, among others. There appear, however, to be only a few mathematical results on the subject (see, for instance, Escobedo-Mischler, Lu, …).


Precisely, we consider the Vlasov-Poisson system (considering only the plasma case)

on , with denoting the charge density.

The global classical solution to the Cauchy problem for general compactly supported data was constructed by Pfaffelmoser ’91, Horst ’91, and Schaeffer ’91, and in fact even earlier, by Batt ’77 and Horst ’82 for data with symmetry and by Bardos-Degond ’85 for small data. Then, around the same time in 1991, Lions-Perthame proved the propagation of finite moments. It is also worth mentioning that the averaging lemma was introduced around this time by Golse-Lions-Perthame-Sentis ’88, giving extra regularity on the macroscopic density.

In this section, we present the proof of Schaeffer (see Glassey’s book, chapter 4) constructing the global classical solution. Precisely, the theorem reads

Theorem 1 For compactly supported initial data , there is a unique classical solution to the VP problem, with . In addition, the velocity support grows at most for large time, for any small positive .

Remark 1 Over the years, there have been efforts to improve the upper bounds on the velocity support. I shall not attempt to give the best possible results, but refer the readers to, for instance, Schaeffer ’11 and Pallard ’11 and ’12, where an upper bound essentially of order for large time is obtained. In addition, the compactly supported data can be relaxed to data with finite moments; see, for instance, Lions-Perthame ’91 and Pallard ’12.

**1.1. A priori estimates**

We shall derive various uniform a priori estimates for smooth solutions to the VP problem (1). As seen in the last chapter, the Hamiltonian or total energy

is conserved in time. This in particular yields the a priori energy bound . In addition, due to the transport structure, we have

for any time and for being the particle trajectory satisfying the ODEs

with initial data at . In particular, (2) yields the uniform bound: , for all .

*Proof:* For the first inequality, we write

Optimizing and recalling the conservation of energy give the first inequality. Similarly, by definition, we write

Again, by optimizing , the lemma follows.

Lemma 3 (Velocity support) For compactly supported initial data , the velocity support, defined by

*Proof:* Recalling (3), for bounded initial velocity , we have

By definition, we have . The lemma follows.

In particular, by Gronwall’s lemma, there is a positive time so that

We stress that and are bounded, as long as the velocity support remains bounded.

Remark 2 In the two-dimensional case, a similar analysis as in Lemma 3 yields the boundedness of the velocity support for all (finite) time .
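As a quick numerical sanity check on the mechanism behind Lemma 3 (a sketch only: the field E below is a hypothetical bounded field, not the self-consistent one), the discrete analogue of the bound Q(t) ≤ Q(0) + t·sup|E| holds along explicit-Euler characteristics:

```python
import numpy as np

# Hypothetical bounded field E(t, x) (an illustrative choice, NOT the
# self-consistent field of the VP system); here sup |E| = 1.
def E(t, x):
    return np.sin(x) * np.cos(t)

# Explicit Euler for the characteristics dX/dt = V, dV/dt = E(t, X).
def max_speed(x0, v0, T, n_steps=1000):
    h = T / n_steps
    x, v = x0, v0
    vmax = abs(v)
    for k in range(n_steps):
        x, v = x + h * v, v + h * E(k * h, x)
        vmax = max(vmax, abs(v))
    return vmax

T, Q0 = 5.0, 2.0  # final time and initial velocity support
vmax = max(max_speed(x0, v0, T)
           for x0 in np.linspace(0.0, 6.0, 5) for v0 in (-Q0, Q0))

# Discrete analogue of Q(t) <= Q(0) + t * sup|E|:
assert vmax <= Q0 + T + 1e-9
```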

**1.2. Derivative estimates**

Let us give bounds on derivatives of and the field . We start with the following potential estimates.

Lemma 4 For , the field satisfies

*Proof:* The lemma is classical.
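One classical instance of such a potential estimate (in three dimensions, writing E for the field generated by the density ρ; a standard reconstruction, not necessarily the exact statement of Lemma 4) is obtained by splitting the Newtonian kernel:

```latex
|E(x)| \;\lesssim\; \int_{|x-y|\le r}\frac{\rho(y)}{|x-y|^2}\,dy
 \;+\; \int_{|x-y|>r}\frac{\rho(y)}{|x-y|^2}\,dy
 \;\lesssim\; r\,\|\rho\|_{L^\infty} \;+\; \frac{1}{r^2}\,\|\rho\|_{L^1},
```

and optimizing in r then yields the bound ‖E‖_{L^∞} ≲ ‖ρ‖_{L^∞}^{2/3} ‖ρ‖_{L^1}^{1/3}.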

Lemma 5 As long as for , there holds

for some constant depending on and .

*Proof:* We differentiate the Vlasov equation with respect to and , yielding

Using the method of characteristics and the fact that , we obtain

Setting and using the boundedness assumption on , we have

The lemma follows from applying Gronwall’s lemma to the above inequality.

**1.3. Velocity support**

As seen in the previous subsection, to obtain the global classical solution it suffices to bound the velocity support. This turns out to be tricky, and we shall follow the proof of Schaeffer. Recalling (3), we have

for any particle trajectory . To improve the estimates in the last section, we need to estimate the time integral of along the particle trajectory.

Now for any , we fix a in , and the corresponding particle trajectory that starts from at . For any , we estimate

The classical analysis is to divide the integral into three parts: namely,

for to be determined later and for (this choice will become clear in Lemma 7 below; it is made so that the integral is both integrable and optimal). We shall use the notation for the characteristic function over .

Lemma 6 There holds

*Proof:* In , we shall first take the integration with respect to , yielding

in which denotes the characteristic function over . Since and , the same computation as in the proof of Lemma 2 gives the lemma.

Lemma 7 There holds .

*Proof:* In , we first compute the integration with respect to , yielding

in which the -integrals are taken over the set and . The lemma follows.

It remains to give estimates on . For this, we need to make use of time integration. To this end, let us introduce the particle trajectory with initial value at . Note that and the particle trajectory is a Hamiltonian flow (hence incompressible in the phase space; in particular, the volume is preserved). It follows that

We prove the following.

Lemma 8 As long as , there holds

*Proof:* Due to the energy conservation, it suffices to prove that

Let be such that and let be the argmin of over . Introducing the distance , we compute

Since the minimum occurs at , and

upon recalling that . This yields

Recall that and in particular is not in , that is, . This implies that . Using the assumption that , the above yields

In addition, the assumption that implies that

upon using again the assumption that . We now take the integration over , upon using (9) when and (10) otherwise, yielding at once (8), and hence the lemma.

Remark 3 The proof of the above lemma shows that it suffices to assume that , which plays a role in the improvements of the growth of the velocity support in large time. See, for instance, Schaeffer ’11 and Pallard ’11 and ’12.

Combining the above estimates, as long as , we have

for some universal constant . Fix an . We now choose so that the above is bounded by . Without optimizing them, we take , , and . It follows that , which is clearly smaller than , the condition used above. Hence this proves that

for all finite . We are now in a position to give estimates on , starting from (6). Indeed, we partition the interval into roughly subintervals and apply the above bound. Setting , and repeatedly using (11), we have

Since , this implies that . Taking sufficiently small, we have , for any small, but fixed, . In particular, remains finite for any finite time. Thus, Lemma 5 yields the boundedness of for all finite time.

Observe that the bound only depends on time , initial energy , and norm of the initial data. One can now construct a solution to the Vlasov-Poisson problem following the standard iterative scheme. For instance, for fixed field , construct solving the linear Vlasov equation

The iterative field is then constructed by solving the Poisson equation . Performing the a priori estimates on this iterative system yields bounds on in norm, uniformly in . Passing to the limit, one obtains a classical solution to the Vlasov-Poisson system. The uniqueness follows similarly.


——————————————————————————-

The basic idea in kinetic theory is to introduce dynamical equations, using statistical physics, to replace -particle systems or Liouville’s equations, when is too large (in practice, is of order ). The subject of this chapter (Chapter 1: classical kinetic models) is thus to formally derive several kinetic models, starting from basic classical mechanics, and to introduce some important properties of solutions to these models.

**1. Basic classical mechanics**

In classical mechanics, the state of a particle at any instant of time is determined by its position and its momentum . The particle motion is described by the least action principle of the work carried by a Lagrangian function , defined by

Here, denotes the kinetic energy of the particle and the potential energy. Precisely, the position and momentum , , are a solution to Lagrange’s equations:

with an initial position and initial momentum . The system (1) is simply the Euler-Lagrange equations of the minimal work carried by the Lagrangian . Note that the system (1) consists of second-order differential equations.
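For the record, writing q for the position and q̇ for the velocity (our notation, since the symbols above were not displayed), Lagrange’s equations take the classical form:

```latex
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q}\right) \;-\; \frac{\partial L}{\partial q} \;=\; 0,
\qquad q(0) = q_0, \quad \dot q(0) = \dot q_0 .
```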

Example 1 Consider a particle of mass moving in a potential . The Lagrangian is thus given by

The equation of motion (1) becomes the classical Newton’s second law of motion: .

Example 2 Consider a particle of mass moving in an electromagnetic field, subject to the Lorentz force

with a charge constant . Write and for electric and magnetic potential functions . One can check that

is a Lagrangian of the dynamical system.

**Legendre transform.** Before introducing Hamiltonian mechanics, let us introduce the Legendre transform, which is a geometric transformation between two coordinate systems. Precisely, let be a smooth scalar function in . We consider the hypersurface , parametrized by , in . Alternatively, the surface can be parametrized by the envelope of tangent planes to . Indeed, the tangent plane equation at , for each fixed , reads

with parametrizing variables . Equivalently, the plane equation is written as with unknown coefficients

These coefficients determine the envelope of the tangent planes of . Thus, if we can solve from , the hypersurface can also be parametrized by . The transformation to is called the Legendre transform.

We may further apply the Legendre transform to the new parametrization by setting and . It follows that

and . To conclude, the Legendre transform is its own inverse: applying it twice returns the original parametrization. This often reveals remarkable symmetry.
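The involution property can also be checked numerically. The sketch below (grids and tolerances are illustrative choices) computes a discrete Legendre transform of L(v) = v²/2, recovers H(p) = p²/2, and applies the transform again to recover L:

```python
import numpy as np

# Discrete Legendre transform: (Tf)(p) = max_v [ p*v - f(v) ] over a grid.
def legendre(f_vals, grid, duals):
    # rows: dual points p; columns: grid points v
    return np.max(np.outer(duals, grid) - f_vals[None, :], axis=1)

v = np.linspace(-10.0, 10.0, 2001)   # wide grid so the maximum is interior
L = 0.5 * v**2                       # Lagrangian L(v) = v^2/2
p = np.linspace(-2.0, 2.0, 81)
H = legendre(L, v, p)                # Hamiltonian H(p) = p^2/2
assert np.allclose(H, 0.5 * p**2, atol=1e-4)

# Applying the transform twice recovers L: the transform is an involution.
p_wide = np.linspace(-10.0, 10.0, 2001)
H_wide = legendre(L, v, p_wide)
v_back = np.linspace(-2.0, 2.0, 81)
L_back = legendre(H_wide, p_wide, v_back)
assert np.allclose(L_back, 0.5 * v_back**2, atol=1e-4)
```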

**Hamiltonian mechanics.** Hamilton reformulated the Lagrangian mechanics via the Legendre transformation. Precisely, we introduce

Definition 1 (Hamiltonian) Define the Hamiltonian function as the Legendre transform of the Lagrangian function:

Example 3 (Quadratic kinetic energy) Consider a Lagrangian , with . Then, the corresponding Hamiltonian is

That is, in this case, the Hamiltonian is the total energy of the system.

Example 4 The Hamiltonian associated with the particle moving in a potential reads

Example 5 The Hamiltonian associated with the particle motion in an electromagnetic field reads

Theorem 2 As long as the Legendre transform is one-to-one, Lagrange’s equations (1) are equivalent to Hamilton’s equations:

*Proof:* The theorem follows from a direct calculation. Indeed, using the definition of and recalling that , we compute

which is , thanks to (1) and (3). Similarly,

The theorem follows.

Theorem 3 (Conservation laws) The Hamiltonian is invariant under the Hamiltonian dynamics (4).

*Proof:* The theorem follows at once from the computation

for solving (4).
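Theorem 3 can also be observed numerically. The sketch below (harmonic oscillator H = (x² + p²)/2 and a standard leapfrog integrator; all parameter choices are illustrative) checks that the Hamiltonian is conserved along the discrete flow up to a small error:

```python
# Leapfrog (symplectic) integration of Hamilton's equations (4) for the
# harmonic oscillator H(x, p) = (x^2 + p^2)/2; parameters are illustrative.
def leapfrog(x, p, h, n_steps):
    for _ in range(n_steps):
        p -= 0.5 * h * x   # dp/dt = -dH/dx = -x  (half step)
        x += h * p         # dx/dt = +dH/dp = +p  (full step)
        p -= 0.5 * h * x   # (half step)
    return x, p

def H(x, p):
    return 0.5 * (x**2 + p**2)

x0, p0 = 1.0, 0.0
x1, p1 = leapfrog(x0, p0, h=1e-3, n_steps=10_000)
drift = abs(H(x1, p1) - H(x0, p0))
assert drift < 1e-5  # the Hamiltonian is invariant along the (discrete) flow
```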

**2. Liouville’s theorem**

The classical Liouville’s theorem asserts that the phase-space distribution function is constant along the trajectories of a Hamiltonian system. More precisely, let and let be the probability distribution for the Hamiltonian system (4) to be found in the infinitesimal volume in the phase space . Let be any domain in the phase space and let be its image under the evolution (4), with initial data in . The conservation of mass asserts that the probability of finding the system in is the same as that in :

in which denotes the Hamiltonian trajectory, with initial data , and the Jacobian determinant of the map . A direct computation yields , and hence (more generally, Liouville’s theorem holds for any “incompressible” flow: , with ). Since was arbitrary, the last identity in the above display yields Liouville’s theorem

for the trajectory solving (4), with initial data . Differentiating (5) and using Hamilton’s equations (4) yields the Liouville equation:

For convenience, we introduce the Poisson bracket:

The Liouville equation simply becomes . This is a first-order transport PDE in , whose characteristic equations are precisely Hamilton’s equations (4). Of course, the equation is derived from the Hamiltonian system of only one particle(!), and hence it is a linear PDE. In the next section, we study the case of particles, focusing on the interaction between particles, which results in nonlinear PDEs in the limit of a large number of particles.
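In one common sign convention (our notation: x the position, p the momentum), the Poisson bracket and the resulting Liouville equation read:

```latex
\{f, g\} \;=\; \nabla_x f \cdot \nabla_p g \;-\; \nabla_p f \cdot \nabla_x g,
\qquad
\partial_t f + \{f, H\} = 0 .
```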

Remark 1 (Casimir invariants) It follows from a direct computation that the Casimir functionals

are invariant under the dynamics of Liouville’s equation (6).

Remark 2 (Stationary distribution) For a time-independent Hamiltonian, we note that any function is a stationary probability distribution of (6). The classical example is the Maxwellian distribution

for a system at a constant temperature , with being the Boltzmann constant.

Example 6 (Free particle) The Hamiltonian for a free particle is , and thus the Liouville equation is the free transport equation with .

Example 7 (Particle in a potential) The Liouville equation for a particle moving in a potential is , with .

Example 8 (Vlasov equation) The Liouville equation for a particle in an electromagnetic field, as in Examples 2 and 5, reads

in which . In plasma physics, this equation is often referred to as the Vlasov equation, with the fields solving Maxwell equations. We shall come back to this system later in the course.

**3. -particle system**

In this section, we consider identical point particles of mass , with . Let be the position and momentum of each particle, and write . We consider a Hamiltonian system of particles with an associated Hamiltonian of the form

Here, denotes an external potential, which yields a force that acts equally on each particle, and is a symmetric potential function that models the interaction between particles.

Example 9 Consider a system of identical point particles with mass and charge . The electrostatic (Coulomb’s) potential is of the form

with being the electric constant. Hence, the electrostatic force exerted on any particle located at the position by another particle located at the position is equal to .

The corresponding Hamilton’s equations for the -particle system are

which consists of first-order differential equations. As in the previous section, we are interested in the dynamics of the probability distribution , determining the probability of finding the system in the -particle phase space.
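Schematically, in our notation (U the external potential, G the symmetric pair interaction), the Hamiltonian (8) and the corresponding Hamilton’s equations take the form:

```latex
H_N \;=\; \sum_{i=1}^{N}\left(\frac{|p_i|^2}{2m} + U(x_i)\right)
\;+\; \frac12 \sum_{i\ne j} G(x_i - x_j),
\qquad
\dot x_i = \nabla_{p_i} H_N, \quad \dot p_i = -\nabla_{x_i} H_N .
```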

The Liouville equation (6) for the Hamiltonian system of particles becomes

which is a first-order PDE in . Occasionally, when no confusion is possible, we simply write the Liouville equation in the form of the Poisson bracket

Using Hamilton’s equations, we may write the Liouville equation in divergence form:

In practice, is extremely large, of order , for instance. Thus, we need to reduce the -particle phase space to the one-particle phase space.

**3.1. First marginal probability**

Let be the probability distribution defined in the -particle phase space . Let us write . We introduce the first marginal probability function

We stress that particles are identical, and thus the “first” particle is chosen without introducing any new probabilistic constraint. We are interested in the dynamics of . Let us compute

Using the divergence form (11) of the Liouville equation for and the fact that vanishes at infinity (due to the integrability of ), we can integrate the above identity by parts, yielding

in which the first two integrals can be computed in terms of . Let us introduce the following Hamiltonian for the one-particle system

and the collision integral defined by

It follows that

in which the right-hand side, defined as in (14), accounts for the interaction between particles. We now examine the collision integral. Since particles are identical, we can relabel the particles and write , yielding

Here we have introduced the two-particle probability distribution:

which is the probability of finding the first two particles.

To summarize, the one-particle probability distribution satisfies the one-particle Liouville’s equation:

which is not a closed equation in , as it involves the two-particle marginal probability distribution . We now compute the dynamics for , or more generally for , the -particle marginal probability.

**3.2. BBGKY hierarchy**

Let us now derive the equation for the second marginal probability defined as in (15). As in the previous section, we compute

upon recalling the divergence structure of the Liouville equations (11) and integrating by parts in . In view of (8), we compute

Thus, except for terms that involve with , the integral on the right-hand side can be computed in terms of . For this reason, we introduce the -particle Hamiltonian

We obtain the -particle Liouville’s equation

for being the -particle marginal probability distribution defined by

Inductively, we introduce the -particle marginal probability distribution

It follows that is equal to

which, exactly as above, yields the -particle Liouville equation

for all . In particular, when , (19) simply gives the Liouville’s equation (10) for the -particle system.

These equations (19) are known as the BBGKY hierarchy (after Bogoliubov, Born, Green, Kirkwood, and Yvon). Unfortunately, they are not closed equations (except for ), in the sense that the dynamics of the -particle system also depends on the interaction with the remaining particles. Nevertheless, these are exact equations for the Hamiltonian system (i.e., no approximation has been made), and they are useful in situations when higher-order interactions can be neglected. For instance, for dilute gases, the probability of finding three or more particles colliding is significantly smaller than that of finding two, and hence the BBGKY hierarchy reduces to the first two Liouville equations, one of the basic assumptions in deriving the Boltzmann equation.
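Schematically, with unit mass, no external potential, and writing f^{(k)} for the k-particle marginal (our notation), the hierarchy (19) takes the form:

```latex
\Big( \partial_t + \sum_{i=1}^{k} p_i\cdot\nabla_{x_i}
 - \sum_{\substack{i,j=1\\ i\ne j}}^{k} \nabla G(x_i-x_j)\cdot\nabla_{p_i} \Big) f^{(k)}
 \;=\; (N-k) \sum_{i=1}^{k} \int \nabla G(x_i - x_{*})\cdot\nabla_{p_i}
   f^{(k+1)}\,dx_{*}\,dp_{*},
```

the right-hand side encoding the interaction of the first k particles with one of the remaining N − k (identical) particles.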

**4. Mean field Vlasov equations**

We are interested in an average behavior of the -particle system, as . In mean field theory, the relevant average is the empirical measure, defined by

for each set of points in the phase space . Here, denotes the Dirac measure in . Now, let , with , be the solution to the -particle Hamiltonian system

with the Hamiltonian defined as in (8). As we are interested in the interaction potential, for simplicity we take the external potential and we consider the Hamiltonian

in which the prefactor was added to ensure that the potential energy is of the same order as the kinetic energy. That is, in this case, we keep the long, or macroscopic, range of interaction, but scale its strength to be sufficiently small, of order (i.e., a weak interaction potential), as .

We now introduce the time-dependent empirical measure supported on the set of points . For any smooth test function , we compute

Using Hamilton’s equations (20), we note that

upon recalling that . Integrating by parts, we end up with the following mean field Liouville equation

posed on the one-particle phase space , which is of course understood in the distributional sense. Here, the force exerted on each particle is computed by

We now send to infinity. *Formally,* we assume that the sequence of measures converges to a probability measure in some appropriate sense, as . Since is distributed under the probability , we expect that in the limit is distributed under the limiting probability , and hence

as . Thus, in the mean field limit of (22), one gets the nonlinear Vlasov equation

in which , the density probability distribution in the physical space , and the notation denotes the usual convolution.

Example 10 (Vlasov-Poisson) In the case of the Coulomb potential, Example 9, , with , the Green function of the three-dimensional Laplacian: . In this case, the system (23), coupled with the Poisson equation, is known as the classical Vlasov-Poisson system, widely used in plasma physics.
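In one common normalization (unit mass and like charges; f, E, φ are our notation), the Vlasov-Poisson system reads:

```latex
\partial_t f + v\cdot\nabla_x f + E\cdot\nabla_v f = 0,
\qquad E = -\nabla_x \phi,
\qquad -\Delta_x \phi = \rho = \int f\,dv .
```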

Example 11 (Vlasov-Maxwell) Another example of the mean field Vlasov equation is the Vlasov-Maxwell system, consisting of the Vlasov equation

and the Maxwell equations for . Occasionally, it is convenient to write the fields in terms of potentials

with . The Maxwell equations become

in which denotes the Leray projection onto the divergence-free space. For the derivation of the (modified) Vlasov-Maxwell system from the -particle system, see Golse’s paper.

The mean field limit significantly reduces the complexity of the -particle problem. For one, the limiting equation (23) is a PDE on the one-particle phase space, instead of a system of first-order differential equations, with being extremely large. In addition, the well-posedness theory for the Vlasov equations is better developed, a theory that we shall come back to in later chapters of the course.

The mean field limit also preserves the time reversibility and the Hamiltonian structure of the Liouville equations. For instance, the Casimir functionals

are invariant under (23). In addition, we have the following theorem.

Theorem 4 (Hamiltonian) Let be a symmetric and time-independent potential. The Hamiltonian

is invariant under the dynamics of (23).

*Proof:* Indeed, using the Vlasov equation and integrating by parts, we compute

in which we have denoted by the current density in the physical space. Observe that, integrating the Vlasov equation with respect to , there holds the local conservation law

Using this and the symmetry of , we compute

Combining with (24) yields the theorem.

Example 12 (Hamiltonian for Vlasov-Poisson/Maxwell) It follows that the Hamiltonian for the Vlasov-Maxwell system is

In particular, this is also the Hamiltonian for the Vlasov-Poisson system with and .

Remark 3 (Vlasov equilibria) Let and let be an arbitrary function of the particle energy . Then, is a stationary solution to (23) if and only if . For instance, in the case of the Vlasov-Poisson system, is an equilibrium, provided solves the Poisson equation

In the one-dimensional case, that is , the equilibria are the well-known BGK waves (named after Bernstein, Greene, and Kruskal). In higher dimensions, periodic BGK waves do not exist (exercise; hint: show that the elliptic system for has only trivial solutions).

Finally, let us remark that, regarding the rigorous justification of the mean field Vlasov equations, the case of smooth potentials, essentially , in any dimension was proved by several groups: Braun-Hepp, Dobrushin, and Neunzert-Wick. In particular, the celebrated work of Dobrushin provides quantitative estimates for the mean field limit, which are now referred to as Dobrushin’s estimates. Meanwhile, the case of singular potentials is much less understood. In particular, the justification of the mean field Vlasov-Poisson (or Vlasov-Maxwell) system remains open in two and higher dimensions. See, however, the proofs in the one-dimensional case by Cullen-Gangbo-Pisante, Hauray, and Trocheris, or for less singular potentials in higher dimensions by Hauray-Jabin and by Lazarovici-Pickl. We refer to the lecture notes of Golse and to the recent review of Jabin for other mean field equations and further discussions on the subject.

**5. Boltzmann equations**

In this section, we shall derive the famous Boltzmann equations for dilute gases. For simplicity, we assume that the particles are hard spheres of diameter , with mass (hence, the momentum of a particle is simply its velocity ). The Hamiltonian reads

in which we assume that is a short-range potential, but remains of order one in (i.e., a strong interaction potential). That is, whenever , asserting that two particles at positions and collide or interact precisely when .

Our basic assumption is that for dilute gases the probability of finding three or more particles colliding is significantly smaller than that of finding two particles. Thus, we can ignore higher-order interactions in the BBGKY hierarchy for the -particle system (see Section 3), leading to the following two Liouville equations:

for the first marginal probability , and

for the second marginal probability . Here and in what follows, the notation is used to indicate that an approximation has been made.

Next, assume that the probability of finding two particles depends mainly on their relative position and momentum. That is, . Plugging this into the Liouville equation for yields

Using this, we can now compute the collision integral on the right-hand side of (26), giving

in which we note that the integral over the domain is zero (no collision occurs). Integrating by parts reduces the above integral to a surface integral on the sphere in ; namely,

in which denotes the unit sphere in . The complication in computing the above integral lies in determining the probability distribution . Let us detail this point. There are two cases:

- : this is the case when two particles start the collision.
- : this is the case when two particles leave the collision.

In order to compute , the key assumption is the *molecular chaos assumption:* the two particles are uncorrelated before and after collision. Precisely, letting and be the momentum of the two particles before and after the collision, we can write

for the two-particle probability before or after the collision, respectively. In addition, since the particle diameter is sufficiently small, for particles that are on the sphere, centered at and having radius . Thus, we obtain

after suppressing the -dependence in the collision integral. In particular, the collision integral is relevant in the case when

is a finite and positive number. This is often referred to as the Boltzmann-Grad limit.

**Boltzmann equation.** Going back to equation (26), we obtain the Boltzmann equation for dilute gases. Dropping the subscripts and renaming the incoming velocities and the outgoing velocities , the Boltzmann equation for reads

posed on the one-particle phase space , with the collision integral , defined as in (27), or more generally,

upon adopting the common notation , , and so on, with dependence suppressed.

Here, in the case of colliding hard-sphere particles. In general, depending on the interaction potential , the kernel is typically of the form

in which is the angle between and . For instance, if the potential is repulsive and of order , then and , for small angle .

**Physical interpretation.** The collision integral in (29) can be understood as a gain term, collecting the new particles produced by collisions, minus a loss term, subtracting the particles lost to collisions. For this reason, we occasionally write the collision integral as

summing over all possible pre- and post-collision particles, subject to the conservation of momentum and energy at collision (see (32) below). Here, the kernel is often referred to as the scattering kernel. Conversely, due to the quadratic structure of the particle energy, lie on the sphere obtained in Lemma 5, and we can then compute . In the case of hard-sphere particles, the scattering kernel is precisely the angle of colliding particles. The collision operator (30) then becomes (29).

Let us now compute the velocities after collision, assuming that the particles collide elastically. Indeed, let be the unit vector , where are the centers of the two hard-sphere particles at collision. We decompose the velocities as

Observe that for hard-sphere particles, only the projection of velocities on is (fully) exchanged after the collision. Thus, we get

We have the following simple lemma.

Lemma 5 The elastic collision preserves momentum and energy after the collision:

In addition, are on the sphere centered at and having radius .

*Proof:* The lemma follows from a direct calculation.
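The direct calculation of Lemma 5 can be checked numerically. The sketch below implements the standard hard-sphere exchange rule (the notation v, v_star, omega is ours) and verifies the conservation of momentum and kinetic energy:

```python
import numpy as np

# Hard-sphere exchange rule: the components of the incoming velocities
# along the unit vector omega are exchanged (our notation: v, v_star, omega).
def collide(v, v_star, omega):
    omega = omega / np.linalg.norm(omega)
    transfer = np.dot(v - v_star, omega) * omega
    return v - transfer, v_star + transfer

rng = np.random.default_rng(0)
v, v_star, omega = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
v_out, v_star_out = collide(v, v_star, omega)

# Conservation of momentum and of kinetic energy (Lemma 5):
assert np.allclose(v_out + v_star_out, v + v_star)
assert np.isclose(np.sum(v_out**2) + np.sum(v_star_out**2),
                  np.sum(v**2) + np.sum(v_star**2))
```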

**Collision invariants.** Let us give a few basic properties of the Boltzmann equation. First, we observe that

*Proof:* In view of (29), we compute

Due to the symmetry in and in before and after collision, the lemma follows from exchanging variables.

Thus, applying Lemma 6 for and using the conservation laws (32), we obtain the following three local-in-space conservation laws:

which in particular yields the conservation of total mass, momentum, and energy, upon integrating in . We observe that these are macroscopic equations (i.e., equations for macroscopic quantities), which are however not closed. Precisely, the last equation, for the second moment, depends on the third moment of . One may try to derive an equation for the third moment, which would then in general involve higher moments. This reflects a certain hierarchy in the Boltzmann equation when deriving the macroscopic equations.
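In the standard macroscopic notation ρ = ∫ f dv and ρu = ∫ v f dv (a reconstruction consistent with the description above), the three local conservation laws read:

```latex
\partial_t \rho + \nabla_x\cdot(\rho u) = 0, \qquad
\partial_t (\rho u) + \nabla_x\cdot \int v\otimes v\, f\,dv = 0, \qquad
\partial_t \int \tfrac{|v|^2}{2}\, f\,dv + \nabla_x\cdot \int v\,\tfrac{|v|^2}{2}\, f\,dv = 0 .
```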

**H-theorem.** Introduce the entropy functional

Boltzmann’s H-theorem asserts that , and it is equal to zero if and only if is a Maxwellian of the form

in which are the macroscopic density, velocity, and temperature of the gases. As for the proof, we first compute

The decrease of the entropy follows from the inequality for any nonnegative . In addition, the equality holds if and only if , or

for any satisfying the conservation laws (32). The H-theorem then follows from the lemma below, whose proof is left as an exercise.

Lemma 7 Let be a nonnegative function satisfying (36) whenever (32) holds. Then, is a Maxwellian

for some constants and .
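For the record, in three dimensions and in the macroscopic variables of (35), the Maxwellian is commonly written as:

```latex
\mu(v) \;=\; \frac{\rho}{(2\pi T)^{3/2}} \exp\!\left( -\frac{|v-u|^2}{2T} \right),
```

with ρ, u, T the macroscopic density, velocity, and temperature.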

**Local conservation laws.** The H-theorem shows that the probability distribution relaxes to a local-in-space Maxwellian. At this point, one may wonder whether the local conservation laws (33) can be closed when is at (or near, in some appropriate sense) the Maxwellian (35). Indeed, in this case, one can compute (or approximate, respectively) the third moment in terms of the macroscopic quantities to close the system. Let us detail this point for the case when .

Observe that from (35), the quantities are computed by

with denoting the macroscopic energy. Hence, together with a repeated use of the evenness of in , one computes

and

Putting these back into the local conservation laws (33) yields the full compressible Euler equations:

recalling the macroscopic energy .
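With p = ρT the pressure and 𝓔 = ½ρ|u|² + (3/2)ρT the macroscopic energy (a standard reconstruction of the system described above, in three dimensions), the compressible Euler equations read:

```latex
\partial_t \rho + \nabla_x\cdot(\rho u) = 0, \qquad
\partial_t(\rho u) + \nabla_x\cdot(\rho u\otimes u) + \nabla_x p = 0, \qquad
\partial_t \mathcal{E} + \nabla_x\cdot\big( (\mathcal{E}+p)\, u \big) = 0 .
```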

The case when is the subject of the *hydrodynamic limit* problem, which yields several fluid models, including the Navier-Stokes equations. For further discussions, see, for instance, Golse and Saint-Raymond. The reader is also referred to the books of Glassey, Cercignani, Cercignani-Illner-Pulvirenti, and Villani for further study of the Boltzmann equation and its applications. Finally, we mention that the rigorous derivation of the (local-in-time) Boltzmann equations from -particle systems was obtained by Lanford, whose proof had some gaps. These were partially fixed by Cercignani-Illner-Pulvirenti, and a complete proof was finally provided by Gallagher, Saint-Raymond, and Texier. We shall cover some of these topics in the course.

Formally, we expect the so-called Prandtl’s Ansatz:

where solves the Euler equations, and is the Prandtl boundary layer corrector, denoting the small viscosity or the inverse of the large Reynolds number.
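Schematically, writing ν for the viscosity (the boundary layer has thickness of order √ν), the Ansatz takes the form:

```latex
u^{\nu}(t,x,y) \;=\; u^{E}(t,x,y) \;+\; u^{P}\!\left(t,x,\tfrac{y}{\sqrt{\nu}}\right) \;+\; o(1)_{\nu\to 0},
```

where u^E solves the Euler equations and u^P is the Prandtl boundary layer corrector.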

The Prandtl boundary layers have been intensively studied in the mathematical literature. Notably, solutions to the Prandtl equations have been constructed for monotonic data (Oleinik in the ’60s, or recently Masmoudi-Wong and Alexandre et al.), or for data with Gevrey or analytic regularity (Sammartino-Caflisch ’98, or recently Gerard-Varet and Masmoudi). The validity of the Prandtl’s Ansatz has been established by Sammartino-Caflisch ’98 for initial data with analytic regularity. The Ansatz, with a specific boundary layer profile, has also been recently justified for data with Gevrey regularity by Gerard-Varet, Maekawa, and Masmoudi.

When only data with Sobolev regularity are assumed, Grenier proved in his CPAM 2000 paper that such an asymptotic expansion is false, up to a remainder of order in norm. The invalidity of the expansion is proved near boundary layers with an inflection point, or more precisely near those that are spectrally unstable for the Rayleigh equations.

In this paper, we prove that the Prandtl’s Ansatz is also false for Rayleigh-stable shear flows, giving a proof of the conjecture stated in Section 4 of Grenier-Guo-Toan. Such shear flows are stable for the Euler equations, but not for the Navier-Stokes equations: adding a small viscosity destabilizes the flow.

Roughly speaking, given an arbitrary stable boundary layer, the two main results in this paper are

Theorem 1 (Grenier-Toan: a rough statement) In the case of time-dependent boundary layers, we construct Navier-Stokes solutions, with arbitrarily small forcing of order , so that the Prandtl’s Ansatz is false near the boundary layer, up to a remainder of order in norm, being arbitrarily small.


Theorem 2 (Grenier-Toan: a rough statement) In the case of stationary boundary layers, we construct Navier-Stokes solutions, without forcing term, so that the Prandtl’s Ansatz is false, up to a remainder of order in norm.

In this program, we study the linear problem

in which denotes the fluid vorticity and the fluid velocity. We solve the problem with no-slip boundary conditions on . We aim to derive uniform estimates in the inviscid limit . Observe that the fluid vorticity is unbounded, but localized, near the boundary, and therefore pointwise bounds on the Green function are needed to capture the precise convolution with the boundary layer behavior.

As is a compact perturbation of the Laplacian (say, in the usual space), is sectorial and has discrete unstable spectrum, and the corresponding semigroup can be described by the Dunford integral:

where is a contour on the right of the spectrum of .

In estimating the semigroup, we can move the contour across the discrete spectrum by adding the corresponding projections onto the eigenfunctions. However, we cannot move the contour of integration across the Euler continuous spectrum (equivalently, the phase velocity is near the range of , and hence critical layers appear). In addition, there are unstable eigenvalues that exist near the critical layers and that vanish in the inviscid limit (Grenier-Guo-Toan), justifying the viscous destabilization phenomenon pointed out by Heisenberg (1924), C.C. Lin and Tollmien (1940s), among others.

One of the contributions of this paper is to carefully study the contour integral near the critical layers and thus to provide sharp bounds on the Navier-Stokes semigroup.

The first step is to study the resolvent solutions , or equivalently, solutions to the classical Orr-Sommerfeld equations (an ODE in ), corresponding to each wavenumber and each phase velocity :

with zero boundary conditions on and . Here, . We need to study the Green function of the Orr-Sommerfeld problem.
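For the record, for a shear profile U(y) and a stream-function perturbation of the form φ(y)e^{iα(x−ct)}, the Orr-Sommerfeld equations take the classical form (the normalization of the small parameter ε, proportional to the viscosity and to 1/(iα), depends on the chosen scaling of variables):

```latex
\varepsilon \left(\partial_y^2 - \alpha^2\right)^2 \phi
 \;=\; (U - c)\left(\partial_y^2 - \alpha^2\right)\phi \;-\; U''\,\phi ,
\qquad
\phi = \partial_y \phi = 0 \ \text{ at } y = 0, \qquad \phi \to 0 \ \text{ as } y \to \infty .
```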

When is away from the range of (or equivalently, is away from the continuous spectrum of Euler), solutions of Orr-Sommerfeld are regular and consist of two slow modes linked to the Rayleigh equations and two fast modes linked to the Airy equations . This is studied carefully in Grenier-Toan1.

When is near the range of , we have to deal with critical layers, points at which . The presence of critical layers greatly complicates the construction of Orr-Sommerfeld solutions and the derivation of uniform bounds for the corresponding Green function. Roughly speaking, there are two independent solutions to the Orr-Sommerfeld equations that are approximated by Rayleigh solutions, which experience a singularity of the form . We thus need to analyze the smoothing effect of the Airy operator, and design precise function spaces to capture the singularity near the critical layers.

Finally, to capture the unbounded vorticity near the boundary, we study the semigroup in the boundary layer norms that were developed recently in Grenier-Toan2.
