Understanding oscillations through Young measures. Pt 1. Motivation and Regularization

Abstract.

We review oscillatory behavior in minimizing sequences and introduce Young measures as a means of encoding it. Moreover, we explore simple examples which highlight the need for such an encoding, as well as its role in real-world applications and in the solution of interesting physical problems.

Motivation

Variational principles provide insight into many physical phenomena: mechanics blossomed with the introduction of the action functional, with motion described by a principle of least action. Many fields including materials science, optics, control theory, image analysis and machine learning have benefited from recasting questions in terms of an appropriate “energy functional,” and exploring its optimizers or equilibrium states.

In general, limits of sequences, and as a direct result minimizers of optimization problems, may not exist. This has been a common theme in mathematics since the Babylonians and the Pythagoreans produced rational and continued-fraction approximations to \sqrt{2}: the approximating sequence “runs out of the space,” since its limit is not rational. For a fun discussion of continued fractions see [1]. Similarly, consider the sequence of smooth functions

f_n(x) = \frac{n}{\sqrt{\pi}}\, e^{-(nx)^2}.

The “limit” is well known to be the “\delta-function,” properly interpreted as a distribution on the space of smooth test functions, or alternatively as a measure. Similarly, minima of variational principles may introduce irregularities, the study of which can be a delicate subject [2].
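As a quick numerical sanity check (a minimal sketch; the grid, the interval, and the values of n are arbitrary choices), one can verify that each f_n carries unit mass while that mass concentrates at the origin:

```python
import numpy as np

def f(n, x):
    # The Gaussian bump f_n(x) = (n / sqrt(pi)) * exp(-(n x)^2).
    return n / np.sqrt(np.pi) * np.exp(-(n * x) ** 2)

x = np.linspace(-10.0, 10.0, 2_000_001)
dx = x[1] - x[0]

for n in (1, 10, 100):
    fx = f(n, x)
    total = fx.sum() * dx                   # stays ~1 for every n
    near = fx[np.abs(x) < 0.1].sum() * dx   # tends to 1 as n grows
    print(f"n = {n:3d}: total mass = {total:.4f}, mass in |x| < 0.1 = {near:.4f}")
```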

Throughout the discussion, we will be considering very general problems of the form:

\textrm{Find } u \in \mathcal{A} \textrm{ such that } I_0[u] = \inf\left\{ I_0[v] : v \in \mathcal{A} \right\}.

The Direct Method

Hilbert introduced a set of criteria under which minimizers to the problem above are guaranteed to exist: The Direct Method.

Let (u_n) be a minimizing sequence, i.e.

\lim_{n \to \infty} I_0[u_n] = \inf\left\{ I_0[v] : v \in \mathcal{A} \right\}.

Then the direct method can be described simply as selecting an appropriate topology \tau satisfying the balance:

  1. \tau is sufficiently coarse to ensure (u_n) is (sequentially) compact,
  2. \tau is sufficiently fine to ensure I_0 is lower semi-continuous.

Under these conditions, the proof of existence of a minimizer becomes relatively simple. Let (u_n) be a minimizing sequence. In the topology \tau this sequence is (sequentially) compact, so after passing to a subsequence if necessary, let u_n \to u.

Then, by lower semicontinuity of I_0:

I_0(u) \leq \liminf_{n\to\infty} I_0(u_n) = \inf_{\mathcal{A}} I_0,

and u is a minimizer of I_0. Uniqueness is clearly not guaranteed.

Now suppose that our minimization problem has more structure: the energy can be represented as an integral,

I_0[u] = \int f(u(x))\, dx.

The following theorem tells us that convexity is really the key to applying the direct method, and that a lack of convexity can make finding solutions genuinely difficult [3].

Theorem

Let f : \mathbb{R}^m \to (-\infty, \infty] be lower semicontinuous, with \inf\left\{ \int f(v(x))\, dx : v \in L^p \right\} < \infty. Then the mapping

v \mapsto \int f(v(x))\, dx

is weakly sequentially lower semicontinuous if and only if f is convex.
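To see why convexity is needed, take the double-well integrand f(v) = (v^2 - 1)^2 and the oscillating sequence v_n(x) = \mathrm{sign}(\sin(2\pi n x)) on (0, 1): v_n converges weakly to 0, yet \int f(v_n)\, dx = 0 < 1 = \int f(0)\, dx, so weak lower semicontinuity fails. A minimal numerical sketch of this (the grid resolution and the test function x are arbitrary choices):

```python
import numpy as np

f = lambda v: (v**2 - 1) ** 2              # nonconvex double-well integrand

x = np.linspace(0.0, 1.0, 1_000_001)
dx = x[1] - x[0]

for n in (1, 10, 100):
    v_n = np.sign(np.sin(2 * np.pi * n * x))   # oscillates between -1 and +1
    pairing = (v_n * x).sum() * dx             # ∫ v_n(x) x dx -> 0, consistent with v_n ⇀ 0
    energy = f(v_n).sum() * dx                 # ~0 along the whole sequence
    print(f"n = {n:3d}: ∫ v_n x dx = {pairing:+.4f}, ∫ f(v_n) dx = {energy:.4f}")

# The weak limit v = 0 has strictly larger energy: ∫ f(0) dx over (0,1) equals f(0) = 1.
print(f"∫ f(0) dx = {f(0.0):.4f}")
```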

Nonconvexity and oscillations

Example (Bolza, Young)

Consider the famous Bolza-Young problem:

\min_{u \in \mathcal{A}} I[u] = \min_{u \in \mathcal{A}} \int_0^1 \left( (\partial_x u)^2 - 1 \right)^2 + u^2 \, dx,

over the collection of admissible functions

\mathcal{A} = \left\{ u \in W^{1,2}(0,1) : u(0) = u(1) = 0 \right\}.

Let m = \inf_{u \in \mathcal{A}} I[u]. Clearly m \geq 0. In fact, it isn't too tricky to see that m = 0. Consider

f(x) = \begin{cases} x, & 0 < x < \frac{1}{2} \\ 1 - x, & \frac{1}{2} < x < 1 \end{cases}

periodically extended to \mathbb{R}. Let f_n(x) = \frac{1}{n} f(nx). This sawtooth function, with increasingly many teeth of decreasing height, satisfies \partial_x f_n(x) \in \{-1, 1\} almost everywhere, so the first term of the energy vanishes; since 0 \leq f_n \leq \frac{1}{2n}, we get I[f_n] \leq \frac{1}{4n^2} \to 0, and hence m = 0. Pointwise, f_n(x) \to 0 for all x. However, u(x) \equiv 0 is clearly not a minimizer: I[0] = 1. In fact, no minimizer exists, since I[u] = 0 would force both |\partial_x u| = 1 and u = 0 almost everywhere.
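A minimal numerical sketch confirms this behavior (the grid and the values of n are arbitrary choices; the finite-difference derivative introduces a small error at the corners of the teeth):

```python
import numpy as np

def sawtooth(n, x):
    # f_n(x) = f(n x) / n, with f the 1-periodic hat function from above.
    y = (n * x) % 1.0
    return np.where(y < 0.5, y, 1.0 - y) / n

x = np.linspace(0.0, 1.0, 1_000_001)
dx = x[1] - x[0]

def energy(u):
    # I[u] = ∫ ((u')^2 - 1)^2 + u^2 dx, via a finite-difference derivative.
    du = np.gradient(u, dx)
    return ((du**2 - 1.0) ** 2 + u**2).sum() * dx

for n in (1, 10, 100):
    print(f"n = {n:3d}: I[f_n] ≈ {energy(sawtooth(n, x)):.6f}")   # decays like O(1/n^2)

print(f"I[0] = {energy(np.zeros_like(x)):.6f}")   # the pointwise limit has energy 1
```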

This fine-scale, oscillatory behavior is a direct consequence of the nonconvexity of the energy. The two “low energy states,” u'(x) = 1 and u'(x) = -1, allow for mixing, lowering the overall energy as a result. Not only is this mathematically interesting; nonconvex problems are also physically important. My first experience with these ideas, and ultimately what hooked me, came from examples in thermodynamics.

Example (Phase mixtures)

Consider the free energy of two different material phases, as a function of mole fraction:

Figure: Schematic of the Gibbs energy for two different phases as a function of mole fraction.

Suppose you have constrained the mole fraction to be 0.5. Then a pure phase, of either type, will have higher energy than a mixture of the two phases (with phase fractions given by the common tangent line). Mixing to lower the energy can be seen as a “regularization” process in which the observed energy is the convexification of the two energies.

In the absence of surface energy, oscillations would equally well be a minimizer of the Gibbs energy for a mixture, provided that the overall mole-fraction constraint is satisfied.
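The common tangent construction is easy to compute numerically. Below is a minimal sketch in which the two free-energy curves g1 and g2 are made-up illustrative parabolas (not data for any real material); the convexified, observed energy is the lower convex hull of their pointwise minimum:

```python
import numpy as np

# Hypothetical free-energy curves for two phases vs. mole fraction c in [0, 1].
g1 = lambda c: 2.0 * (c - 0.2) ** 2 + 0.10   # phase 1 (illustrative parabola)
g2 = lambda c: 2.0 * (c - 0.8) ** 2 + 0.15   # phase 2 (illustrative parabola)

c = np.linspace(0.0, 1.0, 2001)
g = np.minimum(g1(c), g2(c))                 # energy if forced into a single phase

def lower_convex_envelope(x, y):
    # Lower convex hull of the graph (x, y): the common tangent construction.
    hull = []                                # indices of points on the lower hull
    for i in range(len(x)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # Drop i1 if it lies on or above the chord from i0 to i.
            if (y[i1] - y[i0]) * (x[i] - x[i0]) >= (y[i] - y[i0]) * (x[i1] - x[i0]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(x, x[hull], y[hull])

g_mix = lower_convex_envelope(c, g)

i0 = np.argmin(np.abs(c - 0.5))              # constrain overall mole fraction to 0.5
print(f"single-phase energy at c = 0.5: {g[i0]:.4f}")
print(f"mixed (convexified) energy:     {g_mix[i0]:.4f}")   # strictly lower
```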

Example (Phase mixtures, distributions)

Now consider how the two phases can be distributed in space at a fixed overall mole fraction:

Figure: Three equivalent distributions of phase in the absence of surface energy.

Nonconvex energies arise naturally, and with them oscillations inevitably occur. Young measures are a concise tool for organizing the information about the oscillatory behavior of minimizing sequences.

Relaxation

Suppose that we're faced with an optimization problem for which the direct method, or any other method we have tried, fails to guarantee the existence of a minimizer. One thing we can do is relax the problem, i.e., consider an alternative problem for which existence can be shown. However, if this alternative, relaxed problem is too different from the true problem, its solution will not tell us anything about our original problem.

I like to think of the process of relaxation as two distinct steps:

  1. Enlarge the admissible set: \mathcal{A} \subset \tilde{\mathcal{A}}.
  2. Reformulate the energy: I_0 \to \tilde{I_0}.

We can enlarge the admissible set, but not by too much! To this end, it's often appropriate to ensure that \mathcal{A} is dense in \tilde{\mathcal{A}}. Similarly, if we adjust the energy too much then solutions may well exist, but the new problem may not provide us with any insight into our true goal, I_0. The new energy must be less than the original, but not substantially so. All of this is very vague; I apologize.

Pedregal describes a general approach to relaxation [4] which should be familiar to most readers.

Choose a collection of functionals, \{I\}. Then:

  1. Enlarge the space of admissible objects by completion with respect to this set of functionals.

By this, I mean consider sequences \{u_i\}, \{v_i\} \subset \mathcal{A}. These two sequences are equivalent if and only if \lim I(u_i) = \lim I(v_i) for all I. This is much like completing the rational numbers to arrive at \mathbb{R}, or completing a metric space via Cauchy sequences.

  2. Extend each of the functionals I to \tilde{I} by continuity:
\tilde{I}(\tilde{u}) = \lim_{i \to \infty} I(u_i),

where \{u_i\} is any representative of \tilde{u}.

  3. Define the topology on \tilde{\mathcal{A}} as the weak topology relative to the collection \{I\}:
\tilde{u}_j \to \tilde{u} \iff \lim_{j \to \infty} \tilde{I}(\tilde{u}_j) = \tilde{I}(\tilde{u}) \textrm{ for every } I.

Within this framework, we can define the relaxed problem:

\text{Find } \tilde{u} \in \tilde{\mathcal{A}} \text{ such that } \tilde{I_0}(\tilde{u}) = \inf_{\tilde{\mathcal{A}}} \tilde{I_0}.

Within this framework for relaxation we are guaranteed that we haven't enlarged the admissible set too much: density is free. Moreover, by continuously extending our functionals, the relaxed functional approximates our original problem nicely. Finally, a solution exists: we simply added it as an object, i.e., as a limit of a minimizing sequence.
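To make this concrete, the recipe can be carried out explicitly for the Bolza example above. This is a standard computation, sketched here without proof: writing W(p) = (p^2 - 1)^2 for the double-well term, relaxation replaces W by its convex envelope W^{**},

\tilde{I}[u] = \int_0^1 W^{**}(\partial_x u) + u^2 \, dx, \qquad W^{**}(p) = \begin{cases} 0, & |p| \leq 1 \\ (p^2 - 1)^2, & |p| > 1. \end{cases}

Now the infimum is attained: \tilde{I}[0] = 0 = m. What the relaxed problem discards is precisely how the slopes -1 and +1 mix along a minimizing sequence.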

So, if this framework always provides us with a relaxed problem that isn't relaxed too much, and guarantees the existence of a solution, how is it that all of our problems aren't solved? Well, we need to find a perspective or representation that allows us to handle the limit objects. In many cases, for instance integral optimization, Young measures provide a useful representation. Herein lies the power of this very abstract concept and its application to real-world problems.

References

  1. Erik Davis, An introduction to continued fractions.

  2. Giuseppe Mingione, Regularity of minima: an invitation to the dark side of the calculus of variations (2006).

  3. Irene Fonseca and Giovanni Leoni, Modern Methods in the Calculus of Variations, Springer (2007).

  4. Pablo Pedregal, Optimization, Relaxation and Young Measures, AMS (1999).