# Harmonic Functions: Why They’re Nifty

**Harmonic functions** are a class of functions that arise in countless engineering and physics applications. This document will review the definition, associated properties and typical applications of these mathematical marvels. To make this summary useful, it is assumed that you know a little calculus.

#### The Problem

If $u(x)$ is a scalar function in $n$ variables, $x=(x_1, x_2, … , x_n)$, then **Laplace’s equation** is the homogeneous linear partial differential equation:

$\Delta u(x) = \nabla \cdot \nabla u = \sum_{i=1}^n {\partial^2 u \over \partial x_i^2} = 0$

The left-hand side above is called the **Laplacian** and is sometimes denoted $\nabla^2$ instead. The solutions of $\Delta u(x) = 0$ are called **harmonic functions**. The name has its root in the study of harmonic motion (sound waves, for example) but the equation has proven applicable to a whole universe of problems beyond that.

More formally (if you care): Laplace’s equation is defined on an open, nonempty set $\Omega$ in $\mathbb{R}^{n}$ or $\mathbb{C}^{n}$, and specifies that $u(x)$ is a twice continuously differentiable scalar function, denoted $C^2[\Omega]$, that maps elements of $\overline \Omega$, the closure of $\Omega$, into $\mathbb{R}$ or $\mathbb{C}$ (and yes, this reads like the math equivalent of a prescription drug ad disclaimer).

#### Properties and Examples

Before continuing, it’s helpful to provide a few examples. Namely, what kinds of functions are harmonic? Constant and linear functions certainly fit the bill since their second derivatives are always zero. There are of course infinitely many non-linear harmonic functions too. For example,

$e^x \sin y + x + 1$

$x^2 - y^2 + x$

$e^{-x}\, (x \sin y - y \cos y)$

They can be complex-valued as well:

$f(z) = (x + iy)^2 = x^2 -y^2 + 2xyi$

Harmonic functions are **linearly superimposable**, which means that sums and scalar multiples of harmonic functions are themselves harmonic. So based on the examples above we can conclude that $e^x \sin y + x + 1 + 4(x^2 - y^2 + x)$ is also harmonic. And so on.

Aside: this is an example of a set being **closed** under an operation: the harmonic functions are closed under addition and scalar multiplication. That may seem unimpressive, but think about something as simple as oddness and evenness: the sum of two odd numbers is even, not odd, so the odd numbers are *not* closed under addition.
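
These claims are easy to sanity-check numerically. Below is a minimal sketch (the function names and test points are arbitrary choices of mine) that approximates the Laplacian with a five-point finite-difference stencil and confirms that the examples above, and their superposition, are harmonic up to discretization error:

```python
import math

def laplacian(f, x, y, h=1e-4):
    """Five-point finite-difference estimate of f_xx + f_yy at (x, y)."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / h**2

u = lambda x, y: math.exp(x) * math.sin(y) + x + 1   # harmonic
v = lambda x, y: x**2 - y**2 + x                     # harmonic
w = lambda x, y: u(x, y) + 4 * v(x, y)               # their superposition

for f in (u, v, w):
    # each Laplacian vanishes up to finite-difference error
    assert abs(laplacian(f, 0.3, 0.7)) < 1e-3
```

The same stencil works for any candidate function, which makes it a handy quick test before attempting a symbolic verification.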

Harmonic functions are also **invariant** under rigid motions, meaning they can be translated and rotated and still retain their harmonicity. For examples, see the Wikipedia page on harmonic functions.

These two properties are incredibly powerful on their own, and yet they’re only the tip of the iceberg…

#### The Mean Value Property

Next is the **mean value property** (MVP), arguably the most important property of harmonic functions. The MVP describes how harmonic functions behave within bounded regions: inside any ball, the average value of the function equals its value at the ball’s center, which also equals the average value across the bounding sphere. This holds in every dimension, from an interval in $\mathbb{R}$ to a disk in $\mathbb{R}^2$ to an $n$-ball in general.

The mean value property is expressed mathematically as follows, where $B(x,r)$ is the ball of radius $r$ centered at $x$ and $\partial B(x,r)$ is its bounding sphere:

$$\begin{eqnarray}
\def\avint{\mathop{\,\rlap{-}\!\!\int}\nolimits}
u(x)
= \avint_{B(x,r)} u \, dy
= \avint_{\partial B(x,r)} u \, dS
\end{eqnarray}$$

Note that the dashed integral symbol refers to the following average:

$$\begin{eqnarray}
\avint_{B(x,r)} u \, dy = {1 \over \text{Vol}(B(x,r))} \int_{B(x,r)} u \, dy
\end{eqnarray}$$

For the surface integral, divide by the surface area:

$$\begin{eqnarray}
\avint_{\partial B(x,r)} u \, dS = {1 \over \text{Area}(\partial B(x,r))} \int_{\partial B(x,r)} u \, dS
\end{eqnarray}$$

The proof for the MVP is widely available [EV] (§2.2.2) [AXL] [SG].
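
For the skeptical, the mean value property can also be checked numerically. The sketch below (the helper name, sample center, and radius are my own choices) averages a harmonic function over a circle and compares the result to the value at the center:

```python
import math

def circle_average(f, cx, cy, r, n=1000):
    """Average of f over the circle of radius r centered at (cx, cy)."""
    total = sum(f(cx + r * math.cos(2 * math.pi * k / n),
                  cy + r * math.sin(2 * math.pi * k / n)) for k in range(n))
    return total / n

u = lambda x, y: math.exp(x) * math.sin(y)   # a harmonic function

# The circle average matches the center value, per the MVP.
avg = circle_average(u, 0.2, 0.5, 0.3)
assert abs(avg - u(0.2, 0.5)) < 1e-9
```

The equispaced-sample average converges extremely fast here because the integrand is smooth and periodic, so even modest `n` reproduces the center value to near machine precision.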

Two consequences follow from the mean value property. First, the maximum principle: extrema can only occur on the boundary, so there is no interior maximum or minimum unless the function is constant. Second, and related (Liouville’s theorem): if a harmonic function is bounded on all of $\mathbb{R}^n$, it is necessarily constant.

In his 1881 *Treatise on Electricity and Magnetism*, James Clerk Maxwell made the following remark about the Laplacian:

> I propose therefore to call $\nabla^2$ the concentration of q at the point P, because it indicates the excess of the value of q at that point over its mean value in the neighbourhood of the point.

That is, the Laplacian at a point measures how much a function differs from its neighbors. If the Laplacian is positive, the value there is less than the average of its neighbors; if negative, it’s greater; and when it’s zero, the value is exactly the average of those neighboring points [LRX]. This has an interesting application in the problem of edge detection in digital images: edges are sudden changes in color intensity that occur where the content changes, such as a face against a background. For this reason, the Laplacian is often used to detect these types of intensity changes [AIS].
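
The sign behavior just described can be seen on a tiny synthetic image. The sketch below (a hypothetical 8×8 step-edge image of my own construction) applies the discrete five-point Laplacian: it is zero in flat regions, positive on the dark side of the edge (value below its neighbors’ mean), and negative on the bright side:

```python
# Hypothetical image: dark left half (0), bright right half (9).
img = [[0] * 4 + [9] * 4 for _ in range(8)]

def discrete_laplacian(img, i, j):
    """Sum of the 4 neighbors minus 4x the center value."""
    return (img[i + 1][j] + img[i - 1][j] + img[i][j + 1] + img[i][j - 1]
            - 4 * img[i][j])

flat  = discrete_laplacian(img, 4, 1)   # deep in a flat region
left  = discrete_laplacian(img, 4, 3)   # dark side of the edge
right = discrete_laplacian(img, 4, 4)   # bright side of the edge

assert flat == 0 and left > 0 and right < 0
```

Real edge detectors (e.g., the Laplacian-of-Gaussian filter) add smoothing to suppress noise, but the core idea is exactly this stencil.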

#### Real Analyticity and Complex Analysis

Harmonic functions are **real-analytic**. Being real-analytic on an open set means that the value of the function at any point in that set can be expressed as a convergent power series; as it turns out, that power series is a Taylor series. As a consequence, harmonic functions are also **infinitely differentiable**, a.k.a. smooth or regular.

Side note: a smooth function doesn’t necessarily have to be analytic. The classic counterexample is $f(x) = e^{-1/x^2}$ for $x \ne 0$ with $f(0) = 0$, which is smooth everywhere but whose Taylor series at the origin is identically zero.

In two dimensions, harmonic functions have a symbiotic relationship with complex analysis. This leads to a number of interesting outcomes.

The first is that every two-dimensional harmonic function is, at least locally, the real part of an **analytic** (holomorphic) function. Analyticity is the complex version of differentiability, and it is more powerful since it requires the derivative at a point to be the same from every direction, not just from the right and left as with real differentiability.

Conversely, if a function is analytic, then its real and imaginary parts are harmonic. This follows from the Cauchy-Riemann equations.
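
This pairing is easy to verify numerically for a specific holomorphic function. The sketch below (using $f(z) = z^3$, an arbitrary choice of mine) checks via finite differences that the real and imaginary parts are harmonic and that they satisfy the Cauchy-Riemann equations $u_x = v_y$ and $u_y = -v_x$:

```python
def laplacian(f, x, y, h=1e-4):
    """Five-point finite-difference Laplacian at (x, y)."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / h**2

def d_dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def d_dy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

u = lambda x, y: (complex(x, y) ** 3).real   # Re(z^3)
v = lambda x, y: (complex(x, y) ** 3).imag   # Im(z^3)

# Both parts are harmonic...
assert abs(laplacian(u, 0.4, -0.6)) < 1e-3
assert abs(laplacian(v, 0.4, -0.6)) < 1e-3
# ...and they satisfy the Cauchy-Riemann equations.
assert abs(d_dx(u, 0.4, -0.6) - d_dy(v, 0.4, -0.6)) < 1e-6
assert abs(d_dy(u, 0.4, -0.6) + d_dx(v, 0.4, -0.6)) < 1e-6
```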

#### Dirichlet Problem

When the Laplacian yields something other than zero, the resulting inhomogeneous equation is called **Poisson’s equation**, $\Delta u(x) = -f(x)$. It should be noted that adding any harmonic function to a solution of Poisson’s equation yields another solution. More on Poisson’s equation shortly.

The problem of solving a PDE subject to a specified boundary function is called the **Dirichlet problem**. The solution of the Dirichlet problem for Laplace’s equation, if it exists, is unique. As a consequence, the solution inside the region is uniquely determined by its values on the boundary.

$$\begin{eqnarray}
\begin{cases}
\phantom{-}\mathop{{}\bigtriangleup}\nolimits u = 0 & x \in \Omega
\\\phantom{-} u(x) = g(x) & x \in \partial \Omega
\end{cases}
\end{eqnarray}$$

This is an example of a **well-posed problem**: a solution exists, it is unique, and it depends continuously on the data, so it represents something realistic rather than pathological. In particular, solutions will continuously approach the boundary function value as $x$ nears the boundary $\partial \Omega$ (this is the notation for the boundary of $\Omega$).

The canonical example of a Laplace Dirichlet problem is the steady-state temperature distribution in a region with one or more heated edges. The image below is an example of this (via Matlab). There are also interesting applications in Brownian motion.

There are other types of boundary problems too, such as Neumann and Robin, depending on how the boundary constraints are defined. For steady-state heat problems, Neumann conditions are often used to represent adiabatic (insulated) edges.
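
A minimal numerical sketch of the steady-state temperature example (the grid size, iteration count, and edge temperatures below are arbitrary choices of mine): Jacobi iteration repeatedly replaces each interior value with the average of its neighbors, a discrete mean value property, converging to the discrete harmonic solution. The maximum principle then guarantees the interior stays strictly between the boundary extremes:

```python
# Dirichlet problem on a square grid: top edge held at 100, others at 0.
N = 20
u = [[0.0] * N for _ in range(N)]
for j in range(N):
    u[0][j] = 100.0                     # heated top edge (Dirichlet data)

for _ in range(2000):                   # Jacobi iteration: each interior cell
    new = [row[:] for row in u]         # becomes the average of its neighbors
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = 0.25 * (u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1])
    u = new

interior = [u[i][j] for i in range(1, N - 1) for j in range(1, N - 1)]
assert 0.0 < min(interior) and max(interior) < 100.0   # maximum principle
```

At convergence every interior cell equals the average of its four neighbors, which is exactly the discrete analogue of the mean value property.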

#### Operators

**Operators** are useful when working with these equations. Think of an operator as a kind of function for functions. For example, the first derivative is an operator that takes a function $f(x)$ and returns its derivative, also a function. The inverse of the first derivative operator is the antiderivative. Matrices are operators too, as are their inverses. There are many types of operators of course. The Laplacian is considered a **linear differential operator**.

The linear part of “linear operator” means that the operation on a sum of terms is the sum of those operations: $L(\alpha x + \beta y) = \alpha L(x) + \beta L(y)$. For harmonic functions, this is consistent with the property of linear superposition already mentioned.

In operator form, Poisson’s equation takes on the following elegant form.

$ L = \Delta$ (the operator)

$ L u = f $ (Poisson’s equation)

$ u = L^{-1}f $ (its solution)

Operators also have **null spaces** (sometimes called the kernel, which is a poor naming choice since kernel has an unrelated integral meaning too). These are the functions that the operator maps to zero: $Lu = 0$. In particular, the harmonic functions are exactly the null space of the Laplacian.

Operators also have domains, just like functions. For example, when the underlying space is $\mathcal{L}^2(\mathbb{R}^n)$, we can specify that the domain of $L$, denoted $D(L)$, is the Sobolev space $H^2(\mathbb{R}^n)$, which is dense in $\mathcal{L}^2(\mathbb{R}^n)$.

The corresponding boundary conditions for linear differential operators are embedded in the definition of the operator’s domain. For example, here’s the domain expression for a one-dimensional ODE with homogeneous boundary conditions:

$D(L) = \{ u \in C^2([0,1]) : u(0) = u(1) = 0 \}$

More useful terminology is hidden above: $C^2([0,1])$ denotes the set of twice continuously differentiable functions on the interval $[0,1]$.
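
To make the operator concrete, here is a sketch that discretizes this very operator, $Lu = u''$ on $[0,1]$ with $u(0)=u(1)=0$, and inverts it to solve $Lu = f$ via the standard Thomas algorithm for tridiagonal systems. The choice of $f$ is mine, picked so the exact solution $u = \sin(\pi x)$ is known:

```python
import math

# Discretize u'' = f on [0,1], u(0) = u(1) = 0, with second differences.
n = 100                                  # interior grid points
h = 1.0 / (n + 1)
x = [(i + 1) * h for i in range(n)]
f = [-math.pi**2 * math.sin(math.pi * xi) for xi in x]   # exact u = sin(pi x)

# Thomas algorithm for the tridiagonal system (1, -2, 1) u = h^2 f.
a, b, c = 1.0, -2.0, 1.0
cp, dp = [0.0] * n, [0.0] * n
cp[0] = c / b
dp[0] = f[0] * h**2 / b
for i in range(1, n):
    m = b - a * cp[i - 1]
    cp[i] = c / m
    dp[i] = (f[i] * h**2 - a * dp[i - 1]) / m
u = [0.0] * n
u[-1] = dp[-1]
for i in range(n - 2, -1, -1):
    u[i] = dp[i] - cp[i] * u[i + 1]

err = max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(n))
assert err < 1e-3    # second-order accurate: error ~ h^2
```

The homogeneous boundary conditions enter only through the first and last rows of the system, mirroring how they are baked into $D(L)$ above.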

Finally, operators, like functions, can be bounded or unbounded, though this depends on the norm paired with the operator (a norm provides a notion of size for its objects). For example, the Laplacian is an unbounded operator when its domain is $\mathcal{L}^2(\mathbb{R}^n)$ with the $\mathcal{L}^2$ norm, but it is bounded when that domain is $\mathcal{H}^2(\mathbb{R}^n)$ paired with the $\mathcal{H}^2$ norm.

#### The Fundamental Solution

The **fundamental solution** of Laplace’s equation is a function $\Phi(x\,|\, \xi)$ which is harmonic everywhere except at a singularity $\xi$. In operator form this is often written $L \Phi(x\,|\, \xi)= \delta(x - \xi)$, where $\delta$ is a Dirac distribution centered at $\xi$. A classic example is the potential field of a point charge.

There are three types of fundamental solution for Laplace’s equation, depending on the dimensionality of the problem.

- If $n=1$, the fundamental solution is a linear function: $|x - a|$
- For $n=2$ it’s logarithmic: $\ln |x - a| \,, x \ne a$
- For $n \ge 3$ it’s a power law: $|x - a|^{2-n} \,, x \ne a$

A few observations about these. The first is that each represents a **family of solutions**, not a specific one, since scalar multiples, and sums with harmonic functions, still satisfy the defining equation.

The second observation is that for $n>1$ the solutions above are not defined at the point $a$; rather, they are harmonic only on the so-called “punctured region”, $\Omega \backslash \{a\}$. And for all three, the derivatives do not exist at $a$. The singularity that occurs there is of tremendous importance.

Third, these equations have radial symmetry. This again aligns with intuition. Both heat and electric fields radiate radially outward, as does the pull of gravity.
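
The fundamental solutions can be spot-checked numerically away from the singularity. The sketch below (test points chosen arbitrarily, away from the origin) applies finite-difference Laplacians to $\ln|x|$ in two dimensions and $|x|^{-1}$ in three:

```python
import math

def lap2(f, x, y, h=1e-4):
    """Five-point finite-difference Laplacian in 2D."""
    return (f(x+h, y) + f(x-h, y) + f(x, y+h) + f(x, y-h) - 4*f(x, y)) / h**2

def lap3(f, x, y, z, h=1e-4):
    """Seven-point finite-difference Laplacian in 3D."""
    return (f(x+h, y, z) + f(x-h, y, z) + f(x, y+h, z) + f(x, y-h, z)
            + f(x, y, z+h) + f(x, y, z-h) - 6*f(x, y, z)) / h**2

log_r = lambda x, y: math.log(math.hypot(x, y))        # n = 2: ln|x - a|, a = 0
inv_r = lambda x, y, z: (x*x + y*y + z*z) ** -0.5      # n = 3: |x - a|^{2-3}

assert abs(lap2(log_r, 0.5, 0.8)) < 1e-3        # harmonic away from the origin
assert abs(lap3(inv_r, 0.5, 0.8, -0.3)) < 1e-3  # likewise
```

Both checks fail spectacularly if the test point is moved to the singularity, which is exactly the point of the punctured-region caveat above.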

#### Free Space Poisson Equation

In free space (no boundary conditions), Poisson’s equation is solved by the convolution $u(x) = \Phi(x) * f(x)$, which can be verified by applying the Laplacian:

$$\begin{eqnarray}
\Delta u(x)
= \Delta \Phi(x) * f(x)
= \delta (x) * f(x)
= \int_{\mathbb{R}^n}\delta (x-\xi) \, f(\xi)\, d\xi
= f(x)
\end{eqnarray}$$

That is, the solution is a linear superposition of fundamental solutions weighted by the driving function $f(x)$, just as a charge distribution can be thought of as the weighted superposition of point charges.

In operator form:

$Lu(x) = L\Phi(x) * f(x) = \delta (x) * f(x) = f(x) $

#### Green’s Functions

The **Green’s function** $G(x, \xi)$ for Poisson’s equation is a fundamental solution with homogeneous boundary conditions applied. It is sometimes called the influence function.

The second variable $\xi$ in $G(x, \xi)$ indicates that we define a Green’s function at every point in the region, which provides a mechanism for superimposing them across that region. We set $G(x, \xi)$ to zero on the boundary so that the only contribution to $u(x)$ on the boundary is $g(x)$, the boundary value function. Green, in the seminal 1828 paper in which this concept was introduced, suggested that we think of the zero potential on the boundary as the result of grounding the conducting surface. The technique has since proven to extend far beyond problems in electrostatics.

Collectively, these ideas are expressed by the following auxiliary problem:

$$\begin{eqnarray}
\begin{cases}
\phantom{-}\mathop{{}\bigtriangleup}\nolimits G(x, \xi) = \delta(x - \xi) & x \in \Omega
\\\phantom{-} G(x, \xi) = 0 & x \in \partial \Omega
\end{cases}
\end{eqnarray}$$

Green’s functions are specific to the boundary geometry: once found for a region, the same Green’s function can be reused to solve the Dirichlet problem on that region regardless of the forcing and boundary functions.

One side note. The Green’s function here is symmetric (or more formally, self-adjoint): $G(x, \xi) = G(\xi, x)$. The proof is involved (see [EV]), but the upshot is that we can switch variables when doing certain integrations. This comes in handy when proving some of the results shown in this section.

It’s also worth stressing that Green’s functions exist for many different differential equations, not just Poisson’s equation. It is but one of the many techniques available for cracking these often intractable problems. We now demonstrate how it is used in the context of the Dirichlet Problem.

#### Poisson Dirichlet Problem

In terms of the Green’s function, the general solution of the Poisson equation with Dirichlet conditions is shown below. The proof appears in [EV], [SG] and others.

$$\begin{eqnarray}
u(x) =
- \int_{\partial \Omega} g(\xi)\frac {\partial G(x, \xi)}{\partial \eta}\, dS(\xi)
+ \int_{\Omega} f(\xi)\, G(x, \xi)\, d\xi
\end{eqnarray}$$

The term $\frac {\partial G(x, \xi)}{\partial \eta}$ (the derivative of $G$ in the outward normal direction $\eta$) is called the boundary influence function, which for the two-dimensional unit disk becomes the well-known **Poisson Kernel**. In particular, when $f(x)=0$, as it is for Laplace’s equation, the closed-form solution on a disk of radius $R$ is:

$$\begin{eqnarray}
u(r, \theta) = \frac{1}{2\pi}\int_0^{2\pi}
\frac
{R^2 - r^2}
{R^2 - 2Rr\cos(\theta - \psi) + r^2}\, g(\psi)\,
d\psi
\end{eqnarray}$$

Notice that the denominator above vanishes as $(r, \theta)$ approaches the boundary point $(R, \psi)$. The Poisson Kernel can therefore be viewed as the response to a delta distribution on the boundary, and the function value $u(r, \theta)$ is the weighted sum of all such distributions along the boundary. Compare this to how the forcing function is handled as a weighted sum of Green’s functions across the region.
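
The disk formula invites a numerical check. The sketch below (function name and test data are mine; $R=1$ and boundary data $g(\psi) = \cos\psi$, whose harmonic extension is $r\cos\theta$) evaluates the Poisson integral with a simple equispaced quadrature:

```python
import math

def poisson_disk(g, r, theta, n=2000):
    """Numerically evaluate the Poisson integral on the unit disk (R = 1)."""
    total = 0.0
    for k in range(n):
        psi = 2 * math.pi * k / n
        kernel = (1 - r * r) / (1 - 2 * r * math.cos(theta - psi) + r * r)
        total += kernel * g(psi)
    return total / n          # the 1/(2*pi) and d(psi) steps combine into 1/n

# Boundary data cos(psi): the interior solution should be r*cos(theta).
u = poisson_disk(math.cos, 0.5, 1.2)
assert abs(u - 0.5 * math.cos(1.2)) < 1e-9
```

Constant boundary data is another easy check: the kernel averages to exactly 1 inside the disk, so the solution is that same constant everywhere, consistent with the mean value property.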

The Poisson Kernel for the unit circle can be expressed in rectangular coordinates using the law of cosines:

$$\begin{eqnarray}
u(x) =
{1 -|x|^2 \over 2 \pi}
\int_{\partial \Omega} \frac {g(\xi)}{|x-\xi |^2}\, dS(\xi)
\end{eqnarray}$$

More generally, in $n$ dimensions (where $\alpha(n)$ denotes the volume of the unit ball in $\mathbb{R}^n$),

$$\begin{eqnarray}
u(x) =
{1 -|x|^2 \over n \alpha(n)}
\int_{\partial \Omega} \frac {g(\xi)}{|x-\xi |^n}\, dS(\xi)
\end{eqnarray}$$

In terms of operators, the Green’s function is the kernel of the inverse operator for the equation $Lu = f$:

$$\begin{eqnarray}
u(x) = (L^{-1}f)(x) := \int G(x, \xi)\, f(\xi)\, d\xi
\end{eqnarray}$$
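
Here is a sketch of this inverse-operator view in one dimension, where the Green’s function for $u'' = f$ on $[0,1]$ with zero boundary values has the known closed form $G(x,\xi) = x(\xi-1)$ for $x \le \xi$ and $\xi(x-1)$ otherwise; the forcing function and quadrature settings are my choices:

```python
import math

def G(x, xi):
    """Green's function for u'' = f on [0,1] with u(0) = u(1) = 0."""
    return x * (xi - 1) if x <= xi else xi * (x - 1)

def solve(f, x, n=4000):
    """u(x) = integral of G(x, xi) f(xi) d(xi), via the midpoint rule."""
    h = 1.0 / n
    return sum(G(x, (k + 0.5) * h) * f((k + 0.5) * h) for k in range(n)) * h

# Forcing chosen so the exact solution is u(x) = sin(pi x).
f = lambda xi: -math.pi**2 * math.sin(math.pi * xi)
x = 0.3
assert abs(solve(f, x) - math.sin(math.pi * x)) < 1e-4
```

The integral applies $L^{-1}$ directly: no differential equation is solved at evaluation time, and the same $G$ works for any forcing function, exactly as the operator formula promises.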

In general, finding Green’s functions, and thus solving Poisson’s equation, is not easy, since these integrals generally do not have closed-form solutions. But solutions do exist for a few simple geometries (i.e., shapes of the boundary). The solution on a circle is well known, as is the solution on the half-plane; both can be found using the method of images (see [EV], for example).

Fortunately, in two dimensions there are many more options thanks to the correspondence between harmonic functions and complex analysis. In particular, we have the wonderful technique of conformal mapping and, by extension, the Schwarz-Christoffel transformation. For example, here is the Green’s function for a region that a conformal mapping $g(z)$ carries onto the unit disk:

$$\begin{eqnarray}
G(z, \xi) =
{1 \over 2\pi} \ln{
|1 - \overline{g(\xi)}\, g(z) | \over |g(z)-g(\xi)|
}
\end{eqnarray}$$

#### Exterior Problems

Exterior solutions are also interesting. In two dimensions, the exterior solution of the Dirichlet problem on a disc tends toward a finite limit at infinity, and that limit happens to be the mean of the boundary data. That is, the value at infinity for the exterior region of a disc is the same as the value at the center of the disc in the interior problem. This can be shown fairly easily by taking the limit of the Poisson Kernel in the exterior Dirichlet problem as $r \to \infty$.

$$\begin{eqnarray}
\lim_{r\to\infty}
u(r, \theta) = \frac{1}{2\pi }\int_0^{2\pi}
g(\psi)\,
d\psi
\end{eqnarray}$$

In higher dimensions, this limit must instead be zero to ensure a unique solution. For a proof see [AXL] or this paper by Giles Auchmuty and Qi Han. This result is interesting given the previous result for the two-dimensional problem, and arguably physically consistent too. See [SG].
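
As a sanity check of the two-dimensional limit, the sketch below evaluates the exterior Poisson integral for the disk (the kernel $(r^2 - R^2)/|x-\xi|^2$ used here is the standard exterior analogue of the interior kernel; the function name and test data are mine) and confirms it approaches the boundary average at large $r$:

```python
import math

def exterior_poisson(g, r, theta, R=1.0, n=4000):
    """Exterior Poisson integral for the disk of radius R, valid for r > R."""
    total = 0.0
    for k in range(n):
        psi = 2 * math.pi * k / n
        kernel = (r*r - R*R) / (r*r - 2*R*r*math.cos(theta - psi) + R*R)
        total += kernel * g(psi)
    return total / n

# Boundary data cos(psi) has mean 0, so the solution should decay toward 0.
far = exterior_poisson(math.cos, 1e3, 0.7)
assert abs(far - 0.0) < 1e-2
```

For constant boundary data the exterior kernel also averages to exactly 1, so the solution is that constant everywhere outside the disk, matching the finite-limit claim above.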

#### Bibliography

**[AIS]** Sinha, U., *The Sobel and Laplacian Edge Detectors*, Microsoft.

**[ALF]** Ahlfors, L., 1979, *Complex Analysis*, McGraw-Hill Book Company.

**[AXL]** Axler, S., Bourdon, P., Ramey, W., 2001, *Harmonic Function Theory*, Springer.

**[EV]** Evans, L., 2010, *Partial Differential Equations*, 2nd Edition (Graduate Studies in Mathematics), American Mathematical Society.

**[EV2]** Evans, L., 2011?, Overview article on partial differential equations, UC Berkeley.

**[HT]** Hunter, J., 2001, *Applied Analysis*, WSPC.

**[HT2]** Hunter, J., 2014, *Notes on Partial Differential Equations*, University of California at Davis.

**[LRX]** Lamoureux, M., 2004, *The mathematics of PDEs and the wave equation*, Univ. of Calgary.

**[LV]** Levandosky, J., 2003, Math 220B Lectures, Stanford University.

**[NH]** Nehari, Z., 1952, *Conformal Mapping*, McGraw-Hill Book Company.

**[OL2]** Olver, P., 2018, *Complex Analysis and Conformal Mapping*, University of Minnesota.

**[STR]** Strauss, W., 2007, *Partial Differential Equations: An Introduction*, Wiley.

**[SG]** Stakgold, I., 1967, *Boundary Value Problems of Mathematical Physics*, Macmillan.