(Lecture 22 of Mathematical Methods II.)
A singularity is a vague concept in Physics, associated with a divergence, that is, a quantity going to infinity as a parameter on which it depends approaches the singularity. It is an interesting mathematical feature, often associated with a profound change in physical behaviour with important or uncontrollable consequences: black holes (a singularity in spacetime), mechanical singularities (where a mechanical system stops being predictable), the Van Hove singularities in the density of states of crystals (with consequences for the optical spectra), and even the concept of a technological singularity, with an emergent artificial intelligence overtaking that of our species.
A singularity can, however, be rigorously defined in the context of complex calculus. Broadly, it is a point where an analytic function ceases to be analytic. We can make this precise by not qualifying the function in the first place:
Definitions: A point $z_0$ is a singularity of a function $f$ if $f$ is not analytic at $z_0$ but every neighborhood of $z_0$ contains a point where $f$ is analytic. The singularity is isolated if there exists a neighborhood of $z_0$ in which the only singularity is $z_0$ itself.
The archetype of a singularity is certainly:
\begin{equation} \tag{1} f(z)=\frac{1}{z}\,, \end{equation}
at $z_0=0$. The function is holomorphic everywhere else (with derivative $-1/z^2$) but is not defined at the origin, where it goes to infinity: $\lim_{z\rightarrow0}f(z)=\infty$. Proof: for all $C>0$, there exists $z_C$ such that $(|z|<|z_C|)\Rightarrow|f(z)|>C$. Indeed, $z_C=e^{i\theta}/C$ for any $0\le\theta<2\pi$ does the job.
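The divergence is easy to witness numerically; here is a minimal sketch (an illustration in Python, not part of the proof), checking that any point with $|z|<1/C$ sends $|1/z|$ above the bound $C$:
\begin{verbatim}
# Numerical sketch: for a bound C, any z with |z| < 1/C has |1/z| > C,
# matching the proof above.  (Illustrative only.)
import cmath

for C in (10.0, 1e3, 1e6):
    z = cmath.exp(0.7j) / (2 * C)   # a point with |z| = 1/(2C) < 1/C
    assert abs(1 / z) > C
    print(f"C = {C:g}:  |z| = {abs(z):.2e},  |1/z| = {abs(1/z):.2e}")
\end{verbatim}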
Not all singularities bring the function to infinity, however. In fact, such a singularity can be "canceled" by a numerator going to zero at least as fast as the denominator. For instance:
\begin{equation} \tag{2} f(z)=\frac{\sin z}{z}\,, \end{equation}
is not defined per se at $z=0$. Defining it to be $1$ there, one gets a function that is in fact everywhere analytic (it is known as the cardinal sine $\mathrm{sinc}(z)$). This is clear from the Laurent expansion of Eq. (2):
\begin{equation} \tag{3} \mathrm{sinc}(z)=\frac{z-z^3/6+z^5/120-\cdots}{z}=1-z^2/6+z^4/120-\cdots \end{equation}
from which we see that the cardinal sine is a cosine lookalike $1-z^2/2+z^4/24-\cdots$. It indeed shares many links with it (not surprisingly). For instance, its local maxima and minima correspond to its intersections with the cosine: setting the derivative $(z\cos z-\sin z)/z^2$ to zero gives $\tan z=z$, i.e., $\cos z=\mathrm{sinc}(z)$ at the extrema.
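Both the expansion and the extrema claim are easy to check with a computer algebra system; the following sympy sketch is one way to do so (the numerical seed 4.5 is an arbitrary choice near the first nonzero solution of $\tan z=z$):
\begin{verbatim}
# Check Eq. (3) against the cosine series, and the extrema claim
# cos(z) = sinc(z) wherever tan(z) = z.
import sympy as sp

z = sp.symbols('z')
print(sp.series(sp.sin(z)/z, z, 0, 6))   # 1 - z**2/6 + z**4/120 + O(z**6)
print(sp.series(sp.cos(z), z, 0, 6))     # 1 - z**2/2 + z**4/24 + O(z**6)

x = sp.nsolve(sp.tan(z) - z, z, 4.5)     # first nonzero root of tan z = z
print(sp.sin(x)/x - sp.cos(x))           # ~ 0: sinc meets cos at its extremum
\end{verbatim}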
Such a singularity, which is only apparent, since analyticity can be restored or enforced by a proper choice of the value of the function at the seemingly problematic point, is called a removable singularity.
Singularities remain, most of the time, a messy business. We will now see how complex analysis makes the situation quite neat.
Consider the three functions:
\begin{equation} \tag{4} f_1(z)=\sin\frac{1}{z}\,,\quad f_2(z)=\frac{1}{\sin z}\quad\mathrm{and}\quad f_3(z)=\frac{1}{\sin\frac{1}{z}}\,, \end{equation}
what can we say about their singularities at $z=0$? The key is to use Laurent series. This gives:
\begin{align} \tag{5} f_1(z)&=\frac{1}{z}-\frac{1}{3!z^3}+\frac{1}{5!z^5}-\cdots\,,\\ f_2(z)&=\frac{1}{z}+\frac{z}{6}+\frac{7z^3}{360}+\cdots\,,\\ f_3(z)&\text{ has no Laurent series}\,. \end{align}
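These expansions can be verified symbolically; a sympy sketch (for the essential singularity of $f_1$, we expand in $w=1/z$, since a direct power-series expansion at $z=0$ does not exist):
\begin{verbatim}
# Verify the Laurent series of Eq. (5) with sympy (illustrative sketch).
import sympy as sp

z, w = sp.symbols('z w')

# f2 = 1/sin(z): finite principal part (a single 1/z term).
print(sp.series(1/sp.sin(z), z, 0, 5))
# 1/z + z/6 + 7*z**3/360 + O(z**5)

# f1 = sin(1/z): expand sin(w), then substitute w = 1/z to read off
# the infinite principal part.
print(sp.series(sp.sin(w), w, 0, 8).removeO().subs(w, 1/z))
# terms 1/z - 1/(6 z^3) + 1/(120 z^5) - 1/(5040 z^7)
\end{verbatim}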
The isolated singularities are classified by the number of terms in the principal part (remember that this is the series of negative powers in the Laurent expansion). If the principal part is finite, we speak of a pole, whose order is the largest $n$ such that the coefficient of $(z-z_0)^{-n}$ is nonzero. If the principal part is infinite, the singularity is essential.
Thus, $1/\sin(z)$ has a simple pole (i.e., a pole of order 1), $\sin(1/z)$ has an essential singularity, while the singularity of $1/\sin(1/z)$ is not isolated. We will not deal with the latter type, but will show that it is indeed not isolated. This means that in any neighborhood of the origin, the function has a singularity, i.e., $\sin(1/z)$ has a zero. Phrased equivalently, for any $\epsilon>0$, there is at least one zero of $\sin(1/z)$ for $|z|<\epsilon$, i.e., there is at least one zero of $\sin(w)$ for $|w|>1/\epsilon$, which is clearly true since the sine has zeros at all multiples of $\pi$.
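A quick numerical sketch makes the accumulation explicit: the zeros of $\sin(1/z)$ sit at $z_k=1/(k\pi)$ and pile up at the origin, so every disc $|z|<\epsilon$ contains infinitely many of them:
\begin{verbatim}
# Zeros of sin(1/z) inside |z| < eps: all z_k = 1/(k*pi) with k > 1/(pi*eps).
import math

eps = 1e-3
k0 = math.floor(1 / (math.pi * eps)) + 1   # smallest such k
for k in range(k0, k0 + 3):
    zk = 1 / (k * math.pi)
    print(f"z_{k} = {zk:.6e}  (sin(1/z_k) = {math.sin(k * math.pi):.1e})")
\end{verbatim}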
Now, there is an important difference between poles and essential singularities. The pole behaves as expected from a divergence:
If $f(z)$ has a pole at $z_0$, then $|f(z)|\rightarrow\infty$ as $z\rightarrow z_0$.
On the other hand, an essential singularity has a more complicated behaviour. It need not diverge in the first place. For instance, $\sin(1/z)$ on the real axis is bounded (it lies in the interval $\mathcal{I}=[-1,1]$). Its limit, if it existed, would therefore be a finite number. There is no limit on the real axis, however, as the function has the entire interval $\mathcal{I}$ as accumulation points. On the imaginary axis, on the other hand, $\sin(1/z)=\sin(-i/y)=-i\sinh(1/y)$, and its modulus diverges like $e^x/2$ with $x=1/y$ as $y\rightarrow 0^+$.
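The contrast between the two axes is easy to observe numerically; a small sketch:
\begin{verbatim}
# sin(1/z) approaching 0 along the real vs the imaginary axis: bounded
# on the first, exploding like sinh(1/t) ~ e^(1/t)/2 on the second.
import cmath

for t in (0.1, 0.01, 0.005):
    on_real = cmath.sin(1 / complex(t, 0))
    on_imag = cmath.sin(1 / complex(0, t))
    print(f"t = {t}:  |real axis| = {abs(on_real):.3f},  "
          f"|imaginary axis| = {abs(on_imag):.3e}")
\end{verbatim}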
In fact, for an essential singularity the absence of a limit is realized in an almost maximal fashion, as quantified by Picard's theorem, which states that:
If $f$ is analytic with an isolated essential singularity at $z_0$, it takes all, except possibly one, complex values in any neighborhood of $z_0$.
The proof is a bit involved, so we will instead illustrate the statement in a particular case. Rather than $f_1$, we will consider the simpler $\exp(1/z)$. For any $z=re^{i\theta}$ in a neighborhood of zero (say, in an open ball of radius $\epsilon$), we have:
\begin{align} \tag{6} \exp(1/z)=\exp\left(\frac{\cos\theta-i\sin\theta}{r}\right)\,. \end{align}
We now show that this can be equated to any target $w_0=r_0\exp(i\theta_0)$ with $r_0\neq0$, i.e., that there exist $r$ and $\theta$ with $r<\epsilon$ such that (equating modulus and argument):
\begin{equation} \begin{cases} \exp(\cos(\theta)/r) = r_0\,, \\ -\sin(\theta)/r = \theta_0\,. \end{cases} \end{equation}
which yields:
\begin{equation} \tag{7} r=\frac{1}{\sqrt{(\ln r_0)^2+\theta_0^2}}\,, \end{equation}
and
\begin{equation} \tag{8} \tan(\theta)=-\frac{\theta_0}{\ln r_0}\,. \end{equation}
The tangent takes all real values, so for any $\theta_0$ and $\ln r_0$ there exists a $\theta$ meeting the required condition. As for the equation on $r$, the only constraint is that $r<\epsilon$. This can, however, be ensured by adding enough multiples of $2\pi$ to $\theta_0$, which does not change the target $w_0$ but draws the value of $z$ closer to the origin, ultimately within any of its neighborhoods. This illustrates Picard's theorem, with zero as the particular value that is not taken by the function in any neighborhood (it is not taken at all, since $\exp$ is never zero, even in the complex plane).
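The construction translates directly into a few lines of code. The sketch below solves Eqs. (7) and (8) for an arbitrary nonzero target (here $w_0=-2.3+1.7i$, chosen only for illustration) and shrinks $|z|$ by adding multiples of $2\pi$ to $\theta_0$:
\begin{verbatim}
# Picard in action for exp(1/z): find z with |z| arbitrarily small such
# that exp(1/z) equals a given target w0 != 0, using Eqs. (7) and (8).
import cmath, math

w0 = complex(-2.3, 1.7)                  # arbitrary nonzero target
r0, theta0 = abs(w0), cmath.phase(w0)

for k in (0, 10, 1000):                  # theta0 + 2*pi*k: same w0, smaller |z|
    t = theta0 + 2 * math.pi * k
    r = 1 / math.hypot(math.log(r0), t)            # Eq. (7)
    theta = math.atan2(-t, math.log(r0))           # solves Eq. (8) with the
                                                   # correct signs of cos, sin
    z = r * cmath.exp(1j * theta)
    print(f"k = {k:4d}:  |z| = {abs(z):.2e},  exp(1/z) = {cmath.exp(1/z):.4f}")
\end{verbatim}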
An essential singularity is also at work in $f(z)=\exp(-1/z^2)$, for which we could define $f(0)=0$ and obtain a function everywhere differentiable on the real axis. The function is not, however, complex differentiable, for otherwise it would have a Taylor series, whereas we know that it has an infinite principal part (and no term of positive power in the Laurent expansion).
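One way to exhibit that infinite principal part is again to expand in $w=1/z$; a short sympy sketch:
\begin{verbatim}
# Laurent expansion of exp(-1/z**2) at 0: only non-positive powers of z.
import sympy as sp

z, w = sp.symbols('z w')
expansion = sp.series(sp.exp(-w**2), w, 0, 8).removeO().subs(w, 1/z)
print(sp.expand(expansion))   # 1 - 1/z**2 + 1/(2*z**4) - 1/(6*z**6)
\end{verbatim}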
Clearly, singularities are often due to a denominator going to zero. This makes the study of zeros interesting in connection with singularities. It is then useful to know that the zeros of an analytic function are isolated.
The terminology for zeros is similar: an $n$th-order zero is one where the function vanishes together with its first $n-1$ derivatives. In the Taylor expansion $\sum_{k=0}^\infty c_k(z-z_0)^k$, this means $c_k=0$ for $k<n$ and $c_n\neq0$. We now prove that zeros of analytic functions are isolated. Assume that the zero is of order $n$; then:
\begin{equation} \tag{9} f(z)=(z-z_0)^n\sum_{k=n}^\infty c_k(z-z_0)^{k-n}\,. \end{equation}
Let us call $g(z)$ the function defined by the series. It is, by definition, analytic, and it satisfies $g(z_0)=c_n\neq0$ (otherwise the zero would not be of order $n$). Now, given that it is continuous (since it is differentiable, being analytic), for any $\epsilon>0$ there exists a neighborhood of $z_0$ in which $|g(z)-g(z_0)|<\epsilon$. By using the reverse triangle inequality:
\begin{equation} \tag{10} \big||g(z)|-|g(z_0)|\big|\le|g(z)-g(z_0)| \end{equation}
we can then conclude that $|g(z)|\ge|g(z_0)|-\epsilon$, i.e., for $\epsilon$ small enough, $|g(z)|>0$ in a neighborhood of $z_0$. Since $(z-z_0)^n$ vanishes only at $z_0$, this proves the assertion.
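As a sanity check of the factorization of Eq. (9), one can verify $g(z_0)=c_n\neq0$ on a concrete case; here, as a hypothetical example, the order-3 zero of $\sin^3 z$ at the origin:
\begin{verbatim}
# f(z) = sin(z)**3 has a zero of order 3 at 0; the factor g = f/z**3
# (computed from a truncated series) is indeed nonzero at the origin.
import sympy as sp

z = sp.symbols('z')
g = sp.cancel(sp.series(sp.sin(z)**3, z, 0, 8).removeO() / z**3)
print(g.subs(z, 0))   # 1, i.e. c_3 != 0: the zero is isolated
\end{verbatim}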
If $f$ is analytic at $z_0$ and has a zero of $n$th order there, then for any $g$ also analytic and such that $g(z_0)\neq0$, we know that $h(z)=g(z)/f(z)$ has a pole of order $n$ at $z_0$. Such functions, holomorphic except at isolated points where they have poles of finite order, are called meromorphic. They provide useful links to Riemann surfaces and in particular to the Riemann sphere.