Limits are instrumental to the study of Calculus. Derivatives are defined in terms of limits, limits are crucial for understanding the behavior of infinite sequences and series, they appear in the fundamental theorem of Calculus, and they help us better understand the behavior of functions. However, the way limits are typically introduced is useful for the study of Calculus but not particularly rigorous. The rigorous definition of the limit is achieved through the epsilon-delta definition. While the definition itself is important, actually being able to write epsilon-delta proofs is a separate skill that can take a while to develop.

The rigorous definition of the limit (assuming we’re looking at a limit approaching a finite point with a finite limit) is the following: \[\lim_{x \to c}f(x) = L\] \[\forall \epsilon > 0 \quad \exists \delta > 0 \quad \text{s.t.} \quad 0 < \left \lvert x - c \right \rvert < \delta \implies \left \lvert f(x) - L \right \rvert < \epsilon\] This definition is a little dense, so let’s unpack it. It says that for any arbitrarily chosen positive real number \(\epsilon\) (the \(\epsilon > 0\) constraint), there exists a positive number, \(\delta\), such that for every \(x\) within \(\delta\) of \(c\) (excluding \(c\) itself), the value of the function at \(x\) is within \(\epsilon\) of \(L\), the true limit of the function at \(c\). In other words, no matter how small we make \(\epsilon\), we can find a value of \(\delta\) such that whenever \(0 < \left \lvert x - c \right \rvert < \delta\), the value of \(f(x)\) is within \(\epsilon\) of \(L\). We’re showing that we can make the value of the function as close to \(L\) as we want, which means that the limit of the function as \(x\) approaches \(c\) must be \(L\).
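The definition can be illustrated numerically. The following is a pure-Python sketch (not a proof): given a candidate \(\delta\) as a function of \(\epsilon\), it samples points satisfying \(0 < \lvert x - c \rvert < \delta\) and confirms the epsilon condition. The function name `check_limit`, the example \(f(x) = 2x + 1\), and the sampling scheme are my own illustrative choices.

```python
# Numeric illustration of the epsilon-delta definition (a sanity check,
# not a proof): for f(x) = 2x + 1 with c = 1 and L = 3, the choice
# delta = epsilon / 2 works, since |f(x) - L| = 2|x - 1|.

def check_limit(f, c, L, delta_of_eps, eps_values, samples=1000):
    """Sample x with 0 < |x - c| < delta and confirm |f(x) - L| < eps."""
    for eps in eps_values:
        delta = delta_of_eps(eps)
        for k in range(1, samples + 1):
            x = c + delta * k / (samples + 1)  # points in (c, c + delta)
            for pt in (x, 2 * c - x):          # mirror to (c - delta, c)
                if abs(f(pt) - L) >= eps:
                    return False
    return True

print(check_limit(lambda x: 2 * x + 1, 1.0, 3.0,
                  lambda e: e / 2, [1.0, 0.1, 0.001]))  # True
```

A sampling check like this can catch a bad choice of \(\delta\), but passing it does not prove the limit; only the algebraic argument does.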

Sometimes we want to prove that a specific limit statement actually equals the value we think it does. Informally, we can do this using limit rules and some intuition, but rigorously we have to use the epsilon-delta definition. (Note that the usual limit rules are themselves derived using the epsilon-delta definition.) There are a few steps involved in a proof like this. Generally, they are as follows:

1. Start with the epsilon statement, namely \(\left \lvert f(x) - L \right \rvert < \epsilon\).
2. Rewrite this epsilon statement into the form \(\left \lvert x - c \right \rvert < \text{RHS}\).
3. Choose \(\delta\) to be \(\text{RHS}\), assuming that \(\text{RHS}\) is only in terms of \(\epsilon\). Otherwise some extra manipulation might be required.
4. Write some boilerplate proof text.
5. Work backwards, starting with the delta assumption, but with \(\delta\) rewritten as \(\text{RHS}\).
6. Work towards the statement \(\left \lvert f(x) - L \right \rvert < \epsilon\). This should look much like the forward scratch work done earlier. Once you have an inequality of that form, you are done.
7. Write QED (the most important step).

This set of steps will work for many limits, but there are nuances that can crop up in even relatively simple problems and require special care. Let’s start by looking at a couple of examples.

First off, let’s look at the following example: \[\lim_{x \to 0}\frac{(1+x)^2-1}{x}\] In
order to prove that this limit is something, we need a value to prove.
We can find the value of this limit using limit rules and algebraic
simplification: \[\lim_{x \to
0}\frac{(1+x)^2-1}{x} \implies \lim_{x \to
0}\frac{x^2+2x+1-1}{x}\] \[\implies
\lim_{x \to 0}\frac{x^2+2x}{x}\] \[\implies \lim_{x \to 0}x+2 = 2\] (Canceling the factor of \(x\) is valid because the limit only considers \(x \neq 0\).) Now that we know that the value of the limit is 2, we can start working on an epsilon-delta proof. As outlined in the steps above, let’s start with the epsilon statement (step 1), and try to manipulate it into the form of the delta statement (step 2): \[\left
\lvert \frac{(1+x)^2-1}{x} - 2 \right \rvert < \epsilon\]
\[\implies \left \lvert \frac{x^2+2x+1-1}{x}
- 2 \right \rvert < \epsilon\] \[\implies \left \lvert x + 2 - 2 \right \rvert
< \epsilon\] \[\implies \left
\lvert x - 0 \right \rvert < \epsilon\] Now we have
successfully transformed the epsilon statement into the form of the
delta statement (\(\left \lvert x - c \right
\rvert < \delta\)). Now we know that the right hand side (RHS)
of our equation is just \(\epsilon\),
so we can go ahead with writing the actual proof:

Proof:

Suppose \(\epsilon > 0\) is arbitrary.

Choose \(\delta=\epsilon\).

Suppose \(0 < \left \lvert x - 0 \right \rvert <
\delta\): \[\left \lvert x - 0 \right
\rvert < \delta \implies \left \lvert x - 0\right \rvert <
\epsilon\] \[\implies \left \lvert x +
2 - 2 \right \rvert < \epsilon\] \[\implies \left \lvert \frac{x^2+2x+1-1}{x} - 2
\right \rvert < \epsilon\] \[\implies \left \lvert \frac{(1+x)^2-1}{x} - 2
\right \rvert < \epsilon\] (Rewriting \(x\) as \(\frac{x^2+2x}{x}\) is valid because \(0 < \left \lvert x - 0 \right \rvert\) guarantees \(x \neq 0\).) QED! We have now completed the
objective of the proof, which is showing that when we define \(\delta\) correctly in terms of \(\epsilon\), and suppose that the delta
condition, \(\left \lvert x - c \right \rvert
< \delta\), is met, the epsilon condition, \(\left \lvert f(x) - L \right \rvert <
\epsilon\), is also met.
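The choice \(\delta = \epsilon\) made in this proof can be spot-checked numerically. The following is a quick pure-Python sanity check, not a substitute for the proof; the sampling scheme is my own choice.

```python
# Numeric spot check (not a proof) that delta = epsilon works for
# lim_{x -> 0} ((1+x)^2 - 1)/x = 2.  We sample points with
# 0 < |x - 0| < delta and confirm |f(x) - 2| < epsilon.
f = lambda x: ((1 + x) ** 2 - 1) / x

ok = True
for eps in (1.0, 0.25, 1e-3):
    delta = eps                      # the choice made in the proof above
    for k in range(1, 1000):
        for x in (delta * k / 1000, -delta * k / 1000):  # 0 < |x| < delta
            ok &= abs(f(x) - 2) < eps
print(ok)  # True
```

Note that the sampler never evaluates \(f\) at \(x = 0\), mirroring the \(0 < \lvert x - 0 \rvert\) part of the delta condition; \(f\) is undefined there.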

Next, let’s look at the following limit: \[\lim_{x \to -2}x^2-1=3\] Proving the value of this limit can be quite a bit more challenging, as we will see in a moment. First, let’s take the epsilon formula and try to manipulate it to look like the delta formula: \[\left \lvert x^2 - 1 - 3 \right \rvert <
\epsilon\] \[\implies \left \lvert x^2
-4 \right \rvert < \epsilon\] \[\implies \left \lvert (x+2)(x-2) \right \rvert
< \epsilon\] \[\implies \left
\lvert x+2 \right \rvert \left \lvert x-2 \right \rvert <
\epsilon\] \[\implies \left \lvert x+2
\right \rvert < \frac{\epsilon}{\left \lvert x-2 \right
\rvert}\] We have successfully made the left hand side look like the delta formula, \(\left \lvert x + 2 \right \rvert < \delta\), but the right hand side is written in terms of both \(\epsilon\) and \(x\). This presents a problem because we need to be able to write \(\delta\) solely in terms of \(\epsilon\). To solve this one, we have to use a little trick, namely finding a value in terms of \(\epsilon\) that is always going to be less than \(\frac{\epsilon}{\left \lvert x - 2 \right \rvert}\), at least when \(x\) is sufficiently close to \(c\), which in our case means sufficiently close to \(-2\). We can add the restriction that \(\left \lvert x + 2 \right \rvert < 1\). Under that restriction, \(\left \lvert x - 2 \right \rvert \le \left \lvert x + 2 \right \rvert + 4 < 5 < 8\), so \(\frac{\epsilon}{8}\) is a decent choice, yielding the following inequality: \[\left \lvert x + 2 \right \rvert < \frac{\epsilon}{8} < \frac{\epsilon}{\left \lvert x - 2 \right \rvert}\] Choosing \(\delta\) to be \(\frac{\epsilon}{8}\), we can begin to write out the proof:

Suppose that \(\epsilon > 0\) is arbitrary.

Choose \(\delta = \frac{\epsilon}{8}\).

Suppose \(0 < \left \lvert x + 2 \right \rvert < \delta\), and additionally that \(\left \lvert x + 2 \right \rvert < 1\), so that \(\left \lvert x - 2 \right \rvert < 5 < 8\):

\[\left \lvert x + 2 \right \rvert < \frac{\epsilon}{8}\] \[\implies \left \lvert x + 2 \right \rvert < \frac{\epsilon}{8} < \frac{\epsilon}{\left \lvert x - 2 \right \rvert}\] \[\implies \left \lvert x + 2 \right \rvert \left \lvert x - 2 \right \rvert < \epsilon\] \[\implies \left \lvert x^2 - 4 \right \rvert < \epsilon\] \[\implies \left \lvert x^2 - 1 - 3 \right \rvert < \epsilon\] QED! Strictly speaking, this only shows the epsilon condition for \(x\) values close enough to \(-2\) (those satisfying \(\left \lvert x + 2 \right \rvert < 1\)), and consequently only works outright for \(\epsilon\) values that are somewhat small. That is mostly fine, because when we prove the value of a limit, we only care about the behavior of the function near the point we’re approaching. For full rigor, though, we should take \(\delta\) to be the minimum of two values: \(\delta = \min\left(1, \frac{\epsilon}{8}\right)\). The \(\frac{\epsilon}{8}\) part delivers the inequality above, and the cap at 1 guarantees \(\left \lvert x + 2 \right \rvert < 1\) (hence \(\left \lvert x - 2 \right \rvert < 8\)) even when \(\epsilon\) is large.
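The minimum trick described above can also be spot-checked numerically. The following is a quick pure-Python sanity check, not a proof; the sampling scheme is my own choice, and it deliberately includes a large \(\epsilon\), where the bare choice \(\delta = \epsilon/8\) would break down.

```python
# Numeric spot check (not a proof) that delta = min(1, eps/8) works for
# lim_{x -> -2} (x^2 - 1) = 3, including large epsilon values.
f = lambda x: x * x - 1

ok = True
for eps in (100.0, 1.0, 1e-4):
    delta = min(1.0, eps / 8)        # the "minimum of two values" trick
    for k in range(1, 1000):
        for x in (-2 + delta * k / 1000, -2 - delta * k / 1000):
            ok &= abs(f(x) - 3) < eps
print(ok)  # True
```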

The previous definition works well when we want to take the limit of
a function at a finite value, but we often want to evaluate the end
behavior of functions, or their limits at infinity or negative infinity.
This requires a slightly different technique, but is overall pretty
similar. The definition for a limit at infinity is as follows for the
limit \(\lim_{n\to \infty}f(n) = L\):
\[\forall \epsilon > 0 \quad \exists N
\quad \text{s.t.} \quad n > N \implies \left \lvert f(n) - L \right
\rvert < \epsilon\] What we’re saying directly with this statement is that for every \(\epsilon\) greater than zero, there exists a number (usually taken to be an integer), \(N\), such that for every \(n > N\), \(\left \lvert f(n) - L \right \rvert\) is less than \(\epsilon\). Essentially, we can find a threshold such that every input beyond that threshold makes the value of the function fall within \(\epsilon\) of \(L\). We’re saying that we can make the value of the function at \(n\) as close as we want to \(L\), which means that the limit of the function at infinity is \(L\). Now that we have the definition out of
the way, let’s look at an example, particularly the proof of the
following limit: \[\lim_{n \to
\infty}\frac{1}{n^2}=0\] So again, let’s start with the epsilon formula and try to manipulate it to look like the restriction on the input, which in this case is just an inequality of the form \(n > \text{RHS}\): \[\left \lvert \frac{1}{n^2} - 0 \right \rvert < \epsilon\] \[\implies \frac{1}{n^2} - 0 < \epsilon\] We can drop the absolute value here because \(\frac{1}{n^2}\) is positive for every nonzero \(n\). \[\implies 1 < \epsilon n^2\] \[\implies \frac{1}{\epsilon} < n^2\] \[\implies \sqrt{\frac{1}{\epsilon}} < n\] (We can take the positive square root because \(n\) is positive as it grows towards infinity.) We have now found that we should choose \(N\) to be \(\sqrt{\frac{1}{\epsilon}}\). Now, let’s write the actual proof:

Suppose that \(\epsilon > 0\) is arbitrary:

Choose \(N =
\sqrt{\frac{1}{\epsilon}}\)

Suppose \(n > N\): \[n
> \sqrt{\frac{1}{\epsilon}}\] \[\implies n^2 > \frac{1}{\epsilon}\]
\[\implies n^2 \epsilon > 1\] \[\implies \epsilon > \frac{1}{n^2}\]
\[\implies \epsilon > \left \lvert
\frac{1}{n^2} - 0 \right \rvert\] QED! Notice again that the process of writing the proof is just the scratch work run in reverse: we start with the \(n > N\) assumption and work towards the epsilon formula.
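The choice \(N = \sqrt{1/\epsilon}\) can be spot-checked numerically as well. The following is a quick pure-Python sanity check, not a proof; the choice of which integers to sample is my own.

```python
import math

# Numeric spot check (not a proof) that N = sqrt(1/eps) works for
# lim_{n -> inf} 1/n^2 = 0: every integer n > N satisfies |1/n^2 - 0| < eps.
ok = True
for eps in (0.5, 0.01, 1e-6):
    N = math.sqrt(1 / eps)
    # floor(N) + 1 is the smallest integer strictly greater than N
    # (or N + 1 when N is itself an integer), so every sampled n exceeds N.
    for n in range(math.floor(N) + 1, math.floor(N) + 1001):
        ok &= abs(1 / n ** 2 - 0) < eps
print(ok)  # True
```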

We might also be interested in showing the divergence of a limit to
\(\infty\) or \(-\infty\). This again requires a different
technique, because the epsilon statement \(\left \lvert f(x) - L \right \rvert < \epsilon\) doesn’t mean anything when \(L\) is \(\pm\infty\). Instead we need to be able to show that we can make the function arbitrarily large in magnitude (either positive or negative, depending on whether we’re trying to prove divergence to \(\infty\) or \(-\infty\)) as \(x\) gets closer and closer to some point. The definition is essentially as follows for the \(\infty\) case: \[\lim_{x \to c}f(x)=\infty \implies \forall M > 0 \quad \exists \delta > 0 \quad \text{s.t.} \quad 0 < \left \lvert x - c \right \rvert < \delta \implies f(x) > M\] Let’s try an example, namely \(f(x)=\frac{1}{\left \lvert x-4 \right \rvert}\), which diverges to \(\infty\) as \(x \to 4\). Specifically, let’s look at the
following limit: \[\lim_{x \to
4}\frac{1}{\left \lvert x - 4 \right \rvert}=\infty\] Let’s start
off with the \(M\) statement that needs
to be satisfied and try and work towards the form of the delta
statement: \[\frac{1}{\left \lvert x-4 \right
\rvert} > M\] \[\implies 1 > M
\left \lvert x-4 \right \rvert\] \[\implies \frac{1}{M} > \left \lvert x-4 \right
\rvert\] This means that we should choose \(\delta\) to be \(\frac{1}{M}\). Knowing that, let’s get into the proof:

Suppose that \(M>0\) is arbitrary:

Choose \(\delta=\frac{1}{M}\).

Suppose \(0 < \left \lvert x - 4 \right \rvert <
\frac{1}{M}\) (the \(0 < \left \lvert x - 4 \right \rvert\) part matters here, since \(f\) is undefined at \(x = 4\)).

\[\implies \left
\lvert x-4 \right \rvert M < 1\] \[\implies M < \frac{1}{\left \lvert x -4 \right
\rvert}\] QED! Again, we’re just working backwards from the
scratch work. Some of these techniques can be combined as well. For
example, if we want to show that the limit of a function at infinity is
infinity, we need to use a combination of these two techniques. The
definition for that ends up being the following: \[\lim_{x \to \infty}f(x)=\infty \implies \forall M
> 0 \quad \exists N \quad \text{s.t.} \quad x > N \implies f(x)
> M\] An example of that one is left as an exercise for the reader.
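As a closing sanity check, the divergence proof above (\(\delta = \frac{1}{M}\) for \(\lim_{x \to 4}\frac{1}{\left \lvert x - 4 \right \rvert}=\infty\)) can also be spot-checked numerically. The following is a pure-Python sketch, not a proof; the sampling scheme is my own choice.

```python
# Numeric spot check (not a proof) that delta = 1/M works for
# lim_{x -> 4} 1/|x - 4| = infinity: whenever 0 < |x - 4| < delta,
# the function value exceeds M.
f = lambda x: 1 / abs(x - 4)

ok = True
for M in (1.0, 10.0, 1e6):
    delta = 1 / M
    for k in range(1, 1000):
        for x in (4 + delta * k / 1000, 4 - delta * k / 1000):
            ok &= f(x) > M
print(ok)  # True
```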