Munkres Introduction to Topology: Section 21 Problem 11

I expected this problem to be fun, and it was. This is a fairly long problem, so it is split up into four parts, which I will go through sequentially. Along the way, there are a few other proofs that I am going to go through to further clarify what exactly is going on. I'll also be citing a proof that I went through in the Section 21 Problem 8 solution, so if you are unfamiliar with my argument there, see Problem 8. Without further delay, let's get into the solution.

Part A

We let $s_a \ = \ \sup (s_n)$ (which exists, as the sequence is bounded), and we assert that our increasing sequence converges to $s_a$. We choose some open set $(s_a \ - \ \epsilon, \ s_a \ + \ \epsilon)$ around $s_a$. We prove, by contradiction, that there must be some point $s_N$ such that $s_N \ > \ s_a \ - \ \epsilon$. If there didn't exist any such point, then for all $i$, we would have $s_i \ \leq \ s_a \ - \ \epsilon$. This would make $s_a \ - \ \epsilon$ an upper bound on our sequence, and since $s_a \ - \ \epsilon \ < \ s_a$, it would be an upper bound smaller than $s_a$, contradicting the fact that $s_a$ is the least upper bound. Thus, there must exist some $s_N$ such that $s_N \ > \ s_a \ - \ \epsilon$. Since we already know $s_{n \ + \ 1} \ \geq \ s_n$, it follows that all points $s_n$, for $n \ > \ N$, are also greater than $s_a \ - \ \epsilon$, and since every $s_n \ \leq \ s_a$, they are contained within the open set. By definition, the sequence then converges to $s_a$. $\blacksquare$
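As a quick sanity check (not part of the proof), here is a numerical sketch of Part A using an example sequence I've chosen, $s_n \ = \ 1 \ - \ 1/n$, which is increasing, bounded, and has supremum $1$:

```python
# Numerical illustration of Part A: a bounded increasing sequence
# s_n = 1 - 1/n has supremum 1, and its tail eventually stays inside
# any interval (sup - eps, sup + eps).
def tail_index(seq, sup, eps):
    """Return the first index after which every term lies above sup - eps."""
    for n, s in enumerate(seq):
        if s > sup - eps:
            return n
    return None

s = [1 - 1/n for n in range(1, 1001)]   # increasing, bounded above by 1
sup_s = 1.0                              # the supremum of the sequence
N = tail_index(s, sup_s, 0.01)

# since the sequence is increasing, every term past N stays in the interval
assert all(sup_s - 0.01 < s_n <= sup_s for s_n in s[N:])
```

Because the sequence is increasing, finding a single term above $s_a \ - \ \epsilon$ is enough to trap the whole tail, exactly as in the proof.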

Part B

Now, we let $(a_n)$ be a sequence of real numbers, with:
$$s_n \ = \ \displaystyle\sum_{i \ = \ 1}^{n} a_i$$
If $s_n \ \rightarrow \ s$, then we say the infinite series converges to $s$ as well. We must show that if $\sum a_i$ converges to $s$ and $\sum b_i$ converges to $t$, then:
$$\displaystyle\sum_{i} \ (ca_i \ + \ b_i) \ \rightarrow \ cs \ + \ t$$
Where $c$ is just some coefficient. In order to prove this, we have to prove a few other things first.

Proof 1

Firstly, we must show that a function from a metric space to another metric space, $f:X \ \rightarrow \ Y$, will be continuous if and only if for every $\epsilon \ > \ 0$ and every $x \ \in \ X$, we can choose a $\delta \ > \ 0$ such that for all $y \ \in \ X$, $d_X(x, y) \ < \ \delta$ implies that $d_Y(f(x), \ f(y)) \ < \ \epsilon$. First, suppose that:
$$d_X(x, \ y) \ < \ \delta \ \Rightarrow \ d_Y(f(x), \ f(y)) \ < \ \epsilon$$
Now, we prove that $f$ is continuous. Let $V$ be an open set in $Y$, and let $x_0 \ \in \ f^{-1}(V)$ be any point of its preimage. Since $V$ is open in the metric topology, there is some $\epsilon$ with $B_Y(f(x_0), \ \epsilon) \ \subset \ V$, and by our hypothesis there is a $\delta$ with $f(B_X(x_0, \ \delta)) \ \subset \ B_Y(f(x_0), \ \epsilon)$. We then know that $B_X(x_0, \ \delta) \ \subset \ f^{-1}f(B_X(x_0, \ \delta)) \ \subset \ f^{-1}(B_Y(f(x_0), \ \epsilon)) \ \subset \ f^{-1}(V)$. Every point of $f^{-1}(V)$ thus has a basis element $B_X(x_0, \ \delta)$ of the metric topology on $X$ around it contained in $f^{-1}(V)$, so $f^{-1}(V)$ is open by the basis definition of an open set, and $f$ is continuous. Conversely, let $f$ be continuous, so that $f^{-1}(V)$ is open in $X$ for every open set $V$ in $Y$. Proving the condition with respect to basis elements is equivalent to proving it for all open sets, as every open set can be written as a union of basis elements. Thus, $U \ = \ f^{-1}(B_Y(f(x_0), \ \epsilon))$ is open in $X$ for each $x_0 \ \in \ X$. In the metric topology, there exists an open ball basis element centred at $x_0$ in $U$, thus we have:
$$B_X(x_0, \ \delta) \ \subset \ U \ = \ f^{-1}(B_Y(f(x_0), \ \epsilon))$$
For each $x_0 \ \in \ X$. We thus have:
$$f(B_X(x_0, \ \delta)) \ \subset \ f(U) \ = \ ff^{-1}(B_Y(f(x_0), \ \epsilon)) \ \subset \ B_Y(f(x_0), \ \epsilon)$$
This means that if we have some point $x$ such that $d_X(x_0, \ x) \ < \ \delta$, then we must have $d_Y(f(x_0), \ f(x)) \ < \ \epsilon$, which concludes our proof. $\blacksquare$

Proof 2

We now have to prove that the standard topologies on $\mathbb{R}$ and $\mathbb{R}^2$ are equivalent to the square metric topology. For $\mathbb{R}$, some basis element is represented as $(a, \ b)$. This open interval is also a basis element of the square metric topology, as:
$$(a, \ b) \ = \ B_{\rho}\Big(\frac{a \ + \ b}{2}, \ \frac{b \ - \ a}{2} \Big)$$
The other way, a basis element of the square metric topology is given as $B_{\rho}(x_0, \ \epsilon)$, which is a basis element of the standard topology of the form $(x_0 \ - \ \epsilon, \ x_0 \ + \ \epsilon)$. The corresponding proof for $\mathbb{R}^2$ follows in an almost identical fashion (we can actually generalize this to $\mathbb{R}^n$, but I'm not going to do that right now).

Proof 3

We now have to demonstrate that addition of two real numbers is a continuous function! This is actually part of Problem 12, but I thought that it makes sense to just do it now, rather than just "accepting" that addition is continuous. We define addition as a function between metric spaces:
$$+ : \mathbb{R}^2 \ \rightarrow \ \mathbb{R}$$
We will prove continuity with respect to the square metric topology on $\mathbb{R}$ and $\mathbb{R}^2$, which we have just found is equivalent to proving continuity with respect to the standard topology. The textbook gives us a fairly big hint: Munkres tells us to let $\delta \ = \ \epsilon/2$ and note that:
$$d(x \ + \ y, \ x_0 \ + \ y_0) \ \leq \ |x \ - \ x_0| \ + \ |y \ - \ y_0|$$
This is super helpful. Recall Proof 1. In order to prove continuity of addition, we must show that:
$$\rho((x, \ y), \ (x_0, \ y_0)) \ < \ \delta \ \Rightarrow \ \rho(x \ + \ y, \ x_0 \ + \ y_0) \ < \ \epsilon$$ Where the square metric on $\mathbb{R}^2$ is: $$\rho((x, \ y), \ (x_0, \ y_0)) \ = \ \text{max}(|x \ - \ x_0|, \ |y \ - \ y_0|)$$ And on $\mathbb{R}$ it is just the absolute value of the difference.
Let's choose some $\epsilon \ > \ 0$. Our goal is to find a $\delta$ such that the first statement implies the second. For the $\delta$ condition holding true, we must then have:
$$\text{max}(|x \ - \ x_0|, \ |y \ - \ y_0|) \ < \ \delta$$ $$\Rightarrow \ |x \ - \ x_0| \ + \ |y \ - \ y_0| \ \leq \ 2\text{max}(|x \ - \ x_0|, \ |y \ - \ y_0|) \ < \ 2\delta$$
And we have:
$$\rho(x \ + \ y, \ x_0 \ + \ y_0) \ = \ |x \ + \ y \ - \ x_0 \ - \ y_0| \ \leq \ |x \ - \ x_0| \ + \ |y \ - \ y_0|$$
So we conclude that:
$$\rho(x \ + \ y, \ x_0 \ + \ y_0) \ < \ 2\delta$$
Whenever our $\delta$ condition holds, this bound must hold. In order for it to imply the $\epsilon$ condition, we set $\delta \ = \ \epsilon/2$, to get:
$$\rho(x \ + \ y, \ x_0 \ + \ y_0) \ < \ \epsilon$$
Which is exactly what we wanted, thus addition is continuous with respect to the square metric topology, which implies it is continuous in the standard topology. $\blacksquare$
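To see the $\delta \ = \ \epsilon/2$ choice in action, here is a small numerical check (an illustration under sample values I've chosen, not a proof): we sample many points inside the square-metric ball of radius $\delta$ around $(x_0, \ y_0)$ and confirm that every sum lands within $\epsilon$ of $x_0 \ + \ y_0$.

```python
# Numerical check of the delta = eps/2 choice from Proof 3.
import random

random.seed(0)
eps = 0.1
delta = eps / 2            # the choice made in the proof
x0, y0 = 3.0, -1.5         # arbitrary base point (assumption for the demo)

# sample 10,000 points (x, y) with max(|x - x0|, |y - y0|) < delta
worst = 0.0
for _ in range(10_000):
    x = x0 + random.uniform(-delta, delta)
    y = y0 + random.uniform(-delta, delta)
    worst = max(worst, abs((x + y) - (x0 + y0)))

# every sampled pair lands within eps of x0 + y0, as the proof guarantees
assert worst < eps
```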

Proof 4

This is the last external proof before we jump back into the actual problem. We need to show that if $x_n \ \rightarrow \ x$ and some $f:X \ \rightarrow \ Y$ is continuous, then $f(x_n) \ \rightarrow \ f(x)$. Let $V$ be any open set around $f(x)$ in $Y$. Since $f$ is continuous, there must exist some neighbourhood $U$ around $x$ such that $f(U) \ \subset \ V$. Since $x_n \ \rightarrow \ x$, there is some $N$ such that $x_n \ \in \ U$ for all $n \ > \ N$, thus we must have $f(x_n) \ \in \ f(U) \ \subset \ V$ for all $n \ > \ N$. We can repeat this process for every open set $V$ around $f(x)$. It then follows by definition that $f(x_n) \ \rightarrow \ f(x)$. $\blacksquare$
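Here is Proof 4 made concrete with an example I've chosen (purely an illustration): $x_n \ = \ 1 \ + \ 1/n$ converges to $1$, and $\exp$ is continuous, so the image sequence should approach $e$.

```python
# Continuous functions preserve limits: exp(1 + 1/n) -> exp(1) = e.
import math

xs = [1 + 1/n for n in range(1, 10_001)]
errors = [abs(math.exp(x) - math.e) for x in xs]

assert errors[-1] < 1e-3   # the image sequence gets close to e
# for this particular sequence the error even decreases monotonically
assert all(e2 <= e1 for e1, e2 in zip(errors, errors[1:]))
```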

The Actual Proof

Cool, so now we can actually jump into the proof. It was proved earlier in the textbook that if $x_n \ \rightarrow \ x$ and $y_n \ \rightarrow \ y$, then $(x_n, \ y_n) \ \rightarrow \ (x, \ y)$ in the product space. Since addition is continuous, Proof 4 then gives $x_n \ + \ y_n \ \rightarrow \ x \ + \ y$. Finally, the function $f(x) \ = \ cx$ is continuous, thus $cx_n \ \rightarrow \ cx$. Putting this all together, we get:
$$\displaystyle\sum_{i \ = \ 1}^{n} \ (ca_i \ + \ b_i) \ = \ c \displaystyle\sum_{i \ = \ 1}^{n} a_i \ + \ \displaystyle\sum_{i \ = \ 1}^{n} b_i \ = \ cs_n \ + \ t_n \ \rightarrow \ cs \ + \ t$$
Which concludes our proof. $\blacksquare$
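A quick numeric sanity check of Part B, using two geometric series I've chosen as examples: $\sum_{i \ = \ 1}^{\infty} (1/2)^i \ = \ 1$ and $\sum_{i \ = \ 1}^{\infty} (1/3)^i \ = \ 1/2$, so with $c \ = \ 4$ the combined series should converge to $4 \ \cdot \ 1 \ + \ 1/2 \ = \ 4.5$.

```python
# Partial sums of c*a_i + b_i versus c*s + t for two geometric series.
c = 4.0
n = 60   # enough terms for the partial sums to be within float precision
a = [(1/2) ** i for i in range(1, n + 1)]
b = [(1/3) ** i for i in range(1, n + 1)]

s = sum(a)   # close to 1
t = sum(b)   # close to 1/2
combined = sum(c * ai + bi for ai, bi in zip(a, b))

# linearity: the combined partial sum agrees with c*s + t
assert abs(combined - (c * s + t)) < 1e-12
assert abs(combined - 4.5) < 1e-9
```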

Part C

Next, we have to prove the comparison test. Specifically, if $|a_i| \ < \ b_i$ for all $i$, and $\sum_i b_i$ converges to $b$, then $\sum_i a_i$ converges. First, let us have:
$$s_n \ = \ \displaystyle\sum_{i \ = \ 1}^{n} \ |a_i| \ < \ \displaystyle\sum_{i \ = \ 1}^{n} \ b_i$$
Since $|a_i| \ < \ b_i$, each $b_i$ is positive, so the partial sums of $b_i$ increase towards $b$, and each term of the sum $s_n$ is less than the corresponding term $b_i$. Thus $(s_n)$ is bounded above by $b$. $(s_n)$ is also increasing, as each $|a_i| \ \geq \ 0$, so by Part A, $s_n \ \rightarrow \ s$. Next, let's consider the sum:
$$t_n \ = \ \displaystyle\sum_{i \ = \ 1}^{n} (|a_i| \ + \ a_i)$$
The sequence $(t_n)$ is also increasing, as each term satisfies $|a_i| \ + \ a_i \ \geq \ 0$. Since $a_i \ \leq \ |a_i|$, we have:
$$\displaystyle\sum_{i \ = \ 1}^{n} (|a_i| \ + \ a_i) \ \leq \ 2 \displaystyle\sum_{i \ = \ 1}^{n} |a_i| \ < \ 2 \displaystyle\sum_{i \ = \ 1}^{n} b_i \ \leq \ 2b$$
So the sequence $(t_n)$ is increasing and bounded above, thus by Part A it also converges. Finally, since $a_i \ = \ (|a_i| \ + \ a_i) \ - \ |a_i|$, citing Part B with $c \ = \ -1$ and adding the two series together, we find that $\sum_i a_i$ converges. $\blacksquare$
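Here is Part C in action on an example pair of series I've chosen (an illustration, not part of the proof): $a_i \ = \ (-1)^i / 2^i$ with $b_i \ = \ 2/2^i$, so $|a_i| \ < \ b_i$ holds strictly for every $i$.

```python
# Numerical illustration of the comparison test from Part C.
n = 60
a = [(-1) ** i / 2 ** i for i in range(1, n + 1)]
b = [2 / 2 ** i for i in range(1, n + 1)]
assert all(abs(ai) < bi for ai, bi in zip(a, b))   # comparison hypothesis

# partial sums of a_i: the comparison test says these converge;
# for this geometric series the limit is (-1/2)/(1 + 1/2) = -1/3
s_n = [sum(a[:k]) for k in range(1, n + 1)]
assert abs(s_n[-1] - (-1/3)) < 1e-12

# the auxiliary sums t_n from the proof: increasing and bounded above
t_n = [sum(abs(ai) + ai for ai in a[:k]) for k in range(1, n + 1)]
assert all(t2 >= t1 for t1, t2 in zip(t_n, t_n[1:]))
assert t_n[-1] < 2 * sum(b)
```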

Part D

Finally, we will prove the Weierstrass M-test, which states that for a sequence of functions $f_n:X \ \rightarrow \ \mathbb{R}$, if:
$$s_n \ = \ \displaystyle\sum_{i \ = \ 1}^{n} f_i(x)$$
And if $|f_i(x)| \ \leq \ M_i$ for all $x \ \in \ X$ and all $i$ and $\sum_i M_i$ converges to $M$, then $(s_n)$ converges uniformly to $s$. To prove this we let:
$$r_n \ = \ \displaystyle\sum_{i \ = \ n \ + \ 1}^{\infty} M_i$$
We now pick partial sums $s_j$ and $s_k$, and we compute the metric distance between them. Since we are operating with the standard topology on $\mathbb{R}$, we can equivalently use the square metric topology, whose metric on $\mathbb{R}$ is just the absolute value of the difference. We have, for $j \ < \ k$ and any $x \ \in \ X$:
$$|s_k \ - \ s_j| \ = \ \biggr\rvert \displaystyle\sum_{i \ = \ 1}^{k} f_i(x) \ - \ \displaystyle\sum_{i \ = \ 1}^{j} f_i(x) \biggr\rvert \ = \ \biggr\rvert \displaystyle\sum_{i \ = \ j \ + \ 1}^{k} f_i(x) \biggr\rvert \ \leq \ \displaystyle\sum_{i \ = \ j \ + \ 1}^{k} |f_{i}(x)| \ \leq \ \displaystyle\sum_{i \ = \ j \ + \ 1}^{\infty} |f_{i}(x)| \ \leq \ \displaystyle\sum_{i \ = \ j \ + \ 1}^{\infty} M_i \ = \ r_j$$
So in summary, we have:
$$|s_k \ - \ s_j| \ \leq \ r_j$$
For any value of $k$ and $j$, with $j \ < \ k$, and for every $x \ \in \ X$. By the comparison test applied to $\sum_i |f_i(x)|$ and $\sum_i M_i$, we find that $s_n \ \rightarrow \ s$ pointwise; letting $k \ \rightarrow \ \infty$ in the inequality above, we get:
$$|s \ - \ s_j| \ \leq \ r_j$$
Notice that:
$$r_j \ = \ \displaystyle\sum_{i \ = \ j \ + \ 1}^{\infty} M_i \ = \ M \ - \ \displaystyle\sum_{i \ = \ 1}^{j} M_i$$
Since $\sum_{i \ = \ 1}^{j} M_i$ converges to $M$, we find by Part B that $r_j \ \rightarrow \ 0$. This means that for every $\epsilon \ > \ 0$, we can pick (by the definition of convergence) some $N$ such that for all $q \ > \ N$:
$$|s \ - \ s_q| \ \leq \ r_q \ < \ \epsilon$$
For every $x \ \in \ X$ simultaneously, as $r_q$ does not depend on $x$.
This is the definition of uniform convergence, thus we conclude our proof. $\blacksquare$
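To close, here is the M-test illustrated numerically on a classic example I've chosen: $f_i(x) \ = \ \sin(ix)/i^2$ with $M_i \ = \ 1/i^2$. The error of the partial sum $s_j$ should be bounded by the tail $r_j$ uniformly in $x$ (with the infinite series truncated at a large cutoff as a stand-in for the limit, an assumption of this sketch):

```python
# Numerical illustration of the Weierstrass M-test bound |s(x) - s_j(x)| <= r_j.
import math

def f(i, x):
    return math.sin(i * x) / i ** 2   # |f_i(x)| <= M_i = 1/i^2 for all x

def partial_sum(j, x):
    return sum(f(i, x) for i in range(1, j + 1))

big = 5000   # stand-in for the limit s(x); the tail beyond this is negligible
j = 10
r_j = sum(1 / i ** 2 for i in range(j + 1, big + 1))   # truncated tail sum

# the worst error over a grid of x values never exceeds r_j
worst = max(abs(partial_sum(big, x) - partial_sum(j, x))
            for x in [k * 0.1 for k in range(-50, 51)])
assert worst <= r_j
```

The key point mirrored here is that the bound $r_j$ is a single number, independent of $x$, which is exactly what makes the convergence uniform.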