The Maximum Distance Problem and Minimal Spanning Trees

Given a compact $E\subset \mathbb{R}^n$ and $s>0$, the maximum distance problem asks for a compact and connected subset of $\mathbb{R}^n$ of smallest one-dimensional Hausdorff measure whose $s$-neighborhood covers $E$. For $E\subset \mathbb{R}^2$, we prove that minimizing over minimal spanning trees that connect the centers of balls of radius $s$ covering $E$ solves the maximum distance problem. The main difficulty in proving this result is overcome by Lemma 3.5, which states that one can cover the $s$-neighborhood of a Lipschitz curve $\Gamma$ in $\mathbb{R}^2$ with a finite number of balls of radius $s$, and connect their centers with another Lipschitz curve $\Gamma_\ast$, where $\mathcal{H}^1(\Gamma_\ast)$ is arbitrarily close to $\mathcal{H}^1(\Gamma)$.


Introduction
There are many variants of the traveling salesman problem in $\mathbb{R}^2$. The classic problem seeks the shortest connected tour through a finite collection of points $E = \{x_i\}_{i=1}^N \subset \mathbb{R}^2$, where the points represent cities a salesman has to visit. One variant of the TSP is the analyst's traveling salesman problem (ATSP), which essentially asks the same question, except for one crucial difference: the set $E$ is not restricted to finite collections of points (otherwise it reduces to the classical traveling salesman problem). In the analyst's traveling salesman problem, one seeks necessary and sufficient conditions for the existence of a finite continuum $\Gamma$ containing $E$, where by finite continuum we mean a set which is compact, connected, and has finite $\mathcal{H}^1$ measure. Here $\mathcal{H}^1(\Gamma)$ is the one-dimensional Hausdorff measure of $\Gamma$ (see Definition 2.7).
Because for general sets $E \subset \mathbb{R}^2$ it is often the case that $E$ is not contained in any finite continuum, we might instead try to find a finite continuum $\Gamma$ of smallest one-dimensional Hausdorff measure such that the maximum distance from $\Gamma$ to any point in $E$ is at most $s > 0$. This is the problem we focus on in the current paper.
We will see that, in fact, one could just as easily have defined a finite continuum to be a compact, connected, 1-rectifiable set of finite $\mathcal{H}^1$ measure (or even the Lipschitz image of a compact interval) by using [4, §3.2], or the slightly more precise form in Theorem 2.11, which is stated and sketched in [3, §1.1]. We discuss this in more detail in Section 2.2. For those who are not familiar with these types of characterizations of rectifiable sets (a very active area of current research), we recommend the reader start with the excellent book by Kenneth Falconer [4].

The Maximum Distance Problem and Steiner Trees
As stated in the introduction, the minimization problem we focus on in this paper is
$\lambda(E, s) := \min\{\mathcal{H}^1(K) : K \text{ is a finite continuum and } E \subset B(K, s)\}$.  (1)
In the literature, this problem is called the maximum distance problem, and we will use that name to refer to it here. A finite continuum $\Gamma$ such that $B(\Gamma, s) \supset E$ and $\mathcal{H}^1(\Gamma) = \lambda(E, s)$ is called a minimizer of $\lambda(E, s)$, or an $s$-maximum distance minimizer of $E$. As we will see, for compact $E \subset \mathbb{R}^n$ and $s > 0$, minimizers of $\lambda(E, s)$ always exist.
Note that any bounded $E \subset \mathbb{R}^2$ is clearly contained in the $s$-neighborhood of a finite continuum (for any $s > 0$). Therefore, asking for necessary and sufficient conditions for the existence of such a set, in analogy to the ATSP question, is not interesting. The existence of minimizers, i.e., finding a $\Gamma$ such that $\mathcal{H}^1(\Gamma) = \lambda(E, s)$ and $B(\Gamma, s) \supset E$, is more interesting, but is straightforward using a standard application of Gołąb's theorem. (See Falconer's book [4] for Gołąb's theorem in $\mathbb{R}^n$, or Section 4.4 of Ambrosio and Tilli's Topics on Analysis in Metric Spaces [1], where these facts are used to obtain existence of geodesics in metric spaces.) We give the details in Section 2.3 of this paper.
Because of this difference with the ATSP, we focus on answering a different question, motivated by the following simple heuristic, which we call the cover-and-connect heuristic.
Cover-and-Connect
1. Cover $E$ with a finite number of balls of radius $s$, centered on a finite set of points $X$.
2. Connect all the centers in $X$ with a closed connected curve $\Gamma$. In this paper, we let $\Gamma$ be either the Steiner tree $S_X$ over $X$, or the minimal spanning tree $T_X$ over $X$ (for definitions, see Problem 2.2 and Problem 2.3).

Since $X \subset \Gamma$, the $s$-neighborhood of $\Gamma$ contains the balls, and therefore $E$. Thus $\Gamma$ is a candidate minimizer.
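To make the heuristic concrete, here is a minimal Python sketch (ours, not from the paper): $E$ is approximated by a finite sample of points, step 1 is a greedy choice of ball centers, and step 2 connects the centers with a Euclidean minimum spanning tree computed by Prim's algorithm. The function name and the greedy covering rule are illustrative choices.

```python
import math

def cover_and_connect(E, s):
    """Cover the sample E with balls of radius s (greedy net),
    then connect the centers with a minimum spanning tree."""
    # 1. Cover: greedily keep a point as a new center whenever it is
    #    farther than s from every center chosen so far.
    centers = []
    for p in E:
        if all(math.dist(p, c) > s for c in centers):
            centers.append(p)
    # 2. Connect: Prim's algorithm on the complete Euclidean graph.
    total, in_tree = 0.0, {0}
    dist = [math.dist(centers[0], c) for c in centers]
    while len(in_tree) < len(centers):
        j = min((i for i in range(len(centers)) if i not in in_tree),
                key=lambda i: dist[i])
        total += dist[j]
        in_tree.add(j)
        for i in range(len(centers)):
            dist[i] = min(dist[i], math.dist(centers[j], centers[i]))
    return centers, total

# Example: sample points on a segment of length 1, covered with radius 0.3.
E = [(k / 100, 0.0) for k in range(101)]
centers, length = cover_and_connect(E, 0.3)
```

By construction every sample point lies within $s$ of a center, so the $s$-neighborhood of the connecting tree contains the sample; the tree's length is then an upper bound for the quantity $\sigma(E, s)$ studied below.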
In the cover-and-connect heuristic, notice that since we are connecting all the points in $X$, we might as well connect them with a Steiner tree over $X$. This leads us to ask the following question, which motivated our main theorem, Theorem 3.6.
How close is $\mathcal{H}^1(S_X)$ to $\lambda(E, s)$?
One can come up with many examples of $E$ where any Steiner tree $S_X$ generated over centers of balls that cover $E$ satisfies $\mathcal{H}^1(S_X) > \lambda(E, s)$. A useful, simple example is given when $E$ equals the $s$-neighborhood of a finite line segment in $\mathbb{R}^2$ (see Figure 1). Although this example shows that we cannot achieve equality with any single $S_X$, the main result of this paper shows that there is a sequence of finite point sets $X_n$, with $B(X_n, s) \supset E$, whose Steiner trees satisfy $\mathcal{H}^1(S_{X_n}) \to \lambda(E, s)$. In particular, for a given compact $E \subset \mathbb{R}^2$ and $s > 0$, defining the $s$-spanning length of $E$ as
$\sigma(E, s) := \inf\{\mathcal{H}^1(S_X) : X \subset \mathbb{R}^2 \text{ finite}, B(X, s) \supset E\}$,  (2)
we establish in Theorem 3.6 that $\sigma(E, s) = \lambda(E, s)$.
Remark 1.1. In general, a Steiner tree $S_X$ over $X$ may introduce a new collection of branching points $Y$ (often called Steiner points in the literature). If we then consider the enlarged collection of points $X' = X \cup Y$, it is a fact that the minimal spanning tree satisfies $T_{X'} = S_X$. Therefore, the definition of $\sigma(E, s)$ is unchanged when replacing $S_X$ in equation (2) with a minimal spanning tree $T_X$.
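The fact in Remark 1.1 can be sanity-checked on the simplest nontrivial example: the unit equilateral triangle, whose Steiner tree introduces a single Steiner point at the Fermat point (here the centroid). The following Python check is our own illustration; `mst_length` is a hand-rolled Prim's algorithm.

```python
import math

def mst_length(pts):
    """Total edge length of a Euclidean minimum spanning tree (Prim)."""
    n = len(pts)
    dist = [math.dist(pts[0], p) for p in pts]
    in_tree, total = {0}, 0.0
    while len(in_tree) < n:
        j = min((i for i in range(n) if i not in in_tree),
                key=lambda i: dist[i])
        total += dist[j]
        in_tree.add(j)
        for i in range(n):
            dist[i] = min(dist[i], math.dist(pts[j], pts[i]))
    return total

# Unit equilateral triangle; its single Steiner point is the centroid.
X = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
steiner_point = (0.5, math.sqrt(3) / 6)

mst_over_X = mst_length(X)                          # two sides: length 2
mst_with_steiner = mst_length(X + [steiner_point])  # Steiner length sqrt(3)
```

The minimal spanning tree over the three vertices has length $2$, while the minimal spanning tree over the vertices together with the Steiner point has length $\sqrt{3} \approx 1.732$, the Steiner tree length, illustrating $T_{X'} = S_X$.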
By the above remark, we prove the following corollary (Corollary 3.7). This is an important clarification, because Steiner trees are difficult to compute, but minimal spanning trees are not.

Corollary. Let $E \subset \mathbb{R}^2$ be compact and let $s > 0$. Define the analogous quantity $\sigma'(E, s) := \inf\{\mathcal{H}^1(T_X) : X \subset \mathbb{R}^2 \text{ finite}, B(X, s) \supset E\}$, where we instead take minimal spanning trees $T_X$ over $X$ rather than Steiner trees over $X$. Then $\sigma'(E, s) = \lambda(E, s)$.
The proof of Theorem 3.6 will follow from Lemma 3.5, which constitutes the heart of our paper. Intuitively, this lemma says that, given any $\epsilon > 0$, the $s$-neighborhood of any Lipschitz curve $\Gamma$ is contained in a finite number of balls of radius $s$, whose centers are connected by another finite continuum $\Gamma_*$, such that $\mathcal{H}^1(\Gamma_*)$ is within $\epsilon$ of $\mathcal{H}^1(\Gamma)$. The precise statement of the lemma is as follows.
Lemma. Let $s > 0$ and let $\Gamma \subset \mathbb{R}^2$ be the image of a Lipschitz function $\gamma : [0, 1] \to \mathbb{R}^2$. Given $\epsilon > 0$, there exist a finite number of points $X := \{x_i\}_{i=1}^N \subset \mathbb{R}^2$ and a Lipschitz curve $\Gamma_*$ containing $X$ such that $B(\Gamma, s) \subset B(X, s)$ and $\mathcal{H}^1(\Gamma_*) \leq \mathcal{H}^1(\Gamma) + \epsilon$.

We will now briefly outline previous work on the maximum distance problem and closely related problems, such as the average distance problem, the constrained average distance problem, and its $L^p$ variants.

Previous Work
In the mathematical literature, the maximum distance problem (MDP) evolved from a different starting point than ours, which began by thinking about variations of the ATSP. The problem was first introduced into the literature by Buttazzo, Oudet, and Stepanov in [2], where they studied optimal urban transportation networks in a city. In their case, optimality meant minimizing the average distance between the population of the city and the transportation network itself. More precisely, the city population was modeled as a measure $\mu$ on $\mathbb{R}^2$, and transportation networks were modeled as connected, compact sets $\Sigma$ with $\mathcal{H}^1(\Sigma) \leq l$, for some fixed constant $l > 0$. The objective was to minimize the average distance over all connected compact sets $\Sigma$ such that $\mathcal{H}^1(\Sigma) \leq l$. One can think of this problem as the $L^1$ version of the $L^\infty$ "dual" maximum distance problem (MDP*), where instead of minimizing $\mathcal{H}^1(\gamma)$ over all closed connected $\gamma$ such that $E \subset B(\gamma, s)$ for a fixed $s$, we minimize $s > 0$ over all closed connected $\gamma$ such that $E \subset B(\gamma, s)$ and $\mathcal{H}^1(\gamma) \leq l$ for fixed $l$. This is the $L^\infty$ version in the sense that, at least in the case of well-behaved measures $\mu$, solving MDP* minimizes $\|\mathrm{dist}(x, \Sigma)\|_\mu^\infty := \inf\{r > 0 : \mu\{x : \mathrm{dist}(x, \Sigma) > r\} = 0\}$.
Noting that, for a measure $\mu$ supported on $E$, one has $E \subset B(\Sigma, s)$ exactly when $\|\mathrm{dist}(x, \Sigma)\|_\mu^\infty \leq s$, we see that minimizing $s$ amounts to minimizing this $L^\infty$ norm, and the connection to the above $L^1$ version becomes more apparent.
In [7], Paolini and Stepanov studied both the maximum distance problem and its dual, and were able to show that minimizers of the maximum distance problem and its dual are in fact equivalent in $\mathbb{R}^n$.

Figure 1: On the left, the $s$-neighborhood of the line segment is not covered by the $s$-neighborhood of a finite number of points lying on the line segment. On the right, we extend our previous points outwards just enough to cover the $s$-neighborhood of the line segment. The length of the new 1-rectifiable set connecting the new points, equal to the union of the red and green line segments, is not much larger than the length of the original red line segment.
These works began a large line of research on the average distance problem and related problems such as the one studied here. For an overview of the average distance problem, see the wonderful survey of Lemenant [6], and references therein.
Along this line of work, Teplitskaya recently announced [8] an enlightening regularity result proven in [9]. The result states that minimizers of the maximum distance problem consist of a finite number of curves which have one-sided tangent lines at each point. Teplitskaya also shows that the angles between these tangent lines are at least $2\pi/3$.
As far as we can tell, the results in this paper are new, except for the existence result in Section 2.3, which has been included to keep the paper self-contained.

Outline of the Proofs
As a means of illuminating the path to the proof of Lemma 3.5, and hence Theorem 3.6, we show that $\sigma(E, s) = \lambda(E, s)$ in two simpler cases. It is our hope that in doing so, a non-expert will get a better instinctive feel for the types of arguments used in the proofs of Lemma 3.5 and Theorem 3.6. In Lemma 3.3 we assume that $E$ is the $s$-neighborhood of a line segment, and in Proposition 3.2 we assume that the $s$-maximum distance minimizer of $E$ is a $C^1$ curve, rather than merely a finite continuum as in Theorem 3.6.
The approach we take to prove Theorem 3.6, Lemma 3.3, and Proposition 3.2 is to show the existence of Steiner trees $\{S_n\}$ such that $\mathcal{H}^1(S_n) \to \lambda(E, s)$ as $n \to \infty$. Of course, as our definition of $\sigma(E, s)$ requires, each $S_n$ will be taken over a finite collection of points $X_n$ such that $B(X_n, s) \supset E$. We go about this by explicitly constructing $X_n$ and a curve $\Gamma_*^n$ that connects all the points in $X_n$; since by definition $\mathcal{H}^1(S_n) \leq \mathcal{H}^1(\Gamma_*^n)$, it suffices to control $\mathcal{H}^1(\Gamma_*^n)$. The key technique for proving Lemma 3.5 is shown to us in the simple case where $E$ itself is the $s$-neighborhood of a line segment $L$. First notice that the $s$-maximum distance minimizer for $E$ is the line segment $L$. If we want a Steiner tree $S_n$ over $X_n = \{x_i\}_{i=1}^n$ to equal $L$, then $X_n$ must contain the endpoints of $L$, and $X_n$ must also be contained in $L$. However, since $X_n$ contains only a finite number of points, $B(X_n, s)$ cannot contain $E$ (see the left picture of Figure 1). As depicted in the right picture of Figure 1, we may instead "extend" each point in $X_n$ up and down (and also to the sides for the endpoints of $L$) by a small amount $\delta_n$, so that the $s$-neighborhood of these extended points $X_n'$ contains $E$. For any large enough $n$, the Steiner tree $S_n$ over $X_n'$ satisfies $\mathcal{H}^1(S_n) \leq \mathcal{H}^1(\Gamma_*^n)$, where $\Gamma_*^n = L \cup P_n$ and $P_n$ consists of $2n + 2$ short line segments, each of length $\delta_n$ (see the right picture of Figure 1). Since $\Gamma_*^n$ connects all points in $X_n'$, showing that
$\mathcal{H}^1(\Gamma_*^n) \to \mathcal{H}^1(L) = \lambda(E, s)$  (3)
would show that $\mathcal{H}^1(S_n) \to \lambda(E, s)$. In essence, (3) is true due to the fact that $x/\sqrt{x} \to 0$ as $x \to 0$. We will explore this in greater detail in the proof of Lemma 3.3.
Extending points outwards in similar ways is also crucial for the proofs of Theorem 3.6 and Proposition 3.2. However, more care must be taken in these more complicated cases. In the $C^1$ case, we partition our minimizer into a finite number of pieces, each contained in some uniformly thin tube. Because of this, we must extend our points out not only by $\delta_n$, but also by the width of our tubes. In the case where the minimizer $\Gamma$ is a general finite continuum, even more care must be taken. Using a classical result of geometric measure theory, which states that finite continua are exactly images of Lipschitz curves, we need only prove the theorem for Lipschitz curves. The main difficulty for the case of a Lipschitz curve $\Gamma$ is overcome in Lemma 3.5. Since we lose the uniform thinness of our tubes, as well as differentiability at all points, we partition $\Gamma$ into a good part and a bad part. The good part of $\Gamma$ lies around points of differentiability of $\gamma$, allowing us to construct the portions of $\Gamma_*$ around these points as in the $C^1$ case. Since the set $G$ of differentiable points of $\gamma$ has full measure, picking a compact subset $K \subset G$ such that $\mathcal{L}^1(I \setminus K) < \xi$ (for $\xi > 0$ small) tells us that the bad portions of $\Gamma$ are small in measure. This allows us to be more liberal with our construction of $\Gamma_*$ around these bad portions.

Preliminaries
We use the following notation:
- the set of all $s$-maximum distance minimizers of $A$
- $\sigma(A, s)$: the $s$-spanning length of $A$
- $\mathrm{Tan}(S, a)$: the tangent cone of $S$ at $a$
- $S(V; r, t)$: closed asymmetric strips perpendicular to the subspace $V$
- $\pi_V$: the orthogonal projection from $\mathbb{R}^n$ onto the subspace $V$ of $\mathbb{R}^n$
- $V^\perp$: the orthogonal complement of $V$, for a subspace $V$ of $\mathbb{R}^n$
- $S_X$: a Steiner tree over a finite point set $X \subset \mathbb{R}^n$
- $T_X$: a minimal spanning tree over a finite point set $X \subset \mathbb{R}^n$

We call the number $\lambda(E, s)$ the $s$-maximum distance length of $E$, and a closed and connected $\Gamma \subset \mathbb{R}^2$ such that $B(\Gamma, s) \supset E$ and $\mathcal{H}^1(\Gamma) = \lambda(E, s)$ an $s$-maximum distance minimizer of $E$, or, if it is clear from context, a minimizer of $E$, or simply a minimizer.

Problem statements
Problem 2.2 (Steiner tree problem). Given a finite point set $X \subset \mathbb{R}^n$, minimize
$\min\{\mathcal{H}^1(K) : K \subset \mathbb{R}^n \text{ compact, connected, and } X \subset K\}$.  (4)
The minimizers of (4), which can be shown to exist, are known as Steiner trees over $X$ and are denoted $S_X$.
Problem 2.3 (Minimal spanning tree problem). Given a finite point set $X = \{x_i\}_{i=1}^N \subset \mathbb{R}^n$, minimize
$\min\{\sum_{(i,j) \in E'} |x_i - x_j| : E' \text{ is a set of edges making } (X, E') \text{ a connected graph}\}$.  (5)
A union $T_X := \bigcup_{(i,j) \in E^*} [x_i, x_j]$, where $E^*$ is a minimizer of (5), is called a minimal spanning tree over $X$.

Remark 2.4. Note that even though we are not minimizing over trees in Problems 2.2 and 2.3, we automatically get minimizers that are trees, since any candidate solution with a loop can always be pruned to remove loops, yielding a strictly shorter connecting set.

Remark 2.5. Computing (5) in Problem 2.3 can be done in polynomial time by solving the minimal spanning tree problem for a corresponding weighted, complete graph: take vertices corresponding to the $x_i$, and give each edge $(i, j) \in E$ the weight $w_{ij}$ equal to the Euclidean distance between $x_i$ and $x_j$ in $\mathbb{R}^n$.
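The reduction in Remark 2.5 can be sketched in code. The following is an illustrative implementation of ours: Kruskal's algorithm with a union-find structure, run over all $N(N-1)/2$ Euclidean edges of the complete graph, which is plainly polynomial in $N$.

```python
import math
from itertools import combinations

def minimal_spanning_tree(X):
    """Kruskal's algorithm on the complete Euclidean graph over X.

    Sorts all N*(N-1)/2 weighted edges and merges components with a
    union-find structure, so the whole reduction runs in polynomial time.
    """
    parent = list(range(len(X)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted(combinations(range(len(X)), 2),
                   key=lambda e: math.dist(X[e[0]], X[e[1]]))
    tree = []
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:               # edge joins two components: keep it
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Four corners of the unit square: the MST has 3 edges of length 1.
X = [(0, 0), (1, 0), (1, 1), (0, 1)]
T = minimal_spanning_tree(X)
```

For the unit square the algorithm returns three unit-length edges and skips both diagonals, as expected of a minimal spanning tree.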

Definitions and classical theorems
We start with some standard definitions found in the geometric measure theory literature. In Remark 2.10, we emphasize an important fact: characterizing finite continua in $\mathbb{R}^n$ plays a very special role in geometric measure theory. This is, in part, due to the strong structure that connectivity imposes on sets of finite one-dimensional Hausdorff measure.

Definition 2.6. For $x \in \mathbb{R}^n$ and $r > 0$, we let $B(x, r)$ and $U(x, r)$ denote the closed and open balls of radius $r$ centered at $x$, respectively. Similarly, for any $A \subset \mathbb{R}^n$ we denote the closed $r$-neighborhood of $A$ by $B(A, r)$ and the open $r$-neighborhood of $A$ by $U(A, r)$, defined as
$B(A, r) := \{x \in \mathbb{R}^n : \mathrm{dist}(x, A) \leq r\}$ and $U(A, r) := \{x \in \mathbb{R}^n : \mathrm{dist}(x, A) < r\}$.
Here $\mathrm{dist}(x, A) := \inf\{|x - y| : y \in A\}$, where $|\cdot|$ denotes the standard Euclidean ($\ell^2$) distance in $\mathbb{R}^n$.

Definition 2.7. A finite continuum $\Gamma \subset \mathbb{R}^n$ is a compact, connected set whose one-dimensional Hausdorff measure $\mathcal{H}^1(\Gamma)$ is finite.
Definition 2.8. A 1-rectifiable set $A \subset \mathbb{R}^n$ is a set with finite $\mathcal{H}^1$ measure contained in the union of a countable collection of images of Lipschitz functions $\gamma_i : \mathbb{R} \to \mathbb{R}^n$ and a set of $\mathcal{H}^1$-measure zero:
$A \subset A_0 \cup \bigcup_{i=1}^{\infty} \gamma_i(\mathbb{R})$, where $\mathcal{H}^1(A_0) = 0$.

Remark 2.10. There are several equivalent descriptions of the family of subsets of $\mathbb{R}^n$ comprising the finite continua: compact connected sets of finite $\mathcal{H}^1$ measure, compact connected 1-rectifiable sets of finite $\mathcal{H}^1$ measure, and Lipschitz images of compact intervals. The equivalence follows from a classic geometric measure theory result, which states that any compact, connected set $\Gamma$ with $\mathcal{H}^1(\Gamma) < \infty$ is in fact 1-rectifiable, together with a slightly more refined result, Theorem 2.11, stated next. This theorem tells us that $\Gamma$ is the Lipschitz image of an interval $[0, L]$ with $L < C\mathcal{H}^1(\Gamma)$ for a constant $C$ that does not depend on the set $\Gamma$. The proof of Theorem 2.11 is sketched in [3, §1.1]. Note also that the theorem implies that $\gamma$ is parameterized by arc-length, and therefore, in the statement of the theorem, $L = \mathrm{length}(\gamma)$.

Definition 2.12. Given $\epsilon > 0$, we say that $X \subset A \subset \mathbb{R}^n$ is an $\epsilon$-net for $A$ if $A \subset B(X, \epsilon)$. If $X$ is finite, we say that $X$ is a finite $\epsilon$-net.

Definition 2.13. Let $V$ be a $k$-dimensional linear subspace of $\mathbb{R}^n$. We denote by $V^\perp$ the orthogonal complement of $V$, and by $\pi_V : \mathbb{R}^n \to V$ the orthogonal projection onto $V$. For $\alpha > 0$, we define the cone of slope $\alpha$ with respect to $V$ to be
$C(V, \alpha) := \{x \in \mathbb{R}^n : |\pi_{V^\perp}(x)| \leq \alpha\,|\pi_V(x)|\}$.
For every $x \in \mathbb{R}^n$, we denote by $C(x, V, \alpha)$ the set $x + C(V, \alpha)$.
In the special case where $V$ is a 1-dimensional linear subspace of $\mathbb{R}^2$ with a prescribed positive direction, in Lemma 3.5 we will intersect the cone with the asymmetric closed strip $S(V; r, t)$.

Definition 2.14. [5, §3.2] Whenever $S \subset \mathbb{R}^n$ and $a \in \mathbb{R}^n$, we define the tangent cone of $S$ at $a$, denoted $\mathrm{Tan}(S, a)$, as the set of all $v \in \mathbb{R}^n$ such that for every $\epsilon > 0$ there exist $x \in S$ and $r > 0$ with $|x - a| < \epsilon$ and $|r(x - a) - v| < \epsilon$; such vectors $v$ are called tangent vectors of $S$ at $a$.

Existence of Minimizers
Using compactness results for non-empty compact subsets of a bounded portion $B$ of $\mathbb{R}^n$ (the Blaschke selection theorem), together with the lower semicontinuity of $\mathcal{H}^1$ along Hausdorff-convergent sequences of compact connected sets (Gołąb's theorem), we obtain the following existence result.

Theorem 2.15. Let $E \subset \mathbb{R}^n$ be compact and let $s > 0$. Then minimizers of $\lambda(E, s)$ exist.

Proof. Let $\{K_i\}_{i=1}^\infty$ be a minimizing sequence; that is, for each $j = 1, 2, \ldots$, $K_j$ is closed, connected, $B(K_j, s) \supset E$, and $\lim_{i\to\infty} \mathcal{H}^1(K_i) = \lambda(E, s)$. Since $E$ is compact, $E$ lies inside a large enough ball $B(0, R - 2s)$ for some $R > 0$. We may then assume that each $K_j$ in our sequence is a subset of $B(0, R)$, since otherwise projecting $K_j$ radially onto $\partial B(0, R)$ would decrease the $\mathcal{H}^1$-measure of $K_j$ while $B(K_j, s)$ would still contain $E$. Hence, by the Blaschke selection theorem [4, §3.4], there exist a subsequence $\{K_{i_j}\}_{j=1}^\infty$ and a compact set $\Gamma \subset \mathbb{R}^n$ such that $K_{i_j}$ converges to $\Gamma$ in the Hausdorff metric as $j \to \infty$. Since each $K_{i_j}$ is connected, Gołąb's theorem [4, §3.2] gives that $\Gamma$ is connected and
$\mathcal{H}^1(\Gamma) \leq \liminf_{j\to\infty} \mathcal{H}^1(K_{i_j}) = \lambda(E, s)$.
To conclude, since $K_{i_j}$ converges in the Hausdorff metric to $\Gamma$, $B(K_{i_j}, s)$ converges to $B(\Gamma, s)$ in the Hausdorff metric, and hence the closed set $B(\Gamma, s)$ also contains $E$. Therefore $\Gamma$ is closed and connected, $B(\Gamma, s) \supset E$, and $\mathcal{H}^1(\Gamma) = \lambda(E, s)$; that is, $\Gamma$ is a minimizer of $\lambda(E, s)$.

Minimizing over Minimal Spanning Trees solves the Maximum Distance Problem
In this section, we prove our main Lemma 3.5 and Theorem 3.6. We believe the intricacies that come from working with Lipschitz curves can cloud the key instincts underlying the proof, so we prove the main result first for (1) line segments and then (2) for $C^1$ curves, before moving on to (3) the main theorem, which obtains the same result for finite continua.
Before we treat these three cases, we establish some weaker results which are easier to obtain because we allow ourselves wiggle room in the distance $s$; i.e., we look at $(s + \epsilon)$-neighborhoods of $\Gamma$.

$(s + \epsilon)$-Neighborhoods of Steiner Trees
Although we show that $\lambda(E, s) = \sigma(E, s)$ in the cases where minimizers are Lipschitz curves, we first establish a weaker relationship with $\lambda(E, s)$ in Proposition 3.2, for which the following Lemma 3.1 is crucial.

Lemma 3.1. Let $E \subset \mathbb{R}^n$ be compact. If $0 < s < t$, then $\lambda(E, s) \geq \sigma(E, t)$.
Proof. First, let $\epsilon = s$ and let $\delta > 0$ be such that $\epsilon + \delta = t$; we will show that $\lambda(E, \epsilon) \geq \sigma(E, \epsilon + \delta)$. By Theorem 2.15, there exists a minimizer $\Gamma$ of $\lambda(E, \epsilon)$ that is compact, connected, and of finite $\mathcal{H}^1$ measure. Since $\Gamma$ is compact, there exists a finite $\delta$-net $X \subset \Gamma$ of $\Gamma$. Recall, this means that for any $a \in \Gamma$ there exists $b \in X$ such that $|a - b| < \delta$. Now, if we can show that for any $x \in B(\Gamma, \epsilon)$ there exists $y \in X$ such that $|x - y| < \epsilon + \delta$, we will have $B(X, \epsilon + \delta) \supset B(\Gamma, \epsilon)$; and since $\Gamma$ is a minimizer of $\lambda(E, \epsilon)$ with $B(\Gamma, \epsilon) \supset E$, it follows that $B(X, \epsilon + \delta) \supset E$. Thus, picking a Steiner tree $S_X$ over $X$, and since $X$ was chosen to be contained in $\Gamma$, we have $\mathcal{H}^1(S_X) \leq \mathcal{H}^1(\Gamma)$, and therefore $\sigma(E, \epsilon + \delta) \leq \mathcal{H}^1(S_X) \leq \lambda(E, \epsilon)$. Let us now show this is indeed the case. Let $x \in B(\Gamma, \epsilon)$; since $\Gamma$ is closed, there exists $z \in \Gamma$ such that $|x - z| \leq \epsilon$. Since $X$ is a $\delta$-net for $\Gamma$, there exists $y \in X$ such that $|y - z| < \delta$. Therefore, by the triangle inequality, $|x - y| \leq |x - z| + |z - y| < \epsilon + \delta$.
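The $\delta$-net step of this argument can be tested numerically. In the following sketch (ours; the quarter-circle curve and the sampling scheme are arbitrary illustrative choices), we build a greedy $\delta$-net on a densely sampled curve $\Gamma$ and verify that random points of $B(\Gamma, s)$ always lie within $s + \delta$ of the net.

```python
import math
import random

s, delta = 0.5, 0.1

# A sampled curve Gamma: a quarter circle of radius 2.
curve = [(2 * math.cos(k * math.pi / 2000), 2 * math.sin(k * math.pi / 2000))
         for k in range(1001)]

# Greedy finite delta-net inside the sample: every sampled point of
# Gamma ends up within delta of some net point.
net = []
for p in curve:
    if all(math.dist(p, q) >= delta for q in net):
        net.append(p)

# Sample random points of B(Gamma, s) and record the worst distance to
# the net; by the triangle inequality it should never exceed s + delta.
random.seed(0)
worst = 0.0
for _ in range(500):
    z = random.choice(curve)
    r, phi = random.uniform(0, s), random.uniform(0, 2 * math.pi)
    x = (z[0] + r * math.cos(phi), z[1] + r * math.sin(phi))
    worst = max(worst, min(math.dist(x, q) for q in net))
```

Every sampled point of $B(\Gamma, s)$ is within $s$ of some curve point and within $s + \delta$ of the net, exactly as the triangle-inequality step of the proof predicts.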
Proposition 3.2. Let $E \subset \mathbb{R}^2$ be compact, let $s > 0$, and consider a positive sequence $\delta_i \to 0$ as $i \to \infty$. There exists a sequence of finite point sets $X_i$ and Steiner trees $S_{X_i}$ such that $E \subset B(S_{X_i}, s + \delta_i)$ and $\mathcal{H}^1(S_{X_i}) \to \lambda(E, s)$ as $i \to \infty$.

Proof. We break the argument into steps:
1. Define $\sigma(s) := \sigma(E, s)$ and $\lambda(s) := \lambda(E, s)$.
2. By Lemma 3.1, $\lambda(s) \geq \sigma(s + \delta_i)$ for each $i$.
3. We can therefore find finite point sets $X_i$ with $B(X_i, s + \delta_i) \supset E$ and Steiner trees $S_{X_i}$ such that $\mathcal{H}^1(S_{X_i}) \leq \lambda(s)$.
4. By the Blaschke selection theorem, a subsequence $\{S_{X_{i(k)}}\}_k$ converges in the Hausdorff metric to a compact, connected set $S_*$.
5. By Gołąb's theorem, $\mathcal{H}^1(S_*) \leq \liminf_{k\to\infty} \mathcal{H}^1(S_{X_{i(k)}}) \leq \lambda(s)$.
6. But we also know that $E \subset B(S_{X_{i(k)}}, s + \delta_{i(k)})$ for every $k$, which implies (with a little bit of work) that $E \subset B(S_*, s)$.
7. This in turn implies that $\mathcal{H}^1(S_*) \geq \lambda(s)$, which, together with step 5, implies that $\mathcal{H}^1(S_*) = \lambda(s)$.

We now turn to the first of our three cases, where $E$ is the $s$-neighborhood of a line segment $L$, identified with $[0, L] \times \{0\}$, so that $\lambda(E, s) = L$. For each $n \in \mathbb{N}$, we may dissect $[0, L]$ into $n$ line segments, each of length $L/n$, with endpoints $x_k = kL/n$ for $k = 0, \ldots, n$. For each $n$, we will construct a closed and connected set $\Gamma_n = L \cup P_n$, where $P_n$ are what we call prongs, such that $\Gamma_n$ connects a finite point set $X_n$ and $B(X_n, s) \supset E$. This finite point set will consist of $2(n + 1)$ points obtained by "extending" each $x_k$ upwards and downwards, together with two horizontally extended endpoints, as in Figure 1. Since any Steiner tree $S_n := S_{X_n}$ over $X_n$ will, by its very definition, satisfy $\mathcal{H}^1(S_n) \leq \mathcal{H}^1(\Gamma_n)$, if we can show that $\mathcal{H}^1(\Gamma_n) \to L$ as $n \to \infty$, this will imply that $\mathcal{H}^1(S_n) \to L$ as $n \to \infty$ (note that we also know that $\mathcal{H}^1(S_n) \geq L$, since $X_n$ always contains points on either side of the two endpoints $0$ and $L$). This gives us a sequence of Steiner trees whose lengths converge to the minimal length. In order to show that $\sigma(E, s) = \lambda(E, s)$, we must also check that the $s$-neighborhoods of these Steiner trees contain $E$.

Case I: Line Segments
Let us construct $\Gamma_n$. For $\delta > 0$ (to be picked later) and for each $x_k$ ($k = 0, 1, \ldots, n$), pick the two points $y_k$ and $\bar{y}_k$ that are at distance $\delta$ "above" and "below" $x_k$. Note that if $x, y \in \mathbb{R}^2$, then $[x, y]$ is simply the closed line segment connecting $x$ and $y$. Therefore, $\bigcup_{k=0}^n [y_k, \bar{y}_k]$ consists of $(n + 1)$ vertical line segments of length $2\delta$. Denoting the set of points $\{(-\delta, 0), (L + \delta, 0)\} \cup \{y_k, \bar{y}_k\}_{k=0}^n$ by $X_n$, we must find $\delta_n > 0$ such that $B(X_n, s) \supset E$. To do this, first notice that on $[-s, s]$ we have $s - \sqrt{s^2 - x^2} \leq x^2/s$, and therefore if we let $\delta_n = (L/2n)^2/s$ and pick $n \in \mathbb{N}$ large enough so that $\delta_n < s - \delta_n$, then $B(X_n, s) \supset E$ (see Figure 2). Now,
$\mathcal{H}^1(\Gamma_n) = 2(n + 1)\delta_n + 2\delta_n + L = 2n\delta_n + 4\delta_n + L = 2(n + 2)\frac{L^2}{4sn^2} + L \leq C/n + L$,
and therefore $\mathcal{H}^1(\Gamma_n) \to L$ as $n \to \infty$.

Figure 2: In order to cover $B([x, y], s)$ with balls centered on the $2n + 2$ points, we must raise and lower the balls by $\delta$, as shown in the top-left blown-up picture. However, it suffices to extend the balls by a little more, $\delta_n$, as shown in the top-right blown-up picture. In order to guarantee that raising these balls upwards and downwards will not expose the center line, we must choose $\delta_n$ small enough so that $\delta_n < s - \delta_n$.
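The bookkeeping of this construction can be checked numerically. In the following sketch (ours, with the illustrative values $L = 1$ and $s = 1/4$), we build the extended point set $X_n$, verify the covering $B(X_n, s) \supset E$ on a dense grid sample, check the condition $\delta_n < s - \delta_n$, and watch the excess length $2(n + 2)\delta_n$ decay like $1/n$.

```python
import math

L, s = 1.0, 0.25

def prong_points(n):
    """The extended point set X_n of Case I: the two endpoints pushed
    outwards horizontally, and each grid point x_k pushed up and down,
    all by delta_n = (L/2n)^2 / s."""
    d = (L / (2 * n)) ** 2 / s
    pts = [(-d, 0.0), (L + d, 0.0)]
    for k in range(n + 1):
        x = k * L / n
        pts += [(x, d), (x, -d)]
    return pts, d

def covers_E(pts):
    """Check that B(pts, s) contains a dense grid sample of
    E = B([0, L] x {0}, s)."""
    for i in range(201):
        for j in range(41):
            x = -s + i * (L + 2 * s) / 200
            y = -s + j * (2 * s) / 40
            t = min(max(x, 0.0), L)                 # nearest segment point
            if math.dist((x, y), (t, 0.0)) <= s:    # (x, y) lies in E
                if min(math.dist((x, y), p) for p in pts) > s + 1e-12:
                    return False
    return True

for n in (10, 50):
    pts, d = prong_points(n)
    assert covers_E(pts) and d < s - d

# Excess length H^1(Gamma_n) - L = 2(n + 2) * delta_n = O(1/n).
excesses = [2 * (n + 2) * prong_points(n)[1] for n in (10, 100, 1000)]
```

The sampled covering check passes for each $n$, and the excess length shrinks like $1/n$, matching the estimate $\mathcal{H}^1(\Gamma_n) \leq L + C/n$.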

Case II: C 1 Curves
Proof. This proof is an application (and modification) of the ideas behind Lemma 3.3. We begin with the fact that for any aspect ratio $\alpha > 0$, there exists a large enough $M \in \mathbb{N}$ such that the partition $0 = t_1 < t_2 < \cdots < t_{M+1} = 1$ gives us that the images $\gamma([t_i, t_{i+1}])$ are contained in rectangles $D_i$ (centered along $P_i^0 \equiv [\gamma(t_i), \gamma(t_{i+1})]$; see Figure 3) of width $\mu_i$ and length $\rho_i$, where $\mu_i/\rho_i \leq \alpha$. Using our partition, we construct a piecewise linear curve, starting with the segments $P_i^0$. To each $P_i^0$ we now add prongs $P_{i,j}^1$, pointing up and down, of length $\mu_i/2 + \delta_{n_i}$ for $j = 0, \ldots, 2n_i + 1$. We also add 4 horizontal prongs $P_{i,j}^1$, $j = 2n_i + 2, \ldots, 2n_i + 5$, two at each end of the rectangle, each of length $\mu_i/2$. Centering balls of radius $s$ at each of the free ends of the $2n_i + 6$ prongs creates a cover for the $s$-neighborhood of $D_i$. We will call this piecewise linear curve (shown in green in Figure 3) $P_i$. The complete piecewise linear curve is just $P = \bigcup_{i=1}^M P_i$, whose endpoints (of which there are $\sum_{i=1}^M (2n_i + 6)$) are the centers of balls covering an $s$-neighborhood of $\Gamma$. We now show that the excess length of $P$ is as small as we like, provided we choose $\alpha$ small enough.
We begin by noting that we may assume every $\rho_i < 1$, since choosing $M$ big enough enforces that condition. The length of each vertical prong goes from $\delta_{n_i}$ (in the previous lemma) to $\delta_{n_i} + \mu_i/2$, and we have added four horizontal prongs of length $\mu_i/2$, so the excess length of $P_i$ over $\rho_i$ is controlled by $\alpha$ and $\delta_{n_i}$.

We now turn to the case of a Lipschitz curve $\Gamma$, parameterized by arc-length by $\gamma$. Using this parameterization, we will construct such a closed and connected $\Gamma_*$ by adding extra small line segments at particular places of $\Gamma$. Precisely how we add these extra line segments will depend on whether we are centered around a good portion of $\Gamma$ or a bad portion of $\Gamma$. Because $\gamma$ is Lipschitz, most of $\Gamma$ will be good, and we must therefore have tight control on how exactly we add these extra line segments. The line segments around the good portions will be denoted by $P$ and called prongs. In contrast, the bad portions of $\Gamma$ will be small, allowing us to be more liberal in how we add the extra line segments around them. The line segments around the bad portions will be denoted by $S$ and called spokes. We will then define $\Gamma_* := \Gamma \cup P \cup S$.

Case III: Lipschitz curves
Since $\gamma$ is Lipschitz, the set $G \subset I := [0, \mathrm{length}(\gamma)]$ of points where $\gamma$ is differentiable has full measure in $I$. We will call any $x \in G$ a good point and any $x \in I \setminus G$ a bad point. The image around any good point will be contained in a cone whose aspect ratio goes to $0$. Precisely, for any $x \in G$,
$\frac{\eta(x, r, \rho(x, r))}{\rho(x, r)} \to 0$ as $r \to 0$,
where, with $y = \gamma(x)$,
$\rho(x, r) = \inf\{t : S(y, \mathrm{Tan}(\Gamma, y), t) \supset \gamma(x - r, x + r)\}$ and $\eta(x, r, t) = \inf\{h : C(y, \mathrm{Tan}(\Gamma, y), h/t; t) \supset \gamma(x - r, x + r)\}$.
If x and r is clear from context, we will simply refer to the above aspect ratio as η/ρ.
Given a small aspect ratio $\alpha > 0$, we will construct a particular partition of $I$ with the help of the following. For any $\xi > 0$, since $G$ is Borel, we may pick a compact subset $K$ of $G$ such that $\mathcal{L}^1(G \setminus K) < \xi$. So that the constants work out at the end, we let $\xi := \epsilon/(16\,\mathrm{Lip}(\gamma))$. Differentiability of $\gamma$ on $G$ implies [5, §3.1.21] that for any $x \in K$ there is a small enough $R > 0$ such that for any $r \leq R$, $\eta/\rho < \alpha$.
Without loss of generality, we may assume that $\rho < 1$ and $\eta < s/100$. Therefore, from the open cover $\mathcal{G}_\alpha = \{U(x, R)\}_{x \in K}$ of $K$, we may extract a (what we call good) finite subcover

Figure 4: The cone is shortened asymmetrically, so that the ends of the cone intersect $\gamma(Z_i)$. Notice that $\mu/(|\nu| + |\tau|)$ is at most $2\alpha$.
$\{U(x_i, R_i)\}_{i=1}^N$ of $K$; we may assume that no $U(x_i, R_i)$ is contained in any other $U(x_j, R_j)$. Since $I \setminus K$ is equal to a finite union of disjoint, connected, closed subintervals $\{B(b_i, \xi_i)\}_{i=1}^M$ for some $b_i \in I \setminus K$ and $\xi_i \geq 0$, we can define the corresponding (what we call bad) finite cover $\{B(b_i, \xi_i)\}_{i=1}^M$. We call $\{x_i\}_{i=1}^N$ and $\{b_i\}_{i=1}^M$ the set of good centers and the set of bad centers, respectively, and write $Z$ for the union of the good and bad centers. Note that the way we chose our bad cover implies that there is always at least one good center in between any two bad centers. Also note that $\sum_{i=1}^M 2\xi_i \leq \xi$. Let us now order all the good and bad centers by their natural ordering in $\mathbb{R}$. Let $z_i \in Z$ (for $i = 1, \ldots, N + M - 1$) and let $z_{i+1}$ be the next largest center in $Z$.
In what follows, we will obtain a corresponding $u_i$ from each of these $z_i$. Let $V_i$ and $V_{i+1}$ be the covering elements corresponding to $z_i$ and $z_{i+1}$, respectively. If both $z_i$ and $z_{i+1}$ are good centers, then $V_i \cap V_{i+1} \neq \emptyset$, and we may therefore pick a point $u_i \in V_i \cap V_{i+1}$ such that $z_i < u_i < z_{i+1}$. If $z_i$ is a good center and $z_{i+1}$ is a bad center, let $u_i = \mathrm{cl}(V_i) \cap V_{i+1}$. Similarly, if $z_i$ is a bad center and $z_{i+1}$ is a good center, we let $u_i = V_i \cap \mathrm{cl}(V_{i+1})$. Lastly, to deal with the endpoints, we let $u_0 = 0$ and $u_{N+M} = \mathrm{length}(\gamma)$. For what follows, it is important to notice that $z_i < u_i < z_{i+1}$ for each $i$, and that the map sending each center $z_i$ to the interval $[u_{i-1}, u_i]$ is a bijection. This allows us to partition $\Gamma$ into good parts and bad parts, given by the images of the intervals corresponding to good centers and bad centers, respectively.
Covering good parts with prongs: For $j = 1, \ldots, N$, let $z_i := z_{i(j)}$ be a good center, let $Z_i := [u_{i-1}, u_i]$ be its corresponding interval, and let $y_i := \gamma(z_i)$. Instead of considering the symmetric cone $C(y_i, \mathrm{Tan}(\Gamma, y_i), \eta_i/\rho_i; \rho_i)$ that contains $\gamma(B(z_i, R_i))$, we shorten this cone horizontally as much as possible while still containing $\gamma(Z_i)$. Assuming that $y_i = 0$ and that $\mathrm{Tan}(\Gamma, y_i) = \{(x, 0) : x \in \mathbb{R}\}$, we define this shortened cone as
$C_i := C(y_i, \mathrm{Tan}(\Gamma, y_i), \eta_i/\rho_i) \cap S(\mathrm{Tan}(\Gamma, y_i); \nu_i, \tau_i)$,
where $\nu_i \leq 0 \leq \tau_i$ are chosen with $|\nu_i| + |\tau_i|$ as small as possible subject to $C_i \supset \gamma(Z_i)$.

Figure 5: The rectangle of height $\mu_i$ and width $\rho_i$, the finite point set $X_i$ such that $B(R_i, s) \subset B(X_i, s)$, and the prongs $P_i$ connecting $X_i$ to $\gamma(Z_i)$.

Denote the height of the tallest side of this new cone by $\mu_i := 2\eta_i \max\{|\nu_i|, |\tau_i|\}/\rho_i$, and its width by $|\nu_i| + |\tau_i|$ (see Figure 4). Since $\gamma(Z_i)$ is closed and contained in the cone $C_i$, the ends of the cone intersect $\gamma(Z_i)$. Therefore, any line segment at least as long as $2\mu_i$ that is centered on and perpendicular to $\mathrm{Tan}(\Gamma, y_i)$, over the projection $\pi_{\mathrm{Tan}(\Gamma, y_i)}[\gamma(Z_i)]$, will also intersect $\gamma(Z_i)$.
Still focusing our attention on a good piece $\gamma(Z_i)$, we will construct a finite point set $X_i^n$ such that
$B(\gamma(Z_i), s) \subset B(X_i^n, s)$.  (8)
Since the cone $C_i$ contains $\gamma(Z_i)$, we will show (8) by showing that $B(X_i^n, s) \supset B(R_i, s) \supset B(C_i, s)$, where $R_i$ is the smallest rectangle that contains $C_i$, as depicted in Figure 5. The points in $X_i^n$ will then be connected by $\gamma(Z_i) \cup P_i$, where $P_i$ consists of $n + 1$ equally spaced line segments of small enough length, all perpendicular to the tangent, together with 4 other short line segments.
In order for $B(X_i^n, s) \supset B(R_i, s)$, we require that $\mu_i + \delta < s - \delta$. This is guaranteed when $n$ is chosen large enough so that $1/s < n$. We will, however, choose $n$ with more precision later.
Covering bad parts with spokes: For $j = 1, \ldots, M$, let $z_i := z_{i(j)}$ be a bad center, let $Z_i := [u_{i-1}, u_i]$ be its corresponding interval, and let $y_i := \gamma(z_i)$. We now construct the spokes $S_i$ connecting a point set $Y_i$ such that
$B(\gamma(Z_i), s) \subset B(Y_i, s)$.  (9)
Each collection of spokes $S_i$ will consist of line segments emanating from the image of the corresponding bad center. The length of these line segments will be bounded above in terms of the length of the bad center's interval and $\mathrm{Lip}(\gamma)$.

Recalling that the bad intervals satisfy $\sum_{i=1}^M 2\xi_i \leq \xi$, and that $\gamma(Z_i) \subset B(\gamma(z_i), 2\,\mathrm{Lip}(\gamma)\,\xi_i)$ since $\gamma$ is Lipschitz, a suitable choice of $Y_i$ will give us (9). To this end, we define $Y_i$ to be the endpoints of the spokes, and the collection of spokes $S_i$ simply consists of line segments connecting every point in $Y_i$ to the center $\gamma(z_i)$. Note that $\mathcal{H}^1(S_i) = 8\,\mathrm{Lip}(\gamma)\,\xi_i$. See Figure 6 for an illustration of this step. It is clear that $\Gamma \cup S_i$ is connected, since $\gamma(z_i)$ lies in both $S_i$ and $\Gamma$.

Estimating $\mathcal{H}^1(\Gamma_*)$: First, by our initial choice of $\xi$, we simply have that $\mathcal{H}^1(S) \leq 8\,\mathrm{Lip}(\gamma)\,\xi \leq \epsilon/2$.

Case IV: Finite Continua
In the most general case, when $\Gamma \subset \mathbb{R}^2$ is a finite continuum, Lemma 3.5 still holds. As mentioned in Remark 2.10, this is due to the important fact that finite continua, 1-rectifiable continua, and Lipschitz curves are all equivalent.

Theorem 3.6. Let $E \subset \mathbb{R}^2$ be compact and let $s > 0$. Then $\sigma(E, s) = \lambda(E, s)$.
Proof. By the existence result, Theorem 2.15, we know that a minimizer $\Gamma$ of $\lambda(E, s)$ is a compact, connected set with $\mathcal{H}^1(\Gamma) < +\infty$. Letting $\epsilon > 0$, by Lemma 3.5 there exist a finite point set $X_\epsilon \subset \mathbb{R}^2$ and a compact and connected $\Gamma_\epsilon$ containing $X_\epsilon$ such that $B(\Gamma, s) \subset B(X_\epsilon, s)$ and $\mathcal{H}^1(\Gamma_\epsilon) \leq \mathcal{H}^1(\Gamma) + \epsilon$.
In particular, any Steiner tree $S_{X_\epsilon}$ over $X_\epsilon$ will be a candidate minimizer for $\sigma(E, s)$ and $\lambda(E, s)$, and will satisfy $\mathcal{H}^1(\Gamma) \leq \mathcal{H}^1(S_{X_\epsilon}) \leq \mathcal{H}^1(\Gamma_\epsilon) \leq \mathcal{H}^1(\Gamma) + \epsilon$. Since $\epsilon > 0$ was arbitrary, $\sigma(E, s) = \lambda(E, s)$.
The following corollary follows from the well-known fact stated in Remark 1.1. It says that when we define $\sigma(E, s)$, instead of taking Steiner trees over $X$, we can take minimal spanning trees over $X$ and obtain the same result as Theorem 3.6.

Corollary 3.7. Let $E \subset \mathbb{R}^2$ be compact and let $s > 0$. Define $\sigma'(E, s) := \inf\{\mathcal{H}^1(T_X) : X \subset \mathbb{R}^2 \text{ finite}, B(X, s) \supset E\}$, where we instead take minimal spanning trees $T_X$ over $X$ rather than Steiner trees over $X$. Then $\sigma'(E, s) = \lambda(E, s)$.
Proof. Given any Steiner tree $S_X$ over a finite point set $X$, there exists a finite collection of Steiner points $X'$. Then for any minimal spanning tree $T_{X \cup X'}$ over $X \cup X'$, we get that $\mathcal{H}^1(T_{X \cup X'}) = \mathcal{H}^1(S_X)$. Therefore $\sigma'(E, s) = \sigma(E, s)$, and by Theorem 3.6, we get $\sigma'(E, s) = \lambda(E, s)$.