by Jonathan Tennenbaum

The issue of *analysis situs* becomes unavoidable, when we are confronted with a relationship of two or more entities A and B (for example, two historical events or principles of experimental physics), which do not admit of any simple consistency or comparability, i.e., such that the concepts and assumptions, underlying our notion of “A,” are formally incompatible with those underlying “B.” In the case where the relationship between A and B is undeniably a causally efficient one, we have no rational choice, but to admit the existence of a higher principle of lawful relationship (a “One”) situated beyond the framework provided by A and B as originally understood “in and of themselves.”

Exactly the stubborn, “dumbed down” refusal to accept the existence of such higher principles of *analysis situs*, lies at the heart of the chronic mental disease of our age. That includes, not least of all, the Baby Boomers’ typical penchant for “least common denominator” approaches to so-called “practical politics.” Antidotes are urgently required.

An elementary access to this problem, as well as a hint at *analysis situs* itself, is provided by the ancient discovery–attributed to the school of Pythagoras–of the relative incommensurability of the diagonal and side of a square. This discovery, a precursor to Nicolaus of Cusa’s “Docta Ignorantia,” could with good reason be characterized as a fundamental pillar of civilization, which ought to be in the possession of every citizen; indeed, the rudiments thereof could readily be taught to school children. Yet, NOWADAYS there are probably only a HANDFUL of people in the whole world, who approach having an adequate understanding of it.

In order to appreciate the Pythagorean discovery, it were better to first elaborate a lower-order hypothesis concerning measurement and proportion, and then see why it is necessary to abandon that hypothesis at a certain well-defined point, in favor of a higher-order conception. The hypothesis in question is connected with the origin of what might be called “lower arithmetic”–as contrasted to Gauss’ “higher (geometrical) arithmetic”–which however is not to deny the eminent usefulness and even indispensability of the lower form within a certain, strictly delimited domain. On the other hand, the discoveries of the Pythagorean school put an end to what might otherwise have become a debilitating intoxication with simple, linear arithmetic, one not dissimilar to the present-day obsession with formal algebra and “information theory.”

**Linear measure**

Already in ancient times, it became traditional to distinguish between three species (or degrees of extension) of geometry within Euclidean geometry itself: so-called linear, plane, and solid geometry. The phenomenon of “incommensurability” bursts most clearly into view, when we attempt to carry over certain notions of measurement and proportion, apparently reasonable and adequate for the comparison of lengths along a line, into the doubly- and triply-extended domains of plane and solid geometry. Actually, the problem is already present in the lower domain; but it takes the transition to the higher domains to “smoke it out” and render it fully intelligible.

The commonplace notion of measurement and proportion, is based on the hypothesis that there exists some basic element or “unit,” common to the entities compared, out of which each of the entities can be derived by some formally describable procedure. In the linear domain of Euclidean geometry–which, incidentally, presupposes the hypothesis, that length is independent of position–this approach to measurement unfolds on the basis of three principles:

First, given two line segments, we preliminarily examine their relations of position, i.e., whether they are disjoint, overlap, or one is contained in the other. Secondly, we superimpose them, by means of so-called “rigid motion” (again, an hypothesis!), to ascertain their relation in terms of “equal length,” “shorter,” or “longer.” And thirdly, we extend or multiply a given line segment, by adjoining to it reproductions of itself, i.e., segments of equal length.

By combining these principles, we arrive at such propositions as “segment B is equal in length to (or shorter or longer than) two times segment A,” or such more complicated cases as “three times segment B is equal to (or shorter or longer than) five times segment A,” and so forth. [Figure 1.] In the case, where a segment B is determined to be equivalent (in length) to a multiple of segment A, it became customary to say, that “A exactly divides (or measures) B,” and to express the relationship by supplying the exact number of times that A must be replicated, in order to fill out a length equivalent to B. Where such a simple relationship does not obtain between A and B, it would be natural to direct our efforts toward finding a smaller segment C, which would exactly divide A and exactly divide B at the same time (commensurability!). In case we succeed, the ratio of the corresponding multiples of C, required to produce the lengths of A and B respectively, would seem to perfectly express the relationship between A and B in terms of length. So, the proposition “A is three-fifths of B” or “A is to B as three is to five” would express the case, where we had determined, that A = 3C and B = 5C for some common “unit” C. [Figure 2.]

**The paradox of “Euclid’s algorithm”**

HOW, a practically-minded person would probably ask, might we discover a suitable common divisor C for any given segments A and B? It were natural to first try the shorter of the two lengths, say A, and to seek the largest multiple of A which is not larger than B. If that multiple happens to exactly equal B, we are finished, and can take C = A. Otherwise, we shall have to deal with the occurrence of a “remainder” in the form of a segment R, shorter than A, by which the indicated multiple of A falls short of B’s length. One possible reaction to this would be, to divide A in half, and then if necessary once again in half, and so on, in the hope that one of the resulting series of sub-segments might be found to exactly divide B. Those skillful in these matters will see, however, why such an approach must often lead to a dead end–as for example when the lengths of A and B happen to stand in the ratio 3 to 5, in which case successive halving of A or B could never produce a common divisor. [Figure 3.]
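For the modern reader, the failure of successive halving can be checked directly. A minimal Python sketch, assuming (purely for illustration) unit lengths of 3 and 5 for A and B–the ratio named above–and exact rational arithmetic:

```python
from fractions import Fraction

# Hypothetical segments in the ratio 3 : 5 (unit lengths assumed
# for illustration only).
A, B = Fraction(3), Fraction(5)

candidate = A
for k in range(1, 21):
    candidate = candidate / 2        # halve again: A / 2**k
    quotient = B / candidate         # copies of the candidate filling B
    # exact division would make this quotient a whole number,
    # i.e., a fraction with denominator 1 -- which never happens here:
    assert quotient.denominator != 1, f"halving step {k} divided B exactly"

print("none of the first 20 halvings of A divides B exactly")
```

The reason is visible in the arithmetic: B divided by A/2^k is 5·2^k/3, and no power of two can cancel the 3 in the denominator.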

A much more successful approach, which (at this stage of the problem) represents a “least action” solution, became known in later times as “Euclid’s algorithm”: In case the shorter segment, A, does not divide B exactly, we take as next “candidate” the remainder R itself. If R divides A exactly, then R is evidently a common divisor of both A and B. Otherwise, take the remainder of A upon division by R–call it R’–as the next “candidate.” Again, if R’ exactly divides R, then (by working the series of steps backwards) R’ will also divide A and B. If not, we carry the process another step further, producing a new, even smaller remainder R”, and so forth. This approach has the great advantage that, ASSUMING A COMMON DIVISOR of A and B ACTUALLY EXISTS, we shall certainly find one. In such a case, in fact, as the reader can confirm by direct experiments, the indicated process leads with rather extraordinary rapidity, to the greatest common divisor of the segments A and B. [Figure 4.]
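The remainder process just described admits a compact modern sketch (the sample lengths below are illustrative assumptions, not taken from the text):

```python
from fractions import Fraction

def euclid_gcd(a, b):
    """Greatest common divisor of two commensurable lengths, by the
    remainder process described above: replace the pair (a, b) by
    (previous divisor, remainder) until the remainder vanishes."""
    while b != 0:
        a, b = b, a % b
    return a

# Segments of length 5 and 3 share the unit 1:
print(euclid_gcd(Fraction(5), Fraction(3)))        # 1

# Segments of length 3/4 and 5/6 share the smaller unit 1/12:
print(euclid_gcd(Fraction(3, 4), Fraction(5, 6)))  # 1/12
```

Note how few steps are needed even in the second case: the rapidity the reader can confirm by direct experiments.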

The discussion so far, however, leaves us with a rather considerable paradox. For the case, that there exists a segment dividing A and B exactly, the indicated approach to measurement and proportion, provides us with an efficient means to find the largest such common divisor, as well as to derive an EXACT characterization of the relationship of A to B in terms of a ratio of whole numbers. At the same time, however, some of us might have caught a glimpse of a potential “disaster” looming on the horizon: What if the “Euclid algorithm,” sketched above, fails to come to an end? It were at least conceivable, that for some pairs A, B, the successive remainders R, R’, R”…, while rapidly becoming smaller and smaller, might each differ sensibly from zero.
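The conceivable “disaster” can at least be made visible numerically. In the following sketch, the pair 1 and the square root of 2 (a pair chosen with an eye to Part II) is fed to the remainder process in floating-point approximation; the remainders shrink step by step without ever vanishing. Floating-point numbers can of course prove nothing here; the sketch only illustrates the looming possibility:

```python
import math

# Run the remainder process on the lengths 1 and sqrt(2), in
# floating-point approximation (illustration only, not a proof).
a, b = 1.0, math.sqrt(2.0)       # a is the shorter segment
for step in range(1, 9):
    r = b % a                     # remainder of b upon division by a
    print(f"R{step}: {r:.9f}")
    b, a = a, r                   # the remainder becomes the new divisor

# Each remainder is roughly 0.414... times the one before:
# smaller and smaller, yet never zero in the steps shown.
```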

Within the limits of the ideas we have developed up to this point, we find the means neither to rule out such a “disaster” (“bad infinity”), nor to devise a unique experiment which might demonstrate the failure of “Euclid’s algorithm,” while at the same time providing a superior approach.

Evidently, it were folly to search for an answer within the “virtual reality” of linear Euclidean geometry per se. We need a flanking maneuver, to catapult the whole matter into a higher domain. [To be continued.]

EXCERPTS FROM A REPLY BY JONATHAN TENNENBAUM TO QUESTIONS ON HIS PEDAGOGICAL DISCUSSIONS

Dear Reader,

Pardon my delay in responding to your queries concerning the pedagogical discussions.

Let me first address the last point in your letter, which is the most significant. I mean the following passage:

“On the notion that the rate of change, or change in the rate of change is alien to Euclid, needing to be imported from our higher vantage point: A number of us just do not see the revolutionary ‘axiom-busting’ nature of this concept…”

Judging from your report, the problem which came to the surface during your discussions, is fundamental. I am very happy that the problem surfaced, although it tells me that my pedagogical tactic failed, at least in some cases. No matter. We often learn more from our failures, than from our successes!

What I think is going wrong, in part, is that many (probably most) people haven’t broken through yet, or are still resisting, to grasp in a really SENSUOUS way, what Lyn is trying to get at with his discussions of theorem-lattices and changes of axioms. People have a kind of abstract understanding of these matters, which they can present formally, can cite examples and so on, and even apply the concept in a certain way; but it’s still skin-deep, somewhat superficial learning. Above all, there is an emotional problem, a problem of INDIFFERENTISM or “decoupling” of mental activity from passion, which was induced from very early on in school, in university studies, and actually by our whole cultural environment. All of us of our generation — I would not exclude myself — have to struggle with this problem to one extent or another.

In order to function properly, the pedagogical discussions must be composed and read, not like sections of a textbook, but rather as miniature DRAMAS of the most rigorous sort. A drama involves powerful emotion. It is not just an “intellectual exercise.” In a well-composed and well-acted tragedy, the achievement of the desired effect on the audience, requires, that the individuals in the audience actually TAKE INTO THEIR OWN MINDS, by a powerful sort of “resonance” (empathy) the thought-processes projected by the dramatist with the aid of the characters. Under such conditions, the dramatist can operate DIRECTLY on the inner mental processes of the audience.

The simplest form of pedagogical discussion presents a TYPE of physically-demonstrable, valid transition from a hypothesis “A,” to a superior hypothesis “B,” such that the theorem-lattices, corresponding to “A” and “B” respectively, are separated from each other by an absolute mathematical discontinuity. In other words, although “B” subsumes (albeit in reworked form) that aspect of “A” which has not been invalidated by the experimental discovery, there is no way to get from “A” to “B” by deductive methods.

In some cases, an experimental demonstration directly refutes an explicit prediction of “A.” Thus, we demonstrate, that an event, which a theorem of “A” says must occur in a certain way, does NOT occur in that way. But very often, the most prominent characteristic of an experimental demonstration, is that it reveals an implicit LIMITATION in the original hypothesis “A,” rather than, so to speak, an explicit error. Something is demonstrated to occur in the real universe, which COULD NOT EXIST in the “mental world” circumscribed by hypothesis “A.” It is not necessary, that the event AS SUCH be EXPLICITLY FORBIDDEN by “A.” In fact, “A” will generally have NO CONCEPT for the event: “A” cannot account for its existence; it presents an insoluble paradox; it is “unimaginable.” And yet, the human mind (though perhaps not the mind of a radical positivist) is forced to acknowledge its existence as experimentally demonstrated.

Actually, the two cases are not so different, as might appear at first glance, if we understand the concept of “hypothesis” to mean, not just an assumption about this or that specialized area, but (at least, implicitly) a WAY OF THINKING about the ENTIRETY OF THE UNIVERSE. For, THE MIND IS ONE. In fact, our mind tends to extrapolate or “project” the underlying limitations of a given hypothesis, upon the entirety of the universe, in such a way that those limitations become “invisible” to us. So, the fish considers the fishbowl to be the entire universe, until something is demonstrated to exist outside the fishbowl. Only then, do the limits of the fishbowl become apparent.

I suspect that people miss the Earth-shaking implications of the pedagogical demonstration in question, because they are holding the hypotheses involved safely at arm’s length, rather than letting them really sink in. In other words, not really getting involved. You really have to become accustomed to the mental world of hypothesis “A” for a certain time, internalizing the corresponding mode of thinking, in order then to experience FROM “INSIDE,” so-to-speak, a crucial moment of physically demonstrable FAILURE of the mode. This requires a kind of mental dexterity and playfulness, to “forget” or “unlearn” the existence of the superior hypothesis “B” (in this case, connected with the necessary introduction of notions of “rate of change”), even though that has long become a part of our general culture. We have to use our imagination in order to place ourselves mentally, in a sense, back into the period BEFORE the discovery in question was made. In the same way, we should be able to imagine, on the basis of higher hypothesis, a future world embodying experimental refutations of hypotheses which we today regard as self-evident.

Were the Greeks and others, who developed their physical science in terms of “Euclidean geometry,” all stupid or evil? Certainly not! Although an adequate history has yet to be assembled, it is certain, that what we now call “Euclidean geometry” BEGAN as a series of REVOLUTIONARY BREAKTHROUGHS in physics, associated with the discovery and elaboration of certain general principles of CONSTRUCTION. The highest point of this development, as stressed by Kepler, was embodied in the treatment of the five regular solids, formally summarized in the famous Thirteenth Book of Euclid’s {Elements}. The Greek constructive geometry, reworked by Euclid as a prototype of a formal theorem-lattice, embodied a kind of technology of thinking, far superior to what had existed prior to that (for example in the Egyptian or ancient Chinese science, as far as we know).

Thus, it were useful, before proceeding to my pedagogical discussion of the circle, to first get back into the mode of Euclidean geometry. For example by doing constructions such as: constructing perpendiculars and parallels, constructing divisions of the circle (equilateral triangle, square, pentagon, hexagon), constructing the golden section, bisecting any given angle, dividing a line segment into any given number of equal segments, constructing the tangent to a circle at any point, constructing a demonstration of Pythagoras’ theorem, etc. Allow yourselves to get into the “mind set” of this type of approach to problems. This is the same thing I tried to do in the earlier discussion of incommensurability, where I introduced “Euclid’s algorithm” in one-dimensional geometry, not so much for its own sake, but as characteristic of a kind of approach to the problem of measurement.

Of course, the concept of CHANGE is central to every positive development of human civilization. The constructive geometry of the Greeks itself represents an attempt to deal with that. Of course, the notion of change and rate of change is “always there,” in a certain way, within higher hypothesis (see Plato’s {Timaeus}, for example). But the elaboration of a constructive geometry based explicitly on the notion of variable rate of change, came much later. Just compare the physics of Archimedes, with the physics launched in Nicolaus of Cusa’s {Docta Ignorantia} and brought to full development through the non-algebraic function theory of Huygens, Leibniz and Bernoulli. The turning-point, as far as we can see, came with the revolutionary shift in conception, embodied in Nicolaus of Cusa’s treatment of the circle and related topics, relative to the Euclidean approach of Archimedes.

Thus, you will not find the notion of “variable rate of change,” as that is understood by Leibniz, in Euclidean geometry. It’s not there. It is certainly implicit in the higher hypothesis guiding the development of Greek geometry, in Plato and so forth; but it was not yet actualized as an elaborated hypothesis. Thus, there is a constant TENSION between hypothesis and higher hypothesis, which constantly drives knowledge forward, employing a succession of unique experiments.

I hope these remarks will be helpful to you and your colleagues….

Concerning your reference to “solving” equations for the ratio of diagonal to side of an isosceles right triangle, I would caution as follows: When an algebraist says “the square root of two,” he is usually only slapping a label onto an UNFILLED GAP in his knowledge. He has not thereby developed a CONCEPT. Whereas by contrast, the paradoxical result of the geometrical construction evokes — in the mode of metaphor, and not merely pasting formal labels on things — an actual concept of a precisely-characterized, yet linearly inexpressible magnitude.

Concerning your query on light, I intend to develop some pedagogical discussions on exactly this subject, which requires a certain amount of elaboration. But from the way you expressed your question, I suspect that people have been boxing themselves a bit into a too constricted, literal, “mathematical” way of thinking about these matters. What is worthwhile to reflect about in a broad way — without necessarily expecting to come up with a “final answer” — is the question: What kind of Universe are we living in, in which such phenomena as refraction and diffraction of light can take place? Then, compare that with the “mental world” associated with the Euclidean approach to geometry.

Keep up the good work. I will be happy to help if you have any further queries.

Best wishes,

Jonathan Tennenbaum

Incommensurability and Analysis Situs
Pedagogical Discussion, Part II: Experimental demonstration of incommensurability

CAN YOU SOLVE THIS PARADOX?

by Jonathan Tennenbaum

Moving from singly-extended, linear geometry, to doubly-extended (plane) geometry, provides us with a relatively unique experiment for the solution of the paradox presented above.

Synthetic plane geometry excels over singly-extended linear geometry in virtue of the principle of angular extension (rotation), as embodied by the generation of the circle and its lawful divisions. Among the latter, the square (via the array of its four vertices) is most simply constructed, after the straight line itself, by twice folding or reflecting the circle onto itself.

Having constructed a square by these or related means, designate its corners (running around counterclockwise) P, Q, R, and S. {(Figure 1)} Our experiment consists in “unfolding” the relationship between the two characteristic lengths associated with the square: side PQ and diagonal PR. These two shall play the role of the segments “A” and “B” in our previous discussion. (Note: the following constructions are much easier to actually carry out, than to describe in words. The reader should actually cut out a square and do the indicated constructions.)

For our purposes it is convenient to focus, not on the whole square, but on the right triangle PQR obtained by cutting the square in half along the diagonal PR. {(Figure 2)} Note, that the sides PQ and QR have equal length (PQR is a so-called isosceles right triangle); furthermore, the angle at Q is a right angle and the angles at P and R are each half a right angle.

To compare A (= PQ) with B (= PR), fold the triangle in such a way, that PQ is folded exactly onto (part of) the line PR. Since PQ is shorter than PR, the point Q will not fold to R, but will fold to a point T, located between P and R. {(Figure 3)} By the construction, PQ and PT are equal in length. Next, note that the axis of folding, which divides the angle at P in half, intersects the side QR at some point V, between Q and R. Observe, that the indicated operation of folding brings the segment QV exactly onto the segment TV.

Observe also, that through the indicated folding of the triangle, the triangular region PVT is exactly “covered” by the region PVQ, while the smaller triangle portion VTR is left “uncovered,” as a kind of higher-order “remainder.”

Focus on the significance of that smaller triangle. Note, that in virtue of the construction itself, VTR has the same angles and shape as (i.e., is similar to) the original triangle PQR.

**Euclid’s Algorithm Again**

Comparing the original triangle to the smaller “remainder” triangle VTR, we can easily see that the former’s sides are derived from the latter’s by relationships very similar to, though slightly different from, the steps of the so-called Euclid algorithm! (See Part I, in our issue dated June 2, 1997.)

First, in fact, the side RT results from subtracting the segment PT, equal in length to the original triangle’s side PQ, from the original triangle’s hypotenuse PR. Second, the hypotenuse VR of the small triangle derives from the side QR of the original triangle, by subtracting the segment QV, while the latter (in virtue of the folding operation and the similarity of triangles) is in turn equal to TV, which again is equal to RT. In summary: if the side and hypotenuse of the original triangle are A and B, respectively, then the corresponding values for the smaller triangle will be A′ = B – A and B′ = A – A′. {(Figure 4)}
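For the modern reader, the two relationships can be iterated numerically. A Python sketch, taking the original side A = 1 as an assumed scale (so that the hypotenuse B is the square root of 2, by Pythagoras’ theorem):

```python
import math

# Original triangle: side A = 1, hypotenuse B = sqrt(2).
# Apply the folding relationships A' = B - A, B' = A - A' repeatedly.
A, B = 1.0, math.sqrt(2.0)
for step in range(1, 11):
    A, B = B - A, A - (B - A)
    # each new triangle keeps the proportions of the original:
    assert abs(B / A - math.sqrt(2.0)) < 1e-6

# Every folding shrinks the side by the factor sqrt(2) - 1 (about 0.414).
print(f"after 10 foldings the side has shrunk to {A:.6e}")
```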

**Lurking Paradox**

The reader might already notice an extraordinary paradox lurking behind these relationships: Were A and B to have a common divisor C, then that same C would–in virtue of the just-mentioned relationships–also have to divide A′ and B′. What is paradoxical about that? Well, the smaller triangle is similar to the larger one, so we could carry out the same construction upon it, as we did to derive it from the original triangle. The result would be a third, much smaller triangle of the same proportions, whose leg and hypotenuse, A″ and B″, would thereby also have to be divisible by the same unit C. And yet, continuing the process, we would rapidly arrive at a triangle whose dimensions would be smaller than C itself!
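The descent can be made tangible with whole numbers. If, contrary to fact, A = m·C and B = n·C for whole numbers m and n, the folding relationships give A′ = (n – m)·C and B′ = (2m – n)·C: smaller whole multiples of the SAME unit. A Python sketch (the search bound of 200 is an arbitrary assumption for illustration):

```python
def folding_steps(m, n):
    """Iterate (m, n) -> (n - m, 2*m - n), the folding relationships
    applied to supposed whole-number multiples of a common unit C,
    counting the steps until the 'triangle' degenerates."""
    steps = 0
    while n - m > 0 and 2 * m - n > 0:   # the next triangle still exists
        m, n = n - m, 2 * m - n
        steps += 1
    return steps

# For EVERY whole-number pair the descent halts after finitely many
# steps, whereas the actual construction can be repeated without end:
# hence no common unit C can exist.
deepest = max(folding_steps(m, n) for m in range(1, 201)
                                  for n in range(m + 1, 2 * m))
print("deepest descent among pairs with m <= 200:", deepest)
```

The pairs surviving longest, such as 5 and 7 or 12 and 17, are exactly the whole-number ratios lying closest to the true proportion of side to diagonal.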

We are thus faced with the inescapable conclusion, that A and B cannot have a common divisor in the sense of linear Euclidean geometry. The relationship between A and B cannot be expressed as a simple ratio of whole numbers. As Kepler puts it in his “World Harmony,” the ratio of A to B is *Unaussprechbar*–it cannot be “spoken”; by which Kepler means, it is not communicable in the literal, linear domain. But Kepler emphasizes at the same time, that it is {knowable} (*wissbar*), and is precisely communicable {by other means.}

Evidently, the cognition of such linearly incommensurable relationships, requires that we abandon the notion, that simple linear magnitudes (so-called scalar magnitudes) are ontologically primary. Our experiment demonstrates, that such magnitudes as the ratio of the diagonal to the side of a square (commonly referred to algebraically as the square root of two) are not really linear magnitudes at all, but are “multiply extended,” geometrical magnitudes. They call for a different kind of mathematics. What we lay out on the textbook “number line” are only shadows of the real process, occurring in a “curved” universe. This coheres, of course, with Johannes Kepler’s reading of the significance of Golden Mean-centered spherical harmonics in the ordering of the solar system, and in microphysics as well.
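One way in which the incommensurable ratio nonetheless remains precisely communicable “by other means” can be illustrated–a modern aside, not part of the original argument. The endless remainder process corresponds to what later mathematics calls the continued fraction [1; 2, 2, 2, …]; truncating it yields whole-number ratios which approach, without ever reaching, the ratio of diagonal to side:

```python
from fractions import Fraction

def convergent(depth):
    """Truncate the continued fraction 1 + 1/(2 + 1/(2 + ...)) at the
    given depth; each truncation is an exact whole-number ratio."""
    x = Fraction(2)
    for _ in range(depth - 1):
        x = 2 + 1 / x
    return 1 + 1 / x

for d in (1, 2, 3, 4):
    c = convergent(d)
    print(c, float(c))
# 3/2, 7/5, 17/12, 41/29, ...: ever-better "shadows" of the ratio B : A
```

The truncations are precisely characterized, linearly expressible approximations; the ratio itself lies forever beyond them.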

**Analysis Situs Relationship**

The relevant relationship for *analysis situs*, in the preceding discussion, is not between the diagonal and side of a square; but rather that between the hypotheses underlying the linear domain, sketched in Part I of our discussion, and the superior standpoint implied in Part II.

A final note: Observe the rotation and change of scale of the smaller triangle relative to the larger. Our experimental {transformation} of the larger triangle into the smaller, similar triangle, as an {inherent feature} of the relationship of A to B, already points in the direction of Gauss’ complex domain, and the preliminary conclusion, that the complex numbers are ontologically primary–more real–than the so-called “real numbers.”

*(Anticipating what might be developed in other locations: The transformation constructed above, belongs to the so-called “modular group” of complex transformations, which are key to Gauss’ theory of elliptic functions, quadratic forms, and related topics. Gauss, in effect, reworks the central motifs of Greek geometry, from the higher standpoint of the complex domain.)*