Transfinite Principle of Light, Part I: Prologue

by Jonathan Tennenbaum

Last week my esteemed colleague Bruce Director poked into a real hornets’ nest, when he asked: What makes people so susceptible to the kinds of frauds now perpetrated routinely by the mass media? Is there something {sinister} involved, a vulnerability inside the minds of our fellow-citizens, that leads them to desire a world {uncomplicated} by the primacy of {nonlinear curvature in the small}?

What, {sinister}? You surely don’t mean the ordinary, simple folk, do you? The poor innocent people who are being lied to, abused, ripped off, tormented, destroyed by the oligarchy? The ones who are “just trying to get along and raise their families?” The “noble savages” of modern times, those honest, unassuming folk who nobly desire nothing more than to eat and sleep and watch their favorite TV sports, undisturbed by the world’s problems — the which, after all, they did not create? Aren’t they so homely and nice? Don’t they have legitimate grievances? Their lives are dull, boring, oppressive, even unbearable. And yet if you try to organize them, if you try to {change} them, you find they can become {very unpleasant}, very nasty indeed! Beneath their anarchistic, individualist exteriors, they are often pathologically, fanatically attached to their identity as “simple-minded, ordinary folk.” Their minds seem to repel the effort at thinking outside the tight circles of so-called “practical life.”

“Explain it in terms I can understand.” “Give me the bottom line.” “Don’t make things complicated.” “Don’t bother me with history and all that other fancy stuff.” “I know what you are saying. But don’t you realize I have to make a living?”

And yet, after hundreds of millennia of human development, can there be any excuse to remain “simple folk”? To be ignorant of the work of past thinkers, to be indifferent to the great drama of history and the fate of entire civilizations, nations and cultures?

A beautiful thing is, that oligarchism is {doomed}. Why doomed? Because oligarchism is implicitly a type of {physics}; and as physics, oligarchism is {demonstrably false}. The demonstration is at the same time proof of the anti-entropic character of our Universe, a Universe which has no more place for inert “hard balls” of Newton’s fancy, than it could long tolerate such abominations as the “sleepy South” where “each person knows his place” and “it’s always been like this and always will be.”

The following series is designed quite literally to cast light on this problem. We shall focus on a celebrated experimental discovery by Ampere’s closest friend and collaborator, Augustin Fresnel, which overthrew once and for all the attempts by Laplace and others to impose Newtonianism on all of natural science. Fresnel demonstrated that the propagation of light, while strictly lawful, is not “simple” at all. Following Huygens and anticipating Ampere’s closely-related demonstration of the so-called “angular force” in electrodynamics, Fresnel showed conclusively that the notion of a straight-line propagation of light breaks down in the “very small” — at the level of definite, irreducible wavelengths of the order of thousandths of a millimeter. In fact, there is no smooth, “straight-line” action anywhere to be found in the propagation of light! Behind the gross appearance of (approximately) straight light-rays, is a multiply-connected, spherically-bounded rotational process which is everywhere dense in singularities. What a wealth of activity, concealed beneath a “simple” exterior!

Fresnel’s demonstrations at the same time became the basis for a revolution in machine-tool design. In anticipation of what we shall rediscover in the following couple of weeks, the reader should ponder the following question, for example: How is it possible, using instruments machined to a precision of, say, millimeters, to carry out precise measurements at scales more than a thousand times smaller? Not in a linear Universe!

By juxtaposing Fresnel’s work to the preceding optical discoveries of Leonardo, Kepler, Fermat and Huygens, we obtain a glimpse of the transfinite nature of physical action — a nature which is incomprehensible to the simple-minded, because it embodies not only already-discovered physical principles, but also those which are yet-to-be-discovered and yet in a sense already “present”. Those principles are not predicates of light as an isolated, supposedly “objective” physical entity, but pertain to Man’s relationship with the Universe as a whole.

And so our study may illuminate some secrets of the human mind itself, and suggest joyful means by which “simple folk” might be uplifted from oligarchical darkness.

Transfinite Principle of Light, Part II: The Saga of the “Poisson Spot”

by Jonathan Tennenbaum

We are in Paris, at the high point of the oligarchical restoration in Europe, the period leading up to and following the infamous, mass-syphilitic Congress of Vienna. Under the control of Laplace, the educational curriculum of the famous Ecole Polytechnique is being turned upside-down, virtually eliminating the geometrical-experimental method cultivated by Gaspard Monge and Lazare Carnot and emphasizing mathematical formalism in its place. The political campaign to crush what remained of the republican faction at the Ecole Polytechnique reaches its high point with the appointment of the royalist Augustin Cauchy in 1816, but the methodological war had been raging since the early days of the Ecole.

With Napoleon’s rise to power and the ensuing militarization of the Ecole in 1799, Laplace’s power in the Ecole was greatly strengthened. At the same time, Laplace consolidated a system of patronage with which he and his friends could exercise increasing control over the scientific community. An important instrument was created with the Societe d’Arcueil, which was founded in 1803 by Laplace and his friend Berthollet and financed in significant part from the pair’s own private fortunes. Although the Societe d’Arcueil supported some useful scientific work, and its members included Chaptal, Arago, Humboldt and others in addition to Laplace and his immediate collaborators (such as Poisson and Biot), Laplace made it the center of an effort to perfect a neo-Newtonian form of mathematical physics in direct opposition to the tradition of Fermat, Huygens and Leibniz. In contrast to the British followers of Newton, whose efforts were crippled by their own stubborn rejection of Leibniz’ calculus, Laplace and his friends chose a trickier, delphic tactic: use the superior mathematics developed from Leibniz and the Bernoullis, to “make Newtonianism work.”

Poisson, whose appointment to the Ecole Polytechnique had been sponsored by Laplace and Lagrange, worked as a kind of mathematical lackey in support of this program. He was totally unfamiliar with experimental research, and had been judged incompetent as a draftsman in the Ecole Polytechnique. But he possessed considerable virtuosity in mathematics, and there is a famous quote attributed to him: “Life is good for only two things: doing mathematics and teaching it.” An 1840 eulogy of Poisson gives a relevant glimpse of his personality:

“Poisson never wished to occupy himself with two things at the same time; when, in the course of his labors, a research project crossed his mind that did not form any immediate connection with what he was doing at the time, he contented himself with writing a few words in his little wallet. The persons to whom he used to communicate his scientific ideas know that as soon as he had finished one memoir, he passed without interruption to another subject, and that he customarily selected from his wallet the questions with which he should occupy himself.”

In the context of Laplace’s program, Poisson was put to work to elaborate a comprehensive mathematical theory of electricity on the model of Newton’s Principia. Coulomb had already proposed to adapt Newton’s “inverse square law” to the interaction of hypothetical “electrical particles”, adding only the modification that like charges repel and opposite charges attract — the scheme which is preserved in today’s physics textbooks as “the Coulomb law of electrostatics”. Poisson’s 1812 Memoire on the distribution of electricity in conducting bodies was hailed as a great triumph for Laplace’s program and a model for related efforts in optics.

Indeed, between 1805 and 1815 Laplace, Biot and (in part) Malus created an elaborate mathematical theory of light, based on the notion that light rays are streams of particles that interact with the particles of matter by short-range forces. By suitably modifying Newton’s original “emission theory” of light and applying superior mathematical methods, they were able to “explain” most of the known optical phenomena, including the effect of double refraction which had been the focus of Huygens’ work. In 1817, expecting soon to celebrate the “final triumph” of their neo-Newtonian optics, Laplace and Biot arranged for the physics prize of the French Academy of Sciences to be proposed for the best work on the theme of <diffraction> — the apparent bending of light rays around the edges of obstacles and into the region of shadow.

In the meantime, however, Augustin Fresnel, supported by his close friend Ampere, had enriched Huygens’ conception of the propagation of light by the addition of a <new physical principle>. Guided by that principle — which we shall discover in due course — Fresnel reworked Huygens’ envelope construction for the self-propagation of light, taking account of distinct <phases> within each wavelength of propagational action, and the everywhere-dense interaction (“interference”) of different phases at each locus of the propagation process.

In 1818, on the occasion of Fresnel’s defense of his thesis submitted for the Academy prize, a celebrated “show-down” occurred between Fresnel and the Laplacians. Poisson got up to raise a seemingly devastating objection to Fresnel’s construction: If that construction were valid, a <bright spot> would have to appear in the middle of the shadow cast by a spherical or disk-shaped object, when illuminated by a suitable light source. But such a result is completely absurd and unimaginable. Therefore Fresnel’s theory must be wrong!

Soon after the tumultuous meeting, however, one of the judges, Francois Arago, actually did the experiment. And there it was — the “impossible” bright spot in the middle of the shadow! Much to the dismay of Laplace, Biot and Poisson, Fresnel was awarded the prize in the competition. The subsequent work of Fresnel and Ampere sealed the fate of Laplace’s neo-Newtonian program once and for all. The phenomenon confirmed by Arago goes down in history with the name “Poisson’s spot,” like a curse.
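Readers who wish to check the paradox for themselves, before we work through it in later discussions, can do so numerically. What follows is only a minimal sketch, and emphatically not Fresnel’s own derivation: it evaluates the on-axis Fresnel integral over the region outside an opaque disc, with every numerical value (wavelength, disc radius, distances, and the Gaussian taper that stands in for a finite illuminating aperture) chosen purely for illustration. The computed intensity at the exact center of the shadow comes out nearly equal to that of the unobstructed wave: the “impossible” bright spot.

    import numpy as np

    # Illustrative parameters (assumptions, not the historical values):
    wavelength = 633e-9   # metres
    a = 2e-3              # radius of the opaque disc, metres
    z = 1.0               # disc-to-screen distance, metres
    w = 2e-2              # Gaussian taper width, a stand-in for a finite aperture
    k = 2 * np.pi / wavelength

    def on_axis_amplitude(r_inner):
        """On-axis Fresnel integral over the open annulus rho >= r_inner."""
        rho = np.linspace(r_inner, 4 * w, 2_000_001)
        f = rho * np.exp(1j * k * rho**2 / (2 * z)) * np.exp(-(rho / w) ** 2)
        return np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(rho))  # trapezoid rule

    spot = abs(on_axis_amplitude(a)) ** 2 / abs(on_axis_amplitude(0.0)) ** 2
    print(f"intensity at shadow center / unobstructed intensity: {spot:.3f}")
    # prints ~0.980: the center of the shadow is nearly as bright as open air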

We shall work through the essentials of these matters in subsequent pedagogical discussions and demonstrations. But before proceeding further it is necessary to insist on some deeper points, which some may find uncomfortable or even shocking. Without attending to those deeper matters, most readers are bound to misunderstand everything we have said and intend to say.

It is difficult or even virtually impossible, in today’s dominant culture, to relive a scientific discovery, without first clearing away the cognitive obstacles reflected in the tendency to reject, or run away from, the essential <subjectivity> of science. Accordingly, as a “cognitive IQ test” in the spirit of Lyn’s recent provocations on economics, challenge yourself with the following interconnected questions:

1) Identify the devastating, fundamental fallacies behind the following, typical textbook account:

“There were two different opinions about the nature of light: the particle theory and wave theory. Fresnel and others carried out experiments which proved that the particle theory was wrong and the wave theory was right.”

2) Asked to explain the meaning of “hypothesis” a student responds:

“An hypothesis is a kind of guess we make in trying to explain something whose actual cause we do not know.”

Is this your concept? Is it right?

3) What is the difference between what we think of as a property of some object, and a physical principle? Why must a physical principle, insofar as it has any claim to validity, necessarily apply to all processes in the Universe, <without exception>?

If you encounter any difficulty in answering the above, reread Lyn’s “Project A.”

Next week: Leonardo and the paradox of the “camera oscura.”

Transfinite Principle of Light, Part III: The Phantom of Linearity

By Jonathan Tennenbaum

Look at Leonardo’s drawings of rays of light reflected in a curved mirror. Leonardo draws the incoming rays as parallel straight lines. Reflected off the mirror, the rays form an envelope — a curve that Leibniz’s friend Tschirnhaus later called a {caustic}. Looking at the drawing, we might think to ourselves: “Here Leonardo has shown how the complex is generated by the simple. See how this beautiful curve, the caustic, is created from the simple, straight-line rays, which are the natural, the elementary form of light propagation.”

But, stop to think: Did Leonardo really think that way? Did he believe that straight-line action is primary, and curved forms are secondary? Was Leonardo a Newtonian?

Or have we gotten it backwards? Did Leonardo see, in the production of the caustic, a characteristic manifestation of the {fundamentally non-linear, high-order process} underlying light — a process which generates the appearance of straight-line rays as a mere {effect}?

Looking more carefully at Leonardo’s manuscripts with our mind’s eye wide open, the evidence jumps out at us. Indeed, Leonardo even states it explicitly: The propagation of water waves, sound and light alike is based on a {common principle of action}; that principle is not straight-line action, but curved, (to a first approximation) circular action!

Leonardo implies, in fact — as he demonstrated for the case of water waves — that the {action} which generates the outward propagation of light from a source, is {not} basically directed in the “forward” direction, i.e., outward from the source, but essentially perpendicularly, {transverse} to the apparent direction of propagation!

Now let’s turn to the contrary, so-called “emission theory” which is commonly attributed to Newton (although much older), and which he elaborated in Book III of his famous “Opticks”. Newton writes, for example: “Are not the rays of light (streams of) very small bodies emitted from luminous substances? For such bodies will pass through uniform media in straight lines without bending into the shadow, which is the nature of rays of light.” Newton adds many other arguments, which I shall not reproduce here.

Doesn’t this picture indeed seem very agreeable to our naive imagination? Indeed, someone might plausibly argue that: 1) since light evidently moves outward from the source in straight lines and 2) since no motion is possible without some material bodies which are moving, therefore 3) the light rays must consist either of material particles (photons?) or maybe a continuous fluid emitted from the source and moving outward from it.

And how to account for the {bending} or change of direction (refraction) of light rays, when they pass from one medium to another (e.g., from air to water) or through a medium of changing density? Simple! Since the “natural” or elementary motion is straight-line motion, the bending of the trajectories of the particles forming the rays, must be due to some “forces”, which are pulling the rays (or the particles making up the rays) out of that straight motion, into curved trajectories. What could be more self-evident than that?

Newton actually provides a program for elaborating this emission theory more and more: By studying the laws of refraction of light rays, and other aspects of their behavior in passing through various materials, we should {deduce}, by mathematics, the microscopic forces which must be acting upon the light particles in interaction with the medium. And then from those “force laws”, once established, we will in turn be able to calculate the behavior of light rays under arbitrary conditions.

Newton puts his own work on gravitation and planetary motion forward as the model for this, stating, in the famous “General Scholium” from Philosophiae Naturalis Principia Mathematica:

“Hitherto we have explained the phenomena of the heavens and of our sea by the power of gravity, but we have not yet assigned the cause of this power…. I have not been able to discover the cause of those properties of gravity from phenomena, and I frame no hypotheses; for whatever is not deduced from the phenomena is to be called a hypothesis, and hypotheses, whether metaphysical or physical, whether occult qualities or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena and afterwards rendered general by induction. Thus it was that the impenetrability, the mobility and the impulsive force of bodies, and the laws of motion and of gravitation, were discovered. And to us it is enough that gravity does really exist and act according to the laws that we have explained…”

This same argument was repeated by the Marquis de Laplace, the self-proclaimed high priest of Napoleon’s “orthodox Newtonianism”, in an 1815 attack on the early work of Fresnel. Laplace said that in view of the “success” of Newton’s emission theory, he greatly regretted that anyone would presume

“to substitute for it another, purely hypothetical one, which, so to speak, can be manipulated at will: that of Huygens’ undulations. One must limit oneself to repeating and varying experiments and deducing laws from them, that is, coordinating facts, and avoid any undemonstrated hypothesis.”

But did you pick up the “big lie” which Newton told in the passage cited above? Don’t let him get away with it!

Newton claims, among other things, that his law of gravitation was “deduced from the phenomena”, without the use of hypothesis. That is a bald-faced lie. As even Laplace admits, Newton obtained his “force law” by inverting Kepler’s construction for the elliptical orbital motion of the planets. But Kepler’s construction was by no means deduced from the visible motion of the planets; indeed, what could anyone “deduce” from the wild, tangled mass of looping motions of the planets, as seen from the Earth? Rather, Kepler arrived at his results step-by-step through a series of {creative hypotheses} — by cognition! — as documented by Kepler himself in his works, from the Mysterium Cosmographicum through to the New Astronomy. Even Newton’s so-called force law is no deduction from Kepler’s work, but was obtained only by imposing a whole array of {arbitrary assumptions} which are neither in Kepler, nor “deduced from the phenomena”, nor otherwise demonstrated in any way. Such, for example, are the hypotheses that space has the form of a simple Cartesian manifold, and that straight-line action is elementary.

Now, step back from the specifics of this “big lie” and ask yourself: Why are so many people, even scientists, fooled so much of the time? Could it be, because the supposed elementarity of straight-line action is merely a lawfully-generated, externalized {image} or artifact of a defective form of mental processes?

Exclude {cognition} from mental processes. What is the typical form of action in the “mental vacuum” so created? The characteristic of deduction, as the “elementary” form of non-cognitive reasoning, is that no cognitive considerations are permitted to disturb the “perfect vacuum” in which the deductive chains of logical premises and conclusions are unfolded. No “leakage” of reality from outside the system, which could call its basic assumptions into question, is permitted to interfere with the growth of the theorem-lattice.

Now look, from this standpoint, at what Riemann had to say about Newton’s famous “First Law of Motion”:

“I find the distinction that Newton makes, between laws of motion, axioms and hypotheses, untenable. The law of inertia is an hypothesis: If a material point were all alone in the Universe, and if it were moving with a certain velocity, then it would keep moving with the same unchanged velocity”.

Now here comes a simple-minded fellow, and says to himself: “Well, isn’t that First Law self-evident? After all, {if there were nothing around in the Universe} to interfere with the particle’s motion, then nothing would change that motion, either in direction or in speed. Since there would be no reason for it to bend in one direction rather than another, or to slow down or speed up, the particle would keep moving at a constant velocity in a straight line.” So, in particular, straight-line motion is elementary!

What happened? With his logical premise of a Universe consisting of nothing but a single particle alone in an infinitely extended empty space, our simple-minded fellow has thrown cognition (and the real Universe!) out of the window. He has put himself into a wildly arbitrary phantasy-world; and now proposes, as Newton did, to make that phantasy-world into his yardstick for the real Universe!

If we dig a bit deeper, our fellow might come up with another logical idea: the simple precedes the complex, so to understand the complicated real Universe, we have to break it down into simple parts, into simple hypothetical situations. Then we can deduce the complex situations from the simpler ones. But what if the supposed “simple parts” don’t exist and could not exist in and of themselves? What if the only “simple” existence were the indivisible unity of the Universe as a whole, a Universe graspable only by cognition? But cognition is not simple in the way our vacuum-headed fellow imagines rational thinking to be.

From this it should be obvious, that the issue fought out by Fresnel and Ampere against Laplace, by Kepler against Galileo, by Leibniz against Newton and so forth, is not one of this or that theory or doctrine. It is emphatically not the so-called wave theory versus the particle theory. The issue, as emphasized in Plato’s Parmenides, is the human mind.

Ask yourself: what is the transverse nature of the action, upon which the physical growth of any economy is based?

Transfinite Principle of Light, Part IV: Least Time

by Jonathan Tennenbaum

In last week’s pedagogical discussion, Phil Rubenstein provoked us with a beautiful glimpse into Leibniz’s notion of physical space-time, observing that:

“[T]he totality of space is altered when an action introduces something incompatible to the previous ordering, and that is what introduces real time as changed space. Thus, all of the space-time is truly changed and the primacy of facts is altered.”

Most of us have been trained or otherwise induced to think of events in terms of an implicitly fixed ordering of the Universe. When an event occurs, we too often only ask ourselves: “Where does this event fit into the scheme of the world as I know it?” or “What category does it belong to?” Whereas Phil (following Leibniz) wanted to get us to look out for the anomalous characteristics of an event, and to ask ourselves, instead: “What is the change in ordering of the world, which this anomaly implies?” Or even better: “How does this event open up a potential flank, by which I might change the current ordering of the world into a better one?”

As Phil also pointed out, the two modes of thought are associated with two very different notions of causality. In the first, we put our noses close to the ground and follow events one at a time, in chains of “cause-and-effect.” So, A causes B, B causes C, C causes D and so on like a chain of dominoes, each falling over and pushing the next one in turn. If someone asks, “Why did event X occur?”, our answer will be: “Because W occurred, and W caused X.” And W occurred because of V, V because of U and so forth ad infinitum (or until we find the guy who pushed over the first domino, Aristotle’s “Prime Mover”!). But the platonic mind would rather ask: “Who arranged the dominoes that way, so that the trajectory of apparent cause-and-effect took that particular form?”

When we raise ourselves to the second, higher level, we look for those crucial actions and events, that define the {total geometry} (i.e. ordering) within which entire ranges of other events occur, take a certain form, and tend toward a pre-determinable array of outcomes. This latter standpoint is congruent with Kepler’s conception of a planetary orbit and brings us to Leibniz’ notion of {sufficient reason}. So, referring in his “Principles of Nature” to the higher (transfinite) ordering of the Universe as a whole, Leibniz said:

“The sufficient reason for the Universe cannot be found in the sequence of contingent events…. Since the present motion of matter comes from the preceding, and that one from an earlier still, one never comes closer to the answer, however far one goes, because the question always remains. Thus it is necessary that the sufficient reason, which does not require another reason, {lies outside this series of contingent events}, and this must be sought in a substance which is the cause, and is a necessary being … this last reason of things is God.”

A beautiful example of the two conflicting outlooks is provided by Pierre Fermat’s discovery of the Principle of Least Time on the basis of what he called “my method of maxima and minima.” [fn1] This example is all the more notable, as Leibniz himself used it repeatedly in his polemics against Descartes and the Cartesians.

To set the stage, I should report that around 1621 the Dutch astronomer Snell (who also made major contributions to geodesy) studied the bending of light rays when passing from one medium (for example, air) into another medium (say, water). In each of the two media, insofar as they are relatively homogeneous, the propagation of light appears to occur along straight-line pathways. But it had long been recognized, that light entering from air into water at a certain angle, propagates at a different, much steeper angle inside the water. Now Snell studied the functional relationship between the angle (call it X) which the ray makes to the vertical {before} entering the water, and the angle (Y) which is formed with the vertical by the direction of the ray {after} it has passed into the water. He discovered a very simple relationship, which holds quite precisely within certain limits: namely that the {sines} of the two angles are {proportional} to each other. To make these relationships clear, draw the following “classical” diagram, which Leibniz, Fermat et al. employed in their discussions of these matters.

Let a line segment AB represent the surface of the water and let point C represent the locus on AB where the ray of light enters the water. Draw a circle around C. Mark by “D” the point on the upper half of the circle (the part in the air), at which the light ray enters the circle on the way to C, and mark by “E” the locus at which the ray, now propagating in water, crosses the lower half of the circle. The line segments DC and CE represent the directions of the light ray before and after passing from air into water. Now draw the vertical line L through C. The angle between DC and L, is what we called X above, and the angle between CE and L is Y.

Finally, project D and E horizontally (i.e. perpendicularly) onto L, defining two points F and G which are the projections of D and E, respectively, onto the vertical L. (DF and EG are proportional to the {sines} of the angles X and Y.)

Now imagine we vary the angle at which the ray enters the water, while keeping the entry point C fixed. In other words, D moves along the upper part of the circle and the angle X changes correspondingly. What happens to angle Y and the position of E?

Snell found that in the course of these changes, {the ratio of DF to EG remains constant}. For the case of air and water, it turns out that DF:EG = (approximately) 1.33 : 1. From this, we can determine the angle Y corresponding to any given angle X, by a simple geometrical construction.
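Readers with a computer at hand might tabulate this construction for themselves. The following is a minimal sketch, assuming only the constant ratio 1.33 quoted above; the sample angles are arbitrary.

    import math

    RATIO = 1.33  # Snell's constant DF:EG for light passing from air into water

    def angle_in_water(x_degrees, ratio=RATIO):
        """Given the angle X to the vertical in air, return the angle Y in
        water, from Snell's proportion sin(X) = ratio * sin(Y)."""
        return math.degrees(math.asin(math.sin(math.radians(x_degrees)) / ratio))

    for x in (10, 30, 50, 70):
        print(f"X = {x:2d} deg  ->  Y = {angle_in_water(x):5.2f} deg")

Note how Y grows ever more slowly as X approaches 90 degrees; the reader can confirm the same behavior directly on the diagram.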

But what is the explanation of this relationship, its “sufficient reason”? Leibniz himself was convinced that Snell did not find his law by mere empirical trial-and-error, but that he worked from an {hypothesis} derived from the work of the ancient Greek scientists who had discovered an analogous (but simpler) law for the {reflection} of light over 1500 years earlier. While Snell’s original train of thought seems to have been lost, Rene Descartes later (1637) restated the same law, which he claimed to have discovered by himself, and offered an explanation or “proof” based on his own special notion of physics and the nature of light.

Descartes’ argument, as published for example in his “Dioptrique,” is somewhat muddled and difficult to present in a few words. Essentially, Descartes likened the motion of light to that of a small ball or other object which encounters greater or lesser resistance along the path of its motion. The circumstance, that the light ray is bent toward the vertical direction on passing into the water — i.e. becomes “steeper” in its passage through the water — Descartes took as evidence that the {light moves more easily through the water} and is less retarded in its motion, than in the air. At the point of transition into the “easier” medium of the water, Descartes thought, it is as if the ball (the light) would pick up an extra “kick”, continuing at a steeper direction.

Now, disregarding the vagueness and confusing nature of Descartes’ argument, his thinking is clearly trapped in what we referred to above as the first mode: namely to follow a process from one step to the next within a fixed notion of ordering, which is (in Descartes’ case) essentially the naive housewives’ “common sense” notion of the motion of material bodies.

Now in closing, let us listen to what Fermat has to say, in his “Method for the Research of the Maximum and Minimum”:

“The learned Descartes proposed a law for refractions which is, as he says, in accordance with experience; but in order to demonstrate it he employed a postulate, absolutely indispensable to his reasoning, namely that the propagation of light takes place more easily and faster in more dense media than in more rarefied media; however, this postulate seems contrary to natural light.”

[“Natural light” was a common expression for “Reason”. Fermat is poking fun at Descartes. He continues:]

“While seeking to establish the true law of refraction on the basis of the contrary principle — namely that the movement of light is easier and faster in the less dense medium than in the more dense one — we arrived at exactly the law that Descartes had announced. Whether it is possible to arrive at the same truth by two absolutely opposing methods, that is a question we will leave to those geometers to consider, who are subtle enough to resolve it rigorously; for, without entering into vain discussions, it is enough for us to have certain possession of the truth, and we consider that preferable to a further continuation of useless and illusory quarrels.

“Our demonstration is based on the single postulate, that Nature operates by the most easy and convenient methods and pathways — as it is in this way that we think the postulate should be stated, and not, as usually is done, by saying that Nature always operates by the shortest lines … We do not look for the shortest spaces or lines, but rather those that can be traversed in the easiest way, most conveniently and in the shortest time.”

Next week we shall look more closely, through the eyes of Leibniz, at Fermat’s discovery and the error of Descartes.

————————————————————

1. Here is a deliberately challenging quote from a 1636 letter by Fermat to Roberval, in which he boasts about the scope of his method:

“On the subject of the method of maxima and minima … you have not seen the most beautiful applications; because I make it work by diversifying it a bit. Firstly, in order to invent propositions similar to that of the (parabolic) conoid which I told you about last; 2) In order to find the tangents of curved lines…; 3) To find the centers of gravity for all sorts of figures…; 4) To solve number theoretic problems … it is in this… that I found an infinity of numbers which do the same thing as 220 and 284, namely that the sum of the divisors of the first equals the second and the sum of the divisors of the second equals the first; and if you want another example to give you a taste of the question, take 17296 and 18416. I am sure you will admit that this question and those of the same sort are very difficult…. And so you see four kinds of questions which my method embraces, which you probably didn’t know about.”
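The amicable pairs Fermat names here are easy to verify by brute-force computation, though of course Fermat himself worked from his factorization identities, not from a search. A minimal sketch:

    def proper_divisor_sum(n):
        """Sum of the divisors of n, excluding n itself (n > 1 assumed)."""
        total, d = 1, 2
        while d * d <= n:
            if n % d == 0:
                total += d + (n // d if n // d != d else 0)
            d += 1
        return total

    for a, b in [(220, 284), (17296, 18416)]:
        amicable = proper_divisor_sum(a) == b and proper_divisor_sum(b) == a
        print(a, b, "amicable:", amicable)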

Transfinite Principle of Light, Part V: Time To See the Light

by Bruce Director

Last week, you were introduced to a paradigmatic case of a discovery of a universal principle, Fermat’s principle of “Least Time.” Contrary to textbook-educated commentators, Fermat’s Least Time principle, is not a property of light. Rather, it is a characteristic of the Universe, from which light’s properties unfold. The irony is, that this universal characteristic of Least Time, is discovered in its unfolded form, but only KNOWN as a universal principle. For that reason, it epitomizes the discovery of a principle that corresponds to a change in hypothesis from an n- to an n+1-fold manifold, connected with a corresponding change from an m- to an m+1-fold manifold. Consequently, it deserves your careful attention and study.

To summarize: the Classical Greeks had already discovered a special case of this principle, through the investigation of reflected light (catoptrics)/1. The Greeks found that the angle at which light is reflected from a shiny surface, is equal to the angle at which the light strikes that surface. Simply stated, the angle of incidence equals the angle of reflection. The equality of these angles, minimizes the length of the path from the source of the light, to the reflecting surface, to the eye. However, this principle is NOT a property of light. It is a manifestation of a universal characteristic: that nature always acts along the shortest path.
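That the equality of the angles is equivalent to the shortest path can be made tangible numerically. In the following minimal sketch (the positions of source and eye are arbitrary illustrative values), we minimize the total path length over all candidate reflection points on the mirror, and the equality of the two angles emerges of itself:

    import numpy as np

    # Source A and eye B above a mirror lying along the line y = 0
    # (coordinates are illustrative assumptions).
    A = (0.0, 1.0)
    B = (4.0, 2.0)

    xs = np.linspace(0.0, 4.0, 400_001)  # candidate reflection points
    path = np.hypot(xs - A[0], A[1]) + np.hypot(B[0] - xs, B[1])
    x_min = xs[np.argmin(path)]

    incidence = np.degrees(np.arctan2(x_min - A[0], A[1]))
    reflection = np.degrees(np.arctan2(B[0] - x_min, B[1]))
    print(f"incidence {incidence:.2f} deg, reflection {reflection:.2f} deg")
    # both come out ~53.13 deg: equal angles, shortest path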

The phenomenon of refraction (the change in direction of light when it travels from one medium to another, such as from air to water), appears, at first, to contradict this universal characteristic, as the change in direction at the boundary between the two media, makes the path of the light longer, than were it to continue in the same direction across the boundary.

More than one and one-half millennia after the Classical Greek period, Willebrord Snell showed that when light is refracted, the change in direction is such that the sine of the angle of incidence and the sine of the angle of refraction are always in constant proportion. (See last week’s pedagogical.) The Greek principle of reflection (in which this proportion is one, as equal angles will have equal sines), can thus be seen as a special case, or boundary, of Snell’s more universal principle. Yet, the length of the path of the light under refraction, is still not the shortest path, as in the case of reflection.

While the details of Snell’s reasoning are not entirely known to us, it had been conjectured that the observed refraction resulted from a change in the velocity when light travels through different media./2 Under this idea, it can be shown that the different velocities are in the same proportion as the sines of the angles of incidence and refraction. Or, in other words, Snell’s law of refraction is itself a reflection of a physical principle: that the velocity of light changes when traveling in different media. (In his “Treatise on Light,” Huygens has a simple and direct geometrical demonstration of this concept, to which the reader is referred.)

Descartes, believing that light was a stream of particles, adopted the conjecture that such particles would travel faster in denser media. From this, he reformulated Snell’s law and claimed it as his own, a fraud so blatant that even Descartes’ apologists no longer can defend it.

Pierre de Fermat adopted the opposite view, that light traveled slower in denser media. But, much more importantly, Fermat came to this idea, not by conjecturing on the properties of light, as Descartes did, but from the standpoint of a new universal principle that he hypothesized: to wit, that nature always acts according to the least time. That is, that the longer path the light travels when refracted, is actually the path that takes the shortest time. From the standpoint of the earlier Greek discovery of reflection, the universal principle that nature seeks the shortest path in space, has been transformed into the principle of shortest path in space-time. A transformation from a universal hypothesis of n dimensions, to a universal hypothesis of n+1 dimensions. (Hypothesis is used here in the rigorous Socratic terms defined by LaRouche, not the banalized general usage concept more closely equated with the verb “to guess.”)

Or, in the words of Fermat, quoted in last week’s pedagogical discussion:

“Our demonstration is based on the single postulate, that Nature operates by the most easy and convenient methods and pathways — as it is in this way that we think the postulate should be stated, and not, as usually is done, by saying that Nature always operates by the shortest lines…. We do not look for the shortest spaces or lines, but rather those that can be traversed in the easiest way, most conveniently and in the shortest time.”
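Before turning to Leibniz, the reader can make Fermat’s postulate tangible in the same numerical fashion as the reflection sketch above. Here we minimize the travel time, not the path length, over all candidate crossing points of the air-water boundary; the endpoints are arbitrary illustrative values, and the velocity ratio 1.33 is the one quoted in last week’s discussion. The ratio of the sines of the resulting angles comes out equal to the ratio of the velocities, which is exactly Snell’s constant:

    import numpy as np

    v_air, v_water = 1.0, 1.0 / 1.33  # water slower, per Fermat's postulate
    A = (0.0, 1.0)                    # source in the air (y > 0), illustrative
    B = (1.0, -1.0)                   # target in the water (y < 0), illustrative

    xs = np.linspace(0.0, 1.0, 1_000_001)  # candidate crossings of y = 0
    travel_time = (np.hypot(xs - A[0], A[1]) / v_air
                   + np.hypot(B[0] - xs, B[1]) / v_water)
    x_star = xs[np.argmin(travel_time)]

    sin_x = (x_star - A[0]) / np.hypot(x_star - A[0], A[1])
    sin_y = (B[0] - x_star) / np.hypot(B[0] - x_star, B[1])
    print(f"sin X / sin Y = {sin_x / sin_y:.3f}")  # ~1.330 = v_air / v_water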

Leibniz in his “Discourse on Metaphysics,” addresses this question this way:

“The method of final causes is more easy and can often be used to divine important and useful truths, which one would have to seek for a long time by a more physical approach, for which anatomy provides major examples. Thus I believe that Snell, who is the first discoverer of the laws of refraction, would have had to spend a long time finding them, if he had started by first trying to find out how light is formed. Rather, he apparently followed the method which the ancients used in catoptrics, which is based on final causes. By looking for the easiest pathway to pass light from a given point to another given point by reflection on a given plane (supposing this is the intention of Nature), the ancients found the equality of the angles of incidence and reflection, as one can see from a little treatise of Heliodorus of Larissa, and elsewhere. Which is what Snell, as I believe, and after him (although without knowing anything of him) Fermat applied very ingeniously to refraction…. And the proof which Descartes wanted to give for the same theorem, by the method of efficient causes, would need much improvement to be as good. At least, there is reason to suppose that Descartes would never have discovered the law in that way, unless he had learned something in Holland about Snell’s discovery.”

“Descartes thought the opposite of what we think concerning the resistance of various media (to the propagation of light). That is why the very illustrious Spleissius, a man well versed in these matters, has no doubt that Descartes, when he was in Holland, saw Snell’s theorem; and in fact he remarks that Descartes had the habit of omitting mention of authors, and takes as an example the vortices in the Universe which Giordano Bruno and Johannes Kepler pointed to, in such a way that only the word itself was missing in their work. It happens that Descartes, in order to prove his theorem by his own efforts… From which Fermat correctly concluded that Descartes had not given the real reason for his theorem.”

The Cartesians, Galileans, and the whole plethora of Aristotelian-Manichean sects squealed with rage at Fermat’s principle of Least Time. How could Fermat say that light sought the shortest time? Why, that would mean that either light would have to have some “intelligence” by which to “decide” whether its choice of path was using up the shortest time, or there would have to be some pre-arranged “track,” like Ptolemy’s solid orbs, that guided the light along the shortest path.

These objections are identical to those raised against Kepler, who demonstrated that the elliptical planetary orbits, rather than uniform circular ones, are the pathways that correspond to the universal space-time characteristic of the solar system. Kepler dethroned Ptolemy’s demi-gods and solid orbs, along with the poly-copulating Olympians, from whom Ptolemy and his fellow Bogomils, drew their authority.

Taking up the defense of Fermat’s principle, Leibniz dealt the decisive blow to the Cartesians:

“…Thus we have reduced to pure Geometry all of the laws which confirm experimentally the behavior of light rays, and have established their calculus on the basis of a unique principle, that you can grasp following a specific causality, but providing you consider appropriately the case in point: indeed, neither can the ray coming from C make a decision [1] about how to arrive, by the easiest way possible, at points E, D, or G, nor is this ray self-moving towards them [2]; on the contrary, the Architect of all things created light in such a way that this most beautiful result is born from its very nature. That is the reason why those who, like Descartes, reject the existence of Final Causes in Physics, commit a very big mistake, to say the least; because aside from revealing the wonders of divine wisdom, such final causes make us discover a very beautiful principle, along with the properties of such things whose intimate nature is not yet that clearly perceived by us, that we can have the power to explain them, and make use of their efficient causes, along with their artifacts, such as the Creator employed them in order to produce their results, and to determine their ends. It must be further understood from this that the meditations of the ancients on such matters are not to be taken lightly, as certain people think nowadays.”

Reflect on that, until next week.

1/ The history of these Greek investigations deserves careful study by us, as its treatment in textbooks is vague and confusing. For pedagogical purposes, and for posterity’s sake, it needs to be pulled together by someone wanting to do a service to humanity.

2/ This is also an area of historical research which is necessary for us to fill out.

Transfinite Principle of Light, Part VI: Passion and Hypothesis

by Jonathan Tennenbaum

There is a tendency for people to misconstrue and banalize ad absurdum the polemic Lyn has developed about the need to change fundamental assumptions. Some think to themselves: “Lyn says that assumptions are bad. So I’ll play it safe. I won’t make any assumptions at all.”

This wimpy attitude, already strong among baby-boomers, is even more pronounced among Generations X and Y. These people have resolved never to commit themselves fully to anything, never to make a strong emotional investment, never to make a decision which might irreversibly change their lives: “No, no, I don’t go there” is the motto. Their policy is to “keep all the doors open,” particularly the back doors through which to escape when the going gets too tough.

Ironically, no behavior demonstrates the influence of hidden ontological assumptions more clearly, than the obsessive, schlemiel-like behavior of people trying to “play it safe,” hiding behind an illusion of “objectivity,” “sticking to the facts,” and “playing according to the rules.” Whereas today the very survival of the world depends on {strong hypotheses} — hypotheses discovered, transmitted, and executed with the most impassioned quality of moral commitment.

So, Schiller said, he who would not give up his life, will not gain it. It is impossible to make or relive a scientific or equivalent quality of creative discovery without risk, without sacrificing some cherished thing inside oneself and even confronting something akin to the fear of death.

As an example, let us listen to Brahms’ student Gustav Jenner, as he describes how Brahms forced him through the agonizing process of knowing, as opposed to superficial learning. Jenner recounts his first encounter with Brahms. Personally, Brahms was very kind and friendly to the budding young composer. But when it came to criticizing the compositions Jenner had put in front of him — naturally the ones Jenner was most proud of — Brahms’ remarks were devastating:

“After it was all over, I felt like someone who, after wandering long on a false path, thinks his goal is near, but suddenly realizes his error and now sees his goal vanish into the distance…. Despite the mercilessly strict judgement which my labors elicited from him, not a single ironical or even an angry word fell from his lips…. He simply demonstrated to me, relentlessly and without brooking any contradiction, that I didn’t know how to do anything … After a stringent examination concerning what I had been doing with my life up to then, Brahms said: `You see, in music you have not yet learned anything in an orderly fashion; for, everything you’ve been telling me about the theory of harmony, your attempts to compose, instrumentation, and so forth, I count as nothing.'”

That was only the beginning. After Jenner had moved to Vienna to study under Brahms, the old master became still more strict and rigorous with him than before.

“I never again heard from Brahms an encouraging word — let alone praise — about my works…. It took a long time before I truly learned how to work … Only a full year later did Brahms say to me on one occasion, `You will never hear a word of praise from me; if you cannot tolerate that, then everything within you is only of value by virtue of the fact that it will fail.'”

But what did Brahms teach Jenner? For that I advise everyone to read all of Jenner’s short book. Here I just want to quote from one passage, especially relevant to the point at hand:

“I learned the most not by him pointing out my mistakes per se, but by his revealing to me how they had come about in the first place…. From his experience he told me: `Whenever ideas come to you, go take a walk; then you’ll find that what you had thought was a finished idea, was only the beginnings of one.’ He would repeatedly seek to sharpen my distrust in my own ideas. I have often had the experience that precisely such thoughts which become lodged (in the mind) like an idee fixe, pose a natural barrier to creativity, because one has fallen in love with them and, instead of mastering them, has become their slave. `Pens exist not only to write, but also to cross things out,’ said Brahms, `but be careful, because once something has been set down, it is hard to take it away again. But once you realize that, good though it (a passage) may be in itself, it is not appropriate here (at a given place), don’t mull it over any longer, but simply cross it out.’ And how often do we not try to save a passage, only to ruin the whole!… When Brahms, with his impartial criticism, reproached me for precisely those passages, I felt surprised and hurt at the beginning, because these had been my favorite passages — until I saw that I hadn’t found the disrupting element because I had unconsciously proceeded from the idea, that this passage must stay in, no matter what. I have had to feel the bite of those pronouncements by Brahms in my own flesh; they are the result of his long experience and unbending self-criticism.”

Helped by Brahms to become aware of and correct his own weaknesses of thinking, Jenner wages a war against his own tendency toward superficiality, his frequent infatuation with his own “pet” ideas at the expense of truth, his tendency to be distracted by unimportant particularities instead of concentrating on what is really essential. Does that sound familiar to anyone?

But is the conclusion from this teaching, to avoid having ideas, to not risk putting forward hypotheses, for fear they might turn out to be wrong? Hardly! Nothing could be more boring, more totally useless, than a composer who writes “according to the rules,” and who is unwilling to “live dangerously” by making bold and daring (but true!) hypotheses.

The difficulty Jenner describes — to overcome one’s attachment to strongly-held ideas and habits of thought in a rigorous search for truth — arises in essentially identical fashion in science and every other field of creative endeavor.

But in this regard, unfortunately, people in our organization sometimes fall into a trap: Our ideas are (generally speaking) far superior to those predominating in society nowadays; and thus it appears very easy (or should be) to attack and ridicule the “obviously” silly ideas of ordinary people, without feeling the need to go through {in ourselves} the agony Jenner experienced. Yet, Brahms’ authority as a teacher came from {exactly that}: from Brahms’ own agonizing struggle for rigor and truth vis-a-vis his own mind, and not merely from his superior ideas, knowledge and experience as a composer.

Thus, the main points of reference for ridiculing and refuting wrong or “silly” ideas and habits in others, are the successes one has had in confronting and overcoming one’s own imperfections. That includes insight into the {lawful nature} of human imperfections and the powerful attachments people often form to them. Thereby, one can put one’s own past errors and imperfections to good use, demonstrating once more Leibniz’s profound principle of “the best of all possible worlds.”

Turning now to physical science proper, it is too cheap, and we cheat ourselves if we would do this, to merely ridicule as “obviously wrong” the theories and hypotheses which a given discovery refutes, overthrows, or supersedes. True, in history to date, science has hardly existed except in a constant state of war against oligarchism; and as we have repeatedly documented (as in the case of Fresnel and Ampere), the oligarchical faction (embodying a “{negative} higher hypothesis”) is commonly the active promoter of the inferior hypotheses against which significant discoveries were explicitly or implicitly directed, as means to overcome what had been transformed into the “prevailing public opinion” among scientists and others.

However, to the extent we might tend, too quickly and cheaply, to divide ideas and hypotheses into {self-evidently} good and true on the one hand, and {self-evidently} false and bad on the other, we trivialize the struggle inside the mind of the creative scientist and cheat ourselves out of the possibility of really reliving a discovery. For, the oligarchical element lies not in the inferior idea per se, but in the deliberate clinging to it, in the satanic {assertion} of backwardness and regression as a {principle} opposed to the principle of perfection. An animal is not an evil thing; but a man who behaves like an animal, is.

The immediate point I wish to stress, is this: the strength of belief in certain assumptions and hypotheses, which the creative scientist must confront in the process of discovery, is (in many if not most cases) not {simply} a product of oligarchical tampering. To a greater or lesser extent those assumptions and hypotheses arose as the product of earlier discoveries, and their relative adequacy was supported by vast arrays of corroborating evidence and by the positive economic impact (increase in Man’s per-capita power over Nature) of technological developments based upon them. In the light of such impressive, even overwhelming grounds to believe in the validity of the relevant assumptions and theories, the psychological difficulty facing the discoverer is qualitatively greater than that of merely refuting an “obviously wrong” idea.

Think of a classical tragedy where the final curtain falls on a stage littered with dead bodies. If the audience had developed no strong and justified engagement with, admiration for, or sympathy with the tragic hero or others among the characters whose lives thus ended, what would happen to the tragic effect of the play? So, in the course of scientific discovery, as in the composition of music and drama, some ideas must “die” in order that higher ideas might be expressed. The greater the apparent attractiveness, validity and comprehensiveness of the ideas successfully superseded, the greater the power embodied by the creative discovery.

– An Inferior, but Fruitful Hypothesis –

For these reasons, before proceeding further with the discoveries of Fermat, Bernoulli, Leibniz, Huygens, and Fresnel, we should look a bit closer at the notion which these discoveries, culminating with Fresnel, finally refuted: The notion that light propagates in the form of “rays” projected outward from the luminous or illuminated object; and that to a very high degree of precision these rays take, in a uniform medium, the form of straight lines.

Before rushing to reject this notion out-of-hand (i.e. simply because of the occurrence of straight lines), let us for a moment reflect on the theorems which flow from it. We shall find, in fact, that this descriptive notion of light rays is {extremely useful and fruitful}, as Leonardo himself and many others demonstrated in countless ways. Its eventual rejection by Huygens and Fresnel is by no means so easy and self-evident, as might appear after-the-fact.

Among other things, so-called “ray optics” was the basis of perspective, and (supplemented by Fermat’s principle) of the analysis and development of lenses. It is still employed on a large scale today in the design of optical instruments, even though the notion of “ray” itself — as something supposedly self-evident and elementary — was decisively refuted by Fresnel and superseded by an entirely different principle.

– Ray Optics and the Camera Oscura –

The idea of resolving light propagation into “rays” is not a self-evident idea simply drawn from sense-perception, but an {hypothesis}. True, Nature sometimes provides rare circumstances, such as sunlight shining through a break in clouds, where we seem to “see” straight-line rays. However, it is a big step to go from that mere spectacle to a general conception, and indeed the gateway to that conception is guarded by many paradoxes. For example: if every point of every illuminated object emits rays of light in all directions, so that the entire space is filled with an infinity of crisscrossing rays, then how can we ever see anything clearly? And won’t the rays constantly be colliding into each other?

Leonardo said every illuminated object “fills space with pictures of itself.” But if we stand in the middle of a room and hold up a piece of blank paper, we certainly don’t see any pictures projected on it! The reason is not hard to imagine: the light arriving at any given location on the paper arrives from all objects and comes from all directions at the same time; it is consequently mixed up and jumbled together, and no image can result.

How, then, are we able to see anything at all? How do our eyes manage to organize and untangle the light? Renaissance experiments with the so-called “camera oscura” provide a preliminary hypothesis. Build a closed chamber without windows (a closed box) whose walls and ceiling are completely opaque to light. Install a screen on one of the inside vertical walls of the room, and make a small hole in the middle of the opposite wall. An observer sitting inside the room will see, projected onto the screen, an image of the world outside the chamber! In fact, the image on the screen corresponds to what the observer would see, if he were to look outside directly through the hole — except that the image on the screen is upside-down!

Do the experiment, or an equivalent one. What is the difference between the two situations: A) holding up a piece of paper in the middle of a room, and finding no image at all; B) putting up the same piece of paper on the wall of the “camera oscura” (or equivalently, imposing an opaque barrier with a small hole, between illuminated objects and a screen)?

Evidently, the hole in the wall fulfills the function of a {lens}, organizing the propagation of light in such a way, that the image appears on the screen. But note, that if we move the screen directly up to the hole, the images disappear, and we get nothing but an undifferentiated spot of light. Not the hole itself, but the total arrangement of hole and the screen held at a significant distance away, provides the relevant organizing function.

Now, account for the function of the “camera oscura” as a {theorem} based on the hypothesis, that light propagates in (approximately) straight-line rays. Account also for the circumstance, that the images on the screen are slightly blurred, depending on the size of the hole.
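As a help in setting up that theorem, here is a minimal sketch of the similar-triangle bookkeeping; all distances and sizes are illustrative assumptions. Each object point maps through the hole to the opposite side of the axis, giving an inverted image scaled by the ratio of the distances, and a hole of finite width smears each image point into a blur disc of roughly its own diameter:

    # Pinhole ("camera oscura") geometry by similar triangles;
    # all values are illustrative assumptions.
    d_object = 5.0  # object-to-hole distance, metres
    d_screen = 0.5  # hole-to-screen distance, metres
    h_object = 2.0  # height of an object point above the axis, metres
    hole = 0.002    # diameter of the hole, metres

    # Straight-line rays through the hole cross the axis, inverting the image:
    h_image = -h_object * d_screen / d_object
    # The cone of rays from one object point through a finite hole spreads
    # into a blur disc by the time it reaches the screen:
    blur = hole * (d_object + d_screen) / d_object
    print(f"image height {h_image:+.3f} m, blur disc {blur * 1000:.2f} mm")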

Related to this, derive as a theorem another, apparently anomalous phenomenon known to the Greeks and discussed at length by Leonardo: The shadow of any object placed in the rays of the Sun, and projected onto a screen at a suitable distance, is not simple and sharp, but consists of a dark interior region (the “core shadow”) outside of which the light gradually increases. Determine the geometrical law by which the relative sizes of the core shadow and the “blurred” partial shadow change, as the distance between object and screen is varied. The analysis is brilliantly confirmed by such phenomena as eclipses of the Sun.
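(Again as a hedged numerical aside, not in the original: assuming a disc-shaped object, and taking the Sun’s angular diameter of roughly half a degree as the only physical input, the requested geometrical law comes out linear in the distance to the screen.)

```python
import math

# Widths of the core shadow (umbra) and of the outer edge of the blurred
# partial shadow (penumbra) cast in sunlight by a disc; the disc width
# and the distances below are invented for illustration.

SUN_ANGLE = math.radians(0.53)   # the Sun's apparent angular diameter

def shadow_widths(disc_width, screen_distance):
    umbra = max(disc_width - SUN_ANGLE * screen_distance, 0.0)
    penumbra_outer = disc_width + SUN_ANGLE * screen_distance
    return umbra, penumbra_outer

# For a 10 cm disc, the core shadow shrinks linearly and vanishes entirely
# near 10.8 m: the same geometry at work in a total eclipse of the Sun.
for d in (0.5, 2.0, 5.0, 11.0):
    print(d, shadow_widths(0.10, d))
```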

Examine, thus, these and other fruitful consequences of “ray optics” without the oligarchical admixture of Newton, et al.

Now begin to appreciate the shocking, jarring impact of Fresnel’s hypothesis, that shadows are produced “holographically”, i.e., by {interference} of active wave processes inside and around the shadow area itself, and not merely through the blocking-out of linear rays of light by the object.

Transfinite Principle of Light, Part VII: From Appearance to Knowledge

by Jonathan Tennenbaum

In the latter section of last week’s discussion, I gave arguments in support of the notion, that light propagates in straight-line rays. Indeed, by imagining to ourselves that light is a “something” which propagates outward from each point of a luminous object, in all directions along straight-line trajectories, we can account very well for the functioning of the “camera oscura,” for the main features of the shadows cast by objects, for the changes in apparent size of objects according to their distance from us (and other laws of perspective), and many other things. Furthermore, this idea seems to conform well to our sense experience. Cover a sunlit window by a black shade, and put some holes in the shade. In the darkened chamber we can “see” the straight-line rays of light coming through the holes, just as we can directly “see” the rays coming out of a movie projector, especially in a smoke- or dust-filled room. Let yourself become so accustomed to this way of conceiving the propagation of light, that it seems perfectly self-evident.

Now take this notion as a model for {any} sort of {apparently successful} opinion or belief. What attitude should we take to it? A critical attitude, of course. But shall we simply reject the notion of straight-line rays of light out of hand, because it doesn’t fit with some ideological doctrine or metaphysical prejudice of ours? Shall we deny that Leonardo da Vinci, Brunelleschi, Kepler, and other great men drew rays of light as straight lines, or that thousands of practical activities, such as in surveying, in technical drawing, etc. seem based on this notion? Shall we simply deny or ignore the evidence just cited?

Or should we not rather admit that there {does} exist a very wide-spread phenomenon, an {effect}, which corresponds at least approximately to what we have described as “straight-line propagation of light?” If so, then so what? An effect or phenomenon is one thing; the axiomatic assumptions, in terms of which we interpret and judge the {significance} of a given array of phenomena, are something completely different.

We fall into a trap, when we jump from a mere description of appearances — or a limited, simple hypothesis — to imputing or superimposing upon the phenomena certain fundamental, axiomatic qualities of assumption, which are by no means called for by the phenomena themselves. Watch out when anybody points with his finger and says: “See this? It proves X, Y, Z.” The expression “evidence of the senses” is defective, because in reality a process of {judgment} based on certain assumptions is always implicit, albeit preconsciously, in any report of such “evidence”.

Indeed, it is common experience (we confront it daily!) that different people, put in front of one and the same array of phenomena, draw radically different, even completely opposite, conclusions. Sometimes we can even witness two or more individuals in such a debate, pointing to one and the same phenomenon as “definitive proof” for their mutually contradictory opinions!

These observations suggest a very big question. Somebody comes along and challenges us: “If you say your interpretation of evidence is determined by your axiomatic assumptions, then how could you ever {know} whether those basic assumptions are true? Aren’t you caught in a vicious circle? How can you reject self-evident assumptions on the one hand, and at the same time claim there is no purely `objective’ evidence which does not involve assumptions of some kind? You can’t have your cake and eat it, too. If you want to be consistent, you have to finally make up your mind: either 1) to reject all fundamental axioms and assumptions, and accept only empirical experience (sense perceptions) as real, `objective’ knowledge of fact; or 2) to admit that your fundamental axioms and assumptions can never be scientifically tested or proved in terms of evidence — that they must therefore either be self-evident, or based on some sort of faith or belief, as in revealed religion. Or would you agree with my opinion, that fundamental assumptions are ultimately a matter of arbitrary choice, so that conflicts of opinion can ultimately only be resolved by people killing each other?”

Leaving the reader to ponder his or her answer to this paradox, let’s go back to our concrete case, the supposed straight-line propagation of light rays.

One person (Newton, for example) draws a light ray, and thinks of it as a self-evident, axiomatically linear entity, an entity obeying the formal axioms of “Euclidean geometry.” A second person (Leonardo da Vinci, for example) sees the same ray as the trace of an intrinsically {nonlinear} process. The objective appearance of the phenomenon is the same. How can we decide between the two interpretations, the two ways of thinking? Here we get to the issue that Fresnel and Ampère were addressing, as had Fermat and Huygens before them. A unique experiment signifies more than simply evoking a new “objective phenomenon” from the Universe. The problem is to evoke and communicate a true, validated change in how human beings {think} about the Universe.

Let us go back to the time of Fermat. We do not yet have the demonstrations of interference and diffraction, which Fresnel used to finally demolish Newton’s linear theory of light. But we do have an anomaly called {refraction} that was the focus of Fermat’s elaboration of the {principle of least time}.

Note, for example, that the size and appearance of the Sun and Moon, and the apparent angular motions of the stars, are changed when they get near the horizon — a phenomenon which is commonly explained by the notion, that the rays of light coming from these objects, are {bent} as they pass obliquely through atmospheric layers of changing density. Compare this with the bending of light rays in passing from air to water, or vice-versa, which we can demonstrate in any classroom. With the aid of a simple apparatus we can make the sharp change of angle of the rays at the surface of the water clearly visible. With a bit more effort, we can produce media of varying density and show clearly how the rays follow {curved} trajectories. Let’s try to take on a Newtonian with this:

“So you see, light does {not} travel in straight lines!”

“Yes it does, if you do not disturb it. But by interposing matter, an inhomogeneous medium, you deflected the rays from their natural, straight-line paths.”

“How do you know that straight-line paths are `natural’?”

“If a light ray were allowed to propagate unhindered, in a pure vacuum or perfectly homogeneous medium, then it would propagate precisely along a straight line. It is just like the motion of material bodies in space according to Newton’s first law: `a material body remains in its state of rest or uniform motion along a straight line, unless compelled by forces acting upon it to change its state.’ No one could deny that.”

“Does a `pure vacuum’ exist anywhere in nature? Does a `perfectly homogeneous medium’ exist in nature?”

“Well no, of course. There is always a bit of dirt around, or inhomogeneities that disturb the perfectly straight pathways.”

“So the presence of what you call `dirt’ is natural, right?”

“Yes.”

“So then it is natural that light never travels in straight-line paths.”

“Wait a minute. You are mixing everything up. I am talking about the natural propagation of light, quite apart from matter.”

“What do you mean, `quite apart from matter’? Do you assume that the existence of light is something that can be separated from the existence of matter?”

“Yes, certainly. The natural state of light is that of light propagating in a Universe that is completely empty of matter.”

“And a completely empty Universe is a natural thing? Do you claim such a thing could ever exist?”

“I could imagine one. Sometimes I get that feeling inside my head.”

“Maybe that is because you are not thinking in the real world.”

“Don’t blame me for that. I am a professional physicist.”

“Well then, fill the vacuum in your mind with the following thought: Light and matter do not exist as separate entities, nor does matter act to bend rays of light from what you imagine in your fantasy-universe to be perfectly straight-line rays. Rather, the existence of what we call matter, the existence of light and the fact that light never propagates in straight lines — except in mere appearance — are both interrelated manifestations of the fundamental curvature of physical space-time, which Fermat began to address with his principle of least time.”

Transfinite Principle of Light, Part VIII: When Long Is Short

by Bruce Director

It is a continuous source of happiness, for men and women who have cultivated a capacity for scientific thinking, that Nature acts along the shortest pathways, and those are always curved. Not so, however, for the petty and small-minded. For them, such principles are a constant vexation. There is no better example of this, than Pierre de Fermat’s fight with Descartes.

In 1637, Fermat received a copy of Descartes’ Dioptrics. In that work, Descartes considered light to be an impulse of particles travelling instantaneously. From this conception, Descartes presented a mathematical construct of reflection and refraction, by treating these particles, as if they were hard bodies moving in empty space. This was an obvious absurdity, since refraction is the phenomenon that occurs when light travels through two different media, not empty space. Into Galileo’s mathematics of moving bodies, Descartes fitted the observed phenomena of the refraction and reflection of light.

Fermat found the work deeply flawed, and said so to Descartes’ epigone Marin Mersenne. First, Fermat said, Descartes erred by relying solely on mathematical reasoning, which, according to Fermat, could not lead to the discovery of physical truths. Furthermore, Fermat attacked Descartes’ mathematics: “of all the infinite ways of dividing the determination to motion, the author (Descartes) has taken only that one which serves him for his conclusion; he has thereby accommodated his means to his end, and we know as little about the subject as we did before.”

Such insolence from an unknown upstart in Toulouse offended Descartes no end. He wrote to Mersenne, “… I would be happy to know what he will say, both about the letter attached to this one, where I respond to his paper on maxima and minima, and about the one preceding, where I replied to his demonstration against my Dioptrics. For I have written the one and the other for him to see, if you please; I did not even want to name him, so that he will feel less shame at the errors that I have found there and because my intention is not to insult anyone but merely to defend myself. And, because I feel that he will not have failed to vaunt himself to my prejudice in many of his writings, I think it is appropriate that many people also see my defense. That is why I ask you not to send them to him without retaining copies of them. And if, even after this he speaks of wanting to send you still more papers, I beg of you to ask him to think them out more carefully than those preceding, otherwise ask you not to accept the commission of forwarding them to me. For, between you and me, if when he wants to do me the honor of proposing objections, he does not want to take more trouble than he did the first time, I should be ashamed if it were necessary for me to take the trouble to reply to such a small thing, though I could not honestly avoid it if he knew that you had sent them to me.”

There the matter rested for 20 years, until, in 1658, one of Descartes’ zealots, Claude Clerselier, asked Fermat for copies of his earlier correspondence to include in a volume of Descartes’ letters. In the intervening period, Fermat had done his own original work on light, taking off from the work written by Marin Cureau de la Chambre. In August 1657, Fermat wrote Cureau, “you and I are largely of the same mind, and I venture to assure you in advance that if you will permit me to link a little of my mathematics to your physics, we will achieve by our common effort a work that will immediately put Mr. Descartes and all his friends on the defensive.”

Instead of Descartes’ resort to the mythical hard bodies traveling in empty space, Fermat conceived of light as travelling at a finite velocity, that changed depending on the density of the medium through which it travelled. (This was nearly two decades before Ole Roemer conclusively demonstrated the finite velocity of light, in his observations of the moons of Jupiter.) But, more importantly, Fermat proceeded from the standpoint of a universal physical principle, that nature always acts along the shortest paths. The path, in the case of refraction, was not the simple geometrical length of the path, but the path that covered the distance in the least time. “We must still find the point which accomplishes the process in less time than any other …” Fermat wrote to Cureau in January 1662.
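(In modern notation, added here as a compact restatement and not Fermat’s own symbolism: let the light start a height a above the surface and end a depth b below it, with horizontal separation d, crossing the surface at offset x, with speeds v1 and v2 in the two media. Fermat’s condition then reads:)

```latex
T(x) = \frac{\sqrt{a^{2}+x^{2}}}{v_{1}} + \frac{\sqrt{b^{2}+(d-x)^{2}}}{v_{2}},
\qquad
\frac{dT}{dx} = 0
\;\Longrightarrow\;
\frac{\sin\theta_{1}}{v_{1}} = \frac{\sin\theta_{2}}{v_{2}}.
```

Since the velocity is smaller in the denser medium, the angle there is smaller: the beam runs steeper in the water, which is just the observed law of refraction.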

Upon receiving Fermat’s letter, Clerselier responded. In a letter dated May 1662 (translated here by Irene Beaudry), Clerselier wrote:

“Do not think that I am answering you today because you think you have obtained the objective of troubling the peace of the Cartesians…Permit me just to tell you here the reasons that a zealous Cartesian could allege to preserve the honor and the right of his master, but not to give up his own advantage or to give you the initiative.

“1. The principle that you consider as the foundation of your demonstration, that is, that nature always acts along the shortest and simplest pathways, is nothing but a moral principle and not at all physical; that is, it is not and could not be the cause of any effect of nature.

“It is not, because it is not this principle that makes nature act, but rather, the secret force and the virtue that is in every thing, that is never determined by such or such an effect of this principle, but by the force that is in all causes that come together into one single action, and by the disposition that is actually found in all bodies upon which this force acts.

“And it could not be otherwise, or else, we would presume nature to have knowledge: and here, by nature, we mean only this order and this law established in the world as it is, which acts without foreknowledge, without choice and by a necessary determination…”

Clerselier objects not to Fermat’s discovery that light travels the path of shortest time, but to the idea that such a universal principle exists at all. Without a universal principle, there is no shortest path, only the arbitrariness of empty space.

This is a matter that confronts all of us directly each day. If civilization’s survival depends on increasing the quality of human cognition, then the shortest path to that survival is the seemingly long and curved route of curing the population of their insanity through mass outreach. Let the petty Clerseliers take the short-cuts on that long road of destruction.

Transfinite Principle of Light, Epilogue

LEAST ACTION — PRINCIPLE OF NATURE OR PRINCIPLE OF DISCOVERY?

by Jonathan Tennenbaum

What was it about Fermat’s “principle of least time” and Leibniz’s generalized “principle of least action” that so upset the Cartesians and Newtonians, and continues to upset people up to this very day? In reaction to the beating Fermat and Leibniz administered to Descartes, in the 18th century a heated and very confused debate was whipped up concerning so-called “teleological principles in Nature” — a debate which reached its pinnacle of absurdity when Maupertuis claimed priority over the long-dead Leibniz in concocting his own, incompetent version of the least action principle! Behind the diversionary antics of the buffoon Maupertuis, Euler and Lagrange launched their more sophisticated attack on Leibniz. Euler and Lagrange worked to eliminate the self-conscious {principle of discovery} which Leibniz placed at the center of his conception of the physical universe, and thereby to drive a wedge between “Naturwissenschaft” and “Geisteswissenschaft.” We can find the trace of these events in our own minds, in our own struggles to grasp the central conception of Leibniz’ Monadology, or even the seemingly simple “principle of least time” put forward by Fermat in the 1630s.

Build a simple apparatus to demonstrate how a beam of light changes its direction when passing from air into water. Note how the rate of change of direction itself changes as you change the angle at which the light beam strikes the surface of the water. When the beam enters the water perpendicularly to the surface, no change is apparent: the beam continues onward in the same, perpendicular direction. But as we gradually tilt the beam away from the perpendicular direction, we find that the beam is “bent” more and more at the water surface; the direction of the beam inside the water is steeper, i.e., its angle to the vertical is smaller than that of the original beam in the air. (Readers must perform the experiment!). How can we account for the shape of the pathway, and in particular for the lawful relationship of the angles which describe the deflection of the beam at the surface of the water?

Now, the Newtonian-Cartesian way of thinking about this problem will appear natural and even self-evident to most people, compared to Fermat’s and Leibniz’s, because the former corresponds to axioms which have become deeply embedded in our culture. Let’s look at it for a moment. What, indeed, could be more self-evident, than the idea that the pathway of the light beam is created by the light itself in propagating out from the source?

Just so, in Newton’s mechanics, the orbit of a planet exists only as an imaginary trace of its successive positions; those positions being created by the planet’s motion. To Newton, the orbit itself doesn’t exist as an efficient physical entity; what {exists}, at any given time, is only the planet, its momentary position, its state of motion and the momentary gravitational force acting upon it from the Sun. So according to Newton, the fact that a planet traces an elliptical pathway in the course of its motion is just a mathematical accident, a derived theorem of the Newtonian theorem-lattice. So, today, the student is taught to say: “When you solve the equations for motion of the planet under the force of gravity, it just happens to come out to an ellipse.”

Imagine the precocious child who, caught with his hand in the cookie jar, explains: “I couldn’t help it. My body was just obeying the laws of motion.”

Similarly, according to this way of thinking, the pathway of the light beam is just the trace of a “something” or large number of tiny “somethings”, which travel through space from one moment to the next and from one point to another. They would “naturally” travel in straight lines, except insofar as some “external forces” deflect them from a straight-line path. Analyzing the bending of a beam of light going from air to water in this manner, we divide the process into three phases: A) the light propagates undisturbed in a straight line through the air, until B) the beam suddenly “collides” with the water surface, where the light particles are acted upon by some unknown force causing them to change their direction of motion, and from that point on C) they continue travelling in the water in a straight line in the new direction. This is exactly the thinking of Descartes, Newton, Laplace, Biot et al.

Not so Fermat! To follow in his footsteps, let us start from the well-grounded assumption, that Fermat followed Kepler in these matters. Kepler, as we know, regarded the system of planetary orbits and the orbits themselves as real and their determination as {primary} relative to the motions of the planets. An orbit is determined by a characteristic “curvature in the infinitesimally small”, such that any however-small interval of planetary motion already expresses the efficient principle which predetermines the future course of the planet in that orbit.

Could we say, then, that the light follows a predetermined {orbit}? Or should we be more cautious and merely propose, that the pathway of the light beam is a visible expression or characteristic of an {underlying physical process}, whose course is {predetermined} in the same sense that a planet’s motion is predetermined by its Keplerian orbit? Either way, we cannot avoid the implication, that all {three} phases A, B, C defined above, and the sequence of all three taken together, embody {one and the same} characteristic infinitesimal curvature!

At this point the formalist-minded will freak out:

“A and C are straight lines, not curved at all; whereas B is where the beam is “bent”! So how can you talk about the same curvature?”

Well, maybe you ought to conclude that the straight-line propagation in A and C is only an {apparently} linear envelope of a nonlinear process.

“Don’t make things so complicated. After all, so long as the light is travelling in phase A through the air, before it comes to the surface of the water, there is no force to divert it; the light doesn’t yet “know” it is going to hit the water, so it will travel in a perfect straight line. Or do you suggest, that the light can look ahead to see the approaching surface of the water?”

Our interlocutor here is trapped in the Newtonian-Cartesian assumption, that time is a self-evident, linearly ordered succession of “moments,” where only the preceding moment can influence the “next” one; just as if space were a triply-linear ordering of “places.” This insistence on a trivial, linear ordering of a supposedly empty space-time, rejecting the idea of “nonlinearity in the small”, is key to the freak-out which Fermat caused by his principle of least time.

To shed more light on this question, let us modify our experiment slightly: Install a small light source shining in all directions (e.g., a light bulb) at some position O in the air above the surface of the water. Now take an arbitrary position X in the water, which is illuminated by the light. {What is the pathway by which that result was accomplished?}

We might investigate as follows: Find the positions Y, both in air and water, at which an opaque object, placed at Y, causes the illumination of X to be interrupted. (Do the experiment!) We find, in fact, that those positions lie along a clearly-defined pathway going from O to X. That pathway in fact runs in an apparent straight-line from O to a certain location, L, at the surface of the water; and there, abruptly changing its direction, it continues on in an apparently straight trajectory to X. We can also verify, that if we now replace the light bulb at O by a device which produces a directed beam, and point the beam in the direction toward L, then it will continue along the entire pathway we just determined, and illuminate X. If we point the beam in a different direction, then (leaving aside extraneous reflections and so forth), it does {not} arrive at X. Our conclusion: this is the {unique} trajectory, by which light, emitted at O, can and does arrive at X.
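(For those who like to check such claims numerically: anticipating Fermat’s least-time principle from the preceding part, the following Python sketch finds the unique crossing point. The coordinates and speeds are illustrative assumptions, not measured values.)

```python
import math

V_AIR, V_WATER = 1.0, 0.75   # illustrative speeds; light is slower in water

def travel_time(x_cross, o=(0.0, 1.0), x=(2.0, -1.0)):
    """Total time along the two-legged path O -> (x_cross, 0) -> X,
    with O in the air (y > 0) and X in the water (y < 0)."""
    leg_air = math.dist(o, (x_cross, 0.0)) / V_AIR
    leg_water = math.dist((x_cross, 0.0), x) / V_WATER
    return leg_air + leg_water

# Scan candidate crossing points along the surface between O and X:
# a single best point L emerges, just as the opaque-object probing found.
candidates = [i / 1000 for i in range(2001)]
best = min(candidates, key=travel_time)
print(best, travel_time(best))
```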

Now what do we do, striving to follow Kepler in this matter? Instead of trying to concoct some Newtonian-like “law of motion” by which the light supposedly proceeds blindly, step-by-step from one moment to the next, consider instead the {space-time process as a whole}. How is it that a unique trajectory (or “tube of trajectories” appearing to our senses as a single one) is determined, among all other conceivable paths running from O to X, as the one which is actually {realized} by light? What is the sufficient and necessary reason? Evidently, not some property of light in and of itself. Ah! Don’t forget the rest of the Universe! Don’t forget that our experiment is part of the ongoing {history of the Universe}, and what we call “light” is just a localized manifestation of the {entire Universe} acting upon itself in that specific historical interval. If so, then shall we not regard the observed pathway of light as a {projection} of the Universe’s ongoing historical orbit, its “world line”?

Now, perhaps, we can begin to appreciate the significance of the Fermat-Leibniz principle and the freak-out it evoked among the followers of Aristotle.

Incommensurability and {Analysis Situs}, Part I

by Jonathan Tennenbaum

The issue of analysis situs becomes unavoidable, when we are confronted with a relationship of two or more entities A and B (for example, two historical events or principles of experimental physics), which do not admit of any simple consistency or comparability, i.e., such that the concepts and assumptions, underlying our notion of “A,” are formally incompatible with those underlying “B.” In the case where the relationship between A and B is undeniably a causally efficient one, we have no rational choice, but to admit the existence of a higher principle of lawful relationship (a “One”) situated beyond the framework provided by A and B as originally understood “in and of themselves.”

Exactly the stubborn, “dumbed down” refusal to accept the existence of such higher principles of analysis situs, lies at the heart of the chronic mental disease of our age. That includes, not least of all, the Baby Boomers’ typical penchant for “least common denominator” approaches to so-called “practical politics.” Antidotes are urgently required.

An elementary access to this problem, as well as a hint at analysis situs itself, is provided by the ancient discovery–attributed to the school of Pythagoras–of the relative incommensurability of the diagonal and side of a square. This discovery, a precursor to Nicolaus of Cusa’s “Docta Ignorantia,” could with good reason be characterized as a fundamental pillar of civilization, which ought to be in the possession of every citizen; indeed, the rudiments thereof could readily be taught to school children. Yet, NOWADAYS there are probably only a HANDFUL of people in the whole world, who approach having an adequate understanding of it.

In order to appreciate the Pythagorean discovery, it were better to first elaborate a lower-order hypothesis concerning measurement and proportion, and then see why it is necessary to abandon that hypothesis at a certain well-defined point, in favor of a higher-order conception. The hypothesis in question is connected with the origin of what might be called “lower arithmetic”–as contrasted to Gauss’ “higher (geometrical) arithmetic”–which however is not to deny the eminent usefulness and even indispensability of the lower form within a certain, strictly delimited domain. On the other hand, the discoveries of the Pythagorean school put an end to what might otherwise have become a debilitating intoxication with simple, linear arithmetic, one not dissimilar to the present-day obsession with formal algebra and “information theory.”

– Linear measure –

Already in ancient times, it became traditional to distinguish between three species (or degrees of extension) of geometry within Euclidean geometry itself: so-called linear, plane, and solid geometry. The phenomenon of “incommensurability” bursts most clearly into view, when we attempt to carry over certain notions of measurement and proportion, apparently reasonable and adequate for the comparison of lengths along a line, into the doubly- and triply-extended domains of plane and solid geometry. Actually, the problem is already present in the lower domain; but it takes the transition to the higher domains to “smoke it out” and render it fully intelligible.

The commonplace notion of measurement and proportion, is based on the hypothesis that there exists some basic element or “unit,” common to the entities compared, out of which each of the entities can be derived by some formally describable procedure. In the linear domain of Euclidean geometry–which, incidentally, presupposes the hypothesis, that length is independent of position–this approach to measurement unfolds on the basis of three principles:

First, given two line segments, we preliminarily examine their relations of position, i.e., whether they are disjoint, overlap, or one is contained in the other. Secondly, we superimpose them, by means of so-called “rigid motion” (again, an hypothesis!), to ascertain their relation in terms of “equal length,” “shorter,” or “longer.” And thirdly, we extend or multiply a given line segment, by adjoining to it reproductions of itself, i.e., segments of equal length.

By combining these principles, we arrive at such propositions as “segment B is equal in length to (or shorter or longer than) two times segment A,” or such more complicated cases as “three times segment B is equal to (or shorter or longer than) five times segment A,” and so forth. [Figure 1.] In the case, where a segment B is determined to be equivalent (in length) to a multiple of segment A, it became customary to say, that “A exactly divides (or measures) B,” and to express the relationship by supplying the exact number of times that A must be replicated, in order to fill out a length equivalent to B. Where such a simple relationship does not obtain between A and B, it would be natural to direct our efforts toward finding a smaller segment C, which would exactly divide A and exactly divide B at the same time (commensurability!). In case we succeed, the ratio of the corresponding multiples of C, required to produce the lengths of A and B respectively, would seem to perfectly express the relationship between A and B in terms of length. So, the proposition “A is three-fifths of B” or “A is to B as three is to five” would express the case, where we had determined, that A = 3C and B = 5C for some common “unit” C. [Figure 2.]

– The paradox of `Euclid’s algorithm’ –

HOW, a practically-minded person would probably ask, might we discover a suitable common divisor C for any given segments A and B? It were natural to first try the shorter of the two lengths, say A, and to seek the largest multiple of A which is not larger than B. If that multiple happens to exactly equal B, we are finished, and can take C = A. Otherwise, we shall have to deal with the occurrence of a “remainder” in the form of a segment R, shorter than A, by which the indicated multiple of A falls short of B’s length. One possible reaction to this would be, to divide A in half, and then if necessary once again in half, and so on, in the hope that one of the resulting series of sub-segments might be found to exactly divide B. Those skillful in these matters will see, however, why such an approach must often lead to a dead end–as for example when the lengths of A and B happen to stand in the ratio 3 to 5, in which case successive halving of A or B could never produce a common divisor. [Figure 3.]
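(A tiny check, added for illustration: when the lengths stand in the ratio 3 to 5, measuring B by the halved candidate A/2^k yields the quotient (5/3)·2^k, which is never a whole number, since no power of 2 can cancel the factor 3.)

```python
from fractions import Fraction

A, B = Fraction(3), Fraction(5)          # lengths in the ratio 3 : 5
for k in range(10):
    quotient = B / (A / 2**k)            # how often A/2^k fits into B
    print(k, quotient, quotient.denominator == 1)   # always False
```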

A much more successful approach, which (at this stage of the problem) represents a “least action” solution, became known in later times as “Euclid’s algorithm”: In case the shorter segment, A, does not divide B exactly, we take as next “candidate” the remainder R itself. If R divides A exactly, then R is evidently a common divisor of both A and B. Otherwise, take the remainder of A upon division by R–call it R’–as the next “candidate.” Again, if R’ exactly divides R, then (by working the series of steps backwards) R’ will also divide A and B. If not, we carry the process another step further, producing a new, even smaller remainder R”, and so forth. This approach has the great advantage that, ASSUMING A COMMON DIVISOR of A and B ACTUALLY EXISTS, we shall certainly find one. In such a case, in fact, as the reader can confirm by direct experiments, the indicated process leads with rather extraordinary rapidity, to the greatest common divisor of the segments A and B. [Figure 4.]
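(A minimal rendering of the procedure just described, added here for illustration; the lengths are taken as exact rationals, so that a common divisor exists and the process is guaranteed to terminate.)

```python
from fractions import Fraction

def common_measure(a, b):
    """Euclid's algorithm on lengths: repeatedly replace the pair by the
    shorter length and the remainder it leaves in the longer."""
    while b:
        a, b = b, a % b
    return a

A, B = Fraction(3, 5), Fraction(1)   # lengths in the ratio 3 : 5
C = common_measure(B, A)
print(C, A / C, B / C)               # unit 1/5; A = 3 units, B = 5 units
```

Fed a pair of lengths with no common divisor, the same procedure would, in exact arithmetic, never halt; that is precisely the “disaster” now to be considered.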

The discussion so far, however, leaves us with a rather considerable paradox. For the case, that there exists a segment dividing A and B exactly, the indicated approach to measurement and proportion, provides us with an efficient means to find the largest such common divisor, as well as to derive an EXACT characterization of the relationship of A to B in terms of a ratio of whole numbers. At the same time, however, some of us might have caught a glimpse of a potential “disaster” looming on the horizon: What if the “Euclid algorithm,” sketched above, fails to come to an end? It were at least conceivable, that for some pairs A, B, the successive remainders R, R’, R”…, while rapidly becoming smaller and smaller, might each differ sensibly from zero.

Within the limits of the ideas we have developed up to this point, we find the means neither to rule out such a “disaster” (“bad infinity”), nor to devise a unique experiment which might demonstrate the failure of “Euclid’s algorithm,” while at the same time providing a superior approach.

Evidently, it were folly to search for an answer within the “virtual reality” of linear Euclidean geometry per se. We need a flanking maneuver, to catapult the whole matter into a higher domain. [To be continued.]

EXCERPTS FROM A REPLY BY JONATHAN TENNENBAUM TO QUESTIONS ON HIS PEDAGOGICAL DISCUSSIONS

Dear Reader,

Pardon my delay in responding to your queries concerning the pedagogical discussions.

Let me first address the last point in your letter, which is the most significant. I mean the following passage:

“On the notion that the rate of change, or change in the rate of change is alien to Euclid, needing to be imported from our higher vantage point: A number of us just do not see the revolutionary ‘axiom-busting’ nature of this concept…”

Judging from your report, the problem which came to the surface during your discussions, is fundamental. I am very happy that the problem surfaced, although it tells me that my pedagogical tactic failed, at least in some cases. No matter. We often learn more from our failures, than from our successes!

What I think is going wrong, in part, is that many (probably most) people haven’t yet broken through, or are still resisting the breakthrough, to grasping in a really SENSUOUS way what Lyn is trying to get at with his discussions of theorem-lattices and changes of axioms. People have a kind of abstract understanding of these matters, which they can present formally, can cite examples and so on, and even apply the concept in a certain way; but it’s still skin-deep, somewhat superficial learning. Above all, there is an emotional problem, a problem of INDIFFERENTISM or “decoupling” of mental activity from passion, which was induced from very early on in school, in university studies, and actually by our whole cultural environment. All of us of our generation — I would not exclude myself — have to struggle with this problem to one extent or another.

In order to function properly, the pedagogical discussions must be composed and read, not like sections of a textbook, but rather as miniature DRAMAS of the most rigorous sort. A drama involves powerful emotion. It is not just an “intellectual exercise.” In a well-composed and well-acted tragedy, the achievement of the desired effect on the audience, requires, that the individuals in the audience actually TAKE INTO THEIR OWN MINDS, by a powerful sort of “resonance” (empathy) the thought-processes projected by the dramatist with the aid of the characters. Under such conditions, the dramatist can operate DIRECTLY on the inner mental processes of the audience.

The simplest form of pedagogical discussion presents a TYPE of physically-demonstrable, valid transition from a hypothesis “A,” to a superior hypothesis “B,” such that the theorem-lattices, corresponding to “A” and “B” respectively, are separated from each other by an absolute mathematical discontinuity. In other words, although “B” subsumes (albeit in reworked form) that aspect of “A” which has not been invalidated by the experimental discovery, there is no way to get from “A” to “B” by deductive methods.

In some cases, an experimental demonstration directly refutes an explicit prediction of “A.” Thus, we demonstrate, that an event, which a theorem of “A” says must occur in a certain way, does NOT occur in that way. But very often, the most prominent characteristic of an experimental demonstration, is that it reveals an implicit LIMITATION in the original hypothesis “A,” rather than, so to speak, an explicit error. Something is demonstrated to occur in the real universe, which COULD NOT EXIST in the “mental world” circumscribed by hypothesis “A.” It is not necessary, that the event AS SUCH be EXPLICITLY FORBIDDEN by “A.” In fact, “A” will generally have NO CONCEPT for the event: “A” cannot account for its existence; it presents an insoluble paradox; it is “unimaginable.” And yet, the human mind (though perhaps not the mind of a radical positivist) is forced to acknowledge its existence as experimentally demonstrated.

Actually, the two cases are not so different, as might appear at first glance, if we understand the concept of “hypothesis” to mean, not just an assumption about this or that specialized area, but (at least, implicitly) a WAY OF THINKING about the ENTIRETY OF THE UNIVERSE. For, THE MIND IS ONE. In fact, our mind tends to extrapolate or “project” the underlying limitations of a given hypothesis, upon the entirety of the universe, in such a way that those limitations become “invisible” to us. So, the fish considers the fishbowl to be the entire universe, until something is demonstrated to exist outside the fishbowl. Only then, do the limits of the fishbowl become apparent.

I suspect that people miss the Earth-shaking implications of the pedagogical demonstration in question, because they are holding the hypotheses involved safely at arm’s length, rather than letting them really sink in. In other words, not really getting involved. You really have to become accustomed to the mental world of hypothesis “A” for a certain time, internalizing the corresponding mode of thinking, in order then to experience FROM “INSIDE,” so-to-speak, a crucial moment of physically demonstrable FAILURE of the mode. This requires a kind of mental dexterity and playfulness, to “forget” or “unlearn” the existence of the superior hypothesis “B” (in this case, connected with the necessary introduction of notions of “rate of change”), even though that has long become a part of our general culture. We have to use our imagination in order to place ourselves mentally, in a sense, back into the period BEFORE the discovery in question was made. In the same way, we should be able to imagine, on the basis of higher hypothesis, a future world embodying experimental refutations of hypotheses which we today regard as self-evident.

Were the Greeks and others, who developed their physical science in terms of “Euclidean geometry,” all stupid or evil? Certainly not! Although an adequate history has yet to be assembled, it is certain, that what we now call “Euclidean geometry” BEGAN as a series of REVOLUTIONARY BREAKTHROUGHS in physics, associated with the discovery and elaboration of certain general principles of CONSTRUCTION. The highest point of this development, as stressed by Kepler, was embodied in the treatment of the five regular solids, formally summarized in the famous Thirteenth Book of Euclid’s {Elements}. The Greek constructive geometry, reworked by Euclid as a prototype of a formal theorem-lattice, embodied a kind of technology of thinking, far superior to what had existed prior to that (for example in Egyptian or ancient Chinese science, as far as we know).

Thus, it were useful, before proceeding to my pedagogical discussion of the circle, to first get back into the mode of Euclidean geometry. For example, by doing constructions such as: constructing perpendiculars and parallels, constructing divisions of the circle (equilateral triangle, square, pentagon, hexagon), constructing the golden section, bisecting any given angle, dividing a line segment into any given number of equal segments, constructing the tangent to a circle at any point, constructing a demonstration of Pythagoras’ theorem, etc. Allow yourselves to get into the “mind set” of this type of approach to problems. This is the same thing I tried to do in the earlier discussion of incommensurability, where I introduced “Euclid’s algorithm” in one-dimensional geometry, not so much for its own sake, but as characteristic of a kind of approach to the problem of measurement.

Of course, the concept of CHANGE is central to every positive development of human civilization. The constructive geometry of the Greeks itself represents an attempt to deal with that. Of course, the notion of change and rate of change is “always there,” in a certain way, within higher hypothesis (see Plato’s {Timaeus}, for example). But the elaboration of a constructive geometry based explicitly on the notion of variable rate of change, came much later. Just compare the physics of Archimedes, with the physics launched in Nicolaus of Cusa’s {Docta Ignorantia} and brought to full development through the non-algebraic function theory of Huygens, Leibniz and Bernoulli. The turning-point, as far as we can see, came with the revolutionary shift in conception, embodied in Nicolaus of Cusa’s treatment of the circle and related topics, relative to the Euclidean approach of Archimedes.

Thus, you will not find the notion of “variable rate of change,” as that is understood by Leibniz, in Euclidean geometry. It’s not there. It is certainly implicit in the higher hypothesis guiding the development of Greek geometry, in Plato and so forth; but it was not yet actualized as an elaborated hypothesis. Thus, there is a constant TENSION between hypothesis and higher hypothesis, which constantly drives knowledge forward, employing a succession of unique experiments.

I hope these remarks will be helpful to you and your colleagues….

Concerning your reference to “solving” equations for the ratio of diagonal to side of an isosceles triangle, I would caution as follows: When an algebraist says “the square root of two,” he is usually only slapping a label onto an UNFILLED GAP in his knowledge. He has not thereby developed a CONCEPT. Whereas by contrast, the paradoxical result of the geometrical construction evokes — in the mode of metaphor, and not merely pasting formal labels on things — an actual concept of a precisely-characterized, yet linearly inexpressible magnitude.

Concerning your query on light, I intend to develop some pedagogical discussions on exactly this subject, which requires a certain amount of elaboration. But from the way you expressed your question, I suspect that people have been boxing themselves into a somewhat too constricted, literal, “mathematical” way of thinking about these matters. What is worthwhile to reflect about in a broad way — without necessarily expecting to come up with a “final answer” — is the question: What kind of Universe are we living in, in which such phenomena as refraction and diffraction of light can take place? Then, compare that with the “mental world” associated with the Euclidean approach to geometry.

Keep up the good work. I will be happy to help if you have any further queries.

Best wishes,

Jonathan Tennenbaum

Incommensurability and {Analysis Situs}, Pedagogical Discussion Part II: Experimental Demonstration of Incommensurability

CAN YOU SOLVE THIS PARADOX?

by Jonathan Tennenbaum

Moving from singly-extended, linear geometry, to doubly-extended (plane) geometry, provides us with a relatively unique experiment for the solution of the paradox presented above.

Synthetic plane geometry excels over singly-extended linear geometry in virtue of the principle of angular extension (rotation), as embodied by the generation of the circle and its lawful divisions. Among the latter, the square (via the array of its four vertices) is most simply constructed, after the straight line itself, by twice folding or reflecting the circle onto itself.

Having constructed a square by these or related means, designate its corners (running around counterclockwise) P, Q, R, and S. {(Figure 1)} Our experiment consists in “unfolding” the relationship between the two characteristic lengths associated with the square: side PQ and diagonal PR. These two shall play the role of the segments “A” and “B” in our previous discussion. (Note: the following constructions are much easier to actually carry out, than to describe in words. The reader should actually cut out a square and do the indicated constructions.)

For our purposes it is convenient to focus, not on the whole square, but on the right triangle PQR obtained by cutting the square in half along the diagonal PR. {(Figure 2)} Note, that the sides PQ and QR have equal length (PQR is a so-called isosceles right triangle); furthermore, the angle at Q is a right angle and the angles at P and R are each half a right angle.

To compare A (= PQ) with B (= PR), fold the triangle in such a way, that PQ is folded exactly onto (part of) the line PR. Since PQ is shorter than PR, the point Q will not fold to R, but will fold to a point T, located between P and R. {(Figure 3)} By the construction, PQ and PT are equal in length. Next, note that the axis of folding, which divides the angle at P in half, intersects the side QR at some point V, between Q and R. Observe, that the indicated operation of folding brings the segment QV exactly onto the segment TV.

Observe also, that through the indicated folding of the triangle, the triangular region PVT is exactly “covered” by the region PVQ, while the smaller triangle portion VTR is left “uncovered,” as a kind of higher-order “remainder.”

Focus on the significance of that smaller triangle. Note, that in virtue of the construction itself, VTR has the same angles and shape as (i.e., is similar to) the original triangle PQR.

Euclid’s Algorithm Again

Comparing the original triangle to the smaller “remainder” triangle VTR, we can easily see that the latter’s sides are derived from the former’s by relationships very similar to, though slightly different from, the steps of the so-called Euclid algorithm! (See Part I, in our issue dated June 2, 1997.)

First, in fact, the side RT results from subtracting the segment PT, equal in length to the original triangle’s side PQ, from the original triangle’s hypotenuse PR. Second, the hypotenuse VR of the small triangle derives from the side QR of the original triangle, by subtracting the segment QV, while the latter (in virtue of the folding operation and the similarity of triangles) is in turn equal to TV, which again is equal to RT. In summary: if the side and hypotenuse of the original triangle are A and B, respectively, then the corresponding values for the smaller triangle will be A′ = B − A and B′ = A − A′. {(Figure 4)}
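(A hedged arithmetical companion, added for illustration: the recurrence can be run in exact arithmetic by representing each length p + q·√2 as the integer pair (p, q), so that no rounding error can feign a terminating division. We start from side A = 1 and hypotenuse B = √2, i.e., the pairs (1, 0) and (0, 1).)

```python
import math

def value(p, q):
    return p + q * math.sqrt(2)   # floating point, for display only

A, B = (1, 0), (0, 1)             # side 1, hypotenuse sqrt(2)
for step in range(8):
    A_new = (B[0] - A[0], B[1] - A[1])           # A' = B - A
    B_new = (A[0] - A_new[0], A[1] - A_new[1])   # B' = A - A'
    A, B = A_new, B_new
    print(step, A, B, round(value(*A), 6), round(value(*B), 6))
# The integer pairs never reach (0, 0): the lengths shrink geometrically,
# below any proposed common unit C, yet never vanish.
```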

Lurking Paradox

The reader might already notice an extraordinary paradox lurking behind these relationships: Were A and B to have a common divisor C, then that same C would–in virtue of the just-mentioned relationships–also have to divide A′ and B′. What is paradoxical about that? Well, the smaller triangle is similar to the larger one, so we could carry out the same construction upon it, as we did to derive it from the original triangle. The result would be a third, much smaller triangle of the same proportions, whose leg and hypotenuse, A″ and B″, would thereby also have to be divisible by the same unit C. And yet, continuing the process, we would rapidly arrive at a triangle whose dimensions would be smaller than C itself!

We are thus faced with the inescapable conclusion, that A and B cannot have a common divisor in the sense of linear Euclidean geometry. The relationship between A and B cannot be expressed as a simple ratio of whole numbers. As Kepler puts it in his “World Harmony,” the ratio of A to B is Unaussprechbar–it cannot be “spoken”; by which Kepler means, it is not communicable in the literal, linear domain. But Kepler emphasizes at the same time, that it is {knowable} ({wissbar}), and is precisely communicable {by other means.}

Evidently, the cognition of such linearly incommensurable relationships, requires that we abandon the notion, that simple linear magnitudes (so-called scalar magnitudes) are ontologically primary. Our experiment demonstrates, that such magnitudes as the ratio of the diagonal to the side of a square (commonly referred to algebraically as the square root of two) are not really linear magnitudes at all, but are “multiply extended,” geometrical magnitudes. They call for a different kind of mathematics. What we lay out on the textbook “number line” are only shadows of the real process, occurring in a “curved” universe. This coheres, of course, with Johannes Kepler’s reading of the significance of Golden Mean-centered spherical harmonics in the ordering of the solar system, and in microphysics as well.

Analysis Situs Relationship

The relevant relationship for analysis situs, in the preceding discussion, is not between the diagonal and side of a square; but rather that between the hypotheses underlying the linear domain, sketched in Part I of our discussion, and the superior standpoint implied in Part II.

A final note: Observe the rotation and change of scale of the smaller triangle relative to the larger. Our experimental {transformation} of the larger triangle into the smaller, similar triangle, as an {inherent feature} of the relationship of A to B, already points in the direction of Gauss’ complex domain, and the preliminary conclusion, that the complex numbers are ontologically primary–more real–than the so-called “real numbers.”

(Anticipating what might be developed in other locations: The transformation constructed above, belongs to the so-called “modular group” of complex transformations, which are key to Gauss’ theory of elliptic functions, quadratic forms, and related topics. Gauss, in effect, reworks the central motifs of Greek geometry, from the higher standpoint of the complex domain.)

Demystify The Golden Section!

by Jonathan Tennenbaum

Last week’s inquiry concerning Leonardo da Vinci’s principles of machine-tool design, brought us face-to-face with an old friend: the significance of the so-called Golden Section. Although this topic has been discussed many times in our organization, I think there still exists a residue of mystification, remaining to be cleared away. Often enough in the past, mere mention of the Golden Section was liable to evoke fits of embarrassed hand-waving and numerological free-association from supposed experts, while the issues which really bothered people and have to be worked through in a rigorous way, were not adequately addressed.

Lyn, of course, has dealt with the Golden Section repeatedly and from the highest standpoint. For those who are resolved to break through on this issue, I would particularly recommend rereading Lyn’s essay “On the Subject of Metaphor” (Fidelio, Fall 1992) and his book-length study, “Cold Fusion: Challenge to U.S. Science Policy” (Schiller Institute, August 1992), particularly section II entitled “Six of the Crucial Discoveries in Modern Science”. The following pedagogical discussions are intended to provide some useful geometrical “homework” on these matters, while adding a fresh view of the subject, thereby assisting the reader in “triangulating” the essential points to be mastered.

On the most elementary geometrical level, we have the problem, that only a very few people (Chuck Stevens and a few others, perhaps) have actually worked through the geometrical constructions which characterize the <relationship> of the regular solids to their inscribed and circumscribed spheres. Yet, the essential discoveries of Leonardo and Kepler were all referenced to Euclid’s original treatment of <exactly that issue>, as later reworked by Leonardo and Pacioli in the book “Divine Proportion” and “read” from the standpoint of Nicolaus of Cusa’s concept of an “evolutionary” ordering of “species” (analysis situs).

To present the “Golden Section” merely as a ratio derived from the regular pentagon, were a wild fallacy of composition. Even on elementary geometrical grounds, the only admissible approach to the “Golden Mean” is one which defines it as the <unifying characteristic> of the way in which the higher species (sphere) <bounds> the lower species (regular solids or spherical divisions).

The really crucial problem, however, lies in the way people “read” (or misread!) the ontological significance of such elementary geometrical topics.

Having witnessed many a member’s more or less frustrated attempts to master the Golden Section, I am reminded of an often-cited anecdote from Russia: One evening, a man lost a ring in a dark corner of a park. Instead of looking for it there, the man spent hours carefully searching under a nearby street-lamp. When a passerby asked the man, why he kept looking in the wrong place, the man replied: “I am looking here, because here is plenty of light!”

People, who (consciously or unconsciously) are wont to stay away from the “dark, uncomfortable” area of rigorous creative thinking, won’t find an “answer” for the Golden Section, no matter how exhaustively they search for it. There does not exist, nor could there ever exist, an explanation of the sort which would be acceptable to the Aristotelean “norms” of contemporary classroom education. The Golden Section, as Leonardo and Kepler understood it, and as Lyn develops it further, is an <idea>. It does not exist in the “objective” world of geometrical forms per se. Nor does it arise from any amount of empirical evidence taken by itself. In his piece on Cold Fusion, Lyn demonstrates, in rigorous, step-by-step fashion, the inseverable relationship between Leonardo and Kepler’s “reading” of the Golden Section, and Plato’s notion of “hypothesizing the higher hypothesis”, as that connection becomes uniquely intelligible from the standpoint of physical economy. Lyn adds an admonishment which the present author found most helpful:

“Look at the Golden Section from the standpoint of what Plato and Cusa knew before Leonardo and Kepler. Do not attempt to read it as if Leonardo and Kepler were such fools as not to have studied intently the work of Plato, Archimedes and Cusa…”

A few lines further, Lyn adds:

“Until we have grasped so the fact that true science is <subjective> in this way, that its validity is located essentially in that <anthropocentric subjectivity>, we do not have the means to read intelligibly the crucial argument of any among the founders of modern science.”

The following discussion should cast some further light on the cited point, which is crucial to any adequate understanding of the <physical significance> of the Golden Section.

I would not be surprised, if there were a considerable number of persons, who routinely skip over Lyn’s written discussions of certain scientific topics, rationalizing that practice to themselves by the argument, that Lyn intends these as merely “optional” illustrations of general points. The assumption is, that the same concepts could be communicated just as well without recourse to such difficult and “specialized” topics. In an extreme case, we might encounter the following train of thought: “Oh, yeah, Lyn is really just talking about the higher hypothesis, or negentropy or something like that. I already have an idea what those are. So why make such a big deal out of circular action, the Golden Mean, Riemann, Cantor and so forth, which just confuse me and cause me mental suffering?”

Apart from exhibiting the typical intellectual sloppiness and laziness of our “baby-boomer” generation, the quoted folly might usefully provoke us to consider quite the opposite thesis, namely:

That the universe is constructed in such a way, that it were <impossible> to master the notion of the higher hypothesis, <except> through the included means of certain, <uniquely-defined> series of geometrical discoveries!

By “geometrical discoveries”, I do not mean to imply that the discoveries emerge from the domain of mathematics per se. Rather, it is the “provocation” provided by otherwise unresolvable physical anomalies, which causes us to evolve new species of geometry, in such a way as to permit us to “integrate” those anomalies as new “dimensionalities” in a revised, unified conception of multiply-connected physical action.

It is quite remarkable, that up to the present time those fundamental geometrical discoveries, so defined, <all> have the direct or indirect effect of <redefining> — or “unfolding” as it were –, the significance of the circle (circular action) and the sphere within geometries of ever higher “order”. So it is with the pythagorean discovery on incommensurability, the proof of “transcendence” of circular action as conceived by Nicolaus of Cusa, the reworking of the universal significance of the Golden Section by Leonardo da Vinci and Kepler, and C.F. Gauss’ introduction of the complex domain’s “anti-euclidean geometry”. Through all these transformations and revolutions, the circle and sphere remain “the same” as visible forms; the crucial thing that <changes>, is how we “read” them.

Might not the key, the <physical significance> of the Golden Mean, lie in that profoundly <subjective> condition of human existence? Could it be, that the circle, sphere and Golden Section-derived harmonics are <embedded in the structure of the Universe itself>, in the form of “transfinites” undergoing a continuous process of conceptual “redefinition” through validated experimental discoveries, as necessary characteristics of any pathway for the survival and development of the human race?

With these issues of method in mind, turn now to some geometrical experiments! If carried out thoughtfully, the insights from those experiments can help to sharpen our appreciation of the same points, when we return to them later.

As Kepler himself emphasized, his own “reading” of the significance of the sphere and regular solids, was based on Nicolaus of Cusa’s development of the concept of lawful ordering of axiomatically-separated “species” (analysis situs). The sphere bounds, as a higher species, the regular solids with all their mutual relationships and quasi-regular “offspring”. The “Golden Mean” should signify, first of all, a unifying characteristic of the relationship between the lower (regular solid) and higher (spherical) species.

That simple remark already points to something wrong with the commonplace approach, in which the solids are constructed as isolated entities (typically out of sticks fastened at the ends, by gluing together regular polygonal faces, or similar means), without reference to inscribed and circumscribed spheres. Constructions with circular “hoops” have the same drawback, except insofar as we observe the way the curvature of those hoops and their mutual relations are determined by an invisible spherical bounding.

Why is it not permissible, to first construct a regular solid, and then create the inscribed and circumscribed spheres, as it were, by “spinning” the solid? Because, I say, that would misrepresent the true ordering of the species. The very existence of the regular solid, and virtually each step of any proposed construction, presupposes and embodies circular action applied to results of circular action. What, after all, is an angle? Just look at the constellation of angular displacements which accompanies the “birth” of each and every singularity (vertices, edges and faces) of a solid!

Accordingly, I propose that the following task be explored:

To inscribe into any given sphere, by construction, each of the five regular solids. Similarly, to circumscribe a given sphere by each of the five regular solids. (In the first case the solid is to be so constructed in the sphere’s interior, that its vertices touch the inner surface of the sphere; in the second case, the solid is to be constructed around the sphere, in such a way that the midpoints of its faces touch the sphere’s outer surface.)

Try to do the constructions in two ways: (1) by the means of classical euclidean geometry “in three dimensions”; (2) by rotations of the sphere. In the latter case, the task is to construct the great-circle division of the spherical surface, corresponding to each of the regular solids. (For the purposes of this exploration, we may consider that a given rotation generates, as singularities, the corresponding poles and equatorial great circle on the sphere, as well as the arcs traced by already-generated singularities.)

Observe the relationship between the two modes of construction.

The point of this exercise, is not necessarily to complete all the proposed constructions immediately. Indeed, people will observe, that the case of the dodecahedron and icosahedron gives rise to rather extraordinary difficulties! Those difficulties are very much connected with the unique role of the Golden Section. The important thing is to explore the terrain, and to pick up and conceptualize the paradoxes which arise in any given approach.

Readers will probably find that the inscription of an octahedron presents itself as the simplest case, and also opens a pathway of approach for the cube and tetrahedron (in that order). Note, that the constructions involved (at least, the most direct ones) all share certain common features. Observe also, that analogous methods <fail> for the dodecahedron and icosahedron, although the latter provide a pathway for constructing the first three.

DEMYSTIFY THE GOLDEN SECTION! Part II

by Jonathan Tennenbaum

The task laid out in the first part of this discussion, opens up numerous avenues of fruitful exploration. What we shall attempt to do now, is to go as directly as possible to something {essential}.

For this purpose, focus on constructions on the sphere (1). As always, it is imperative that the reader work through the necessary constructions, including making drawings on spheres, making sketches as well as consulting the familiar models of the regular solids in their mutual relationships.

Start with a sphere, considered as a featureless space. By rotating, generate a first great circle, G1. That action is our first singularity. Next, rotate G1 on itself, i.e. rotate the sphere on an axis through G1. This second singularity generates a second great circle, G2, which intersects G1 at right angles. Then, rotate the sphere a third time, around the axis defined by the intersections of G1 and G2. This third singularity generates a great circle G3, which is perpendicular to both G1 and G2. The constellation of G1, G2 and G3, their points of intersection and the curvilinear triangular areas bounded by them on the spherical surface, constitutes a spherical octahedron.

By joining the intersection-points of G1, G2 and G3 in the obvious fashion by straight lines through the interior of the sphere, we obtain the “skeleton” (i.e., edges and vertices only) of a regular octahedral solid inscribed in the given sphere. Projected outward to the sphere’s surface from the center of the sphere, the faces and edges of the octahedron project to the corresponding elements of the spherical octahedron.
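
For readers who want to check the construction numerically, here is a minimal sketch (in Python; the coordinate choices and variable names are mine, purely for illustration). The three rotations may be taken to generate the great circles z=0, x=0 and y=0 on a unit sphere; their six points of intersection are then the vertices of the inscribed octahedron, every adjacent pair separated by a quarter-circle arc:

    # Three mutually perpendicular great circles on the unit sphere
    # intersect in the six vertices of the octahedron.
    import itertools, math

    vertices = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

    # Every vertex lies on the sphere...
    for v in vertices:
        assert math.isclose(sum(c*c for c in v), 1.0)

    # ...and every non-antipodal pair subtends a 90-degree arc: these
    # twelve pairs are exactly the edges of the spherical octahedron.
    edges = [(v, w) for v, w in itertools.combinations(vertices, 2)
             if sum(a*b for a, b in zip(v, w)) == 0]
    print(len(edges), math.degrees(math.acos(0.0)))   # -> 12 edges of 90.0 deg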

(Observe the paradoxical feature of the equilateral curvilinear triangles forming the spherical octahedron, compared with the faces of the solid octahedron. If you want to torment someone with a linear mind, ask them how it is possible to construct a triangle having three right angles!)

So far, we didn’t need to invent much of anything; G1, G2 and G3 seem nearly self-evident predicates of the sphere. What is more, once the spherical octahedron has been formed in that way, the spherical cube and spherical tetrahedron practically fall into our lap!

We have only to generate the great circles which bisect the right angles formed by each pair of the circles G1, G2 and G3 already constructed. (With a bit of thought, the reader will readily discover how to bisect angles in spherical geometry, for example by analogy to the familiar method of plane geometry.) The result is a “net” of great circles whose intersection generates the mid-points of the spherical octahedron’s faces. The constellation of those mid-points defines a spherical cube. If we connect those points by straight lines inside the sphere, we obtain the “skeleton” of a solid cube inscribed in the sphere, in the same manner as the octahedron earlier. Note, however, that the net of great circles just constructed, divides each “face” of the spherical cube along its “diagonals” into four isosceles right triangles; in the world of spherical geometry, the sides of any given square “face,” when continued further, automatically form the diagonals of the four adjacent faces. Observe, that the smaller angles of the isosceles right spherical triangles are each 60 degrees (one-sixth of a complete circle) instead of the 45 degrees (one-eighth of a circle), which we would get in plane geometry. Note, again, the paradoxical relationships defined by the projection of the inscribed, solid cube onto its spherical “father.”
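
The 60-degree claim is easy to test in numbers. In the following sketch (Python again; the function and coordinates are my own choices, with the octahedron placed as in the earlier sketch), the spherical angle between two arcs at their common point is measured as the angle between the arcs’ tangent directions at that point:

    import math

    def normalize(v):
        n = math.sqrt(sum(c*c for c in v))
        return tuple(c/n for c in v)

    def spherical_angle(p, q1, q2):
        """Angle at p between the great-circle arcs p->q1 and p->q2."""
        def tangent(q):                     # direction of the arc p->q at p
            d = sum(a*b for a, b in zip(q, p))
            return normalize(tuple(a - d*b for a, b in zip(q, p)))
        t1, t2 = tangent(q1), tangent(q2)
        return math.degrees(math.acos(sum(a*b for a, b in zip(t1, t2))))

    center = (0.0, 0.0, 1.0)                # mid-point of an octahedron face
    a = normalize(( 1.0, 1.0, 1.0))         # two adjacent vertices of the
    b = normalize((-1.0, 1.0, 1.0))         # spherical cube

    print(round(spherical_angle(center, a, b), 6))  # 90.0: the right angle
    print(round(spherical_angle(a, center, b), 6))  # 60.0, not 45 as in the plane!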

As for the spherical tetrahedron, it is already “there” (in fact, twice). It jumps into view, for example, when we “color” every other face of the spherical octahedron, in checker-board fashion. The mid-points of the colored faces (i.e. four out of eight) form the vertices of the spherical tetrahedron, whose “faces” are equilateral spherical triangles, each of whose angles is 120 degrees. Taking the non-colored faces instead gives us a “twin” tetrahedron. We get inscribed solid tetrahedrons by the same procedure as earlier for the octahedron and cube.

Easy going! But what about the dodecahedron and icosahedron? Now the real fun starts.

Let us go for the spherical dodecahedron (the construction of the icosahedron is essentially equivalent). Looking at the network of circles we have created on the sphere, there is no lack of angles to bisect and vertices to connect. We think to ourselves: try to find pentagons, pentagons! We construct more great circles. The thing just becomes more complicated and confusing. A “bad infinity”! We realize that the regular solids embody a form of “closure,” but beyond the octahedron-cube-tetrahedron, we never seem to get it. Frustration sets in, then rage. Why can’t I find the trick? Soon many of us are in a fit, just connecting things at random (the famous “connectoes”), and generating garbage. Others have drawn back into the secrecy of their rooms, scribbling equations in the hope that the “secret” will emerge by some sort of magic.

Enough of this! Only fools allow their approach to a problem to be defined in terms of the so-called “givens” (as most of us were drilled to do in school)! Rather, successful survival depends on being able to think {backwards} from the point in the future where we know we need to go, and to define our approach to the present {from that vantage-point}.

So, let us start afresh. Juxtaposing the dodecahedron-icosahedron to the {species} cube, octahedron and tetrahedron defines a metaphor. Looking to the “future,” imagine spherically-bounded geometry, within which the existence and relationship of {both} is predetermined, as a unified concept. But, wait! The dodecahedron, as the “maximum” of the polyhedra, generates the rest. In other words, the “top-down” ordering is from sphere, to dodecahedron-icosahedron, to octahedron-cube-tetrahedron. In construction, on the other hand, we appear to build “from the bottom up,” even though the “top” is in a sense already “immanent” in the circular action which is the “minimum” of the construction process. The solution? You have to look “from the top-down,” in order to define the pathway to realize, “from the bottom up,” what is already there in potential.

Examine, accordingly, how the octahedron, cube and tetrahedron are contained in the dodecahedron as derived entities. The simplest relationship is with the cube, and is most easily visualized, perhaps, in the solids. The cube is “inscribed” in the dodecahedron, in such a way, that its vertices coincide with 8 of the dodecahedron’s 20 vertices. Observe the relationship of each square face of the cube passing into the interior of the dodecahedron, and the configuration of four pentagons which share vertices with that face of the cube. Note the two vertices of the dodecahedron, which lie “on top of” the said face.

Aha! How do we get from the cube to the dodecahedron? What singularity must be added? How is the just-mentioned pair of vertices lawfully related to the cube? Observe, that each of those vertices is joined, by pentagonal edges, to three other vertices. The corresponding triangle is equilateral, and one of the sides coincides with an edge of the cube. Suddenly, we see the pathway to construct the dodecahedron from the cube, as follows:

Start with the spherical cube, and one of the “faces” of that cube. For purposes of discussion, identify the vertices of the given face by A, B, C, and D, going around the face in clockwise order. Now, rotate the cube around the axis defined by A and its antipode on the opposite side of the sphere. Under that rotation, the edge AB of the cube (i.e. the great-circle segment AB) with its endpoint B, describes a circle. Next, rotate the cube around B, letting the endpoint A trace a circle of equal radius to the first. The intersection of those two circles, constructed on the surface of the sphere, defines two new points, of which one lies inside the curvilinear square ABCD, and the other outside. Call the point inside the square “E.”

Now repeat the same construction, but with C and D instead of A and B. Let “F” denote the interior point, which results from the intersection of circles with “curvilinear radius” CD (= AB) around C and D respectively.

For reasons which {could only be made intelligible from the standpoint of the “finished” dodecahedron, with all its relationships}, the just-constructed points E and F, constitute precisely the “missing singularities” required to transform the cube into a spherical dodecahedron! The great-circle segments EF, EC, ED, FA, FB are all edges of the spherical dodecahedron, forming sides of two adjacent curvilinear pentagons. To construct the rest of the vertices and edges, we merely repeat the same procedure with each of the remaining faces of the cube, in proper orientation (2). The vertices of the dodecahedron consist of the 8 vertices of the cube, together with 2 additional vertices for each face; 8 + (2×6) = 20. Having obtained the spherical dodecahedron in this way, the spherical icosahedron, as well as the corresponding inscribed solids, can easily be derived.
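
A skeptical reader may wish to verify all of this numerically before proceeding. The sketch below (Python with numpy; the coordinates, helper names, and the algebraic shortcut are mine) computes E and F not by physically rotating the sphere, but by intersecting the same two equal circles algebraically: a point X at angular distance theta from both A and B satisfies X·A = X·B = cos(theta) on the unit sphere. The five arcs EF, EC, ED, FA, FB come out equal, and the Golden Section appears, unbidden, as the ratio of the cube’s edge to the dodecahedron’s edge:

    import math
    import numpy as np

    def unit(v):
        return v / np.linalg.norm(v)

    # The four vertices of one face of the spherical cube (unit sphere).
    A = unit(np.array([ 1.,  1., 1.]))
    B = unit(np.array([ 1., -1., 1.]))
    C = unit(np.array([-1., -1., 1.]))
    D = unit(np.array([-1.,  1., 1.]))

    def circle_intersection(P, Q, cos_theta, toward):
        """Point X on the sphere with X.P = X.Q = cos_theta, taken on
        the side of 'toward' (i.e., inside the square ABCD)."""
        a = cos_theta / (1 + P @ Q)              # coefficient of (P + Q)
        n = unit(np.cross(P, Q))                 # normal to the plane of P, Q
        c = np.sqrt(1 - a*a*(2 + 2*(P @ Q)))     # fixed by |X| = 1
        sols = [a*(P + Q) + c*n, a*(P + Q) - c*n]
        return max(sols, key=lambda X: float(X @ toward))

    face_center = unit(A + B + C + D)
    E = circle_intersection(A, B, A @ B, face_center)   # circles around A, B
    F = circle_intersection(C, D, C @ D, face_center)   # circles around C, D

    arc = lambda X, Y: float(np.degrees(np.arccos(X @ Y)))
    print([round(arc(X, Y), 2) for X, Y in [(E,F), (E,C), (E,D), (F,A), (F,B)]])
    # -> five equal arcs of about 41.81 degrees: the dodecahedron's edges

    # The Golden Section, as the ratio of the chords (cube edge over
    # dodecahedron edge):
    print(math.sin(math.radians(arc(A, B))/2) / math.sin(math.radians(arc(E, F))/2))
    # -> 1.6180339... = phi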

Now stand back a moment and think over what we have done. Leave aside the details of the constructions, which admit of many variations and alternative pathways. What is significant, is the following: Although we constructed all five spherical polyhedra by rotation of the sphere, the {species of idea}, which we needed in order to {devise} the construction, was {not} the same in each case. The octahedron, cube and tetrahedron came out almost as a linear series, through a process of successive bisections. Relative to that sort of process, the dodecahedron and icosahedron are utterly inaccessible. We were able to jump over that apparently “unbridgeable chasm” — how? By introducing a new principle into the process — a principle which we adduced from the “end result,” {before we had that result}!! Cheating? No. Time-reversal.

Now think back to the starting-point of this whole discussion: the work of Leonardo da Vinci and Kepler on the Golden Section. Keep in mind the essential subjectivity of science. Is not “time-reversal” a determining characteristic of any negentropic process? Including living processes? And did we not just experience {in our own minds} the requirement of “time-reversal” as a {necessary} characteristic of any pathway to construction of the dodecahedron? As opposed to the relatively linear (“inorganic”) octahedron, cube, tetrahedron. Compare Leonardo’s studies of the morphology of living processes, and Kepler’s discussion of the monad principle in his Snowflake paper.

Much more could be said here. But let me end with a little paradox: If what we have said is not far from the mark, then where does the function of {growth} come in, which we connect with the idea of self-similar spiral action? Or, to put it another way: How could it be that the sphere, which in itself appears bounded and finite, could embody a principle of unlimited growth?

————————————————-

(1) I will address the construction of inscribed and circumscribed solids, “by the means of Euclidean geometry,” in a future pedagogical discussion. The task, in the Euclidean sense, is not so much to physically build the solids with the spheres; rather, given the radius of the sphere as “unit,” the task is to determine the sides, angles and other parameters of the inscribed and circumscribed solids; not as algebraic values, but in terms of the geometrical constructions involved. On a higher level, these constructions take us to the threshold of Monge and Carnot’s “descriptive geometry.”

(2) For each face the construction can be done in two possible ways, depending on which set of opposite sides are used — AB, CD or alternatively AD, BC, for the case just described. To complete the dodecahedron, the choice of pairs of sides must alternate, so that the constructions in adjacent faces are at right angles to each other, i.e., in such a way that each edge of the cube is used exactly once. The reasons for this will be obvious to those who work through the construction. [jbt]

Curvature: True Versus Apparent

by Jonathan Tennenbaum

In the course of our discussion of “the first measurement of the Universe,” the concept of curvature arose at first in a {negative way}: the {impossibility} of representing the visible arrangement of stars in the heavens on a flat surface. Any attempt to create such a star map inevitably distorts the constellations and the angular relations between the constellations; and the distortion becomes ever greater, the larger the portion of the heavens we attempt to map. Our study of the characteristic singularities connected with this mapping problem, led us to the regular solids. With the discovery of those solids, the concept of curvature, at first a purely negative one, took on a definite form.

Now the concept of curvature, so developed, is something entirely different from the idea of “curvedness” associated with our sense-perception. Unlike the latter, true curvature involves an ontological singularity and can be grasped only by the cognitive powers of the mind. Carl Gauss’ 1827 “General investigations of curved surfaces” focussed on that crucial difference. Taking the case of simple geometrical surfaces as his pedagogical starting-point, Gauss developed the concept of so-called intrinsic or internal curvature of a manifold as an analysis-situs notion, independent of the manner in which such a manifold might happen to be represented in visual or other formal terms.

The significance of this problem should be clear enough. For example: How can we overcome the “spin” which our naive sense-perception tends to impose upon any portion of physical reality? How do we distinguish the true characteristics of a process, from those which merely reflect the effect of arbitrary, extraneous assumptions and other distorting factors dragged in “from the outside”? Gauss’ work on the orbit of Ceres, his work on geodesy and his collaboration with Wilhelm Weber on so-called absolute electrodynamic measurements, all depend upon his approach to this critical issue.

To start off with a very simple illustration, compare the following three surfaces: 1) a flat, plane surface; 2) the surface of a cylinder; 3) the surface of a sphere. Here is the question: We have begun to elaborate a concept of (non-zero) curvature as a characteristic which absolutely distinguishes the spherical surface from the flat one. Now, what should we say about the cylinder? As a form in visible space, the cylindrical surface certainly seems to possess a curvature. But according to the conception adopted by Gauss, the internal or intrinsic curvature of the cylindrical surface is {zero}: it is essentially flat and indistinguishable from the plane surface “in the small”! A paradox? Let us look into the matter more closely.

Gauss related his notion of internal geometry and internal curvature to the characteristics of the minimum pathways (geodesics) in the given manifold. In the plane, these turn out to be straight lines, while on a spherical surface they are portions of great circles. What are they on the cylindrical surface?

Take a smooth, rigid cylinder (preferably of wood, and of not too small a diameter), cut a rectangle out of smooth-surfaced paper so that it wraps once around the cylinder, and fix it tightly to the cylinder by tape or tacks. Now take a piece of thread and stretch it tight along the surface between any two points. This defines a geodesic or “shortest curve” on the surface.

In two special cases the form of these lines is obvious: if the direction between the two points is parallel to the cylinder’s axis, then the shortest curve connecting them on the surface is a straight line; while if the two points are at the same “height” along the cylindrical axis, the geodesic is an arc of the circle which results from cutting the cylinder perpendicular to the axis at the height of the two points. In other cases, however, the form of the geodesic appears more complicated.

To get an overview of the geodesics at any given point, construct a “geodesic circle” as follows. Fix any position on the paper-covered cylinder by a pin or nail. Tie one end of a piece of thread around the nail and the other around the tip of a pen or pencil, and stretch it tight along the surface (or alternatively, use a loop of thread). Trace the curve which results from moving the tip of the pen on the cylindrical surface with the thread kept tight. Also, trace the form of the thread on the cylinder at several positions during that process.

Now, what happens to these curves if we unwrap the paper from the cylinder and lay it flat? The crucial observation to be made is, that the lengths of the curves, traced on the paper, are not sensibly changed by the unwrapping process! In consequence, the minimal curves on the cylinder become minimal curves on the flattened surface — i.e. straight lines –, and the geodesic circle on the cylinder becomes an ordinary plane circle.
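
That observation can also be checked numerically. In the sketch below (Python; the radius and endpoint are arbitrary choices of mine), a straight segment drawn on the unrolled paper is wrapped back onto the cylinder and its length recomputed there; the two lengths agree, which is why the wrapped image of the straight line (in general, a helical arc) is the geodesic of the cylinder:

    import math

    r = 0.05                        # cylinder radius (meters, say)
    s1, x1 = 0.20, 0.15             # endpoint of the segment on the flat paper

    def wrap(s, x):                 # roll the flat coordinates (s, x)
        return (r*math.cos(s/r), r*math.sin(s/r), x)   # back onto the cylinder

    N = 100_000                     # polygonal approximation of the helix
    pts = [wrap(s1*k/N, x1*k/N) for k in range(N + 1)]
    helix_len = sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

    print(helix_len, math.hypot(s1, x1))   # -> 0.2499999..., 0.25: unchanged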

What does this mean? Insofar as the internal metrical properties of the surface are embodied in the array of minimum pathways on that surface, there is no discernible difference between the cylindrical surface and the flat surface which results from unwrapping it. In terms of internal geometry, the flattened surface is a perfect map of the cylindrical original. Hence, the cylindrical surface must have the same internal curvature as the flat one — zero curvature!

Of course, we should not completely overlook two points: Taken as a whole, the complete cylindrical surface contains {closed cycles} — e.g. a circle enclosing the axis — which are not present in the flat surface. To unwrap the cylindrical surface, we must cut or tear it lengthwise, introducing a discontinuity. So the apparent equivalence applies only to smaller, local regions of the cylindrical surface, which do not wrap fully around the axis. Secondly, an “infinitely thin” mathematical surface, completely unchanged by the unwrapping process, does not exist in the physical universe in a literal sense — any more than does a {purely} internal metric, which would not react {in some way} to a change in the relationship between the given manifold and the rest of the Universe.

These points, however, in no way obviate the methodological issue Gauss is addressing. We are rather impelled to pose once more the question which originally launched our whole investigation, but in somewhat different terms than before:

How could we {know}, by measurements taken entirely “inside” a spherical surface (or in any arbitrarily small portion of that surface) — i.e. by measurements made without explicit reference to the sphere’s apparent form in visual space — that no flattening-out of a spherical manifold is possible?
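
Without spoiling the question, one candidate measurement can at least be sketched numerically (Python; the radii are arbitrary choices of mine, R being taken near the Earth’s). Apply the nail-and-thread “geodesic circle” construction described above: on a sphere of radius R, a geodesic circle of geodesic radius rho has circumference 2·pi·R·sin(rho/R), which falls short of the flat-surface value 2·pi·rho by a deficit that grows with rho; on the cylinder, by contrast, the corresponding deficit is exactly zero.

    import math

    R = 6.371e6                    # sphere radius (roughly the Earth's), meters
    for rho in [1e3, 1e5, 1e6]:    # geodesic radius of the measured circle
        C_sphere = 2*math.pi*R*math.sin(rho/R)
        C_flat   = 2*math.pi*rho
        print(rho, (C_flat - C_sphere)/C_flat)  # relative deficit ~ rho**2/(6*R**2)

No flattening-out could remove that deficit, since it is measured entirely within the surface.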

Circular Action And the Fallacy of “Linearity in the Small”–Part I

Can You Solve This Paradox?

by Jonathan Tennenbaum

In some of his letters concerning the “Characteristica Universalis,” Leibniz notably refers to the virtues of rational methods of entrepreneurial bookkeeping and budget-allocation, as such were originally introduced (according to some credible accounts) by Leonardo da Vinci’s collaborator Luca Pacioli. Leibniz remarks, that rational deliberation and discourse should emulate Pacioli’s example, in the sense that everything essential to the judgment of any given matter must be accounted for in an ordered fashion, and no steps left out of the argument.

Now, some readers might jump to the conclusion, that Leibniz was advocating some sort of formal, deductive logic. But, stop to consider the following. In any situation, whether in science or war-fighting, the most important aspects that should occupy our attention are the things we don’t know, as well as things we do. It would be folly, in attempting to account for any situation, to include only those aspects (so-called “facts”) of which we have positive knowledge, leaving no room for the singular areas of potential discovery (or surprise) which are the locus of efficient action (change). Those singular areas, on the other hand, are by no means formless or indeterminate. More than 500 years ago, Nicolaus of Cusa gave a most powerful demonstration, after Plato, of how it is possible to know a great deal about what we don’t know.

Omission from Reality

Thus, if we omit what Nicolaus of Cusa identified, from our “accounting” of reality, then we are falsifying the books. Which is exactly what the Club of Rome did in its “systems analysis” model of the world economy, in which scientific and technological progress were brazenly left out. This is the error of those who define “reality” solely in terms of their present system of hypotheses, leaving no mental room for the efficient reality of higher hypothesis, which generates an increasing density of singularities in every interval and, in a sense, embodies future discoveries within the present. That act of omission of higher hypothesis, is the plunge downward toward the infinite banality of “linearity in the small,” and fascist economics.

To cast some light upon this topic, and upon the fallacy of “linearity in the small,” I propose to carry the last two parts’ discussion of “incommensurability and analysis situs” (see New Federalist issues dated June 9 and June 16, 1997) a step further. Fresh from the Pythagorean discovery of the relative incommensurability of the diagonal and side of a square, let us turn our attention now to the relationship between the circumference and diameter of a circle. We shall find, that the tactic of folding, which served us so well in the previous case, leads to a rather spectacular failure in the present one. By reflecting upon the deeper (axiomatic) reasons for that failure, we are led to a completely new set of physical ideas, which go far beyond the bounds of Euclidean geometry.

Self-Reflexive Relationship

Recall, that our experimental demonstration of the incommensurability of the side and diagonal of a square, was by no means simply a negative result. The transformation of the larger triangle into the smaller similar triangle as a “remainder,” in our construction, seems to provide an exact characterization of the relationship in question, as a self-reflexive relationship of a rather simple type. In a sense, we measured the incommensurability.

Attempting to apply that tactic now to the relationship of the diameter to the circumference of a circle, we might proceed as follows. (Here the same remark as before, is again obligatory: Readers must jump in and work through the constructions themselves.)

First, observe that the diameter of the circle is obtained by folding the circle against itself. Looking at only one of the half-circles defined by that folding, we have a special case of what is sometimes termed a “lune”–i.e., the figure constituted by any chord of a circle, together with the portion of the circumference enclosed between the endpoints of that chord. For convenience of discussion, I will use the expression “arc PQ” (or any other two letters) to designate the circular arc between the endpoints of any given chord of a circle. If we designate the endpoints of the diameter by A and B, we have the lune constituted by the diameter AB and by the circular arc (upper half-circle) arc AB. {(Figure 1.)}

Next, fold the circle once again upon itself. The result is a second diameter, perpendicular to AB, which intersects AB at the circle’s midpoint P. The same second diameter also bisects the circular arc from A to B at a point we shall designate by B′, and which at the same time is one of the endpoints of that second diameter. {(Figure 1.)}

The figure consisting of the two segments AP and PB′, together with the circular arc AB′, we might perhaps regard as an analogue to the right isosceles triangle in our earlier discussion of the Pythagorean discovery. But now the fun begins.

Fold the arc AB′ in toward the interior of the circle, creating, as axis of the fold, the segment AB′. Look at the configuration formed between the triangle APB′ and the lune consisting of segment AB′ and arc AB′. {(Figure 2.)} The triangle APB′ is of a type we have met before–an isosceles right triangle. The relationship of AP to AB′ is that of the diagonal to side of a square. Note, that AP is one-half of the original diameter AB. To the extent our previous discussion of the Pythagorean discovery could be regarded as satisfactory, we could say that we “know” the incommensurable relationship of AB to AB′. But what about the relationship of segment AB′ to arc AB′?

Lunes Not Commensurable

In a sense, the lune formed by AB′ and arc AB′ is the “remainder” which is left when the triangle APB′ is removed from the curvilinear figure AP, PB′, arc AB′. Now, consider the transformation from the lune AB, arc AB, to the lune AB′, arc AB′. Consider the relationship of that transformation, to the transformation we developed in our earlier reconstruction of the Pythagorean discovery. A rather crucial difference comes to light: in our present case, the smaller, “remainder” lune is {not} similar to the original one! While the circular portion arc AB′ is half of arc AB, the segment AB′ is longer than half of AB, and in fact forms an incommensurable relationship to the same.

To get a clearer insight into what is happening here, carry the construction a step further. Fold the circle a third time onto itself (i.e., fold into a half, a fourth, and now an eighth) to create a diameter which divides arc AB′ in half, at a point we shall designate B″. The same diameter bisects the segment AB′ at a point P′. {(Figure 3.)} Now fold arc AB″ toward the interior of the circle, creating as axis the segment AB″. Now, examine the right triangle AP′B″, and the “remainder” when that triangle is removed from the figure formed by the segments AP′, P′B″ together with arc AB″. That “remainder” is the lune consisting of segment AB″ and arc AB″. {(Figure 4.)}

Examining the circumstances of this second transformation, note that the triangles APB′ and AP′B″, while lawfully related, are {not} similar. Nor, of course, is the lune AB″, arc AB″ similar to either the lune AB′, arc AB′ or the original lune AB, arc AB. The reader might take a look at Leonardo da Vinci’s explorations of this sort of problem, in an elaborate series of drawings.

A Bad Infinity

Those zealous and skillful in this sort of geometry, will find ways to characterize the relationship between AB′ and AB″, which are incommensurable, just as AB and AB′ were incommensurable, but with a somewhat different relationship. They may suspect, perhaps not without a twinge of horror, that as we continue the series AB, AB′, AB″, AB‴, the “degree of incommensurability” between the original diameter and the “Nth” segment in the series, keeps building up!

It is clear, that the segments AB′, AB″, etc. are nothing but sides of a square, octagon, 16-gon, 32-gon, etc., inscribed in the circle. What we are doing could be seen, in one respect, as carrying out Archimedes’ “exhaustion principle,” trying to approximate the circle’s area and circumference by polygons of exponentially increasing number of sides. However, it seems fair to say, that our tactic for overcoming the “bad infinity”–a tactic which in a sense succeeded for the case of the diagonal and side of the square–has ended in a spectacular failure. We don’t get “closure,” but instead a bewildering array of increasingly complex, incommensurable relationships.
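
The build-up can be followed in numbers. In the short computation below (Python; I take the radius as 1, so that the diameter AB = 2), each halving of the arc replaces the chord s by sqrt(2 - sqrt(4 - s*s)), a relation obtainable from Pythagoras’ theorem applied to the half-chord; every step piles one more nested radical onto the expression, while the ratio of polygon perimeter to diameter creeps toward 3.14159..., without ever closing:

    import math

    s, sides = math.sqrt(2), 4          # AB' = side of the inscribed square
    for k in range(15):
        print(sides, sides*s/2)         # perimeter/diameter -> 3.14159...
        s = math.sqrt(2 - math.sqrt(4 - s*s))   # one more nested radical
        sides *= 2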

What is the source of the problem? Could it be, for example, that the action of “folding” fails to capture the essence of the circle, or what is behind the circle? What have we left out?

{P.S.} To get a sensuous notion of some of the physical ramifications of the problem discussed here, it is necessary to abandon the armchair. I recommend, as a bare starter, the following “field” experiment. While extremely simple, it should provide a first insight into some of the issues which Gauss dealt with in his approach to geodesy and measurement in general.

Use a wire, or other means suitably devised, to draw a small arc (say, about 20 cm long) of a circle of radius 10 meters or more. Examine the arc so drawn. If done with precision, the difference of the arc from a straight line-segment is practically imperceptible. How do we KNOW that a discrepancy exists at all, and how might its magnitude be characterized and estimated?
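
Leaving the first question open, one rough estimate of the magnitude may be permitted, for checking one’s handiwork afterward (Python; the dimensions are those suggested above): for a shallow arc of length L and radius R, the widest gap between arc and chord, the so-called sagitta, is close to L·L/(8·R).

    L, R = 0.20, 10.0     # arc length 20 cm, radius 10 m, as proposed above
    print(L*L / (8*R))    # -> 0.0005 m: the arc bulges about half a millimeter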

{(To be continued.)}

Circular Action and the Fallacy of `Linearity in the Small’–Part II

CAN YOU SOLVE THIS PARADOX?

by Jonathan Tennenbaum

Very often, the greatest obstacle to progress in a given domain, is the tendency to linger within the axiomatics of a failed approach. To get to the heart of the paradoxes presented last week, let us attempt a fresh look at the original problem. The following considerations are “childishly simple,” but are no less profound in their implications.

Rather than fixate on the special case of the relationship between the diameter and circumference of a circle, I propose to examine, more broadly, the relationship of any circular arc, to any straight line segment. Consider the proposition, that {no} circular arc, no matter how small, could ever coincide with a straight line segment. By reflecting on the evidence for such a proposition, we might gain some new insight into the inner nature of the “creature,” whose existence is suggested by our difficulties in reconciling the diameter with the circumference of a circle.

Geometry in the Small

To this purpose, construct a circle with center P and radius r, and imagine an “extremely small” circular arc with endpoints A and B. How small? Consider, for example, the tiny arc obtained by successively folding the circle upon itself (successive halving) 100, or even 1000 times! {(Figure 1.)} Reflecting on the nearly unimaginable smallness of the angle and arc length involved, the question should pose itself: Do the constructions of Euclidean geometry remain valid and applicable at such extraordinarily small length- (or angle-) scales? At what point do entirely different physical principles confront us, when we pursue the ordering of our Universe (the “Cosmos”) down toward the “infinitely small?”

Without attempting to address that issue directly at this point, let us first assume, that the Euclidean constructions preserve at least a certain degree of relative adequacy for the length-scale we are dealing with. In that case, we can easily evoke the necessary existence of a tiny discrepancy between the circular arc AB and the line segment AB, as follows.

By an additional act of folding, generate a diameter which cuts the circular arc in half, while at the same time halving the line segment AB, at a point we shall call C. {(Figure 2.)} Note, that triangles PAC and PBC are both right triangles; in fact, they are superimposed under the indicated act of folding. Assuming the constructions of Euclidean geometry are applicable at this scale, the sides of these triangles, or rather the squares on those sides, are related by Pythagoras’ famous theorem: The square on the hypotenuse PA is equal to the sum of the squares on the sides AC and PC. Note, that PA has a length equal to the radius of the circle, while PC must necessarily be smaller by some small, but distinct “quantum.”

Close, But Not Quite

Why? The length AC, which is half of AB, while extremely small, is still distinctly greater than zero. Hence, the square on AC also has a non-vanishing magnitude, and since the square on PA is the sum of that tiny square on AC and the square on PC, we must acknowledge that the square on PA is slightly larger in area than the square on PC. The inescapable conclusion is, that PC is shorter than PA–by an exceedingly small, but nevertheless distinct and implicitly calculable quantum. Thus C cannot lie on the circle’s circumference, but rather is slightly separated from it on the inside of the circle.

Since C lies on the line segment AB, that separation at the same time represents a distinct “gap” between the straight-line segment AB and the circular arc, even when the “gap” is hardly perceptible to sense perception. Evidently, the existence of that “gap” is a persistent, irreducible feature of the relationship between circle and straight line.
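
The size of that “gap” can be followed numerically (Python; I take the radius as 1, and let AC be the half-chord remaining after n successive halvings of the semicircular arc): the separation r - PC shrinks roughly as the square of AC divided by 2r, yet remains distinctly non-zero at every step.

    import math

    r = 1.0
    for n in [5, 10, 15, 20]:
        AC  = r * math.sin(math.pi / 2**(n+1))   # half-chord after n halvings
        gap = r - math.sqrt(r*r - AC*AC)         # r - PC, by Pythagoras
        print(n, AC, gap, AC*AC/(2*r))           # gap ~ AC**2/(2r), never zero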

But, what should we say if the assumptions of Euclidean geometry were demonstrated to break down at the given, relatively microscopic scale? The existence of diffraction and refraction of light, for example, might be taken as strong evidence to the effect, that such a breakdown cannot be avoided. In that case, the very existence of such a singularity (the breakdown) were sufficient to establish the non-linear character of the circular arc!

On several accounts, however, the discussion so far hardly suffices to dispel a certain uneasiness. Indeed, we have rather increased it.

Limits of Euclidean Geometry

Does not the character of our attempts to characterize the relationship of circle to straight line, in a sense, display the conceptual limits of Euclidean geometry itself? In other words, although the circle and straight line are acknowledged as forms in Euclidean geometry, their existence and their relationship cannot be accounted for within Euclidean geometry, except in a negative way.

The mere determination of discrepancies or gaps, even at a potentially “everywhere dense” array of locations, does not define the relationship positively. No array of singularities in and of themselves, no matter how densely we try to “pack” them, could ever “add up” to the process which is generating them. From that standpoint, the proposition, that “the circle is a polygon with infinitely many sides” might well be suspected of being nothing but a sophistical trick, a brazen attempt to evade the issue posed by Parmenides’ paradox. Evidently, in order to account for what lies behind the circle, we have to go outside the domain of Euclidean geometry–not to “non-Euclidean geometry” in the usual mathematicians’ sense, but to something very different.

Anticipating that event, let us return once more to our “super-small” circular arc, and look at the matter from another flank. What is the change, when we go along a circular arc from point A to point B? Recall Eratosthenes’ method to estimate the circumference of the Earth. That method was based on observation of a change of angle of sighting, when we observe the Sun from two different points on the Earth’s surface. In our present case, suppose, for example, that a very distant star happens to be located at a certain time “directly overhead” at A (i.e., along the continuation of the ray from P to A). That same star would appear slightly off the zenith (overhead direction) as seen from B at the same moment. {(Figure 3.)} Off by how much? As Eratosthenes noted, the angular displacement from the zenith would be equal to the angle which PA makes with PB at the center of the circle (or the Earth).

Now consider an arbitrary observation point C on the arc AB. Changing the position of C, we see that the star’s displacement from the zenith increases at a constant rate as we move from A to B along the circular arc. {(Figure 3.)} Does this not suggest a completely different approach to the comparison of a circular arc to straight line, than we have taken up to now?

At any point C on the circular arc, construct the perpendicular to the radial line PC, otherwise known as the “tangent.” Compare the direction of the tangent at A with that of the tangent at B. Evidently, the angular change in direction is again equal to the angle formed by PA and PB at the center of the circle. However small that latter angle might be, as long as it has a distinct non-zero magnitude, the same angle will re-emerge as a change in direction of the tangent at A as compared with the tangent at B (for example, as determined by sightings along the tangents onto the celestial sphere). {(Figure 4.)} Note, that for the case of a straight-line segment, as opposed to a circular arc, the “horizon” or direction of motion does {not} change.

Aha! Are we not close to a much more direct, more fundamental characterization of the discrepancy between the circular arc and any line segment?

`Rate of Change'

Consider the implications of the idea of a variable displacement along the circular arc. Insofar as the tangent represents a direction of motion, or alternatively a “horizon” for the point C, the tangent at C changes its direction at a {constant rate} as C moves along the circular arc from A to B. Might we not, as a preliminary hypothesis, take that rate of change as the measure of the relationship between the circular arc and any given straight line? And might we not take the notion of “constant rate of change” as an appropriate basis for redefining the existence of the circle itself, and even the entire domain of geometry?

Indeed: The notion of “rate of change” has no existence within Euclidean geometry! Introducing that notion “from outside,” means a fundamental, axiomatic revolution in mathematics. Note, that the act of “redefinition” of geometry in the indicated way–which, of course, remains to be richly explored–has no assignable “length” or other scalar magnitude. We are back to analysis situs.
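
Before going further, the constant-rate characterization itself is easy to exhibit numerically (Python; the radius and sample point are arbitrary choices of mine): parametrize the circle by arc length and measure how fast the tangent’s direction-angle changes. The rate is 1/r everywhere on the circle, and zero for a straight line.

    import math

    r = 2.0
    def tangent_angle(s):                 # direction of motion after arc s
        theta = s / r                     # position: (r*cos, r*sin)(theta)
        return math.atan2(math.cos(theta), -math.sin(theta))

    s0, ds = 0.3, 1e-6
    print((tangent_angle(s0 + ds) - tangent_angle(s0)) / ds, 1/r)  # both 0.5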

Consult Nicolaus of Cusa’s “Docta Ignorantia,” particularly section 13 of the first book, where Nicolaus has the beautiful figure of a manifold of circles of varying curvature. {(Figure 5.)} The notion of an “interval” between differing rates of change (curvature), has opened up a new pathway toward an intelligible representation of the relationship between the circumference and diameter of a circle.

How Johannes Kepler Changed the Laws of the Universe, Part I

by Jonathan Tennenbaum

The following discussion begins a long journey, along a pathway of <astronomical paradoxes> leading from our discussion of “the simplest discovery,” via the revolutionary work of Johannes Kepler, to the birth of a physics characterized by non-algebraic, elliptic and hypergeometric functions.

In his “Commentaries on Mars” (also known as “Astronomia Nova”), Kepler locates the origin of astronomy itself, in a paradox going back to the most ancient times:

“The testimony of the ages confirms that the motions of the heavenly bodies are in circular orbs. It is an immediate presumption of reason, reflected in experience, that their gyrations are perfect circles. For among figures it is circles, and among bodies the heavens, that are considered the most perfect. However, when experience is seen to teach something different to those who pay careful attention, namely, that the planets deviate from a simple circular path, it gives rise to a powerful sense of wonder, which at length drives men to look into causes. It is just this from which astronomy arose among men.”

Indeed, in our previous discussion of “the simplest discovery,” the hypothetical prehistoric astronomer, observing the cycle of day and night, came upon the paradox of a growing discrepancy between the Sun’s motion and that of the constellation of stars. While the stars pursue what appear to be perfectly circular orbits, the pathway of the Sun, as recorded (for example) from week-to-week and month-to-month on the surface of a large spherical sundial, has the form of a tightly-wound coil. Each day the Sun completes one loop, making a slightly different loop the next day. In the course of a year, the spiral runs forward and then backward, doubling back on itself. More complicated still than the path of the Sun, are the motions of the Moon and planets. The latter display irregular, even bizarre behavior when mapped against the background of the stars. Kepler continues:

“The first adumbration of astronomy explains no causes, but consists solely of the experience of the eyes, extremely slowly acquired. It cannot be explained in figures or numbers, nor can it be extrapolated into the future, since it is always different from itself, to the extent that no spiral is equal to any other in elapsed time … Nevertheless, there are some people today who, riding roughshod over 2,000 years’ work, care, erudition and knowledge, are trying to revive this, gaining admiration of themselves from the mob … Those with more experience consider them with good reason to be incompetent….

“For it was very helpful to astronomers to understand that two simple motions, the first and the second ones, the common and the proper, are mixed together, and that from this confusion there necessarily follows the continuous series of conglomerated motions.”

Indeed, to make some sense out of the motions of the Sun and the planets, it is necessary to disentangle them from the daily apparent rotation of the heavens (“the First Motion”). This is most easily done, by recording the positions of the planets relative to the stars and their constellations, rather than relative to the horizon of the observer on the Earth. In other words, we plot the positions of the planets against a “map” of the stars (the so-called sidereal positions). The resulting motions of the planets relative to the background of stars, became known as the “Second Motions.” The first and second motions combine together to give the observed motions.

In the case of the Sun, we have to overcome the difficulty, that its illumination masks the weaker light of the stars, so the Sun’s position among the stars cannot be observed directly. But there are many ways to adduce it indirectly; for example, we can observe the positions of the constellations visible in the still-dark side of the sky opposite to the Sun at the moment of sunrise or sunset, and use the relevant angular measurements to reconstruct the exact position the Sun must have on the stellar map. The result of plotting the Sun’s motion against the “dome” of the stars, is very beautiful: The Sun is found to move along a great circle in the heavens, called the ecliptic, whose circumference is traditionally divided into twelve parts named by stellar constellations (“signs of the zodiac”).

For the planets, however, the sidereal motions turn out to be surprisingly complicated, and even bizarre. Kepler explains:

“Now that the first and diurnal motion had thus been set aside, and those motions that are apprehended by comparison over a period of days, and that belong to the planets individually, had been considered in themselves, there appeared in these motions a much greater confusion than before, when the diurnal and common motion was still mixed in. For although this residual confusion was there before, it was less observed, less striking to the eyes, because the diurnal motion was very swift … (In particular) it was apparent that the three superior planets, Saturn, Jupiter, and Mars, attune their motions to their proximity to the Sun. For when the Sun approaches them, they move forward and are swifter than usual, and when the Sun comes to the sign opposite the planets, they retrace with crab-like steps the road they had just covered.”

What could be the reason for this bizarre “crab-like” behavior of planetary motions, even forming doubled-back loops in the case of the planet Mars? Where is the simple circular motion, which would supposedly constitute the elementary, self-evident form of action in the Universe?

Don’t rush to supply answers from what you were taught in the past, thus cheating yourself out of the joy of reliving some earth-shaking discoveries. Let’s stop and think about this.

Remember first Plato’s parable of shadows in the cave. Are we seeing, in the bizarre motions of Mars and other planets, mere shadows of the real process? Assuming, for example, that we are seeing only a projection of the real planetary motions in space, how could we discover the “true motions” of the planets? Reflecting on this challenge, we soon find ourselves confronted with a seemingly formidable array of interconnected paradoxes.

First, given that astronomers were restricted (until recent decades) to observations made only from the Earth, how could we determine the exact location of a planet in space? In particular, how could we even determine its distance from us?

To see the elementary difficulty involved, pose the task in more general terms. Imagine an observer, located at any arbitrary point in space. In respect to distant objects, the Universe appears to that observer as if projected onto the surface of a large sphere centered at the observer — the so-called “celestial sphere.” The principle of the projection is very simple: Imagine a distant object, such as a star, emitting rays of light in all directions. The rays which reach the observer, form a very thin cone, which intersects the sphere in a tiny circle (assuming the star itself has a spherical cross-section). Now, from the standpoint of what the observer sees, the star has the same appearance as if it were a light source of appropriate size, brightness, color, and so forth, fixed to the surface of the sphere. Or, again, if we were to compare the given star to another star, at <twice the distance>, but also <twice as large>, how could the observer tell the difference? Furthermore, in the case of distant stars (and to some extent even planets, when observed by the naked eye), the ratio of the object’s diameter to distance is so small, and the cone of rays so thin, that these objects are seen as hardly more than mere points; evidently their distances could be varied over a considerable range, without the observer being able to detect the difference.

The situation becomes even more complicated, when we consider the effect of motion. First, consider the case of a distant planet moving at constant velocity in a circular orbit around the observer. As seen from the observer, the planet’s motion over any given interval of time will appear to describe a circular arc on the celestial sphere. It is easy to see, that the same apparent motion would be caused by a planet moving twice as fast, on a circular orbit of twice the radius around our observer.

Actually, the ambiguity is much greater! Construct a plane passing through the original circular orbit. That plane passes through the location of the observer, and cuts the sphere in a great circle. Now draw <any> arbitrary curve on that plane, only subject to the condition, that it encloses the observer without folding back on itself. Then it is easy to construct a hypothetical motion of a planet on that curve, which would present exactly the same appearance to the observer as the original planet moving in a constant circular orbit! All we have to do is construct a ray from the observer to the location of the original planet on its circular orbit. That ray intersects the arbitrary curve in some point P. As the ray follows the motion of the original planet, rotating at constant speed, the point P moves along our arbitrary curve. If we now attach a hypothetical planet to the moving point P, its motion, as seen from the standpoint of the observer, will seem to coincide with that of the original planet. Note, that although the observed image will appear to move always at a constant rate around the observer, the actual speed of the hypothetical planet on the arbitrary curve will be highly variable; in fact, the planet will be accelerating or decelerating at each point where the arbitrary curve deviates from a perfect circle around the observer. Consider, for example, the case where the curve is an elongated ellipse with the observer at one focus.
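
That last case can be played out in numbers. In the sketch below (Python; the ellipse and its parameters are my own illustrative choices), a hypothetical planet is carried along an ellipse with the observer at one focus, in such a way that its observed direction rotates at a perfectly uniform rate; the planet’s actual speed along the curve nevertheless varies fourfold between the near and far points:

    import math

    e, p = 0.6, 1.0                    # eccentricity and parameter of the ellipse
    def position(t):                   # observed direction = t, rotating uniformly
        rad = p / (1 + e*math.cos(t))  # focal polar equation of the ellipse
        return (rad*math.cos(t), rad*math.sin(t))

    dt = 1e-5                          # numerical estimate of the actual speed
    for t in [0.0, math.pi/2, math.pi]:
        (x1, y1), (x2, y2) = position(t), position(t + dt)
        print(round(t, 2), round(math.hypot(x2 - x1, y2 - y1) / dt, 3))
    # -> speeds 0.625, 1.166, 2.5: uniform appearance, non-uniform motion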

The problem becomes more complicated still, if we admit the possibility, that the observer himself might be moving. The paradox already hits us with full force, when we observe the nightly motion of the stars. Are the stars orbiting around us, or are the stars fixed, and the Earth rotating in the opposite direction? Or some combination of both? Supposing the stars are fixed, and the Earth is rotating, what about the Sun? When we “clean away” the effect of the Earth’s rotation, by plotting the Sun’s apparent motion against the “sphere of the fixed stars,” the Sun is seen to move on a circle, the ecliptic. Is the center of the Earth fixed relative to the stars, and the Sun orbiting around that center? Or, is the Sun fixed, and the Earth orbiting with the same speed, but in the opposite direction, around the Sun, on a circle of the same radius? In each case, and in countless other imaginable combinations and variations, the observed phenomena would seem to be the same!

These arguments would appear to demonstrate the complete futility of determining the actual orbit and speed of a planet from its observed motion as seen from the Earth! We seem to be confronting Kant’s famous “Ding an sich” — the pessimistic notion, that Man can never know reality “as it really is.” Can we accept such a standpoint? Were God so cruel, as to create such a hermetic barrier to Reason’s participation in His universe?

During centuries of debate about the motion of the Earth and the celestial bodies, there were those who rejected even the concept of “true motions” as opposed to “apparent” ones, and maintained that <only observations> — i.e., sense perceptions — <are real>. From that sort of radical-positivist standpoint, it makes no difference whether we assume the Earth is fixed and the Sun is moving, or vice-versa; these are merely two among an infinity of mathematically equivalent opinions, none of which have any particular claim to truth.

One of the notable advocates of this kind of indifferentism, sharply and repeatedly denounced by Kepler, was one Petrus Ramus (1515-1572). Ramus was a leading “anti-Aristotelian” of the species of the later Paolo Sarpi. (In other words, he was more Aristotelian than Aristotle!) Ramus held a prestigious Professorship at the College de France and was known for works on philosophy, law and mathematics. In his famous book on elementary mathematics, Ramus banned incommensurables, eliminated the axiomatic approach of Euclid, and rejected the regular solids as insignificant and useless. He went over from the Catholic Church to Calvinism and found his end during the famous “St. Bartholomew’s Night.” Kepler put his polemic against Ramus on the very first page of the “Astronomia Nova,” quoting Ramus’ demand for an “astronomy without hypotheses,” and then giving his own, devastating reply:

Petrus Ramus, Scholae Mathematica, Book II:

“Thus, the contrivance of hypotheses is absurd; nevertheless, in Eudoxus, Aristotle, and Callippus, the contrivance is simpler, as they supposed the hypotheses to be true — indeed, they have been venerated as if they were the gods of the starless orbs. In later times, on the other hand, the tale is by far the most absurd, the demonstration of the truth of natural phenomena through false causes. For this reason, Logic above all, as well as the Mathematical elements of Arithmetic and Geometry, will provide the greatest assistance in establishing the purity and dignity of the most noble art [Astronomy – JT]. Would that Copernicus had been more inclined towards this idea of establishing an astronomy without hypotheses! For it would have been far easier for him to describe an astronomy corresponding to the truth about the stars, than to move the Earth, a task like the labor of some giant, so that in consequence of the earth’s being moved, we might observe the stars at rest … I will solemnly promise you the Regius Professorship at Paris as a prize for an astronomy constructed without hypotheses, and will fulfill this promise with the greatest pleasure, even by resigning our professorship.”

The author [Kepler – JT] to Ramus:

“Conveniently for you, Ramus, you have abandoned this surety by departing both from life and professorship. Had you still held the latter, I would, in my judgement, have won it indeed, inasmuch as, in this work, I have at length succeeded, even by the judgement of your own logic. As you ask the assistance of Logic and Mathematics for the noblest art, I would only ask you not to exclude the support of Physics, which it can by no means forego … It is a most absurd business, I admit, to demonstrate natural phenomena through false causes, but this is not what is happening in Copernicus. For he too considered his hypotheses true, no less than those whom you mentioned considered their old ones true, but he did not just consider them true, but demonstrates it; as evidence of which I offer this work…. Thus, Copernicus does not mythologize, but seriously presents paradoxes; that is, he philosophizes. Which is what you wish of the astronomer.”

What is wrong with our arguments? Provoked by Kepler’s remarks, reflect for a moment on the paradox of “unknowability” of the true planetary motions, presented above. Is the Universe really unknowable in that way? Or might it not rather be the case, that our reasoning contains some pervasive, false assumption, which is the root of the trouble?

(Note: This discussion begins a longer series, which will not run consecutively, but will nonetheless constitute a coherent whole.)

From Cardan’s Paradox To The Complex Domain, Part I

by Jonathan Tennenbaum

Contrary to British-authored mythologies, the intense interest on the part of Greek geometers from Pythagoras to Eratosthenes, in so-called “unsolvable problems” of geometry, had nothing to do with an idle fascination in mathematical puzzles. At issue, in the investigation of such problems as doubling a cube, trisecting an arbitrary angle, constructing a regular 7-sided polygon etc. was nothing less than the notion of Natural Law, as a higher principle subsuming an ordered series of {sets of physical principles}, each embodying a higher per-capita power of Mankind over the Universe.

In fact, there is no absolute “unsolvability” of the above-mentioned and other problems, except relative to a given, fixed set of principles of construction such as the ruler-and-compass constructions of so-called Euclidean geometry. Archimedes, Nicomedes, Eratosthenes and others already developed a whole array of “solutions” based on introducing additional principles, embodied in higher-order curves, constructions in higher dimensionalities, and the use of various physical mechanisms and instruments.

The issue was not, that a given problem were “unsolvable” in some absolute sense, but rather that: 1) it could not be solved in the terms in which it had been posed, i.e. in terms of a certain implicitly circumscribed set of principles; 2) it {could} be solved with the help of the discovery and introduction of one or more {new} principles, lying {outside} the given domain, but demonstrated to be physically valid; 3) that the arrays of principles, arising this way, are implicitly ordered by a notion of Man’s increasing {power} over the Universe.

A simple illustration is the realization that a straight line, by and of itself, could never generate a surface. Nor could a surface ever transform itself into a solid. In both cases, a process of (rotational) extension is required, acting upon the line or the plane “from the outside”, and which already embodies the principle of the higher domain. This realization was the basis for the classical differentiation of geometrical problems, between so-called linear, plane and solid species; and for the notion, that the lower-order domain is always derived from the higher one, and not vice-versa.

However, by the time of Plato the Greek geometers had already conclusively established, that the actual ordering of “powers” is {not} that of simple dimensionality in visual space. The former lies beyond the reach of visual geometry per se, but actually determines the characteristics of action as reflected in visual space.

For example, the problem of {trisecting} any given angle in a plane, only {appears} to be a “two-dimensional” or “plane” problem. In actuality, as demonstrated by Nicomedes and others, it belongs to the same domain of “power” as the doubling of a cube!

That relationship is the original focus of the following discussion. It is key to understanding, why attempts by Cardan and others in the 16th century and afterwards, to derive an algebraic formula for the solution of an arbitrary cubic equation, inevitably ran into a devastating anomaly in the emergence of so-called “impossible”, “imaginary quantities”. The second installment of this discussion, will examine Cardan’s anomaly through the eyes of Abraham Kaestner, setting the stage for Gauss’ subsequent discovery of the complex domain.

Trisecting an Angle

Since the origin of all angles is rotation, we reference all constructions to a circle whose center (marked “O”) is the vertex of the given angle, and whose radius is taken as “1”. The angle itself corresponds to the circular arc of rotation between the two points P and Q on the circle, where the sides (rays) of the angle intersect the circle.

Nothing is simpler in visual geometry, than to double, triple, or multiply a given angle a whole number of times. We have only to set our compass to PQ and mark off a succession of equal arcs on the corresponding circle, starting at Q. Relative to OP, the rays OR, OS, OT, joining the center O to the endpoints of those successive arcs, represent double, triple, quadruple etc. the original angle between OP and OQ.

But what about the {inversion} of that process: to {divide} a given angle into a whole number of equal parts? Bisecting an angle is easily accomplished with the aid of ruler and compass, but the problem of dividing an arbitrary angle into {3} equal parts, presents a whole different kettle of fish. Centuries of attempts to develop a general solution within the domain of the ruler-and-compass constructions of plane geometry, ended in failure. Why? Key to this is the species-relationship to the doubling of the cube, known to the Greek geometers around the time of Plato.

Evidently, dividing the angle into three or any other number of equal angles, is equivalent to dividing the corresponding {circular arc} into the same number of equal arcs.

Now, it is a relatively easy matter to divide a straight line segment (hypothetical “zero curvature”) into three or any other number of equal segments, by ruler-and-compass constructions. Someone might, accordingly, attempt the following “solution” for trisecting a circular arc: First trisect the {chord} of the given arc; then {project} the division-points from the center onto the circular arc (see Figure 1a).

This attempt fails, for reasons Cusa and Leonardo emphasized in their discussions of the {distortion} introduced by any projection between straight and curved lines. The projected images on the circle, of the three equal segments of the chord, are no longer equal as arcs.

Figure 1
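For readers who wish to check this numerically, here is a minimal sketch in Python (my own illustration, not part of the classical construction; the 60-degree angle is an arbitrary example, and the unit circle is centered at the origin, as above):

    import math

    alpha = math.radians(60.0)      # the given angle POQ (arbitrary example)
    P = (1.0, 0.0)
    Q = (math.cos(alpha), math.sin(alpha))

    for k in (1, 2):
        # point dividing the chord PQ in the ratio k : (3 - k)
        x = P[0] + k / 3.0 * (Q[0] - P[0])
        y = P[1] + k / 3.0 * (Q[1] - P[1])
        # projecting from the center O onto the circle preserves the
        # direction from O, so the arc position is just that direction
        print(f"division point {k} projects to {math.degrees(math.atan2(y, x)):.4f} degrees")

    # Equal arcs would require 20 and 40 degrees; the projections land
    # near 19.1066 and 40.8934 degrees instead: unequal, as Cusa insisted.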

Conversely, if we project the division-points of an already trisected arc onto the chord joining the arc’s endpoints (by drawing radial lines from the circle’s center to the division-points on the circular arc) we get an {unequal} division of the chord (see Figure 1b). Furthermore, the lengths of the resulting segments on the chord, taken in and of themselves, do not manifest any simple proportionality; rather — as it turns out — the attempt to express the convoluted relationships in algebraic form, inevitably leads to what are called “equations of the third (cubic) degree”.

Turn that around in your mind. Might it not be the case, that the appearance of complicated combinatorial-algebraic relationships among ordinary “scalar” magnitudes (whole numbers, “real numbers” measuring the lengths of straight line segments etc.) reflects the fact, that we are dealing, not with self-evident realities, but rather with “shadows”, cast from a higher physical-geometrical domain? Yes indeed, as Gauss demonstrated most conclusively in his work on biquadratic residues! The same principle underlies Gauss’ “fundamental theorem of algebra”, and its pre-history in the celebrated, centuries-long controversy over “imaginary numbers” appearing in the solution of the cubic equation.

That is where we are headed right now, along a trajectory defined by the ancient problem of trisecting a given angle.

Take a closer look, first, at the {inverse process} — {tripling} an angle — and at the lawful functional relationships, which are generated among selected “shadows” cast by that process.

To triple a given angle POQ, mark off points R and S from Q on the circle, such that the arcs QR and RS are equal to PQ. The rays OP, OQ, OR and OS are equally spaced, so that the angle between OR and OP is {double} that between OQ and OP, and the angle between OS and OP is {triple} the original angle.

For purposes of illustration it is best to represent P as the right-hand endpoint on the horizontal diameter of the circle. Rather than only considering a fixed angle, imagine the point Q as moving along the circle, starting at P (angle 0) and going around in a counterclockwise direction. Call the size of the angle POQ, “alpha”. What happens to the points R and S, as alpha grows?

Clearly, for alpha = 0 all three points Q, R, S coincide with the position P. As alpha grows, Q moves at a proportional rate along the circle, while R runs ahead at twice, S at three times that rate. When Q reaches 90 degrees, R will have reached the position opposite to P at 180 degrees, and S will be at 270 degrees (see Figure 2).

Figure 2

Next, investigate the lawful relationships among the “shadows” cast by that process under various sorts of projections.

The simplest and most characteristic case, is perpendicular projection onto the horizontal diameter (axis) of the circle (the line through O and P).

Denote by q, r, s the perpendicular projections of the points Q, R, S, respectively onto the horizontal axis (see Figure 3a).

Figure 3

How do q, r, s move, as the point Q runs at a uniform rate along the circle starting at P?

Focus first on q. Imagine one end of a string is attached to Q on the circle (the latter oriented in a vertical plane), while the other end is attached to a lead bob, so that the string hangs vertically downward. Q’s projection q is the point where that vertical crosses the horizontal axis through O and P. (This works when Q lies on the upper half of the circle; when Q is on the lower half, we have to project “upward” by extending the direction of the string until we reach the axis.) Note, that the motion of the point q is not uniform like that of Q; rather, q starts very slowly (Q near P), and then accelerates, reaching maximum speed as the angle alpha approaches 90 degrees, and then slowing down again as Q approaches the point opposite P on the left of the circle (180 degrees). At that point, q reverses direction and repeats the process in reverse, as Q runs back to P along the lower half of the circle, and so forth (see Figure 4).

Figure 4

This will be familiar to anyone who knows the so-called trigonometric functions (better termed “circular functions”): the position of the “shadow” point q relative to O, corresponds to the so-called {cosine} of the angle alpha; and q’s motion has the form of a simple “harmonic” vibration, whose frequency is the number of revolutions of the moving point Q around the circle, per unit time.
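A few lines of Python suffice to see this non-uniformity of q’s motion (a sketch, assuming the unit circle, with Q advancing uniformly in 30-degree steps):

    import math

    prev = 1.0
    for deg in range(0, 181, 30):
        q = math.cos(math.radians(deg))     # position of the shadow q
        print(f"alpha = {deg:3d}   q = {q:+.4f}   step = {q - prev:+.4f}")
        prev = q
    # The steps are smallest near alpha = 0 and 180 degrees (q nearly at
    # rest) and largest near alpha = 90 degrees (q at maximum speed).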

More entertaining, is to watch the {simultaneous} interrelated motions of q, r and s, as the latter two oscillate along the axis with frequencies {twice} and {three times} that of q! (see Figure 5).

Figure 5

Does there exist a mutual relationship among the positions of the three “shadows” q, r, s, taken by themselves, which remains valid and invariant throughout the process?

To answer that, “freeze” the motions for a moment, and pose the question again as follows: what direct relationships among q, r, and s flow from the circumstance, that the corresponding points on the circle — Q, R and S — were generated from P by rotation through the angles {alpha}, {2 x alpha} and {3 x alpha} respectively?

(For the purpose of illustration, it is best to take the case, where Q lies in the upper right-hand quadrant of the circle with the angle POQ less than 45 degrees. The other cases are essentially equivalent.)

Note the three {right triangles} OQq, ORr, OSs, each of which has hypotenuse equal to unity (the radius of the circle), and whose angles at O are {alpha}, {2 x alpha} and {3 x alpha} respectively. How are those three triangles related? In effect, the successive rotations through the angle {alpha} on the circle, imply transformations of the triangle OQq into ORr and then into OSs. But generally speaking, the triangles are neither congruent, nor similar to each other in shape (see Figure 3a).

Noting the doubly and triply self-reflexive character of the action involved — i.e. an action applied to itself, and then to the result of that — it should occur to us, that the points R and S bear the same relationship to the axis OQ, as the points Q and R do to the axis OP.

This remark suggests that we consider the perpendicular projections or “shadows” of R and S on the axis OQ — call these r’ and s’, respectively — as well as the projections r and s on original axis OP (see Figure 3b [partial construction]).

Figure 3

Note, that the right triangles ORr’ and OSs’, arising from the new projection, are congruent to OQq and ORr respectively. In fact, the latter are carried into the former by exactly the same circular rotation through {alpha}, that carries Q into R and R into S. For the same reason, Or’ = Oq and Os’ = Or.

These observations now provide the key to unravelling the relationships between the “shadows” q, r and s.

The construction required is a bit laborious, but worth going through in detail, while keeping an eye out for the underlying bounding principle. For behind the following, nested chain of similarity relationships, lurks Gauss’ complex domain.

Algebraic Equations Arise Through Projection of Rotational Action

The point of departure for unravelling these relationships is the first triangle, OQq, and in particular its base and height — the segments Oq and Qq — whose lengths we shall call X and Y, respectively (see Figure 3a). The two are linked to each other through Pythagoras’ relationship: the sum of the squares of X and Y is the square of the radius of the circle, which we took as 1.

Now proceed as follows, concentrating first on q and r.

First project the point r’, lying on the axis OQ, down onto the original axis OP, obtaining r”. This is, so to speak, “the shadow of a shadow” (see Figure 3b).

We can obtain the position of r” quite easily, because the result of projecting from a straight line (in this case OQ) onto another straight line (OP) is to transform the distances along the line by a {constant factor}, as measured from the point of intersection of the two lines.

In our present case, we can determine the factor involved by comparing the length OQ, with the projected length Oq. OQ being of length 1 (Q lies on the circle), the factor is just the value of the length Oq, namely the quantity we have called X.

Since Or” is the projection of Or’, its length is X times that of Or’. The latter, as we already noted, is equal to Oq, whose length is X. So the length of Or” is X times X, or {X squared}!

So much for r”. How do we get from there, to r? We have to compare the {direct} projection of R from the circle down to the axis OP — which gives us r — with the “double” projection, from R onto r’ on OQ, and then from r’ to r” on OP.

The difference between the two arises from the fact, that the first step in the “double” projection occurs at an angle, which is “skewed” relative to the vertical direction of the other projection. I am talking about the angle between the vertical line Rr, and the segment Rr’. What is that angle? With a bit of reflection, you can see it is none other than {alpha}. For, Rr’ is perpendicular to the line OQ, which itself is “rotated” by alpha relative to the horizontal line OP. (To put it a different way, the triangle ORr’ is the result of rotating the triangle OQq around O by the angle alpha. In that process, the directionality of each of the triangle’s sides is changed by the same amount.)

The result of the “skew” projection that generated r’, is that its ensuing projection onto the horizontal axis will lie a certain distance to the right of the direct projection r. By how much?

Draw the perpendicular line segment from r’ to the vertical line Rr, and let r* denote the endpoint of that segment. Then r’r* will be parallel, and equal in length, to the segment between r” and r on the horizontal axis. (see Figure 3b)

Now observe that the triangle Rr’r* is {similar} to the original triangle OQq. Indeed: by construction Rr’r* has a right angle at r*, while the angle at the vertex R, as we just saw, is alpha.

(Note also, that Rr’r* is rotated by 90 degrees relative to OQq. This, as we shall see later, reflects the action of the complex number “i”, lurking in the background of this whole construction!)

The similarity means that the sides of Rr’r* are proportional to the corresponding sides of OQq, by a common factor of proportionality. Comparing the hypotenuses of the two, note that the hypotenuse of OQq — the segment OQ — has length 1; while the hypotenuse of Rr’r* — the segment Rr’ — is equal to Qq, the length of which we designated “Y”. The ratio is thus 1:Y, i.e. the factor of proportionality is Y.

In the similarity relationship between the triangles Rr’r* and OQq, the length we are looking for — namely r’r* — corresponds to the side Qq of the triangle OQq, which again has length Y. So, to get the length r’r*, apply the factor of proportionality Y to the length Y. The result is YY, {Y squared}!

To get from r” to r, we thus have to move to the {left} by a distance {Y squared}. On the other hand, we found the length of Or” to be {X squared}.

Our conclusion: {r is located at distance [X squared – Y squared] from O along the horizontal axis}.

Here X stands for the length Oq, Y for the length Qq. Remember, that X and Y are linked to each other, as we noted above, by Pythagoras’ relationship XX + YY = 1 (Q lies on the unit circle). From this YY = 1 – XX, so that XX – YY = XX – (1 – XX) = 2XX – 1.

The result is to express the position of r {directly} as a function of q: The distance Or is equal to 2XX – 1, i.e. twice the {square} of the distance Oq, minus one.

Don’t miss the remarkable implication: the process of {doubling} the angle, by self-reflexively applying the rotation to itself, results in a {quadratic} relationship — i.e. one involving a {square} power — among the scalar “shadows” generated by that process!

Note also certain “topological” features of the relationship of X to 2XX – 1, that reflect the different rates of motion of Q and R along the circle, as the angle alpha grows. (Remember, X measures the segment Oq, not alpha directly). For example, X = 1 corresponds to the case alpha = 0, when P, Q, and R coincide. Sure enough, for this value of X, 2XX – 1 is also equal to 1. On the other hand, X = 0 corresponds to the case where Q lies at the top of the circle (alpha is 90 degrees); in this case, 2XX-1 = -1 and, sure enough, R lies {opposite} to P, at an angle of 2 x alpha = 180 degrees.

Those skillful in geometry, will find no great difficulty applying entirely analogous methods, to determine the position of the {third shadow}, s — first in terms of r, and then in terms of q. It turns out that the distance Os is equal to 4XXX – 3X. Thus, {tripling} an angle results in a {cubic} relationship among the corresponding “shadows” — i.e. one involving the {third power} of X. Hence, by inversion, the implicit relationship between {trisecting an angle} and constructing the {cube root} of a given quantity, which is the general form of the classical problem of doubling a cube.
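The two relationships just derived are easy to test numerically. The following sketch (assuming the unit-circle setup above, so that X measures the shadow Oq, with sign) compares the directly projected shadows of R and S with the algebraic expressions 2XX – 1 and 4XXX – 3X:

    import math

    for deg in (10, 25, 40, 70, 110):
        a = math.radians(deg)
        X = math.cos(a)                     # the shadow q of the point Q
        r_direct, r_formula = math.cos(2 * a), 2 * X * X - 1
        s_direct, s_formula = math.cos(3 * a), 4 * X**3 - 3 * X
        print(f"alpha = {deg:3d}:  r {r_direct:+.6f} vs {r_formula:+.6f},"
              f"  s {s_direct:+.6f} vs {s_formula:+.6f}")
    # The paired columns agree in every case, including the negative
    # values that arise when R or S crosses to the left of O.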

In the following installment we shall derive the cubic relationships for the trisection of an angle from an improved and simplified standpoint, and then turn to the celebrated paradox of “Cardan’s formula”, as seen through the eyes of Gauss’ predecessor and teacher, Abraham Kaestner.

From Cardan’s Paradox to the Complex Domain, Part II

by Jonathan Tennenbaum

“In this place it will please us to speak of the great advantages of opening up the fountain of Transcendental Magnitudes, and discovering the reasons, why certain problems are neither plane, nor solid, nor of any other degree, but surpass all algebraic equations.” (Gottfried Wilhelm Leibniz, 1686).

We began the first installment of this series, by recalling the physical-geometrical “powers” investigated by the Greeks, and the unexpected emergence of higher “powers” in connection with what appeared to be a straightforward problem of plane geometry: to divide a given angle or circular arc into two, three or a larger number of equal parts. In the following discussion we shall nail down that relationship, in particular deriving the cubic equation corresponding to the trisection of a given angle, and thereby revealing the inseparable relationship with the doubling of a cube.

The same relationship will become clear, when we later examine the implications of the self-similar spiral, itself a reflection of Gauss’ complex domain. On such a spiral, tripling a given angle of rotation, translates into taking the cube (third power) in terms of the corresponding ratio of radial distances. However, the self-similar spiral itself provides neither the means to trisect an arbitrary angle, nor to construct a cube of a given volume. We shall see later, that both those ancient problems — and countless others — can easily be solved using the higher principle or “power” embodied in the catenary.

The ‘Complex’ Composition of Angles

In the last installment we found an algebraic relationship between the scalar “shadows” generated by the doubling of an angle, in terms of the corresponding arcs on a circle of unit radius. We identified the center of the circle as “O” and called the right-hand endpoint of a chosen diameter (taken as the horizontal) “P”; also, we denoted by Q and R the points on the circle, corresponding to the angles {alpha} and {2 x alpha}, respectively, measured as rotations around O relative to the horizontal ray OP. Finally, we denoted the perpendicular projections of Q and R onto the horizontal diameter by q and r respectively.

Our analysis showed, that the distance Or is equal to twice the square of the distance Oq, minus one. In other words: the position of r relative to O is given by 2XX – 1, where X measures the corresponding position of q. (See discussion on the meaning of negative values of these parameters, toward the end of this installment.)

Now I want to take on the case of tripling the angle {alpha}!

Recalling last week’s discussion, let S denote the point on the circle, corresponding to the angle {3 x alpha}, and let s be the projection of S onto the horizontal diameter. How is s related to q and r?

As a bit of reflection shows, the answer is already implied by what we did last time, to analyze the relationships for doubling an angle. Reworking the essential steps here again, if possible with an actual physical model or corresponding animated diagram as reference, should cause the principle involved to “leap out” at the reader (see Figure 6).

Figure 6

We started from the right triangle OQq, whose horizontal and vertical sides we called X and Y, respectively. We noted that the point R arises from Q, by rotating Q through the angle {alpha}. Applying that rotation to the whole triangle OQq, yields a congruent triangle ORr’, where r’ marks the perpendicular projection of R onto the axis OQ. At the same time, that rotation generates a new right triangle: ORr, whose third vertex r is the projection of R onto the original horizontal axis OP. Our analysis of the relationship of the triangles, showed that the horizontal side of the latter triangle, Or, had length equal to XX – YY. (Using Pythagoras’ relation between X and Y, we found XX – YY to be equivalent to 2XX – 1.)

Now, to get the point S corresponding to tripling the original angle, it is enough to rotate R — itself obtained by doubling the angle alpha — through the same angle once again. Apply that rotation to the whole right triangle ORr. Observe, that the resulting relationships have essentially the same form as the earlier case, when we rotated the triangle OQq through {alpha}, to obtain the result of doubling the angle. The only difference is, that the angle of the triangle ORr at O is not {alpha}, but {2 x alpha}. But our earlier analysis did not really depend on any special assumptions concerning the shape of the right triangle being rotated, but only on the angle of rotation itself ({alpha}, and the parameters X and Y connected with alpha).

Accordingly, suppose we have an arbitrary right triangle ORr with hypotenuse 1 (i.e. R lying on the given circle) and its side Or lying along the horizontal axis OP. Call its angle at O “{beta}”, and the lengths of its horizontal and vertical sides “A” and “B” respectively (see Figure 7a).

Now rotate ORr by the angle {alpha} around O. The vertex R is carried to a point S, the which, relative to the original point P, corresponds to an angle of {alpha} + {beta}. Imagine a weighted string attached to R and hanging down vertically as the triangle ORr rotates. Observe how the angle, between that vertical and the triangle’s side Rr, grows, as the rotation progresses; in fact, that angle will be equal to the angle of rotation itself (see Figure 7b).

Figure 7

After completing the rotation of the triangle ORr through the full angle {alpha}, R is carried to the point we called S, and r to a point s’, corresponding to the projection of S onto the axis OQ. The vertical string attached to R (now at position S) hits the horizontal diameter at the point s, creating the new right triangle OSs (see Figure 8).

Figure 8

We can now unfold relationships entirely analogous to the ones we found for the doubling of the angle {alpha}, but which now apply to the generalized case of the sum {alpha} + {beta}. To wit:

Let s” and s* be the perpendicular projections of s’ onto the horizontal diameter and onto the vertical line Ss, respectively. As we noted last week, the first projection changes lengths by a factor X; since the length of the segment Os’ is the same as that of Or, which we called “A”, the projection of Os’ — i.e. Os” — will have length X x A.

Observe, in addition, that in virtue of the process of rotation which generated an angle {alpha} between the vertical at S and the line Ss’, the right triangle Ss’s* will be similar to the original triangle OQq. At the same time, the hypotenuse of that triangle, Ss’, is congruent to Rr, whose length we called “B”. Since the original triangle’s hypotenuse is 1, the factor of proportionality must be B. As a result, the horizontal side s’s*, which corresponds to the vertical side Qq of OQq, has length B x {length of Qq} = B x Y. On the other hand, that distance is the same as the gap between s and s” on the horizontal axis. Since s” lies to the right of s, we must subtract that distance from Os” — whose length we just found to be X x A — in order to obtain the length Os.

The result of this chain of relationships is: length Os = {X x A} – {Y x B}. That was the horizontal side of the triangle OSs. With a little extra effort, we can also find its vertical side. The latter, Ss, is divided by s* into the two segments Ss* and s*s.

The first of them, which coincides with the vertical side of the triangle Ss’s*, is proportional to the horizontal side of the original triangle OQq (length X), by the factor of proportionality B. So, length of Ss* = B x X.

The second segment, s*s, is parallel to and equal in length to the vertical segment s’s”. Note, that the points O, s’, s” form a right triangle which is slightly smaller than, but similar to the original triangle OQq. The factor of proportionality is the hypotenuse of the former triangle, namely Os’, which is equal in length to Or = A. As a result, the length of the side s’s” is A times the corresponding side of the original triangle, namely Qq = Y. So, length of s*s = length of s’s” = A x Y.

Putting these results together, we find: length of Ss = length Ss* + length s*s = {B x X} + {A x Y}. Summing up: the lengths of the horizontal and vertical sides of the right triangle, generated by the angle {alpha} + {beta}, are {X x A} – {Y x B} and {X x B} + {Y x A} respectively, where X, Y and A, B are the sides of the right triangles corresponding to {alpha} and {beta} (see Figure 7b).

Notice, that the horizontal and vertical sides of the triangle for the sum or composition of the two angles {alpha} and {beta}, each involve all of the four values X, Y, A and B. This “complex” intertwining of parameters is merely the algebraic “shadow” of the physical process of combining two rotations.
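The composition relationships can likewise be checked in a few lines of Python (a sketch; the particular angles of 23 and 41 degrees are arbitrary examples):

    import math

    alpha, beta = math.radians(23.0), math.radians(41.0)
    X, Y = math.cos(alpha), math.sin(alpha)     # sides for the angle alpha
    A, B = math.cos(beta), math.sin(beta)       # sides for the angle beta

    # horizontal and vertical sides for the composed angle alpha + beta:
    print(math.cos(alpha + beta), X * A - Y * B)
    print(math.sin(alpha + beta), X * B + Y * A)
    # Each pair prints identical values: the "complex" intertwining of
    # X, Y, A, B is exactly the composition of the two rotations.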

The Cubic Equation for the Trisection of an Angle

At this point we can easily derive the relationships resulting from tripling an angle, and invert these to obtain the third degree algebraic equation corresponding to trisecting an arbitrary angle.

First, what happens when we apply our “composition formula” to doubling a given angle? In this case {beta} is the same as {alpha}, A = X, B = Y, and the horizontal and vertical sides of the triangle for {2 x alpha} come out as {X x X} – {Y x Y} and {X x Y} + {Y x X} respectively. The first one, XX – YY, we had before; and now we have 2XY as the second side.

Now take that triangle, and rotate it by alpha again. In this case {beta} is the double of {alpha}, and A = XX – YY, B = 2XY, as we just found. The result of the composition formula is now a bit more “hairy”, but lawfully so. For our present purposes we only need the horizontal component, which expresses the position of the “shadow”-point s:

{X x (XX – YY)} – {Y x (2XY)} = XXX – XYY – 2XYY = XXX – 3XYY.

Recalling Pythagoras’ relation XX + YY = 1, i.e. YY = 1 – XX, we can express the latter magnitude in terms of X alone:

XXX – 3XYY = XXX – 3X(1 – XX) = XXX – 3X + 3XXX = 4XXX – 3X.

This is the result I announced at the end of last week’s discussion. The position of the “shadow”-point s, resulting from tripling the angle {alpha}, is related to that of the point q, corresponding to {alpha}, as follows: the length Os is equal (in scalar magnitude) to 4 times the cube of Oq, minus three times Oq.

Thus, tripling an angle, is reflected in an essentially cubic or third-power algebraic relationship among the corresponding “shadows”!
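Readers with access to a computer-algebra system can confirm the result symbolically. A sketch using the sympy library for Python (assuming it is installed):

    from sympy import symbols, cos, expand_trig

    alpha = symbols('alpha')
    # expanding the multiple angles reproduces our expressions in X = cos(alpha):
    print(expand_trig(cos(3 * alpha)))    # -> 4*cos(alpha)**3 - 3*cos(alpha)
    print(expand_trig(cos(2 * alpha)))    # -> 2*cos(alpha)**2 - 1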

Recall that the relationships for doubling an angle involved a square or second-power relationship among the shadows. Thus, the linear, plane and solid geometrical “powers” of Classical geometry, seem to be subsumed within the process of successive transformations of rotation by once, twice, three times an arbitrary angle.

But, what is to prevent us from applying the composition of rotations once again, to obtain analogous relationships for 4, 5 or any other whole-number multiple of an angle? Evidently, each time we apply the rotation {alpha} we increase by one the dimensionality or degree of the corresponding algebraic relationship in the domain of the “shadows”. In this sense, the circular rotation subsumes and transcends all those algebraic dimensionalities. And by the way, didn’t Nicolaus of Cusa refer to the circle as reflecting a higher principle, bounding the linear (algebraic) world of the regular polygons? The latter constitute, of course, special cases of the whole-number division and multiplication of angles.

But returning to our cubic relationship, two remarks are in order.

First, someone who has not been utterly brainwashed by high school or college algebra courses, might rightfully object to subtracting what appears to be a simple one-dimensional magnitude — 3 times the length Oq, i.e. 3X — from the cubic or three-dimensional magnitude 4XXX. Such a subtraction would be plainly absurd; evidently some sort of error or foul play has occurred!

Looking a bit closer at what we did, however, reveals, that the multiplier “3” in the above expression does not signify a simple linear magnitude. Indeed, if you check back, you will find that this “3” originated from the “1” in Pythagoras’ relationship XX + YY = 1. That 1, however, signifies the square of the hypotenuse of the right triangle OQq, i.e. a two-dimensional magnitude. Thus, the magnitude “3X” is actually a magnitude of “cubic” or third order, while being at the same time proportional to X. (Much more could be said about this matter, under the rubric of the devastating fallacies arising from belief in the supposed self-evidence of “simple numbers”.)

Secondly, we implicitly assumed, in our analysis of the relationships among q, r, and s, that the points Q, R, S all lay in the same, upper right quadrant of the circle; we also spoke of “lengths” always as positive magnitudes. Whereas, as the angle {alpha} grows, the points R and especially S, race ahead of Q, and can come to lie on opposite sides and in different quadrants of the circle. In these cases, the layout of the triangles and projections, upon which our derivation depended, changes (see Figure 4). At the same time, note that both 2XX-1 and 4XXX-3X can take on nominally negative values, as for example when X = 1/2. What is the significance of those negative values?

Gauss himself, as well as Lazare Carnot in his famous book on the “Geometry of Position”, devoted careful attention to this question, which is closely related to the analysis situs origin, not only of the negative numbers, but also of the so-called “imaginary” numbers. The following should suffice to identify the essential point:

Real physical magnitudes — as opposed to mathematical fictions — are never “indifferent,” but are invariably associated, at least implicitly, with a notion of directionality or orientation in the Universe. “Negative numbers” arise very simply, in connection with the notion of reversal of direction or orientation. Indeed, when for example the point “r” (corresponding to the angle {2 x alpha}) crosses over to the left of the midpoint O — the which occurs at the moment {alpha} hits 45 degrees, and X becomes less than the corresponding value, namely 1/sqrt(2) — the “length” Or reverses its direction. Exactly at that point, indeed, the value of 2XX-1 hits zero and becomes negative.

Thus, the so-called “rules” of algebraic operations with negative numbers, are no mere conventions or arbitrary inventions of so-called “pure mathematics”; on the contrary, they are determined by the geometrical characteristics of rotational action. When those characteristics are taken into account, and when the differentiation of positive and negative values for the “lengths” Oq, Or, Os etc. corresponds to the distinction between “right and left” relative to the chosen origin O, then the expressions 2XX – 1 and 4XXX – 3X etc. turn out to be valid for all values of the angle {alpha}.

To explore these relationships, graph the cubic function represented by 4XXX – 3X. Note, that the horizontal coordinate X of the graph corresponds to the position of the point “q”, whereas the vertical coordinate (with value 4XXX – 3X) corresponds to the position of the point “s”. (Keep this separate from the representation of motion on the circle, to avoid confusion!) Imagine the overall form that curve must have, to represent the relative motions of q and s, as {alpha} increases. Next, explore the graph numerically, by calculating the value of 4XXX – 3X for a variety of values of X between -1 and 1, noting the points of reversal of direction and their significance in terms of the interrelation of Q, S and q, s.
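For those without graph paper at hand, a short tabulation in Python (a sketch; the step of 0.1 is an arbitrary choice) already reveals the “looping” form of the curve:

    for i in range(-10, 11):
        X = i / 10.0
        print(f"X = {X:+.1f}   4X^3 - 3X = {4 * X**3 - 3 * X:+.4f}")
    # The values rise from -1 to +1 as X goes from -1 to -1/2, reverse
    # direction and fall to -1 at X = +1/2, then rise again to +1 at X = 1.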

So far we have been focussing on tripling a given angle. What about trisecting an angle? For that case, the point S is given, and we have to find the point Q, such that S is the result of tripling the rotation from P to Q. This is evidently equivalent to determining a value of X, such that 4XXX – 3X is equal to the length Os (taking account of +/- sign), where s is the projection of the given point S onto the horizontal diameter. For, once we have the value X, we know the position of Q’s projection q on the diameter; then Q can be constructed as one of the two points of intersection of the perpendicular at q with the circle.

Thus, trisecting an arbitrary angle corresponds to solving the cubic equation 4XXX – 3X = c, where c (corresponding to the position of s on the axis) can assume any value between -1 and 1.

In terms of the graph of 4XXX – 3X, this means finding the intersection-point between that cubic curve, and a horizontal line at height “c” parallel to the X-axis. But, wait a minute! Given the “looping” form of the curve, there will be not just one, but in general three different points of intersection! What do they signify? How could there be more than one solution to trisecting an angle? And what about the doubling of a cube, which corresponds to the cubic equation XXX – 2 = 0? Could there exist three different cubes, having the same volume?
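One can already exhibit the three intersection points numerically. The sketch below (assuming the height c lies between -1 and 1, with c = 0.3 as an arbitrary example) uses the circle itself to produce three values of X, all of which satisfy 4XXX – 3X = c — a first hint of the answer to the questions just posed:

    import math

    c = 0.3                                   # position of s on the axis
    theta = math.acos(c)                      # the given angle, 3 x alpha
    for k in range(3):
        X = math.cos(theta / 3 + 2 * math.pi * k / 3)
        print(f"root {k}: X = {X:+.6f},  4X^3 - 3X = {4 * X**3 - 3 * X:+.6f}")
    # All three values of 4X^3 - 3X print as +0.300000: rotations differing
    # by 120 degrees are carried by the tripling into one and the same angle.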

Part III takes us from the birth of “algebra” in the famous “Hisab al-jabr w’al-muqabala” of the 9th-century Baghdad astronomer Al-Khwarizmi, to Cardan’s paradox and its discussion by Kaestner.

From Cardan’s Paradox to the Complex Domain, Part III

by Jonathan Tennenbaum

As we have emphasized, the subject of Gauss’ “Fundamental Theorem of Algebra” — ostensibly the solution of algebraic equations of arbitrary degree — goes back long before the emergence of what came to be known as “algebra”, to the discussions emanating from the Pythagoreans and continued in the circles of Plato, on the general notion of physical magnitude. It is exactly that line of development, which culminated via Gauss’ breakthroughs in Riemann’s conception of magnitude as a self-developing multiply-extended manifold, where “extension” signifies the differential action of generation and integration of a new principle of physical action into the ongoing process.

The very nature of Riemannian physical action is such, that it generates an increasing density of discontinuities or other sorts of anomalies relative to any attempted formal representation or “projection” of the action involved.

Look at algebra from this standpoint, and all fearful mysteries dissolve into pure fun. That’s what we shall pursue now, in examining the devastating anomalies which developed within algebra itself in connection with the celebrated “Cardan’s rule” for the solution of cubic equations, and which played a central role in the disputes which culminated in Gauss’ 1799 refutation of Euler and Lagrange on the so-called “imaginary numbers” and the hereditary defects of formal-algebraic method.

The origin of “algebra”

According to available accounts, the term “algebra” derives from the Arabic word “al-jabr”, signifying “completion” or “healing”. The term became current through a famous Arabic mathematical treatise, the “Hisab al-jabr w’al-muqabala”, composed by the astronomer and mathematician Abu Ja’far Muhammad ibn Musa Al-Khwarizmi (approx. 780-850). Al-Khwarizmi worked together with Al-Kindi and others at the “House of Wisdom” in Baghdad, founded by the son of Harun al-Rashid as a center of learning. There, alongside original investigations and writings, classical Greek scientific manuscripts were collected and translated into Arabic. The “Hisab al-jabr w’al-muqabala” later became widely used in Europe in Latin translation, transmitting the Indian (decimal) system of arithmetic and now-familiar methods for the rearrangement and solution of equations, which Al-Khwarizmi called “completion” (al-jabr) and “balancing” (al-muqabala).

Of particular historical influence was Al-Khwarizmi’s method of solving quadratic equations, such as XX + 10X = 39. He did not express this with letters, as later became commonplace, but posed the problem instead this way: “a square plus ten of its roots is 39 units. Find such a square.”

How? Draw any square to represent, in hypothetical manner, the square we are looking for, with unknown side (“root”) X. Then “ten of those roots” corresponds to a rectangular area with sides 10 and X. Added together, the square and rectangular areas are supposed to give a total area of “39”; but there is no evident way to combine the two areas, in such a way, that the value of the square will be evident. Al-Khwarizmi proposes the following “remedy” (al-jabr!):

Divide the rectangle into 4 equal rectangles, by cutting it parallel to the “X” side, into 4 rectangles of sides X and 10/4 (= 5/2, or 2 1/2). Now, arrange these four rectangles alongside the four sides of the square XX (you have to draw this to follow the argument!). The resulting figure is nearly a square — all that is missing, is four “corners”! “Fix” the defect by adding four squares, each 5/2 by 5/2. The result is a single, bigger square, whose sides are 5/2 + X + 5/2 = X + 10/2 = X + 5. The area of the big square will be the SQUARE of X + 5. On the other hand, the area of each of the four supplementary squares, is (5/2)(5/2) = 25/4, so the total area supplied was 4 times 25/4, i.e. 25.

Adding that same amount to the right side of the original equation, Al-Khwarizmi finds: the area of the big square, namely {X + 5 SQUARED}, is 39 + the added area of the four “corners”, i.e. 39 + 25 = 64. Thus the square of X + 5 is 64, so that X + 5 = 8 and X = 8 – 5 = 3. In fact, we can easily verify that the value X = 3 does indeed solve the equation XX + 10X = 39. The square Al-Khwarizmi demanded is the 3 x 3 square of area 9.
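A trivial check of the construction, in Python (nothing here beyond the arithmetic of the text):

    X = 3
    assert X * X + 10 * X == 39                 # the original problem
    assert (X + 5)**2 == 39 + 4 * (5 / 2)**2    # the completed square: 64
    print("side of the sought square:", X, "; side of the completed square:", X + 5)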

Some people may recognize in Al-Khwarizmi’s method for “completing the square” the origin of the famous formula for the solution of a general quadratic equation, which students are mechanically drilled in, without ever encountering the simple geometrical idea behind it.

(Indeed, a typical feature of the sadistic “New Math” educational reforms, pushed through in the 1960s, was to systematically suppress the geometrical underpinning of algebra. This included obfuscating the commonplace “rules” for what Abraham Kaestner called “Buchstabenrechnung” — calculation with letters representing unknown or hypothetical magnitudes — the which became a prominent technical feature of the development of algebra in the 15th century and afterwards. Consider, for example, the formula (A + B) x (C + D) = AC + AD + BC + BD.

The “New Math” typically presents this as a deduction from the so-called “distributive law” of addition and multiplication. But the origin of the formula is geometrical: it simply describes the division of the rectangular area with sides A+B and C+D, by corresponding perpendiculars to those sides, drawn at the points that divide them into lengths A,B and C,D respectively. The result is four rectangles of sides A,C; A,D; B,C and B,D respectively. Similarly, the equation:

(A + B) squared = A squared + 2AB + B squared

describes the special case of the division of a square of side A+B into two squares, with sides A and B respectively, and two rectangles with sides A,B. There do arise some interesting subtleties and paradoxes, when the magnitudes involved take on negative, imaginary or some other species of values, and are no longer assumed to be simple lengths, as we shall discuss below. But by depriving students of even the simplest, visual-geometrical image of these relationships, the perpetrators of the “New Math” fraud also blocked the pathway to the deeper physical principles that underlie both algebra and the geometry of visual space. In fact, Bertrand Russell’s “New Math” represents nothing but a revival of the worst features of Euler and Lagrange’s formalist method — exactly the method Gauss refuted in his first paper on the “Fundamental Theorem of Algebra”!)

Applied to a general quadratic equation of the form

XX + BX + C = 0

where B and C represent any arbitrary choice of parameters, Al-Khwarizmi’s approach yields the following result. First, we make XX + BX into a square, by adding B/2 squared = BB/4, exactly as we did for the particular case of XX + 10X above. The general case takes the form:

XX + BX + BB/4 = (X + B/2) squared

Accordingly, add BB/4 to both sides of the quadratic equation, and apply “al-muqabala” — the art of “balancing” or shifting the components of an equation between the sides, in order to “box in” the value of the unknown X. To wit:

XX + BX + C = 0

XX + BX + (BB/4) + C = BB/4

(X + B/2) squared + C = BB/4

(X + B/2) squared = BB/4 – C

X + B/2 = square root of ( BB/4 – C )

X = – B/2 + SQRT( BB/4 – C )

The last in this chain of relationships is essentially nothing but the famous formula for the solution to the general quadratic equation XX + BX + C = 0 (see footnote), and the precursor to Cardan’s attempted solution to the general cubic equation.
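As a sketch, the chain of steps above condenses into a few lines of Python (assuming, for the moment, that BB/4 – C is not negative; that case is taken up below):

    import math

    def solve_quadratic(B, C):
        """One root of XX + BX + C = 0, found by completing the square."""
        return -B / 2 + math.sqrt(B * B / 4 - C)

    X = solve_quadratic(10, -39)      # Al-Khwarizmi's XX + 10X = 39
    print(X, X * X + 10 * X)          # -> 3.0  39.0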

Al-Khwarizmi’s quadratic species

But, beware the algebraicist’s sleight-of-hand! A number of subtleties and paradoxes lie buried beneath the apparently routine procedure we just went through, to “solve” the quadratic equation.

Al-Khwarizmi distinguished between at least four different species of “quadratic” problems. For example:

1) XX + 10X = 2

2) XX = 10X + 2

3) XX + 2 = 10X

4) XX + 10X + 2 = 0

each representing a distinct sort of geometrical relationship.

In the first case, we seek “a square which, when combined with the rectangle whose sides are the square’s root and 10, respectively, gives a total area of 2”. We discussed a problem of this sort already above (with 39 instead of 2).

In the second case we seek “a square whose area is 2 units more than the area of the rectangle whose sides are the square’s root and 10, respectively.” At first glance it might not be clear at all, from the geometrical picture, how to apply the method of “completing the square”. On the other hand, changing the “balance” by shifting the rectangle 10X to the other side of the equation, we can put it in a form apparently similar to the first case. In fact, from the standpoint of determining the unknown X, XX = 10X + 2 is equivalent to XX – 10X = 2.

Compared with Al-Khwarizmi’s construction for the first case, we run into a significant difference: this time we have to take away the area of the rectangle 10X from the square, rather than adding it. After cutting the rectangle into four equal subrectangles with sides 10/4 and X, and trying to fit them in an analogous way into the square XX — in order to “take them away” — we find that they overlap at the corners (make the drawing, to see what I am getting at!). Thus, taking away the four areas from the square would mean to remove each of the four corners twice. But how can one “take away” an area twice, from the same place? After we have removed it once, it is gone; and to remove the same area again would mean taking something from nothing!

Al-Khwarizmi scrupulously avoided such “impossible” operations. In fact, if you add in four extra little squares of side 10/4, one at a time, at the appropriate moments, interspersed between removing the four rectangles one at a time, you can circumvent the difficulty. The result of a suitably ordered sequence of adding the four square areas and taking away the four rectangles is a smaller square, whose side is X – 10/2 = X – 5. From this point on, the solution proceeds entirely analogously to the first case.
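A quick numerical check of this second species (a sketch, following the completed smaller square: (X – 5) squared = 2 + 25 = 27):

    import math

    X = 5 + math.sqrt(27)             # side recovered from (X - 5)^2 = 27
    print(X, X * X - 10 * X)          # -> approx 10.196, 2.0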

The third case is more tangled-up still, if we hold to a simplistic visual-geometrical conception. Indeed, when we try to rearrange the equation XX + 2 = 10X in order to be able to “complete the square”, we get XX – 10X = – 2.

The equation demands, explicitly, that we take away a bigger area from a smaller, to get a negative area — clearly an “impossible” proposition! Here again, the difficulty can be circumvented by adding the four squares (10/4 x 10/4) to both sides of the original equation, before proceeding further.

The fourth case, XX + 10X + 2 = 0, is truly “impossible” in Al-Khwarizmi’s sense. Areas are by their very nature positive magnitudes. The sum of a square (XX) and a rectangle (10X) cannot be less than zero. Hence XX + 10X + 2 cannot be less than 2, and certainly never equal to zero.

All of this indicates quite clearly, that, to the extent the famous “general formula” for the solution of a quadratic equation XX + BX + C = 0 is valid at all, it must presuppose the existence of a different domain, distinct from that of simple visual geometry, but coherent with the latter.

Kaestner on “negational magnitudes”

At first glance, that domain is distinguished by the introduction of negative numbers, which have no self-evident interpretation in terms of lengths, areas and volumes, and were long branded as “impossible” for the indicated reasons. In fact, this issue was first fully cleared up by Gauss himself, in the context of his conceptualization of the physical principle underlying the “imaginary numbers”.

In his 1758 “Anfangsgründe der Mathematik”, which was a standard reference for the teaching of mathematics at the time Gauss began his studies at the Carolineum in Braunschweig, Abraham Kaestner introduced negative numbers in the following manner:

“Opposing magnitudes are magnitudes of the sort, that arise through consideration of such conditions, in which one magnitude reduces another — for example assets and liabilities, forward and backward motion, etc. One of the magnitudes, whichever one chooses, is called positive or affirmative; the opposite is called negative or negational.”

Kaestner proceeds to develop the arithmetic of such “opposing magnitudes” as a new domain subsuming simple “positive” magnitudes together with their “opposites”. Kaestner demonstrates, among other things, why the product of two “negational magnitudes” must necessarily yield a “positive magnitude”. For example, multiplying by -1 transforms a magnitude into its opposite, so that (-1) x 2 is the opposite of 2, i.e. -2; while (-1) x (-2) is the opposite of the opposite of 2, i.e. 2 itself. As Kaestner emphasizes, these relationships apply only to such magnitudes, as admit of a unique opposite, as for example in the case of forward and backward motion along a pathway.

Relative to such a domain of “opposing magnitudes” — as typified by the so-called “real number line” — Al-Khwarizmi’s four species of equations can all be subsumed under a common form. Indeed, in the new domain the species 1) – 4) above are equivalent to:

1) XX + 10X – 2 = 0

2) XX – 10X – 2 = 0

3) XX – 10X + 2 = 0

4) XX + 10X + 2 = 0

respectively, all of which fall under the form XX + BX + C = 0, where B and C can take on both positive and negative values. At first glance, it would seem that the formula derived above, namely X = – B/2 + SQRT( BB/4 – C ), indeed provides a general solution to the quadratic equation, subsuming each of the four cases distinguished by Al-Khwarizmi. The operations he regarded as “impossible” or absurd from the standpoint of lengths, areas and volumes — such as “subtracting a larger from a smaller” — have a perfectly determinate meaning in the domain of forward and backward motion along a line. For example, subtracting a forward motion of 5 from a forward motion of 2, results in a backward motion of 3, i.e. -3, and so forth. The chain of relationships, by which we derived the general solution to the quadratic equation XX + BX + C = 0, is valid irrespective of the values of B and C.

In particular, the general formula provides a solution to the equation XX + 10X + 2 = 0, which Al-Khwarizmi regarded as “impossible”: X = -10/2 + SQRT( 100/4 – 2 ) = -5 + SQRT( 23 ) = (approx.) -0.2041685, a negative magnitude.
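In Python terms (a sketch; note that the residual vanishes only up to rounding):

    import math

    X = -5 + math.sqrt(23)            # = approx -0.2041685
    print(X, X * X + 10 * X + 2)      # the "impossible" equation is satisfied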

The fallacy of closure

Have we invented a “perfect system”? A closed domain in which the algebraicist, sitting in a room with no windows, can solve all problems at the blackboard, without any need to heed the real universe outside?

On the contrary! In fact, our derivation of the “general solution” glossed over an even more devastating paradox, than the emergence of the negative numbers which Al-Khwarizmi and many of his successors treated as “impossible”. Look back at the next-to-last step in the “general solution”:

(X + B/2) squared = BB/4 – C

X + B/2 = square root of ( BB/4 – C )

Contrary to common misconceptions, the algebraic expression, “square root of,” constitutes a question, not an answer! We are asked to find a magnitude, whose square is the given magnitude. Just because the algebraicist has invented a formal symbol in place of answering the question, does not make it any less a question! Although there exists a simple ruler-and-compass construction for determining the square root of any positive quantity, the analogous problem for cube roots leads, as we indicated in the preceding installments, beyond the limits of ruler-and-compass geometry.

What, however, if the magnitude, whose square root is demanded, turns out to be negative? According to Kaestner’s argument, the required square root could be neither negative nor positive. For, the square of a positive quantity is naturally positive, while the result of multiplying a negative magnitude by itself, for the reasons Kaestner indicated, is likewise positive. Thus, extracting the square root of a negative magnitude is “impossible” in the domain of the so-called “real numbers”.

On the other hand, our formula for the solution of the general quadratic equation, demands that we find the square root of BB/4 – C. What if the latter magnitude were to turn out to be negative — as for example in the case, where B and C are both equal to 2, and BB/4 – C = 4/4 – 2 = -1? Apparently, to solve the equation XX + 2X + 2 = 0, we would have to determine the value of SQRT( -1 )!

To get a notion of the problem involved, explore the values of XX + 2X + 2 for a variety of positive and negative values of X. The values turn out all to be positive, with a minimum value of 1, reached for X = -1. In fact, plotting XX + 2X + 2 (on the “Y-axis”) against the value of X (on the “X-axis”), yields a parabola lying entirely in the positive region above the X-axis; whereas XX + 2X + 2 = 0 would imply a crossing of the X-axis by the corresponding curve, which evidently never occurs.
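Python’s built-in complex arithmetic (the cmath module) lets us peek ahead at what such a “forbidden” square root would deliver; a sketch, not yet a justification:

    import cmath

    B, C = 2, 2
    X = -B / 2 + cmath.sqrt(B * B / 4 - C)    # demands SQRT(-1)
    print(X)                                  # -> (-1+1j), i.e. -1 + SQRT(-1)
    print(X * X + 2 * X + 2)                  # -> 0j: the "impossible" root works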

Another “impossible” problem

A useful, related sort of case was discussed by Cardan and his associates in the context of their attempt to grapple with “imaginary numbers”.

Consider for example the following simple geometrical problem: “Given a line segment of length 2, divide the segment into two subsegments of lengths X and W, such that the product of their lengths is equal to 2.”

A bit of reflection shows, that the task is “impossible”! If we divide the segment exactly in half, then the two segments will each have length 1, and the product will be 1, not 2. If, on the other hand, the division is not equal, then one of the two segments (X for example) will be shorter than 1, and so XW will be less than W, which in turn cannot be longer than the total length 2.

The paradox becomes even clearer, when we look at the proposed problem in terms of a division of the square of side 2, implied by any given division of the sides into subsegments of lengths X and W. Drawing perpendiculars at the points of division, the square is divided into the two squares XX, WW plus two X-by-W rectangles. The total area of the original square is 2 x 2 = 4. If, on the other hand, we demand that XW = 2, then the sum of the two rectangles would already be 4, and would thus completely fill up the square! But then there would be no room left inside for the smaller squares XX and WW — unless, by some extraordinary circumstance, the two areas XX and WW were somehow to cancel out, i.e. XX + WW = 0 or WW = – XX. But since the areas of squares are always positive, the latter is clearly impossible.

This “impossible” geometrical problem leads directly to a quadratic equation. In fact, we can restate the problem in algebraic form as a combination of two simultaneous equations:

X + W = 2 and XW = 2

The first equation implies W = 2 – X, and consequently

XW = X x (2 – X) = 2X – XX.

In terms of X, the requirement that XW = 2 now becomes 2X – XX = 2 or, by “rebalancing” the equation: XX – 2X + 2 = 0.

Again, graphing the values of XX – 2X + 2 yields a parabolic curve lying entirely above the X-axis; the minimum value, reached at X = 1, is 1. On the other hand, if we apply our “general formula” for the solution of the quadratic equation to this case, we obtain the paradoxical value X = 1 + SQRT ( -1 ).
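And again, allowing complex values turns the “impossible” division into a consistent one; a sketch:

    X = complex(1, 1)        # X = 1 + SQRT(-1)
    W = 2 - X                # W = 1 - SQRT(-1)
    print(X + W)             # -> (2+0j): the two parts recompose the segment
    print(X * W)             # -> (2+0j): and their product is indeed 2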

At this point, a formal algebraicist like Euler or Lagrange might seek refuge in the following comforting thought:

“After all, doesn’t the appearance of the “impossible” square roots of negative magnitudes in the formula for the solution of a quadratic equation, coincide with the case, where the equation has no real solution? Our formula gives a solution in exactly those cases, in which a solution really exists. ‘Imaginary’ magnitudes like SQRT( – 1) mean nothing, but merely signal the impossibility of solving the corresponding equation. So, our algebraic world is closed and perfect.”

If that’s what you think, you are in for a rather unpleasant surprise! Just wait for the next installment, on Cardan’s formula.

Notes:

The solution to the quadratic equation XX + BX + C = 0 is commonly presented in a slightly different, but entirely equivalent form: X = ( -B + SQRT( BB – 4C ) ) / 2 .

In the case of the equation AXX + BX + C = 0, in which the square term appears multiplied by an arbitrary coefficient A, the solution takes the form: X = ( -B + SQRT( BB – 4AC ) ) / (2A).

Both formulae are simply alternative manifestations of the principle of “completing the square”, and add nothing of substance to the above discussion.

The Fraud of Benchmarking

by Jonathan Tennenbaum

This week’s pedagogical discussion illuminates the issue of “non-linearity in the small” from a somewhat different angle than the preceding ones. We shall take a look at the celebrated case of Mercedes’ famous “A-Class” automobile. The following documentation was researched by Rudiger Rumpf and Ralf Schauerhammer, and will be featured in a forthcoming article in the German magazine FUSION. I have translated the core of their manuscript and added some comments. —

Jonathan Tennenbaum

To remind readers who are not “car freaks” and may not have followed this affair from the beginning: Two years ago the world-famous automobile manufacturer Daimler-Benz — maker of the legendary Mercedes-Benz automobile and now partner of Chrysler in the greatest mega-merger in industrial history — triumphantly introduced into the world market for low-priced compact cars its own, specially-designed model called “A-Klasse.” Not only would Class A offer people with small pocket-books the prestige and “feeling” of driving a “Mercedes,” but the car itself boasted extraordinary features. Instead of the engine being located in front of (or behind) the passenger cabin as in other cars, the A-class has its engine placed {underneath}, with the passenger cabin built on top. This so-called “sandwich” construction, never before utilized in this sort of car, offers greater flexibility in the use of space, while at the same time the passengers sit significantly higher above the ground than in other cars.

On Sept. 23, 1997 test drivers in Denmark found that the Class A tilted onto two wheels during a swerving maneuver (i.e., a sharp turn of the sort needed to steer around an object) at 55 km/hour (34 mph). A month later, on Oct. 30, a Class A Mercedes flipped upside down during the so-called “Elk Test” at 60 km/hour (37 mph), slightly injuring three test drivers in the car.

The serious issue raised by these events is not so much the obvious design weaknesses of Mercedes Class A; what is significant is the kind of faulty {thinking}, embedded in the process of design and development of the car, which ultimately caused the embarrassing and commercially disastrous result. In fact, the main weakness in the design and development process of Class A lay in a dependence on so-called “benchmarking” methods.

Until recently, Mercedes traditionally devoted much more time and investment to developing new models than, for example, Japanese firms. In order to improve its “competitiveness,” in the case of the Class A, Mercedes set the goal of reducing the development time from the traditional 7 years (84 months) or so, to a mere 32 months! Yet the comparison with the development times of Japanese makers is unrealistic and misleading, because Japanese producers typically concentrate on improving already established and proven designs. In that case only about 20-40% of the components must be newly constructed, even to make a new model. But in the case of the Mercedes Class A, 100% of the components had to be newly developed.

Although Mercedes is one of the leading automobile manufacturers in the world, Mercedes engineers had never before built a compact car with front wheel drive. Furthermore, the planned “sandwich” construction had never before been used in a small car. Setting these goals while at the same time cutting the development time from 84 to 32 months, placed huge pressure on the development department of Mercedes. That pressure greatly strengthened the already-existing tendency to think that computer simulations could replace actual, real-life driving tests.

In order to save development time and costs, in April 1993 Daimler-Benz engineers input the then-available, projected basic design parameters of the Class A into a computer system designed to simulate the dynamic behavior of the car. This was done even before the components and parts of the automobile had been constructed. These imaginary driving tests, according to Mercedes, were supposed to be sufficient to provide “all the answers” concerning certain important design decisions, in particular which of three alternative construction types (“Mehrlenker,” “Verbundlenker” or “Laengslenker”) should be chosen for the rear axle assembly. On the basis of those simulations, the cheapest of the three alternatives was chosen as fully acceptable.

As a further test of the projected driving characteristics of the Class A (which had not yet been built, even in prototype), “engineers put themselves behind the wheel of a Mercedes S 280 (a completely different model -JT) which had been programmed to simulate the dynamic behavior of the Class A,” a Mercedes report boasts. “During a double lane change (a maneuver similar to the now-infamous “Elk Test” -JT), which reveals the handling characteristics of an automobile as well as its safety reserves in marginal situations, the Laengslenker axle (the cheapest of the three alternatives -JT) performed convincingly…. In the context of the total concept of the Class A, the Laengslenker demonstrated itself to be the best compromise.” The “compromise” referred to was a compromise between handling characteristics and cost. And here the fact that the Laengslenker was much less costly clinched the decision in its favor. All other Mercedes-Benz models are equipped with the “Mehrlenker” axle, which is much costlier, including in comparison with the systems used in competing models.

In an information brochure on the Class A Mercedes also wrote: “This time there was not enough time to carry out the extensive basic investigations with different axle types, which are normal procedure for the development of a completely new automobile type.”

Other German producers calculate up to 12 months for the costly process of harmonizing and adjusting the chassis and related assemblies in connection with electronic systems such as ABS (anti-lock brake system) and ESP (electronic stability program). This is normally done only after 3 years of testing with prototypes or rebuilt older models in order to determine the proper design of the axle. This is what Mercedes itself had always done before.

For example: In 1989, driving tests in extremely demanding, mountainous areas revealed — contrary to the results of computer simulations — that the braking system of the then-newly-designed Class S auto (V-12 motor with up to 400 horsepower) was far from meeting the full stress performance requirements. The entire brake system had to be completely redesigned. But at that stage the projected beginning of mass production of the auto was still two years in the future.

In the case of the Class A, Mercedes not only did not have the necessary time, but also lacked sufficient capacities in its development department: just in the period from June 1993 to October 1997, 9 new models had their premieres. Juergen Stockmar, Director of Development at Opel, was quoted as saying that many employees in the development departments of automobile producers today are working beyond the limits of their endurance, and that the overworked condition of the employees ran like a red thread through a series of technical breakdowns and other problems.

After the disaster in October 1997 Mercedes was finally able to bring the problems of the Class A under control — although only after three separate attempts and after company head Schrempp had intervened to halt deliveries until further design and development had been carried out. But the methods used to “solve” the problem were rather dubious.

The electronic stability system (ESP), which was originally planned to be sold as an option for 1700 DM, was now included as standard equipment on all Class A autos and delivered to the buyer without additional cost. The ESP had originally been conceived as a supplementary program for safe and stable cars, to assist control of the vehicle under extreme conditions such as wet or slippery roads. But in the Class A, the ESP became indispensable even to carry out a simple avoidance maneuver on a dry surface — something which competing models had never had problems with.

The fact that Mercedes now claims it has solved the problems of Class A by supplementary installation of the ESP system, demonstrates that the fundamental problem behind the Class A disaster has not penetrated to the consciousness of the company’s board members. They are still holding to their belief in benchmarking and computer simulations.

This becomes obvious from the fact that the board of Daimler-Chrysler still refuses to withdraw its even bigger disaster, the regrettably-misnamed “Smart,” from the market. “Flop” would be a more appropriate name for this totally misconceived and technically defective product. Besides the problems shared with the Class A — shorter wheelbase and elevated center of gravity — the developers thought they could simply ignore problems that had been known for decades. These are the problems which arise when one attempts, as with the “Smart,” to realize rear-motor, rear-wheel drive in a vehicle with short wheelbase, leading to a situation where nearly two-thirds of the car’s weight falls on the rear axle.

Ignoring the long and problematic history of constructions of this type, the Mercedes engineers even installed an over-powerful engine (after all, the car was supposed to be “smart”!), with the philosophy that “electronics will fix everything.” Since the car was known to be unstable, the maximum speed was set at 130 km/h. But even below that speed the electronics cannot compensate for the fundamental fallacies in the design. Physics prevents this! The cheap electronic stabilizing program called “Trust” has revealed itself, in all tests which were not designed in advance to give a positive result, as a failure.

Readers will not have failed to recognize, in our story of the “A-Klasse,” a recurring syndrome of today’s larger world: Rather than correct fundamental, axiomatic fallacies in the design of policy, the response to each ensuing disaster is: “we’ll fix it!” The “successful” result is to carry the axiomatic fallacies forward into the next, even worse phase of disaster, whose onset has been rendered inevitable by the follies of such linear “crisis management.”

Note, also, a second point: in a multiply-connected manifold, “dimensionalities” can never be treated as Cartesian independent variables. In substituting or modifying even an apparently minor technical component within a complete functional system such as an automobile or a space vehicle, the potential nonlinear impact of that change upon the characteristic functioning of the whole system is an issue of physics, not mathematics. In a unique experiment, the components of the experimental apparatus and their characteristics, taken in and of themselves, seem to be fully “known.” But the composition of the experiment generates an irreducible anomaly, refuting exactly the sort of linear “curve fitting,” which turned Mercedes’ proud creation into a total flop.

Gauss vs. Empiricism

by Jonathan Tennenbaum

Lyn has emphasized, how Carl Friedrich Gauss’ 1799 dissertation on the so-called “Fundamental Theorem of Algebra”, constituted a devastating refutation of the leading scientific authorities of his day, including Joseph-Louis Lagrange and Leonhard Euler.

Gauss first points out fundamental flaws in purported proofs of the “Fundamental Theorem”, put forward by D’Alembert, Euler and Lagrange in succession, showing that they were based on arbitrary assumptions and fell far short of actually establishing the proposition, they claimed to demonstrate. Then Gauss presents his own, rigorous proof, based on different principles.

So what’s the big deal? Certainly, Gauss’s ruthless exposure of gaping “holes” in D’Alembert’s, Lagrange’s, and Euler’s proofs was a scandal in and of itself, suggesting — as Gauss himself clearly intimates — a shocking degree of conceptual sloppiness on the part of men who were considered models of scientific rigor. Also, Gauss’ concise proof, going to the heart of the matter in a few pages, contrasted with the voluminous and prolix treatises of Lagrange and Euler, which Gauss tore apart in the first half of his dissertation.

But the real issue is not one of mathematics per se, but of physical principle. A PRELIMINARY access to that issue opens up, when we turn from the particularities of the Fundamental Theorem itself (although they are important and indispensable) and first ask the question: WHY did D’Alembert, Lagrange and Euler FAIL? What was wrong with their THINKING?

Someone might say: well, the 20-year-old Gauss was a towering genius. But in what did that genius really consist? Was Gauss’ proof itself so fantastically clever, complicated and ingenious? No, not at all! It is quite simple, natural and direct, once one masters the basic principles involved. From a purely {formal-technical} standpoint, there is nothing in Gauss’ proof, which was not well within the range of what Lagrange, Euler and many others could easily have done.

So, what prevented them from doing so? Ah! Here we come to the exact same “mechanism”, which causes the collapse of seemingly all-powerful empires — empires possessing vast, apparently overwhelming material and intellectual resources, but which collapsed nonetheless. Why is it, that the ruling elites of such empires — and their army of sometimes highly talented, skillful and experienced advisors, analysts and other “lackeys”, selected from the “cream of the cream” of the population — why did they invariably FAIL, at crucial junctures of history, to take actions that might have prevented, or at least greatly delayed, the collapse of their systems? Why is it, that we invariably discover, as the crucial element that finally dooms such empires to destruction, an obsessive insistence on expending every last ounce of resources, skill and cunning, in the attempt to make an intrinsically FAILED system “work” — even against the laws of the Universe — rather than to accept a CHANGE in the underlying, flawed axioms of that system?

So, we have Lagrange and Euler, both highly skillful, knowledgeable and experienced mathematicians, much more so than most leading academic authorities today, but who FAILED most decisively, where Gauss SUCCEEDED. The case of Euler is particularly instructive, because he was, by all accounts, extremely industrious and in some ways quite shrewd and perceptive, as well as a virtuoso master of algebraic methods. Euler also made some not-insignificant experimental discoveries, in number theory and other fields, as Gauss himself pointed out. But, at the same time, Euler was a crude philistine in philosophical terms, and above all a fanatical EMPIRICIST, in the precise sense that Lyndon LaRouche has identified in a recent paper. Lyn writes, in part:

“Empiricism has the form of a synthesis of three, ostensibly mutually exclusive, categorical elements, as follows:

“1. First, the empiricist assumes that no experimentally verifiable knowledge exists outside the bounds of simple sense-certainty.

“2. Secondly, therefore, every cause-effect relationship which can not be located explicitly in a sense-observed agency, is related to a domain of such forms of attributed bias in statistical behavior of observable events, or to some anonymous agency to which neither sense-certainty nor cognitive reason provides access.

“3. Thirdly, the second element leaves available a niche for creating the illusion of the existence of purely magical spiritual powers, operating entirely outside the reach of access by sense-certainty, but able to make arbitrary interventions, even capriciously, into the domain of sense-certainty.”

Let us compare this characterization by Lyn, with the clinical evidence Leonhard Euler supplies for his own case. Take, for this purpose, the same paper that is the immediate target of criticism in Gauss’ dissertation of 1799. This is Euler’s “Recherches sur les racines imaginaires des equations”, published in the “Memoires de l’Academie des sciences de Berlin” in the year 1749.

Euler begins by writing down the general form of an algebraic equation involving a single unknown, “X”. By “algebraic equation” Euler meant any formula, built up from the “unknown” X and specific integers, fractions or so-called irrational numbers, by means of the algebraic operations of addition, subtraction, multiplication and division, and then set equal to zero. So, for example: 2XX – 5X + 10 = 0, or XXX + 3XX – 5X + 21 = 0 and so forth. (XX means X times X or “X squared”, XXX means “X cubed”, and so forth.) (Apparently more complicated cases can occur, as for example (XXXX – 4)/X = 0 or others that involve divisions. It turns out, that divisions can be eliminated by manipulations of the algebraic equation. But these technical aspects are not important for the point we want to make here.)

The problem (as Euler understands it) is to find a specific number or magnitude, which, when put in place of “X” in the formula, yields the value zero when the additions, subtractions, multiplications and divisions are all carried out. Such values became known as “solutions” or “roots of the equation”. Thus, the equation XX – 4 = 0 has two roots, namely 2 and -2. Referring implicitly to the work of Cardan and others in the 16th century, Euler notes (my emphasis):

“It happens (in general) that not all the roots are REAL quantities, and that some of them, or perhaps all, are IMAGINARY quantities.”

Thus, for example, the equation XX + 1 = 0, appears to have no solutions or roots among the magnitudes, that Euler regarded as “real”, namely positive or negative quantities corresponding to positions to the right or left of zero on the “number line” of standard textbook mathematics. For, whether X is positive or negative, XX (X squared) is always positive, so XX + 1 will always be at least 1, or larger, for any X on the “number line”. Hence, a solution or root of XX + 1 = 0, if one could speak of such a thing at all, could only be an “imaginary” entity, as when a formal algebraicist, merely playing with symbols, might reason:

XX + 1 = 0 implies

XX = -1, which implies

X = “the square root of -1”.

But such a value of “X” could have no real existence, because it corresponds to no point on the “number line”. Euler writes:

“One calls a magnitude IMAGINARY, when it is neither greater than zero, nor less than zero, nor equal to zero. This would therefore be something IMPOSSIBLE, as for example sqrt(-1), or in general a + b sqrt(-1), because such a quantity is neither positive, nor negative, nor zero.”

Thus, for Euler a magnitude such as sqrt(-1), which is neither more, nor less than, nor equal to zero, lies outside the domain of sense-certainty, and is therefore “impossible” or “imaginary”. On the other hand, a few paragraphs further down in his paper, Euler insists, that mathematicians must study and utilize these very same “impossible” quantities! Euler writes:

“Although it seems that the knowledge of the imaginary roots of an equation would be devoid of any use, since they furnish no (real) solutions to any problem, nevertheless it is very important in analysis to become familiar with the imaginary quantities. Because we thereby not only obtain a more perfect knowledge of the nature of equations; but the analysis of the infinite can enjoy considerable benefits.”

Euler goes on to remark, that various methods for the calculation of integrals and other mathematical problems, require the use of “imaginary quantities”, even though he himself has denounced those same quantities as “impossible”! Here Euler displays the second and third characteristics of empiricism, detailed by Lyn above, and which are curiously inconsistent with the first point: A mathematician must learn to communicate with GHOSTS, “imaginary quantities”, which are unreal and yet lend the mathematician MAGICAL POWERS to manipulate the visible universe!

Here Euler reveals exactly his empiricist problem. A true physical principle, which can only be generated as an IDEA in a single, sovereign human mind, cannot be known, he thinks. Instead, there are only sense perceptions, on the one hand, and “other-worldly” entities with magical powers, on the other. Above all, formalism itself — like a cult ritual — is supposed to convey magical powers, as many modern physicists, for example, ascribe awesome powers to the so-called “quantum mechanical formalism” today. A revealing example is Euler’s contorted attempt to formally justify the “rules” for multiplication with simple NEGATIVE NUMBERS, in his algebra text of 1770:

“It remains still to resolve the case in which – is multiplied by –, or, for example, –a by –b. It is obvious at first that, as regards the letters, the product will be ab; but it is still doubtful whether the sign + or the sign – must be placed in front of the product; all one knows is that it will be one or the other of these signs. Now I say that it cannot be the sign –: for –a by +b gives –ab, and –a by –b cannot produce the same result as –a by +b, but must produce the opposite, i.e. +ab; consequently we have this rule: + multiplied by + makes +, just as does – multiplied by –.”

This explicitly cultish, gobbledygook formalism became a paradigm for the teaching of mathematics, leading to generation after generation of crippled minds.

Only a bit less openly occult is Joseph-Louis Lagrange, whose famous 1788 “Méchanique analytique” became a prototype for modern “systems analysis”. In his preface Lagrange writes:

“You will find no diagrams in this work. The methods I present, require neither constructions, nor geometrical or mechanical reasoning, but only algebraic operations, proceeding in regular and uniform manner. Those who love analysis will note with pleasure, how Mechanics thereby becomes one of its new branches, and will be grateful to me for having extended the domain of analysis in this way. “

Lagrange thus pretended to MAKE PHYSICS INTO A BRANCH OF FORMAL MATHEMATICS — exactly the opposite of what Leibniz stood for, and what Nicolaus of Cusa and Plato before him had stood for. Common to Lagrange and Euler, is the demand, that no physical principles should be permitted to intrude upon the domain of mathematics! Yet, it is easy to demonstrate, in the typical contemporary physicist and physics student, a fanatical quality of belief in the supposed magical powers of the so-called “Lagrangian”.

As Gauss points out, while pretending to “correct” the defects of Euler’s purported “proof” of the fundamental theorem, Lagrange maintains Euler’s implicit, though baseless, assumption that all roots of an algebraic equation, whether “real” or “imaginary”, must be capable of a formal algebraic representation, in terms of addition, subtraction, multiplication, division and the so-called extraction of roots (square roots, cube roots, fourth and fifth roots and so forth). But exactly the impossibility of such a universal formula for the roots of an equation, was key to Gauss’s understanding of the significance of the complex domain.

Gauss himself, in his early writings in particular, repeatedly refers to the fundamental difference in method between his approach and the stubborn empiricism of Euler.

So, he writes in the introduction to his Disquisitiones arithmeticae, “When I, in the beginning of the year 1795 first took up this sort of (number theoretic) investigations, I knew nothing about the work of the moderns (Euler, Legendre et al) in this field … While I was engaged in other work, I chanced upon a remarkable arithmetic truth — when I am not mistaken, it is the theorem in D.A. Article 108 (**) –; and since I found it not only very beautiful in and of itself, but suspected that it must be connected with other remarkable properties, I devoted my entire energies to comprehending the PRINCIPLE upon which those properties are based, and obtaining a rigorous proof. When at last I succeeded in my wishes, the beauty of these investigations had taken such a hold over me, that I could not tear myself away from them; so it came about, that, as each thing led to another in turn, I had accomplished most of what is contained in the first four sections of this book, before I had seen anything of the similar work of other Geometers.”

In a letter to his former teacher at the Carolineum, E.A.W. Zimmermann in October 1795, soon after he had entered Göttingen University, Gauss wrote:

“I have seen the library and hope to derive from it a considerable contribution to a happy existence in Göttingen. I already have several volumes of the Commentaries to the Petersburg Academy (by Euler) at home, and I have looked through many more. I cannot deny, that it is very unpleasant for me, to see that the largest part of my discoveries in indeterminate Analysis were only discovered for the second time. What comforts me is this: All the discoveries of Euler, that I have found so far, I had also made by myself, plus some more, too. I have found a more general, and, I believe, more natural standpoint, and an immeasurable field (for further discoveries) lies in front of me; Euler made his discoveries over a period of many years, and only after many successive tentaminibus (attempts). “

Euler’s attitude toward the so-called “imaginary” or “impossible” numbers, reflected exactly his own crippling intellectual problem: for him, IDEAS — physical principles grasped by the mind — couldn’t really exist, as objects of conscious deliberation. So, he was reduced to sniffing around, with his nose to the ground, for some sort of magic formulae by which he might manipulate the world. By concentrating on issues of PRINCIPLE, Gauss had overtaken a lifetime of trial-and-error-style number-theoretic investigations by Euler, within less than a year.

FOOTNOTE

(**) It is worth giving here, at least briefly, some idea of the subject of Article 108 of the Disquisitiones Arithmeticae, as this is closely connected with the genesis of Gauss’ Fundamental Theorem of Algebra.

Gauss calls a given whole number N a “quadratic residue” relative to a prime number p, if there exists a “square number” — i.e. the square of a whole number: 1, 4, 9, 16, 25, 36, 49 etc. — such that p divides the difference between that square and N. In Gauss’ language of congruences, the latter condition is expressed by saying, that “N is congruent to some square modulo p”. Of course, the square numbers themselves always fulfill that condition, but the more interesting case is when N is not the square of a whole number. For example, we can easily see that 2 is a quadratic residue relative to the prime number 7 — 7 divides the difference between 9 (a square!) and 2. Also 2 is a quadratic residue of 17 (the square 36 is congruent to 2 modulo 17), and of a whole series of other prime numbers. On the other hand, it turns out that 2 is NOT a quadratic residue relative to 5, nor is 2 a quadratic residue relative to 11, 13 and a whole series of other primes. Thus, the prime numbers fall into two series or species: those for which 2 is congruent to some square, and those for which 2 is not congruent to any square. Taking as N instead of 2 any other non-square number, we get a different division of the primes into species. (The harmonic interrelations of those species are the subject of an extraordinary discovery, which is the centerpiece of the Disquisitiones Arithmeticae — namely the so-called “Law of Quadratic Reciprocity”.)
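The two species described here are easy to explore by direct computation. The following minimal Python sketch (ours, merely to let the reader experiment) tests whether N is congruent to some square modulo the prime p:

    def is_quadratic_residue(N, p):
        # True if p divides the difference between some square and N,
        # i.e., N is congruent to a square modulo p.
        return any((k * k - N) % p == 0 for k in range(p))

    # 2 is a quadratic residue relative to 7 and 17, but not to 5, 11 or 13:
    for p in [5, 7, 11, 13, 17]:
        print(p, is_quadratic_residue(2, p))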

Another way to look at this, implied by Gauss, is to consider the realm of congruences relative to a given prime number, as defining a geometrical domain of a special type. In that domain, congruent numbers are considered to have the same “shape” and to be otherwise equivalent. So, instead of saying, for example, that “2 is congruent to a square modulo 7”, we would actually regard 2 itself as a “square number” in the congruence domain defined by 7. Within that domain, for example, 2 and 9 ( = 3 squared) would be considered equivalent and indistinguishable. Thus, 2 will be “square-shaped” or “of the second power” in some prime domains, but not in others. The prime number p (the “modulus”) is thus not a simple number, but a topological characteristic.

Now, what about the number -1? This special case — which turns out to be of crucial importance in all of Gauss’ “higher arithmetic” — is the subject of the cited Article 108. As we noted above, the criterion for -1 to be a quadratic residue relative to a given prime p, is that p divides the difference between some square number and -1. But subtracting -1 from a number, means the same thing as adding 1 to it; so the condition is equivalent to saying, that there is a square number, such that when 1 is added to it, the result is divisible by p. Examples are easy to find. For example, 4 + 1 is divisible by 5, so -1 is a quadratic residue of 5. Similarly, -1 is a quadratic residue modulo 17. Also, 25 + 1 = 26 is divisible by 13, so -1 is a quadratic residue modulo 13, too, and so on for a certain series of prime numbers. But it turns out, for example, that -1 is NOT a quadratic residue relative to 7, nor relative to 19, nor 23, nor for a whole other series of prime numbers.

As a bit of reflection shows, one can characterize the two species of prime numbers also as follows: Take the series of squares 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121 etc. and add one to each of them: 2, 5, 10, 17, 26, 37, 50, 65, 82, 101, 122 etc. The primes of the first species — those for which -1 is a quadratic residue — are the ones which divide at least one of the numbers in the latter series. Those primes which do not divide any of the numbers 2, 5, 10, 17, 26, 37, 50 etc., form the second species. (How many terms of the series do you need to check, to tell whether a given prime divides one of them?)

For the first species of prime numbers, “-1” is a square number in the corresponding congruence domain, i.e. sqrt(-1) corresponds to a specific value in the domain (for example, 5 is equivalent to sqrt (-1) in the domain of congruences modulo 13), while for the second species, the introduction of sqrt(-1) requires going OUTSIDE the domain.

Two years before his discovery of his first proof of the “Fundamental Theorem of Algebra”, Gauss uncovered the harmonic law governing the distribution of the primes into these two species: the first, as it turns out, are the primes which leave a remainder of one when divided by 4 (such as 5, 13, 17, 29…) and the second species are the primes leaving a remainder of 3 when divided by 4 (such as 3, 7, 11, 19, 23…).

Most significant is the circumstance, that the BOUNDING PRINCIPLE involved is physical in nature, and has nothing to do with any formula or formal procedure. So, for example, in the case of primes of the first species, the existence of a value for sqrt(-1) is “forced”, as a singularity, by the overall geometry of the domain; while the specific value which sqrt(-1) assumes in that domain, must be discovered, a posteriori, by direct observation.
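Both the harmonic law and the a posteriori “observation” of the specific values can be illustrated in a few lines of Python (a sketch of ours, not Gauss’ procedure):

    def sqrt_of_minus_one(p):
        # Search for a value k whose square is congruent to -1 modulo p;
        # return None if no such value exists in the domain.
        for k in range(1, p):
            if (k * k + 1) % p == 0:
                return k
        return None

    for p in [5, 7, 11, 13, 17, 19, 23, 29]:
        print(p, "remainder mod 4:", p % 4, "  sqrt(-1):", sqrt_of_minus_one(p))
    # Primes leaving remainder 1 (5, 13, 17, 29) each yield a value, e.g. 5 for p = 13;
    # primes leaving remainder 3 (7, 11, 19, 23) yield None.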

Motion Is Not Simple!

by Jonathan Tennenbaum

Things are not what they appear, nor does the world function as the naive varieties of “common sense” (horse sense) would have it do. Those who subscribe to Rene Descartes’ doctrine of “clear and distinct” truths, and pride themselves on not listening to anything that smells of “theory,” or cannot be explained in five words or less, are liable to be ripped off by the nearest used car dealer (or stock broker?). For a pedagogical exercise, consider the following sales pitch, invented by wily old Descartes himself.

As every simpleton thinks he knows, the universe consists of “matter and motion.” (In fact, the famed J.C. Maxwell marketed his textbook on physics under that title.) To measure the performance of your used car engine, Descartes tells you, just ask “how much car (weight in pounds or tons) it moves how fast (miles per hour).” You just multiply the pounds by the miles per hour, to get the handy performance rating at Rene’s Used Car Lot. For example, how would you choose between:

Car A: a two-ton “super classic,” with wall-to-wall marble ashtrays and other extras. Flooring the accelerator, it reaches 40 mph in 30 seconds. Rene urges us to buy this “hell of a car.”

And:

Car B: a lower-class model, weighs half as much as the “super-classic,” but reaches much less than twice the speed, namely 60 mph in the same 30 seconds.

A glance at both cars tells you, that their bodies are essentially junk. If anything, the only items of significant value are the engines. Now, Rene will let you have Car A for the same price as Car B, which (“as a friend”) he points out, is a “fantastic deal.” Car A is a “bit slower” but, as you can easily calculate yourself, with two-thirds the speed but twice the mass, its engine performance rating is more than 30% larger than Car B’s.
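Rene’s arithmetic is easy to reproduce; the following few lines of Python (our own restatement of the sales pitch, nothing more) compute his “performance ratings”:

    # Descartes' proposed measure: mass times attained velocity.
    mass_A, speed_A = 2.0, 40.0      # Car A: two tons, 40 mph
    mass_B, speed_B = 1.0, 60.0      # Car B: one ton, 60 mph
    rating_A = mass_A * speed_A      # 80.0
    rating_B = mass_B * speed_B      # 60.0
    print(rating_A, rating_B, rating_A / rating_B)   # ratio 1.33..., one-third larger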

Rene adds another generous offer: If you prefer the smaller, faster car, he will switch the engines for you, and install Car A’s motor into Car B, free of charge. Could you turn down such a deal? After all, with Car A’s engine and half the weight, other things being equal, Car B should zip up to 80 mph in the same time it took to bring the heavier car up to 40.

Rene’s enthusiasm makes you a bit suspicious, on several points. Simplest of all: does the product of mass and attained velocity really represent the work performed in accelerating a car or other massive object up to a given state of motion? “Clear as day!” Descartes explains, appealing to our “horse sense” with the following argument:

Suppose we have a two-ton object. For it to have a speed of 40 miles per hour means, that in any given hour, those two tons move a distance of 40 miles. Dividing the object into two parts, each of 1 ton mass, we see that each of those has been moved 40 miles by that same motion. Obviously, it would be the equivalent amount of motion to move the two halves one at a time, instead of simultaneously, over the same 40 miles. In other words, in the first half hour we move the first half 40 miles, and then during the second half hour we move the other half 40 miles, the result being to move the whole mass 40 miles in the course of that hour. Or, again, since the two halves are identical in terms of mass, it represents the same effort to take only one of them, and move it 40 miles in the first half hour, and then just continue to move it another 40 miles in the second half hour. Thus, with an equivalent process we have moved one ton, 40 plus 40 = 80 miles in the given hour. We repeat this for every succeeding hour. Thus, two tons moving at 40 mph is equivalent to one ton moving at 80 mph. QED.

Corollary: Car A’s motor is a better buy than Car B’s.

An admirable specimen of deductive-type reasoning. But, if you swallow the axiomatics of this argument, you are going to be cheated! Can you prove them wrong? Such a demonstration will be given in next Tuesday’s briefing. [jbt with ap]

Motion Is Not Simple! Part 2

In refuting Descartes on the measure of “quantity of motion” and related points, Leibniz pointed out three interrelated fallacies. First is the implicit assumption, that physics can be subsumed within a deductive form of mathematics. Second is the implicit assumption of “linearity in the small,” that physical action has the form of singularity-free continuous motion or extension in a three-dimensional Euclidean-like space. Third is the assumption that matter is characterized by nothing but such passive qualities as space-filling (extension), inertia, and resistance to deformation. In fact, in his 1686 piece on “A memorable error of Descartes,” and in other locations, Leibniz gave a simple demonstration of physical principle, showing that the process of change of velocity (acceleration) of material bodies involves something which is absolutely incompatible with Descartes’ assumptions. Leibniz demonstrated, for example, that the work of acceleration is NOT proportional to the mere product of the mass with the velocity attained, but (to a first degree of approximation) increases as the SQUARE of the velocity! To accelerate a mass to twice a given velocity, we need, not twice, but FOUR times the work.
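Applied to Rene’s two cars, Leibniz’s measure reverses the verdict. The following sketch (ours, using the velocity-squared proportionality just stated, in arbitrary units) contrasts the two estimates:

    # Descartes' "quantity of motion" (m*v) versus Leibniz's measure of work,
    # which grows (to a first approximation) as m*v*v.
    for name, m, v in [("Car A", 2.0, 40.0), ("Car B", 1.0, 60.0)]:
        print(name, "  m*v =", m * v, "   m*v*v =", m * v * v)
    # Car A: m*v = 80,  m*v*v = 3200
    # Car B: m*v = 60,  m*v*v = 3600
    # By the truer measure, Car B's engine performed MORE work, not less.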

If you have never stopped to consider, how utterly incomprehensible such a result is from the standpoint of naive sense-certainty and “horse sense,” do yourself that favor now.

Review the Huygens-Bernoulli-Leibniz discussion on the cycloid-brachistochrone for a richer development of the same point. Also Leibniz’s discussion of Descartes’ error and of the required notion of “anti-entropic” substance, in his “Discourse on Metaphysics” (Article 18 and preceding and following articles). Consider the relevance of Nicolaus of Cusa’s treatment of the Archimedes problem, and review the whole matter again from the higher standpoint of Lyn’s writings, including on the issue of “time-reversal.”

Here is Leibniz’s paper of 1686, referred to above: “Seeing that velocity and mass compensate for each other in the five common machines, a number of mathematicians have estimated the force of motion by the quantity of motion, or by the product of the body and its velocity. Or, to speak rather in geometrical terms, the forces of two bodies (of the same kind) set in motion, and acting by their mass as well as by their motion, are said to be proportional jointly to their bodies or masses and their velocities. Now, since it is reasonable that the same sum of {motive force} should be conserved in nature, and not be diminished–since we never see force lost by one body without being transferred to another–or augmented, a perpetual motion machine can never be successful, because no machine, not even the world as a whole, can increase its force without a new impulse from without. This led Descartes, who held motive force and quantity of motion to be equivalent, to assert that God conserves the same quantity of motion in the world.

“In order to show what a great difference there is between these two concepts, I begin by assuming, on the other hand, that a body falling from a certain altitude, acquires the same force which is necessary to lift it back to its original altitude, if its direction were to carry it back and if nothing external interfered with it. For example, a pendulum would return to exactly the height from which it falls, except for the air resistance and other similar obstacles, which absorb something of its force, and which we shall now refrain from considering. I assume also, in the second place, that the same force is necessary to raise a body of 1 pound to the height of 4 yards, as is necessary to raise a body of 4 pounds to the height of 1 yard. Cartesians, as well as other philosophers and mathematicians of our times, admit both of these assumptions. Hence it follows, that the body of 1 pound, in falling from a height of 4 yards, should acquire precisely the same amount of force as the body of 4 pounds, falling from a height of 1 yard. For, in falling 4 yards, the body of 1 pound will have there, in its new position, the force required to rise again to its starting point, by the first assumption; that is, it will have the force needed to raise a body of 1 pound (namely, itself) to the height of 4 yards. Similarly, the body of 4 pounds, after falling 1 yard, will have there, in its new position, the force required to rise again to its own starting point, by the first assumption; that is, it will have the force sufficient to raise a body of 4 pounds (itself, namely) to a height of 1 yard. Therefore, by the second assumption, the force of the body of 1 pound, when it has fallen 4 yards, and that of the body of 4 pounds, when it has fallen 1 yard, are equal.

“Now let us see whether the quantities of motion are the same in both cases. Contrary to expectations, there appears a very great difference here. I shall explain it in this way. Galileo has proven that the velocity acquired in a fall of four yards, is twice the velocity acquired in a fall of one yard. So, if we multiply the mass of the 1-pound body, by its velocity at the end of its 4-yard fall (which is 2), the product, or the quantity of motion, is 2; on the other hand, if we multiply the mass of the 4-pound body, by its velocity (which is 1), the product, or quantity of motion, is 4. Therefore the quantity of motion of the 1-pound body after falling four yards, is half the quantity of motion of the 4-pound body after falling 1 yard, yet their forces are equal, as we have just seen. There is thus a big difference between motive force and quantity of motion, and the one cannot be calculated by the other, as we undertook to show. It seems from this that {force} is rather to be estimated from the quantity of the {effect} which it can produce; for example, from the height to which it can elevate a heavy body of a given magnitude and kind, but not from the velocity which it can impress upon the body. For not merely a double force, but one greater than this, is necessary to double the given velocity of the same body. We need not wonder that in common machines, the lever, windlass, pulley, wedge, screw, and the like, there exists an equilibrium, since the mass of one body is compensated for by the velocity of the other; the nature of the machine here makes the magnitudes of the bodies–assuming that they are of the same kind–reciprocally proportional to their velocities, so that the same quantity of motion is produced on either side. For in this special case, the {quantity of the effect}, or the height risen or fallen, will be the same on both sides, no matter to which side of the balance the motion is applied. It is therefore merely accidental here, that the force can be estimated from the quantity of motion. There are other cases, such as the one given earlier, in which they do not coincide.

“Since nothing is simpler than our proof, it is surprising that it did not occur to Descartes or to the Cartesians, who are most learned men. But the former was led astray by too great a faith in his own genius; the latter, in the genius of others. For, by a vice common to great men, Descartes finally became a little too confident, and I fear that the Cartesians are gradually beginning to imitate many of the Peripatetics at whom they have laughed; they are forming the habit, that is, of consulting the books of their master, instead of right reason and the nature of things.

“It must be said, therefore, that forces are proportional, jointly, to bodies (of the same specific gravity or solidity) and to the heights which produce their velocity or from which their velocities can be acquired. More generally, since no velocities may actually be produced, the forces are proportional to the heights which might be produced by these velocities. They are not generally proportional to their own velocities, though this may seem plausible at first view, and has in fact usually been held. Many errors have arisen from this latter view, such as can be found in the mathematico-mechanical works of Honoratius Fabri, Claude Dechales, John Alfonso Borelli, and other men who have otherwise distinguished themselves in these fields. In fact, I believe this error is also the reason why a number of scholars have recently questioned Huygens’ law for the center of oscillation of a pendulum, which is completely true.” [Adapted from {Gottfried Wilhelm von Leibniz: Philosophical Papers and Letters}, LeRoy E. Loemker, ed. (Chicago: University of Chicago Press, 1956); vol. I, pp. 455-458]
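Leibniz’s accounting in the passage above can be restated in a few lines of Python (our sketch, in units where a 1-yard fall imparts unit velocity):

    import math

    # Galileo: the velocity acquired in falling a height h grows as sqrt(h).
    for pounds, yards in [(1, 4), (4, 1)]:
        v = math.sqrt(yards)     # 2.0 for the 4-yard fall, 1.0 for the 1-yard fall
        print("mass:", pounds, "lb   fall:", yards, "yd",
              "  quantity of motion m*v:", pounds * v,
              "  force m*h:", pounds * yards)
    # The quantities of motion differ (2 versus 4), yet the forces are equal (4 and 4).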

The Astronomical Origins of Number Theory, Part I

by Jonathan Tennenbaum

Once our prehistoric predecessors created the concept of a day, year, and other astronomical cycles, a new fundamental paradox arose: By its very nature, a cycle is a “One” which subsumes and orders a “Many” of astronomical or other events into a single whole. But what about the multitude of astronomical cycles? Must there not also exist a higher-order “One” which subsumes the astronomical cycles into a single whole?

We can follow the traces of Man’s hypothesizing on this issue, back to the most ancient of recorded times, and beyond. The oldest sections of the Vedic hymns — astronomical songs passed down by oral tradition for thousands of years before being written down — are pervaded with a sense of the implicitly paradoxical relationship among various astronomical cycles, as an underlying “motif.” That motif, in turn, shaped the long historical struggle to develop and perfect astronomically-based calendars, as a means to organize the activities of society in accordance with Natural Law.

A familiar example of the problem involved, is the relationship of the day (as the cycle of rotation of the entire array of the “fixed stars”) and the solar year. Egyptian astronomers made rather precise measurements of the solar year, including the slight, but measurable discrepancy between a solar year and 365 full days. Four solar years constitute nearly exactly 1461 days (4 x 365, plus 1, the additional “1” appearing in the present-day calendar as the extra day of a “leap year”). This 4-year cycle was taken as the basis of the so-called Julian calendar. In reality, however, the apparent coincidence of 4 years and 1461 days is not a perfect one; a small, measurable discrepancy exists, amounting to an average of about <11 minutes per year>. This tiny “error” eventually led to the downfall of the Julian calendar, around 1582, by which time the discrepancy had accumulated to a gross value of about 10 days!

Another classical example is the cycle of Meton, invented in ancient Greek times in the attempt to reconcile the cycle of the synodic month (defined by the phases of the Moon) with the solar year. Observation shows, that a solar year is about 10.9 days longer than 12 synodic months. Assuming the first day of a year and the first day of a synodic month coincide at some given point in time, the same event will be seen to occur once again after 19 years or 235 synodic months. That defines the 19-year cycle of Meton, which was relatively successful as the basis for astronomical tables constructed in Greek times. But again, more careful observation shows that this apparent cycle of coincidence is not a precise one. A slight discrepancy exists, between 19 years and 235 synodic months, which would cause any attempted solar-lunar calendar based on rigid adherence to the Metonic “great cycle,” to diverge more and more from reality in the course of time.
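The magnitudes of both discrepancies are easy to reproduce. The following Python sketch uses modern mean values for the tropical year and the synodic month, and reckons the Julian drift from the Council of Nicaea (A.D. 325); both the constants and the starting date are our own assumptions, introduced only for illustration:

    TROPICAL_YEAR = 365.2422      # days, modern mean value
    SYNODIC_MONTH = 29.53059      # days, modern mean value

    julian_error = 365.25 - TROPICAL_YEAR                    # days per year
    print("Julian error: %.1f minutes per year" % (julian_error * 24 * 60))
    print("Drift, A.D. 325 to 1582: %.1f days" % (julian_error * (1582 - 325)))

    metonic_gap = 235 * SYNODIC_MONTH - 19 * TROPICAL_YEAR
    print("Metonic gap: %.3f days per 19-year cycle" % metonic_gap)

This yields roughly 11 minutes per year, an accumulated Julian drift of about 10 days, and a Metonic gap of about two hours per 19-year cycle.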

The same paradox emerges, with even greater intensity, as soon as we try to include the motions of the planets into a kind of generalized calendar of astronomical events. In fact, after centuries of effort, no one has been able to devise a method of calculating the relationship of the astronomical cycles, which will not eventually (i.e., after a sufficiently long period of time) give wildly erroneous values, when compared to the actual motions of the Sun, stars, and planets! No matter how sophisticated a mathematical scheme we might set up, and no matter how well it appears to approximate the real phenomena within a certain domain, that domain of approximate validity is strictly finite. Outside that finite region, the scheme becomes useless — its validity has “died.”

What is the reason for this persistent phenomenon, which we might call “the mortality of calendars?” Should we shrug our shoulders and take this as a mere negative “fact of life?” Or is there a positive <physical existence> waiting to be discovered — a new, relatively transcendent <physical principle>, accounting for the seeming impossibility of uniting two or more astronomical cycles into a single whole by any sort of fixed mathematical construction?

According to the available evidence, the Pythagorean school of ancient times attacked this problem with the help of certain geometrical metaphors, perhaps along something like the following lines.

The simplest notion of an astronomical cycle embodies two elementary paradoxes: First, a cycle would appear to constitute an <unchanging process of change>! Indeed, the astronomical motions, subsumed by a given cycle, constitute <change>; whereas the cycle itself seems to persist <unchanged>, as if to constitute an existence “above time.” Secondly, we know that the <real> Universe progresses and develops, whereas the very concept of a cycle would seem to presume exact repetition.

Reacting to these paradoxes, construct the following simple-minded, geometrical-metaphorical representation of astronomical cycles:

Represent the unity of any astronomical cycle by a circle A, of fixed radius. Roll the circle along a straight line (or on an extremely large circle). Choose a point P, fixed on the circumference of the rolling circle, to signify the beginning (and also the end!) of each repetition of the cycle. As the circle rolls forward, the point P will move on a <cycloidal path>, reaching the lowest point, where it touches the line, at regular intervals. This is the location where the cycloid, traced by P in the course of its motion, generates a singular event known as a <cusp>. Denote the series of evenly-spaced cusps, by P, P’, P” etc. The interval between each cusp and its immediate successor in the series, corresponds to a single completed cycle of rotation of the circle A.

(For some purposes, we might represent the length of an astronomical cycle simply by the linear segment PP’, and the unfolding of subsequent cycles by a sequence of congruent segments PP’, P’P”, P”P”’ etc., situated end-on-end along a line. In so doing, however, it were important to keep in mind, that this were a mere projection of the image of the rolling circle, the latter being relatively more truthful.)

The fun starts, when we introduce a <second> astronomical cycle! Represent this cycle by a circle B, rolling simultaneously with the first one on the same line and at the same forward rate. Let Q denote a point on circle B, chosen to mark the beginning of each new cycle of B. A second array of points is generated along the line, corresponding to the beginning/endpoints of the second cycle: Q, Q’, Q” etc.

Now, examine the relationship between these two arrays of singularities P, P’, P” … and Q, Q’, Q” …. Depending on the relationship between the cycles A and B (as reflected in the relationship of their radii and circumferences), we can observe some significant geometrical phenomena.

(At this point, it is obligatory for the readers to explore this domain themselves, by doing the obvious sorts of experiments, before reading further!)

Consider the case, where we start the circles rolling at a common point, and with P and Q touching the line at that beginning point. In other words, P = Q. If the radii of A and B are <exactly equal>, then obviously P’ = Q’, P” = Q” and so on. If, on the other hand, the radius (or circumference) of A is shorter than that of B, then a variety of outcomes are possible.

For example, the end of A’s first cycle (P’) might fall exactly in the middle of B’s cycle, in which case A’s second cycle will end exactly at the same point as B’s first cycle (P” = Q’). The same phenomenon would then repeat itself in subsequent cycles.

More generally, we could have a situation, where one cycle of B is equivalent in length to three, four, or any other whole number of cycles of A. It is common to refer to this case by saying, that A divides B evenly, or that B is an integral multiple of A.

The next, more complex species of phenomena, is exemplified by the case, where the endpoint of 3 cycles of A coincides with the endpoint of 2 cycles of B. Note, that in this case Q’ (the endpoint of B’s first cycle) falls exactly between the endpoint of A’s first cycle (P’) and the end of A’s second cycle (P”), while P”’ = Q”.

The defining characteristic of this type of behavior is, that after starting together, A and B seem to diverge for a while, but eventually “come back together” at some later time. Insofar as the lengths of A and B remain invariant, that same process of divergence and coming-together of the two processes must necessarily repeat itself at regular intervals. (Indeed: from the standpoint of the cycles A and B, the process unfolding from any <given> point of common coincidence, taken as a new starting-point, must be congruent to that ensuing from any <other> point of coincidence.) Aha! Have we not just witnessed the emergence of a third, “great cycle,” C, subsuming both A and B?

The length of this third cycle, would be the interval from the original, common starting-point of A and B, to the <first> point afterwards, at which A and B come together again (i.e., where the rotating points P and Q touch the line simultaneously at the same point). This event intrinsically involves two coefficients (or, in a sense, “coordinates”), namely the number of cycles completed by A and B, respectively, between any two successive events of coincidence.

Seen from the standpoint of mere scalar length per se, the relationship of C to A and B would seem to be, that A and B both divide C evenly; or in other words, C is a multiple of both A and B. More precisely, we have specified that C be the <least common multiple> of both A and B. In our present example, C would be equivalent (in length) to 3 times A, as well as to 2 times B.

Those skilled in geometry will be able to construct any number of hypothetical cases of this type. The simplest method, from the standpoint of construction, is to work <backwards> from a fixed line segment representing “C”, to generate A and B by dividing that segment in various ways into congruent intervals. For example: construct a line segment representing C, and divide that line segment into 5 equal parts, each of which represents the length of a cycle A. Then, take a congruent copy of C, and divide it (by the methods of Euclidean geometry, for example) into 7 equal parts, each of which represents the length of B. Next, superimpose the two constructions, and observe how the set of division-points corresponding to cycles of A, fall between various division-points of B. Try other combinations, such as dividing C by 15 and 12, or by 15 and 13, for example.

Carrying out these exploratory constructions with sufficient precision, we are struck with an anomaly: the “near misses” or “least gaps” between cycles of A and B.

In the case of division by 7 and 5, for example, observe that before coming together <exactly> after 7 cycles of A and 5 cycles of B, the two processes have a “near miss” at the point where B has completed two cycles and A is just about to complete its third cycle. In terms of scalar length, three times A is only very slightly larger than two times B. For different pairs of cycles A and B, dividing the same common cycle C, we find that the position and gap size of the “near misses” can vary greatly. For example, in the case of division by 15 and 12, the “least gap” already occurs near the beginning of the process, between the moment of completion of A’s first cycle and that of B’s first cycle. But for division by 15 and 13, the “least gap” occurs near the middle, between the end of B’s 6th cycle and A’s 7th cycle.
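These “least gaps” can be tabulated by brute-force arithmetic. The following Python sketch (our own construction; note that it merely calculates, case by case) divides a unit-length C into m and n parts and locates the smallest nonzero gap:

    from fractions import Fraction

    def least_gap(m, n):
        # Divide a unit cycle C into m parts (cycle A) and n parts (cycle B);
        # return the smallest nonzero gap between division-points, and the
        # cycle-counts (i, j) at which it occurs.
        best = None
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                gap = abs(Fraction(i, m) - Fraction(j, n))
                if gap != 0 and (best is None or gap < best[0]):
                    best = (gap, i, j)
        return best

    print(least_gap(7, 5))     # (Fraction(1, 35), 3, 2): 3 cycles of A vs. 2 of B
    print(least_gap(15, 12))   # (Fraction(1, 60), 1, 1): near the beginning
    print(least_gap(15, 13))   # (Fraction(1, 195), 7, 6): near the middle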

Resist the temptation to apply algebra to these intrinsically geometrical phenomena. Don’t fall into the trap of collapsing geometry into arithmetic! Although we can use algebra and arithmetic to calculate the division-points and the lengths of the gaps generated by the division-points, there is no algebraic formula which can <predict> the location of the “least gap”!

The Astronomical Origins of Number Theory, Part II

by Jonathan Tennenbaum

In the previous article, we began to investigate the relationship between two astronomical cycles A and B, representing these by circles of different radii rolling on a common line. We were investigating especially the case, where the cycles A and B can be brought together under a “great cycle” C, whose length is a common multiple of the lengths of A and B. Our attention was drawn to the anomalous phenomenon of “near misses” — i.e., points where the two cycles nearly end together, but not exactly. The irregularity of this phenomenon suggests, that we have not yet arrived at an adequate representation of the “great cycle” C and its relationships to A and B.

Take a new look at the circles A and B, rolling down the line. In our chosen representation, the rate of forward motion of the circles is the same, and they make a common point of contact with the line at each moment. But what is the relationship of rotation between A and B? Would it not be essentially equivalent, to conceive of A as rolling on the inner circumference of B, at the same time B is rolling on the line? It suddenly dawns upon us, that the geometrical events occurring between A and B in the course of any “great cycle” C (including the phenomenon of “near misses”), are governed by the indicated, <epicycloid> relationship of A and B alone!

Accordingly, leave the base-line aside for the moment; instead, generate an epicycloid curve by rolling the smaller circle A on the inside of the larger circle B, the curve being traced by the motion of the point P on A. Observe, that an equivalent array of cusps is generated, in a somewhat more convenient way, if we roll A on the <outside> of B instead of on the inside. Experimenting with our first example of a “great cycle,” observe that the epicycloidal curve in this case wraps around B twice, before closing back on itself, while A completes 3 full rotations. Also observe, that the points where P touches the circumference of B — i.e., the 3 cusps of the epicycloid — divide B’s circumference into 3 equal arcs. Observe, finally, that the points on A’s circumference which come to rest on the cusp-locations in the course of rolling, include not only P itself, but also the point diametrically opposite to P. In fact, each of the 3 equal arcs on B’s circumference corresponds, by rolling, to one-half of A’s circumference.

Aha! That arc-length (i.e., one-third of B, equivalent to one-half of A) constitutes a <common divisor> of A and B. Comparing the epicycloidal process of rolling A against B, with the earlier process of A and B rolling on a common straight line, what is the relationship between the <common divisor>, just identified, and the <least gap> generated by the two cycles?

To investigate this further, carry out the same experiment with the pair of cycles A and B, obtained by dividing a given cycle-length C by 7 and 5, respectively. Rolling A on the outside of B, we find that the epicycloid must go around B 5 times, before it closes on itself. That corresponds to the “great cycle” C. In the course of that process of encircling B five times, the rolling circle A will complete exactly 7 rotations, generating 7 cusps in the process; these 7 cusps divide the circumference of B into 7 equal arcs, each of which is equivalent to one-fifth of the circumference of A. Those equivalent arcs all represent a <common divisor> of A and B.

Accordingly, construct a smaller circle D, whose radius is one-fifth that of A (or, equivalently, one-seventh that of B). In the course of a “great cycle” C, D makes 35 rotations. One cycle of A is equivalent in length to 5 cycles of D, and one cycle of B is equivalent in length to 7 cycles of D.
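Those who wish to check the cusp-spacing by computation rather than by drawing might try the following small Python sketch (the normalization of C to unit length is again my own assumption):

    from fractions import Fraction

    # Cycle-lengths for the present example: C = 1, A = C/7, B = C/5.
    A, B = Fraction(1, 7), Fraction(1, 5)
    # The k-th cusp lies k full cycles of A along B's circumference, taken mod B:
    cusps = sorted((k * A) % B for k in range(7))
    print(cusps)
    # Seven equally spaced points, with spacing 1/35: the common divisor D = C/35.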

Compare this with the “least gap” constructed in Figure 5 of last week’s article. Evidently, the “least gap” generated by A and B, is equivalent to the <common divisor> of A and B, generated by the epicycloidal construction described above. Those skillful in mathematical matters will easily convince themselves, that if C corresponds to the <least common multiple> of A and B in terms of length, then D corresponds to their <greatest common divisor>.

Evidently, C and D constitute a “maximum” and “minimum” relative to the cycles A and B — C containing both and D being contained in both. Out of this investigation, we learn, that <if A and B have a common “great cycle,” then they also have a common divisor>; or in other words, they are <commensurable>. Also evidently, the converse is true: if A and B have a common divisor D, then we can easily construct a “great cycle” subsuming A and B. In fact, if A corresponds to N times D, and B corresponds to M times D, then A and B will fit exactly into a “great cycle” of length NM times D. (The length of the minimum “great cycle” is defined by the least common multiple of N and M, which is often smaller than the product NM; for example, if N = 6 and M = 4, the least common multiple is 12, not 24.)
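The closing example can be checked in a few lines (math.lcm and math.gcd are available in Python 3.9 and later):

    import math

    N, M = 6, 4
    print(math.lcm(N, M), N * M)   # 12 24: the minimal great cycle needs 12 D, not 24
    print(math.gcd(N, M))          # 2: since N and M share a factor, NM overcounts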

Return now to our original query about the possibility of uniting a “Many” of different astronomical cycles into a single “One.” The result of our investigation up to now is, that there will always exist a “great cycle” subsuming integral multiples of cycles A and B into a single whole, as long as A and B are commensurable — i.e., as long as there exists some sufficiently small common unit of measurement, which fits a whole number of times into A and a whole number of times into B. Does such a unit always exist?

Remember the result of an earlier pedagogical discussion, in which we reconstructed the discovery of the Pythagoreans, of the <incommensurability of the side and diagonal of a square>! A pair of hypothetical astronomical cycles A and B, whose lengths (or radii) are proportional to the side and diagonal of a square, respectively, could never be subsumed exactly into a common “great cycle,” no matter how long! If we start A and B at a common point, they will <never> come together exactly again, although they will generate “near misses” of arbitrarily small (but nonzero) size!
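A computational sketch of this behavior, using ordinary floating-point arithmetic (adequate for this range), shows the “near misses” shrinking without ever vanishing:

    from math import sqrt

    # Cycles of length 1 and sqrt(2): record each new record-small "near miss"
    # |p - q*sqrt(2)|, over whole numbers of cycles q of the one and p of the other.
    best = float("inf")
    for q in range(1, 10000):
        p = round(q * sqrt(2))
        gap = abs(p - q * sqrt(2))
        if gap < best:
            best = gap
            print(q, p, gap)
    # The record gaps shrink without limit (at q = 1, 2, 5, 12, 29, 70, ...),
    # but never reach zero: the two cycles never close up together exactly.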

This situation presents us with a new set of paradoxes: First, although A and B have no simple common “great cycle,” the relationship of diagonal to side of a square is nevertheless a very precise, <lawful relationship>. This suggests, that the difficulty of combining A and B into a single “whole” does not lie in the nature of A and B per se, but in the conceptual limitations we have imposed upon ourselves, by demanding that the relationships of astronomical cycles be representable in terms of a “calendar” based on whole numbers and fixed arithmetic calculations. Secondly, what is the new physical principle, which reflects itself in the existence (at least theoretically) of linearly incommensurable cycles? In fact, the work of Johannes Kepler completely redefined both these questions, by overturning the assumption of simple circular motion, and introducing the entirely new domain of elliptical functions. The bounding of elementary arithmetic by <geometry>, and the bounding of geometry (including so-called hypergeometries) by <physics>, is one of the secrets guarding the gates of what Carl Gauss called “higher arithmetic.”

How Johannes Kepler Changed the Laws of the Universe

Part II of an Extended Pedagogical Discussion

by Jonathan Tennenbaum

In Part I of this series (which readers should review before proceeding further here), I presented a series of arguments, purporting to demonstrate that there is no way to determine the actual movement of a planet in space, from observations made on the Earth. To this effect, I showed how to construct, for any given pattern of observed motions, an infinity of hypothetical motions in space, each of which would present exactly the same apparent motions as seen from the Earth. Short of leaving the Earth’s surface — an option not available to Kepler and his contemporaries — the effort to determine the actual orbits of the planets would appear to be nothing but useless speculation. Actually, similar sorts of arguments could be used to “prove” the futility of Man’s gaining any solid knowledge at all about the outside world, beyond the mere data of sense perception per se!

But wait! Man’s history of sustained, orders-of-magnitude increases in per-capita power over Nature since the Pleistocene, demonstrates exactly the opposite: The human mind <is> able, by the method of hypothesis, to overleap the bounds of empiricism, and attain increasingly efficient knowledge of the ordering of the Universe. Kepler’s own, brilliantly successful pathway of discovery, in unravelling the form and ordering of the planetary orbits, provides a most instructive case in point. The conclusion is unavoidable: the arguments I presented earlier in favor of a supposed “unknowability” of the planetary motions, must contain some fundamental error!

Kepler’s emphasis on <physics> and the <method of hypothesis>, as opposed to the impotence of mere “mathematics and logic,” should help us to sniff out the sophistry embedded in those arguments.

Did we not, in constructing a multiplicity of hypothetical orbits consistent with given observations, implicitly assume that those motions took place in a non-existent, empty mathematical space of the Sarpi-Galileo-Descartes-Newton type, rather than the real Universe? Did we not implicitly collapse the “observer” to an inert mathematical point, ignoring the crucial factor of <curvature in the infinitesimally small>? Didn’t we overlook the inseparable connection between <human knowledge>, <hypothesis>, and <change>?

Human knowledge is not a contemplative matter of fitting plausible interpretations to an array of sense perceptions. On the contrary, knowledge develops through human intervention <to change the Universe> — a process which involves not only generating scientific hypotheses, but above all <acting> on them. The “infinitesimal” is no mathematical point; it possesses an internal curvature which is in demonstrable correspondence with the curvature of the Universe as a whole. That relationship centers on the role of the sovereign, creative human individual as God’s helper in the ongoing process of Creation.

For example, even the most banal application of “triangulation” in elementary geometry, reflects the principle of change. Rather than impotently staring at a distant object X (for example, a distant mountain peak, or an enemy position in war) from a fixed location A, “triangulation” relies on <change of position> from A to a second vantage-point B and so on, measuring the corresponding angular shifts in X’s apparent position relative to other landmarks and the “baseline” A-B.
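A minimal numerical sketch of this most banal case (the coordinates, angles, and function name here are my own illustrative assumptions; angles are measured from the baseline direction A to B):

    import math

    def triangulate(baseline, alpha, beta):
        # A sits at (0, 0), B at (baseline, 0); alpha and beta (in radians) are
        # the bearings of the distant object X as seen from A and from B.
        # Intersect the two lines of sight, y = x*tan(alpha) and
        # y = (x - baseline)*tan(beta), to recover the "depth" of X.
        x = baseline * math.tan(beta) / (math.tan(beta) - math.tan(alpha))
        return x, x * math.tan(alpha)

    # Example: bearings of 60 and 120 degrees across a baseline of 1 unit
    print(triangulate(1.0, math.radians(60), math.radians(120)))
    # -> (0.5, 0.866...): X stands off the midpoint of the baseline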

Notice, that when we shift from A to B, we not only change the apparent angle to X, but we change the entire array of relationships to every other visible object in the field of view of A and B. Taken at face value, the two spherical-projected images of the world, as seen from A and as seen from B, are <formally contradictory>. They define a <paradox> which can only be solved by <hypothesis>. So, we conceptualize an additional dimension, a “depth” which is not represented in any single projection per se. The same metaphorical principle is already built into the binocular organization of our own visual apparatus. Compare this with the more advanced principle of Eratosthenes’ measurement of the Earth’s curvature, and the methods developed by Aristarchus and others to estimate the Earth-Moon and Earth-Sun distances.

The circumstance, that even our sensory apparatus (including the relevant cortical functions) is organized in this way, once again underlines the fallacies embedded in Kant’s claimed unknowability of the “Ding an sich.”

It was Kepler himself, who first used the combination of Mars, the Earth and the Sun — without leaving the Earth’s surface! — to unfold a “nested” series of triangulations which definitively established the elliptical functions of the planetary orbits and their overall organization within the solar system. But, as we shall see, the key to Kepler’s method was not simple triangulation in the sense of elementary geometry, but rather his shift away from naive Euclidean geometry, toward a revolutionary conception of <physical geometry>.

Turn back now to the paradoxes of planetary motion as seen from the Earth, particularly Mars, Jupiter, and Saturn. Mapping the motion of these planets against the stars, we find that they travel around the ecliptic circle (or more precisely, in a band-like region around the ecliptic), but <not at a uniform rate>. Although the predominant motion is forward in the same direction as the Sun, at periodic intervals these planets are seen to slow down and reverse their motion, making a rather flat “loop” in the sky, and then reverting to forward motion once again. This process of retrograde motion and “looping” invariably occurs around the time of the so-called opposition with the Sun, i.e., when the positions of the Sun and the given planet, as mapped on the “sphere of the stars,” are approaching opposite poles relative to each other. Curiously, around that same time, the planet appears the brightest and largest, while in the opposite relative position — near the so-called conjunction with the Sun — the planet appears smaller and fainter, while at the same time displaying its most rapid apparent motion!

Although the “looping” of Mars (for example) recurs at roughly equal intervals of time, and is evidently closely correlated with the motion of the Sun, the period of recurrence is <not> equal to a year, <nor> is it the same for Jupiter and Saturn, as for Mars! The so-called synodic period of Mars — the period between the successive oppositions of Mars to the Sun, which coincides with the period between successive “loops” in Mars’ orbit — is observed to be approximately 780 days. In the case of Jupiter, on the other hand, the opposition to the Sun and formation of a loop, occur at intervals of about 399 days, or roughly once every 13 months.

But there is an additional complication. The planet does not come back to its original position in the stars (its sidereal position) after a synodic period! The locus of the “loop”, relative to the stars, <changes> with each cycle of recurrence. After ending its retrograde motion and completing a loop, Mars proceeds to travel something more than a full circuit forward along the ecliptic, before the looping process begins again. Long observation shows that the locus of each loop is shifted an average of about 49 degrees forward along the ecliptic, relative to the preceding one.

Our experiments on the behavior of epicycloids strongly suggest, that what we are looking at is some sort of epicycloid-like combination of two (or more) astronomical cycles! If so, then one of them would be the one producing the “looping,” and having a cycle length equal to 780 days, the synodic period of Mars. The other cycle — which <cannot be observed directly>, because it is strongly disturbed and distorted by the looping — would be the one determining the overall, net “forward” motion of Mars along the ecliptic. The fact, that Mars travels 360 plus 49 degrees along the ecliptic, before “looping” recurs, suggests that the cycle determining the looping has a somewhat <longer> period, than the cycle responsible for the net forward motion. In fact, the synodic cycle would have to be about 13.5% longer than the other cycle, to give the shift of 49 degrees forward from loop to loop. Or, to put it differently: if the hypothetical cycle of forward motion along the ecliptic, generates an angle of 360 plus 49 degrees in the time between successive loops — i.e., 780 days — then the time needed by that same forward motion to complete a full cycle of exactly 360 degrees, would be 687 days, or about 1.88 years. Of course, this whole reasoning assumes that each cycle progresses at a uniform, constant rate.
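The arithmetic of this adduction can be checked in a few lines (assuming, as stated, uniform rates for both cycles):

    # Each synodic period of ~780 days, Mars's loops shift 49 degrees forward;
    # so the hypothesized forward cycle covers 360 + 49 degrees in 780 days.
    synodic_days = 780.0
    loop_shift_deg = 49.0
    sidereal_days = synodic_days * 360.0 / (360.0 + loop_shift_deg)
    print(sidereal_days, sidereal_days / 365.25)
    # -> about 686.6 days, i.e., about 1.88 years
    print(synodic_days / sidereal_days - 1.0)
    # -> about 0.136, the roughly 13.5% excess of the synodic over the forward cycle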

Let’s stop to reflect for a moment. On the basis of assumptions which, admittedly, require further examination, we have just adduced the existence of a 1.88-year cycle of Mars, a cycle which is <not directly observable>. Firstly, as Kepler remarked, Mars’ apparent trajectory <never closes>! Evidently we have a phenomenon of “incommensurability” of cycles. Moreover, the Mars trajectory itself does not lie exactly on the ecliptic circle, but winds around it in a band-like region like a coil. When Mars returns after going around the ecliptic, it does not return to the same precise positions. So, where is the cycle? If we leave aside the deviations from the ecliptic, and just count the number of days Mars needs to make a single circuit within the ecliptic “band,” we get many different answers, depending on when and where in the “looping” cycle, we begin to count. Again, the observed motion of Mars is not strictly periodic. The 1.88-year cycle is born of hypothesis, not of direct empirical observation.

Historically (and as per the discussion in Kepler’s “Astronomia Nova”), the adduced 1.88-year cycle was referred to as the “first inequality” of Mars, while the cycle governing the “looping” phenomenon, was called the “second inequality.”

Now, our analysis up to now has been based on the assumption, that the underlying motion of a “cycle,” is uniform circular motion. That assumption dominated astronomical thinking up to the time of Kepler, and not without good reasons. After all, didn’t the approach of combining circular motions prove rather successful, earlier, in unravelling the motion of the Sun? We found that the Sun’s apparent motion can be understood as a combination of two circular motions: a daily rotation of the entire sphere of the stars, and a yearly motion of the Sun along a great-circle path (the ecliptic) on that stellar sphere. In the case of Mars (and the other outer planets), we evidently are dealing with a combination of <three> degrees of rotation: the daily stellar rotation; the “first inequality” with a period of 687 days; and the “second inequality” with a period of 780 days.

We are not finished, however. As Kepler would have emphasized, “the devil is in the detail.” To uncover a new set of anomalies, we must drive the fundamental hypothesis which has been the basis of our reasoning up to now — the hypothesis of uniform circular motion as elementary — to its limits. This is exactly what Kepler does in his {Astronomia Nova}. As his point of departure, he reviews the three main methods, developed up to that time, to construct the observed motions from a combination of simple circular motions. These were: 1) the method of epicycles associated with Ptolemy, but actually developed by Greek astronomers centuries earlier; 2) the method of concentric circles, associated with Copernicus, but which had been put forward some 18 centuries earlier by Aristarchus, and probably even by the original Pythagoreans; and finally 3) the method favored by Kepler’s elder collaborator, Tycho Brahe, which combines elements of both.

The differences between the constructions of Ptolemy, Copernicus and Tycho Brahe do not concern their common assumption of simple circular motion as elementary; at first glance, they merely differ in the way they combine circular motions to produce the observed trajectories.

(Readers should construct models to illustrate the following constructions!)

In the simplest form of Ptolemy’s construction, the Earth is the center of motion of the Sun and the primary center of motion of all the planets. The “first inequality” (of Mars, Jupiter or Saturn) is represented by motion on a large circle, C1 (called the “deferent”), centered at the Earth, while the planet itself is carried along on the circumference of a second, smaller circle C2 (called the “epicycle”), whose center moves along C1. That motion of the planet on the second circle, corresponds to the “second inequality.” In the case of Mars, for example, the planet makes one circuit of the second circle in 780 days, while at the same time the center of the second circle moves along the first circle at a rate corresponding to one revolution in 687 days. It is easy to see how the phenomenon of retrograde motion is produced: At the time when the planet is located on the portion of its epicycle closest to the Earth, its motion on the epicycle is opposite to the motion on the first circle, and somewhat faster, yielding a net retrograde motion. From the angle described by the retrograde motions we can infer the ratio of the radii of the two circles. To account for the transverse component of motion in a loop according to this hypothesis, we must assume that the plane of the second circle is slightly skewed to that of the first circle. Ptolemy used a somewhat different, but analogous construction to account for the apparent motions of the “inner” planets Mercury and Venus.

In the simplest form of the so-called Copernican construction, the circular motions are assumed to be essentially concentric, centered at the Sun, although in slightly different planes. The apparent yearly motion of the Sun is assumed to result from a yearly motion of the Earth around the Sun. As for Mars, we represent its “first inequality” by a circle around the Sun, upon which Mars is assumed to move directly. The “second inequality,” on the other hand, now appears as a mere artifact, arising from the combined effect of the supposed, concentric-circular motions of the Earth and Mars. Since the Earth’s period is shorter than that of Mars, the Earth periodically catches up with and passes Mars on its “inside track.” At that moment of passing, Mars will appear from the Earth as if it were moving backwards relative to the stars. On the other hand, as the Earth approaches the position opposite to Mars on the other side of the Sun, Mars will attain its fastest apparent forward motion relative to the stars, the latter being exaggerated by the effect of the Earth’s motion in the opposite direction.

In Tycho Brahe’s construction, the planets (except the Earth) are supposed to move on circular orbits around the Sun, while the Sun itself (together with its swarm of planets, some closer, some farther away than the Earth) is carried around the Earth in an annual orbit.

Now, in his discussion in {Astronomia Nova}, Kepler emphasized that the three constructions, when carried out in detail, produce <exactly the same apparent motions>. From a purely formal standpoint, it would seem there could be no basis for deciding in favor of the one or the other. Yet, from a conceptual standpoint, the three are entirely different. And since Man does not merely contemplate his hypotheses, but <acts> on them, every conceptual difference — insofar as it bears on axiomatics — is eminently <physical> at the same time, even if the effect appears first only as an “infinitesimal shift” in the mind of a single human being.
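Readers who would like to see this formal equivalence with their own eyes can test it numerically. Here is a minimal sketch in Python, idealizing both orbits as coplanar concentric circles with assumed radii and periods (complex numbers stand in for positions in the plane; none of this is Kepler’s own notation):

    import numpy as np

    r_earth, r_mars = 1.0, 1.52        # assumed orbital radii, in AU
    T_earth, T_mars = 365.25, 687.0    # assumed periods, in days
    t = np.linspace(0.0, 780.0, 40)    # sample one synodic period

    # Copernican construction: both planets circle the Sun, and the geocentric
    # position of Mars is the difference of the two heliocentric positions.
    earth = r_earth * np.exp(2j * np.pi * t / T_earth)
    mars = r_mars * np.exp(2j * np.pi * t / T_mars)
    copernican = mars - earth

    # Ptolemaic construction: a large circle carrying the "first inequality,"
    # plus an epicycle of radius r_earth carrying the "second inequality."
    deferent = r_mars * np.exp(2j * np.pi * t / T_mars)
    epicycle = -r_earth * np.exp(2j * np.pi * t / T_earth)
    ptolemaic = deferent + epicycle

    print(np.allclose(copernican, ptolemaic))   # True: identical apparent motions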

Next week, by pushing the theories of Ptolemy, Copernicus, and Brahe to their limit, Kepler will evoke from the Universe a most remarkable response: All three approaches are false!

How Johannes Kepler Changed the Laws of the Universe

Part III of an Extended Pedagogical Discussion

by Jonathan Tennenbaum

“It is true that a divine voice, which enjoins humans to study astronomy, is expressed in the world itself, not in words or syllables, but in things themselves and in the conformity of the human intellect and senses with the ordering of the celestial bodies and their motions. Nevertheless, there is also a kind of fate, by whose invisible agency various individuals are driven to take up various arts, which makes them certain that, just as they are a part of the work of creation, they likewise also partake to some extent in divine providence….

“I therefore once again think it to have happened by divine arrangement, that I arrived at the same time in which he (Tycho Brahe) was concentrated on Mars, whose motions provide the only possible access to the hidden secrets of astronomy, without which we should forever remain ignorant of those secrets.” (Kepler, Astronomia nova, Chapter 7).

Last week we briefly reviewed the three main competing approaches to understanding the apparent planetary motions, examined by Kepler: those of Ptolemy, Copernicus, and Tycho Brahe. Kepler emphasized the purely formal equivalence of the three approaches, at least in their simplest versions, but he pointed out crucial differences in their physical (i.e., ontological-axiomatic) character, while also noting some deeper, common assumptions of all three. Kepler first of all attacked Ptolemy’s method, on the grounds of its arbitrary assumptions, which reject the principle of reason:

“Ptolemy made his opinions correspond to the data and to geometry, and <has failed to sustain our admiration>. For the question still remains, what <cause> it is that connects all the epicycles of the planets to the Sun…” (My emphasis – JT).

“Copernicus, with the most ancient Pythagoreans and Aristarchus, and I along with them, say that this second inequality does not belong to the planet’s own motion, but only appears to do so, and is really a byproduct of the Earth’s annual motion around the motionless Sun.”

In his Mysterium Cosmographicum, Kepler had pointed out:

“For, to turn from astronomy to physics or cosmography, these hypotheses of Copernicus not only do not offend against Nature, but assist her all the more. She loves simplicity, she loves unity. Nothing ever exists in her which is superfluous, but more often she uses one cause for many effects. Now under the customary hypothesis there is no end to the invention of circles; but under Copernicus a great many motions follow from a few circles.”

In the Ptolemaic construction, each planet has at least two cycles, and not only the “first inequality,” but also the “second inequality” is different for each one. Not only does the hypothesis of Aristarchus eliminate the need for many “second inequalities” — deriving them all, as effects, from the single cycle of the Earth — but countless other specifics of the apparent planetary motions begin to become intelligible.

Truth, however, does not lie in the simplicity of an explanation per se. Indeed, very often the “simplest” explanation, one in which everything appears to fit together effortlessly, and all irritating singularities disappear, is the farthest from the truth! When things become too easy, too banal, watch out! To get at the truth, we must always generate a new level of paradox, by pushing our hypotheses to their breaking-points. This Kepler does, by focussing on the implications of certain irregularities in the planetary motions — overlooked in our discussion up to now — which would be virtually incomprehensible, if the cycles of the “first and second inequalities” were based only on simple circular action.

Indeed, on closer examination, we find that the “loops” of the planet Mars (for example) are not identical in shape, but vary somewhat from one synodic cycle to the next! Nor is the displacement of each loop, relative to the preceding one, exactly equal from cycle to cycle. Furthermore, even the motion of the Sun itself along the ecliptic circle, upon close study, reveals itself to alternately speed up and slow down significantly in the course of a year, contrary to our tacit assumption up to now.

Indeed, already in ancient times astronomers wondered at the paradoxical “inequality” of the Sun’s yearly motion. In fact, when we carefully map the Sun’s motion relative to the “sphere of the fixed stars,” we find, that although the Sun progresses along the ecliptic at an average rate of 360 degrees per year, the angular motion is actually about 7% faster in early January (about 1.02 degrees per day) than in July (about 0.95 degrees per day). This variation causes quite noticeable differences in the lengths of the seasons, as these are defined in terms of a solar calendar. Indeed, the four seasons correspond to a division of the ecliptic circle into four congruent arcs, the division-points being the two equinoxes (the intersection-points of the ecliptic with the celestial equator) and the two solstices (the points on the ecliptic midway between the equinox points, marking the extremes of displacement from the celestial equator and thereby also the positions of the Sun on the longest and shortest days of the year). Due to the changes in the Sun’s angular velocity along the ecliptic, those four arcs are traversed in different times. In fact, the lengths of the seasons, so determined, are as follows (we refer to the seasons in the northern hemisphere, which are reversed in the southern hemisphere):

Spring: 92 days and 22 hours; Summer: 93 days and 14 hours; Fall: 89 days and 17 hours; Winter: 89 days and 1 hour.
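As a quick consistency check, the four lengths sum to exactly one year:

    from fractions import Fraction

    # Season lengths in days, with the hours converted to fractions of a day:
    seasons = [92 + Fraction(22, 24), 93 + Fraction(14, 24),
               89 + Fraction(17, 24), 89 + Fraction(1, 24)]
    print(sum(seasons))   # -> 1461/4, i.e., exactly 365 1/4 days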

This unevenness in the solar motion confronts us with a striking paradox: How could we have a “perfect” circular trajectory, as the Sun’s path (the ecliptic) appears to be, and yet the motion on that trajectory not be uniform? That would seem to violate the very nature of the circle. Or shall we assume, that some “outside” force could alternately accelerate or decelerate the Sun (or Earth, if we take Copernicus’ standpoint), without leaving any trace in the shape of the trajectory itself? Furthermore, how are we to comprehend this variation, if we hold to the hypothesis, that the elementary form of action in astronomy is uniform circular motion? On the other hand, if we give up uniform circular motion as the basis for constructing all forms of motion, then we seem to open up a Pandora’s box of an unlimited array of conceivable motions, with no criterion or principle to guide us.

One “way out” — which only shifts the paradox to another place, however — would be to keep the assumption, that the Earth’s motion (and that of the other planets) is uniform circular motion, but to suppose that the center of the orbit is not located exactly at the Sun’s position. This notion of a displaced circular orbit was known as an “eccentric”; both Ptolemy and Copernicus employed it in the detailed elaboration of their theories, to account for the mentioned irregularities in planetary motions. Assuming such orbits really exist, it is not hard to interpret the speeding-up and slowing-down of the Sun’s apparent motion as a kind of illusion due to projection, in the following way: Taking Copernicus’ approach for example, the “true” motion of the Earth would be a uniform circular one; but the Sun, being located off of the center of the Earth’s orbit, would appear from the Earth to be moving faster when the Earth is located on the portion of its eccentric closest to the Sun, and slower at the opposite end. On this assumption, it is not hard to calculate, by geometry, how far the center of the eccentric would have to be displaced from the Sun, in order to account for the 7% difference in observed angular speeds between the perihelion (closest distance) and aphelion (farthest distance) of the eccentric.
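The calculation is short enough to sketch here. For uniform motion on a circle of radius R, viewed from a point displaced by a distance d from the center, the apparent angular speeds at the nearest and farthest points stand in the ratio (R + d) : (R − d); solving for d gives:

    # Solve (R + d)/(R - d) = ratio for the displacement d, as a fraction of R.
    ratio = 1.07   # observed ratio of fastest to slowest apparent solar motion
    d_over_R = (ratio - 1.0) / (ratio + 1.0)
    print(d_over_R)
    # -> about 0.034: the eccentric's center would sit some 3.4% of the orbital
    #    radius away from the Sun (or from the Earth, in the geocentric reading).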

From the standpoint of this construction, the “true” motion of the Sun (or the Earth, in Copernicus’ theory) would be that corresponding exactly to the mean or average motion of 360 degrees per year, while the apparent motion would vary according to the varying distance between Earth and Sun. Accordingly, Tycho Brahe and Copernicus elaborated their analyses of the apparent planetary motions on the basis of the assumed “true” circular motion of the Sun (or Earth).

This exact point became a focus of debate between Kepler and Tycho Brahe. Kepler writes:

“The occasion of … the whole first part (of Astronomia nova) is this. When I first came to Brahe, I became aware that in company with Ptolemy and Copernicus, he reckoned the second inequality of a planet in relation to the mean motion of the Sun … So, when this point came up in discussion between us, Brahe said in opposition to me, that when he used the mean Sun he accounted for all the appearances of the first inequality. I replied that this would not prevent my accounting for the same observations of the first inequality using the Sun’s apparent motion, and thus it would be in the second inequality that we would see which was more nearly correct.”

This challenge eventually led to the breakthroughs which Kepler announced in the title of Part II of his Astronomia Nova: “Investigation of the second inequality, that is, of the motions of the sun or earth, or the key to a deeper astronomy, wherein there is much on the physical causes of the motions.”

Kepler had reason to be suspicious about the assumption of perfect circular orbits as “elementary.” On the one hand, Kepler was a follower of Nicolaus of Cusa, who had written, in the famous Section 11 of Docta Ignorantia,

“What do I say? In the course of their motion, neither the Sun, nor the Moon nor the Earth nor any sphere — although the opposite appears true to us — can describe a true circle … It is impossible to give a circle for which one could not give one even more perfect; and a heavenly body never moves at a given moment exactly the same way as at some other moment, and never describes a truly perfect circle, regardless of appearances.”

On the other hand, already Ptolemy knew that the tactic of uniform motion on displaced, “eccentric” circles, fails to fully account for irregularities turning up in the “first inequality” of the planets Venus, Mars, Jupiter, and Saturn (particularly Mars). To explain the accelerations and decelerations of the planets, which still remain after the effect of the “second inequality” is removed, and to reconcile those with other features of the apparent motions, it was not sufficient to merely displace the circle of the “first inequality” from the observer on the Earth. Ptolemy (or whoever actually did the work) accordingly introduced a new artifice, called the “equant”: On this modified hypothesis, the motion along the circumference of the eccentric circle, instead of being itself uniform and constant, would be driven forward by a uniform angular rotation around a fixed point called the “equant,” located at some distance from the center of the circle. In the case of Mars, for example, the Earth and the equant would be located on opposite sides of the circle’s center. This would result in a real acceleration of the planet going toward its nearest point to the Earth (and deceleration moving toward the opposite end), adding to the effect of viewing this from the Earth. Actually, on the basis of the “equant” construction, Ptolemy and his followers were able to make relatively precise calculations for all the planets (except Mercury). It was only with the more precise observations of Tycho Brahe that Kepler could finally give Ptolemy the “coup de grace.”
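The effect of the equant is easy to exhibit numerically. In the following Python sketch (my own construction, with assumed values: a circle of radius R about the center at the origin, the equant displaced a distance e from the center, the observer on the opposite side), equal angular steps at the equant produce unequal arcs along the circle:

    import numpy as np

    R, e = 1.0, 0.1   # circle radius and equant offset from the center (assumed)

    def equant_position(theta):
        # The planet stands where the ray from the equant (at (e, 0)), drawn at
        # the uniformly increasing angle theta, meets the circle about the origin:
        # solve |(e + t*cos(theta), t*sin(theta))| = R for the positive root t.
        b = e * np.cos(theta)
        t = -b + np.sqrt(b * b - e * e + R * R)
        return np.array([e + t * np.cos(theta), t * np.sin(theta)])

    # Equal steps of theta (equal times at the equant), from apogee to perigee:
    thetas = np.linspace(0.0, np.pi, 7)
    pts = [equant_position(th) for th in thetas]
    arcs = [np.arccos(np.clip(np.dot(p, q) / (R * R), -1.0, 1.0))
            for p, q in zip(pts, pts[1:])]
    print(arcs)
    # The arcs grow step by step: slowest near the equant's side (the apogee),
    # fastest at the opposite end (the perigee) -- a real, not apparent, change.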

Copernicus rejected the “equant,” essentially on the grounds that it de facto instituted “irregular” motions (i.e., non-circular motion) into astronomy. To avoid this, Copernicus and Brahe invented still another circular cycle (in addition to the “second inequality”) to modify the supposed uniform motion on the eccentric circle. We seem to be headed into a monstrous “bad infinity.”

But, isn’t there something absurd and wholly artificial about the idea of a planet orbiting in a circle around a mere abstract mathematical point as center? And being propelled by an abstract ray pivoting on another mathematical point? Kepler writes:

“A mathematical point, whether or not it is the center of the world, can neither effect the motion of heavy bodies nor act as an object towards which they tend … Let the physicists prove that natural things have a sympathy for that which is nothing.”

The same objection applies also, of course, to the device of the epicycle, whose center is supposed to be a mere mathematical point. Later Kepler adds:

“It is incredible in itself that an immaterial power reside in a non-body, move in space and time, but have no subject … And I am making these absurd assumptions in order to establish in the end the impossibility that every cause of the planet’s motions inhere in its body or somewhere else in its orb … I have presented these models hypothetically, the hypothesis being astronomy’s testimony, that the planet’s path is a perfect eccentric circle such as was described. If astronomy should discover something different, the physical theories will also change.”

Aha! While seeking means to accurately determine the real spatial trajectory, Kepler explores the notion, that something like the effect of the “equant” might actually exist, as <a new mode of physical action>:

“About center B let an eccentric DE be described, with eccentricity BA, A being the place of the observer. The line drawn through AB will indicate the apogee at D and the perigee at F. Upon this line, above B, let another segment be extended, equal to BA. C will be the point of the equant, that is, the point about which the planet completes equal angles in equal times, even though the circle is set up around B rather than C …” Kepler notes that Copernicus faulted this hypothesis, among other things, for offending against physical principles by instituting “irregular celestial motions … the entire solid orb is now fast, now slow.” This Copernicus rejects as absurd.

“Now I, too, for good reasons, would reject as absurd the notion that the moving power should preside over a solid orb, everywhere uniform, rather than over the unadorned planet. But because there are no solid orbs, consider now the physical evidence of this hypothesis when very slight changes are made, as described below. This hypothesis, it should be added, requires two motive powers to move the planet (Ptolemy was unaware of this). It places one of these in the body A (which, in the reformed astronomy will be the very Sun itself), and says that this power endeavors to drive the planet around itself, but possesses an infinite number of degrees corresponding to the infinite number of points of the ray from A. Thus, as AD is the longest, and AF the shortest, the planet is slowest at D and fastest at F… The hypothesis attributes another motive power to the planet itself, by which it works to adjust its approach to and recession from the Sun, either by strength of the angles or by intuition of the increase or decrease of the solar diameter, and to make the difference between the mean distance and the longest and shortest equal to AB. Therefore, the point of the equant is nothing but a geometrical short cut for computing the equations from an hypothesis that is clearly physical. But if, in addition, the planet’s path is a perfect circle, as Ptolemy certainly thought, the planet also has to have some perception of the swiftness and slowness by which it is carried along by the other external power, in order to adjust its own approach and recession in such accord with the power’s prescriptions, that the path DE itself is made to be a circle. It therefore requires both an intellectual comprehension of the circle and a desire to realize it…

“However, if the demonstrations of astronomy, founded upon observations, should testify that the path of the planet is <not quite circular>, contrary to what this hypothesis asserts, then this physical account too will be constructed differently, and the planet’s power will be freed from these rather troublesome requirements.”

Kepler’s hypothesis (which undergoes rapid evolution across the pages of “Astronomia Nova”) means throwing away the notion, that the action underlying the solar system has the form of “gear-box”-like mechanical-kinematic generation of motions. Instead, Kepler references a notion of “power” and a constant activity which generates dense singularities in every interval. While for the moment, the circle remains a circle in outward form, we have radically transformed the concept of the underlying process of generation. In a sense, that shift in conception amounts to an infinitesimal deformation of the hypothetical circular orbit, which implicitly changes the entire universe. The successful measurement of deviation of a planet’s path from a circular orbit, would constitute a unique experiment for the hypothesis of a new, non-kinematic principle of action. That is the “deeper astronomy” of Kepler!

So we come back to the problem: How to determine the precise trajectory of a planet in space, given observations made only from the Earth, and taking into account the fact, that the Earth itself is moving? Having identified the “second inequality” as the crux of the problem of apparent planetary motions, Kepler turns the tables on the whole preceding discussion, and uses Mars and the Sun as “observation posts” to determine the orbit of the planet whose motion is the most difficult of all to “see” — the Earth itself!

But, how can we use Mars as an observation-post? Mars is moving. No matter! Let us assume that <part> of the hypothesis of Aristarchus remains true, namely that the planets have closed orbits, and that motion along those orbits is what produces the so-called “first inequality” determined by the ancients. In that case, Mars — <regardless of whether or not its orbit is circular!> — periodically returns to any given locus in its orbit. Furthermore, we already know the period-length of that recurrence: it is the 1.88-year cycle which we adduced last week, by <indirect means>, from the study of Mars’ bizarre apparent motions.

So, make a series of observations of the apparent positions of Mars and the Sun, relative to the stars, at successive intervals 1.88 years apart! If our reasoning is sound, Mars will occupy (at least roughly) the same actual position in space, relative to the assumed “fixed” Sun and stars, at each of those times. On the other hand, at intervals corresponding to integral multiples of 1.88 — 0, 1.88, 3.76, 5.64, 7.52 years, etc. — the Earth will occupy <unequal> positions, distributed more and more densely around its orbit, the longer the series is continued (the phenomenon of relative incommensurability).

Now make two “nested” types of triangulations. Assuming first that the orbit of the Earth is very roughly circular, use the observations of Mars’s apparent position, as seen from two or more of those positions of the Earth, to “triangulate” Mars’ location in space. Next, use that adduced location of Mars, plus the angles defined by the apparent positions of Mars <and the Sun> relative to the stars, to triangulate the position of the <Earth> in space at each of the times 0, 1.88, 3.76 years etc. Then use these adduced positions of the Earth to develop an improved {hypothesis} of the Earth’s orbit. Apply the improved knowledge of the Earth’s orbit to correct the triangulation of Mars’ position. Use the improved localization of Mars to revise and correct the values for the Earth’s positions. Finally, use the adduced knowledge of the Earth’s orbital motion to “triangulate” a series of positions of Mars, and other planets!
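The logic of these nested triangulations can be indicated in a schematic Python sketch. Everything here, including the positions, bearings, and the helper intersect_rays, is my own illustrative scaffolding (two dimensions, toy numbers), not Kepler’s procedure in detail:

    import numpy as np

    def intersect_rays(p1, d1, p2, d2):
        # Point where the line of sight p1 + s*d1 meets p2 + u*d2 (2-D case).
        A = np.column_stack([d1, -d2])
        s, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
        return np.asarray(p1, float) + s * np.asarray(d1, float)

    def bearing(angle_deg):
        a = np.radians(angle_deg)
        return np.array([np.cos(a), np.sin(a)])

    # Step 1: two hypothetical Earth positions 1.88 years apart (a rough circular
    # orbit assumed), with the observed bearings of Mars against the stars:
    earth_1, earth_2 = np.array([1.0, 0.0]), np.array([-0.83, 0.56])
    mars = intersect_rays(earth_1, bearing(77.47), earth_2, bearing(9.51))
    print(mars)   # -> roughly [1.2, 0.9], the assumed position of Mars

    # Step 2: holding Mars (and the Sun) fixed, the same intersection locates the
    # Earth at other dates; alternating steps 1 and 2 refines both orbits in turn.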

The experiment was successful. Ramus, Aristotle, and Kant were demolished. The door was kicked open for a revolution in physics, and a new mathematics of non-algebraic, non-kinematic functions.

From Nicolaus Of Cusa To Leonardo Da Vinci: The “Divine Proportion” As A Principle Of Machine-Tool Design, Part I

Can You Solve This Paradox?

by Jonathan Tennenbaum

The following two-part discussion is intended to prompt a richer reflection on what was presented earlier, concerning Analysis Situs, the paradox of “incommensurability” in Euclidean geometry, and Nicolaus of Cusa’s discovery of a higher geometry based on “circular action.” At the same time, I will set the stage for a new series of pedagogical demonstrations, to be developed in coming weeks.

When you have encountered a new physical principle, you cannot just put it in your pocket and walk away. The new principle, if validated, implies a more or less revolutionary change in the entirety of existing knowledge. We have the task of integrating the new principle (“new dimensionality”) into a new, comprehensive hypothesis-system, incorporating the results of all pre-existing valid demonstrations of principle (i.e., the valid side of existing knowledge), as well as the new demonstration, as a new manifold of “dimensionality N+1.” What is the measure of the change in the per capita productive power of society, associated with the “impulse ratio” (N+1)/(N)? And how can we push the new manifold “to its limits,” uncovering new experimental anomalies which will provide us the stepping-stones on the way to future manifolds N+2, N+3, …?

From Cusa to Leonardo and Beyond

Would this sort of process be a fair way to characterize what happened during the 50-year period from Nicolaus of Cusa’s “De docta ignorantia,” to the collaboration of Leonardo da Vinci and Luca Pacioli on the “Divine Proportion”? Is it valid to conceptualize the scientific developments of the European Renaissance, from the Council of Florence through Leonardo and beyond, as a process of “integrating” Nicolaus of Cusa’s crucial discovery, with the best previous accomplishments of Classical Greek, Arab, and other European civilization?

Before entertaining the possible merits of such a working hypothesis, we should first make sure to reject any temptation to impose “linearized” misinterpretations on what Nicolaus of Cusa actually discovered. Here, as always, there is no substitute for “re-experiencing” the {process} of discovery, which at the same time constitutes its real {content.}

Among the most “tempting” and commonplace misinterpretations, for present-day readers, is to substitute naive visual imagination’s image of circular motion in empty space, in place of the radically different ontological conception of “circular action,” which Nicolaus actually adduced in his discovery. The promotion of this error by the Venetian agent Paolo Sarpi and his successors, as a willful fallacy, was key to the Enlightenment assault on the European Renaissance. Among other things, it provided the basis, via Galileo, Newton, D’Alembert, Lagrange, Euler, et al., for the elaboration of a so-called “analytical mechanics” as the model for an “Establishment science,” thoroughly “sterilized” against the seeds of discovery.

Circular Motion and Circular Action

Yes, there is a connection between the visible phenomena of rotation or circular motion, and Cusa’s principle of circular action. But the connection is that of a shadow to the real object, whose existence it lawfully reflects.

Two brief quotes from Nicolaus of Cusa himself might be helpful in this context. Both are taken from his mathematical essays on the quadrature of the circle and related topics. The first emphasizes the Analysis Situs principle of “relationship of species” as crucial to his discovery:

“Since polygons are not magnitudes of the same species as the circle, it is still the case, even though we can always find a polygon which comes closer to the circle than any given polygon, that among things, which can be made smaller or greater, the absolutely largest can never be attained in existence or possibility. In fact, the area of the circle is the absolute maximum relative to the areas of the [inscribed] polygons, which are capable of being more or less and therefore cannot reach the circular area, just as no number can ever attain the encompassing power of the Unity, nor the Composite the power of the Simple.”

Full Scope of Circular Action

Another essay ends with a magnificent stretto, in which Nicolaus reveals the full scope of his conception of “circular action,” encompassing the relationship between hypothesis, higher hypothesis, the hypothesis of the higher hypothesis, and “the Good”:

“We assert, therefore, that there exist beings of the nature of the circle, which could not be their own origin, since they are not like the absolutely greatest circle which alone is eternity. The other circles, which, indeed, seem not to have a beginning and an end, since they are conceived through abstraction from the visible circle, nevertheless, since they are not infinite Eternity itself, are circles whose being derives from the first, infinite and eternal circle. And these circles are, in a certain way, Eternity and complete Unity relative to the polygons inscribed in them. They possess a surface which incommensurably exceeds the surfaces of all the polygons, and they are the first images of the first, infinite circle, even though they cannot be compared with the latter on account of its infinity. And there are beings having an unending circular motion around the being of the Infinite Circle. These contain within themselves the power of all the other species, and from their enveloping power they develop, in imitation, all the other species; and, beholding everything within themselves, and beholding themselves as the image of the Infinite Circle, and through beholding this image — themselves — they raise themselves up to the eternal Truth or to the very Origin. These are the beings endowed with Reason, who comprehend everything by the power of their minds.”

Machine-Tool Design Prototype

By what mode of action do we expand the “enveloping power” of the human race, exercising increasing dominion over the Universe, and knowing Reason in the mirror of its own active participation in developing the Universe? What could be more fruitful, to deepen our understanding at this point, than to follow the track of Nicolaus’s discovery into the busy workshops and “design bureaus” of Leonardo da Vinci and his Renaissance friends! Here is the prototype of the “strategic machine-tool design sector,” which has been key to the emergence and survival of the modern nation-state up to the present.

Much oligarchical effort has been expended, over the centuries, to mystify and conceal the “machine-tool principle” underlying Leonardo’s work in all fields. For example, Leonardo is often portrayed as a “speculative genius” whose designs were wildly impractical in his day. As a matter of fact, much of Leonardo’s time was spent in direct collaboration with machine-building workshops and factories, as well as with construction teams involved in infrastructure and other projects, developing solutions to problems as they came up. Thus many, if not most, of Leonardo’s actual designs were implemented in his day.

Another malicious piece of gossip, spread by Joseph Needham among others, was that Leonardo made “no fundamental breakthrough” in the principles of machine-design. That assertion is commonly coupled with the assertion, that Leonardo was not a scientist, and that the real breakthrough, leading to the Industrial Revolution, came with the formal mathematical physics of Galileo, Newton, et al. For example, a book on Leonardo’s engineering work, published by one L. Olschki in 1949, claims: “The technical principles employed by Leonardo were hardly different from those handed down from antiquity and the Middle Ages…. He never attempted to frame new theoretical approaches or theories of mechanics.”

Leonardo’s Breakthrough

Leaving aside such malicious nonsense, get out a good collection of Leonardo’s sketches. Concentrate particularly on his designs for machines and mechanical devices. Looking over those sketches, ask yourself: What was Leonardo’s crucial breakthrough in these matters? What is stunning, revolutionary, about Leonardo’s approach to the design of machines, and related matters, which went decisively beyond what had existed before? I am not talking about individual “inventions,” so often played up as isolated entities; I am asking for a “One.”

Whoever tends to read Nicolaus of Cusa’s principle of circular action as merely a form of “motion,” in the manner indicated above, will be plunged into a rather profound paradox at this point.

Looking at Leonardo’s designs, what do you see except mere mechanical linkages — assemblies of gears, pulleys, and levers, which transmit motion from one place and direction to another, without “adding” any new motion? Didn’t Archimedes already describe the basic mechanical principles involved, as typified by the action of the lever or pulley? Or is there something more than just “mechanics” in Leonardo’s machine-designs, something absolutely banned from the textbooks of “analytical mechanics,” but which is a key to the unprecedented rate of increase in the productive powers of labor, unleashed by the Renaissance?

To be continued.

From Nicolaus of Cusa to Leonardo da Vinci:

The “Divine Proportion” as a Principle of Machine-Tool Design, Part II

Can You Solve This Paradox?

by Jonathan Tennenbaum

Lyndon LaRouche’s discoveries in physical economy provide the key to unlocking the secrets of Leonardo da Vinci and the Italian Golden Renaissance, to a degree which would have been impossible at any earlier time, before LaRouche’s work.

Observe that the leading features of Leonardo’s designs for machine tools and other machines — most emphatically including the method of “non-linear perspective” employed in his drawings — all cohere with one central conception:

The emergence of {nation-state physical economy} as a {living process} based on development of the cognitive powers of individual members of society, imposes a unique “curvature of space-time” upon the Universe, such that each and every particular must be conceptualized and measured by reference to the “horizon” defined by that curvature.

That central conception subsumes the following features and consequences[1]:

1. A physical economy is a special type of living process, whose maintenance and growth depends on development of the cognitive powers of the individual members of society.

2. The action of human Reason upon the Universe, occurs {solely} through the instrumentality of living processes. That is, through the activity of sovereign human individuals, working in and through society, upon the expanding domain of Man-altered Nature which constitutes the “substrate” of physical economy as a living process.[2]

Reason’s Dominion Over the Universe

3. Hence, Leonardo da Vinci’s conception of non-linear-perspective curvature is based on a relationship of Nicolaus of Cusa’s “species”: Living processes exercise increasing dominion over inorganic processes, and human Reason exercises increasing dominion over the entire Universe via its dominion over living processes (i.e., human individuals, the physical economy, and an expanding biosphere).

4. In particular, the required notion of “technology,” appropriate to the maintenance and development of physical economy, {cannot} be derived from inorganic physics. No mere physical laws, of the sort suitable to “inorganic physics,” could ever account for the impact of a new machine or other invention on increasing the productive powers of labor.[3] Although it is possible to design a machine on the basis of a simple hypothesis, we cannot measure its economic {effect} that way. The survival of human society, therefore, depends on shifting attention from the mere “engineering approach” of simple hypothesis, to encompass the “horizon” defined by higher hypotheses. Leonardo’s drawings have the included purpose, to communicate exactly that conception.

5. For these and related reasons, Leonardo’s studies of anatomy, and his collaboration with Luca Pacioli on the “Divine Proportion,” were decisive inputs to his approach to machine-tool design. Leonardo sought to apply to the design of machines, a reflection on the principles and means by which living organisms exercise dominion over the inorganic domain.

6. When a living organism incorporates non-living material into its active domain, it {imposes} its own characteristic {ordering} upon that material. (One day soon, the environmentalists might turn against plants and trees, denouncing them for imposing their “authoritarian values” upon poor, defenseless dirt!)

Harmonic Proportions

7. Leonardo, Pacioli, and others demonstrated how the peculiar space-time ordering of living processes finds its lawful {visible} expression in self-similar elaborations of the harmonic {proportions} derived from the division of the circle and sphere. The latter all belong to the dominion of the circle’s “Golden Section.”

8. This sort of approach points to a principle of {harmonic composition of motion} for the evolution of machine-tool designs integrating an increasing number and density of degrees of freedom. The harmonic principle of the “Golden Mean” will be reflected, not necessarily in the individual machine per se, but rather in the context of the evolutionary series of species of technology. The latter constitutes, on the one side, a central functional feature of the growth of physical economy as a living process, while at the same time embodying an ordering of mutually inconsistent theorem-lattices of increasing “power” under the principle of “higher hypothesis.”

9. In the continuation of this process, with the increase in energy-flux density and precision of machine-tool design, discoveries in microphysics oblige us to replace the concept of “motion” by a generalized notion of “harmonically ordered physical action.” The approach of Leonardo (and later Kepler) received preliminary, but brilliant confirmation in the domain of atomic and nuclear physics.

Beauty of Leonardo’s Drawings

10. Hence, the stunning beauty of Leonardo’s drawings! He communicates not merely a set of “specifications” for a machine, but a {conception}–a conception of that invention as seen in the perspective defined by the creative principle of the Universe as a whole. By this method of “non-linear perspective,” Leonardo is able to communicate the creative process itself, and not merely a particular product. Thus, Leonardo’s designs and machines are vehicles for the communication of higher ideas, for the generation of higher qualities of labor power. Like great Classical music, they embody Reason’s ironic reflection on the principle of life.

Notes:

1. Besides Lyndon LaRouche’s writings, I would especially recommend juxtaposing to our discussion, the relevant articles by Dino de Paoli on Leonardo and related matters.

2. Have you stopped to consider the significance of the fact, that we need a brain in order to think? Actually, we need more than that: To develop, individual creative reason must continually expand and intensify its “active domain.” By the term “active domain,” I mean, roughly, the region of the Universe which is directly subject to the deliberate actions of a given individual. The growth of the active domains of members of society, is obviously correlated to increase in per capita and per hectare consumption of energy and other components of the market baskets, as it is to the increase of the productive powers of labor. To the extent that the creative contributions of individuals are communicated and realized by society, their active domains may encompass the entire physical economy, and more. Would it be justified to consider, that growth of the “active domains,” in some respects represents an enlargement of the physiological processes of the brain, as an instrument for the development and realization of valid ideas?

3. Some might reject such a categorical proposition as preposterous. Don’t we know countless examples of inventions, whose labor-saving effects can easily be explained by any physics student? For example:

- Levers, pulleys, and similar devices permit a single man to lift a weight which would otherwise require the muscle power of many men.

- Ball-bearings and similar devices improve the performance of an existing machine by reducing friction and wear (a major focus of Leonardo’s work, by the way).

- Steam engines and other power-generating devices multiply the amount of useful power at the disposal of an industrial operative, etc.

What could be more obvious, than the increase in productivity, caused by the above-mentioned inventions? Yet, such a casual affirmation overlooks at least one decisive point: What about the direct and indirect {costs} (in real terms) of developing, producing, and maintaining a given machine or technical improvement? How can we be {sure}, in any given case, that that additional cost will not actually exceed the saving in labor, or other benefits provided?

Observe, for example, the vastly greater complexity and intensity of motion of Leonardo’s machines, compared to the rudimentary gadgets of pre-Renaissance Europe. Even the simple act of introducing ball-bearings and related devices into machine design, adds new degrees of freedom to the system as a whole, raising the demands on the {quality of labor} required for the manufacture and maintenance of the machine. Actually, the purpose of the machines themselves, as means for urban-centered development of the nation-state, is not to “economize labor” per se, but rather to {uplift its cognitive quality}. (Thus, while industrialization subsumes as a necessary aspect the reduction and final elimination of manual labor, a healthy industrial society actually increases the “work load” which must and can be borne by the average member of society.)

Reflecting upon such matters, we realize that the increase in the productive powers of labor, associated with the introduction of a new machine into the productive process, can hardly be determined from a mere analysis of the machine itself. It requires that we carry out a measurement of the entire economic process within which a proposed new machine design is to be “inserted.” Since the insertion of a new technology changes the characteristics of the economic process, that measurement must take into account, not only the present, but also its projected development in the future. In the last analysis, there is no adequate answer which does not center on the rate of improvement of the cognitive powers of labor, associated with any given “pathway” of economic development. Herein lies the cause of the essential “incommensurability” of real economic growth, relative to any linear sort of engineering or “systems analysis” standards of measurement. The significance of the Golden Section (“Divine Proportion”) comes once more to the fore.

How To Purge Your Mind of “Artificial Intelligence”: Introduction to a new pedagogical series

by Jonathan Tennenbaum

One of the reasons why you don’t really understand the significance of Plato’s five regular polyhedra, is because you have never questioned your own, completely unfounded assumption, that the sphere is a figure in 3-dimensional space.

We all remember the type of horror movie, where the Earth has been invaded by alien beings with the capability of taking over the minds and wills of human victims. The victims look the same as before, but their brains have been hollowed out or short-circuited by some sort of implanted devices, so that they are effectively no longer human.

However frightening the experience of such a horror movie, it hardly compares with the real horror story, of what standard school mathematics education has done to the minds of nearly everyone. As a result of what was done to you, the creative, cognitive processes of your mind are “turned off” most of the time, even when you are engaged in what you consider to be intellectual effort. Instead, a form of “artificial intelligence” operates, that was installed through school education and related, mostly early experiences. Under adverse conditions of intense cultural pessimism, even those who have known the joy of real thinking, will tend to revert to those previously implanted, school-room (i.e. “career-oriented”) habits of “artificial intelligence”.

This “artificial intelligence” (otherwise known as Aristotelianism) excludes from consideration exactly that, which is the object of human cognition. The tactic is to divide the Universe into “sufficiently small” domains of experience, in which the cognitive considerations, thus ignored, are assumed “not to matter”. Afterwards, the flattened, linearized pieces are fitted together again to construct a parody of human knowledge. The typical symptom of artificial intelligence is an obsessive fixation on the presumed existence of “objective, hard facts” — a fixation whose most revealing manifestation, perhaps, is the inability to conceptualize the fundamental significance of the sphere and the five Platonic solids.

A remedy is at hand, however. This being the best of all possible worlds — and not the vicious, artificial world of a horror film — the condition just described is more than reversible. By rooting out the problem at its deepest origins, we may be enabled, not only to restore our own creative powers to fullest blossom, but also to discover a means to render future generations forever immune to the disease of oligarchism.

That is the issue we are committed to fighting through, in the following series of pedagogical discussions on the sphere and the Platonic solids. Here, on this battlefield of choice, we are resolved to smoke out and defeat the internalized enemy of the human mind. In the process, all the most “advanced” topics we met with in our previous work — including Gauss’ biquadratic residues, modular functions and the Gauss-Riemann domain of multiply-connected action — will reveal themselves as the most elementary sorts of notions, already implicit in the original discovery of the regular solids’ uniqueness, more than 2500 years ago. All has been buried under the myth, that a non-existent “plane geometry” was the starting-point for Greek mathematics.

Start, therefore, with the following task: Given a clear night in which the stars are visible throughout the sky, how can we make a preliminary, but conclusive measurement of the curvature of the Universe?

The First Measurement of the Universe

Part II of a series

by Jonathan Tennenbaum

The spread of mythologies in the name of “history of science,” began very early.

The Greek historian Herodotus reported, that geometry was invented in Egypt and transmitted from there to the Ionians. He also claimed, however, that geometry arose in connection with the practical problem, of measuring and reconstructing the division-boundaries of agricultural fields after each periodic flooding of the Nile (geo-metry = earth-measurement). If Herodotus intended the term, “geometry,” to signify some specialized knowledge relevant to surveying, there may be an element of truth to the latter assertion; but if he meant the geometry of Thales, Anaximander, Pythagoras and Plato, then the account is certainly wrong and highly misleading. This story of geometry’s alleged practical origin (whether Herodotus is to blame for it or not), found its way into the subsequent histories of science, up to this day. It reminds us of the theory of the “opposable thumb” and other absurdities of Friedrich Engels’ “dialectical materialism.” Contrary to this, the overwhelming evidence — including that contained in Plato’s Timaeus, in the Vedic and other ancient calendars, as well as the implied navigational skills of the “peoples of the sea” — demonstrates that {all physical science originated in astronomy}. Astronomy, in turn, was cultivated in some form already tens, probably hundreds of thousands of years before the classically recorded Egyptian civilization, by maritime cultures spread across the globe. Geometry begins with nothing less, than Man’s attempt to measure the Universe as a whole.

This should indicate that the practice of basing school mathematics education on so-called “plane and solid geometry” — a practice that has dominated European education, despite the Renaissance, for over two millennia — is profoundly in error. Henceforth, the teaching of geometry should begin with the {failure} of plane and solid geometry, to account for the most elementary features of visual astronomy. That failure has a precise, knowable structure; to characterize that singularity, is to carry out the first scientific measurement of the Universe.

Bearing in mind that we are dealing with matters of fundamental importance, we need not apologize for the elementary nature of the following account. It should help refresh the mind on familiar matters, while opening some new flanks at the same time.

Constructing a Star Chart

Imagine you are a prehistoric astronomer, attempting to produce a star chart on a clay tablet or papyrus sheet. You require that the chart should accurately represent the shapes of the familiar constellations of stars, and also the mutual orientations of the various constellations relative to each other, so that the chart can be used for navigational purposes.

As far as individual constellations are concerned, you find no difficulty drawing any one of them separately. You just naively transfer the image of what you see, {as if unchanged}, to the tablet. No problem? But, as you begin to map {larger} portions of the sky, adding more and more constellations to the chart, difficulties arise. The constellations don’t fit together. You begin again, with another constellation as starting-point. Once again, things don’t fit. Why? Although in each case you can specify the point at which the mapping process begins to break down, the underlying cause clearly lies {outside} the specifics of each attempt.

This problem embraces paradoxes, of the sort any curious child will have observed. I stand up and look straight ahead at some point on the horizon. Now I look to the right of that, and more to the right, and so on, until, by continuing my action of “looking to the right,” I turn all the way around and come back to the original point…from the {left}! Or instead, if I start by looking straight ahead as before, and now look {up}, and keep turning my head in that “upward” direction further and further, I end up bending backward until I am moving my head {downward} toward the ground and seeing everything upside down!

(Let no one laugh off these simple paradoxes of linearity, who is not prepared, for example, to explain to any child or adult, how it can happen that the Earth can be in two different days, depending on the position on the Earth’s surface, at one and the same moment in time.)

These sorts of paradoxes give rise to unavoidable, interwoven {periodicities} in our attempt to construct a star chart — as for example when I attempt to represent the observer’s looking “to the right” and “upward” by motion “across” and “up” on the chart.

(At a more apparently “advanced” level, the same problems plague the cartesian-like coordinate systems still used by astronomers to record the positions of the stars. To describe one such system in a perfunctory manner: Given any star, let “y” be its angular “height” above the horizon (i.e., the magnitude of angle from the position of the star “downward” to the point “directly below it” on the horizon), and “x” the angle along the horizon from that point to some chosen fixed point on the horizon. We might thus represent the position of any star by a point in the cartesian plane, whose rectilinear coordinates are proportional to x and y, respectively. The resulting mapping, however, grossly distorts the shapes and angular relationships of the constellations, especially those in the vicinity of the overhead or zenith-point, where the mapping “explodes.”)
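For readers who care to check this numerically: the short computation below, written in Python with the numpy library (a choice of tools which, like the 1-degree test arc, is merely an assumption of this sketch), shows how the x-separation demanded by such a chart grows without limit for a fixed arc of sky, as the stars approach the zenith:

    import numpy as np

    # Two stars at equal height, a true 1-degree arc apart in the sky.
    # On the chart (x = angle along the horizon, y = angular height), the
    # x-separation representing that fixed arc grows without bound as the
    # pair approaches the zenith: the mapping "explodes."
    sep = np.radians(1.0)                    # true angular separation
    for alt_deg in (0.0, 45.0, 80.0, 89.0):
        alt = np.radians(alt_deg)
        # equal chords on the unit sphere: 2*sin(dx/2)*cos(alt) = 2*sin(sep/2)
        dx = 2 * np.arcsin(np.sin(sep / 2) / np.cos(alt))
        print(f"height {alt_deg:4.0f} deg: chart x-separation = "
              f"{np.degrees(dx):6.3f} deg")

At a height of 89 degrees, the chart already demands some 60 degrees of horizontal displacement to represent a single degree of sky.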

This merely descriptive approach, however, falls short of identifying the underlying cause of the problem. In particular, it does not answer a crucial question which ought to pose itself to us: Does the difficulty arise only when we want to map {large portions} of the sky; or is it already present, albeit so far unnoticed, in the attempt to represent any {arbitrarily small} portion of the sky?

The Spherical Bounding of the Universe

To progress further, we need to examine the internal characteristics of that action by which we, as ancient astronomers and navigators, are attempting to measure the Universe. The ancient astronomer makes a series of {star sightings}, measuring, in effect, the {rotation} from one direction in the sky to another. Imagine that a movable “pointing-rod” of fixed length is fixed at one end to a universal joint at our point of observation. Observe that the tip of that rod moves on a {spherical surface} whose center is the fixed pivot point, and whose radius is the rod’s length. Imagine we were to construct a transparent spherical shell of that dimension around the center, and mark the shell at each position where the end of the rod points to a star. The result would be a spherical star-chart, whose markings would coincide {exactly} with the observed star positions when viewed from the center of the sphere (and only then).

We have demonstrated a {spherical bounding} of our action to measure the Universe! The sphere is not an object in the sky, but a determinate feature of our act of measurement: a representation of its underlying {ordering-principle}. Does that make it arbitrary or “purely subjective”? By no means! This phase of astronomy is a necessary step in the self-development of the Universe, and thus an imbedded characteristic of the Universe itself.

It now appears, that the ancient astronomer’s problem of drawing a star chart on a clay tablet or papyrus is equivalent to the problem of mapping the inner surface of a sphere onto a plane surface. (Note: “inner surface of a sphere” signifies — paradoxically enough — a {completely different} geometrical ordering-principle, than the “outer surface.” “Inner surface” signifies the ordering of the surface with respect to the spherical center only.)

There exist innumerable possible methods to attempt such a projection, each of which fails in a different way. The simplest is the method of central projection onto a plane outside the sphere, defined as follows: For any locus on the inner spherical surface — corresponding to a pointing-direction from the center — prolong that direction outward until it intersects the plane. Readers should thoroughly investigate this species of projection with the help of a transparent plastic sphere and a suitable light source, noting several important characteristics.

For example: the action of simple rotation (e.g. of the pointing-rod) generates a {great circle} on the inner surface of the sphere; the projected image of a great circle, so constructed, produces the effect of a {straight line} on the plane surface. Encouraged by that result, examine the effect of the projection on various arrays of great circles. At the same time, observe that the projection maps only a {half} of the spherical surface, a hemisphere, onto the plane. The boundary of that hemisphere — a great circle whose location we can determine by cutting the sphere by a plane surface parallel to the projection-plane — defines a {singularity}: the mapping “blows up” when we approach that boundary circle. In the vicinity of the boundary, the projection introduces wild distortions relative to the relationships on the inner spherical surface. The least distortion occurs farthest away from the boundary, in the “polar region” of the hemisphere.
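Both characteristics, the straightness of the projected great circles and the blow-up at the boundary, can be verified in a few lines of Python. (The tangent plane z = 1 and the particular great circle chosen below are arbitrary assumptions of this sketch, not part of the construction itself.)

    import numpy as np

    def central_projection(v):
        """Project direction v from the sphere's center onto the tangent
        plane z = 1. Only the hemisphere z > 0 has any image at all; the
        bounding great circle z = 0 is the singularity of the mapping."""
        if v[2] <= 0:
            raise ValueError("direction never meets the plane")
        return v[:2] / v[2]

    # A great circle through the pole, traced by orthonormal vectors u, w.
    u = np.array([0.0, 0.0, 1.0])
    w = np.array([np.cos(0.4), np.sin(0.4), 0.0])
    ts = np.linspace(-1.2, 1.2, 7)          # arcs safely inside z > 0
    pts = np.array([central_projection(np.cos(t) * u + np.sin(t) * w)
                    for t in ts])

    # Straightness: successive displacements along the image are parallel.
    d = np.diff(pts, axis=0)
    cross_z = d[:-1, 0] * d[1:, 1] - d[:-1, 1] * d[1:, 0]
    print("image is a straight line:", bool(np.allclose(cross_z, 0.0)))

    # Blow-up near the boundary (arc approaching pi/2 from the pole):
    for t in (1.0, 1.4, 1.5, 1.55):
        r = np.linalg.norm(central_projection(np.cos(t) * u + np.sin(t) * w))
        print(f"arc {t:.2f} rad from the pole -> image radius {r:8.2f}")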

The “catastrophic” distortions near the boundary, and the circumstance, that only half of the sphere is mapped (or actually much less, if we want to avoid the worst distortions), suggest to our ancient astronomer the following tactic: Instead of trying to map the entire spherical surface (or night sky) at once, divide the surface into regular, congruent regions, and construct the “truest possible” mapping for each one. The combination of such sectoral charts would then, hopefully, fit together to take the place of a single chart. Note, that a complete set of central projections, of the sort we now envisage, corresponds to a {regular array of great circles} on the sphere, each constituting the singular boundary of the corresponding mapping.

Out of the corner of our mind’s eye we might already have anticipated a new source of failure: The attempt to “fit” the mappings together at the edges of the chosen regions, will result in {discontinuities}!

We have entered into the domain governed by the five regular solids. We propose to explore that domain, from a new standpoint, in next week’s pedagogical discussion. To finish this one, consider the following:

We saw, that in order to reduce the effect of distortion in each spherical mapping to a minimum, the portion of the spherical surface mapped, should be made as small as possible. But, how finely can the surface of the sphere be subdivided?

The characteristic of linear, planar, solid or cartesian geometry in general — a characteristic which distinguishes such hypothetical, “virtual” geometries from the real Universe — is the purported possibility of unlimited, self-similar subdivision or “tiling” of space. Take a square in the plane, for example; by connecting the midpoints of the opposite sides, we can divide the square into four congruent subsquares, and so on ad infinitum. An analogous construction applies to any triangle. Similarly, a cube in so-called “solid geometry” can be divided into 8 (or any cube number of) congruent cubes.

What about the inner surface of the sphere? Take the division of the spherical surface into six congruent, curvilinear-square regions — i.e. a regular spherical cube. What happens when we try to subdivide those regions into smaller, congruent curvilinear squares? What happens for the division of the spherical surface, defined by the regular octahedron, and the other regular solids? What is the {common source} of the barrier to further subdivision?
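As a numerical hint toward the answer (not a substitute for the construction!), the following Python sketch computes the corner angle of one curvilinear-square face of the spherical cube, using the standard coordinates of a cube inscribed in the unit sphere:

    import numpy as np

    def corner_angle(apex, p, q):
        """Angle at `apex` between the great-circle arcs apex->p and
        apex->q, i.e. between the arcs' tangent directions at the apex."""
        t1 = p - np.dot(apex, p) * apex
        t2 = q - np.dot(apex, q) * apex
        c = np.dot(t1, t2) / (np.linalg.norm(t1) * np.linalg.norm(t2))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    # The spherical cube: project the cube's vertices onto the unit sphere.
    v = lambda x, y, z: np.array([x, y, z], dtype=float) / np.sqrt(3.0)
    apex, left, right = v(1, 1, 1), v(-1, 1, 1), v(1, -1, 1)
    print("corner angle of the curvilinear square:",
          round(corner_angle(apex, left, right), 4), "deg")   # 120, not 90!

    # On the unit sphere the angle-sum of a great-circle polygon exceeds the
    # flat value by exactly the polygon's area ("spherical excess"). Each
    # face of the spherical cube covers one-sixth of the total surface 4*pi:
    excess = np.degrees(4 * np.pi / 6)
    print("angle sum of one face = 360 +", round(excess, 4))
    # A smaller spherical square has smaller area, hence smaller excess and
    # smaller corner angles: it cannot be similar in shape to the original.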

The First Measurement of the Universe

by Jonathan Tennenbaum

Part III: Anti-Deductive Ordering Principles

How does the One subsume the Many? The key to the Enlightenment’s “coup d’etat” against the Renaissance, was to remove the Platonic conception of higher hypothesis/change from its newly reestablished, leading role in scientific work, and replace it by the principle of {logical-deductive consistency}. Britain’s Hollywood-style promotion of Newton and his famous Law of Universal Gravitation — a “discovery” actually lifted out of the pages of Kepler’s “New Astronomy” — marked a late turning-point in this neo-Aristotelian coup. Generations of gullible minds were seduced by the promise of a “world formula”: a single mathematical law, or set of laws, from which the entirety of physical phenomena could supposedly be derived by logical deduction and calculation. The British-Venetian propaganda machine succeeded in installing this cultish idea, which Leibniz had denounced and torn to shreds in his correspondence with Clarke and elsewhere, as the academically-accepted “norm” and “ideal” of the natural sciences to this very day.

Do you think you have been immune to this operation? How many times a day, in organizing, do you try to explain to contacts the relationship between two events X and Y, by attempting to prove to them, in a deductive fashion, one of the following three propositions:

X implies Y;

Y implies X;

some Z exists, that implies both X and Y.

What if the most crucial events occurring in the Universe — including those most intimately related to each other — cannot be reduced to deductive consistency? In other words: what if the actual ordering of cause and effect in the Universe is anti-deductive?

Take, for example, the question:

“Why does the well-tempered system permit only a discrete set of musical tones (12 in number) within each octave? Why does it reject an unbroken continuum of pitches, including the pitches {in-between} the 12 pitch-levels of the well-tempered scale?”

Consider, as a response, the proposition:

The necessity of a specific, discrete series of musical pitches for well-tempered polyphony, flows from the same underlying cause, which determines the {impossibility} of a singularity-free mapping of the celestial sphere onto a plane surface.

Does that mean to say, that we can {deduce} the well-tempered system of music from the geometrical properties of the sphere? Wouldn’t we thereby be falling into a species of irrational, cultish belief: “sphere-worship,” or (in an earlier phase of our discussions on these issues), “spiral-worship,” or “the cult of the Golden Mean”?

Reflecting on the difficulty experienced by many in grasping the significance of the regular solids, my attention was called to a crucial step in those solids’ derivation, which few people have even noticed, and even fewer have thought through in a rigorous way. The point in question touches upon the much-misunderstood concept of the “celestial sphere.” Omitting or glossing over the relevant step, opens the door to serious confusions and misinterpretations of a sort which appear to be rampant among us, and can derail the whole effort. It is therefore urgent to clarify this matter now, before proceeding further along our orbit. It is the habit of focussing unblocked attention on just such matters, as the professionally-educated tend to dismiss as “too trivial to be worth thinking about,” which most often yields flashes of insight into the most advanced issues in science.

These are some of the reasons, why I deliberately began last week, not with the sphere per se, but with astronomy and the problems posed by the attempted construction of a flat star map. Note: I made no assumptions about the shape of the heavens or anything like that, but set out instead to {measure} — first in a rough way, by attempting to draw the sky directly onto a flat surface, and then using a pivoted pointing-rod as an instrument. That {action} of attempted measurement, called forth an ordering principle, the which (for reasons indicated last week) I qualified as “spherical.”

We must, however, not gloss over a very crucial point here: The ordering principle in question is {not directly visible to the eye}; the immediate result of my measurement effort was a pattern of distortions and {discontinuities} — singularities of “failed” mappings!

So, forget the sphere as a visible form. Get it out of your head entirely. It tends to drag your thinking into a downward, aristotelian direction. Don’t say “sphere,” until you have generated the concept.

(An aside: Remember, Baby Boomers tend to throw words at things, as a substitute for working problems through. This is called “verbal skills”: the magic powers by which Baby-Boomers were typically raised and taught to manipulate their liberal parents, and to succeed in school, university and career … reinforced, naturally, by an occasional temper-tantrum. For this and related reasons, it is {mandatory} that the reader actually carry out the experiments indicated in last week’s, this week’s, and the following pedagogical discussions. The worst mistake of all, is to think you don’t need to actually carry out a construction or related experiment, because you presume you already know what the result will be, or can discern it from the text. That is pure information theory, pure post-industrial ideology. DO the experiments described. Don’t just do them in your head, don’t just try to imagine them, don’t watch somebody else do them, don’t read a description of them… DO THEM! Otherwise, you may read the words and make interpretations, but you won’t know what I am really talking about. Afterwards you may devise more elegant and powerful ways to evoke the relevant concepts, for which I and others will be most grateful.)

Make sure you have really worked through the main experiment from last week — the attempt to represent the visible arrangement of stars in the sky, on a flat surface. If a full, clear sky is not available, you can do a roughly equivalent experiment in the middle of a room. Take a large piece of paper, and try to draw your whole surroundings, as they appear to you, on that flat surface.

Leaving aside for a moment the spherical projections described last time, let’s examine the problem anew from the standpoint of multiply-connected circular action. Taking the cue from Leonardo da Vinci and later Johannes Kepler in his “Snowflake” paper, examine how the most elementary, multiply-connected features of circular action are expressed in the harmonic motions of the human body.

Standing straight from the vantage-point of your drawing experiment, point your arm straight ahead. Now, rotate the arm to point to the right (or left). That defines a first interval of rotation. From that position pointing right, now rotate into the straight upward direction. That defines a second interval of rotation. Finally, rotate down from the upward direction to the straight-ahead direction. With these three rotations you have generated a triangle: not a visible triangle, but a {triangle of rotational action}.

Now compare the two intervals: {forward==>up}, and {right==>up}. Observe, that if we rotate our whole body around to face to the right, then the interval {forward==>up} now becomes the interval {right==>up}. Observe also, that if we bend forward at the waist, so the trunk of our body is pointing straight forward, then the motion of our arm, which produced the interval {right==>up}, now does {right==>forward}. In this way, by {rotating the rotations}, each of the three rotations forming our “triangle” can be rotated into any of the others. We have an equilateral rotational triangle! The rotations which carry each of those three rotations into any of the others, constitute the {angles} of the triangle. (Note, that the rotations in question are all of the type described in ordinary geometry as “right angles.” Anticipate the paradox: a triangle whose angles are all right angles!)
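For those who wish to cross-check the construction numerically, here is a Python sketch; the axis conventions (forward = x, left = y, up = z) and the sign of each 90-degree turn are assumptions chosen to match the arm motions described above. Traversing the whole triangle carries “forward” back to “forward,” yet the composed rotation is not the identity: it is itself a 90-degree rotation, reflecting precisely the excess of the three right angles over a flat triangle’s 180 degrees.

    import numpy as np

    def rot(axis, deg):
        """Right-handed rotation matrix about coordinate axis 'x', 'y' or 'z'."""
        c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        i = 'xyz'.index(axis)
        j, k = (i + 1) % 3, (i + 2) % 3
        m = np.eye(3)
        m[j, j], m[j, k], m[k, j], m[k, k] = c, -s, s, c
        return m

    forward = np.array([1.0, 0.0, 0.0])     # with left = y, up = z

    R1 = rot('z', -90)   # forward -> right
    R2 = rot('x', -90)   # right   -> up
    R3 = rot('y',  90)   # up      -> forward

    M = R3 @ R2 @ R1     # traverse the whole triangle of rotations
    print(np.round(M @ forward))   # forward returns to forward...
    print(np.round(M))             # ...but M is not the identity: it is a
                                   # 90-degree rotation about the forward
                                   # axis, the triangle's angular "excess".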

Explore, in the same way, the interrelations of the total of 12 mutually-similar rotational intervals and 8 equilateral rotational triangles arising from what Kepler identified as the astronomically-derived, “three distinctions” embedded in the construction of any animal: forward-backward, up-down, left-right. Don’t confuse them with coordinate axes in cartesian space; we make no assumption of scalar, linear extension here, but only angular, rotational action, implicit in our astronomical measurements.

Indeed: What is the crucial distinction of the manifold of rotational action, we have begun to explore, as compared to a flat, cartesian manifold?

Note the following: In a flat plane, for example, linear displacement “to the left-and-right” and motion “up-and-down” are apparently {independent} degrees of action. If we, for example, move one unit distance to the right in a plane, and then one unit upward, the result will be the {same}, as if we would first go up, then to the right. Compare the composition of motions, that constituted our “rotational triangle.” Are rotation “up,” and “to the right,” for example, strictly {independent} dimensions of action? Or is not the very existence of the equilateral right-angled triangle, just generated, characteristic of the multiple-connectedness of the rotational manifold?

—————————

Note: Our ongoing pedagogical exploration should provide guide-posts for sorting out the real history of ancient geometry, and demolishing encrusted mythologies. The following note from my own, preliminary readings, will hopefully encourage comments and contributions by others.

An 1870 German treatise on the development of geometry before Euclid, refers to an ancient Egyptian treatise on geometrical constructions, the so-called Rhind papyrus from 1100-1000 B.C. According to the German author, that papyrus documents familiarity with the regular solids, as well as the elements of spherical (i.e. rotational) geometry. The papyrus contains a note to the effect, that it is a copy of a treatise dating from much earlier, probably to 3400-3200 B.C. Much later, around 600 B.C., the Ionian Thales devoted much of a lengthy visit to Egypt, to studying the methods and results of Egyptian astronomy. Back in Miletus, Thales and his school, including most notably Anaximander, reworked the Egyptian results and launched a revolution in Ionian-centered scientific development. The next phase appears to emanate from the philosophical-political movement of Pythagoras, who (among other things) is credited by later Greek writers with discoveries concerning the construction of what were referred to as the “cosmic figures” (kosmika schemata). Of course, the Greek sense of “cosmic” has nothing to do with the present-day connotation of the mystical or other-worldly. Quite the opposite: the Greek expression connotes “ordering” in the sense of “ordering of the Universe” or “the Universe as ordering principle.” This is the platonic conception Wilhelm von Humboldt’s brother Alexander intended as the title of his many-volume summary of the natural science of his day: “Kosmos.”

It would also be worthwhile to investigate the obvious astronomical origin of the Chinese “Book of Changes,” which (among other things) contains unmistakable references to the characteristic, octahedral singularity of visual astronomy, explored above in a preliminary fashion. The Chinese and Egyptian developments are evidently coherent.

Part V: The Curvature of Visual Space

by Jonathan Tennenbaum

When we attempt to relive Kepler’s discovery of the efficient ordering-principle of the solar system and its crucial empirical feature — the exploded planet between Mars and Jupiter — a chief obstacle we encounter is our own, deeply-ingrained assumptions concerning the nature of space. However much some people might scream and hurl epithets at Newton, when you scratch the surface, you often find their idea of space essentially coincides with Newton’s; indeed, it seems virtually impossible to them to imagine anything essentially different, than an infinitely-extended, featureless void in which straight-line motion (or something equivalent to it) is the elementary form of action. This typically goes together with an awful sense of smallness, the existentialist’s squatting in the middle of an endless parking lot.

Fortunately, remedies are at hand. The Universe is much, much smaller than you think. What at first glance might appear to be “merely subjective” paradoxes of visual geometry, can help us free ourselves from the prison of Cartesian space, and provide a preliminary insight into a notion of anti-Newtonian curvature of the Universe, in which no isolated events are possible. The following, experimental exploration paves the way.

For this purpose, let’s go back once again to our starting-point — mapping the stars — from a somewhat different angle. Rather than attempting to draw the heavens (or other features of your surroundings) onto a piece of paper, proceed as follows. Take a flat surface of transparent material (such as plexiglass) and fasten it somehow in a fixed position in front of you, so that you can see a chosen constellation of stars (or arrangement of objects in a room) through that transparent window. Using a marker pen, and being careful not to change your eye’s position in space (better use one eye and keep the other closed) mark onto the window the positions of the stars or other objects that you see. By construction, the positions and configuration of the resulting marks will coincide {exactly} with those of the corresponding objects — at least, as seen from the chosen vantage-point of your eye.

What is the problem with this procedure? For one thing, we evidently cannot map more than the one-half of the visible world which lies in front of us through the window, at one time. We might of course set up the window on the opposite side of our vantage point, turn around and map the other “half-world”. But the two maps do not fit together smoothly. Separating the two half-worlds is a singularity, where our ray-of-sight becomes more and more skew to the surface, and finally does not touch it at all.

This is not the only difficulty. Taking the window down from its fixed position and examining directly various parts of the image drawn on it, we find that they generally {do not} match what we see at all, but become more and more distorted as we move outward from the center (i.e. the region of the window directly in front of the vantage-point). Distorted how? Take a constellation of stars, for example. As the ancients did, we assist our memory by imagining the stars of the constellation joined by imaginary lines in the sky; the resulting {shape}, as reflected in specific angles between those lines, helps us to recognize the constellation. Now look at the image of constellations which are far from the center of our projection. If we measure the {angles} made by the corresponding lines on the surface of the window, we find they are generally very different from the angles we see in the sky. Try it!

The paradoxical nature of the difficulty involved, will become clearer, if you do the following very simple additional experiment: Stand in the middle of an approximately cubical room, facing one of the walls. That wall, bounded by the edges where it meets the ceiling, floor and adjacent two walls, is clearly {square} or at any rate {rectangular} in shape; and that is exactly what you see when you look toward the middle of the wall. In particular, the angles at the four corners are obviously right angles, right? But now look directly into one of those corners. You see three lines representing the edges of the cube coming together… at equal angles of 120 degrees each! What happened to the right angle at the corner of the wall? It now appears as a 120 degree angle! How is that possible? How can an angle change just because I look at it differently? Note, that the change is not explainable as the result of a shift in the position of the vantage-point in space. That point remains the same; all that changes is the {direction} we are looking in.

You can use the device of our transparent window to verify this bizarre phenomenon. If I set up the surface parallel to the wall, and trace out the outlines of the boundaries of the wall as they appear from my vantage-point, I get a square. The images of the boundaries are straight line segments intersecting at right angles. If I now look straight at the upper right-hand corner, and hold the transparent surface perpendicular to my line-of-sight to the corner, what I mark on the surface is three line segments intersecting symmetrically at a common point. The 90-degree angle has now become 120 degrees!
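The window construction is easily imitated numerically. In the following Python sketch the room is normalized, by assumption, to a cube with corners at (±1, ±1, ±1) and the eye at its center; the apparent angle between two edges is computed as the angle between their projections onto the plane perpendicular to the line of sight, just as the transparent window records it:

    import numpy as np

    def visual_angle(e1, e2, sight):
        """Apparent angle between edge-directions e1 and e2, for an eye at
        the origin looking along `sight`: the angle between the projections
        of e1, e2 onto the plane perpendicular to the line of sight."""
        w = sight / np.linalg.norm(sight)
        p1 = e1 - np.dot(e1, w) * w
        p2 = e2 - np.dot(e2, w) * w
        c = np.dot(p1, p2) / (np.linalg.norm(p1) * np.linalg.norm(p2))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    corner = np.array([1.0, 1.0, 1.0])   # a corner of the room
    e_y = np.array([0.0, -1.0, 0.0])     # the two wall-edges meeting there
    e_z = np.array([0.0, 0.0, -1.0])

    # Looking at the middle of the wall x = 1: a right angle, as expected.
    print(round(visual_angle(e_y, e_z, np.array([1.0, 0.0, 0.0])), 3))  # 90.0
    # Looking straight into the corner: the same edges now span 120 degrees.
    print(round(visual_angle(e_y, e_z, corner), 3))                     # 120.0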

Do you think the cubical shape of the room itself is the cause of this problem? I say no. For, imagine we would install a large, transparent sphere (for example) around our position at the mid-point of the cubical room. Taking the common midpoint of the cube and sphere as our vantage-point, we could project the image of the 12 edges of the cube onto the spherical surface. If done with great care, in fact, an observer situated at our vantage point, and looking only at the pattern of lines traced on the spherical surface, would have the impression of standing in the middle of a cube! Again, looking first toward the middle of what appears to be one of the walls, and then looking toward one of the apparent corners, the observer would experience {exactly} the same change of corner angle, from 90 to 120 degrees, as before.

(Some industrious persons should prepare “pedagogical museum” demonstrations for each local, along the lines just sketched. The key item to be procured, is a set of large (preferably at least 20 cm-diameter), transparent plastic hemispheres, which fit together to form a full sphere. Use water-soluble markers to trace the spherical equivalent of the cube, and later the octahedron and other regular solids in succession, on the surface of the sphere. Now have people look with one eye into any of the hemispheres, from a location close to the midpoint of the corresponding sphere. Note, that the great circles, corresponding to the edges of the solids, appear as straight lines when viewed from the spherical center. In the case of the octahedron, when we look toward the middle of any face, we see what appears to be a rectilinear equilateral triangle, whose angles are 60 degrees. But when we look at any vertex, we see edges intersecting at right angles! Demonstrate an equivalent phenomenon, by placing a small, bright light source (e.g. the bulb of a small halogen lamp) at the center of the sphere, and examining the projected images of the curvilinear solids onto flat screens (e.g. heavy white cardboard) placed in different positions outside the sphere.)

With a bit more care, we can demonstrate the same phenomenon of shifting angles in the observation of any constellation of stars that sweeps across a sizeable section of the sky. As we shift the center-point of our vision from one star to another, the apparent, overall shape of the constellation {changes}. This can be verified using projection on a flat transparent window.

Evidently, the cause of these phenomena is not located “out there” somewhere, not in some specific feature of the objects we are observing, but rather in the “infinitesimally small” of visual space itself.

Investigate this further with the help of the following experimental device. Take a small ball of polystyrene or a clump of putty to represent any given observation-point. (It is best to mount the ball or clump at the top of a slender rod or stick, whose lower end is fixed to a flat base.) Taking something like slender bamboo skewers (thin shashlik sticks or equivalent), we can represent any given {direction} from the given observation-point, by sticking the tapered end of a skewer, pointed in the given direction, into the center of the ball or clump. So, for example, let two such sticks, stuck into the ball, point in the directions of any two stars. Note the {angle} formed by the two directions. What is the value or measure of that angle? It would appear to be nothing but the magnitude of the {rotation} necessary to rotate the one direction into the other. Note we have made an implicit assumption or hypothesis here: the notion, that for any two directions taken from our vantage-point, it is actually possible to transform the one into the other by a simple rotation.

Now consider {three} directions, represented by three sticks pointing out from the common center. These might, for example, represent the directions of three stars, as seen from the given vantage-point. What are the {true angles} formed by that triangular constellation of stars? I don’t mean here the angles we might imagine are formed “out there” between the stars themselves, as objects supposedly existing somewhere in some sort of space, hundreds or thousands or millions of light-years away; let’s avoid making any assumptions about that. Rather, I mean the angles formed directly inside a hypothetical “monad” located at the given vantage-point.

Looking at the configuration of the three sticks, perhaps you might suddenly realize something you never noticed before: Any two of the directions define an {angle} — a unique rotation carrying one to the other. That defines 3 angles of rotation. But this is not all! Name the three given directions A, B, C. Compare the two rotations from A to B and A to C, respectively, and recall our earlier discussion of the notion of “rotation of rotation”: From the standpoint of the direction A, the two rotations, A->B and A->C are characterized, besides a definite magnitude or angle of rotation, by two different {directions of rotation}. Between those two directions of rotation there is an {angle}, namely the angle of a rotation carrying the one direction of rotation to the other. Thus, any three directions determine not 3, but a total of {6} angles of rotation!
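The six angles can be computed directly, as in the following Python sketch (the three star-directions A, B, C are arbitrary assumed unit vectors; any non-degenerate triple will do):

    import numpy as np

    def arc(a, b):
        """Magnitude of the rotation carrying direction a into direction b."""
        return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

    def rotation_angle(apex, p, q):
        """Angle between the directions-of-rotation of apex->p and apex->q,
        i.e. between the tangent directions of the two arcs at the apex."""
        t1 = p - np.dot(apex, p) * apex
        t2 = q - np.dot(apex, q) * apex
        c = np.dot(t1, t2) / (np.linalg.norm(t1) * np.linalg.norm(t2))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    A = np.array([1.0, 0.0, 0.0])   # three assumed star-directions
    B = np.array([0.0, 1.0, 0.0])
    C = np.array([0.6, 0.0, 0.8])

    print("3 rotations:",
          round(arc(A, B), 3), round(arc(A, C), 3), round(arc(B, C), 3))
    print("3 angles between rotations:",
          round(rotation_angle(A, B, C), 3),
          round(rotation_angle(B, A, C), 3),
          round(rotation_angle(C, A, B), 3))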

(Some may be accustomed to a different approach to the same relationships in terms of standard solid geometry, as follows: two directions from the common center define a common {plane}. Two such planes, defined for example by pairs A, B and B,C, intersect to form a “plane angle”. The three “additional” angles are the angles formed by the pairwise intersections of the three pairs of planes through A,B and A,C and B,C. Fundamentally, however, the planes in question represent nothing but directions of constant rotational action, and the concept of “plane angle” is just a disguise for multiply-connected rotational action. We require no assumption of self-evident linear extension, of the sort which pervades so-called “standard classroom mathematics”.)

Now examine the relationship between the array of six angles, just defined, and the {changing} shapes which an observer, located at the given vantage-point, will observe when looking in different directions at one and the same constellation of stars. Just look at the effect of projection of the three directions onto a variable plane.

To close this week’s work, try a final experiment: What is the effect of two rotations, carried out in succession? Hold a book in front of you, for example. First rotate it 90 degrees in the clockwise direction. After that, rotate it 90 degrees around the horizontal axis (the upward part rotating downward away from you). Note the resulting orientation of the book. Now, do the rotation around the horizontal axis first, and then the clockwise rotation. Why is the result different? Compare this with the case of combining linear displacements, as when we slide the book on a table a certain distance, parallel to itself, in each of two different directions. Is the multiply-connectedness of the rotational manifold, just demonstrated, responsible for the paradoxes of vision?
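A numerical rendering of the book experiment, under assumed conventions (the viewer looks along the x-axis, with up = z and left = y; the sign of each 90-degree turn is likewise an assumption of the sketch), shows that reversing the order of the two rotations leaves the book’s top edge pointing in two different directions:

    import numpy as np

    def rot(axis, deg):
        """Right-handed rotation matrix about coordinate axis 'x', 'y' or 'z'."""
        c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        i = 'xyz'.index(axis)
        j, k = (i + 1) % 3, (i + 2) % 3
        m = np.eye(3)
        m[j, j], m[j, k], m[k, j], m[k, k] = c, -s, s, c
        return m

    clockwise = rot('x', 90)   # 90 deg clockwise, as the viewer sees the book
    tip_away  = rot('y', 90)   # top edge tips away from the viewer

    top = np.array([0.0, 0.0, 1.0])        # the book's top edge, initially up
    print(np.round(tip_away @ clockwise @ top))   # clockwise first: [ 0 -1  0]
    print(np.round(clockwise @ tip_away @ top))   # tip-away first:  [ 1  0  0]
    # The two orderings disagree: rotations, unlike linear displacements
    # in the plane, do not commute.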

The First Measurement of the Universe

Part VI: What Is a Singularity?

by Jonathan Tennenbaum

We now enter a crucial phase of our journey, which begins by discovering the axiomatic implications of what at first appears as a mere optical illusion, takes us to Kepler’s discussion of “the curved and the straight” in his Mysterium Cosmographicum, and on from there to a fresh view of the regular solids.

First, an experiment.

Take a large transparent hemisphere, held or fixed with its border-circle in the vertical plane, so you can look into the inside with your eye in the spherical center and the pole of the hemisphere straight in front of you.

Trace a great circle (or actually half-circle) approximately at mid-height on the hemisphere, so that it has the appearance of a horizontal line when viewed from the center. Now trace a “vertical” great circle, which cuts the horizontal one at right angles at a point X. Viewed from the hemisphere’s center, this second line appears as a straight line running perpendicular to the first one. Note, that relative to the horizontal line (circle) running right-left, the up-down line is perfectly {symmetrical}: it does not “lean” in either direction. Now choose a point Y about 30 degrees to the right (or left) on the horizontal circle from the position of X, and draw a third great circle through Y, at right angles to the original circle. What do you see when you look from the sphere’s center? You see a straight, horizontal line with perpendiculars drawn to it at two points X and Y, don’t you? And as perpendiculars intersecting the common base-line, they must be parallel to each other, must they not? Indeed, focussing attention on a point mid-way between X and Y, the perpendiculars appear as perfectly parallel, vertical straight lines coming off the horizontal.

But as you look upward from the horizontal line, you notice that the “parallels” come closer and closer together, as if leaning toward each other! How is that possible, if they remain straight? You recheck the angles at the horizontal line. No question, they are right angles, which means {complete symmetry}: the perpendiculars cannot lean in either direction, left or right. And yet, you just found them converging toward each other! Did they somehow get bent? You follow each of the perpendiculars carefully, and find no divergence anywhere from what appears to be {perfect straightness}! How could it happen, that perfectly straight lines, making right angles to a common line, stop being parallel?
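Numerically, one may treat the horizontal circle as the equator of a unit sphere, so that the two perpendiculars become meridians 30 degrees apart (the 30 degrees follows the construction above). The following Python lines track the gap between the two “parallels” as we ascend from the horizontal:

    import numpy as np

    dlon = np.radians(30)           # X and Y are 30 degrees apart
    for h_deg in (0, 30, 60, 80, 90):
        h = np.radians(h_deg)
        # points at "height" h on the two perpendicular great circles:
        p = np.array([np.cos(h), 0.0, np.sin(h)])
        q = np.array([np.cos(h) * np.cos(dlon),
                      np.cos(h) * np.sin(dlon),
                      np.sin(h)])
        gap = np.degrees(np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)))
        print(f"height {h_deg:2d} deg: gap = {gap:6.3f} deg")
    # The gap shrinks steadily from 30 degrees to zero: the two "straight
    # perpendiculars" meet at the pole. The sphere admits no parallels.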

Compare this paradox with that of our earlier investigation of the great-circle triangle with 90 degree angles, traced on the surface of a sphere. Looking from the center, the sides appear as perfectly straight lines, forming an equilateral triangle. Looking at the angles, you see that each one is a right angle. Try to draw what you have seen on a piece of paper. Impossible? Why? A triangle is a triangle, isn’t it? If so, why can’t you draw it?

Evidently, something anomalous is going on here, which is much simpler and more fundamental than most ordinary sorts of optical illusions. I suggest, that the problem is not located in your visual apparatus per se, but in your mode of {interpretation} of visual perceptions, so that you experience the paradox as “unheimlich” (uncanny), as bursting forth inside your own mind. Let’s try to trap this critter for closer examination:

1. An arbitrary configuration of great circles, when viewed from the center of the sphere, appears to the viewer as a configuration of {perfectly straight lines}.

2. No {single} view of that configuration contains {anything} which would be incompatible with the assumption, that what we are looking at, is an array of straight lines drawn in a plane.

3. A difficulty arises only, when we compare {more than one view} of the same apparent configuration. Indeed, when we try to {correlate} our various perceptions of that array, as we look from the center in various directions, we encounter phenomena (i.e. the equilateral triangle with right angles, or the converging perpendiculars) which are absolutely incompatible with the assumption just articulated.

4. Why does this surprise and baffle us? Evidently, one and the same perception, and one and the same array of predicates (the straight-line images) can be interpreted in more than one way, from the standpoint of more than one set of assumptions concerning the geometry in which those predicates are embedded. It would seem as if the very appearance of an array of straight lines tends to evoke, in our minds, the assumption of a linear, plane geometry. Whereas, that same appearance is not only consistent with a curved geometry, but the {changing} array of appearances, arising when we change our direction of viewing, is compatible {only} with a non-zero curvature of a certain type.

5. That, then, is where the implicit flaw is located; not so much, I submit, in the formal assumption of a plane geometry per se, but rather the deep-seated tendency to regard the characteristics of a geometry as something emanating from, or self-evidently determined by, the predicates (appearances, objects, isolated “facts”) in and of themselves. Whereas, what {distinguishes} the curved from the flat geometry, in this case, is not the predicates per se, but the characteristic of {change} in the adducible relationships within the array of predicates, or more precisely, in our cognition of that change and its implications.

Kepler’s Argument

From this standpoint, let us turn to the kernel of Kepler’s argument in his Mysterium Cosmographicum, the section entitled “The sketch of my main proof”. I hope the pedagogical devices of this and the preceding pedagogicals will throw some new light on Kepler’s notion of “the curved versus the straight” (or, for reference to our present discussion, “curved versus linear, or flat”), not as an “objective” contraposition of types of forms in space, but rather in terms of the {mental processes} we have just begun to explore, and in particular the process of {shift of basic assumptions}, from one type of geometry to another. Much more could be said about this, and we shall come back to it again, but let us go ahead and read what Kepler writes:

“God wanted, that Quantity should be created before all other things, in order that a comparison of {curved} and {straight} might occur. Exactly for this reason I find the Cusaner (Nicolaus of Cues) and others possessed of divine greatness, namely because they attached such high importance to the behavior of the straight and the curved toward each other, and dared to attribute the curved to God, the straight to the created things…What the Cusaner ascribed to the circle, and others to the space enclosed by the sphere, I attribute only to the surface of the sphere alone. I am firmly convinced, that no curved thing is more noble and more perfect than the spherical surface. For, the (solid) ball is more than the spherical surface, and is mixed with the linear, by means of which alone the interior is filled. A circle arises only in a plane, i.e. only when the ball or sphere is cut by a plane….

“But why did God choose the difference between curved and straight, and the noble nature of the curved, when he wanted to form the world? Why, indeed? Only because the most perfect architect must necessarily construct a work of the highest beauty…. In order that the world might be the best and most beautiful, in order that it might be able to receive this idea, the All-wise Creator produced Magnitude and brought forth the Quantities, whose entire nature is comprehended, in a sense, in the differentiation of the two concepts, the straight and the curved… It is probable, that God from the first moment selected the curved and the straight, in order to engrave in the world the divine nature of the Creator; to make possible the existence of these two concepts, quantity was created; and in order that quantity might be conceived, He created before everything else (spatial) body.

“As we before chose the sphere, because it is the most perfect quantity, so we now make a {single} jump to the bodies, which are the most perfect among the straight quantities and consist of three dimensions.”

In this light, let’s review the ground we have traversed, once more. We had two geometries, a linear geometry represented on a plane surface, and a spherical geometry. The two geometries are fundamentally incompatible, hence Kepler’s expression: a “jump”. We cannot construct a single, consistent, “literal” representation of the curved surface of a sphere (i.e. the manifold of rotations), within the bounds of linear plane geometry. Might there exist some {other} form of representation? We already have the answer on the tip of our tongues: Given the impossibility of a consistent representation of spherical geometry within the linear, plane domain, the spherical domain’s {existence} could have no other lawful manifestation within that linear domain, than through the generation of characteristic patterns of {anomalies}! What, then, is the {minimum} set of anomalies, sufficient to characterize what Kepler describes as the “single jump” from the “straight” to the spherical geometry?

Aha! We already encountered a relevant sort of anomaly, which arose in the attempt to {correlate} different views of one and the same configuration of great circles on the spherical surface, as seen by looking in different directions from the center of the sphere.

Juxtaposing an Array of Projections

Investigate this phenomenon, by placing a light source at the center of the transparent (hemi)sphere, and projecting images of various great circles traced on the spherical surface, onto a large, flat screen which we have mounted in any chosen, variable position relative to the sphere. We see that the images projected on the screen are indeed straight lines. Indeed, if we keep the screen in its given position, and replace the light source again by our eye, then the positions of the straight lines on the screen will be seen to coincide {exactly} with the appearances of the lines (great circles) on the sphere, as seen from that center.

Note, however, that the actual array of lines projected on the screen, including the magnitude of the angles between those lines, changes greatly when you move the screen from position to position around the sphere. What is the significance of those changes? Evidently, the various projections correspond to what we referred to above as “different views of one and the same array of great circles” when viewed in {different directions} from the center.

The relationship becomes clear, when we determine the line running from the center of the sphere to the nearest point P on the plane of the screen; in other words, the perpendicular from the sphere’s center to the plane of the screen. Let Q mark the position where that line passes through the spherical surface. If we trace the projected images on the screen, and hold the screen in front of us so we are looking at P, then what we see is a “photograph” of how the sphere appears to us, when we look from the center into the direction of the point Q.

For any given position of Q, the corresponding positions of the screen are defined by the perpendicular planes to the corresponding direction. Clearly, moving the screen closer to or further away from the sphere, while keeping it perpendicular to that line, only blows up or contracts the dimensions of the image, while keeping the angles the same. If we choose to regard such changes as non-essential — which indeed seems justified in view of our search for a “minimum” representation of the anomalies — it makes sense to choose only one plane for each Q. Which one? The only unique choice, at this present stage, is to slide the screen up to the sphere until it touches it, at Q; in other words, project onto the {tangent plane} to the sphere at Q.

If, now, the anomalies which we observed earlier, are connected with the discrepancies or changes between such projections, when made in different directions, then two tasks confront us: First, to determine a minimal set or sets of projections, needed to display the type of anomaly in question; and second, to characterize the anomaly itself.

Bearing in mind what was said in the next-to-last paragraph, each projection involves the choice of a point Q on the sphere, such that the line from the sphere’s center through Q defines the direction of the projection; the screen being located at the tangent plane to the sphere at Q, perpendicular to that line. Accordingly, choosing an array of projections of the indicated type, amounts to choosing a {set of points “Q” arranged on the spherical surface}. Each projection is equivalent to a “snapshot” of the spherical surface, taken from the sphere’s center with our “camera” pointed at the corresponding point Q. The interesting phenomena will obviously be located in the regions where any two projections, say corresponding to points Q and Q’ from the set, intersect or overlap with each other.

With a bit of thought, we conclude that the character of the transformation or change between two such projections, depends only on the change of relative directions, i.e. of the relative positions Q and Q’ on the sphere; or in still other words, on the {arc} between them. As a result, to obtain the simplest, minimum characterization of the anomalies we are looking for, we must choose the array of points Q in such a way, that the arcs between adjacent points of the set, are all {equal}. In other words, they must form a {regular} array.

Isn’t it now obvious, where our journey is taking us? Look at the array of tangent planes (our “projection screens”) corresponding to the various points of the regular array of points Q. They form a kind of “envelope” within which the sphere is inscribed. Observe the edges formed where adjacent planes intersect, and cut off the portions of those planes which protrude on the outside of the intersections, in the obvious manner. What do you get?

Finally, do the following experiment. Trace a single great circle on your transparent sphere, and install a small, but bright light source in the middle of the sphere. Next, using some appropriate translucent material (e.g. plastic sheet), build a regular solid around your transparent sphere. The points of tangency with the sphere define the regular array of points “Q” in our previous discussion. Now observe the image of the great circle, projected onto the faces of your regular solid. It appears as a closed chain of straight line segments, whose “links” are at the edges of the solid. Observe the {discontinuous change of angle of inclination}, when the image crosses an edge, from one face of the solid to an adjacent one. Does this not remind you of the refraction of light? Finally, study how the image behaves, when the sphere is rotated inside the solid.
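Lacking translucent plastic, one can imitate the experiment numerically. The following sketch (Python with numpy; the cube stands in for whichever regular solid you build, and the pole of the great circle is an arbitrary choice of ours) sends each point of a great circle out to the face of the circumscribed cube which its ray actually strikes, and reports the places where the image passes from one face to the next, i.e. where the chain of straight segments “kinks.”

import numpy as np

# the six faces of a cube tangent to the unit sphere, each given by its
# point of tangency "Q" at the face center
faces = [np.array(c, dtype=float) for c in
         [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]

n = np.array([0.2, 0.5, 1.0]); n = n / np.linalg.norm(n)   # pole of the great circle
u = np.cross(n, [1.0, 0.0, 0.0]); u = u / np.linalg.norm(u)
v = np.cross(n, u)

prev = None
for a in np.linspace(0.0, 2 * np.pi, 720, endpoint=False):
    p = np.cos(a) * u + np.sin(a) * v
    c = max(faces, key=lambda f: np.dot(p, f))   # the face this ray actually strikes
    # on each single face the image is a straight segment (the same two-plane
    # argument as before); its inclination jumps whenever the face changes:
    if prev is not None and not np.array_equal(c, prev):
        print("image crosses an edge near angle", round(a, 3))
    prev = c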

The First Measurement of the Universe

Part VII: Prelude to the Pentagramma Mirificum

By Jonathan Tennenbaum

Recapitulation

Pursuing Kepler’s juxtaposition of the “curved and the straight” in terms of the attempted mapping of a spherical surface onto a plane, I last week suggested the following:

Given the manifest impossibility of a simple, consistent representation of spherical geometry within the linear, plane domain, the spherical domain’s {existence} could have no other lawful manifestation within that linear domain, than through the generation of characteristic patterns of {anomalies}! What, then, is the {minimum} set of anomalies, sufficient to characterize what Kepler describes as the “single jump” from the “straight” to the spherical geometry?

I sought to answer that question, by studying the anomalies, which occur when I project the sphere from the center onto a tangent plane. The most obvious anomaly is the apparent {incompatibility} between any two such projections: they don’t fit together, at least not in any way that can be accounted for by “plane geometry.”

I ended up with the idea, that a minimum representation of spherical curvature would be achieved by an array of projections, whose “incompatibilities” all have the same form. This led to the requirement, that the {directions} or midpoints of the projections, should form a {regular array} on the surface of the sphere. I argued, that this amounts to {projecting the sphere onto the faces of a regular solid}.

Some readers surely recognized at least one major inadequacy in my approach, namely, that I posed the choice of a regular array as a {requirement}, but I didn’t account for {where} such regular arrays come from and {why} they must exist.

The effort to fill this lacuna will take us into new territory, a territory inhabited and ruled by a wondrous creature, called the “pentagramma mirificum.”

Now, the territory in question has a fearful reputation: Many of those who venture in, either never return again, or if they do, they tend to emerge in a scrambled-up state, suffering from dizziness and giddiness and unable to report what they saw in a coherent way. To avoid falling victim ourselves, it is necessary to proceed step-by-step, and above all to fix our conceptual bearings from the start.

The Singularities of Rotation

For most of us today, the idea of simple rotation of a body around a well-defined axis seems self-evident. That idea is deeply embedded in human culture, it would seem, since no later than the proverbial invention of the wheel. Have you ever stopped to think, that an {hypothesis} is required? Indeed, if we put aside astronomy, and observe the motions of “natural” Earth-bound objects, then, apart from man-made objects, constructed or selected on the basis of that very hypothesis, we hardly find any case of rotation around a well-defined, precise axis. Pick up an irregular rock, throw it into the air or try to spin it on a hard surface. You never get a simple rotation, but rather a very complicated, wobbly motion. Imagine someone challenges you to go into a forest, without any modern tools or other products of our technical culture, and construct a wheel from the natural materials you find there. How would you do it? How would you, for example, starting from “nothing”, build a rotating table for producing pottery, an elementary form of machine-tool? If you make the attempt, you might develop a healthy respect for the level of technology embodied in the simplest household artifacts of ancient cultures.

So the idea of simple rotation as a fundamental principle of physical action, does not arise from mere sense-perception of objects around us. Nor does the notion of {axis of rotation} as a subsumed singularity of such action arise so. Might it not be the case, that the notions of rotational action and axis of rotation, at least insofar as they became a conscious principle of ancient machine-tool design, developed from astronomy? Remember how, millennia later, Gauss and Wilhelm Weber initiated a revolution in electrodynamics, by carrying over principles of astronomical measurement, into the microscopic domain. But let’s be careful not to jump over crucial steps here. Bringing machine-tool principles from heaven down to the Earth, is no self-evident linear process.

Observe the heavens on a clear night. Do you see the rotational motion of the stars? Not directly, anyway. But suppose, we as very prehistoric astronomers, have once established, with the help of our memory, the existence of the daily cycle of star motion and finally the circumstance, that the individual stars move in what appears to be a system of concentric circles in the sky. (Having filled in, in our minds, those motions unseen during the interruption of daylight and the periodic disappearance of stars below the horizon.) Now someone will probably jump up and say: What’s the big deal? You already have it: rotation!

But wait a moment. {Where} is the {axis} of that rotation? Through what points in space does it pass? Does it go through your body? Does it pass through somebody a mile away, who also observes the circular motions in the heavens? Or does the rotation have any axis at all, in the sense of a line going through definite points in space? There is still a big topological distinction between the cyclical motion we adduce in the heavens, and that of the wheel we are about to invent.

To make my point a bit clearer, take out the measuring apparatus introduced earlier, consisting of a thin pointer rod or stick whose end pivots around a fixed locus, the latter corresponding to the view-point of the observer. (Many variants of this sort of instrument are possible; what is important is the functional result, namely to determine and record the {directions} in which stars are sighted, when seen from the given locus). Now examine the {manifold} of positions of the pointer rod, as it follows a given star in the course of an evening. Supplement those positions according to the presumed motion of the star when it is not directly visible. Do the same thing for a variety of stars. What do you find?

In the course of a daily cycle, the pointer rod describes the surface of a {cone}! [Show this with the bamboo skewers (shashlik sticks) stuck into a small ball of putty or styrofoam, or equivalent means.] Different stars determine different cones. Some are narrow, some wide, in correspondence to the apparent size of the circular path of the star in the heavens. But the entire array of cones is ordered in a very beautiful way, as a {nested} series. We find there are stars which barely change position in the course of the night (and day); such a star generates a very thin cone. A star a bit further away from that region of the heavens generates a larger cone, which contains the first one, and so on. The cones open out more and more, until they become virtually flat (for stars near the so-called celestial equator, which I shall define rigorously in a moment); after that they begin to close up again, around the direction opposite to that of the original narrow cones, becoming narrower again.

Now pay attention to the really interesting features of this family of cones, its {singularities}! On the one side, we have the narrow cones, which, as they become narrower and narrower in a nested manner, appear to converge toward a certain definite direction in the heavens, common to all. How shall we characterize that singular direction? It is the direction of {least motion}.

On the other hand, as we examine stars located progressively further and further away from the locus of the heavens corresponding to least motion (known as the “celestial pole”), the cones open outward, and we encounter another singularity: an ambiguity separating two subfamilies of cones, the ones opening toward the pole, and the ones opening in the opposite direction. That ambiguity corresponds to a hypothetical, “perfectly flattened” cone, which makes what we today call a {right angle} to the direction of least motion. The corresponding “ring” around the heavens, corresponding to all possible directions of the pointing rod moving in the flattened cone, is known as the celestial equator. Stars in this region have the {greatest motion}, compared to everywhere else. At the same time, we conceive the existence of a {second} pole of {least motion} opposite to the other pole, although the earth under our feet blocks it from view.

Now, how do the characteristics of motion, which we have adduced from our observation of the stellar motions as a whole, project from the astronomical scale, down to our own, earthbound scale?

Take a putty or styrofoam ball, and 4-5 shashlik sticks. Insert one stick into the ball so that it points in the direction of the celestial pole, and insert the remaining sticks so that they point in the direction of as many stars, including one on or near the celestial equator. With time, of course, those stars will change position. Is there a single continuous motion of the ball, such that each of the pointers remains pointed at its assigned star? Now we have it: the {rotation of a solid body around a (relatively) fixed axis} is the form of action, on our earthly scale, which corresponds to the adduced motion of the heavens. The axis is the direction of the stick which points to the celestial pole. This also suggests a possible astronomical determination of a “primordial straight line”: motion along the constant direction of a pole. (There is more to be said on this, but not now).
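This last claim is easy to check numerically. In the sketch below (Python with numpy; the star directions are invented for illustration), a single rotation about the polar axis, computed by what is nowadays called Rodrigues' formula, moves all the "pointers" at once, and the arc between each star and the pole, the opening of its cone, never changes.

import numpy as np

def rotate(vec, axis, theta):
    """Rodrigues' formula: rotate vec about the (normalized) axis by angle theta."""
    axis = axis / np.linalg.norm(axis)
    return (vec * np.cos(theta)
            + np.cross(axis, vec) * np.sin(theta)
            + axis * np.dot(axis, vec) * (1.0 - np.cos(theta)))

pole = np.array([0.0, 0.0, 1.0])          # direction of least motion
stars = [np.array([0.1, 0.0, 0.99]),      # near the pole: a thin cone
         np.array([0.5, 0.5, 0.7]),       # further out: a wider cone
         np.array([1.0, 0.0, 0.0])]       # on the celestial equator: the flat "cone"

for s in stars:
    s = s / np.linalg.norm(s)
    for theta in (0.5, 1.5, 3.0):         # three moments of the night
        moved = rotate(s, pole, theta)
        # the cosine of the arc to the pole is invariant: each star stays on its cone
        print(round(np.dot(s, pole), 6), "=", round(np.dot(moved, pole), 6))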

Now imagine our pointing device placed in the middle of a transparent sphere. If we mark the locations on the surface of the sphere, corresponding to the positions of various stars as seen from the center, then the two celestial poles correspond to two points on the sphere, located opposite each other from the center, and the celestial equator corresponds to a great circle, located on the plane through the center perpendicular to the direction from the center to the poles (that plane is the “flattened cone” we spoke about earlier). We can represent the relationship of the pointing device to the heavens, by the relationship of the center of the sphere to the spherical surface. The daily motion of the stars corresponds to a rotation of the entire sphere, around the axis through the center and the two poles. In that rotation process of the sphere, the poles constitute the {regions of least motion}, the equator the {region of maximum motion}. That, in turn, gives us the principle of the wheel, which combines maximum motion on the perimeter with minimum motion of the axle.

The Singularities of Multiple, Self-reflexive Rotation

Now obviously, the preceding astronomical derivation of rotation and its singularities, does not exhaust the Universe. Although the Sun does have a daily rising and setting, if we plot its position on the transparent sphere, now made to rotate so that it follows the motion of the “fixed stars”, the relative locus of the Sun {changes}. In fact, it traces out what looks like a circle on the sphere (corresponding to the so-called ecliptic in the heavens). That circle intersects the equator (the circle corresponding to the celestial equator) in two points, apparently exactly opposite to each other (these are the Spring and Fall equinox points, the two days when night and day have equal length.) In fact, if we apply rotation to the sphere, taking as the axis of that rotation the direction through the center and those two points, we find that we can rotate the equator {exactly} onto the ecliptic. At the same time, the points corresponding to the celestial poles are carried into new positions, which have the same relationship to the ecliptic as the original poles had to the original equator. Evidently those new positions constitute the poles of a {third rotation}; a rotation whose equator is the ecliptic. You will probably see these relationships a bit more clearly, when we generate them in a slightly different way, in a moment.

Still another degree of rotation reveals itself, when we move our observation-point to a different location on the Earth’s surface. As we go toward the north, the axis of rotation of our sphere must be {rotated} to an ever greater angle relative to our apparent horizon. This fourth degree of rotation Thales, and probably the Egyptians and others long before, interpreted correctly as a manifestation of the curvature of the Earth.

These circumstances, among others, suggest, that action in our Universe involves nothing less than a combination of many degrees of rotational action. What is the interrelation or interconnection among those various rotations? Our comparison of the celestial equator with the ecliptic suggests the idea: rotational action applied to rotational action.

To explore this notion further, proceed as follows (using the surface of a medium-sized plastic or styrofoam ball, and marking singularities with a non-permanent marker). Start with a first rotation R, which generates as singularities two poles N, S and an equator E (mark them). Now imagine any arbitrary second rotation R’. The second rotation generates a second pair of poles N’, S’ and a second equatorial (or “great”, i.e. maximum) circle E’. The two equators intersect in a pair of points X, Y lying on opposite sides of the spherical center. Examining the two great circles and their corresponding pairs of poles marked on the sphere, it is manifest how to rotate one onto the other: the required rotational axis, is the axis passing through the intersection-points X,Y of the two equators. In other words, X and Y are the {poles} of the rotational action which carries E to E’. That same action carries the poles of R into the poles of R’. Note, that the equator corresponding to that third rotation, passes through {all four poles} of the original two rotations.
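Here is a small numerical check of that last assertion, in Python with numpy; the two poles are arbitrary choices of ours. Since the equators E and E' lie in the planes perpendicular to the poles N and N', their intersection-points X, Y are just the normalized cross product of the poles and its antipode; rotating about that axis, through the arc separating N from N', carries the one pole exactly into the other, and the plane perpendicular to the axis (the third equator) contains both.

import numpy as np

def rotate(vec, axis, theta):
    """Rodrigues' formula: rotate vec about the (normalized) axis by angle theta."""
    axis = axis / np.linalg.norm(axis)
    return (vec * np.cos(theta)
            + np.cross(axis, vec) * np.sin(theta)
            + axis * np.dot(axis, vec) * (1.0 - np.cos(theta)))

n1 = np.array([0.0, 0.0, 1.0])                                 # pole N of the rotation R
n2 = np.array([0.3, -0.4, 0.8]); n2 = n2 / np.linalg.norm(n2)  # pole N' of R'

# X, Y = the intersections of the two equators = the common perpendiculars:
x = np.cross(n1, n2); x = x / np.linalg.norm(x)

theta = np.arccos(np.dot(n1, n2))                          # the arc from N to N'
print(rotate(n1, x, theta))                                # ... equals n2
print(n2)

# the equator of this third rotation lies in the plane perpendicular to x,
# which indeed contains all four poles (N, N' and their antipodes S, S'):
print(round(np.dot(n1, x), 10), round(np.dot(n2, x), 10))  # both 0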

If you look carefully now, you will find at least two fish flapping around in your net.

The first fish is the remarkable suggestion: all rotations can be generated from any single one, by rotating it in the manner just described! Actually, that is not exactly true; we only demonstrated we could rotate the chief {singularities} of the two rotations into coincidence with each other, but didn’t take into account the different modes and quantities of rotation — fast or slow, continuing (indefinite) rotation or terminated (definite) rotation, and in the latter case through what magnitude of angle, etc. So, the more accurate conclusion so far would be: any rotation can be obtained from any given one, by applying a definite rotation to that one, and in addition possibly changing its mode and quantity.

The second fish, caught in the corner of our eye, so-to-speak, is the suggestion of a self-reflexive sort of “connectivity” among rotations. Let’s try to catch this one. To do that, avoid the element of arbitrariness in the angle between E and E’ in the previous discussion, by considering the effect of a {continuing} rotation of E, i.e. one that does not stop at E’. Call that continuing rotation R1. The poles of R1 are still the points we called X and Y. As I noted before, the equator E1 of R1 contains the poles N, S of the original rotation R; in fact, each of those poles traces out E1 in the course of rotation by R1.

Interesting. That means E1 is, in a sense, covered {twice} in the course of a single cycle of R1. Look at the process a bit more closely. As we proceed to apply the rotation R1, the equator E rotates into a {variable} circle E’ which intersects E in X and Y, making ever larger angles to E. Suddenly, however, we run into a singularity: when the angle is what we now call 180 degrees, E’ coincides with E, although with an {opposite direction} of rotation! At that same moment, the poles N and S have {reversed position}. As we continue to rotate further, E’ separates again from E, only to coincide with it again when the total angle of rotation is 360 degrees, i.e. a full cycle. To sum up the result: rotation applied to rotation, results in the division of a full cycle into {half-cycles}, divided by a singularity corresponding to “reversal of direction.”

Choose a victim, and ask him to prove to you, why “1/2” and “-1” are equivalent as geometrical numbers!

So far, we have rotations R and R1, poles N,S and X,Y and equators E and E1 respectively. Observe, that E and E1 intersect at two additional points, Z and W, which lie opposite each other across the spherical center, and divide both equators in half. At the same time, notice that {E is carried into E1} by a rotation whose poles are at Z and W. Call the corresponding {continuous} rotation R2, and its equator E2. Note that all four points N, S, X and Y lie on E2, and they divide a full cycle of rotation according to R2, into {four} congruent sections.

If we start with E and begin to apply R2, we again get a variable circle E”, intersecting E at the points Z and W. After a rotation of 90 degrees, E” coincides with E1. Continuing past that, we get to the singularity at 180 degrees, when E” coincides with E, except for a reversal of direction. Next, at 270 degrees, E” coincides with E1 but with reversed direction, before finally coming back to E. What shall we call the rotation from E to E1, the which, when repeated, takes us to the reversed-direction version of E? Call it “i”, otherwise famous as the first imaginary or complex number.

Now go back to your victim, and demand that he immediately explain to you the equivalence of “1/4” and “i”. Also the equivalence of both of these with “3”, since we required a uniquely-determined series of {three} rotations to divide a full rotation into four congruent subcycles and to generate “i”.
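For readers who want to check the victim's answers in modern shorthand: these "geometrical numbers" behave exactly like multiplications on the unit circle of the complex plane. That notation is, of course, a later formalization, not the construction itself; but a few lines of Python make the equivalences tangible.

import cmath

quarter = cmath.exp(1j * cmath.pi / 2)   # a quarter cycle of rotation: this is i
half = cmath.exp(1j * cmath.pi)          # a half cycle: this is -1, the reversal

print(quarter)                           # ~ 1j
print(quarter * quarter)                 # ~ -1: "1/4" applied twice gives "1/2"
print(half * half)                       # ~ 1: two reversals restore the original
print(quarter ** 4)                      # ~ 1: four quarter-turns close the full cycle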

With the addition of the rotation R2, its poles W, Z and its equator E2, a new phenomenon occurs: closure! The attempt to continue the process of generating new rotations and poles from the configuration just created, in the manner pursued so far, yields nothing new. If, for example, we take the intersection of E1 and E2, we get the poles N and S of the original rotation; and that one-fourth of a full cycle of that rotation carries E1 into E2. Thus, the construction process has an intrinsic {periodicity}, returning to the starting-point after three steps.

Any {two} of the rotations R, R1, R2 are carried into each other by the third, through the same quarter-cycle of rotation. The equator of each of the 3 rotations contains the poles of the other two, which in turn divide that equator into 4 congruent segments. The combination of all 3 equators E, E1, E2 divides the surface of the sphere into 8 congruent regions, bounded by 12 arcs and 6 vertices (the poles). Each of the regions is bounded by an equilateral curvilinear triangle whose angles are all 90 degrees.
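A quick numerical confirmation of that division (Python with numpy; the random sampling is merely a convenient way of surveying the whole surface): when the three axes are mutually perpendicular, the three equators lie in the coordinate planes, and each of the eight regions corresponds to one pattern of signs of the three coordinates.

import numpy as np

rng = np.random.default_rng(0)
regions = set()
for _ in range(20000):
    p = rng.normal(size=3)
    p = p / np.linalg.norm(p)
    # the equators E, E1, E2 lie in the three coordinate planes, so the region
    # containing p is labelled by the signs of its three coordinates:
    regions.add(tuple(np.sign(p).astype(int)))
print(len(regions))    # 8 congruent, right-angled triangular regions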

Note: There is nothing {arbitrary} about that configuration. If you begin with {any} continuous rotation (as R), and rotate it around {any} axis that lies on R’s equator (as R1), then you end up with the {same} — i.e. precisely congruent — configuration of three rotations R, R1, R2 and the same array of singularities (poles, equators, division of the spherical surface).

The reader has surely recognized the curvilinear octahedron, discussed in Part 3 of this series, and may also be familiar with the way the octahedron produces — with hardly any outside assistance! — the cube, and the cube the tetrahedron. But here we seem to encounter a natural boundary. To proceed further we must add a singularity. That will bring us face to face with the legendary pentagramma mirificum.

The First Measurement of the Universe

Part VIII: Pentagramma mirificum

By Jonathan Tennenbaum

“It is as if one were travelling, alternately, in two worlds. In one world, there is action-at-a-distance along straight-line pathways, a linear, empiricist or Cartesian world. In the adjoining world, a circular action is produced by {rotation}, not by action-at-a-distance along straight line pathways… These two worlds are two Types, of which the rotation-world is the superior, the bounding, the limiting, the determining, the higher one.” (Lyndon LaRouche, in “Cold Fusion: Challenge to U.S. Science Policy”, Chapter III)

We have now come to the construction, that Kepler’s enthusiastic contemporary Napier dubbed, “the wonder of the pentagram.” My description in words will be a bit awkward, probably unavoidably so, nor could static diagrams by themselves convey the required sense of self-reflexive, multiply-connected rotation. There is no substitute for the reader’s active exploration and replication of the following constructions.

The locus of the pentagrammum is the rotational manifold, that arose as a product of our attempt to map the heavens (see Parts 3 and 4 of this series). We have represented such rotations in two ways: {first} in terms of changes of direction, as when we observe the sky from a single viewpoint, i.e. the celestial sphere as seen “from the inside”; and {second}, in terms of the rotations of a spherical surface as seen “from the outside.” I shall start with the second representation, which is easy to experiment with, using plastic spheres and erasable markers of various colors to mark the singularities (great circles, poles etc.)

Start with an arbitrary rotation of the sphere. Call the equator for that rotation E1, its pole R. Choose any position P on the great circle E1. Think of P and R at first as reference-positions, relative to which we now juxtapose a third, arbitrary (variable) position. Let that third location, Q, be given anywhere on the sphere outside the equator E1. (To avoid certain difficulties, which I shall discuss below in part, it were best to begin with the case, where Q does not lie too far away from either P or R, i.e. forming an arc of less than 90 degrees from either of those two locations.)

Now, by “unfolding” what is implied in the relationship between the arbitrary locus Q and the two loci P and R, we obtain the following, seemingly endless {chain} of singularities:

First, there will be a {unique} great circle passing through P and Q, corresponding to the least rotation which carries P to Q (*1). Construct that great circle. Call the pole of that rotation S, and call the circle itself (i.e. the equator of the rotation) E2.

Next, there will be a unique great circle E3 passing through Q and R, corresponding to the least rotation carrying Q to R. Call the pole of that rotation T.

Again, there will be a unique great circle E4 passing through R and S. Call its pole U.

Still once more, there will be a unique great circle E5 passing through S and T. Call its pole V.

And so forth. At first glance, this chain of relationships might seem to go on ad infinitum: a “bad infinity.”

But do the experiment. You will find, to your initial surprise, probably, that the process {closes} by itself, after generating the {fifth} point! Indeed, the pole U appears to {coincide} with the starting-point of the chain, P, while V coincides with Q and so forth. The whole process repeats, generating {exactly} the same sequence of five great circles and poles once again! The points P, Q, R, S, T form the vertices of a (generally) non-regular, 5-sided spherical polygon.

This periodicity was first studied in detail, as far as we know, by Johannes Kepler’s contemporary Napier. What strikes us as so extraordinary (mirificum!), is the circumstance, that the character of periodicity does not depend on the choices of the initial points P, R and Q. More precisely: all that is assumed in the construction, is three arbitrary points P, Q and R, subject only to the condition, that P lies on the equator of a rotation whose pole is R. (As the reader can easily ascertain, the latter condition signifies that P and R are separated by an arc of 90 degrees as seen from the center of the sphere).
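Those without a plastic sphere at hand can watch the chain close numerically. In the sketch below (Python with numpy; the coordinates of Q are an arbitrary choice of ours), the pole of the least rotation carrying one point to another along their great circle is simply the normalized cross product, one of the two antipodal candidates; that ambiguity will concern us shortly. The chain of poles then closes after five steps, just as described.

import numpy as np

def pole(a, b):
    """One of the two poles of the least rotation carrying a to b
    along their common great circle: the normalized cross product."""
    c = np.cross(a, b)
    return c / np.linalg.norm(c)

P = np.array([1.0, 0.0, 0.0])                              # a point on the equator E1
R = np.array([0.0, 0.0, 1.0])                              # pole of E1; arc PR = 90 degrees
Q = np.array([0.5, 0.4, 0.6]); Q = Q / np.linalg.norm(Q)   # the arbitrary third point

S = pole(P, Q)      # pole of E2, the great circle through P and Q
T = pole(Q, R)      # pole of E3, through Q and R
U = pole(R, S)      # pole of E4, through R and S
V = pole(S, T)      # pole of E5, through S and T
W = pole(T, U)      # one step further...

# the chain closes: U falls on P (or its antipode), V on Q, W on R
print(np.allclose(U, P) or np.allclose(U, -P))
print(np.allclose(V, Q) or np.allclose(V, -Q))
print(np.allclose(W, R) or np.allclose(W, -R))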

Evidently, the self-closure of the chain into the form of a non-regular spherical pentagon, reflects a {universal} characteristic of spherical geometry, having no obvious equivalent in simple plane geometry. That characteristic determines the outcome of the construction, as it were, “from outside”; standing above and beyond the seemingly arbitrary choice of starting-points for the construction.

But let’s try to see more clearly, {why} the pentagrammum {must} close. For this purpose, let’s review the chain of relationships once again, this time from “inside” the spherical geometry. We shall find that the pentagrammum is already implicit in the simplest astronomical observations.

Under a clear night sky, stand facing due north, looking toward the corresponding northernmost point on the horizon. Take that point as your “P”. At the same time, note that the zenith point (directly overhead), corresponds to the pole of the horizon. In other words, if we point our arm toward the horizon and rotate our arm left-right so that it follows the horizon, then the axis of that rotation will be vertical, and the poles are the zenith and the point opposite to the zenith, “directly down” under our feet. Call the zenith-point “R”.

Now choose a star anywhere in the sky. Designate its position “Q”. Observe the relationships between Q, the horizon-point P and the zenith-point R. Note two imaginary arcs formed in the sky: from P to Q, and from Q to R. These are the first two sides of our pentagram. Note also the arc from the zenith R down to P, which makes right angles to the horizon — a celestial right triangle! That same arc will be a {diagonal} of the pentagram.

Now trace out the arc PQ, by pointing first at P, and then applying the relevant rotation to your arm until you are pointing at Q. Point with your other arm in the direction of the axis of that rotation, i.e. toward its pole. Call that pole “S”.

It might be helpful, in grasping these relationships, to tilt yourself in such a way, that the arc PQ appears as your new “horizon”, and your new “above” (zenith) is S. Similarly for the arc QR and its pole T, the arc RS and its pole U, etc.

In this way, we trace the pentagram as an imaginary “constellation” in the sky, unfolded from the relationship of any given star Q to the reference-points P and R. Note, that if we change the position of Q, the shape of the pentagram will also change.

Now, what makes the chain of arcs and poles close after exactly 5 steps? Perhaps the reason will emerge, if we draw up a list of the chain of relationships in the construction:

1. P is a point on the horizon.
2. Q is any arbitrary point off the horizon.
3. R is the pole of the horizon (i.e. the zenith).
4. S is the pole of the rotation P->Q.
5. T is the pole of the rotation Q->R.
6. U is the pole of the rotation R->S.
7. V is the pole of the rotation S->T.
8. W is the pole of the rotation T->U.
9. X is the pole of the rotation U->V. etc. etc.

From the list itself, we don’t see any reason why U should coincide with P, V with Q and so on. Have we failed to take account of something? Recall last week’s discussion of the multiple-connectedness of spherical rotation. Aha! We didn’t pay attention yet to the various {angles} in the pentagramma. For example: the very first angle in the construction, which is the angle formed at P between the horizon and the arc PQ. This is the angle an observer would have to tilt himself by, in order to make the great circle containing PQ into his new “horizon.” Or in other words, in the language of our earlier constructions on the sphere: it is the (lesser) angle formed between the great circles E1 and E2, which in turn is the amount by which we would have to rotate the sphere itself, to carry E1 into E2. Evidently, the point P, which is the intersection of E1 and E2, represents one of the {poles} of that rotation.

Now what happens to the pole of E1 (i.e. R), when we carry out that rotation of E1 into E2? Evidently, E1’s pole is carried to E2’s pole, i.e. R moves to S. Our conclusion: the rotation from E1 to E2 — a rotation whose pole is P — {coincides} with the rotation R->S. The latter rotation, however, appears as the 6th step in our list above, where the point “U” is defined as its pole.

So P and U are poles of one and the same rotation! Now we begin to see why the chain of relationships closes.

“The Theory of Ambiguity”

But here a difficulty arises: The circumstance, that P and U are poles of the same rotation, does not necessarily mean they {coincide}. They might instead be {antipodes}. Indeed, a rotation always has {two} poles, at diametrically opposite positions on the sphere.

By failing to consider the ambiguity in our expressions, such as “S is the pole of the rotation P->Q” or “S is the pole of the great circle E2”, we left open two possible choices. If at each step of our construction, we permit either of the two choices, we evidently end up with many more possible constellations. The chain is no longer uniquely defined, and in some cases will not close after 5 steps. The chain only becomes well-defined, if we introduce some “external” criterion for choosing between the two poles at each step: as, for example, by requiring that P, Q, R, S, T, etc. all lie on the same hemisphere. Alternatively, we could require that the rotations (arcs) all be less than 90 degrees — which can always be accomplished by the proper choice of poles — or that at each step the pole chosen should be “upward” with respect to an observer who has tilted himself through the angle between the successive great circles (or more precisely, the lesser of the two pairs of complementary angles, the one less than 90 degrees) to get from one great circle as his “horizon” to the next one. In the course of the construction, that “upward” direction pivots around in a closed cycle, pointing always toward the “interior” of the pentagram. Examining the entire configuration of five great circles, we see that not one, but {two} identical pentagons are formed on the sphere. Their vertices — 10 in all — are antipodes of each other.

At first glance, the ambiguities might seem a bothersome complication. Yet, as Gauss and Riemann developed the point in great richness: it is the ambiguities which determine, to a great degree, the whole character of a process. We began to study a similar, related case in Gauss’ approach to the Pothenot problem. Evariste Galois, a disciple of Gauss, referred to this elementary part of analysis situs as “the theory of ambiguity” (*2).

If you think the problem of ambiguity can be avoided, just try to define the vertices of the pentagram as a {continuous function} of the variable Q. Watch what happens when Q approaches, and crosses, the great circle through P and R, or when Q runs around the back and underside of the sphere as seen from P and R. The ambiguities associated with the double nature of the poles, come out as discontinuities in any attempt to impose a single, simple continuous function on the pentagram relationships.

Higher Self-similarity

Now behold the array of self-reflexive relationships subsumed by the pentagram, which the reader should be able to verify without much trouble:

The {diagonals} PR, QS, RT, SP and TQ are all equal in magnitude, corresponding to quarter-circle arcs (90-degree arcs).

Each vertex P, Q, R, S, T is a pole of the rotation defined by the opposite side (arc) of the pentagram (i.e. P is the pole of the arc SR etc.). The exterior angle formed at each vertex by the great circle-prolongations of the adjacent sides, is equal to the angle spanned by the arc on the opposing side as seen from the center of the sphere.

Of the total of 20 intersection-points of the five great circles, 10 are vertices of the two, antipodal pentagons formed by those circles (namely the intersection-points of E1-E2, E2-E3, E3-E4, E4-E5 and E5-E1). At the other 10 intersection-points of the great circles (those of E1-E3, E2-E4, E3-E5, E4-E1 and E5-E2), {right angles} are formed.

Most important is the self-reflexive characteristic, that the pentagram can be “regrown” from any three consecutive vertices. In other words: if for example I take R, S and T as starting-points, instead of P, Q and R, and construct a pentagram from {them} in the same way as before, then I end up with {exactly the same figure}.

Thus, although Napier’s pentagram — unlike a regular pentagon — can take on a continuum of different visible shapes, including very irregular ones, the periodic character of the construction-process points to a higher form of symmetry and self-similarity. Instead of the five equal angles and sides of the visible regular pentagon, the pentagram embodies five equal {transformations}. The reader who has worked through the above constructions, should already have a sense of this (*3).

Needless to say, the existence of such a five-fold, self-similar periodicity embedded in the rotational manifold, points toward the existence of the dodecahedron whose sides are regular pentagons. Indeed, as we shall see next week, the pentagramma mirificum is the crucial singularity leading us beyond the domain of the spherical octahedron and its “children” — the spherical cube and tetrahedron, as well as the corresponding straight-line polyhedra which they bound — to the dodecahedron/icosahedron and the notion of a unique, universal characteristic of the “rotational world”, subsuming, and reflected in, the whole simultaneous array of the five regular solids.

——————————————————–

(1) By “least rotation” I have in mind the following. Given two loci X, Y on the sphere, there are {many} rotations of the sphere which carry X into Y. The total angle through which the whole sphere must be rotated, in order to carry X to Y, will differ depending on which axis we choose. For example, if we choose the axis passing through the midpoint between X and Y, then a rotation of 180 degrees is required to carry X to Y. If, on the other hand, we take the rotation which carries X to Y along the arc of the great circle through those two points, and whose axis passes through the corresponding poles of the sphere, then in general a much smaller angle will be required. In fact, the latter choice of axis provides the least rotation carrying X to Y.
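To make the footnote concrete, here is a small check in Python with numpy (the rotate helper is Rodrigues' formula again, and the points X, Y are arbitrary choices of ours): rotating X through 180 degrees about the axis through the midpoint of X and Y lands on Y, and so does rotating through the much smaller arc XY about the pole of their common great circle.

import numpy as np

def rotate(vec, axis, theta):
    """Rodrigues' formula: rotate vec about the (normalized) axis by angle theta."""
    axis = axis / np.linalg.norm(axis)
    return (vec * np.cos(theta)
            + np.cross(axis, vec) * np.sin(theta)
            + axis * np.dot(axis, vec) * (1.0 - np.cos(theta)))

X = np.array([1.0, 0.0, 0.0])
Y = np.array([0.5, 0.7, 0.2]); Y = Y / np.linalg.norm(Y)

mid = X + Y                                   # axis through the midpoint of X and Y
print(rotate(X, mid, np.pi))                  # 180 degrees needed: lands on Y

axis = np.cross(X, Y)                         # pole of the great circle through X, Y
arc = np.arccos(np.dot(X, Y))                 # the arc XY, well under 180 degrees
print(rotate(X, axis, arc))                   # the least rotation also lands on Y
print(Y)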

(2) LaPlace and Cauchy bear direct, personal responsibility for Galois’ early death at the age of 21, as they do for the tragic, early death of another brilliant Gauss disciple, the Norwegian Niels Abel.

(3) One way to express that periodicity, albeit in a somewhat formal way, is as follows: Given any three loci A,B,C on the sphere, such that A and C are separated by an arc of 90 degrees (i.e. A,B,C satisfy the requirements to be consecutive vertices on a Napier pentagram), construct a locus D as the nearest pole of the rotation A->B, and constitute the {new} triple of loci “B,C,D”. Now conceptualize a transformation T, which carries the triple “A,B,C” to “B,C,D” (so defined), as a kind of geometrical function. T has the effect, when applied to three consecutive vertices of a Napier pentagram, of “shifting ahead by one” in the order of vertices. Thus, T(“P,Q,R”) = “Q,R,S”, T(“Q,R,S”) = “R,S,T” and so on. Evidently, the effect of applying T {five times}, is to come back to the original triple. Although T is not at all like a simple rotation, T’s self-similar periodicity makes it the higher analog of rotation by 360/5 = 72 degrees, which is the characteristic transformation of a regular pentagon.

The Pentagramma Mirificum and Cardinality

by Bruce Director

Before starting this pedagogical discussion, make sure you’ve worked through the report by Jonathan from two weeks ago (99056jbt001). In that discussion, you will have constructed the pentagramma mirificum, and begun to discover why Napier and Gauss referred to it as “mirificum”, i.e. miraculous. This week, we’ll take a further look.

First, from the construction itself a very surprising property emerges. Each side of the spherical pentagon, is the equator of the opposite spherical vertex, and, that vertex is the pole of that equator! Make sure you have grasped that property before proceeding.

In recent discussions, both written and oral, Lyn has emphasized the importance of knowing the difference in cardinality between a spherical surface and a flat one. These words will be only that, mere utterances, unless you work through a crucial paradox that brings this concept fully alive into your mind. For that, we have Gauss’ fragmentary investigations of the pentagramma mirificum, into which we will take a preliminary look today.

To begin, think first of the property mentioned above. Each side of the spherical pentagon, is perpendicular (when extended) to the other sides of the pentagon, that are not adjacent to it.

Compare that to a pentagon drawn on a flat piece of paper. Extend the sides of that pentagon. The non-adjacent sides will intersect at points outside the pentagon, forming a pentagram. (Like on the sphere, the pentagon and pentagram are not regular ones). The angles at which these non-adjacent sides intersect cannot all be perpendicular, but in the pentagram we constructed on the sphere, they all were.

Try a little experiment. On a flat piece of paper, draw a line labeled a. Now draw another line perpendicular to a called b. Now draw a third line perpendicular to b called c. Now, draw a fourth line perpendicular to c called d. Can you draw a fifth line perpendicular to d that is also perpendicular to a? But, when we constructed the pentagramma mirificum on the sphere, this is precisely what we did. In fact, on the sphere, our construction “automatically” closed after five perpendicular transformations. On the plane, the construction closes after only four.

Okay, this doesn’t surprise you. Of course, you say, the plane and sphere are of two different curvatures and so, as Lyn says, the geometry of each will be different in every small interval. So, it is to be expected that things that occur on the spherical surface, do not occur on the flat one. These are mere words, unless you can form in your mind a concept of the difference in cardinality between the two surfaces. I am NOT speaking of WHAT is different between the two surfaces, but the nature of the difference. (Think of the Socratic concept of change, embodied in Kepler’s concept of congruence, later adopted by Gauss in his geometrical study, Disquisitiones Arithmeticae.)

The nature of that difference, is precisely the direction of Gauss’ fragmentary investigation into the pentagramma mirificum, and can be discovered by looking at fragment 2. (Before proceeding you should review part 6 of the pedagogical discussion on spherical geometry by Jonathan Tennenbaum, filed in the Alpha computer as 99036bmd001).

Go back to the spherical pentagon and take another look at the “self-polar” property. On the spherical pentagon, a “line” (great circle) connecting any vertex to its opposite side, will intersect that side in a right angle, since each vertex is a pole and the opposite side is its equator. If you were to connect each vertex to the opposite side, the five “lines” (great circles) might or might not all intersect each other in the same point. Taking the inversion, you could pick a point inside the spherical pentagon, and be able to draw perpendiculars from each vertex to its opposite side, so that they all intersect at the chosen point.

Now, perform a variation on the experiment discussed in part 6. Draw the pentagramma mirificum on a clear plastic hemisphere, and project that pentagramma on a flat surface by placing a light source at the center of the hemisphere. The flat surface should touch the hemisphere at one point. The spherical pentagramma will project, on the flat surface, a straight line pentagram. Keeping the hemisphere still, move the flat surface. First pivot it around the same point. Then move it from point to point. (To be most effective, make your flat surface out of stiff plexiglass covered with tracing paper. Trace the projected pentagram on the tracing paper. Use a different piece of paper each time you change the projection by moving or pivoting the plexiglass. That way you can draw a series of snapshots of the different projections, corresponding to the different angles and places the flat surface makes with the hemisphere. When tracing the projections, be sure to mark the point at which the flat surface touches the hemisphere.)

Now take the array of projected flat pentagrams drawn on the pieces of tracing paper, and draw lines from each vertex that intersect the opposite side at right angles. These lines will all intersect at one point, and that point will be the one at which the plexiglass touched the hemisphere! The self-polar property of the spherical pentagramma remains embedded, in the projected plane one. In fact, as Gauss notes in his fragmentary investigation, {every} plane pentagon is nothing more than the projection of a spherical one.
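Gauss' observation can be rehearsed numerically as well. The sketch below (Python with numpy, reusing the coordinates from the pentagramma construction; the tangency point is our own choice, simply the normalized sum of the vertices, so that all five vertices lie on the near hemisphere) projects the five vertices onto the tangent plane and checks that the altitude from each projected vertex to its opposite side passes through the point of tangency, taken here as the origin of the plane coordinates.

import numpy as np

def pole(a, b):
    c = np.cross(a, b)
    return c / np.linalg.norm(c)

# the spherical pentagram, as constructed two weeks ago
P = np.array([1.0, 0.0, 0.0]); R = np.array([0.0, 0.0, 1.0])
Q = np.array([0.5, 0.4, 0.6]); Q = Q / np.linalg.norm(Q)
verts = [P, Q, R, pole(P, Q), pole(Q, R)]        # P, Q, R, S, T

q = sum(verts); q = q / np.linalg.norm(q)        # point where the flat surface touches
e1 = np.cross(q, [1.0, 0.0, 0.0]); e1 = e1 / np.linalg.norm(e1)
e2 = np.cross(q, e1)                             # a 2D frame in the tangent plane

def plane_coords(x):
    """Central projection onto the tangent plane at q, in 2D coordinates
    whose origin is the point of tangency itself."""
    x = x / np.dot(x, q)
    return np.array([np.dot(x, e1), np.dot(x, e2)])

# side opposite each vertex: P <-> RS, Q <-> ST, R <-> TP, S <-> PQ, T <-> QR
opposite = {0: (2, 3), 1: (3, 4), 2: (4, 0), 3: (0, 1), 4: (1, 2)}

for i, (j, k) in opposite.items():
    a = plane_coords(verts[i])                           # projected vertex
    d = plane_coords(verts[k]) - plane_coords(verts[j])  # projected opposite side
    # the altitude from a is perpendicular to d; it passes through the origin
    # (the tangency point) precisely when a . d = 0:
    print(round(float(np.dot(a, d)), 10))                # ~0 for all five vertices

The reason is visible in the code: each spherical vertex is the pole of its opposite side, so its projection falls exactly on the perpendicular dropped from the tangency point onto the projected side.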

To help make this point sink in, take its inversion. Start with an arbitrarily drawn pentagon on a plane. Draw the perpendicular lines from each side to the opposite vertex. These lines will all intersect in one point. But, this pentagon was just drawn on a flat paper. No sphere was used in its construction, yet the spherical property of the pentagramma mirificum is still in there. Spherical action is present, even without the sphere.

(On this point, I refer the reader to the very important, but far too little read, Science Memo by Lyn on Cold Fusion, written in prison and released during the 1992 Presidential campaign.)

We leave you this week, by introducing for future contemplation, another piece of Gauss’ fragmentary investigation. Go back to the spherical pentagramma drawn on the hemisphere with a flat surface touching the hemisphere at one point. The light source at the center of the hemisphere will project a cone, whose apex is the center of the hemisphere, and whose base will extend through the points of the spherical pentagon. The flat surface will cut that cone obliquely, defining an ellipse, that circumscribes the projected pentagon. What does this have to do with Kepler’s determination of elliptical orbits, Gauss’ determination of the orbit of Ceres, and Gauss’ later investigations into the perturbations of planetary orbits? In future weeks, we will delve into these questions.

An Exercise in The Division of the Sphere

by Bruce Director

To begin with, the hastily written end of last week’s pedagogical might have caused some confusion for those who carried out the construction. And, as Kepler said, “A hasty dog produces blind pups.” In the next to the last paragraph, the reader was asked to draw an arbitrary pentagon, such that the altitude lines all intersected in one point. The intersection at one point, of the altitude lines of the plane pentagon, will only occur on those pentagons which are central projections of a spherical one. An arbitrary plane pentagon, may not necessarily be such a projection. In those cases, the altitude lines will not necessarily meet at one point. However, those pentagons, can be transformed into projections of spherical ones. We will take that up at a later date.

That said, this week we will look at the difference in cardinality between a flat and spherical surface from another standpoint: the principle that Kepler and Gauss called congruence, or in the Greek harmonia. Unfortunately, due to the political mobilization of the past weeks, this week’s discussion is also written hastily, so I beg your indulgence in advance for what may seem to be a rushed presentation. However, the issues are crucial, and I would not want to postpone your enjoyment of working through them, by delaying its appearance in the briefing.

In the second book of the Harmonies of the World, Kepler re-introduces the Greek concept of harmonia, as equivalent to the Latin term congruentia, or in English, “to fit together.” Kepler demonstrates that on a surface of zero curvature, or a plane, certain polygons, i.e. squares, triangles, and hexagons, will fit together perfectly. He called this a perfect congruence. However, in three dimensions (i.e. solid angles), the formation of perfect congruences is entirely different. Perfect congruences can be formed by three, four, or five triangles, three squares and three pentagons. In this way, the uniqueness of the five Platonic solids is demonstrated.

But, there is still an underlying assumption not completely revealed in the above demonstration. The difference in which perfect congruences can be formed, is a function of something not stated explicitly, an underlying curvature of space. On the other hand, if we look at this principle of congruence from the standpoint of the surface of the sphere, as the Greeks and Kepler undoubtedly did, we see that this difference in congruence between two and three dimensions, is a reflection of the difference in cardinality between a surface of zero curvature — a plane, and a surface of constant curvature — a sphere.

To create this concept in your mind, think of a crucial difference between a plane and sphere. On a plane, the sum of the angles of any triangle is equal to two right angles, or 180 degrees. On a plane, triangles can change their size and relative shape, but the sum of the angles is always 180 degrees. Additionally, there is no maximum triangle. A triangle can be as big as can be imagined.

Think of a triangle on a sphere. In the constructions in the earlier pedagogical discussions on this subject, we constructed triangles with three 90 degree angles, e.g. the triangle between the zenith, a point straight ahead on the horizon, and a point directly to the left or right on the horizon. The great circles which form the sides of this triangle intersect each other at 90 degree angles, and the area enclosed by them is 1/8 the entire area of the sphere. Now, in your mind, move one of the horizon points towards the other, keeping the zenith and second horizon point fixed. What happens to the angles of the triangle and the area enclosed? The great circles intersecting on the horizon will remain at 90 degrees each, but the angle between the great circles meeting at the zenith will decrease from 90 degrees to 0. Simultaneously, the area enclosed by the triangle will also decrease. When the two horizon points meet, the resulting triangle, will look the same as a great circle from the horizon to the zenith. This “triangle” will have two 90 degree angles at the horizon, and a 0 degree angle, for a total of 180 degrees. In other words, when the sum of the angles of a spherical triangle are 180 degrees, the triangle ceases to be!

Next, do the reverse. Rotate one of the horizon points away from the other, keeping the zenith and the second horizon point fixed. The great circles intersecting the horizon will remain at 90 degrees, but the angle at the zenith will increase, and so will the area enclosed by the triangle. When the two great circles intersecting the horizon come together, the resulting “triangle” will have a zenith angle of 360 degrees, and two base angles of 90 degrees, for a total of 540 degrees. The area enclosed by this “triangle” will be 1/2 the surface of the sphere. But this “triangle” will appear to be the same as the triangle of 180 degrees, but constructed in exactly the opposite manner.

From this we can begin to arrive at a concept of a maximum and minimum triangle on the sphere. To get this idea more firmly in the mind, think of an arbitrary triangle on the sphere. If we increase the lengths of the sides of this triangle, the area enclosed will increase, as well as the sum of the angles. But, as the triangle grows, the angles between the sides will get greater and greater, until the angles are all 180 degrees, for a total of 540 degrees. Like in our previous example, this maximum triangle, encloses an area equal to 1/2 the surface of the sphere. On the other hand, if we shrink this triangle, the angles will get smaller and smaller, and so will the area enclosed.

From this demonstration, you should now be able to grasp the concept, that, unlike on a surface of zero curvature, a triangle on the surface of a sphere, has a maximum and minimum area, and the sum of the angles has a maximum and minimum boundary. But, there is a crucial distinction between the nature of the minimum boundary and the maximum. The minimum sum of the angles is 180 degrees, but that is the sum of the angles of a plane triangle. Since the sphere is nowhere flat, even in the smallest interval, a 180 degree triangle does not exist on the sphere. On the other hand, the maximum boundary, is a great circle, which, when considered as the maximum triangle, contains three 180 degree angles and encloses an area equal to 1/2 the sphere. Consequently, the sum of the angles of a spherical triangle, is always greater than 180 degrees, but never greater than 540 degrees. And, the area of a spherical triangle is always greater than zero, but never greater than 1/2 the area of the sphere. Since the area of the triangle is proportional to the sum of the angles, and since a triangle whose angles equal 180 degrees has zero area, the area of a triangle is proportional to the amount by which the sum of its angles is greater than 180 degrees. This quantity is called “spherical excess.”
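In the compact notation of later textbooks, this result (it goes back to Albert Girard, a contemporary of Kepler) reads, for a spherical triangle with angles alpha, beta, gamma on a sphere of radius r, the angles being measured in radians:

Area = (alpha + beta + gamma - Pi) x r^2

Measuring the angles in degrees instead, and counting the whole spherical surface as 720 degrees of excess, gives exactly the accounting used in what follows.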

With this principle established in our minds, let’s look at the formation of perfect congruences on the surface of the sphere. As Kepler did, we want to find what spherical polygons will make such perfect congruences. However, since the angles of a spherical polygon change with size, we must consider both shape and size when forming perfect congruences.

We begin by discovering which perfect congruences can be formed with triangles. Because of the crucial difference in the nature between the minimum and maximum triangle, we must start with the maximum triangle, i.e. a triangle whose angles sum up to 540 degrees and that encloses 1/2 the area of the sphere. We can then shrink this triangle, until we find one whose size is such that it can make a perfect congruence with at least three other triangles. Because, unlike the plane, the sphere is bounded, this process has two boundary criteria. First, since the triangles must form a perfect congruence, that is “fit together,” the angles of the triangles must add up to 360 degrees when they come together at a common vertex. And, since the sphere is bounded, these congruences must divide the total area of the sphere evenly.

Since the area of the sphere is 4 x Pi x the square of the radius, the area of the maximum triangle, on a sphere whose radius is 1, will equal 2 Pi. This same area can be thought of as an angular change from the center of the sphere, or 360 degrees. The area of the entire sphere will thus be 4 Pi, or as measured from the center of the sphere, 720 degrees.

To make a perfect congruence with three spherical triangles, we shrink the maximum triangle until it has three angles of 120 degrees, or 1/3 of 360. The total sum of the angles of such triangles will be 3 x 120, or 360 degrees, making a spherical excess of 180 degrees. Since the total area of the sphere is 720 degrees, 4 such triangles will fit exactly onto the sphere, forming a spherical tetrahedron.

If four spherical triangles are to be fitted together, we must continue to shrink the triangles until the internal angles are 1/4 of 360 degrees, or 90 degrees. The angles of these triangles will have a sum of 270 degrees, or a spherical excess of 90 degrees, or 1/8th the entire surface of the sphere, forming a spherical octahedron.

For five spherical triangles to be fitted together, the internal angles must be 1/5 of 360 degrees, or 72 degrees. The total sum for these triangles will be 216 degrees, making a spherical excess of 36 degrees, or 1/20 the total area of the sphere. This forms the spherical icosahedron.

If we make our triangle still smaller, so that six triangles fit together, the internal angles will be 60 degrees, for a sum of 180 degrees. But we already discovered that such a triangle can’t exist on a sphere, and so we’ve reached the boundary of dividing the sphere into equal regions with triangles.

Now, try dividing the sphere with spherical squares. Like with the triangle, a great circle is the maximum square, composed of four 180-degree angles, for a total of 720 degrees. The sum of the angles of a square on a surface of zero curvature, is 360 degrees. So the maximum spherical excess of a spherical square is 720 degrees – 360 degrees = 360 degrees. If we make the square smaller so that 3 can be fitted together, the internal angles must be 120 degrees, with the angles of each square having a total sum of 480 degrees. This makes a spherical excess of 480 degrees – 360 degrees = 120 degrees, or 1/6 the total area of the sphere, forming the spherical hexahedron, or cube.

If we make the square smaller, so that 4 fit together, then the internal angles must be 90 degrees, for a total sum of 360 degrees. But this is equal to the maximum spherical excess, and so such a square cannot exist on the sphere.

We can similarly show that the spherical pentagon will divide the sphere into the spherical dodecahedron, and that is the limit of equal divisions of the sphere. We leave this demonstration to the reader.
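And for checking one's arithmetic afterward, here is a minimal enumeration sketch in Python; the function name is our own. It runs through p-sided spherical polygons fitted n at each vertex, computes the spherical excess of one tile in degrees, and keeps only those whose excess divides the 720 degrees of the whole sphere evenly.

def sphere_fraction(p, n):
    """Fraction of the sphere covered by one regular spherical p-gon
    when n of them fit together around each vertex."""
    vertex_angle = 360.0 / n                       # angles must close up at a vertex
    excess = p * vertex_angle - (p - 2) * 180.0    # spherical excess of one tile
    return excess / 720.0                          # whole sphere = 720 degrees of excess

for p in (3, 4, 5, 6):                             # triangles, squares, pentagons, hexagons
    for n in (3, 4, 5, 6):
        f = sphere_fraction(p, n)
        if f > 0 and abs(round(1.0 / f) - 1.0 / f) < 1e-9:
            print(p, "-gon,", n, "at each vertex:", round(1.0 / f), "faces")

# prints only the tetrahedron (4), octahedron (8), icosahedron (20),
# cube (6) and dodecahedron (12): the five perfect congruences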

From this standpoint the nature of the difference in cardinality between the sphere and the plane can be seen anew. When we begin with the maximum polygon, a great circle, we form the simplest perfect congruence, division in half. Then as we descend from the maximum polygon, there are certain sizes which form perfect congruences, or harmonies. The polygons in between the maximum and the harmonic ones, form imperfect congruences or even dissonances. The spherical divisions, corresponding to the five Platonic solids, are the only perfect congruences, or perfect harmonies of the sphere surface. Work through this construction yourself, so we can discuss it further in future weeks.

Some Wisdom from Friends

by Bruce Director

Next week we will bring to a conclusion, this series of pedagogicals on spherical action, that began in the Dec. 18, 1998 briefing, and continued through last week’s discussion on the spherical development of the five Platonic solids. You are strongly urged to review this series as a whole. In the meantime, this week we offer you some words of wisdom from our predecessors, Cusa, Kepler and Gauss.

Nicholas of Cusa

On Learned Ignorance, Book 1, Chapter 23:

“… Hence, Parmenides, reflecting most subtly, said that God is He for whom to be anything which is is to be everything which is. Therefore, just as a sphere is the ultimate perfection of figures and is that than which there is no more perfect figure, so the Maximum is the most perfect perfection of all things. It is perfection to such an extent that in it everything imperfect is more perfect — just as an infinite line is an infinite sphere, and in this sphere curvature is straightness, composition is simplicity, difference is identity, otherness is oneness, and so on. For how could there be any imperfection in that in which imperfection is infinite perfection, possibility is infinite actuality, and so on?

“Since the Maximum is like a maximum sphere, we now see clearly that it is the one most simple and most congruent measure of the whole universe and of all existing things in the universe, for in it the whole is not greater than the part, just as an infinite sphere is not greater than an infinite line. Therefore, God is the one most simple Essence (ratio) of the whole world, or universe. And just as after an infinite number of circular motions an infinite sphere arises, so God (like a maximum sphere) is the most simple measure of all circular motions….

“… Therefore, all beings tend toward Him. And because they are finite and cannot participate equally in this End relatively to one another, some participate in it through the medium of others. Analogously, a line, through the medium of a triangle and of a circle, is transformed into a sphere; and a triangle is transformed into a sphere through the medium of a circle; and through itself a circle is transformed into a sphere.”

Johannes Kepler

Epitome of Copernican Astronomy; Book 4

[written in Q and A format in original]

“What is the cause of the planetary intervals upon which the times of the periods follow?

“The archetypal cause of the intervals is the same as that of the number of the primary planets, being six.

“I implore you, you do not hope to be able to give the reasons for the number of the planets, do you?

“This worry has been resolved, with the help of God, not badly. Geometrical reasons are co-eternal with God — and in them there is first the difference between the curved and the straight line. Above (in Book 1) it was said that the curved somehow bears a likeness to God; the straight line represents creatures. And first in the adornment of the world, the farthest region of the fixed stars has been made spherical, in that geometrical likeness of God, because as a corporeal God — worshipped by the gentiles under the name of Jupiter — it had to contain all the remaining things in itself. Accordingly, rectilinear magnitudes pertained to the inmost contents of the farthest sphere; and the first and the most beautiful magnitudes to the primary contents. But among rectilinear magnitudes the first, the most perfect, the most beautiful, and most simple are those which are called the five regular solids. More than 2,000 years ago, Pythagoreans said that these five were the figures of the world, as they believed that the four elements and the heavens — the fifth essence — were conformed to the archetype for these five figures.

“But the true reason for these figures including one another mutually is in order that these five figures may conform to the intervals of the spheres. Therefore, if there are five spherical intervals, it is necessary that there be six spheres; just as with four linear intervals, there must necessarily be five digits.

“Why do you call them the most simple figures?

“Because each of them is bounded by planes of one species alone, viz., triangles or quadrilaterals or pentagons, and by solid angles of one species alone — the three primary figures by the trilinear angle, the octahedron by the quadrilinear angle, and the icosahedron by the quinquelinear angle. The other figures vary either with respect to the angle or with respect to the plane….

“Why do you call these figures the most beautiful and the most perfect?

“Because they imitate the sphere — which is an image of God — as much as rectilinear figure possibly can, arranging all their angles in the same sphere. And they can all be inscribed in a sphere. And as the sphere is everywhere similar to itself, so in this case the planes of any one figure are all similar to one another, and can be inscribed in one and the same circle; and the angles are equal.”

Carl F. Gauss

Letter to Gerling; April 11, 1816

“It is easy to prove, that if Euclid’s geometry is not true, there are no similar figures. The angles of an equal-sided triangle, vary according to the magnitude of the sides, which I do not at all find absurd. It is thus, that angles are a function of the sides and the sides are functions of the angles, and at the same time, a constant line occurs naturally in such a function. It appears something of a paradox, that a constant line could possibly exist, so to speak, a priori; but, I find in it nothing contradictory. It were even desirable, that Euclid’s Geometry were not true, because then we would have, a priori, a universal measurement, for example, one could use for a unit length, the side of a triangle, whose angle is 59 degrees, 59 minutes, 59.99999 seconds.”
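
To make Gauss’s closing example concrete: assuming the non-Euclidean case he entertains is the geometry of constant negative curvature (taken here as -1, so that lengths come out directly in the absolute, a priori unit), the side a of an equilateral triangle of angle alpha satisfies cosh(a) = cos(alpha)/(1 - cos(alpha)), by the hyperbolic law of cosines. The following few lines of Python are a hedged illustration of that relation, and nothing found in Gauss’s letter; they evaluate the unit he proposes.

    from mpmath import mp, mpf, cos, acosh, radians

    # Equilateral triangle of angle alpha in a geometry of curvature -1:
    #     cosh(a) = cos(alpha) / (1 - cos(alpha))
    # Gauss's triangle: 59 deg, 59 min, 59.99999 sec per angle.

    mp.dps = 40   # the angle differs from 60 degrees by a mere 1e-5 arcsecond

    alpha = radians(mpf(59) + mpf(59) / 60 + mpf("59.99999") / 3600)
    print(acosh(cos(alpha) / (1 - cos(alpha))))

The side comes out to roughly 1.8 x 10^-5 of the natural length unit: minute, but perfectly definite, which is just Gauss’s point about an a priori universal measurement.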

Kick the Newton Habit

by Jonathan Tennenbaum

(In partial celebration of the second anniversary of the pedagogical discussions.)

Whoever has worked through the previous installments of this series in a thoughtful manner, should now have a fairly solid grasp of 1) how the rotational manifold and spherical curvature arise in the most elementary astronomical measurement of the Universe; 2) the characteristic sorts of anomalies, that result from any attempt to map a spherically curved surface onto a flat surface; 3) the origin of the regular solids in this context, as a single interconnected unity with the dodecahedron/pentagrammum as the centerpiece; and out of this, 4) why the five regular solids constitute the necessary and sufficient (least action) expression of the singularity, that separates spherical curvature, as a {type}, from flat, linear geometries typified by classroom plane geometry.

While much more could be said on these geometrical matters, and the pedagogy should be further refined, the preceding installments provide at least a first approximation to what is needed. But one big issue has still been left hanging. Many readers, I am sure, have a nagging thought in the back of their minds, concerning the meaning of the whole exercise. To put it crudely, but otherwise accurately, I read the thought as follows:

“Your spherical geometry is lots of fun, and now I understand the regular solids much better. But I just gotta ask you: Do you really want me to believe, that the {solids} determine the planetary orbits? They are just abstract ideas, aren’t they? How could they have any {physical} effects? I mean, don’t get me wrong, I know Newton was a bad guy and all that, but … what should I say?… I really can FEEL that gravitational force. It’s really there. You know what I mean?”

Here we have a clear case, where no decisive progress can be made, until certain entrenched, false ideas and habits of thinking are fully demolished and the rubble cleared out of the way. People should study Lyn’s most recent memo (reproduced in Friday’s briefing), which deals with exactly this topic. In honor of the second anniversary of the pedagogical discussions, I would like to add a few additional observations.

Firstly, observe that the form of the indicated, nagging doubt corresponds {exactly} to what many people react to, in Lyn’s “triple curve” characterization of the curvature that is governing the collapse of the present global financial-economic system. They cannot accept the idea, that the reason for the collapse — and the emergence of the Russia-India-China-Iran “survivor’s club” — lies entirely {outside} the domain of Newtonian-like mechanical causality. They see events as being caused by the interaction of a huge number of “forces”: market forces, political forces, sociological forces, “lone assassins” etc. They reject the idea, that the entire manifold of current history might be shaped {as a whole} in such a way, that the possible courses of events at this juncture are restricted to a very few alternative pathways, and no others. Ignoring this higher bounding of history, they entertain all kinds of scenarios and “solutions,” which do not exist in reality.

Just so, the Newtonian imagines arbitrary planetary orbits at arbitrary distances from the Sun, while the real solar system permits only a discrete array of harmonically-determined orbital bands. (Offending objects, it appears, are ejected from the system, or end up in the “garbage can” of the asteroid belt).

What is the problem? Project a curved surface on a flat surface, and observe the distortions produced. If you stubbornly insist on regarding the linearity of the flat surface as an inherent feature of reality, then you will be obliged to invent a complicated system of “forces” to explain the distortions in the image.

This is exactly what Sarpi, Galileo, Newton, Descartes etc. did.

Look at the Universe. Look at the impossibility of constructing a “flat” projection of the heavens, and look at the spherical geometry (often referred to as the celestial sphere) we demonstrated to underlie all astronomical measurement. Look at the hierarchy of {periodicities}, {cycles} which the ancients found to govern all apparent motions of the stars and planets. Look at the spherical (or spheroidal) curvature of the Earth, measured by Eratosthenes, and the spheroidal curvatures of all other celestial bodies. Look at the harmonic system of the planetary orbits, whose unique coherence with the regular-solid spherical harmonics was demonstrated by Kepler. Look down toward the microscopic scales. Look at Kepler’s founding of crystallography (in the snowflake paper), and Mendeleyev’s ensuing discovery of the periodic system of the elements. Look at the Huygens-Fresnel demonstration of the spheroidal geometry underlying the process of light propagation, and its organization in cycles of wavelength and frequency — work that demolished Newton’s linear fallacy of “light corpuscles” travelling in straight lines. Look at Ampère’s preliminary discovery of the non-linear (angular) nature of electromagnetic action. Look at Wilhelm Weber’s derivation, from his own experimental confirmation of Ampère’s principle, of the necessary existence of an essential singularity of electromagnetic action at a “critical length” corresponding to subatomic scales. Look at the implicit (if somewhat flawed) extension of the Huygens-Fresnel-Gauss-Weber work to atomic physics, by Planck, De Broglie, Schrödinger and others. Finally, look at Dr. Robert Moon’s preliminary demonstration of the Keplerian ordering of the subatomic domain. Compare this with the harmonic characteristics of living processes, and with the harmonic characteristics of human Reason, as reflected for example in the well-tempered system of bel canto polyphony. And so forth.

Review, thus, the panorama of the Universe, as the actual process of discovery has thus far revealed the Universe to be. Do you find, anywhere in this, any trace of a supposed primacy, or even mere existence of simple, straight-line action in the Universe? No, not the slightest trace! Rather, we discover everywhere reflections of a universal curvature, coherent (to a first approximation) with the characteristics of spherical bounding as understood by Nicolaus of Cusa and Kepler.

But now arbitrarily stipulate, that all events in the Universe are taking place in an “empty”, featureless, Euclidean three-dimensional space, extended indefinitely in all directions. Stipulate straight-line motion at constant velocity as the “natural” form of action inhering in that notion of space-time. Build that into your physics as a basic assumption. You have now transformed the entirety of the actual physical evidence into a gigantic anomaly!

Any motion, for example, which departs from constant, straight-line motion — i.e. all real motions! — is anomalous. So, postulate the existence of “forces” that are “bending” the motions into the observed curved trajectories. Elaborate that curve-fitting into a sophisticated mathematical structure. Congratulations! You have just received a Nobel Prize for virtual reality! The main anomaly left to be explained, is how Galileo, Sarpi, Newton etc. were able to get away with it.

“But can’t you understand, I really FEEL those gravitational forces.” We can hear Descartes swearing, pathetically: “I feel it, therefore it exists”! But sense perceptions are mere phenomena, they have no meaning in and of themselves. Some action, some change has occurred. So what?

Consider the following experiment: we suspend a magnet by its midpoint on a thread. A meter or so away, we set up a coil. When we pass an electric current through the coil, the magnet on the other side of our table rotates. What is the significance of that correlation of events? Does it mean that some physical entity (Leibniz called Newton’s forces “occult qualities”) emanates from the coil, reaches out through space across the table to the magnet, and turns it? Or were it not more reasonable, in place of such extravagant and arbitrary speculations, to report, that the magnet {responded} to a {change in the Universe}, which we generated with our actions, and that the Universe is manifestly bounded in such a way, that the correlation of events in the Universe takes a certain form, and not another? The phenomena remain the same, including the weight-lifter’s conviction, that he is working against “gravity”.

But the nagging starts in again and somebody asks, “Well, if you don’t believe in forces, then please {explain} to me, {why} the planets go around like that, why the Earth is spherical, and so forth.”

“You want an explanation? Forget it. That’s the way it is, buddy. Our Universe is (approximately) spherically-bounded, and you’re going to have to live with it!”

Sometimes, in science as in organizing, blunt answers are appropriate. Sometimes you make a bad mistake by trying to “explain” things. (Explain in terms of what?) Why? Because a certain mode of demanding explanations is really just a ruse for refusal to accept reality. Because, what the person is really saying is, “I will refuse to accept that X is happening, if the existence of X contradicts my deepest beliefs.” What people commonly mean by “explanation,” is to demonstrate the {deductive consistency} of an event, with their own underlying assumptions and beliefs. But, what if their beliefs are wrong? If the entire coherence of the evidence contradicts a firmly-held belief or habit of thought, then as scientists and truth-seekers, we must part with those beliefs and habits.

Riemann put forward exactly this, {opposite} sense of “explanation” in a posthumous fragment on scientific method:

“If an event occurs, which is necessary or probable according to the given system of concepts, then that system is thereby confirmed; and it is on the basis of this confirmation through experience, that we base our confidence in those concepts.

“But if something unexpected occurs, being impossible or improbable according to the given system of concepts, then the task arises, to enlarge the system, or, where necessary, to transform it, in such a way that the observed event ceases to be impossible or improbable according to the enlarged or improved system of concepts. The extension or improvement of the conceptual system constitutes the {‘explanation’} of the unexpected event. Through this process, our understanding of Nature gradually becomes more comprehensive and more true, while at the same time reaching ever deeper beneath the surface of the phenomena.”

Thus, “explanation” in Riemann’s sense means a successful {change} in fundamental concepts and assumptions, which have been overthrown by the generation of an event in the Universe, which is incompatible with the previously prevailing beliefs and assumptions. The question, which Riemann does not fully answer, but LaRouche does, is the nature of the {bounding principle} of that process of change.

These remarks bear crucially on the deeper side of the fallacy of Newtonianism. The epistemological equivalent of straight-line action and Cartesian-Newtonian space-time, is deductive reasoning. What we encounter is a strong resistance to the notion of an efficient {bounding} of events, which does not have the form of logical-deductive implication. The existence and form of such bounding principles is an {experimental} question; they cannot be derived from mathematics. Their existence is demonstrated historically, however, by the manner in which the Universe reacts to creative human Reason, by such changes as lead to harmonically-ordered increases in the relative potential population density of the human species. Thus, Nicolaus of Cusa and Kepler understood the ontological significance of spherical (and higher) curvature, as a lawful expression of the principle of perfection of human Reason.

The Curvature of Rectangular Numbers, Part I

by Jonathan Tennenbaum

Our pedagogical discussions concerning the problem of “incommensurability” in Euclidean geometry demonstrated, among other things, that the shift from linear to plane, or from plane to solid geometry cannot be made without introducing new principles of measure, not reducible to those of the lower domain. Thus, the relationship of the diagonal to the side of a square can only be constructed in plane geometry, and is inaccessible — except in the sense of mere approximations — to the mode of measurement characteristic of the simple linear domain (i.e., that embodied in “Euclid’s algorithm”). In the following discussion, we propose to explore that change from a somewhat different standpoint.

I choose, as a point of departure for this exploration, the issues posed by any attempt to compare the areas of various plane figures. The famous problem of “squaring the circle” falls under this domain. But I propose, before looking at that, to start with something much simpler. For example: How can we compare the areas of arbitrary polygons, by geometrical construction? Or, to start with, take the seemingly very simple case of rectangles. Let’s forget what we were taught — but do not know! — namely the proposition that the area of a rectangle is equal to the product of the sides. (Actually, even if the assumptions of Euclidean geometry were perfectly true, the proposition in that form is either false or highly misleading: an AREA is a different species of magnitude, distinct from all linear magnitudes.) In the interest of making discoveries of principle, let us resolve to use nothing but geometrical construction.

Experimenting and reflecting on this problem, the insightful reader might come to the conviction, that the problem of the relationships of area among rectangles of different shapes and sizes, pivots on the following special case: Given an arbitrary rectangle, how to construct “many” other rectangles having the equivalent area. Or perhaps even to characterize the entire manifold of rectangles of area equivalent to the given one.

The first line of attack, which might occur to us, were to find a way to cut up the given rectangle into parts, and rearrange them somehow to form other rectangles. Should we admit any limitation to the shapes and numbers of the parts? To avoid a bewildering bad infinity of options, let us focus first on what would appear to be the “minimum” hypothesis, namely to divide the given rectangle into congruent squares (i.e., squares of equal size). A bit of reflection shows us, that such a division is only possible for the special case, that the sides of the given rectangle are linearly commensurable (i.e., are multiples of a common unit of length). So, for example, if the sides of the given rectangle are 3 and 4 units long, respectively, then by cutting the rectangle lengthwise and crosswise in accordance with divisions of the sides into 3 and 4 congruent lengths, respectively, we obtain a neatly packed array of 12 congruent squares. We discover, that it is possible to rearrange those squares to obtain five other rectangles: 4 by 3 (instead of 3 by 4), 2 by 6, 6 by 2, 1 by 12, and 12 by 1 (i.e., six in all counting the original one, or three if we ignore the order of the sides).

Experiment further. If we start, for example, with a square, and divide the sides into five congruent segments, we obtain 25 congruent squares. The “harvest” of rectangular rearrangements is disappointingly small: all we find is the long, skinny 1 by 25!

Carrying out such simple experiments, the attentive reader might detect a number of potential pathways of further inquiry. One of these would be to ask, for a given total number of congruent squares, how many different rectangles can be formed as arrangements of exactly that number of squares? We can then organize the numbers into species or classes, according to the resulting number of rectangular arrangements (or “rectangular numbers” as the Greek geometers called them). The class of numbers for which only one rectangular arrangement is possible (disregarding the order of the sides) is known as the “prime numbers.” After these, we have a class of numbers with exactly two rectangular arrangements, such as 6, 10, 14, 15, 21, etc. (The otherwise mind-destroying game of “Scrabble” might be put to good use, by employing the wood squares for experiments.)

For the present purposes, however, we would like to construct as many different rectangles as possible out of the original one. We note, that the number of rectangles generated from any given division of the rectangle is very narrowly bounded, and certainly does not include all geometrically constructible ones. How to obtain more? If we stick to the method of division into squares, the only option is to increase the number of divisions. So, for example, we can bisect the unit length in our 3 by 4 rectangle, obtaining a division into 6 times 8, or 48 squares. This raises the total number of rectangles obtained by rearrangement to 10 (5 not counting the order of the sides). By repeated such subdivisions, we might hope to increase the density of population of rectangles so generated, whose areas are all equivalent to the area of the original rectangle. It might be interesting to see how the population grows, as we add new divisions.
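
For readers who wish to check such counts quickly, here is a small Python sketch, an aid to the experiments rather than a replacement for them. It rests on nothing beyond the observation that each ordered rectangle built from n congruent squares is a pair (d, n/d) with d a divisor of n.

    # Count the rectangles obtainable by rearranging n congruent squares.

    def rectangles(n):
        """All ordered (length, width) arrangements of n unit squares."""
        return [(d, n // d) for d in range(1, n + 1) if n % d == 0]

    for n in (12, 25, 48, 7):
        r = rectangles(n)
        print(f"{n:2d} squares: {len(r)} ordered, "
              f"{(len(r) + 1) // 2} unordered: {r}")

    # 12 gives 6 ordered (3 unordered) and 48 gives 10 (5), as above;
    # 25 gives only 5 x 5 and the skinny 1 x 25; a prime such as 7
    # gives a single unordered arrangement -- the "prime number" class.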

But, should we be satisfied with this approach? Aren’t we plunging into a “bad infinity” of particulars? Is there no way to obtain an overview of the whole domain? And remember, our geometrical domain is not limited to linear commensurability of sides. Indeed, a bit of reflection suggests, that for EVERY given segment, there must exist a rectangle, whose area is equivalent to the given one, and one of whose sides is that segment. How might we construct such a rectangle?

For a glimpse at a higher bounding of our problem, try the following construction: Take the rectangles constructed from any given rectangle by divisions into squares and rearrangement, as above, and superimpose them by bringing the lower left-hand corners into coincidence and aligning the sides along the vertical and horizontal directions. What do you see?

The Curvature of Rectangular Numbers, Part II

The general task, posed in last week’s discussion, was to generate the manifold of rectangles whose areas are equivalent to a given rectangle. The initial tactic chosen, was to divide the given rectangle into an array of congruent squares, and rearrange them into rectangles of different dimensions, but equivalent area. It became clear, that this tactic only yields a discrete “population” of rectangles (“rectangular numbers”), whose number depends on some characteristic of the number of divisions chosen. On the other hand, if we arrange the resulting rectangles in such a way, that their lower left-hand corners coincide, and their sides are lined up along the horizontal and vertical axes, then a hidden harmony springs into view: the upper right-hand corners of the rectangles, so arranged, appear to describe a HYPERBOLA, or at least a hyperbola-like curve. The idea suggests itself, that the discreteness of dividing and rearranging parts to form individual rectangles, is bounded from the outside by a higher continuity (ordering), whose presence reveals itself in the hyperbolic “envelope” of the rectangles.
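
A few lines, continuing the divisor bookkeeping of Part I, confirm the observation numerically: superimposed as described, the upper right-hand corners are the points (d, n/d) for the divisors d of n, and every one of them yields the same product of coordinates. They lie, exactly, on the single hyperbola whose equation a Cartesian would write as x times y = n.

    # Superimpose the rearrangements of n = 48 squares: lower left-hand
    # corners at the origin, sides along the axes.  Every upper
    # right-hand corner then has the same coordinate product, n.

    n = 48
    for d in (1, 2, 3, 4, 6, 8, 12, 16, 24, 48):   # the divisors of 48
        print(f"corner ({d:2d}, {n / d:5.2f}): product {d * (n / d):g}")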

To proceed further, let us change our tactic, concentrating on the idea, that there must exist a PROCESS of TRANSFORMATION which generates the entire manifold of equivalent-area rectangles and hyperbolic “envelope” at the same time. We might adopt the attitude, that any pair of rectangles of equivalent area expresses a kind of INTERVAL within the implied “hyperbolic” ordering of the whole.

With this in mind, start with any given rectangle, and consider the following approach. If we triple the length of the rectangle, keeping the width the same, then we obtain a rectangle whose area is clearly equivalent to three times that of the original one. If we then reduce the width of the new rectangle to one-third of its original value, while keeping the length unchanged, then the area of the resulting rectangle (with three times the length, but one-third the width, of the original) will clearly be equivalent to the original rectangle’s area. In fact, we might verify that equivalence in the former, discrete manner, namely by dividing the original rectangle lengthwise into three congruent rectangles, and then rearranging them to obtain the new one. In the same way, we could quadruple the length of the original rectangle and reduce its width to one-fourth, and so on. Obviously, nothing prevents us from applying the same procedure with ANY factor (i.e. not only 3 or 4), or from reversing the roles of “length” and “width” in this procedure.

At this point, something might occur to us, which allows us to “jump” the gap between the discreteness of our former procedure, and the underlying ordering of the problem. Up to now, we have considered as primary a process of multiplying or dividing lengths or widths by some integral number. But now we realize, that the crux of the matter, lies not in this duplicating or dividing up, but rather in the relationship of “INVERSION” between the transformation applied to the length and the transformation applied to the width. This suggests a new approach, which does not depend upon whole-number relationships at all.

Thus, take any rectangle with length A and width B. Now imagine A prolonged to ANY ARBITRARY LENGTH X. Those two lengths, A and X, define an interval. Evidently, what we must do, is to “invert” that interval with respect to B! In other words, construct a length Y, for which the interval (proportion) “Y to B” is (in relative terms) congruent to the interval “A to X”.

The required construction can be approached in many different ways. For example, generate a horizontal line, and erect a perpendicular line at some point P. Starting from P, lay off a vertical line segment PQ, whose length is equivalent to X, and determine a point R between P and Q, such that PR is equivalent to the length A. Next, choose an arbitrary point S, lying to the left of P on the horizontal line, and construct a vertical line segment ST whose length is equivalent to B. Now, generate a straight line through the points T and Q. Leaving aside the case, where that line happens to be parallel to the horizontal axis, the line through TQ will intersect the horizontal axis at some point O. Finally, generate a straight line through O and R. That straight line will intersect the vertical line ST at some point U. Reflect on the relationship formed, relative to “projection” from O, between the line segments on the two vertical lines from P and S. Evidently, the interval of PR to PQ (i.e. A to X) is congruent to the interval of SU to ST, the latter being equivalent to B. Thus, SU gives us the value Y for the required “inversion” of the transformation from A to X. In other words, the transformation of A to X, and the transformation from B to Y are inversions of each other, and the rectangle with sides X and Y will have an area equivalent to that of the rectangle with sides A and B.
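
The construction can also be traced numerically. The following Python sketch assumes one particular placement (P at the origin, the horizontal line as the x-axis, S at an arbitrarily chosen distance s to the left); it merely checks that the constructed Y satisfies "Y is to B as A is to X", i.e., that X times Y equals A times B.

    # Numerical trace of the "inversion" construction, with assumed
    # coordinates: P = (0, 0) on the x-axis, S = (-s, 0).

    def invert_interval(A, B, X, s=2.0):
        Q = (0.0, X)                  # PQ laid off equivalent to X
        R = (0.0, A)                  # PR equivalent to A
        T = (-s, B)                   # ST equivalent to B
        # Line T -> Q meets the horizontal axis at O (assumes X != B):
        t = -T[1] / (Q[1] - T[1])
        Ox = T[0] + t * (Q[0] - T[0])
        # Line O -> R meets the vertical line x = -s at U:
        u = (-s - Ox) / (R[0] - Ox)
        return u * R[1]               # the length SU, i.e. Y

    A, B, X = 3.0, 4.0, 5.0
    Y = invert_interval(A, B, X)
    print(Y, X * Y, A * B)            # 2.4  12.0  12.0 -- equal areas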

Consider the case, in which the value of X is changing, and observe the manner in which the positions of O and U vary in relation to X. The hyperbolic envelope is already implicit.

Those skillful in geometry will be able to devise essentially equivalent constructions, which make it possible to generate the hyperbolic envelope and the entire array of equi-area rectangles at the same time. Just to give a brief indication: Start with a rectangle, whose sides A and B lie on vertical and horizontal axes. Let O and M denote the lower left-hand and lower right-hand corner-points of the rectangle. Generate any ray from O, with variable angle, which intersects the upper horizontal side of the rectangle, at a point P. Prolonging the right vertical side of the rectangle upward, the same ray will intersect that vertical line at some point, Q. Now draw the vertical line at P and the horizontal line at Q. Those two lines intersect at a point R. Now examine the relationship of the rectangle with upper right-hand corner R and lower left-hand corner O, to the original rectangle. Examine the motion of R as a function of the angle of the ray from O.
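
Here, in the same hedged spirit, is a brief numerical trace of that second construction, with O placed at the origin and M at (A, 0); as the angle of the ray varies, the point R sweeps out the curve of constant product, the hyperbolic envelope.

    from math import radians, tan

    # Ray from O = (0, 0) across the rectangle of length A, height B.

    A, B = 4.0, 3.0
    for angle in (40, 50, 60, 70, 80):    # degrees, steep enough to cross the top
        m = tan(radians(angle))           # slope of the ray from O
        P = (B / m, B)                    # where the ray meets the top side y = B
        Q = (A, m * A)                    # where it meets the prolonged side x = A
        R = (P[0], Q[1])                  # vertical through P, horizontal through Q
        print(f"{angle} deg: R = ({R[0]:.3f}, {R[1]:.3f}), "
              f"product {R[0] * R[1]:.2f}")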

For those who feel the compulsion to scribble algebraic equations, now is the time to kick the habit! The whole point here is to think GEOMETRICALLY. The notion of “geometrical interval” supersedes that of discrete arithmetic relationship…

Science and Life: The Importance of Keeping People in a Healthy, Unbalanced State

by Jonathan Tennenbaum

The following three-part series is ostensibly devoted to some crucial paradoxes raised by the discovery of the so-called “mitogenetic” or “biophoton” radiation of living organisms, by the great Russian biologist Alexander Gurwitsch. At the same time, I hope to provoke reflection on one of the unique and so far irreplaceable functions, which Lyndon LaRouche has performed in the life of our organization. Attention to this point may be the most efficient pathway towards understanding what is really at stake, in the issue of “nonlinearity in the small.”

Part 1: Parmenides revisited

Two leading biologists, Dr. Lebensfroh and Professor Todtkopf, were recently overheard arguing about the nature of living processes. Although the two have opposite opinions, they share a common, underlying error of axiomatic assumption, which is pervasive among even the best scientific professionals today. What is the fundamental error? Here is the dialogue.

TODTKOPF: So, you keep up with this “vitalist” obsession of yours, that there is something unique about living processes. How can you reject the fundamental accomplishment of modern biology?

LEBENSFROH: What you call “biology” has long since degenerated into blatant reductionism and mechanicism, losing sight of the real objective, which is “life.” To me, biology should be defined as the study of exactly {those} aspects of living processes, which distinguish them {absolutely} from non-living processes.

TODTKOPF: I say there are no such differences. A living organism is nothing but a very complex aggregate of molecules, interacting and combining with each other according to the known laws of chemistry and physics. Everyone knows that biology today is just a specialized branch of physical chemistry. The triumph of molecular biology is a great victory of science over naive superstition and metaphysics. For centuries unscientific people clung to the romantic idea, that some sort of “life force” or “living fluid” inhabits the tissue of animals and plants and lends them their “living” quality. But nobody ever found this living force. So it was a great breakthrough, when chemists demonstrated that living organisms are composed of exactly the same atomic elements and particles that we find in the inanimate world, in the atmosphere, in rocks and so forth. Looking for anything more is like grasping for ghosts in thin air. But fanatics continue to defend the notion of a “life force” up to this very day. I remember the uproar which was created when Justus Liebig published his book on “Chemistry and its Applications to Agriculture and Physiology” in 1840, showing that living tissue is composed nearly entirely of the simple elements hydrogen, oxygen, carbon and nitrogen, and that plants can grow on inorganic material alone. Liebig’s proposal to introduce mineral and chemical fertilizers into agriculture met fanatical resistance, even among scientists, who insisted that the nutrition of plants must involve organic material in some essential way. Even today, there is a big market for food grown with “organic fertilizers”, and many people believe that plants grown with mineral fertilizers are somehow different and even poisonous to the health. But these ideas have been refuted long since. There is no special material in living organisms, the atoms are exactly the same there as in this dirty piece of rock.

LEBENSFROH: But in living tissue the atoms are organized and transformed into complex organic molecules, like proteins and DNA for example, which are not found in the inorganic world. Only living processes do that.

TODTKOPF: People like you didn’t want to believe it, when the great chemist Friedrich Woehler succeeded in {artificially} synthesizing the organic substance {urea} from oxygen, hydrogen, carbon and nitrogen in the laboratory. That was 1828. Until then, many biologists and chemists believed that living organisms had their own, fundamentally different chemistry, and that the most important molecules composing living tissue could never be produced outside living tissue. The famous chemist J.J. Berzelius even put forward the “vis vitalis” hypothesis, according to which the characteristic difference between living and nonliving systems lay exactly in the former’s supposedly unique powers of chemical synthesis. This idea was the original basis for the division between “organic” and “inorganic” chemistry, which turned out to be just conventional and not fundamental. After Woehler, countless other organic molecules were synthesized, and today, we can make amino acids, small proteins (peptides), and pieces of DNA in the laboratory with no trouble. So, there is no special chemistry and no magical synthetic powers of living organisms.

LEBENSFROH: Aren’t you cheating with that argument? You left out the fact, that living human beings — chemists — carried out those laboratory syntheses. So they are still products of living processes, even if the reactions that produce them occur in a test tube. The organic molecules would never arise by themselves, without human intervention.

TODTKOPF: Not true. Researchers have demonstrated in laboratory experiments, that amino acids — the building-blocks of proteins — can be generated by electric discharges in a gas similar to the Earth’s original atmosphere. The Nobel Prize-winning chemist Manfred Eigen has shown, that in a “soup” of chemicals, more and more complex molecules can evolve from simpler ones in a purely spontaneous manner, through a kind of natural selection process among competing chemical reaction-cycles. Given enough time, I am sure all the complex biomolecules would eventually arise in such a self-organizing “chemical soup”. Eigen proposes that the first primitive living organisms actually evolved in this way, and I believe him. It was a gradual process, and there was never a definite point when you suddenly had “life,” where before there had been just a lot of reactions.

LEBENSFROH: You mean to say, that if your mother had only been 5% pregnant and if you were 95% dead, you would still be speaking to me now?

TODTKOPF: Sometimes I feel that way.

LEBENSFROH: But, seriously, you cannot deny that living organisms behave completely differently from non-living matter?!

TODTKOPF: This is just a matter of degree of complexity. Naturally, the more complex a system becomes, the more circus tricks it can perform. But in principle, every chemical process going on in a living organism could be carried out just as well in a test tube. We are already doing DNA synthesis and other sorts of enzymatic reactions that way. It’s just when you put all those molecules and reaction processes together, that you get the effect of life.

LEBENSFROH: What about {growth}? Only living processes grow in a self-similar, exponential way. Whatever you say about the origin of living processes, the {power of growth} distinguishes them absolutely from non-living matter.

TODTKOPF: Really? Crystals can grow too, can’t they? Haven’t you watched how sugar or salt crystals grow in a water solution? Would you say those growing crystals are alive?

LEBENSFROH: No, no, wait a minute. Uh, crystals don’t grow in an exponential way, but actually more like an arithmetic or rather cubic series, as additional layers are added on, surface by surface.

TODTKOPF: And what do you say about a {chemical chain reaction}, as we find in the detonation processes in various explosives? Furthermore, in the 1920s the Russian chemist Semionov discovered the phenomenon of “branched chain reactions”, in which a population of enzymatic molecules grows exponentially, by catalyzing the synthesis of identical molecules in a mixture of reactants. These “autocatalytic” processes display exactly the same growth-curve characteristics, as cultures of bacteria and other living organisms.

LEBENSFROH: But this only works until the mixture of reactants is used up. After that the process stops, doesn’t it?

TODTKOPF: Don’t living organisms also stop, when their source of nutrition is exhausted? After all, living organisms, like bacteria, never actually grow exponentially. Their growth curve is always an “S-curve”, as growth slows when the bacterial population has reached a maximum density, where the available sources of nutrition become marginal and the culture reaches an equilibrium or stationary state. And such S-curves are typical of thousands of autocatalytic chemical reactions, which we can make in a laboratory. So in terms of the growth curve, you can’t tell the difference between the growth of various chemical species in an auto-catalytic, branched chain reaction, and a population of bacteria which grows in the same way.
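
[Todtkopf’s S-curve can be exhibited in a few lines. The sketch below integrates the standard logistic equation dN/dt = rN(1 - N/K), which fits both cases he names; the rate r and ceiling K are illustrative values only, not data from any experiment.]

    # Logistic growth by simple Euler steps: exponential at first,
    # then saturating as N approaches the ceiling K.

    r, K = 1.0, 1000.0                # growth rate; nutrient/reactant ceiling
    N, dt = 1.0, 0.1                  # initial population; time step
    for step in range(121):
        if step % 20 == 0:
            print(f"t = {step * dt:4.1f}   N = {N:7.1f}")
        N += dt * r * N * (1 - N / K)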

LEBENSFROH: But what about the population of the human species? The human population has grown exponentially over history.

TODTKOPF: I don’t think that can continue indefinitely. After all, resources are limited. But even if human multiplication could continue without limit, you mean to say that only human beings are living organisms, and animals and plants are not?

LEBENSFROH: No. The growth of the human population, and its impact on the biosphere in terms of a multiplication of domesticated plant and animal species, demonstrates that the {totality} of living material on the Earth, taken as a whole — what Vernadsky called the biosphere, including human beings — the biosphere has the potential for {unlimited growth} in the Universe. Actually, this was the directionality of evolution even before human culture emerged. So we can say, that living organisms are uniquely characterized by the {potential} for exponential growth, as part of the growing biosphere.

TODTKOPF: Well then, from your rather involuted argument you will have to recognize countless {inorganic} processes on the Earth as “living”, if they are connected with the growth of the biosphere in any way, won’t you? After all, the combustion of oil, or the production of steel, has increased exponentially with the expansion of human population and its economy. So would you include combustion or steel production as living processes?

LEBENSFROH: Of course not. You are just twisting my argument into nonsensical shape.

TODTKOPF: Then where do the inorganic processes leave off, and the “living” process begin? You claim there is a categorical, absolute distinction between the two. Would you say that the oxidation of glucose in cells is a living or nonliving process? It’s really just a form of combustion, isn’t it, burning sugar for energy.

LEBENSFROH: This is just a trick of yours, to rip an individual chemical process out of the organic context of the living process of which it is a part. In fact, the unique characteristic of living organisms is their indivisible unity or “wholeness”, which means that all processes going on in an organism are interconnected and subordinated to a single overall principle, and that all react together as a whole — rather than an assembly of parts — to every change in the organism’s environment. No mere mechanical or other non-living physical system has such characteristics.

TODTKOPF: Wrong again. Modern quantum physics has gone far ahead of you, and identified what are called “macroscopic coherent states” in {non-living matter} — states you would be forced to admit have every bit of that quality of “one-ness” you ascribe to living organisms. Even the wave-front of a light wave displays this quality, as Fresnel already demonstrated in his analysis of the diffraction of a light-beam at a sharp edge: When part of a light-wave encounters an obstacle, the entire wave-front “reacts”, and the direction of propagation is changed. Within scale-lengths of the order of a single wavelength of light, the light wave behaves as an indivisible whole. Beginning in the early 1920s, entirely analogous characteristics were demonstrated for beams of electrons. Modern quantum physics teaches us, that even a single electron involves a process distributed over a large region of space, and which “feels” all the events occurring within that space. Furthermore, we today have countless experimental proofs, that there is no such thing as a truly isolated, independent particle, atom or molecule. Rather, in a certain sense each particle in the Universe “knows” and reacts to what is happening with every other one, without having to be informed by any sort of signal! Our lasers, superconductors and even the semiconductor devices which are the basis of today’s computers and communications systems, are all based on that principle. In such devices, huge numbers of atoms behave as if they constituted a single coherent entity. The fact that we can demonstrate this sort of “holistic” behavior in so-called nonliving systems, has been a major breakthrough, demystifying the characteristics of living organisms and demonstrating, once again, that there is no categorical distinction between living and non-living processes.

LEBENSFROH: You are bluffing. You are ignoring the crucial property of living organisms, which is their ability to {reproduce themselves}, based in the unique process of mitosis or cell division. No reductionist or mechanicist theory could possibly describe such a self-reproducing process.

TODTKOPF: Evidently you are not familiar with the work of John von Neumann on {self-reproducing machines}. Although such machines have not actually been built yet, von Neumann proved their feasibility in principle long ago, and he even worked out how such machines would have to be programmed. Essentially, a self-reproducing machine would consist of a complex of computer-controlled, automated industrial processing-units and robots, all directed by a central computer. The robots gather raw materials from the surrounding area and feed them into the industrial process-units, which in turn produce materials and parts to match those from which the central computer, robots and industrial process-units themselves were constructed. As the final step, the central computer directs the assembly of those parts into a second copy of itself and its robots and industrial processing units. Obviously, such a machine would have to be extremely complex, and indeed, this is the fundamental point that John von Neumann and others have stressed — that there is a lower limit to the necessary, minimum complexity of a self-reproducing machine. This explains why qualitatively new types of phenomena occur when systems become as complex as cells. So a living cell is just an extremely complex kind of self-reproducing machine, with a bit of holism thrown in, if you want.

LEBENSFROH: You mean to say, you are a von Neumann clone.

TODTKOPF: No doubt about it. That’s where all of us modern biologists come from.

What is the fundamental error in this whole discussion? What do Dr. Lebensfroh and his unfortunate colleague have to learn from Gauss’ Determination of the Orbit of Ceres?

Science and Life: The Importance of Keeping People in a Healthy, Unbalanced State

by Jonathan Tennenbaum

Part II

Lebensfroh felt frustrated and a bit depressed after his encounter with Prof. Todtkopf last week. He was sure he had been right, and Todtkopf wrong, when he insisted that living processes could not be reduced to the same physics as nonliving processes. But in spite of this, Todtkopf seemed to have come out ahead in the debate. Todtkopf’s arguments reminded him of prosecutors who can “prove” or “disprove” anything, by a selective arrangement of supposedly unassailable, “hard facts.” Lebensfroh had tried to defend life, and lost his case. It wasn’t any particular argument, but the <whole debate> that had somehow missed the point. Lebensfroh felt embarrassed, like someone who had lost his wallet to a pickpocket.

Returning home, Lebensfroh sank deep into his armchair. He went through the discussion with Todtkopf again in his mind. Where was the mistake? Lebensfroh had presented a series of properties A, B, C, D …, each one of which he considered to be a unique and exclusive property of living processes: the synthesis of complex organic molecules, exponential growth, self-replication, “wholeness,” and so forth. One after the other, Todtkopf returned the argument, by presenting examples of nonliving processes which seemed to have the same properties, and maybe even all of the properties Lebensfroh had come up with. Lebensfroh was dismayed. What he thought he had understood very well before the argument started — namely the unique nature of life — now seemed to have evaporated into something intangible and elusive, even in his own mind.

Suddenly he had a new thought. He recognized it came from something he had read long ago by Cardinal Nicolaus of Cusa, concerning the nature of the circle. The thought was: If someone would specify any set of points A, B, C, D … on the circumference of a circle, would that determine the circle as the curve passing through those points? Well, obviously not, someone else could just connect the points by <straight lines>, getting a polygon, which is not the same as the circle. No number of points, so supplied, could ever suffice to distinguish the circle from a mere polygon. What, then, is the characteristic distinction of the circle?
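
[Cusa’s observation admits a simple numerical gloss: however many points one specifies on a unit circle, the inscribed polygon through them stands off from the circle by the sagitta of each chord, 1 - cos(pi/n), which shrinks with n but never vanishes. A sketch in Python:]

    from math import cos, pi

    # n points on a unit circle are also vertices of an inscribed n-gon;
    # the widest gap between each chord and its arc never reaches zero.

    for n in (4, 12, 100, 10_000):
        print(f"n = {n:6d}: gap = {1 - cos(pi / n):.3e}")

[No finite set of points, of “facts,” distinguishes the circle from a polygon drawn through the very same points.]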

Lebensfroh’s gloom and frustration disappeared, like the popping of a bubble. In his mind’s eye, Lebensfroh caught a glimpse of the old cardinal’s face, smiling at him. Lebensfroh smiled, too. “Thanks, Nick,” he heard himself say.

The next day Lebensfroh met Prof. Todtkopf again.

Todtkopf: Well, I hope you have given up your silly idea after our last conversation.

Lebensfroh: Indeed. I will never again lose sight of the false, lying nature of so-called “scientific facts.”

Todtkopf (shocked): What do you mean!? Facts never lie. Facts are the very foundation and essence of truth.

Lebensfroh: Wrong. I say, truth lies entirely outside, above and apart from mere “facts”; and no single fact, nor any collection of facts, however comprehensive, could ever represent truth. Only ideas, not facts, can represent truth.

Todtkopf: Are you crazy?

Lebensfroh: I will show you. See how I draw this circle, and now I mark points A, B, C, D, etc. on it, which represent what you call “facts”….

Todtkopf: Don’t talk to me about geometry. I am a biologist. I don’t go there!

Lebensfroh: The problem is, the conception I want you to understand, cannot be communicated without a certain type of metaphor …

Todtkopf: I am a scientist, not a poet.

Lebensfroh: Well I tell you it is <absolutely impossible> to grasp what a living process is, without metaphor. Because there is an ordering of ideas in science, and the conception of “living process” is of a strictly <higher type> than any conception which can be communicated in a linear way. This would be obvious to you if you had worked through Gauss’ determination of the orbit of Ceres, for example. The nature of living processes, and the absolute, “strong” gap separating them from all non-living processes, lies in the characteristics of <change> manifested in the virtually infinitesimally small.

Todtkopf: I have no idea what you mean.

Lebensfroh: Well, I see we’ll have to approach this through an example. I have it! Let’s look at a unique case, which poses the relevant paradoxes in the strongest form: a physical economy, which is a very special sort of living process.

Todtkopf: What do you mean by “physical economy”? I remember reading something about that.

Lebensfroh: I mean the physical process by which a human population reproduces the material conditions for its continuing existence, at ever higher levels of potential population density.

Todtkopf: So it’s more than just the living population and its immediate activity, but also the physical processes in mining and industry, which deal with inorganic materials, as well as things like farming?

Lebensfroh: Of course. Physical economy subsumes the processes of agricultural, mining and industrial production; distribution and consumption of goods; housing, education, and health services; cultural activities, scientific research, administrative and related activity and so on — everything necessary for the maintenance and development of human society from one generation to the next. In a sense, all these things form the tissue and organs of the physical economy as a coherent living entity.

Todtkopf: Ha! Now I have caught you in a contradiction! Just a moment ago you restated your old thesis, that there is a categorical distinction between living and nonliving processes, true?

Lebensfroh: True.

Todtkopf: And according to that you would distinguish between living and nonliving matter, wouldn’t you?

Lebensfroh: Yes.

Todtkopf: Then tell me this. A piece of rock sitting somewhere in a mountain, is that living or nonliving material?

Lebensfroh: Nonliving, of course.

Todtkopf: And when that same rock is mined, and the ore is transported to a factory, and metal is produced, and that metal is worked up into parts, and the parts assembled into a machine, and that machine is integrated into the production process — would you not say, that the material of the rock has become part of the physical economy?

Lebensfroh: Yes.

Todtkopf: So then, if the physical economy is a living process, then the material which constitutes it must be living, must it not?

Lebensfroh (hesitating): Well, I guess so.

Todtkopf: Then one and the same material is both living and nonliving, or else you will have to tell me at what point the rock, or ore, or metal, or machine, became “living,” in your sense!

[Lebensfroh realized he was about to fall into the same trap, as he had done in his earlier debate with Todtkopf. Focussing on his happy idea about the circle, he recovered quickly and continued.]

Lebensfroh: Exactly. That is just the point. We are dealing with a multiply-connected manifold.

Todtkopf: There you go again with your mathematics! Tell me plainly now: do you or do you not regard the machines in a factory as being <living>, in virtue of their being integrated as parts of the “tissue” of the physical economy, which you call a living process?

Lebensfroh: In a sense, absolutely, yes. But the “living” aspect of these things does not lie in the things themselves as isolated entities, but in the characteristics of the process of <change> in which they actively participate. And the chief characteristic of change, which defines a physical economy as <living> (as opposed to pathological, dying states of an economy), is <scientific and technological progress>. That progress takes the form of an incessant series of “pulses” or “shocks” of <change in the organization of production> — shocks which originate in fundamental scientific discoveries of principle, and propagate, like waves, throughout the tissue of the economy. Those pulses or shocks reflect the action of a higher geometry — one characterized by human creative reason — upon the ensemble of lower geometries composing the tissue of the physical economy.

Todtkopf: You mean to say, without those pulses, the tissue of the economy would degenerate and the economy would “die”?

Lebensfroh: Exactly. And I am sure that something analogous must occur in living processes generally, and on another level, in the creative processes of the mind itself. The great biologist Alexander Gurwitsch had some appreciation of this.

Todtkopf: What you say is amazing.

Lebensfroh: Not really. Imagine how stupid you would be right now, if Nicolaus of Cusa had not helped me get your mind moving.

Science and Life: The Importance of Keeping People in a Healthy, Unbalanced State

by Jonathan Tennenbaum

Part III

At the end of last week’s discussion, Prof. Todtkopf was amazed and a bit overwhelmed by the conception Lebensfroh came up with, that physical economy might provide the key to understanding living processes in general. But later, as he thought back on the conversation, his admiration turned to suspicion, then irritation, and finally rage. The more he thought about it, the more ridiculous it seemed to him to mix up economics and biology as Lebensfroh had done, comparing an economy to a living cell, for example. Todtkopf’s teachers had taught him to beware of sweeping analogies, which might excite our fantasy, but undermine the objectivity that is essential to professional scientific work. Todtkopf saw himself admonishing an audience of his colleagues: “In science the first step is to {define your terms}; and once you have done that, you have to stick to the definitions. If you start to play with metaphors and analogies, as Lebensfroh loves to do, then you can make anything into anything, as if you would say: the solar system is a living process, the galaxy is a living process, an atom is a living process, EVERYTHING is a living process?!! Then we would all feel happy, like Dr. Lebensfroh. Absurd! By throwing words around like that we accomplish nothing of any substance.”

Lebensfroh has to be cut down to size, thought Todtkopf. He should stop acting as if he were superior to us empiricists, just because he has a creative mind. I’ll give him a lesson on what science is all about. He started lecturing again:

“Science is based on empirical fact. That means observing and investigating the real objects in the world around us. To be able to arrange the facts, and to correlate facts in order to adduce general laws, you need to establish a division of the sciences. The sciences are divided according to the different kinds of objects you study. So, biology studies the living organisms which are divided into plant and animal. To determine what a living process is, you start concretely, by studying this specific plant, that specific animal. Nothing to do with economics or anything like that. You keep studying those plants and animals and then you correlate your observations and measurements and draw general conclusions. So, by painstaking investigations, molecular biologists discovered the common molecular basis of living organisms — the amino acids and proteins, the genetic code and so forth. Step by step, we unravelled the mechanisms and we discovered that in each case we examine carefully, we find everything occurs according to the known laws of physics and chemistry — laws verified in hundreds of thousands of laboratory experiments. At least, no one in academia dares refute us. The wispy dreams of the vitalists, have given way to piles of hard facts. This is the triumph of science, the triumph of Aristotle, the first biologist and systems analyst!

“So don’t ever forget, Lebensfroh: We empiricists are the ones who do the real work. We know what functions and what doesn’t function in the real world. Don’t stand there and try to tell us how we should do things!” Professor Todtkopf was so preoccupied, that he emptied his coffee cup onto his trousers.

The next day Todtkopf sought out Dr. Lebensfroh.

Todtkopf: Our conversation last week was fun, Dr. Lebensfroh. But speaking as a professional scientist, I must say, it was a waste of time.

Lebensfroh (taken aback): Why is that?

Todtkopf: You presented not a single solid scientific fact, but only wild, irrelevant analogies to economics and so forth. I was taken in for a moment, but now no more.

Lebensfroh: Oh, oh, I see you have decayed into your lower state!

Todtkopf: Lower state? Decayed?

Lebensfroh: Well you know, according to modern physics we find that atoms and molecules can exist in different modes or states, which form a discrete series or spectrum that is characteristic of the species of atom or molecule involved.

Todtkopf: Every chemistry student knows that.

Lebensfroh: In the so-called ground states or lower-energy states, atoms and molecules are typically inactive and inert. But if we irradiate them with photons of the right wavelength, for example, we can raise them into a higher-energy, excited state. They become highly reactive, they begin to emit radiation, they are more lively and interesting in every way. We can get lasing and all sorts of wonderful things to happen.

Todtkopf: And?

Lebensfroh: But if they are left to themselves, and taken out of the special environment we have created, the atoms and molecules tend to decay back to their lower-energy states, and become lazy and boring again. So it is with PEOPLE, too.
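[An editorial aside, for the reader who wants a number: the energy delivered by a single photon is E = hc/wavelength. The minimal Python sketch below computes it for an assumed transition wavelength of 500 nanometers; that wavelength is purely illustrative, not the transition of any particular atom.]

    # Photon energy E = h*c / wavelength; 500 nm is an assumed,
    # illustrative transition wavelength, not a measured one.
    h = 6.626e-34                 # Planck's constant, joule-seconds
    c = 2.998e8                   # speed of light, meters/second
    wavelength = 500e-9           # 500 nanometers, in meters
    E_joules = h * c / wavelength
    E_eV = E_joules / 1.602e-19   # convert joules to electron-volts
    print(f"{E_joules:.3e} J = {E_eV:.2f} eV")   # about 2.48 eV
    # Energies of this order, delivered one photon at a time, suffice
    # to lift an atom from its ground state into an excited state.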

Todtkopf: There you go again with your analogies and metaphors! What does that have to do with me?

Lebensfroh: Because last week at the end of our discussion I had pulled you up to an excited state for a while and now you seem to have slipped back down. The difference is elementary and very easy to observe, when one knows what to look for. People in higher (creative) states of mind think of the Universe in terms of {change}, while in your lower state, you think of it in terms of arrangements of objects.

Todtkopf: What difference does that make? Thinking is thinking.

Lebensfroh: Not so. If you were to stay in your present state, you would be incapable of making any fundamental discovery.

Todtkopf: How do you know? I can look through a microscope as well as you!

Lebensfroh: Maybe even better than me, but you won’t {discover} anything. Because a fundamental discovery is not the discovery of some property of an object, but a {change} in the characteristics of our own mental processes, a change in the way we {think} about the Universe as a whole. It occurs entirely inside the mind. And that is the beginning of actually changing the Universe itself. But it can’t happen if your mind is in the deadened state, typified by a fixation on objects or object-like images.

Todtkopf: Challenge me. I will show you you’re wrong.

Lebensfroh: Fine. The other day you asserted molecular biology had for the first time identified the chemical basis for living processes?

Todtkopf: Yes, of course.

Lebensfroh: Then tell me, what is the difference between a living cell, and the same cell immediately after it has died? The molecules stay the same. Even many reactions keep going for a while, as they might in the non-living environment of a test tube.

Todtkopf: Um…Uh… Well, eventually the normal processes stop and the cell disintegrates. You can see this in a microscope.

Lebensfroh: I am not asking what {eventually} happens, as a {result} of the event of the cell dying. I mean the event itself. What is it {precisely}, that has happened at that moment?

Todtkopf: Obviously, there was some divergence from normal functioning, and the cell did not recover.

Lebensfroh: {Why} didn’t it recover? As the Russian biologist Gurwitsch and others showed, sometimes living cells can recover from the grossest sorts of disturbances. So, for example, Gurwitsch centrifuged fertilized egg cells until the visible structures in the cell had been destroyed, and yet the cells reorganized themselves and developed into adult organisms. What is it that occurs, at the moment when a living process, which was viable before, loses that capability?

Todtkopf: Actually, I must admit I don’t know. Maybe there is no simple general answer. Of course there are millions of papers about aging of tissue and various damage mechanisms which can lead to the death of cells. But actually, I don’t recall anyone having posed exactly the question you are asking, in such a straightforward way.

Lebensfroh: Isn’t that a bit strange? After all, you were just claiming the molecular biologists had uncovered the molecular basis for the main processes which occur in living organisms. But as for such a central issue in biology as the one I have now raised, you haven’t even begun to address it. Doesn’t that suggest some problem with your thinking?

Todtkopf: I see what you mean. But maybe the answer is very complicated.

Lebensfroh: If you had studied how Gauss determined the orbit of Ceres, you would at least know how the question would have to be approached experimentally. What is the characteristic of the orbit of a comet, for example, which is headed for a collision with the Sun? What is the {change} in orbital {characteristics}, between a “healthy” orbit and an orbit which might differ at first only imperceptibly from the healthy one, but lead inexorably to the destruction of the comet?

Todtkopf: How can you compare the processes of a living organism with the orbit of a comet? Another of your wild analogies.

Lebensfroh: I am not comparing the two as objects. I am talking about how we have to {think} about two problems that share a common, crucial methodological feature.

Todtkopf: Well, it doesn’t help me to bring in the astronomical example. I saw that long article in Fidelio, but I didn’t work it through.

Lebensfroh: Why not?

Todtkopf: My friends all told me it is very difficult.

Lebensfroh: Why in the world, should it be regarded as an argument {against} doing something, to say it is difficult? If what Gauss accomplished were just trivial, so people could swallow it at one gulp, like a doggie cookie, then it wouldn’t be worth much, would it?

Todtkopf: I guess not.

Lebensfroh: And didn’t Gauss himself work on this for months, and other scientists spend years and decades or even lifetimes struggling to work through a crucial paradox and make a fundamental discovery of principle, coming back to it again and again from different angles until they had succeeded, for the benefit of mankind, in mastering it? Didn’t Beethoven oftentimes spend years developing a single composition?

Todtkopf: He did.

Lebensfroh: Then we should be happy when the essentials of a crucial discovery, and relevant materials, have been put together in such a way that we don’t have to waste time on non-essentials, but can get to the real issues directly. Because, truly, we live in a world where there is no time to waste. So we should concentrate on the difficult things, and brush trivial things aside.

Todtkopf: I agree. But can you at least tell me what Gauss’ work has to do with biology?

Lebensfroh: The oldest, classical problem in astronomy, is that when you observe the motion of the Sun or any planet in the sky, that motion actually results from many different motions, all acting within any arbitrarily small interval of the observed motion. So, the motion of Mars in the sky, for example, involves Mars’ own orbital motion, the rotation of the Earth, the orbital motion of the Earth with respect to the Sun, the precession of the equinoxes, and even still other, more subtle and partly even not-yet-discovered cycles. The subtler point is, none of these motions is strictly independent of the others, but each one reacts to the existence of the others.

Todtkopf: Then, how is it possible to disentangle them?

Lebensfroh: There is no formal mathematical solution. But there does exist a method of {experimental measurement} based on so-called analysis situs, which Kepler applied in a masterful way to his founding of modern astronomy. The crucial point is, that the principles or “dimensionalities” of action we are looking for are axiomatically distinct, linearly incommensurable principles; each has a different characteristic curvature in the infinitesimally small. Their mutual action generates dense singularities. Secondly, the ensemble of such principles must be harmonically ordered according to a still higher principle.

Todtkopf: How do you know that?

Lebensfroh: That is Kepler’s higher hypothesis, that our Universe is ordered in that sort of way. He demonstrated that the harmonic organization of motions of our solar system is uniquely coherent with that hypothesis, and in his snowflake paper he did the same thing for the microscopic domain, too — at least provisionally.

Todtkopf: I will have to believe you. But get to my question: what does this have to do with biology?

Lebensfroh: Very much, obviously. But in our discussion the particular issue keeps coming up, that the processes in living tissue are determined by more than one fundamental ordering principle. We have one set of principles — the one you associate with “ordinary physics and chemistry”, and which you and your colleagues observe operating also within living organisms, at least to a very great extent. However, in living tissue another, higher set of principles — a higher geometry, in effect — is superimposed upon those “inorganic” principles. In fact, we can even say, that the higher principle {rules} the lower one, even though the effect of the higher geometry might only appear as a virtually infinitesimal displacement from the pathway, that the process would have followed, had only the lower, inorganic principles been active. Nevertheless, the overall cumulative effect of that “infinitesimal deviation”, is enormous. This sort of situation is quite familiar from astronomy. There, the most powerful, “tectonic” forces are the ones connected with what appear at first as barely perceptible, infinitesimal deviations or anomalies within otherwise well-determined orbits.

Todtkopf: What you say seems strange to me. How can it be that a “strong” force appears as the most infinitesimal?

Lebensfroh: Here is another case, where a key point of method can hardly be communicated effectively, without geometry. But this time maybe you will offer more patience than last time I tried.

Todtkopf: I am definitely in an excited state.

Lebensfroh: Good. Now take this piece of paper, and observe how I roll it into a cylinder. No problem, eh?

Todtkopf: Very easy.

Lebensfroh: And now I roll it into a conical shape.

Todtkopf: Also no problem.

Lebensfroh: And many other shapes are possible, obviously. But what about giving the paper a spherical shape, or even part of a sphere? See, here I have a globe and I am trying to bend the paper onto its shape.

Todtkopf: I see, it doesn’t work. You get creases all over and it still doesn’t really fit.

Lebensfroh: And what would happen if I tried to make part of the surface of the globe into a flat surface?

Todtkopf: You would tear it, for sure, if it were made of some material like paper which doesn’t stretch.

Lebensfroh: Is that problem a matter of the size of the portions of surface I use?

Todtkopf: Evidently not.

Lebensfroh: So, then, the characteristic which causes these violent creases and tears — and I guess you will agree, these would be typical of “strong forces” — is manifested as a virtually {infinitesimal} difference at the level of a tiny section of the spherical surface vis-a-vis the flat surface. Of course, when I look at larger portions of the surfaces, the discrepancy in shape and characteristics becomes macroscopically evident.
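[An editorial note for the mathematically inclined reader: what Lebensfroh demonstrates here is Gauss’s {Theorema Egregium} in household form. The Gaussian curvature K of a surface is unchanged by any bending that does not stretch or tear the material. For the surfaces just handled:

    K_{\text{plane}} = K_{\text{cylinder}} = K_{\text{cone}} = 0,
    \qquad
    K_{\text{sphere}} = \frac{1}{R^{2}} > 0,

where R is the radius of the globe. Since no mere bending of the paper can carry K from 0 to 1/R^2, no patch of the sphere, however tiny, can be covered by flat paper without the creases and tears Todtkopf observed.]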

Todtkopf: OK, I get it. So you want to say, for example, that we should think about the higher principle acting in living tissue as a kind of “curvature” imposed on the otherwise relatively “flat” geometry of non-living physico-chemical processes.

Lebensfroh: Wonderful!

Todtkopf: So that, if we just examine a small, isolated aspect of the living process, the effect of that curvature might appear virtually infinitesimal. But, if your approach is correct, somewhere in there we must find extremely intense forces of tearing or wrenching between the geometries. Because they are axiomatically incompatible. What form would those “creases and tears” take?

Lebensfroh: That question obviously takes us beyond mathematics, into the domain of experimental biophysics. This is exactly the area of Alexander Gurwitsch’s fundamental work, which led him to the discovery of the so-called “mitogenetic radiation”, or constant photon emission from living tissue. This radiation is so extremely weak, many orders of magnitude weaker than the metabolic energy of the tissue itself — so weak that most scientists today regard it as an irrelevant, mere curiosity devoid of biological or biochemical significance. This is because they don’t understand the elementary point you just grasped. Alexander Gurwitsch and his followers developed an elaborate series of unique experiments based on the characteristics of this very weak radiation, and all directed at disentangling and measuring the higher principles of ordering of living processes.

Todtkopf: What did they discover?

Lebensfroh: Well, this was literally a life’s work, and worth more than five minutes’ discussion. But without my going into the experimental method, perhaps you might in conclusion like to hear how one of Gurwitsch’s students summarized some of the main {conclusions} of that work. Actually, the conclusions are {questions}: they lead into an entirely new domain of biology, which has barely been explored up to this day. Here is the quote:

“The conclusion was that the harmonic movements observed in a normal cell are due to a certain factor related to the cell as a whole and this factor is not destroyed or inactivated by the destruction of the visible intracellular structures or processes. Hence, {space-time connections between separate intracellular structures or processes are not due to any properties of the structures themselves}. A further conclusion was, that together with stable structures in which the molecules are bound by means of various types of chemical bonds, there are {unstable} molecular constellations in which the molecules are not connected with each other by any of such bonds, but where their association is maintained by a continuous influx of energy… Such labile, energy-dependent molecular constellations were designated by A.G. Gurwitsch as {‘unbalanced molecular constellations’}. … However, the continuous influx of metabolic energy is a {necessary} condition, but {not the only one} for the existence of unbalanced molecular constellations. Their existence is elicited by a certain dynamic factor, whose action, although somehow connected to a continuous utilization of metabolic energy, is quite independent.”

Todtkopf: What are those “unbalanced molecular constellations”? I don’t know of such a thing in chemistry, even today.

Lebensfroh: Well first of all, you might have fun thinking about the last of Gurwitsch’s conclusions, mentioned above, in relation to physical economy. What is involved in the impact of scientific and technological progress on the investment cycle (metabolism) of free energy and energy-of-the-system of an economy? As for Gurwitsch’s “unbalanced molecular constellations”, I think we illustrated that principle in our very conversation today.

Todtkopf: How so?

Lebensfroh: Well obviously the living process is a constant battle to keep those molecules from slipping back into their accustomed, banal, stupid, boring inorganic state. What must be supplied, to accomplish that, is not “energy” in the ordinary sense, but rather something akin to what Nicolaus of Cusa did for me the other day, and what I have tried to do for you in these last two talks. Don’t you think those great men are to be honored and emulated, who constantly raise people upward toward the passionate pursuit of truth? These are the real benefactors, fathers and leaders of the human race!

A note on the fate of Gurwitsch’s work:

The discovery of so-called “mitogenetic radiation” by the Russian biologist Alexander Gurwitsch, as a byproduct of Gurwitsch’s investigations into the higher principles of organization of living processes, was regarded by many leading scientists in the 1920s and early 1930s as one of the most far-reaching experimental discoveries in modern science. Among those was V.I. Vernadsky, a personal friend of Gurwitsch from the beginning of his researches at the Crimean town of Simferopol in 1918. Gurwitsch’s decisive 1923 experiments established that 1) all living tissue is a source of sustained, though highly variable and (in scalar terms) {extremely weak} radiation in the ultraviolet range of the light spectrum; 2) the process of cell division (mitosis) can be triggered by the absorption of no more than a {single photon} of such light by a suitably disposed cell; and 3) the existence and function of such “mitogenetic radiation” is intimately connected with the manner in which all local processes in a living organism — e.g. on the cellular, molecular and even atomic scales — are subordinated to a principle of organization unique to the living organism as a whole. By using mitogenetic radiation as a crucial experimental method in embryology, physiology, the study of the nervous system and other areas, Gurwitsch and his collaborators made one remarkable discovery after another, continuing up until Gurwitsch’s death in 1954.

Starting no later than the end of the 1920s, systematic operations were launched to “kill” the new area of research. These included a widely-publicized hatchet-job done on behalf of the Rockefeller Foundation by one A. Hollaender. By the end of the 1930s, Gurwitsch’s scientific reputation in Western countries had been significantly undermined, only to be virtually buried under the onslaught of ultra-reductionist currents of molecular biology after World War II. While the main lines of Gurwitsch’s work continued to be pursued in the Soviet Union — including in military-related domains — the efforts of Hollaender et al. established the “consensus opinion” in the West, that Gurwitsch’s radiation did not exist; or, in case it did exist, that it had no scientific importance. With the rapid overall decline in the quality of science in the Soviet Union from the 1960s and especially the 1970s on, the focus on the fundamental implications of Gurwitsch’s work was nearly lost there, too.

It was only in the mid-1970s that Gurwitsch’s work began to be revived in a serious way, with the work of Fritz Popp and his collaborators in Germany and other countries. From 1985 on, Lyndon LaRouche personally and his collaborators have played a crucial, indispensable role in keeping work in this and related areas alive internationally. In every case that has been examined so far, the results of Gurwitsch’s laboratory have been confirmed. In the meantime, technological developments make it possible to design new species of experiments which would not have been possible in Gurwitsch’s time.

In retrospect, it is obvious that a major motivation for burying Gurwitsch’s work, was that it threatened to derail the British plans, notably supported by the Harrimans, Rockefellers and others, to establish “race science” and eugenics as “authoritative scientific doctrines”. This program was resumed immediately after the war, and included the massive promotion of radical-mechanistic, reductionist forms of molecular biology and genetics, which had already begun to be developed by Max Delbrueck and others in the middle 1930s, with the support of the Rockefeller Foundation’s Warren Weaver. Of course, these operations went hand-in-hand with the promotion of behaviorist psychology, mechanistic theories of nerve function (Hodgkin-Huxley, John von Neumann, Norbert Wiener etc.), the work of von Neumann and others on formal logic, “artificial intelligence”, “self-reproducing machines”, “information theory” and so on and so forth. The British side, often clothed in “holistic” trappings, included the Huxleys, Joseph Needham, J.S. Haldane, Waddington, Bernal, and of course Russell. The Cambridge side of the British elite were predominantly biologists, following in the putative footsteps of Aristotle himself. Of course, the so-called “biologism” of Haeckel et al. was an important current flowing into the Nazi movement, and into today’s New Age and green movements.

Incidentally, I personally had the occasion to visit Hollaender in his office together with Fritz Popp around 1985, not long before Hollaender’s death at the age of over 90. Hollaender admitted having been deployed by the Rockefeller Foundation to Russia for the sole purpose of “investigating” Gurwitsch and his laboratory, bringing back the story that Gurwitsch’s experimental technique was allegedly “sloppy” and his results “unreliable”. (Hollaender subsequently carried out, and published in 1937, his own botched series of experiments, allegedly failing to discover any evidence of Gurwitsch’s radiation.) Confronted with Popp’s detailed measurements of mitogenetic radiation using modern photomultiplier instruments, Hollaender admitted without batting an eyelash, that he “had always suspected Gurwitsch had been right.”

The Simplest Discovery, Part I

by Jonathan Tennenbaum

The fundamental crisis of civilization is forcing the question: Where do the ideas come from, which we find in our own heads and those of our fellow human beings, and which determine Mankind’s ability or inability to survive the onrushing crisis?

The “short answer” is, that apart from the products of oligarchical manipulation, corruption, and decay, everything {positive} in our culture — not only science and technology, but the concepts of everyday life, and language itself — derives from nothing but the generation and assimilation of validated discoveries of principle, made by individual human minds, as measured on the metric of increase in the per-capita potential population density of the human species. The power of the oligarchy, of course, depends to a large extent on the success of its own massive efforts to cover up and distort the historical generation of culture (including science), while promoting popular belief in various varieties of empiricism and so-called “innate” or “self-evident” ideas.

It is worth stressing, that the battle against oligarchical obfuscation of the history of ideas, was a center of concern to the “American” republican circles around Schiller, Humboldt, and Gauss et al. at the beginning of the last century. Moreover, exactly the concept of positive human culture as the result of integration of individual acts of resolution of fundamental paradoxes by the Platonic method of hypothesis, was key to the revolutionary work of (among others) Gauss, Weber, and Riemann on the “anti-entropic” geometry of physical space-time. Riemann discusses this explicitly in his posthumous fragments on epistemology, which provide a most useful background for comprehending his 1854 paper on “The hypotheses underlying geometry.” Consider, in particular, the following passage translated from Riemann’s posthumous fragment, entitled “Attempt at a theory of the fundamental concepts of mathematics and physics as the basis for the explication of Nature.” (Note, that in this location, Riemann employs the terms “Begriff,” concept, and “Begriffssystem,” system of concepts, in a sense congruent with Lyn’s use of “fundamental assumption” and “hypothesis,” respectively.)

“On the basis of the concepts, through which we grasp the natural world, we not only constantly supplement our observations, but, in addition, we determine certain future observations in advance as necessary, or — in case our system of concepts is not sufficiently complete — as probable; on this basis it is determined, what is `possible’ (i.e., also what is `necessary’ or that whose opposite is impossible); furthermore, the degree of possibility (the `probability’) of every single event so judged possible, can be mathematically determined, when the concepts are sufficiently precise.

“If an event occurs, which is necessary or probable according to the given system of concepts, then that system is thereby confirmed; and it is on the basis of this confirmation through experience, that we base our confidence in those concepts.

“But if something unexpected occurs, being impossible or improbable according to the given system of concepts, then the task arises, to enlarge the system, or, where necessary, to transform it, in such a way that the observed event ceases to be impossible or improbable according to the enlarged or improved system of concepts. The extension or improvement of the conceptual system constitutes the `explanation’ of the unexpected event. Through this process, our understanding of Nature gradually becomes more comprehensive and more true, while at the same time reaching ever deeper beneath the surface of the phenomena.

“The history of the exact sciences, as far as we can follow it backwards in time, demonstrates that this, in fact, is the pathway by which our knowledge of Nature has progressed. The systems of concepts, which form the basis of our present understanding of Nature, were generated by progressive transformations of older conceptual systems; and the reasons that pushed forward the generation of new modes of explanation, can in every case be traced back to contradictions or improbabilities arising in older modes of explanation.

“Thus, the generation of new concepts, insofar as it is accessible to observation, occurs by this process.

“Herbart, on the other hand, has provided proof, that those concepts, upon which our conceptualization of the world is based, but whose origins we can neither trace back in history, nor in our own development, because they are transmitted together with language without being noticed — all of those concepts, insofar as they are more than mere forms of connection between simple sense perceptions, can be derived from the above source; and need not be attributed to some special property of the human soul, assumed to predate all experience (as Kant claimed to do with his categories).”

Often it is most instructive, in exploring the implications of a fundamental principle such as Riemann’s, to focus attention on the most deceptively simple cases — cases of the sort fools would be likely to dismiss as being “too obvious to be worth thinking about.”

Take, for example, the everyday concept of a “day.” What could be more self-evident? Does Riemann actually mean to say that there is a real, creative {discovery} embedded in that idea? What would have been the paradox or paradoxes, whose resolution gave birth to the concept of “a day”? Evidently, the discovery involved predates history in the usual sense. What we might do, is try to “project” ourselves mentally back to a hypothetical, very, very distant point in time, at which the concept of “a day” did not exist, and then ask: What paradoxes {must intrinsically} confront a mind in the process of freeing itself from a naive, beast-like belief in the primacy of sense-perception? First, reflect on the following:

Could we discover anything without memory? Is a pot-headed Yahoo, who cannot remember what he saw or did 5 minutes earlier, able to make scientific discoveries? Would a Yahoo ever have been able even to discover the existence of a “day” as a recurring cycle of light and darkness? Or was the development of poetry, as a means of development of the powers of memory, crucial to the emergence of human civilization?

Pre-Socratic Greek tradition often spoke of the origin of the Universe in terms of the creation of Order (Cosmos) out of Chaos. Does this not exactly describe the subjective process by which a human mind frees itself from the blind impulses of “animal instinct” and “sense certainty”? The world of the existentialist Yahoo, or a newly-born infant, is a kind of Chaos, a “kaleidoscope of feelings” replacing each other in more or less rapid succession. Mankind could not survive, were it not possible to awaken a power of {creative discovery} in the infant or the supposedly infant-like, primitive man — a mental function energized by the most powerful form of human emotion, {Agape}. It is that agapic power, inseparable from the faculty of {memory} as understood by the Renaissance, which conquers the Chaos of bestiality and creates the Cosmos of human development as an ordering of successive acts of discovery.

Next, consider the elementary paradox of change, as it is addressed by the simplest of astronomical discoveries. The following exploration is hypothetical, but necessarily touches upon a discovery actually made (and in fact, made repeatedly in various forms) in human history.

You are a prehistoric human being, living perhaps 500,000 years ago. On a beautiful clear night, you seek a place to lie down under the open sky. Gaze up, from there, at the magnificent canopy of the heavens! The myriad stars shine down on you in majestical silence, like little lights affixed to a lofty dome. Here is peace, here is rest! You close your eyes and relax.

You awaken later that night. As your eyes once more open to the sky, you are struck with a sudden sense of strangeness. Something is different! Something has happened! The stars seem to have changed. Looking around, you recognize a group of bright stars, whose form you remember having remarked before you took your nap. That group of stars is no longer where it was before; the stars have changed position!

Changed? How is that possible? You stare intently at the stars. Not the slightest motion is perceptible; only a gentle twinkling while they remain, seemingly immovable, in their places.

A paradox! On the one hand, your faculty of sense perception, swears to you that the stars are fixed and motionless. On the other hand, you remember that the same faculty has earlier testified, no less insistently, of an arrangement of stars in the sky, which is different from the one it now reports!

Intrigued, you repeat the experiment, but with a variation: you ask a friend to keep watching the stars, without interruption, during the time your eyes are closed. The experiment is performed. Once again, you find an undeniable change in the positions of the stars, when you look at the sky again after a nap. Your friend, however, swears he never saw the stars move!

The paradox strikes deep into your mind. Whatever follows, will depend on how you respond to the paradox. However you respond — or even if you do {not} respond — that response will reflect some sort of {hypothesis}, an hypothesis generated nowhere but inside your own mind.

Shall you merely conclude that your eyes (or those of your friend) have lied to you in some arbitrary fashion? Or that the Universe itself is maliciously arbitrary? If so, then how would human existence be possible?

Or is there another way out? Perhaps we should not {completely} reject the evidence of our senses. Perhaps it were better to assume, that although our sense perceptions in themselves do not represent reality, still there must be some implicitly discoverable, lawful relationship between sense perception and reality. This is the pathway of science.

Choosing that pathway, the paradox moves us to hypothesize the existence of something, which our senses — in virtue of some lawful limitation of the same — cannot grasp: to hypothesize a concept of a {process of change}, which in itself is {invisible} to the senses, but yet efficiently accounts for the observed (or rather, remembered) {difference} in positions! That adduced concept, of an invisible — but efficient! — process of change, is an object of a different sort than a sense perception (including the paradoxical entity we commonly identify as the “perception of motion”). It is not sufficient to account for that new concept, by merely saying: “the stars move too slowly for our eyes to see.” The point is, that the paradox just presented, evokes the potential of a {new quality of relationship of our mind to the Universe}.

A change in the substance of our mind! Prior to the explosion of the paradox, you looked at the Universe (the starry heavens) as an object of sense perception. Now, you are looking at the Universe from the standpoint of a process of discovery, which stands in ironical contrast to naive belief in sense perception. To the reflecting mind, that {difference in mental attitude}, from before to now, provokes the hypothesis of a {higher species of change} — a process of improvement of human cognitive powers, which is invisible to our senses, but real and earthshakingly powerful nonetheless.

Turning once more to our nightly observations, what shall be our next step? Does our power of discovery give us the capability to hypothesize, not only the existence, but also the {form} of the process of change of position of the stars? How would we discover the coherence between the paradox of the stars’ motion, and a similar paradox, posed by the behavior of the Sun? And how could we do that, using nothing more than the means which were available to prehistoric Man?

Lest the reader find the above discussion “too trivial” to be important, consider the following. Nearly everyone today is faced (or will be soon) with a congruent form of paradox: On the one hand, most people would claim that their most deeply held “values and beliefs,” being absolutely self-evident (to them!) are fixed and unchangeable. On the other hand, comparing those people’s “deeply held personal values” of today, with the corresponding values held “self-evidently” by those same people 30 years ago, we find almost nothing in common! If mankind is to survive, an increasing ration of leading and ordinary citizens must be brought to discover, as an “enemy image,” the process by which the oligarchy was able to induce that radical, downward “paradigm shift” in their own minds.

The Simplest Discovery, Part II

by Jonathan Tennenbaum

Have you ever stopped to consider, how a human being, a “mere infinitesimal” on the scale of the world as a whole, could actually come to know the vast dimensions of the solar system, or to measure astronomical cycles hundreds or thousands of times longer than the brief span of his or her individual life? The existence of such powers of cognition, by which the “infinitesimal” can know the macrocosm within its own internal mental processes, is the central issue in the bitter, millennial conflict between the human species and the oligarchical “Gods of Olympus.” Witness the words of Aeschylus’ Prometheus (1):

“Believe not, that I from pride or stubbornness 
Keep silent. Heart-rending thoughts I nurture, 
Watching myself thus trodden under foot. 
And yet to the new Gods, they — Was it not I 
Who granted them their fitting honors? 
But, of this I’ll say nothing. Besides, it were to those who know 
That I would address you. But, of the dire need of Men 
Let me tell, how I made them, foolish at first, 
To be full of thought and empowered with Reason. 
I say this not to complain of them, 
But only to explain the goodly intention of my gifts. 
They, who had eyes from the first, but saw not, 
Who had ears, yet heard not; but like figments 
Of dreams, their entire life long 
Mixed all things blindly together, and knew nothing 
Of bricklaid houses and walls, 
But lived deep-down in sunless caves 
Like hordes of ants, 
And knew nothing: no sign to foretell the winter storm, 
Nor the spring rich in flowers, nor the fruitful 
Summer, no sure measure. Without Reason did they act 
In everything, ’til I made them heed the rising and setting 
Of the stars, so difficult to distinguish. 
And number, a most ingenious invention, 
I created for them, and the invention of writing 
As a monument to all, and Mother of the Muses. 
And ’twas I that first put the wild beasts under yoke, 
That they do service to the plough and bear burdens, and so 
Lift many a heavy task from the backs of men. 
And to the wagons I hitched, eager willing to obey, 
Horses, the splendor of wealth. 
And to sail o’er the seas — none but I 
Invented the shipman’s winged sails. 
Yet I, who for mortals such things 
Created, can find nothing for myself 
To deliver me from my present plight.”

Not without cause did Aeschylus emphasize the earliest discoveries of astronomy, connected with the construction of a solar calendar, as crucial events in the emergence of human reason as “the sure measure” of things.

Implicitly, the discoveries made by our pre-historic “colleague” in connection with the “invisible” motion of the stars, refute everything university students have been taught to believe about science and “liberal arts” since the mid-1960s. Astronomical cycles — beginning with the “day” — are neither objects of sense perception, nor are they “robust statistical correlations.” Rather, the astronomical cycles emerge as {conceptions} created in the human mind, through a process of generation and creative solution of paradoxes.

From this standpoint, let us push our exploration of prehistorical discoveries a few steps further, to identify paradoxes which {necessarily} must have arisen, even though we do not now know the specific historical circumstances.

Our prehistoric observer notes: (i) The positions of the stars appear to undergo a constant process of change. (ii) But at the same time, certain arrays of stars, identified and fixed in memory through poetic (mnemonic) devices already from earliest times, remain seemingly unchanged throughout the course of a night, reappearing every night with the same distinct form. Also, apart from the appearance and disappearance of stars on the horizon, the overall configuration of the constellations in relation to each other in the sky — “the constellation of constellations!” — remains unchanged.

This paradox of “change” combined with “no change” evokes the notion, that the “invisible” motion of the stars, has an implicitly intelligible {form}. That paradoxical idea becomes a specific thought-object, undergoing its own process of evolution in the direction of a notion of a {universal, rotational action} subsuming both the process of change in the night sky, and the daily motion of the Sun.

Indeed, observation of the rising and setting of the Sun, and studying the Sun’s overall motion using such means as observation of the shadows cast by poles (the gnomon), demonstrates an overall {coherence} between the nightly motion of the “constellation of constellations” and the motion of the Sun during daytime. As singularities of the hypothesized universal action, we get (among other things) the differentiation of East, West, North and South as determinate directions on the Earth’s surface.

In this way, we revolutionize the empirical notion of a “day” as a mere “yin-yang” alternation of light and darkness. Instead, we conceive the day as an astronomical cycle, subsuming an {increasing density of distinct events} within a single ordered totality. Just as the gnomon’s shadow progressively transits the markings of a primitive sundial, including the meridian defined by the position of shortest shadow; so the cycle of the “day” subsumes and orders the events of rising and setting of stars and constellations, and their transit across definable angular positions as defined by the sightings of a primitive stellar observatory. From the development of these methods, our predecessors established the regular division of the day, and an indispensable means for harmonically ordering the activities of society.
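[A minimal computational sketch (Python) of the gnomon principle just described, under the simplifying assumption of a vertical gnomon on level ground: the shadow shortens as the Sun climbs, so the day's shortest shadow marks local noon and lies along the meridian. The altitudes used are illustrative only.]

    import math

    # Shadow cast by a vertical gnomon of height h when the Sun stands
    # at the given altitude (degrees above the horizon):
    #     shadow = h / tan(altitude)
    def shadow_length(h, altitude_deg):
        return h / math.tan(math.radians(altitude_deg))

    for altitude in (10, 30, 50, 60):      # illustrative solar altitudes
        print(altitude, "deg ->", round(shadow_length(1.0, altitude), 2), "m")
    # 10 -> 5.67, 30 -> 1.73, 50 -> 0.84, 60 -> 0.58
    # The highest altitude (local noon) casts the SHORTEST shadow;
    # the line of that noon shadow marks the meridian.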

But, there is a far-reaching paradox embedded in this splendid hypothesis of the day’s rotational cycle as a universal ordering principle! Looming long on the horizon of our prehistoric astronomer’s mind, but now growing in urgency, is the realization, that the day itself is subject to {change}. For example, the array of constellations, which are visible in the sky just before sunrise, is strikingly different in winter than in summer. To investigate the origin of this difference, identify a star or constellation, whose setting in the West immediately precedes the rising of the Sun in the East. Within a few days, we become aware of a slight delay in the appearance of the Sun, after the selected star or constellation sets in the West. The delay keeps growing: the Sun seems to be “slipping” backward in time relative to the stars! That apparent slippage constitutes a new, anomalous degree of change. Again, the question is posed: what is the exact {form} of this change?
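[The rate of that slippage is easy to estimate. A back-of-the-envelope sketch in Python, assuming only the modern value of the tropical year: in one year the Sun falls one full daily cycle behind the stars, so the daily delay comes to about four minutes.]

    # In one tropical year (~365.2422 days) the Sun slips one whole
    # cycle (24 hours) backward relative to the fixed stars.
    minutes_per_day = 24 * 60 / 365.2422
    print(round(minutes_per_day, 2), "minutes of slippage per day")  # ~3.94
    # Equivalently: the stellar (sidereal) day is about 23 h 56 m,
    # roughly 4 minutes shorter than the solar day.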

Our prehistoric astronomer juxtaposes this solar anomaly with a whole cluster of paradoxes, connected with the empirical cycle of “the year.” The empirical notion of a year as a mere alternation of “hot” and “cold” seasons, or periodic recurrence of monsoons, floods or other natural phenomena, bespeaks the nearly bestial state of Man before Prometheus bestowed his gifts. The mere counting of days between recurrence of some terrestrial event, leads to erratic results, falling far short of the “sure sign” promised by Prometheus.

Worse, was the attempt to arbitrarily impose upon society, a non-existent correlation between changes in season and the cycles of the Moon. So, the Babylonians (and others) insisted on a calendar based on the so-called synodic lunar month, as defined by the recurrence of the full moon after approximately 29.5 days. After the passage of a mere 18 “years” of 12 synodic lunar months each, winter now occurs in the months where summer used to be, and vice versa! The attempt to “fix” this monstrous failure with the addition of special days and alternation of “longer” and “shorter” months, while rejecting the primacy of the solar cycles and insisting on the cult of the Moon (or some “rotten compromise” between the two), is more than typical of the psychosis which dooms every oligarchical empire to collapse. Although the present Western calendar is entirely solar-based, and our months have no correlation to the phases of the moon, the term “month” still remains as an apparent relic of Babylonian lunacy.
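[The failure can be checked with simple arithmetic. A minimal sketch (Python), using the modern mean values of the synodic month and the tropical year:]

    synodic_month = 29.5306      # mean days from full moon to full moon
    tropical_year = 365.2422     # mean days in the cycle of the seasons
    lunar_year = 12 * synodic_month              # ~354.37 days
    drift = tropical_year - lunar_year           # ~10.88 days per year
    print(round(drift, 2), "days of seasonal drift per lunar year")
    print(round((tropical_year / 2) / drift, 1),
          "lunar years to displace the seasons by half a year")  # ~16.8
    # Consistent with the roughly 18 lunar "years" cited above.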

In contrast, by adducing a new, “solar long cycle” from the {anomaly} posed by the slight discrepancy between the solar motion and daily stellar motion, our prehistoric astronomer was eventually able to invent a “sure measure” of the seasonal cycle, which remains true over centuries and even millennia! The result is best demonstrated by the spherical sundials of ancient Greek times, which registered not only the daily trajectory of the Sun, but also the cycle of variation of the Sun’s (approximately circular) pathway in the sky, over a period of (approximately) 365 days. That cycle subsumes the cycle of change in the relative lengths of night and day, as well as the angles of inclination of the Sun’s rays to the Earth’s surface, providing in turn an intelligible basis for the variation of the seasons.

But, the manner in which the yearly solar cycle “modulates” the daily one, ordering the variations of the latter, implicitly poses a new array of paradoxes. For example: If the day is variable, might not the year be so also? And in fact, careful observation of the loci of rising or setting of the Sun and the stars, by means of suitable horizon markers and observation points, revealed a very slight — but distinct — anomaly in the solar cycle. From this, the ancient astronomers were able, thousands of years ago, to adduce an approximately 26,000-year cycle of the so-called precession of the equinoxes! The result is a third, “long cycle” modulating the year. The latter, according to our best present knowledge, essentially determines the cycle of ice-ages, together with a fourth anomaly, namely the elliptical character of the Earth-Sun orbit.

Still another paradox exploded in the repeated, failed attempts to fit various among the multiply-connected solar cycles, as well as planetary and other cycles, into a single calendar. This included the search for a single “great cycle” subsuming all the others, such that the end of such a “great cycle” would mark the simultaneous end of all the shorter cycles. The work of the Pythagoreans on the fallacy of “linear commensurability” put an end to such Babylonian “systems analysis,” and posed yet a new level of paradox:

If there exists no grand mathematical system which can combine and account for the various cycles, then how can we conceptualize the “One” which subsumes the successive emergence of new astronomical cycles as apparent new degrees of freedom of action in our Universe? How do we master the paradoxical principle of Heraclitus, that “nothing is constant except change?”

———————————————————

(1) This version is my first-draft translation of lines 436-471 in the “Reclam” series German translation of Aeschylus’ drama, which seemed somewhat clearer than an English translation I had seen earlier. Bruce Director, who first called my attention to this selection, told me there are better English translations. But, I think the point is well enough made for the present purpose.

Spring Cleaning For The Mind: On `Proof,’ Part I

by Jonathan Tennenbaum

 “You have to prove your case…” “Demonstrate to me on my own terms, that what you say is right…” “Where are your facts? Give me the hard facts!” “I agree with you, but my wife…” “That sounds exaggerated. How can you be so sure?…” “Why should I believe you? I heard something different from my friends and high-level contacts!”

Most arguments tend to be a waste of time, because they avoid the really sensitive issue, underlying everything else: What does it mean to KNOW something, as opposed to merely having an opinion, belief or strong impression? And how is actual knowledge communicated? By what sort of deliberative processes might human beings arrive at shared, valid judgements of reality? What is the authority, by which a scientist (for example) might uphold a truth, otherwise regarded as obviously wrong or absurd by the overwhelming majority — or even every single one — of his colleagues?

At first glance, the mathematical notion of “proof”, as historically associated with Euclid’s {Elements} of geometry, would seem to provide an {ideal} model of rational argumentation on any subject. According to this method, one first seeks to identify, as a “common denominator” and foundation for argument, those most elementary propositions and facts, as are acknowledged to be self-evident and true by all thinking persons. Then one seeks to reduce all other truths to those elementary ones, by means of logical deduction.

Unfortunately, most people nowadays lack even a rudimentary acquaintance with the old-fashioned treatment of Euclidean geometry as a lattice-work of theorems deduced from a set of definitions, axioms and postulates. They are thereby deprived of a most useful means, with which both to conceptualize the devastating fallacy of deductive notions of knowledge, and to grasp the LaRouche-Riemann correction of that fallacy.

Accordingly, I propose to approach the problem, this time not through geometry per se, but with reference to the practice of political organizing, whereby the issues of “knowledge” and “proof” are posed again and again, on a daily basis.

The trouble starts, typically, when the organizer asks himself or herself: “How do I CONVINCE this person to do [what they should do]?”

This almost instinctively leads to the question: “How do I acquire the necessary AUTHORITY to move the mind of this person?” Having difficulty locating his or her INTERNAL authority — and Lyn’s authority — in a rigorous process of discovery of universal principles, the organizer tends to fall back on a dangerous ruse: To appeal to the purported authority of certain, strongly-held beliefs and opinions in the minds of the people one is attempting to organize, as the basis for eliciting agreement with the proposed proposition.

In other words: the organizer wittingly or unwittingly adopts, as the standard of “proof”, that which public opinion accepts as “convincing arguments” — RATHER than those processes, by which reality can actually be made known to the cognitive processes of the individual, sovereign human mind. This inevitably leads, in the form of argument too commonly practiced nowadays, to the following parody of formal deductive method:

1. Select a set of basic, commonly-accepted concepts and a set of basic propositions, which appear so self-evident, that they are generally regarded as true beyond any doubt (or at least taken to be so by the person you are trying to convince!). This selection of concepts and “self-evident truths” plays a role analogous to the set of definitions, axioms and postulates in Euclidean geometry.

2. Now formulate the proposition you wish to “prove,” in terms of the adopted system of basic concepts, fulfilling the demand of your interlocutor: “to express your point in terms I can understand.”

3. Supplement, as required, the array of “basic, self-evident truths,” with a complementary array of “facts” — perceptions of events, as expressed and interpreted in terms of the given basic concepts, axioms and postulates, and having a comparable, self-evident quality of “hardness” and authority. “Facts” of the form, typically: “I heard him say it myself,” “I saw it with my own eyes, on television,” and so forth.

4. Now construct a (more or less rigorous) chain of deductions, showing that the proposition you are putting forward is a logical consequence of the given array of “basic truths” and “hard facts.”

5. Now tell the person: “You see! My proposition is a theorem of the postulates and facts, whose authority you accept. Therefore you must accept what I am saying.”

A bit of honest reflection, will show that the mode of argument, used by most people most of the time, does indeed converge on a parody of the mathematical-deductive method, along the lines just sketched. And, when this method fails to achieve the desired result — as it most often does fail, and in a deeper sense ALWAYS fails — then we tend to put the blame on the “irrationality” of the person we are arguing with, or on a purported lack of a sufficient arsenal of “hard facts” to back up our argument. Yet, the essential folly was on the organizer’s own side.

In the spirit of “spring cleaning,” let us look more carefully at this purported solution to the dilemma of “proof.” A very simple observation reveals a devastating fallacy — a fallacy of such incredible virulence, that it can bring down entire civilizations!

The very nature of a DEDUCTIVE argument (by definition!) is that it systematically EXCLUDES from consideration, everything except the original array of definitions or concepts, axioms, postulates and purported “facts” — the latter being framed on the basis of those same, generally-accepted concepts and axioms. Our argument was confined entirely to the “virtual world” of our interlocutor’s concepts and assumptions. At no point did we ever address reality itself! At no point did we oblige, or even encourage our interlocutor to actually DISCOVER anything about the real world!

We were arguing as if at a blackboard, in a room without windows. The very form of our argument was such, that no universal physical principle could EVER be discovered or otherwise known by such means. To the extent we “succeeded” in convincing our interlocutor, we actually perpetrated a fraud. Because, the “agreement,” thus elicited, is purely accidental and has occurred in the absence of any real process of cognition.
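[To make this point concrete, here is an editorial toy model, a minimal Python sketch under simplifying assumptions: facts are bare symbols, rules are premise-conclusion pairs, and "argument" is exhaustive forward chaining. Whatever such an engine derives was already implicit in its starting axioms; nothing outside their universe can ever appear.]

    # Toy deductive engine: derives the full deductive closure of a
    # set of axioms under the given rules -- and nothing more.
    def closure(axioms, rules):
        derived = set(axioms)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if set(premises) <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    axioms = {"A", "B"}
    rules = [(("A", "B"), "C"),   # from A and B, conclude C
             (("C",), "D")]       # from C, conclude D
    print(sorted(closure(axioms, rules)))   # ['A', 'B', 'C', 'D']
    # No run of this engine can ever output a proposition that was not
    # already latent in the axioms and rules it was handed at the start.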

All this brings us to an agonizing paradox: If we cannot base ourselves in the authority of generally-accepted, “self-evident” truths, then what can we start with? What do we do, when, in the now-typical case, our interlocutor’s basic axioms are incompatible with every essential principle of reality, as these have become known to us, above all through the work of Lyn?

Archimedes reportedly once said: “If you give me a lever and a place to stand, I can move the Earth.” Our task, indeed, is to move the Earth. Where, then, shall we stand?

Spring Cleaning For The Mind: On `Proof,' Part II

by Jonathan Tennenbaum

“Science is a manifestation of action in human society…. One cannot know a scientific truth by logic, but only by life. ACTION is characteristic of scientific thought.” –Vladimir Vernadsky

In the first installment of this discussion, I set forth an elementary paradox, arising constantly in political organizing, and which might be restated in brief as follows:

In the course of arguing any point with a person, most of us have a nearly instinctive tendency, to seek for some commonly-shared, fundamental beliefs, values or opinions — plus commonly-accepted “hard facts” — as a basis for an essentially deductive “proof” of the point we are arguing for. But, what if the person, we are arguing with, seems to have a completely opposite set of basic assumptions?

Sometimes, in this situation, organizers resort to a clever, but ultimately disastrous form of cheating. They think to themselves: “This guy here has some beliefs and opinions which are totally opposed to our axioms. He will freak out if I show him what LaRouche really stands for.

“So let me find a couple of specific issues where he will agree with us.” In other words: avoid a confrontation on the axiomatic level, by carefully selecting a small {subset} of theorems derived from “LaRouche’s axioms,” which happen at the same time to be theorems in the other guy’s axiom-system, or at least to be {consistent} with the latter — even though the two sets of axioms are themselves mutually inconsistent!

Of course, the “common ground” secured in this manner, is entirely spurious, and can fall apart at any moment. But exactly for this reason, it becomes a trap for the organizer foolish enough to have cheated in the first place! For, the unfortunate organizer now has a stake in maintaining the illusory “agreement,” and attempts to constantly “screen” the contact from any direct confrontation with what we really stand for. The result is profoundly demoralizing for everyone involved.

Only {apparently} opposite to the indicated tendency, is the “super hardline” tactic, typified by: “No compromises! If the guy doesn’t agree with us, then hang up on him immediately!” Either way, we are avoiding the real issue, which is how to confront and {change} the fundamental axioms of thinking in another person.

Before returning to our organizing situation and a proposed solution to the paradox, let’s stop to clarify in our minds the crucial distinction between a fundamental axiom, and a mere isolated opinion or theorem.

The characteristic of a fundamental axiom or principle of thinking (they signify essentially the same thing, in this context) is, that each such axiom or principle implicitly shapes the {entirety} of our thinking. A change in any axiom implicitly changes how we think about each and every other object of thought. To put it another way: such principles, embedded in our mind as singularities in the form of fundamental ontological assumptions, shape the entire {geometry} of our mental processes. All other ideas, opinions, judgments etc. are determined by that geometry, but do not determine it.

What determines the outcome of a person’s action upon the Universe, is not the apparent, literal content of his or her individual judgments and opinions per se, not positions on this or that issue, but the geometry of their mental processes as a whole. And that geometry is what we are acting upon, with greater or lesser success, when we organize.

But is it really possible, in a fundamental sense, to change people’s minds in such a profound way? When we attack our own and others’ nagging doubts on this account, an epistemological monster invariably pops up in front of us: the Kantian paradox, in one or the other of its countless reincarnations. For example: If “the way a person thinks,” the form of consciousness, is given {a priori} — as “pure reason,” inborn and prior to all experience — then it would seem to be unassailable and impervious to any fundamental change. For, all a posteriori evidence, including our attempts to argue with the person, will simply be interpreted within the given geometry of the mind, without having any effect upon that geometry. (“I am as I am, so you can’t change me.”)

But if, on the other hand, there is no “pure reason,” but the fundamental axioms of thinking are a more or less arbitrary product of our upbringing, education, environment, etc. then where is the standard of truth? How could we ever know anything for sure? (“All judgments are relative. My opinion is as good as yours.”)

Now, as a matter of fact, the history of religious beliefs and cultures generally, as well as the historical development of physical science, demonstrates beyond any doubt, that sweeping changes in the pervasive “geometry” of human mental processes {do} in fact occur — both in individual persons, and in entire societies. They occur all the time in periods of rapid, generalized scientific progress, as typified by the Golden Renaissance.

On the other hand, we have direct experience, in recent decades, of a process of rapid cultural degeneration, which is not a consequence of this or that wrong opinion, but rather the result of negative changes in the sets of fundamental axioms underlying practically all cultures of the world.

Not only do such positive and negative changes occur, but (as LaRouche has demonstrated in most devastating fashion) they are, in every known case, ultimately the result of conscious intentions.

But changes in the prevailing axioms of society, have {physical} effects, effects that are manifested in gross terms on the historical scale of the rise and collapse of cultures and civilizations. These same effects are {measurable}, on even much shorter time-scales and more precisely, by the methods of physical economy. The possibility of judging, measuring, and forecasting in advance, the net impact of alternative choices of fundamental axioms of thinking, on the power of entire societies to maintain and improve their physical existence per capita, is inseparable from {cognition}. By correlating the measurable physical effects produced by successive such choices, with the quality of the human mental processes — implicitly reproducible in our own mind — which generate either positive or negative series of choices of axioms, we can judge the relative truthfulness of those mental processes, their degree of agreement with the laws of the Universe.

The Kantian paradoxes pop up, automatically, when the implications of cognition are ignored.

Now reflect on the quoted statement by Vernadsky. Take the case of a creative scientist, discovering a new universal physical principle. What is the subject of the discovery? Not a so-called “objective physical Universe,” not an object “out there,” supposed to exist as if independently of human activity! No! The creative scientist is deliberating on his or her own {thinking processes}, and those of his colleagues and society generally, with a view toward correcting or improving upon those fundamental principles that govern our thinking about how we act upon the Universe.

The judgment validating the discovery takes the form of an {inequality}: the demonstration, that the discovery of a superior principle, and the accompanying modification of the aggregate array of pre-existing principles, implies a {higher} potential rate of increase in Man’s per-capita powers to command the Universe, than was previously achievable. In the simplest case, the inequality is satisfied by detecting and correcting a systematic fallacy in our way of thinking about the world.

This is our model of a non-deductive “proof”: proof by {discovery}, proof by {improvement}. The ultimate criterion is not {logical} in nature, but {moral}: advancing the common good of Mankind.

To the extent we can account for the geometry of our mental processes, as the accumulated effect of an ordered series of such discoveries of improvement — each of which can be qualified as a discovery of universal principle relative to our own action in the world — then our knowledge and practice become fully intelligible in the form of a self-subsisting Riemannian manifold. Only then do we really know what we are talking about. Then we can {prove} what we know to any person, who is willing and capable of reenacting, in his or her mind, the process by which we came to know what we know.

Now turn back to our organizing situation. The preceding train of thought points to a radically different approach than the failed, pseudo-deductive procedure we examined in the first installment of this series.

First: don’t waste time on theorems, but get at the axiomatic issues immediately, by the least-time path. Use theorems only as vehicles to address the axioms. Second: “prove” by bringing people to discover how their own mental processes become more powerful, the world more intelligible and their ability to change it, stronger, when they adopt a superior axiomatic standpoint to the demonstrably flawed one they held up to that point.

Somebody might retort: “Surely you don’t mean we have to go through all that epistemological stuff with our contacts! We have no time for that.”

But from the woefully low efficiency of much of our organizing, and the sheer man-hours expended, someone might rather think we had plenty of time to waste! The most decisive thing to be conveyed, can be conveyed as if in an instant, by little more than a happy shift in attitude or mood on the part of the organizer. This is nothing unknown to an experienced organizer: we do succeed, part of the time. The problem is, to do what succeeds, {all} the time.

So, instead of getting tied up arguing the truth or falsity of a given proposition, consider something like the following:

“Look, the reason you get taken in by the kind of nonsense you are telling me now, is, that you never studied what LaRouche has to say about the difference between human beings and animals.”

“What do you mean?”

“You voted for Bush (or Gore), didn’t you?”

(A moment of embarrassment.)

“Well, in view of what has happened to us all, as a result of that kind of mistake, wouldn’t you agree, that the difference between Man and beast should be the key issue in all politics?”

“Fantastic! I never thought of that.”

Contrary to Kant, the form of consciousness is not a God-given, fixed entity. It is subject to deliberate improvement, by our God-given powers to organize!

{Dynamis} vs. {Energeia} — A Sketch

by Jonathan Tennenbaum

Since at least the time of Plato and Aristotle, and most likely even long before Pythagoras, the struggle between oligarchical and republican conceptions of physics has turned on the relationship between what the Greeks called {dynamis} and {energeia}. To a rough first approximation, the Greek {dynamis} might be rendered, in its broad usage, variously as “ability,” “potential,” “potency,” “power” (German {Vermoegen}, {Faehigkeit}, {Kraft}, etc.); whereas {energeia} corresponds (roughly) to “activity” (German {Taetigkeit}) and (in Aristotle, especially) to “actuality” in the sense of “actively existing.”

Plato’s dialogs demonstrate, however, that Plato and his circles possessed a precise and highly developed scientific conception of {dynamis}, having no direct equivalent in today’s degenerated modern language usage.

Perhaps the best illustration of that degeneration, and of its causes, is the freak-out by virtually every modern translator, at the implications of a celebrated passage in Plato’s {Theaetetus}, to which Lyn has often referred. This is the place where the young Theaetetus recounts to Socrates a preliminary discovery concerning the nature of the “powers” connected with the doubling, tripling etc. of a square, and which lie beyond the domain of simple linear magnitudes. Rejecting the implications of Plato’s actual term, {dynamis}, modern translators typically try to bring the passage into conformity with the “academic correctness” of textbook mathematics, using “root” or “surd” in place of “power” and apologizing in footnotes for the supposed “inappropriateness” of Plato’s choice of language!

Actually, as the {Theaetetus}, the {Meno} and other dialogs demonstrate, Plato’s conception of {dynamis} belongs uniquely to the domain of {physics}, not mathematics per se. In particular, the subject of Theaetetus’ account is not the solving of an equation, but rather the discovery of the unseen principles of generation of the Universe — physical principles! — focussing for this purpose on the paradoxical characteristics of the visual domain.

It is Plato’s conception of {dynamis}, as revived and developed by Nicholas of Cusa and Kepler, that leads to Leibniz’ founding of physical economy and what Leibniz called “the science of dynamics,” as opposed to Newton’s mechanics; the pathway leads from there into the work of Gauss and Riemann, and finally to Lyndon LaRouche’s discoveries in physical economy. It is not by accident that Lyn, in his book {In Defense of Common Sense}, for example, cites exactly the indicated passage of Plato’s {Theaetetus}, in the context of presenting his own conception of “rate of increase of relative potential population density” through the process of individual human discovery and the successive integration into social practice, of new physical “powers.” That latter conception constitutes, in my view, the highest development reached so far, in “unfolding” what was implicit in Plato’s {dynamis}.

To throw further light on these matters, I propose now to take a brief look at the oligarchical side of the coin, which goes very clearly back to Aristotle. What sticks out immediately, in examining Aristotle’s {Metaphysics}, is his insistence on the primacy of {energeia} as opposed to {dynamis}. That insistence went hand-in-hand with Aristotle’s attack on metaphor and the Platonic ideas. Aristotle writes, for example ({Metaphysics}, Book IX):

“Since all abilities (powers) are either inborn, as are our senses; or are acquired by practice, as the ability to play a flute; or are acquired by learning, as the powers of the sciences; in all cases one can gain such powers, as are acquired by practice or learning, {only} through the aid of something that was {already} realized (actualized)…

“For from the potentially existing, the actually existing is always produced by an actually existing thing, e.g. man from man, musician by musician; there is always a first mover, and the mover already exists actually. We have said in our account of substance that everything that is produced is something produced from something and by something, and is the same in species as it…

“Obviously, then, actuality ({energeia}) is prior both to potency ({dynamis}) and to every principle of change.”

Rather than get entangled in the ins and outs of Aristotle’s theory of existence and becoming, focus on the systematic, axiomatic flaw in Aristotle’s whole manner of argumentation: He rejects — or at least disregards, as if it were nonexistent — the power of human creative discovery, of human reason, and of a creative principle underlying the Universe as a whole. In other words, Aristotle denies the possibility of a {self-developing, or self-actualizing potential}, that which Nicholas of Cusa later called the {posse-est} ({posse} corresponding to Plato’s {dynamis}). Lurking behind Aristotle’s notion, that existence can only flow from what he calls “actually existing things,” is a mind-set, which can attribute “actual existence” only to such objects and motions, as have the quality of objects of sense perception.

These points require more elaboration. For the present purposes, however, as a short-cut and in order to throw the issue of “dynamis vs energeia” into strategic perspective, I propose turning to one of the more effective British operations of the 19th century, one which — like so much British wickedness — drew originally from Aristotle.

The Cult of Energy

From the early decades to the middle of the 19th century, parallel with operations leading to the unleashing of the Confederacy and the US Civil War, a scientific cult was launched by Lord Kelvin and the Thomas Huxley-Herbert Spencer “X-Club” circles, Hermann Helmholtz, Rudolf Clausius et al., directed against the influence of Leibniz and his successors, including Gauss in particular. Although that cult involved several interrelated “theme parks” — such as the so-called Darwinian theory of evolution and Herbert Spencer’s fraudulent concept of an “iron law of progress” — we might fittingly refer to it as “the Cult of Energy.”

Crucial to the operation was the relative success, achieved by the conspirators, in foisting two fraudulent formulations on the scientific community: “First and Second Laws of Thermodynamics,” and their monstrous corollary, the supposedly inevitable “heat death of the Universe.”

The utopian political thrust of the operation was more or less obvious from the beginning, but became luridly explicit, among other things, in the “Energeticist Movement” associated with Wilhelm Ostwald around the turn of the 20th century. Ostwald advocated a World Government based on the use of “energy” as the universal, unifying concept not only for all of physical science, but also for economics, psychology, sociology and the arts.

Although the energeticists and the myriad, competing materialist (including “Diamat”), reductionist and positivist movements and countermovements of the late 19th century and early 20th century, are now mostly forgotten, the axiomatic germ of the Cult of Energy remains deeply embedded in European culture, like the modified genome left over in the tissues of a patient after an acute lentivirus infection has subsided. In particular, for over a century nearly everyone has been miseducated to believe, that “energy” is an objective scientific reality, and that the First and Second Laws constitute proven scientific truths.

Not accidentally, the Kelvin-Helmholtz doctrine of “energy” became a key feature of Anglo-American geopolitics, from the British launching of Middle East “oil politics” at the beginning of the 20th century, to the orchestration of the so-called “energy crisis” of 1973-74, and, not least of all, the present march toward a new Middle East war. This is not to say that “energy” per se (or “oil supplies”) has anything really significant to do with the present war drive. Rather, the reasons, why people permit themselves to be manipulated into tolerating actions leading to perpetual war and a new “dark age,” are inseparably connected to those axiomatic flaws in thinking, that underlie popular belief in the cult doctrine of “energy.”

The common origins of the “energy” doctrine and utopian geopolitics go much further back than the launching of the modern energy cult itself, by Helmholtz, Kelvin et al. From the standpoint of economics, the energy doctrine represented nothing but a rewarming, under “scientific” guise, of old feudalist, and specifically physiocratic doctrines of supposedly fixed “natural resources,” ignoring the function of the human mind in discovering and realizing new physical principles. On the other hand, anyone who has thought through what Lyn and others have written on Gauss’ early work concerning the “Fundamental Theorem of Algebra,” should immediately recognize, in the so-called “First and Second Laws of Thermodynamics” exactly the same essential fallacy, that Gauss refuted in his 1799 attack on the “utopian” mathematics of Euler and Lagrange. Not accidentally, the Euler-Lagrange doctrine of “analytical mechanics” created the mathematical foundation for the Helmholtz-Kelvin energy doctrine. Conversely, the manner in which Gauss generates the algebraic “powers,” in the cited 1799 work, by principles lying entirely outside the mathematics of Euler and Lagrange, is characteristic of the way Man acts as an instrument of the anti-entropic development of the Universe.

On one level, the fallacy of the “First and Second Laws of Thermodynamics” is simply this: these laws have never been demonstrated to be properties of the real Universe, but only properties of certain closed mathematical-deductive systems, which ignorant or malicious physicists {claim} to represent the real Universe, but which manifestly do not. On this level, the fraud is identical to that of so-called economists, who claim to be able to deduce theorems about the real economy, from supposed self-evident properties of “money.” In fact, the elementary error, revealed in the very title of Newton’s famous “Philosophiae naturalis principia mathematica” (“Mathematical Principles of Natural Philosophy”), finds itself reproduced, countless times, in textbooks dealing with non-existent “Financial Principles of Economics.”

Contrary to popular academic belief, there are no actual experiments establishing the validity of the “First and Second Laws of Thermodynamics” as {universal} physical principles. To the extent those “laws” have a certain empirical correlate at all, they are both circumscribed by a purely {negative} principle, identified already by Leibniz long before the Kelvin-Helmholtz gang came along: the impossibility of a so-called “perpetuum mobile” or “perpetual motion machine” — a hypothetical subsystem of the Universe, able to generate a net surplus of power in the course of a closed cycle, in which the system is supposed to return to its exact original state, without any other net change in the surrounding Universe.

Just as in the case of so-called “impossible” or “imaginary” numbers, the source of the supposed “impossibility” involved is not a limitation of the real physical universe. The limitation is located rather in the notion of a “machine,” as a system describable by the “utopian” Euler-Lagrange form of analytical mechanics. To put it another way: To the extent a physical system is either chosen or forced to mimic the characteristics of a “machine” in the indicated sense, it will appear to obey the First and Second Laws of Thermodynamics. But the Universe as a whole is not a machine; the Universe not only {never} returns to an earlier state, but its successive states are strictly {incomparable} with each other from a formal-mathematical standpoint. Thus, the extrapolation of the so-called “First and Second Laws” to the Universe as a whole constitutes the crudest, most elementary sort of scientific error.

If “Universe” refers to the most generalized form of Man’s action upon Nature — no other Universe could be known to us! — then the “state of the Universe” changes fundamentally with each discovery, by some human mind, of a new universal physical principle (power). A formal-mathematical system, which (to a first, “engineering” approximation) may have more or less adequately described Man’s physical-economic activity up to that point, now breaks down, as technologies, based upon the new principle, transform the physical economy to the effect of increasing the relative potential population-density of the human species beyond any a priori “limits.”

The very fact of the successful increase in human population potential, by some three orders of magnitude over documented history and prehistory, attests to the existence of a self-developing “power,” lying entirely outside the domain of visible or visible-like objects, but commanding the visible Universe to an increasing extent.

This brings us back to the fundamental flaw of Aristotle’s {energeia}.

Utopianism and the Enlightenment

Before the modern cult of energy could be created, Aristotle had first to be reincarnated in the so-called “Enlightenment” of Paolo Sarpi et al., as a crucial component of the Venetian operation to destroy the influence of the Renaissance and the nation-state principle, and to plunge Europe into decades of religious war.

Sarpi’s “Enlightenment” based itself essentially on Aristotle, but with some differences that are relevant to the mindset of the Utopians to this day. The quarrel between the Enlightenment ideologues and Aristotle was not essentially a matter of substance. From their standpoint, Aristotle was excessively cautious and old-fashioned, wrapping his conclusions in endless distinctions and qualifications. Furthermore, Aristotle felt obliged to at least acknowledge the existence of opposing views; while Locke, Descartes et al. went for a “clean break,” blatantly ignoring the entire preceding history of philosophy and science, and promoting the crudest, “post-modernist” sort of reductionism.

In this way, the creation of the modern cult of energy out of Aristotle’s {energeia}, represents just one more case of “putting lipstick on a pig.”

An Incredible Discovery Of Archimedes

by Jeremy Batterson

Archimedes’ discovery of the method of determination of the volume of a sphere was a discovery of such beauty and with such astonishing implications, that Archimedes, before his death, instructed that it be engraved upon his tombstone. And yet almost no one, in our day, has ever worked through its proof, although it stands as a precursor of Leibniz’s later idea of the integral, as well as, it seems to me, hinting at the existence of the unseen domain which Gauss and others would later investigate. I had thought, myself, that this proof must have involved enormously complicated calculations, when, in fact, it is very easily accessible to anyone who desires to do a bit of mental work. Once having worked it through, we realize how totally laughable it is that anyone who has EVER studied geometry has not worked through this and other problems of the ancient Greeks. It is so ridiculous that it would be as if you were to go to a modern university to study the subject of economics and not even study LaRouche’s works. What would YOU think about such a laughable thing?

Beginning with a sphere, Archimedes circumscribed it with a cylinder whose height was equal to the diameter of the sphere it circumscribed, but whose diameter was twice the diameter of this sphere, with the two solids based on a common center, namely, the center of the base of the cylinder. Finally, at this same common center, he placed the apex of a cone, whose height was equal to that of the sphere and cylinder, and with base equal in diameter to that of the cylinder. Thus, all three solids had the same height and the same common central axis, with the cylinder having a constant diameter all up and down this common axis, but the sphere and cone having constantly CHANGING diameters along its length. To draw a cross section of this construction, which will be needed for our further elaboration, draw a circle, and denote two opposite poles of this circle as “A” and “B”, so that line AB is the diameter D of this circle. Next, draw a rectangle whose shorter side is D and whose longer side is 2D, such that the circle is exactly centered within it, with point A lying on the center of one of the longer sides of the rectangle. Now, from point A, draw the two lines of maximum possible length from A to the opposite side of the rectangle, which will cause these lines to terminate at the far corners of the rectangle, points D and E (letters naming corners, not to be confused with the diameter D), producing a triangle ADE, which is half of a square. Let the corner of the rectangle above A be point C and the corner below A be point F. This diagram represents a cross section of the sphere (circle), cone (triangle) and cylinder (rectangle).

And now, Archimedes did something surprising! (Remember it was Archimedes who said: “Give me a fulcrum large enough and I will move the earth!”) If you ever played on a see-saw, as a kid, you may remember the principle of the fulcrum: A far lighter weight can lift a heavier weight, across a fulcrum, if the length of the balancing board between the two weights is longer on the side of the lighter weight. Thus, a toddler can hold his heavier teenage sibling up in the air this way, by simply placing the fulcrum far away from his end of the balancing board, and much closer to the end at which his sibling is sitting. Think of the fulcrum as being the “sun”, and the distance of balancing board between it and the end of the shorter length of board concerned as being the “perihelion” of an orbit, with the longer distance being the “aphelion.” The principle is that the heavier weight, of weight X times that of the lighter weight, equally balances the lighter weight when its distance from the fulcrum is 1/X the distance of the lighter weight from the fulcrum. Thus, for example, were the heavier weight twice the weight of the lighter, it would have to be 1/2 the distance from the fulcrum as this lighter. Or put otherwise, the product of the heavier weight and its distance from the fulcrum must equal the product of the lighter weight with its distance from the fulcrum, if they are to balance out. (X times 1/X = 1/X times X.) Behold, now, how Archimedes used the principle of the fulcrum in his discovery:

First, imagine the cut, line M2M, in our diagram, passing horizontally down the exact center of the construction, from the center of DC to the center of EF. This line passes exactly through the center O of the sphere, and also intersects both the sphere and cone at two points M3 and M4. Line OM3 is the radius of both the cone and the sphere at this particular point and is also the radius R of the sphere. If R is one, then the AREAS of the two corresponding circular slices of the cone and sphere will together equal 2π. The CYLINDRICAL cut, however, creates a circle of radius 2R, and hence, an area of 4π. Archimedes now asks: if the SUM of areas of the conical and spherical cuts is 2π, thus exactly HALF the area of the cylindrical cut, and if we treat the relative areas as WEIGHTS, where will these differing weights have to be to balance each other out? Leaving the cylindrical cut where it is, at distance R from A, he places a fulcrum at point F, and moves the areas of the conical and spherical cuts TWICE R, or D, to the other side of the fulcrum, to point G!

Now, project from any ARBITRARY point X along axis AB a perpendicular which extends upwards to line DC, intersecting this line at point X2. This line intersects the sphere at Xs and the cone at Xc. We know, from Jonathan Tennenbaum’s pedagogical on Archytas, that the line XXs, which is the radius of the circular cut of the sphere at this point, is the geometric MEAN between line XA and line XB. Thus, it follows that length XA/XXs = XXs/XB. Now, imagine point X, on the axis AB, as it travels from A to B. As its distance from A widens, the cylinder’s radius XX2, and, hence, corresponding WEIGHT of the cylindrical cut, remains constant, since all cylindrical cuts in the construction have the same radius. However, its DISTANCE XA FROM A is increasing, so that the weight needed to balance it out must either increase, or pass further and further away from the fulcrum. Let us now look more closely, then, at the SUM of the areas of the spherical and conical cuts, the counterbalancing weight concerned:

Since the cone in the construction has as its cross section an isosceles right triangle, it follows that, for any conical cut, the corresponding vertical radius (the line XXc in our diagram) will always equal this cut’s corresponding distance XA from A, and, hence, its lateral distance from F. Henceforth, let us call the line XA, which is the same as the radius of the conical cut, “MINOR.” Similarly, we will call the line XB “MAJOR” and the line XXs the geometric “MEAN” of these two extremes. (Don’t be confused by the fact that MINOR becomes longer than MAJOR once X crosses point O, but, rather, think of it as being the first of the two extremes. We could also call the two “origin” and “destination,” for example.) The sum of proportional areas of the conical and spherical cuts will thus be, respectively, (MINOR times MINOR) + (MAJOR times MINOR), the spherical cut’s proportional area being such because MEAN times MEAN will always equal MAJOR times MINOR. Since the two proportional areas have a common factor, namely, MINOR, and since the sum of lines MINOR and MAJOR is the DIAMETER D of the sphere, the sum of the proportional AREAS of the conical and spherical cut must, thus, always be D times MINOR. Meanwhile, the proportional area of the cylindrical cuts, as we noted, remains D times D, since D is the radius of all cylindrical cuts.

As we recall, for the two weights to balance each other, the weight of the cylindrical cut times its distance from A (hence its lateral distance from F) must equal the sum of weights of the conical and spherical cuts times their distance from A. So, where does any particular pair of conical and spherical cuts balance out their corresponding cylindrical cut, assuming that we leave the cylindrical cut in its original position of distance MINOR from A? We know that DD times MINOR must equal some UNKNOWN DISTANCE times (D times MINOR). What is that unknown distance? Indeed, it is D, since we can clearly see that DD times MINOR = D times (D times MINOR)! And this must be true for ALL cases, since the sum of proportional areas of the spherical and conical cuts is ALWAYS (D times MINOR), as we showed above! Thus, FOR ALL CASES, THE SUM OF WEIGHTS OF THE CONICAL AND SPHERICAL CUTS WILL BALANCE OUT THEIR CORRESPONDING CYLINDRICAL CUT’S WEIGHT AT A DISTANCE D FROM THE FULCRUM!!
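For readers who want to check this balance with numbers in hand, here is a minimal sketch in Python (the variable names and sample positions are mine, with lengths normalized so that the sphere’s radius R is 1 and the diameter D is 2). For every position of the cut, the moment of the cylindrical cut about the fulcrum equals the moment of the conical and spherical cuts hung together at distance D:

    import math

    D = 2.0  # diameter of the sphere; the cylindrical cuts have constant radius D
    for minor in [0.1, 0.5, 1.0, 1.3, 1.9]:   # arbitrary distances XA of the cut from A
        major = D - minor                     # the remaining segment XB
        mean = math.sqrt(minor * major)       # radius of the spherical cut: MEAN times MEAN = MAJOR times MINOR
        cone = math.pi * minor ** 2           # area of the conical cut (its radius equals MINOR)
        sphere = math.pi * mean ** 2          # area of the spherical cut
        cylinder = math.pi * D ** 2           # area of the cylindrical cut
        # law of the lever: weight times lever-arm on each side of the fulcrum
        print(cylinder * minor, (cone + sphere) * D)

Each line prints a pair of equal moments, which is just the capitalized conclusion above, restated numerically.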

Now, since this is true, it follows that, were we to make all of the infinitely many possible cuts through the construction, and thus encompass its entire VOLUME, we would end up with the cylinder remaining exactly in its original position, with all its weight focused at its center, namely the point directly beneath the center O of the circle, or point M on our diagram. This point is at R, or (1/2)D, distance from the fulcrum. Meanwhile, the entire volume of both the cone and sphere would be squashed together into a PLANE, a circle balanced at point G, which is at distance D from the fulcrum. This balancing ratio of D:(1/2)D tells us that the weight of the SUM OF VOLUMES of the cone and sphere, contained within this squashed-up plane, must be exactly HALF that of the cylinder! But we are trying to find the weight, and, hence, the VOLUME of the sphere. Archimedes already knew (from Eudoxus, I believe) that the volume of the cone was 1/3 that of the cylinder encompassing it. Thus, if the total volume of the cylinder were 1, then the volume of the cone would have to be 1/3, while the volume of the sphere would have to be that which, when added to 1/3, yields 1/2 of 1. Thus, the volume of the sphere must be 1/6 that of the cylinder.

Archimedes took this one step further, by noting that the cylinder which exactly encompasses the sphere would be of diameter D, instead of 2D, but have the same HEIGHT, and, hence, would have a volume 1/4 that of the larger cylinder he had used in his construction. Thus, the sphere would have a volume of FOUR times 1/6, which is 4/6 or 2/3 that of the cylinder which encompassed it. However, since the volume of a cylinder is the area of its base, times its height, and since the height of the cylinder encompassing the sphere is 2R, this cylinder’s volume would be (πR²)(2R). Since the sphere must be 2/3 of THIS volume, it is (4/3)πR³. Hence the famous solution for the volume of a sphere.
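As a cross-check on the volumes themselves, one can crudely “weigh” all three solids by summing a large number of thin cuts, in the spirit of the construction (a numerical sketch only, not Archimedes’ own rigorous argument; the step count is my own choice):

    import math

    D, n = 2.0, 100000                      # sphere diameter; number of thin cuts
    dx = D / n
    cyl = cone = sph = 0.0
    for i in range(n):
        x = (i + 0.5) * dx                  # position of the cut, measured from A
        cyl += math.pi * D ** 2 * dx        # cylindrical cut: constant radius D
        cone += math.pi * x ** 2 * dx       # conical cut: radius equals distance from apex A
        sph += math.pi * x * (D - x) * dx   # spherical cut: radius squared = MINOR times MAJOR
    print(cone + sph, cyl / 2)              # cone plus sphere together weigh half the big cylinder
    print(sph / cyl)                        # the sphere is 1/6 of the big cylinder
    R = D / 2
    print(sph, (4.0 / 3.0) * math.pi * R ** 3)   # and equals (4/3)πR³, as just derived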

Now, ask yourself this: Did Archimedes figure this out by “tinkering,” by playing around with different shapes, until the right arrangement popped out, “by magic,” or, rather, did he find this particular construction BECAUSE he was proceeding from a principle? For example, imagine the following possibility: Since he knew that MEAN times MEAN was equal to MAJOR times MINOR, might he have, then, asked which form would create the circumstance wherein its cut would equal MINOR times MINOR, and seen instantly that this must be an isosceles right triangle, hence a section of the particular cone he used in the construction? Similarly, might he have then asked which cylinder, when coupled with this arrangement, led to a certain lawful result? Or might it have been something similar to this, perhaps far more elaborated?

Now, is that beautiful, or what?

The Crab Nebula And The Complex Domain

By Jonathan Tennenbaum

It is a fair guess, that the Crab Nebula will play a comparable role, in a coming series of revolutions in astrophysics, to that of Mars’ anomalous motion in Johannes Kepler’s launching of modern astronomy four centuries ago. The anomalies of the Crab Nebula confront us directly with the issue of the interrelationship between the Sensorium of perception and the manifold of efficient physical principles, that Lyn addressed in his recent paper on “Visualizing the Complex Domain”. Ironically, any person who has mastered Lyn’s paper, will be incomparably better qualified, to grasp the fundamental question posed by the Crab Nebula, than 99% of today’s professional astrophysicists!

The present state of astronomy and astrophysics exemplifies the way empiricism has killed science. The Platonic method of hypothesis, upon which Johannes Kepler founded the science of astrophysics, has been suppressed. Instead, “scientific method” is fraudulently equated with the practice of interpreting and “explaining” data according to the supposedly authoritative “laws of (textbook) physics”. Thereby, the contemporary astrophysicist degrades himself to the level of an animal, that interprets sense perceptions according to blind instinct. The equivalent of animal instinct, which controls the afflicted scientist’s mind in this case, is adherence to “accepted norms of academic performance”, engrained in the student through drilling in the methods for “getting the right answer”, and enforced among professionals by fear of being ostracized from the “scientific community”.

Moreover, astrophysics has been perverted by the monstrous concoction known as “modern cosmology”, with the imposition of arbitrary, ivory-tower doctrines, such as the “Big Bang”, that have no basis in the actual astronomical evidence. Continuing in the line of the bogus, entropic “theories” of Laplace and Kant, the “Big Bang” and related fairy-tales work as a cover for a veritable inquisition against original scientific inquiry, and for a suppression of scientific evidence, that rivals the Dark Ages of Aristotle and Ptolemy.

The resulting, “official” doctrine of the Universe, admits no true principles, no generation of ideas, and therefore no possibility for Man to transform the Universe. It is expressly designed to make people feel tiny, impotent and morally indifferent, as Kant recommended in his monstrous 1755 treatise, “General Natural History and Theory of the Heavens, or Attempt to Account for the Nature and Mechanical Origin of the Entire Universe according to Newtonian Principles.” By imposing a false, Euclidean-Cartesian projection of reality in terms of a supposed primacy of scalar extension, galaxies and other astronomical objects — which in fact are located at “virtually zero distance” from us in the causal ordering of the Universe — were made to appear “hopelessly far away” and inaccessible to Man. By the same token, human Reason and Man’s own activity in the Universe, were made to appear as if totally irrelevant, and Man himself to shrink to almost nothing, relative to the inhuman vastness of thousands, millions or billions of light years, that supposedly characterizes the “objective” Universe around us.

But, the situation is overripe for revolution. Ironically, while the process of creative hypothesis-formation in astrophysics has all but collapsed, major advances have occurred in the technology of astronomical observation, exemplified by the development of X-ray and gamma-ray telescopes; the stationing of telescopes and other astronomical instruments in orbit; and the advent of “very long baseline” interferometry, creating the effect (“synthetic aperture”) of a radio telescope the size of the Earth’s diameter. These technological advances have led to an unprecedented proliferation in the number and variety of astrophysical anomalies, just waiting for a new Johannes Kepler to appear on the scene, and to liberate science from the chains of the Enlightenment.

Will such a new Kepler emerge from the ranks of a victorious LaRouche Youth Movement? Many Keplers, we should hope! In the meantime, the following introduction to the Crab Nebula should provide a foretaste of the delights in front of us.

A first look at the Crab Nebula

The Crab Nebula (M1 in the classification of Messier) is located in the night sky in the constellation Taurus. While not directly visible to the naked eye, it appears in low-power telescopes as an approximately elliptically-shaped, luminous cloud, whose long axis subtends an angle of 5 minutes of arc “on the celestial sphere”. The apparent minor axis of the Crab is about 3 minutes of arc across (1).

The Crab Nebula was first noted around 1731, as an oval-shaped nebulous patch in the sky. By the middle of the 19th century, with the rapid improvement of telescopic instruments, a complex of irregular filaments became visible within the nebula, inspiring the name “the Crab”. In the course of the 20th century, the region of the Crab Nebula was found to be a powerful source of radiation, spanning practically the entire known electromagnetic spectrum — from radio waves, microwave and infrared radiation, across the visible spectrum, all the way through the ultraviolet and X-ray ranges, into the domain of ultra-short-wavelength gamma-rays (“cosmic rays”).

In the meantime, the development of astrophysical instrumentation has made it possible to register the emissions from the Crab Nebula over a large section of the above-mentioned “registers” of electromagnetic radiation, mapping the intensity (and sometimes polarization and spectral characteristics) of the radiation received, in the given wavelength interval, as a function of direction on the celestial sphere. The result is a growing array of images, all covering the same angular area on the celestial sphere, but differing very greatly from one electromagnetic “register” to the other, and also changing in time in a most extraordinary fashion.

Figure 1 is a recent, very beautiful photograph of the Crab in the visible wavelength-range, taken by the European Southern Observatory.

Figure 1

Figure 2 shows a set of four images, made with visible, infrared, radio frequency and X-ray radiation — all totally different! (Note: the images are not all on the same scale.)

Figure 2

Seeing with your mind, not just with your eyes

The contrast between the images immediately raises the question: If the “real” object is not any one of those images — projected, as it were, on the extended Sensorium of astronomical instruments — then what kind of object is it, that we are observing? The question prompts a brief aside, before getting on with the Crab.

Take a very simple illustration from everyday life. You report to someone, “I saw Jonathan today”. This statement could mean different things, and be truthful or untruthful, depending on how you intended the verb, “saw”. If “to see” meant nothing more than an act of visual perception, then the statement could not possibly be true; since for sure you perceived only Jonathan’s face, not the actual person! For, a human personality is not a visible object! A human personality — a mind — can be recognized and known only to the cognitive processes of another mind. The report, “I saw Jonathan”, could only be truthful, if the verb, “saw”, subsumed a cognitive process in your mind, by which you identified and conceptualized a specific human personality, lying “behind” the image of the face and other effects, your sense perception reported to you.

What then is the character of the object of astronomical observation? What does today’s astronomer have in mind, when he says “I have been observing the Crab Nebula”? Does he simply mean, that he has pointed his telescope or other instruments at a certain luminous smudge in the heavens, and registered certain signals? No doubt, the astronomer means more than that. He will claim he was observing a “real object out there”. But, what sort of object does he have in mind? How could he demonstrate that “it” actually exists, as an efficient entity in the Universe, in the way he thinks it does?

From the standpoint of naive sense-certainty, the modern astrophysicist does not “observe” the Crab Nebula in any direct sense (the luminous smudge is anyway not directly visible to the naked eye!). What he observes, is something happening to certain physical systems, called scientific instruments, which the astronomers have developed as “generalized sense organs”. The action, the “happening”, is occurring at the location of the instruments, not at the putative location of the Crab Nebula, many light years away! Sometimes the events the astronomer studies, are nothing but certain harmonic correlations of phase among signals generated in a network of instruments. Yet he ascribes these events to that remote, unseen object: the Crab Nebula!

All this underlines the fact, that there is no simple, self-evident relationship between the processes of perception or “observation”, and “the object itself”. That relationship depends exclusively on the cognitive powers of the human mind, to adduce the existence of thought-objects, in the form of principles lying beyond the reach of mere sense perception, and to demonstrate their efficiency over the phenomena of the Sensorium.

The Crab in the Sensorium of “multiwavelength astronomy”

Now return to the images themselves. Figure 3 shows photographic images of the Crab Nebula, taken in blue and red wavelengths of visible light. Note the complex fabric of filaments that appears in red light, but is virtually absent in blue. That outer, filamentary “cocoon” of the Crab displays characteristic, sharply defined spectral lines, differentiating it from an inner core, whose emission is continuously distributed over the entire electromagnetic spectrum.

Figure 3

Figure 4 is a closeup of the filaments, taken with the Hubble orbital telescope in visible wavelengths. The highly-organized morphology of the filaments is astonishingly reminiscent of certain types of living tissue.

Figure 4

Figure 5 shows the Crab photographed in the visible light range, but with filters selecting light with different angles of polarization. The striking difference in overall strength between the two, shows that the whole, gigantic system possesses a strong axis of polarization. Presumably, the Nebula as a whole is powerfully magnetized.

Figure 5

Most extraordinary, Figure 6 shows an image produced in the shortest of the four wavelength-ranges, X-rays, by the orbiting X-ray telescope Chandra during 1999-2000. The X-ray-emitting region appears to coincide with the core of the Crab Nebula. It is organized around a clearly-defined axis, which coincides roughly with the long axis of the Crab as a whole, as well as the axis of polarization; it has a toroidal structure with smaller concentric rings and a variety of rapidly-changing features.

Figure 6

Figure 7 shows a superposition of photos taken in the visible and X-ray wavelengths, from which you can see the location and proportions of the X-ray-emitting “core” relative to the Crab Nebula as a whole (here the outer filaments do not show). This can also be seen in Figure 1, where the outer filaments come out strongly, and the core is faintly visible.

Figure 7

Located on the axis of symmetry of the X-ray torus, in the middle of the entire structure, lies a highly anomalous object, key to the entire Crab Nebula: a star that emits repeated, powerful pulses of electromagnetic radiation, in almost the entire spectrum from radio to gamma-rays, at a precise rate of 30 pulses per second! The stills and time-lapse movie of this “Crab pulsar” show huge jets of magnetized, X-ray-emitting plasma, flowing outward from what are presumably the polar regions of the star, along the axis of the torus in both directions, then curving off on both sides as if to form an “S”-shape, and perhaps continuing out into the outer tissue of filaments.

Again, the harmonic features of the X-ray core region, plus its striking left-right dissymmetry, are coherent with Leonardo’s and Pasteur’s observations on the characteristic morphology of living processes.

The reader should now view the time-lapse movie of the X-ray region of the Crab, which can be downloaded from the website: http://chandra.harvard.edu/photo/2002/0052/

The movie referred to is the middle one of the three on the cited page, entitled “Chandra timelapse movie”. It is made from 7 successive images of the Crab, taken at approximately 21-day intervals by the Chandra orbiting X-ray telescope between November 2000 and April 2001. Figure 8 shows the sequence of stills from which the movie was made. (Note: The sequence is repeated several times, in a loop, to make it longer. The resulting impression of periodic pulsation, is an artifact of the editing, and has nothing to do with the much shorter, 33-millisecond pulsing of the central star.)

Figure 8

This time-lapse movie displays most strikingly, what has long been discussed as a central, anomalous feature of the Crab: processes in different locations of this gigantic object, are evidently synchronized with each other, in a fashion that can hardly be explained on the basis of a point-to-point propagation of “signals” or other influences between those locations. Note the coherent, synchronous changes in prominent features of the X-ray-emitting core, including the evolution of “hot spots” on the inner X-ray ring, the concentric outward-moving shock waves, as well as synchronous changes on the larger, concentric “torus”.

These changes are occurring at what, for a system of astronomical dimensions, is an extraordinarily rapid rate. From the movie, in fact, there is nothing to suggest to us, that we are looking at an object possibly many light-years across. There is nothing in the pattern of changes of the Crab as a whole, for example, that suggests that the Crab experiences any significant limitation connected with the finite velocity of propagation of light.

This raises the question, how “big” the Crab Nebula “really is”. Interestingly, it is the growth process of the Crab that provides the chief means for estimating its approximate scale.

A growing anomaly

Figure 9a shows an image of the Crab, taken in 1973, and Figure 9b shows an image of the Crab taken in 2000. Close comparison of the superimposed images, particularly the details of the filamentary structure, suggests that the Crab Nebula — or at least the outer, “cocoon”-like shell of filaments — is constantly expanding! Systematic comparison of photographs, taken over the last 80 years, shows that the outer filaments of the Crab are expanding radially from the center of the Crab, at an overall average rate of 0.1 seconds of arc per year as seen from the Earth. Accordingly, the apparent angular size of the Crab, as seen on the celestial sphere, grows by twice that amount, i.e. 0.2 seconds of arc per year.

Figure 9a

Figure 9b

A crucial additional piece of evidence, is the peculiar spectrum of the visible light received from the Crab, sections of which are shown in Figure 10 and Figure 11. Alongside the continuous spectrum emitted from its central region, the overall spectrum of the Crab contains an array of discrete emission lines, originating mostly in the surrounding filaments, at wavelengths that are characteristic of certain known chemical elements.

Figure 10

Figure 11

There is, however, a very striking difference from the Earth-bound spectra. The difference shows up most clearly in Figure 11, where the spectrum of light from the Crab Nebula is “scanned” at varying positions along its major axis. The strongest set of lines — a group of three lines characteristic of the element oxygen — appears “double”, split into two sets which form an expanding, “necklace”-like shape as the scan moves toward the middle of the axis. One set is “down-shifted” from the normal positions toward the longer wavelengths; the other “up-shifted” toward shorter wavelengths. Note that the gap is biggest in the middle of the Crab Nebula, while the two sets of lines approach each other toward the ends of the axis.

What is the Crab telling us, with this bizarre “necklace” of spectral lines?

So far I have mainly just described the observations, steering clear of any elaborate interpretations of the observed, anomalous characteristics of the Crab Nebula in terms of the “standard textbook knowledge” of physics. We are now approaching a point, where the application of “standard theory” can have a useful result, albeit of a negative sort: it leads us into what, for “standard theory” itself, is an insoluble paradox.

The splitting of the spectral lines from the Crab has a simple interpretation in terms of the known principles of propagation of light. Assuming the whole ellipsoidal “shell” of the Crab is indeed expanding — as the angular growth of its projection on the celestial sphere suggests — the light coming from the portions of the expanding “shell” that are moving toward us, should be upshifted in frequency (i.e. toward shorter wavelengths); while the light from portions of the shell moving away from us will be shifted toward lower frequencies and longer wavelengths. Based on presently-known principles, the wavelengths of the emitted radiation would be expected to decrease or increase, respectively, by a proportional amount equal to the ratio of the velocity of expansion of the shell, to the rate of propagation of light.

Now, the actually observed magnitude of the upshift and downshift in the lines is on the order of 0.4% of the wavelengths involved. From the above reasoning we would have to conclude, that the rate of expansion of the Crab’s shell were also of the order of 0.4 percent (or about 1/230th) of the rate of propagation of light — corresponding, in linear-scalar terms, to a velocity of 0.4% of 300,000 kilometers per second, i.e. about 1,300 kilometers per second. (This assumes, of course, that the characteristics and rate of propagation of light are the same in the vicinity of the Crab, as on Earth.)

But when we compare this estimate for the rate of radial expansion, derived from the magnitude of the spectral shift, with the apparent size and rate of expansion of the Crab as seen from the Earth, we come to the conclusion, that the Crab Nebula must be enormously large — many light-years in diameter!

Recall, that the Crab’s rate of radial expansion, as observed from the Earth, amounts to 0.1 second of arc per year. We just concluded, however, that light propagates 230 times faster than the radial motion of the outer filaments of the Crab. That would mean that a light wave, propagating along the Crab’s major axis, would traverse in one year a segment 230 times longer than the yearly increase in distance from the center. The angular size of that segment, as seen from the Earth, would therefore be about 230 x 0.1 seconds of arc = 23 seconds of arc, or about 0.38 minutes of arc on the celestial sphere. As I mentioned earlier, the apparent major axis of the Crab corresponds to roughly 5 minutes of arc on the celestial sphere. Our conclusion: a light wave would take 5/0.38 years, or about 13 years, to propagate from one end of the Crab to the other!

More refined estimates, based on the same method, yield something closer to 10 light-years for the major axis of the Crab. For such a length to subtend an angle of 5 minutes of arc, as seen from the Earth, the distance of the Crab Nebula from the Earth would have to be about 6300 light years. At least, this is what simple geometry would lead us to conclude.
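Since the whole estimate rests on simple arithmetic, the reader can retrace it, and vary the assumptions, along the following lines (a Python sketch using the round numbers quoted above; none of these inputs are refined observational values):

    import math

    c = 300_000.0                     # assumed rate of propagation of light, km/s
    shift = 0.004                     # fractional splitting of the oxygen lines (about 0.4%)
    print(shift * c)                  # radial expansion of the shell: roughly 1,200-1,300 km/s

    ratio = 230                       # light taken as about 230 times faster than the filaments
    arc_per_year = 0.1                # radial expansion seen from Earth, arc-seconds per year
    light_arc = ratio * arc_per_year  # light crosses about 23 arc-seconds per year
    major_axis = 5 * 60               # apparent major axis: 5 arc-minutes = 300 arc-seconds
    print(major_axis / light_arc)     # about 13 years for light to cross the Crab

    size_ly = 10.0                            # the more refined figure for the major axis
    angle = (5.0 / 60.0) * math.pi / 180.0    # 5 arc-minutes, converted to radians
    print(size_ly / angle)                    # distance on the order of 6,000-7,000 light-years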

The shadow of a physical principle

The above estimates serve to clinch the paradox of the Crab’s coherent behavior, which I already pointed out in discussing the time-lapse movie.

If, on the one hand, we assume that the Crab is really such an immense object as the above argument implies, then how is the Crab able to maintain the synchronicity and coherence of the rapidly changing processes occurring in different regions, situated many light years apart? That coherence could only be due to a “something” that were acting isochronically, at all loci of the Crab simultaneously!

If, on the other hand, the assumptions underlying our estimate of the size of the Crab are invalid, then “something” is acting to the apparent effect, of drastically changing the assumed characteristics of the emission and propagation of light, and the assumptions of geometry, upon which our estimate of the scale-dimension of the Crab Nebula was based.

Either way, the anomaly of isochronic action “outside” the domain of the “standard physics” accounts of “chains of cause and effect” cannot be made to disappear. Different attempts at interpretation merely change the location and apparent form of the anomaly — just as different methods of mapping a curved surface onto a flat plane, “blow up” in different ways.

It is by focussing in on this sort of irreducible paradox, that we become able to go beyond the Sensorium and any formalistic interpretation of the Sensorium, to conceptualize the real object that has generated the anomaly.

A number of other, gross anomalies of the Crab Nebula should be brought into the picture, that share the same underlying character. (A more rigorous treatment, to be developed, would replace the scalar measures, employed in the following brief exposition, by appropriate geometrical magnitudes for a corrected, anti-Euclidean representation of the Sensorium.)

Chief among the anomalies to be mentioned, is the circumstance, that the Crab Nebula is a powerful emitter of cosmic rays — in fact, one of the most powerful ones known — in the form of photons of ultra-short-wavelength light (gamma rays) having wavelengths trillions of times shorter than those of visible light, and thereby quantum energies many orders of magnitude larger than those released in all known types of nuclear reactions, including presently known forms of “matter-antimatter” reactions.

In fact, the entire spectrum of radiation emitted from the Crab Nebula is drastically “upshifted” relative to that of our Sun. While our Sun has most of its output in the visible and near-visible range, most of the gross power output of the Crab is in X-rays, with a substantial part extending into the gamma-ray region. Combining the above estimate of the scale-dimensions of the “Crab” and of its distance from the Earth, with the intensity (brightness) of the radiation received in the vicinity of the Earth, one arrives at the conclusion, that the overall radiation output of the Crab Nebula must be approximately one hundred thousand times that of our Sun, but with a greatly “upshifted” spectrum.
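That conclusion is an application of the inverse-square law: the Crab’s total output must be whatever, spread over a sphere with the Crab’s distance as radius, reproduces the brightness registered here. Run “backwards” with the figures already quoted, the reasoning looks as follows (the solar luminosity and the light-year are standard constants; the resulting flux is my own back-calculation, not a measured value):

    import math

    L_sun = 3.828e26                     # solar luminosity, watts
    ly = 9.461e15                        # one light-year, meters
    d = 6300 * ly                        # adopted distance of the Crab Nebula
    L_crab = 1.0e5 * L_sun               # the output concluded in the text
    flux = L_crab / (4 * math.pi * d ** 2)
    print(flux)                          # about 9e-10 watts per square meter at the Earth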

Recently, additional anomalies have come to light, which are coherent with the same “curvature”. It was discovered last year, that the periodic electromagnetic pulses, attributed to the central star (pulsar) of the Crab, contain high-power subpulses lasting only about 2 billionths of a second. On the assumption, that the known characteristics of light emission apply to the Crab, the effective sources of such nanosecond subpulses could be no larger than about 60 centimeters across — the distance light travels in 2 nanoseconds! But to produce a signal of the observed strength at the Earth, 6,000 light-years away, the emitting region would have to achieve a power-density corresponding to a billion times that generated in the core of an H-bomb detonation! Alternatively, the effect of a tiny source region could be achieved through isochronic, coherent emission from a larger region of the Crab, according to the principle of a laser.
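The 60-centimeter figure is nothing but the distance light travels during one subpulse, as a one-line check shows (using the standard value for the rate of propagation of light):

    c = 299_792_458.0       # rate of propagation of light, meters per second
    print(c * 2e-9)         # about 0.6 meters: the largest conventional source size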

In both cases, “something” is acting to “shape” the Crab Nebula processes to the effect, of shifting its activity toward the higher energy-flux-density registers of coherent electromagnetic action. Let us look more carefully at this aspect.

Conical-spiral functions

All evidence points to the role of the Crab’s pulsating star as the “motor” and “organizer” of the entire Crab Nebula, and to the likelihood, that this star is a rapidly rotating body, making one revolution every 33 milliseconds (30 cycles a second), which is the period of the star’s apparent pulsation. The axis of rotation of that central star would coincide with the axis of symmetry of the toroidal structure, revealed in the Chandra X-ray images, which in turn coincides roughly with the major axis of the ellipsoidal form of the Crab as a whole. From a very slight, but measurable, slowing-down of the observed rate of pulsation it is surmised, that the Crab is constantly converting a portion of the rotational action of the pulsar into various forms of electromagnetic radiation, and whatever other forms of work might be going on.

Now since electromagnetic radiation, as it projects into a generalized Sensorium, also has the characteristics of rotational action, the form of the general effect we are looking at, is the transformation of low-frequency action (rotation of the star at 30 Hz) into high-frequency registers of action (X-ray radiation at 10^17 Hz, gamma rays at 10^26 Hz or more).

Aha! What we have, minimally, is a form of conical spiral action: not simple rotational action, but rotational action which constantly transforms itself to higher registers of rotational action.
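To fix the image: a conical spiral can be sketched as a rotation whose radius, standing here for the frequency register, is multiplied by a constant factor on every full cycle, so that equal angular steps carry the action upward through geometrically rising registers. A minimal parametrization (the doubling factor and the sampling are mine, purely for illustration):

    import math

    growth = 2.0                                # each full rotation doubles the register
    for deg in range(0, 721, 90):               # two full turns, sampled every 90 degrees
        theta = math.radians(deg)
        r = growth ** (theta / (2 * math.pi))   # self-similar spiral: radius doubles per cycle
        x, y, z = r * math.cos(theta), r * math.sin(theta), r   # the rotation, lifted onto a cone
        print(round(x, 3), round(y, 3), round(z, 3))

    # measured in doublings, the leap from 30 Hz rotation to 10^17 Hz X-rays:
    print(math.log2(1e17 / 30))                 # about 52 octaves of upshift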

Let’s not forget, however, that we are not dealing with a simple geometry. We have to locate the real object behind the images projected on the surface of our generalized Sensorium in different wavelengths — and the conical-spiral characteristics of action adduced so far — within the real Universe: the Universe of three interconnected, Vernadskian phase spaces of nonliving, living and cognitive species of physical principles.

The flux of high-frequency radiation in the core of the Crab is such, that condensed matter, of the familiar earthly sort, could not exist there. Instead, we have a highly polarized plasma being acted upon by the rapidly spinning, intensely magnetic star — a setup suited, we would presume, to actually generate, by polarized fusion or analogous sorts of processes, the kinds of organized forms of “matter” that would be a precondition for the further evolution of a solar system in the direction of Biosphere and Noosphere phases of development. The conical function of “upshift” of electromagnetic radiation in the Crab would thus be multiply-connected with a second conical function, expressing the generation of an evolving “Mendeleyev table” of “eigenstates of matter” within the core region of the Crab.

This suggests the notion of a manifold of multiply-connected conical action, as a necessary feature of the region of tangency between an anti-entropic intention guiding a physical process, and the phenomena generated by that process in a generalized, spherically-bounded Sensorium. Therefore, the Crab Nebula should not be conceived as an object in a Euclidean-Cartesian “three dimensional space”, but rather as a singularity in terms of that doubly-multiply-connected domain. (We can surely do better, but that is our first shot!)

Does the Crab Nebula have a “personality”?

The more we investigate the Crab, the more closely it resembles the kind of object that Lyndon LaRouche hypothesized, many years ago, as an early phase of development of our own solar system:

“Currently, our best knowledge is, that the Solar system began as a fast-spinning, youthfully exuberant solitary Sun in the universe at large. According to Kepler’s principles, this young Sun spun off some part of its material into a disc orbiting the Sun itself. If we assume polarized nuclear fusion occurring within that disk, then it were possible for polarized fusion, and, presumably, only polarized fusion, to have generated the observed periodic table of the Solar system. That fusion-generated material from the disk would have been “fractionally distilled” into approximately the Platonic orbits defined by Kepler.”

Granted, this view of the evolution of the solar system is totally at odds with the “official” account, both in nominal content and, most importantly, in spirit.

Pick up any astronomy book or research paper, and you will find, in ritual propitiation to established academic doctrine, the Crab Nebula constantly referred to as a “supernova remnant”. Note the implied, entropic misconception of the Universe expressed by that term. We are supposed to think of the Crab, not as a process evolving lawfully toward higher states of organization, but as a mere “remnant”, a “left-over” from an exploded star. In an aging, entropic Universe there would seem to be no room for the “youthful exuberance” of stars generating their own, brand-new planetary or analogous systems. Not surprisingly, none of the astrophysical specialists predicted anything like the features revealed in 1999-2000 by the Chandra X-ray telescope, despite the elaborate ivory-tower mathematical models they develop to “explain”, after the fact, earlier observations. In fact, Lyndon LaRouche has once again been shown to be on the mark, while the so-called experts were way off.

In reality the Crab Nebula displays all the characteristics of a happily evolving Keplerian system, including the driving, organizing role of its central singularity, in exact accordance with Kepler’s conception of our Sun. The Crab continues its exuberant development, and no Earthbound pessimist can do anything about it!

This brings us back to the question, of the nature of astrophysical “objects”. Is the Crab Nebula merely an “effect” or phenomenon of the overall laws of the Universe, like the attraction of magnets or the blue color of the sky? Or are we justified in associating with it a certain individual character or personality — a character that could be known only as a thought-object to the mind? Certainly, the quality of exuberant passion could pertain only to a Leibnizian monad, not to a mere physical effect.

Any rigorous exploration of this question should adopt Lyndon LaRouche’s suggestion, from some years ago, to organize experimental scientific inquiry around a “3×3” schema: We make three rows, one for each of the three Vernadskian sub-phase-spaces of the Universe, corresponding to the domain of ostensibly non-living processes, of living processes, and of the processes associated with the action of cognition in the Universe. Then, we make three columns, corresponding to the microphysical, macrophysical and astrophysical ranges of scale-lengths of experimental investigation. To establish the validity of any purported universal physical principle, we must demonstrate its efficiency vis-a-vis all nine of the 3×3 domains of experimental investigation — the latter understood, of course, not in the sense of “objective science”, but as domains subsumed by Man’s action upon the Universe.
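Schematically (the tabulation is mine, following the description just given), the schema runs as follows; a purported universal physical principle must demonstrate its efficiency in every one of the nine cells:

                 microphysical   macrophysical   astrophysical
  non-living         (1)             (2)             (3)
  living             (4)             (5)             (6)
  cognitive          (7)             (8)             (9)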

The most fascinating question, posed immediately by what we have said here, is the relevance of the Crab Nebula to the manifestations of the principle of life on the astrophysical scale.

Beyond that, do we not perceive a certain playful, taunting character to the anomalies the Crab Nebula seems to throw at us? The face seems far away, but the smile is very close.

A Note: Why Modern Mathematicians Can’t Understand Archytas

by Jonathan Tennenbaum

“As for me, I cherish mathematics only because I find there the traces of the Art of Invention in general, and it seems to me I have discovered, in the end, that Descartes himself did not yet penetrate into the mystery of this great science. I remember he once stated, that the excellence of his method, which appears only probable in terms of his physics, is proven in his geometry. But I must say, that it is precisely in Descartes’ geometry that I recognized the principal imperfection of his method… I claim that there is an entirely different method of geometrical analysis, than that of Vieta and Descartes (i.e. algebra), who did not go far enough, because the most important problems do not depend at all upon the equations, to which Descartes reduces his geometry.” (Leibniz, Letter to Princess Elisabeth, late 1678)

For example: the catenary, which requires {physical substance} for its generation, could not exist in the world of Descartes, Lagrange and Euler!

A look through recent, standard presentations of Archytas’ famous construction for doubling the cube demonstrates how far modern mathematics has fallen below the level of thinking that prevailed in Plato’s circles over 2300 years ago! Typical is a discussion of the doubling of the cube on a webpage authored by J.J. O’Connor and E.F. Robertson (footnote 1). Although the text includes some interesting quotes and references, when the authors get to Archytas’ actual construction, they shamelessly revert to the school-boy routine of “using coordinate geometry to check that Archytas is correct”. Imposing a Cartesian coordinate system, they write down algebraic equations in x,y,z for each of the three intersecting surfaces (cone, cylinder and torus), and combine the equations to show that the desired proportionalities “somehow come out”. Magic! Readers foolish enough to engage in this meaningless exercise will not only have learned less than nothing about Archytas’ actual discovery; worse, the cognitive processes of their minds will have been “turned off” altogether.

The stunning sophistication of Archytas’ synthetic-geometrical approach, when viewed in terms of standard accounts of ancient Greek mathematics, suffices to demonstrate, that those standard accounts are grossly inadequate, and that the actual physical conceptions underlying his work have been suppressed. In fact, most of the crucial original documents of Greek science have been lost or destroyed, while the living continuity of Greek science was broken off through the “dark age” imposed under the Roman Empire. As an included result of that process, the surviving version of Euclid’s famous “Elements” — a compendium whose axiomatic-deductive mode of presentation buries the essential ideas and historical process of development of Greek science — subsequently became, or was made into, the nearly exclusive source for classical Greek geometry, as well as the model for elementary mathematics education for many centuries. Among other things, Euclid’s “Elements” obfuscated the natural ordering of development, even in visual geometry, by beginning with {plane geometry} and the supposedly self-evident concepts of “point” and “straight line” as irreducible entities, proceeding only in the final chapters to the constructions of so-called solid geometry. Whereas, in fact, the first and most “elementary” visual geometry is not “flat” {plane} geometry at all, but rather {spherical} geometry — the form of geometry associated with astronomy as Man’s oldest science.

These and related circumstances, explain why the greatest scientific thinkers, from the Renaissance through to Kepler and Leibniz, directed much of their efforts to reconstituting the actual method and “soul” of classical Greek mathematics, which could at best only be read “between the lines” of Euclid and certain other, mostly fragmentary surviving texts, and for which the surviving dialogs of Plato were the single most important source.

Crucial to this process, was the Renaissance revival of the isoperimetric principle, of circular and spherical geometry, and the significance of the five regular solids. Exemplary, in one respect, was the way Pacioli and Leonardo Da Vinci’s “Divina Proportione” in effect “turned Euclid on his head” by emphasizing the primacy of Euclid’s famous Thirteenth Book. Kepler carried the polemic further, developing a first approximation to a true physical geometry from the standpoint of the crucial evidence of the regular solids. This led directly into Fermat’s, Pascal’s and Leibniz’s reworking of such items as Apollonius’ Treatise on conic sections, in the context of a growing focus on the conception of higher-order, multiply-connected manifolds, which evidently lay at the center of the discussion among Plato’s scientific collaborators. Thus, there is a direct line from Archytas and Apollonius, into the work of Gauss and Riemann.

From this standpoint I propose, for those eager to dig into the matter in some detail, the following observations on Archytas’ construction for the doubling of the cube. Although my observations are somewhat technical, and do not aim at a full representation of his discovery, they should help put us on a fruitful track, repairing some of the damage caused by modern misrepresentations.

I assume, in the following, that the reader already has some familiarity with Archytas’ construction, from previous discussions by Bruce Director and others, including the relationship between doubling the cube, and the general problem of constructing two “mean proportionals” between given lengths a and b. (footnote 2)

The Geometry of Physical Events

Note, firstly, that by deriving the solution by means of an intersection of a torus, cone and cylinder, Archytas situates the problem explicitly in the domain of multiply-connected, “polyphonic” circular action. Observe the emphasis on {verbal action}, reflected in the classical account of Archytas’ construction, by the geometer Eudemus:

“Let the two given lines be OA [= a] and b; it is required to construct two mean proportionals between a and b. Draw the circle OBA having OA as diameter where OA is the greater [of the two]; and inscribe OB [as a chord on the circle] of length b, and prolong it to meet at C the tangent to the circle at A. … Imagine a half-cylinder which rises perpendicularly on the semicircle OBA, and that on OA is raised a perpendicular semicircle standing on the [base] of the half-cylinder. When this semicircle is moved from A to B, the extremity O of the diameter remaining fixed, it will cut the cylindrical surface in making its movement and will trace on it a certain bold curve. [The latter motion generates a section of a torus–JT.] Then, if OA remains fixed, and if the triangle OCA pivots about OA with a movement opposite to that of the semicircle, it will produce a conical surface by means of the line OC, which, in the course of its movement, will meet the curve drawn on the cylinder at a particular point P….”

What a contrast between the indicated polyphonic conception of geometry, and today’s mind-deadening “set theory”! In Archytas’ construction, P arises not as an intersection of static “point sets”, but as the locus of a physical {event}, whose process of generation involves three (or actually, six) simultaneous degrees of action. Archytas designs the process in such a way, that the event, so generated, will possess exactly the required “projective” relationships. In particular, the required “two mean proportionals” are OQ and OP, where Q is the projection of the point P, constructed as above, onto the plane of the original circle OBA.

But, before attempting to derive Archytas’ construction by ourselves, let us look at the simpler case of the relationship between the geometric mean and circular action. This gives us a suitable jump-off point for tackling the problem solved by Archytas.

Harmonic Proportions and Circular Action

Circular rotation provides the simplest, characteristic case for the generation of harmonic proportions among what are ostensibly scalar magnitudes (line segments, for example), as a “projected” result of higher-order action.

Construct a circle with a given diameter OA. A point P, moving along the circle between O and A, gives rise to an array of invariant harmonic proportions, in the following manner.

Connecting P with the endpoints of the diameter, O and A, produces a triangle OPA, whose shape changes with P’s position, but whose angle at P is always a right angle. Now project P perpendicularly to the line OA, calling the point of projection “Q”. Evidently, the triangle OPQ is also a right triangle (right angle at Q), and it shares a common angle at O with the original right triangle OPA. The two triangles are thus constantly similar, throughout P’s motion, and the corresponding ratios of the sides will be equal. In particular, OQ:OP = OP:OA. This amounts to saying, that the length OP is the {geometric mean} between OQ and OA. By inverting the order of construction, we can generate the geometric mean of any two given lengths OQ and OA, using the circle. Just project Q onto the circumference of the circle, to get the point P.
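For readers who wish to see the proportion emerge numerically, here is a minimal sketch (my own illustration, with arbitrary coordinates and names, in no way part of the classical construction), placing O at the origin and A at distance OA along a horizontal line:

import math

def geometric_mean_by_circle(oq, oa):
    """Erect a perpendicular at Q (distance oq along the diameter OA)
    to meet the circle with diameter OA at P; the chord OP is then
    the geometric mean of OQ and OA."""
    # Height of P above Q, on the circle of radius oa/2 centered at (oa/2, 0):
    h = math.sqrt(oq * (oa - oq))
    return math.hypot(oq, h)   # length of the chord OP

op = geometric_mean_by_circle(1.0, 2.0)
print(op, math.sqrt(1.0 * 2.0))   # both ~1.41421: OQ:OP = OP:OA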

The geometric mean was also known in ancient Greek times as a “single mean between two extremes”. Doubling a {square} requires constructing such a single (geometrical) mean between 1 and 2. To double a cube, however, we need {two} means between 1 and 2, or in other words, a series of simultaneous proportions of the form: OB:OQ = OQ:OP = OP:OA, where OB and OA have lengths 1 and 2, respectively.
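In modern notation (supplied for orientation only): with OB = 1 and OA = 2, write OQ = x and OP = y. Then

$$1:x = x:y = y:2 \;\Longrightarrow\; y = x^{2} \ \text{ and } \ y^{2} = 2x \;\Longrightarrow\; x^{4} = 2x \;\Longrightarrow\; x^{3} = 2,$$

so that OQ = 2^{1/3} is precisely the edge of the doubled cube, and OP = 2^{2/3}.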

Now, since the above construction already generates “half” the required proportion, namely OQ:OP = OP:OA, the following strategy immediately suggests itself:

“Introduce a {second} degree of rotation, generating the “other half” of the double proportion, namely: OB:OQ = OQ:OP.

“Thus, all we need to do is to somehow combine the two circular actions, in order to generate an {event}, at which both conditions are realized {simultaneously}; this will give us the required double mean: OB:OQ = OQ:OP and at the same time OQ:OP = OP:OA.”

Carrying out this strategy does lead to a construction for the double mean, albeit one that is open to certain criticisms. I present it briefly, because it already points in the direction of multiply-connected action.

A Preliminary Thrust

In fact, to get the proportion OB:OQ = OQ:OP in the indicated manner, we need to construct a {second} circle, of diameter OP, such that (i) the point Q (P’s projection on the diameter of the first circle) also lies on the second circle, and (ii) Q projects to a point B on the second circle’s diameter OP, such that the distance OB has the required length 1.

A bit of geometry shows that requirement (i) is actually satisfied for {all} positions of P on the first circle: since the angle of triangle OQP at Q is a right angle, Q always lies on the circle having OP as diameter (Thales’ theorem). Requirement (ii), however, is fulfilled only for {one} position of P (and its symmetrical image). How might we generate that locus as a constructible {event}?

Simple, in principle! Imagine, that for each position of P, as P moves along the first circle, a corresponding circle is constructed around OP as diameter. This process produces a continuous {family} of circles, whose diameters OP are changing angle and length as P moves (footnote 3). Now mark off, on each diameter OP, a point B’, such that OB’ has length 1; and let Q’ be the corresponding point on the circumference of the corresponding circle, so that B’ is the projection of Q’ onto OP. (Of the two possible choices for Q’, choose the one lying inside the original, first circle.) The points Q’, so determined, describe a {curve} inside the first circle. Looking at various positions of P, we can easily see that the curve has points on both sides of the first circle’s diameter OA, and must therefore {cross} it at some point.

That crossing is the required event! At that moment, the points Q and Q’ coincide, and both parts of the indicated double proportion hold simultaneously. OQ is then the side of the cube whose volume is twice that of the cube of unit side.

Now one might, with some justification, object, that no method was presented above, for how to actually {draw} the curve defined as the locus of points Q’. It is obviously not enough to simply demand: “Mark, on each one of the infinite family of circles, a corresponding point Q'”. For, if we were to begin to mark circles and points one at a time, we would never have anything more than a discrete set, and would never arrive at a continuous curve. (footnote 4)

On the other hand, it is quite possible, with a bit of ingenuity, to design a relatively simple {physical mechanism} that traces the required curve as a product of the motion of P along the circumference of the original circle. The resulting method is akin to the tactic of Nicomedes, who used a mechanically-generated curve called the {conchoid} to double a cube.
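In the spirit of such a mechanism, and with the caveat of footnote 4 firmly in mind, that discrete samples register only {results} of the continuous action, the following sketch (entirely my own; the coordinates, names and bisection tactic are illustrative assumptions) traces the height of Q’ above the diameter OA as P moves, and closes in on the crossing event:

import math

def q_prime_height(t):
    """Signed height of Q' above the diameter OA, for P at parameter t
    on the first circle (O at the origin, A at (2, 0), center (1, 0))."""
    px, py = 1 + math.cos(t), math.sin(t)
    d = math.hypot(px, py)      # length OP = diameter of the second circle
    ux, uy = px / d, py / d     # unit vector along OP
    # B' lies on OP at distance 1 from O; Q' stands over B' on the second
    # circle, at height sqrt(OB' * (OP - OB')) = sqrt(d - 1), taken on the
    # side lying inside the first circle:
    return uy - math.sqrt(d - 1) * ux

# The height changes sign between these two positions of P; bisect:
lo, hi = 0.5, 1.6
for _ in range(60):
    mid = (lo + hi) / 2
    if q_prime_height(mid) < 0:
        lo = mid
    else:
        hi = mid

t = (lo + hi) / 2
px, py = 1 + math.cos(t), math.sin(t)
d = math.hypot(px, py)
oq = px / d + math.sqrt(d - 1) * py / d   # distance OQ at the event
print(oq, 2 ** (1 / 3))   # both ~1.25992: the edge of the doubled cube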

Back to Archytas

From this standpoint we may now better appreciate the singular breakthrough of Archytas, who went far beyond the above “ad hoc” methods, to discover a higher-order approach to the problem which anticipates Gauss’ 1799 grounding of the complex domain by over two millennia!

By applying a new degree of rotation to the first circle, to generate a {torus}, Archytas takes us, by implication, into an {entirely new universe}. Instead of trying to build the solution “from the bottom up”, as before, we can now proceed more “from the top down”.

The torus in question, is obtained by rotating the original circle first into the vertical plane (with O fixed) and then rotating the resulting circle around the vertical axis through O. For any point P on the torus, the vertical cross section through the axis of the torus, is a circle of diameter OA (of length 2); if Q denotes P’s projection onto the horizontal diameter of that circle, the proportion OQ:OP = OP:OA will hold — now as an invariant relationship for the {entire surface} of the torus.
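In modern parametric dress (the notation is mine: θ the inclination of the chord OP above the horizontal, v the azimuth of the vertical cutting plane), a point P of this torus may be written

$$P = \left(2\cos^{2}\theta\,\cos v,\; 2\cos^{2}\theta\,\sin v,\; 2\cos\theta\,\sin\theta\right), \qquad OP = 2\cos\theta, \quad OQ = 2\cos^{2}\theta,$$

whence OQ:OP = cos θ = OP:OA holds identically over the whole surface, as stated.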

Now it is easy to introduce additional degrees of circular action, generating additional harmonic relations. If, for example, we cut the torus by a vertical cylinder which contains the vertical axis (and the point O), then any point P, lying on the intersection of the two surfaces, automatically belongs to {two} circles: (i) the vertical section of the torus through P, as already described, giving rise to the relation OQ:OP = OP:OA, and (ii) the vertical section of the cylinder through P. In the vertical projection of that circular section onto the horizontal plane, P projects to the already-mentioned point Q. Lying on the projected circle, Q generates a {second} set of harmonic relations, of the form: OR:OQ = OQ:OD, where OD is the diameter of the cylinder (and the projected circle) and R is Q’s “lateral” projection onto the diameter OD. So far the length of OD (the diameter of the vertical cylinder) is variable; we can choose any value we want. Archytas chooses it to be equal to OA, in which case the projected circle coincides with the original one. But in principle many other choices would be possible.

In any case, we have room for introducing still a {third} principle. Remember, that our immediate object is to generate an event at which, in addition to the invariant relation OQ:OP = OP:OA, a second relation OB:OQ = OQ:OP, or 1:OQ = OQ:OP (since OB = 1) holds. Compare this with the relation we just generated using the cylinder: OR:OQ = OQ:OA.

A bit of reflection shows that the latter relationship is equivalent to: 1:OQ = OQ:(OA x OR) — for each of the two proportions asserts that the square of OQ equals the product OA x OR.

Thus, to get the relationship we are looking for, namely 1:OQ = OQ:OP, all we need to do, is generate an event, where OA x OR = OP. Since OA has length 2, this amounts to saying, that OR should be 1/2 the length of OP.

How are OP and OR related? Very simply, as one can see: R is the direct, perpendicular projection of P onto the axis OA, i.e. the point at which the vertical plane through P, drawn perpendicular to the axis OA, intersects that axis. Taken by itself (leaving aside the other constraints on P), the requirement that OR be equal to 1/2 OP, is equivalent to saying, that P lies on a certain {cone} with apex O and axis OA; namely, the cone whose generating lines make a 60-degree angle with the axis, since the cosine of that angle must be 1/2. The required cone can easily be constructed; this, indeed, is the preliminary step which Eudemus describes.
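Gathering the three relations (torus, cylinder, cone) into a single chain; the summary is mine, but it uses only what has been established above:

$$OP^{2} = OA\cdot OQ = 2\,OQ \ \ \text{(torus)}, \qquad OQ^{2} = OA\cdot OR = 2\,OR = OP \ \ \text{(cylinder and cone)},$$

so that OQ^4 = OP^2 = 2 x OQ, i.e. OQ^3 = 2: the point Q delivered by the triple event is the edge of the doubled cube, with OP = OQ^2 as the second mean.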

By this road — admittedly a bit bumpy in places — we arrive at Archytas’ construction: this time not to verify it, but to derive it by ourselves.

——————————

1. See www.history.mcs.standrews.ac.uk/history/HistTopics/Doubling_the_cube

[Try http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Doubling_the_cube.html]

2. Briefly: by “two mean proportionals” are understood two magnitudes x and y, between given magnitudes a and b (a assumed larger than b), such that b:x = x:y = y:a. Doubling the cube corresponds to the special case a = 2 and b = 1. The first of the two mean proportionals, x, corresponds to the side of a cube having double the volume of the unit cube. The second mean, y, corresponds to the area of a face of the new cube.

3. Alert readers will note here the traces of {conical action}, which becomes more explicit in Archytas, and finally emerges in full clarity in Gauss’ complex domain.

4. For related reasons, mathematics as generally conceived cannot represent true continuity, but can at best describe certain {results} of continuous action. Only the direction of development set forth by Leibniz in his original conception of the calculus, and continued by Riemann, provides a pathway for mathematics to represent ever more adequately the reality of continuous action in the Universe. The case of the catenary is exemplary.